Facebook’s self-regulatory ‘Oversight Board’ (FOB) has delivered its first batch of decisions on contested content moderation decisions, almost two months after picking its first cases.
Long in the making, the FOB is part of Facebook’s crisis PR push to distance its business from the impact of controversial content moderation decisions – by creating a review body to handle a small fraction of the complaints its content moderation attracts. It began accepting submissions for review in October 2020 – and has faced criticism for being slow to get off the ground.
Announcing its first decisions today, the board reveals that it has upheld just one of the content moderation decisions made earlier by Facebook, overturning four of the tech giant’s calls.
Decisions on the cases were made by five-member panels, each of which included at least one member from the region in question and a mix of genders, according to the FOB. A majority of the full board then had to review each panel’s findings and approve the decision before it was issued.
The sole case where the board has upheld Facebook’s decision to remove content is case 2020-003-FB-UA – in which Facebook had removed a post under its Hate Speech Community Standard that used the Russian word “тазики” (“taziks”) to describe Azerbaijanis, who the user claimed have no history compared to Armenians.
In the four other cases, the board has overturned Facebook’s takedowns, rejecting the tech giant’s earlier assessments under its policies on hate speech, adult nudity, dangerous individuals/organizations, and violence and incitement. (You can read an outline of these cases on the FOB’s website.)
Each decision relates to a specific piece of content, but the board has also issued nine policy recommendations.
These include suggestions that Facebook [emphasis ours]:
- Create a new Community Standard on health misinformation, consolidating and clarifying the existing rules in one place. This should define key terms such as “misinformation”.
- Adopt less intrusive means of enforcing its health misinformation policies where the content does not reach Facebook’s threshold of imminent physical harm.
- Increase transparency around how it moderates health misinformation, including publishing a transparency report on how the Community Standards have been enforced during the COVID-19 pandemic. This recommendation draws on the public comments the board received.
- Ensure that users are always told the reasons for any enforcement of the Community Standards against them, including the specific rule Facebook is enforcing. (The board made two identical policy recommendations on this front across the cases it considered, also noting in relation to the second hate speech case that Facebook’s lack of transparency left its decision open to the mistaken belief that the company had removed the content because the user expressed a view it disagreed with.)
- Explain and provide examples of the application of key terms from the Dangerous Individuals and Organizations policy, including the meanings of “praise”, “support” and “representation”. The Community Standard should also better advise users on how to make their intent clear when discussing dangerous individuals or organizations.
- Provide a public list of the organizations and individuals designated as ‘dangerous’ under the Dangerous Individuals and Organizations Community Standard, or, at the very least, a list of examples.
- Inform users when automated enforcement is used to moderate their content, ensure that users can in certain cases appeal automated decisions to a human, and improve automated detection of images with text overlay so that posts raising awareness of breast cancer symptoms are not wrongly flagged for review. Facebook should also improve its transparency reporting on its use of automated enforcement.
- Revise Instagram’s Community Guidelines to specify that female nipples can be shown to raise breast cancer awareness, and clarify that where inconsistencies exist between Instagram’s Community Guidelines and Facebook’s Community Standards, the latter take precedence.
Where it has overturned Facebook’s takedowns, the board says it expects Facebook to restore the specific pieces of content within seven days.
In addition, the board writes that Facebook will also “examine whether identical content with parallel context associated with the board’s decisions should remain on its platform”. And it says Facebook has 30 days to respond publicly to its policy recommendations.
So it will certainly be interesting to see how the tech giant responds to the laundry list of suggested policy tweaks – perhaps especially the recommendations for greater transparency (including the suggestion that users be told when their content has been removed solely by its AI) – and whether Facebook is happy to fully align itself with the policy guidance issued by the self-regulatory vehicle (or not).
Facebook created the board’s structure and charter and appointed its members – yet it has encouraged the notion that the FOB is somehow ‘independent’ of Facebook, even though it also funds the body (indirectly, via a foundation it set up to administer it).
And while the board claims that its review decisions are binding on Facebook, there is no such requirement for Facebook to follow its policy recommendations.
It is also notable that the FOB’s review efforts focus entirely on takedowns – rather than on content Facebook chooses to host on its platform.
Given all that, it is impossible to view the Oversight Board’s decisions as exerting real influence over Facebook. Even if Facebook swallows all of the above policy recommendations – or, more likely, issues a PR line welcoming the FOB’s ‘thoughtful’ contribution to a ‘complex area’ and says it will ‘take them into account as it moves forward’ – it is doing so from a position in which it has retained maximum control of content review by defining, shaping and funding the ‘oversight’ involved.
tl;dr: It is not a real Supreme Court.
In the coming weeks, the FOB will most likely be watched most closely on a case it recently accepted – relating to Facebook’s indefinite suspension of former US president Donald Trump, following the violent attack on the US Capitol earlier this month.
The board notes that it will open public comment on that case “shortly”.
“Recent events in the United States and around the world have highlighted the enormous impact that content decisions taken by internet services have on human rights and free expression,” it writes, adding that: “The challenges and limitations of the existing approaches to moderating content draw attention to the value of independent oversight of the most consequential decisions by companies such as Facebook.”
But of course this ‘oversight board’ cannot be fully independent of its founder, Facebook.