Better Than Nothing: A Look at Content Moderation in 2020

“I don’t think it’s right for a private company to censor politicians or the news in a democracy.” – Mark Zuckerberg, October 2019

“Facebook Removes Trump Post Over Covid-19 Misinformation Rules” – Wall Street Journal, October 6, 2020

For more than a decade, the attitude of the largest social media companies toward policing misinformation on their platforms was best captured by Mark Zuckerberg’s frequently repeated warning: “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online.” Even after the 2016 election, as Facebook, Twitter, and YouTube faced mounting backlash for their role in the spread of conspiracy theories and lies, the companies remained reluctant to take action against them.

Then came 2020.

Under pressure from politicians, activists, and the media, Facebook, Twitter, and YouTube all made policy changes and enforcement decisions this year that they had long resisted – from labeling misinformation posted by prominent accounts, to throttling the viral spread of dubious claims, to taking down posts by the president of the United States. It is hard to say how successful these changes were, or even how to define success. But the fact that the companies took these steps at all marks a dramatic shift.

“I think we’ll look back at 2020 as the year when they finally acknowledged they have some responsibility for the content on their platforms,” said Evelyn Douek, an affiliate at Harvard’s Berkman Klein Center for Internet and Society. “They could go further, there’s more they could do, but we should celebrate that they’re at least in the ballgame now.”

Social media was never a total free-for-all; platforms have long policed illegal and indecent content. What emerged this year was a new willingness to take action against certain types of content because it is false – expanding the categories of prohibited content and enforcing policies already on the books more aggressively. The proximate cause was the coronavirus pandemic, which set off an information crisis in the middle of a public health emergency. Social media executives quickly grasped their platforms’ potential to serve as vectors for lies about the coronavirus, lies that could prove deadly. They soon vowed both to keep dangerous false claims off their platforms and to direct users to accurate information.

What the companies may not have anticipated was the extent to which the pandemic would become political, with Donald Trump emerging as a leading purveyor of dangerous misinformation – forcing a confrontation between their new policies and their long-standing reluctance to enforce rules against powerful public officials. By August, the tension had built to the point that even Facebook took down a Trump post in which the president suggested that children were “virtually immune” to the coronavirus.

“Taking things down because they’re false was previously a line they wouldn’t cross,” Douek said. “Before, they said, ‘Falsity alone is not enough.’ That changed with the pandemic, and we started seeing them actually be willing to take things down because they were false.”

Nowhere did public health and politics collide more forcefully than in the debate over mail-in voting, which emerged as a safe alternative to in-person polling places – and which Trump immediately denounced as a Democratic plot to steal the election. The platforms, perhaps eager to wash away the bad taste of 2016, tried to get ahead of vote-by-mail disinformation. It was mail-in voting that led Twitter to break the seal on its first fact-checking label, applied in May to a Trump tweet that made false claims about California’s mail-in voting process.

The trend reached its apex with the November election, as Trump broadcast his intention to challenge the validity of any vote count that went against him. In response, Facebook and Twitter announced detailed plans to counter that push, including attaching disclaimers to premature claims of victory and specifying which credible organizations they would trust to validate the election results. (YouTube, notably, did far less to prepare.) Other steps included pausing political advertising on Facebook, increasing the use of human moderation, inserting trusted information into users’ feeds, and even intervening manually to prevent potentially misleading posts from going viral. As New York Times writer Kevin Roose observed, these steps amounted to “slowing down, shutting off or otherwise hampering core parts of their products – in effect, defending democracy by making their apps worse.”
