Facebook’s world-renowned AI guru, Yann LeCun, took issue with an article written about his company yesterday. So, like any of us would, he went on social media to air his grievances.
Only, he didn’t take the fight to Facebook as you might expect. Instead, he spent hours sparring with people on Twitter.
No, it is not.
Apparently, a journalist can write about AI fairness without being fair.
One of those cases where you know the reality, read an article about it, and go “WTF?”
– Yann LeCun (@ylecun) March 12, 2021
Can we just pause for a moment and appreciate that, on a random Thursday in March, the father of Facebook’s AI program took to Twitter to argue about a piece from journalist Karen Hao, an AI reporter for MIT’s Technology Review?
Hao wrote an incredibly in-depth longform feature on Facebook’s content moderation problem. The piece is called “How Facebook got addicted to spreading misinformation,” and the subtitle is a doozy:
The company’s AI algorithms gave it an insatiable habit for lies and hate speech. Now the man who built them can’t fix the problem.
I’ll quote just one paragraph from Hao’s article here, one that captures its essence:
Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth… [Facebook AI lead Joaquin Quiñonero Candela’s] AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.
There’s a lot to unpack there, but the gist is that Facebook is motivated by the singular goal of growth. The same can be said of cancer.
LeCun, apparently, did not like the article. He hopped on the app Jack built and shared his thoughts, including a personal attack questioning Hao’s journalistic integrity:
Yes, which is why your piece came as a surprise to me and many of my colleagues.
From my vantage point, the piece is full of factual errors and misattributions of intent.
What happened to you?
– Yann LeCun (@ylecun) March 12, 2021
At one point yesterday, his umbrage led him to blame cable news and talk radio for his company’s woes:
More importantly, increasing polarization is a uniquely US phenomenon (which began well before FB existed).
Many other countries have seen a *decrease* in polarization over the last decade. And they use FB just as much.
I blame cable news and talk radio.
– Yann LeCun (@ylecun) March 12, 2021
Really, Yann? Is increasing polarization via disinformation uniquely American? Have you met my friend “the reason every single war in history has ever been fought”?
This isn’t the first time LeCun has taken to Twitter to argue in defense of his company, but there was more going on yesterday than usual. LeCun’s aggrieved thread began with a tweet announcing new research on fairness from the Facebook Artificial Intelligence Research team (FAIR).
According to Hao, Facebook coordinated the paper’s release to coincide with the Tech Review article:
People are asking: did FB publish this in response to my story? No, I should clarify. They wanted this paper to be in my story and gave me an early draft. Then, in anticipation of it being in my story, they timed its publication with mine in the hope that the two would complement each other. https://t.co/pST8LYsTmG
– Karen Hao (@_KarenHao) March 11, 2021
Based on the evidence, it appears Facebook was caught completely off guard by Hao’s reporting. It seems the social network was expecting a feature on its progress toward fixing its algorithms, detecting bias, and countering hate speech. Instead, Hao exposed the essential problem with Facebook: it’s a spiderweb.
Those are my words, not Hao’s. What she wrote was:
Near the end of our hour-long interview… [Quiñonero] insisted that AI was often unfairly painted as “the culprit.” Regardless of whether Facebook used AI or not, he said, people would still spew lies and hate speech, and that content would still spread across the platform.
To that effect, I could have said something like, “Whether or not our company soaks the forest floor in gasoline and hands everyone a book of matches, people are still going to set the forest on fire.” But, again, those are my words.
And when I say that Facebook is a spiderweb, I mean this: spiderwebs are good, until they spread too far. If you see a spiderweb in the corner of your barn, that’s great! It means you have a tiny warrior helping you keep out nastier pests. But if you see a spiderweb covering your entire city, like something out of “Kingdom of the Spiders,” that’s a very bad thing.
And it’s clear that LeCun knows this, because his entire Twitter thread yesterday amounted to one long admission that Facebook has grown beyond anyone’s control. Here are excerpts from his tweets on the subject:
…When it became clear that things like this were happening, FB took corrective measures quickly…
Certainly not quickly enough to stop some bad things from happening in the meantime (hence the UN report).
But taking those corrective measures is neither immediate, nor easy, nor cheap…
When the Myanmar government began spreading hate speech against the Rohingya, FB deleted their sock-puppet accounts and hired Burmese-speaking moderators.
But the volume was such that AI systems had to be developed to automate the process.
So FB developed the best Burmese-English translation system in the world, so that English-speaking moderators could help…
It also developed hate speech and violent speech detectors for Burmese. But data was scarce.
So over the last 2 years, FB has developed multilingual systems that can detect hate speech in any language without requiring much training data.
All of this takes expertise, time, money and, in this case, the very latest AI advances.
Interesting. LeCun’s main argument appears to be that preventing misinformation is really hard. Well, that’s true. There are a lot of really hard things we haven’t figured out yet.
But, as the Ford company and the scientists working on its nuclear-powered concept car realized decades ago, nuclear technology simply isn’t advanced enough to make a consumer vehicle safe. Nuclear is great for aircraft carriers and submarines, but not so much for the family station wagon.
I’d argue that Facebook’s impact on humanity is almost certainly far more damaging and wide-reaching than a relatively small nuclear meltdown in the trunk of a Ford Mustang would be. After all, only 31 people died as a direct result of the Chernobyl nuclear meltdown, and experts estimate a maximum of about 4,000 were affected indirectly (in terms of their health, anyway).
Facebook has 2.45 billion users. And whenever its platform causes or exacerbates a problem for those users, the answer is some version of “we’ll look into it.” The only thing such a reactive response to technological imbalance accomplishes is a game of Whac-A-Mole played at the public’s expense.
If Facebook were a nuclear power plant that leaked nuclear waste into our drinking water every time someone misused the power grid, we’d shut it down until it stopped leaking.
But we don’t shut Facebook down, because it isn’t really a business. It’s a trillion-dollar PR machine for a self-governing entity. It’s a nation-state. And we should either accept it as such or treat it as a hostile power until it does something about the abuse on its platform before, rather than after, the shit hits the fan.
And if we can’t keep the nuclear waste out of our drinking water, or build a safe car with a nuclear reactor in its trunk, maybe we should shut the plants down, or scrap the plans, until we can. It worked out well for Ford.
Maybe, just maybe, the reason journalists like Hao and myself, and politicians around the world, can’t offer solutions to Facebook’s problems is that there aren’t any.
Perhaps hiring the smartest AI researchers on the planet and surrounding them with the world’s largest PR machine is not enough to solve the problem of humans poisoning one another for fun and profit on a vast, unregulated social network.
Some problems can’t be solved by throwing money and press releases at them.
My hat’s off to Karen Hao for her excellent reporting, and to the staff of Technology Review for speaking truth to power.
Published on 12 March 2021 – 19:37 UTC