Timnit Gebru’s Exit From Google Exposes a Crisis in AI 

This year has held many things, among them bold claims of artificial intelligence breakthroughs. Industry commentators speculated that the language-generation model GPT-3 may have achieved "artificial general intelligence," while others praised Alphabet subsidiary DeepMind's protein-folding algorithm, AlphaFold, for its potential to transform biology. While the basis for such claims is thinner than the effusive headlines suggest, this has done little to dampen enthusiasm across the industry, whose profits and prestige depend on AI's proliferation.

It was against this backdrop that Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the job Google hired her to perform, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for data sets and AI models. Ironically, this work, along with her vocal advocacy for those underrepresented in AI research, is also the reason, she says, the company fired her. According to Gebru, after demanding that she and her colleagues withdraw a research paper critical of (profitable) large-scale AI systems, Google Research told her team that it had accepted her resignation, despite the fact that she had not resigned. (Google declined to comment for this story.)

Google's treatment of Gebru exposes a dual crisis in AI research. The field is dominated by an elite, predominantly white male workforce, and it is controlled and funded primarily by large industry players: Microsoft, Facebook, Amazon, IBM, and yes, Google. With Gebru's firing, the civility politics that constrained the young effort to build the necessary guardrails around AI has been torn apart, bringing questions about the racial homogeneity of the AI workforce and the inefficacy of corporate diversity programs to the center of the discourse. But this situation has also made clear that, however sincere a company like Google's promises may seem, corporate-funded research can never be divorced from the realities of power and the flows of revenue and capital.

This should concern us all. With the proliferation of AI into domains such as health care, criminal justice, and education, researchers and advocates are raising urgent concerns. These systems make determinations that directly shape lives, at the same time that they are embedded in organizations structured to reinforce histories of racial discrimination. AI systems also concentrate power in the hands of those who design and use them, while obscuring responsibility (and liability) behind the veneer of complex computation. The risks are profound, and the incentives are decidedly perverse.

The current crisis exposes the structural barriers limiting our ability to build meaningful protections around AI systems. This is especially important because the populations subject to harm and bias from AI's predictions and determinations are primarily BIPOC people, women, religious and gender minorities, and the poor, those who have borne the brunt of structural discrimination. Here we have a clear racialized divide between those who benefit (the corporations and the primarily white male researchers and developers) and those most likely to be harmed.

Take, for example, facial-recognition technologies, which have been shown to "recognize" darker-skinned people less frequently than those with lighter skin. This alone is alarming. But these racialized "errors" are not the only problems with facial recognition. Tawana Petty, director of organizing at Data for Black Lives, points out that these systems are disproportionately deployed in predominantly Black neighborhoods and cities, while the cities that have successfully pushed back against and banned the use of facial recognition are predominantly white.

Without independent, critical research that centers the perspectives and experiences of those who bear the harms of these technologies, our ability to understand and contest the overhyped claims made by industry is significantly hampered. Google's treatment of Gebru makes it increasingly clear where the company's priorities lie when critical work pushes back against its business incentives. This makes it almost impossible to ensure that AI systems are accountable to the people most vulnerable to their damage.

Checks on the industry are further compromised by the close ties between tech companies and ostensibly independent academic institutions. Researchers from corporations and academia publish papers together and rub elbows at the same conferences, with some researchers even holding concurrent positions at tech companies and universities. This blurs the boundary between academic and corporate research and obscures the incentives underwriting such work. It also means that the two groups look strikingly similar: AI research in academia suffers from the same pernicious racial and gender homogeneity issues as its corporate counterparts. Moreover, top computer science departments accept copious amounts of Big Tech research funding. We have only to look to Big Tobacco and Big Oil for troubling templates of just how much influence over the public understanding of complex scientific issues large companies can exert when knowledge creation is left in their hands.
