Timnit Gebru spoke at a BIDS discussion group on April 4. (Video/ BIDS)

Google forced out Timnit Gebru in 2020 after it tried to stop the release of her team’s paper on the risks of large language models, the kind of tool underpinning the tech giant’s search engine.

To some, the leading AI ethics researcher’s exit showed why it’s dangerous for so much of AI research to be funded by big technology companies. But, speaking at a UC Berkeley discussion group, Gebru said the problem is much larger: it’s the incentive structures embedded in prestigious higher education institutions and scientific publishing, too.

“Name a technology in history that has successfully been redistributed completely equitably from the bastions of privilege and power to the have-nots in society. We've yet to achieve this with even the most basic resources like water, paper, electricity and internet,” said Gebru. 

“All of the elite institutions are extremely intertwined with this paradigm,” Gebru said. “What I want to see is a number of smaller independent institutes that each might have… a slightly different direction, so that then this kind of tech development could be done from the grassroots rather than from these vast units of privilege.”

Enter the Distributed Artificial Intelligence Research Institute (DAIR), an independent research group Gebru founded in December to turn her aspirations for AI research into reality. The non-profit is learning from existing institutions’ problems and creating its own value system to guide its work.

Gebru made these comments at a Berkeley Institute for Data Science (BIDS) Machine Learning and Science Forum on April 4. BIDS is an affiliate of Berkeley’s Division of Computing, Data Science, and Society.

The fundamentally flawed incentives

Developing AI methods quickly, without accounting for the ethical implications of how they are created, their impact on the physical world and the potential risks of the technology itself, can quickly make a lot of money for the people in power, Gebru said. It can result in harmful AI, too.

Break just one part of this cycle, by putting adequate thought behind it or changing how it’s valued, and you slow down the technology development process, Gebru said. That’s what we need, she added.

For example, consider what would happen if technology companies began paying their “data underclass” of data annotators and other workers a living wage, Gebru said. That expense would prompt more thought about tools’ usefulness and the feasibility of maintaining them before they’re created, she said.

“If these social media companies, for instance, paid the content moderators a living wage and not like $5,000 a year like they were doing to the ones in Kenya, you would think twice about even creating a social media platform,” said Gebru. “Because you can't make that kind of money if you actually put in all of the resources required to make it safe.”

Higher education institutions aren’t immune to these problems, Gebru said. While academia does have more freedom, she said, not everyone in it has equal access to that freedom.

There is plenty of gatekeeping there, too, she said. As an example, Gebru pointed to recent revelations about the sexual harassment of female graduate students at Harvard University and attempts to retaliate against them.

“How do you have academic freedom when this is the kind of environment that people are supposed to work in?” Gebru asked. “You can talk all you want about the techniques and math and all that stuff, but if workers can't have the power to stop something, or to speak up against something, we're never gonna have responsible AI or whatever you want to call it.”

The incentive problems affect the research itself, in both academia and publishing. To get tenure at a university, you need to be productive, and that generally translates to being published often. But scientific journals often glorify generally applicable artificial intelligence research and new technical developments over work that solves specific real-world problems, Gebru said.

Consider DAIR’s own research on the legacy of spatial apartheid in South Africa. The paper, which built a novel dataset from satellite imagery to study the evolution of segregation, could help policymakers better understand spatial apartheid’s impact on communities. But the researchers had trouble finding a journal to publish it because it was “just a dataset,” Gebru said.

Gebru said she also had trouble finding a journal to publish one of her best known papers, which found that automated facial analysis tools have much higher accuracy for lighter-skinned men than for darker-skinned women.

“My most interesting papers, to me, have always had millions of issues getting accepted anywhere,” said Gebru. “It's always a battle.”

Timnit Gebru is the founder of the Distributed Artificial Intelligence Research Institute, an independent research group. (Photo/ BIDS)

Questioning the formula

Many of the issues within AI development and research come down to who is impacted and who is valued, Gebru said. The United States and Silicon Valley’s technology companies affect people around the globe, but they won’t take seriously their impact on “countries that are not considered important,” she said.

Ultimately, Gebru hopes that DAIR can help break these kinds of cycles and show a new, more ethical pathway forward for research. 

That means doing research that helps communities like those in Africa and the African diaspora that may not otherwise have a voice. That means having members of those communities lead the research. That means looking at the problem and then deciding if AI is part of the solution.

It means giving researchers the time and support to do thoughtful, holistic work, while also encouraging them to have a life. And it means communicating the findings of that research to the people affected by it, not just the people who read scientific journals.

“We know how to survive in our field, and we kind of learn what to do in order to survive. Then we’re not allowed to question why that’s the formula,” said Gebru. “We want to have a different incentive structure for AI development than we currently have.”