Kate Crawford is one of seven people to lecture as part of the Obert C. Tanner Lecture Series on Artificial Intelligence and Human Values. (Photo/ Cath Muscat)

We need to urgently rethink how we create, use, communicate and educate around artificial intelligence, said a multidisciplinary group of experts during a recent UC Berkeley Tanner Lecture discussion.

Scientists, engineers and others must examine privileges, risks and classifications within the data and algorithms when deciding whether to build a new machine learning system and how to go about harnessing it. But they also must consider broader social issues like how much workers are paid to create and manage training data and how much computing contributes to climate change.

“The legacy of ground truth in AI reveals a failure of imagination, a reduction of human meaning and a loss of perspective,” said Kate Crawford, a research professor at the University of Southern California, senior principal researcher at Microsoft Research, and inaugural chair of AI and Justice at the École Normale Supérieure in Paris.

“This failure also represents a chance, a concrete opportunity to reimagine how machine learning works from the ground up and to think anew about where it should and should not be applied in a way that emphasizes creativity, solidarity and potentiality,” said Crawford during her lecture in the Obert C. Tanner Lecture Series on Artificial Intelligence and Human Values.

The discussion comes as the amount of data collected and used grows larger every day, and as the algorithms harnessing that data proliferate. With AI increasingly intertwined with everyday life, researchers want to deliberately weigh the harms that could stem from the technology and to limit where it is used.

This series of Tanner lectures provides leading international researchers a platform to present and debate perspectives on artificial intelligence and human values. Crawford’s March 3 lecture was followed by a March 4 panel discussion of her talk.

Jennifer Chayes, associate provost of Berkeley’s Division of Computing, Data Science, and Society; Marion Fourcade, director of Berkeley’s Social Science Matrix; Trevor Paglen, an artist and geographer; Sonia Katyal, co-director of Berkeley’s Center for Law and Technology; and Ken Goldberg, Berkeley’s William S. Floyd Jr. Distinguished Chair in Engineering, participated in the panel discussion of Crawford’s lecture.

Kate Crawford lectured about excavating "ground truth" in artificial intelligence on Thursday, March 3. (Video/ CC BY UC Berkeley Events)

There’s no such thing as a neutral dataset

Crawford pointed to problematic practices in machine learning spanning the collection, classification and control of the data used to train AI systems.

She said there’s not enough consideration of whose viewpoint is reflected in training data, or of the contested ideas that stand in as ‘ground truth’ in machine learning. And she outlined the ways in which AI systems both centralize power and drive greater asymmetries of wealth and opportunity.

Scientists and engineers should more often consider whether machine learning should be used in a given instance at all, Crawford said. Just because AI can be used to address a problem doesn’t mean it is the right solution for it.

Sometimes the problem is one that must be fixed through other methods like the reallocation of resources, said Fourcade. For example, we only need an algorithm to decide who should receive supportive housing because there are too many people who need it and too small a supply, she said. 

“Perhaps if we build more universal and supportive social institutions, we will lessen our need for algorithms altogether and with it our obsession with social hierarchy and social difference,” Fourcade said.

The researchers also urged scientists and engineers to bring in more diverse voices from fields like anthropology and sociology, experts who can place proposed algorithms into social and historical context. These kinds of discussions could help keep machine learning “solutions” from perpetuating society’s prejudices and structural inequities.

It will also help if science and engineering students develop their own historical grounding, the experts said, so they can recognize the impacts and unintended consequences that have arisen from similar decisions in the past.

Kate Crawford and several peers discussed her lecture at a public discussion Friday, March 4. (Video/ CC BY UC Berkeley Events)

“We’re not going to fundamentally change the way this society allocates or doesn’t allocate resources by you and I having a discussion. But we can try to quantify in the algorithm: what is the impact of these decisions?” Chayes said. “These are the kinds of questions we should be posing.”

The new generation of computer science and engineering undergraduates and early-stage graduate students seems ready for these social considerations, Crawford said. Students are now asking these kinds of political questions as they build new machine learning systems, she added, and she is heartened by that.

“There is a sea change coming,” she said.