November 22, 2021

As artificial intelligence (AI) takes on an ever-higher profile in discussions about the economy, social and political issues, and what the future holds, a trio of data science experts is calling for a reset in our mindset, pointing to a 1950 paper by noted mathematician Alan Turing.

In an article published Nov. 8 by Wired, Michael Jordan of UC Berkeley, Daron Acemoglu of MIT and Glen Weyl of Microsoft assert that, rather than developing AI methods that seek to imitate human intelligence, society in general -- and business in particular -- would benefit far more from approaches that complement human intelligence and support new kinds of economic and political engagement among people.

In his 1950 “imitation game” paper, Turing proposed a test that defined machine intelligence by imagining a computer program that could imitate a human in conversation so successfully that it would be impossible to tell whether one was talking with a machine or a person. But Turing and others recognized that this was only one perspective on intelligence, and also “understood that computers would be most useful to business and society when they augmented and complemented human capabilities, not when they competed directly with us,” the authors write. Search engines, spreadsheets and databases are examples of such complementary forms of information technology that arose in past decades, and the emerging “creator economy” is opening new opportunities for information technology that complements human behavior and skills.

Jordan, the Pehong Chen Distinguished Professor in Berkeley's Department of Electrical Engineering and Computer Sciences and the Department of Statistics, said he and his co-authors decided to write “The Turing Test Is Bad for Business” because many AI investors are focusing on technologies with the goal of exceeding human performance on specific tasks, such as natural language translation or game-playing.

“From an economic point of view, the goal of exceeding human performance raises the specter of massive unemployment,” Jordan said. “An alternative goal for AI is to discover and support new kinds of interactions among humans that increase job possibilities.”

The three authors brought an economic perspective to the article, writing that “businesses succeed at scale when they successfully divide labor internally and bring diverse skill sets into teams that work together to create new products and services. Markets succeed when they bring together diverse sets of participants, facilitating specialization in order to enhance overall productivity and social welfare.”

This is not a new idea -- economist Adam Smith wrote about it over 250 years ago, the authors point out. The modern version of the question concerns the role that information technology will play in our economic interactions and in our culture. While computers will grow ever smarter, they will remain blind to many aspects of the human experience, and it’s essential to conceive of ways to blend human and computer capabilities to solve our most complex problems.

One example Jordan points to is the ability of computer networks and data analysis to connect producers and consumers directly in domains such as music, which are currently dominated by centralized companies and a business model based on advertising and subscriptions. While AI technology has been developed to support that business model, it could just as well support direct connections between musicians and listeners, providing new sources of income for those who create the music.

This is not a new topic for Jordan, who published a paper titled “Artificial Intelligence: The Revolution Hasn’t Happened Yet” in the Harvard Data Science Review in 2019. In that paper, he drew a distinction between “human-imitative AI” and “intelligent infrastructure,” with the latter offering new opportunities for society and business.

The authors of the article, along with colleagues in a range of other fields, are preparing a longer report, “How AI Fails Us,” based on the premise that “the dominant vision of artificial intelligence imagines a future of large-scale autonomous systems outperforming humans in an increasing range of fields. This ‘actually existing AI’ vision misconstrues intelligence as autonomous rather than social and relational. It is both unproductive and dangerous.”