Lecture | November 30 | 4:10-5:30 p.m. | 202 South Hall
Co-sponsored by the Goldman School of Public Policy
When discussing the ills afflicting social media, there is a great deal of concern about the role played by polarization. Among other negative consequences, dislike of counter-partisans, and political motivations more generally, has been suggested to promote belief in, and sharing of, misinformation.
While polarization may be part of the misinformation problem, here I will argue that political motivations are also essential for one of the only viable approaches to identifying and combating misinformation at scale: the wisdom of crowds. While professional fact-checkers play a critical role in countering misinformation, they are relatively few in number and cannot possibly keep up with the vast amount of content posted on social media every day. Recent work has suggested that it is possible to supplement professional fact-checking by harnessing the wisdom of crowds: the ratings of fairly small, politically balanced groups of laypeople can reach high levels of agreement with professional fact-checkers.
A central challenge for conducting crowd-based evaluations at scale, however, is encouraging participation: why should people bother to flag misleading content? In this talk, I will argue that not wanting people to be exposed to posts by counter-partisans helps to solve this participation problem by motivating people to flag. Although extreme partisans would flag all counter-partisan content as misleading regardless of its actual truth value, such extreme partisans are rare. A much larger group of people care somewhat about both truth and partisanship, such that they would only be sufficiently motivated to flag when content is both misleading and counter-partisan. For these people, the partisan motivation is needed to drive participation: without any partisan motive they would flag nothing.
I will present data from survey experiments conducted on Lucid and observational analyses of data from Twitter's crowdsourced fact-checking program. Consistent with the theoretical predictions, the results demonstrate that:
misleading counter-partisan content is flagged more than misleading co-partisan content,
non-misleading content is rarely flagged, and
more politically extreme users, rather than undermining the system, produce more and better flags.
Thus, crowdsourced misinformation identification may succeed because of, rather than in spite of, polarization and political motivations.
This lecture will also be live streamed via Zoom.
Catherine Cronquist Browning, email@example.com