November 13, 2018

If Jane Austen ran an internet platform, what would it look like? That’s how Alex Feerst, the Head of the Legal team at Medium, opened a recent Human Contexts and Ethics Conversation on moderating online content. The conversation, which took place just before the 2018 mid-term elections, was organized by the UC Berkeley Division of Data Sciences to build on student discussions in the Human Contexts and Ethics of Data class about public spheres and democracy in the datafied world.

Feerst asked the group gathered at Hearst Field Annex to consider how the incisive social commentator’s hypothetical site, Pember.ly, might have fostered open exchanges while managing the harms that inevitably arise in social discourse.

The challenge of ensuring trust and safety has only escalated as public conversation has moved online, connecting millions of people well beyond borough boundaries.

Feerst detailed the complexities: “How can you know if something is true? How do you keep people safe, act quickly, act fairly, protect free speech, and also be transparent?”

These questions confront technology leaders and societies across the datafied world, where much public discussion has moved to social media, but they have accompanied communication platforms throughout history. Earlier communication technologies, such as the printing press, also raised questions about who controls the content and flow of information. But, as Feerst pointed out, “today’s paperboy has amassed a lot of power” to both empower individual voices and amplify them in potentially dangerous and controversial ways.

Different social media platforms use different combinations of human and technical approaches to manage these challenges. Sites like Reddit rely substantially on volunteer, community-based moderators to uphold their communities’ values and ensure the safety and privacy of users, Feerst said, while Facebook and YouTube reportedly regulate content by pairing machine learning and artificial intelligence with human review by paid workers.
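
To make that hybrid pattern concrete, here is a minimal Python sketch of a flag-and-review triage flow: a model assigns each post a harm score, clear-cut cases are handled automatically, and ambiguous ones are queued for human moderators. The thresholds, scores, and ModerationQueue class are illustrative placeholders, not any platform’s actual policy or code.

```python
# Minimal sketch of the hybrid moderation pattern Feerst described:
# software handles the obvious cases, people handle the ambiguous ones.
from dataclasses import dataclass, field

AUTO_REMOVE_THRESHOLD = 0.95   # hypothetical: model is nearly certain of harm
HUMAN_REVIEW_THRESHOLD = 0.60  # hypothetical: too ambiguous to automate

@dataclass
class ModerationQueue:
    removed: list = field(default_factory=list)
    needs_review: list = field(default_factory=list)
    published: list = field(default_factory=list)

    def triage(self, post: str, harm_score: float) -> None:
        """Route a post based on a model's estimated probability of harm."""
        if harm_score >= AUTO_REMOVE_THRESHOLD:
            self.removed.append(post)        # clear-cut: handled by software
        elif harm_score >= HUMAN_REVIEW_THRESHOLD:
            self.needs_review.append(post)   # ambiguous: a person decides
        else:
            self.published.append(post)      # benign: goes live

queue = ModerationQueue()
queue.triage("spam link farm", 0.98)     # auto-removed
queue.triage("sarcastic insult?", 0.72)  # escalated to a human moderator
queue.triage("vacation photos", 0.05)    # published
print(len(queue.needs_review), "post(s) awaiting human review")
```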

Feerst encouraged more education around cybersecurity and privacy and stressed the importance of incorporating these concerns into initial product design and development rather than trying to address them after the fact. “Product developers are optimists,” he said; they like to think about the benefits people will experience from using a product. But we also need people who will think about the consequences that may not be beneficial. “We need the optimists to make the car, and the pessimists to make the seat belt.” Feerst hopes that a nuanced understanding of technology’s place in society might one day be a skill that both product designers and lawyers share.

Following Feerst’s talk, Renata Barreto, a graduate student studying law at Berkeley, spoke about her research on airline passenger screening methods, such as Automated Passport Kiosks, and the potential for bias in the algorithms that determine which passengers receive extra scrutiny. Going through US customs after returning from an international trip, Barreto noticed how many passengers of color, including herself, were being unfairly singled out for extra screening. This experience of what she referred to as “algorithmic rejection” inspired Barreto to study law and issues related to algorithmic fairness.

Barreto is currently interning at Twitter, where she focuses on machine learning applications that flag problematic content and keep conversation vibrant and fair. Among the challenges machine learning raises, she said, are models’ inability to pick up the subtlest ambiguities of human speech and the limits of categorizing content. For example, considering Jigsaw’s Perspective Application Programming Interface (API), she said: “Suppose someone posts, ‘I hate group X,’ and this gets classified as harmful content. Then, if someone else posts, ‘I am group X,’ this may also get classified as harmful content, implying a need for human review.”
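
Barreto’s example maps directly onto how the Perspective API is typically called. The Python sketch below assumes the publicly documented v1alpha1 comments:analyze endpoint and the TOXICITY attribute; YOUR_API_KEY is a placeholder, and the scores any given model version returns will vary, so treat this as an illustration of the failure mode she describes rather than a guaranteed result.

```python
# Sketch: scoring two short posts with Jigsaw's Perspective API to show how
# a toxicity classifier can conflate attacks on a group with self-identification.
# Assumes the v1alpha1 comments:analyze endpoint and the TOXICITY attribute;
# check the current Perspective API docs before relying on this.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder for your own credential
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text: str) -> float:
    """Return the summary TOXICITY score (0.0-1.0) for a piece of text."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

for post in ["I hate group X", "I am group X"]:
    # A model keyed to surface features may score both posts similarly,
    # which is exactly why borderline cases get routed to human reviewers.
    print(f"{post!r}: toxicity={toxicity(post):.2f}")
```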

While machine learning has helped moderators keep pace with the immense amount of content constantly being generated, humans remain an essential part of the equation for promoting safe, fair, and free exchanges. Ultimately, as Feerst observed, the issues of truth, trust, and safety in today’s high-speed, global-scale exchanges on social media platforms are “fundamentally human problems, not tech problems.” Someone as attuned to the subtleties of human language and feeling as Jane Austen would surely have agreed.