Anni Hellman is a visiting fellow at the Berkeley Institute for Data Science and a 2021-2022 EU fellow with the Institute of European Studies at UC Berkeley. (Photo/ BIDS)

Disinformation has existed for centuries. But in recent years false information has proliferated faster because of social media and the wider web, harming public health, public institutions and more. Former President Barack Obama last week called for swift action to stem the flow of disinformation, saying democracy is at stake.

We spoke to Anni Hellman, the deputy head of unit for the European Union Commission Directorate-General for Communications Networks, Content and Technology, about what the online disinformation battle looks like in Europe and about her role in understanding and combating fake news there. Hellman, who is currently a visiting fellow at UC Berkeley’s Berkeley Institute for Data Science, also discussed what she’s learned about why disinformation spreads and what interventions public institutions should try next.

This interview has been edited for length and clarity.

Q: Tell us a little about what you currently do.

A: I'm here in Berkeley for this academic year as an EU fellow, which is a great and unique opportunity to focus on research as a civil servant. My research topics are AI [artificial intelligence] and disinformation.

Q: Your unit did some of the earliest work understanding and tackling online disinformation. How did that happen?

A: My unit started investigating online disinformation some five years ago. We did a lot of research ourselves and commissioned more. We drafted a number of official Commission documents called Communications. They are not legislation, but they describe the European Commission’s approach and often propose several actions.

We originally identified four main pillars. One was working with the big platforms like Google, Facebook and Twitter and strongly encouraging them to take on a number of tasks to combat disinformation. Also, in Europe, we didn’t really have “fact checkers” as a profession, and what was there was quite scattered. There’s also the special issue that we have 24 official languages in the European Union, and collaboration between the member states and across these languages would be essential to combat the spread of disinformation. So we initiated the creation of a fact-checkers network. We also started to update our media literacy policy, which covers what the public should be aware of and how to behave online. And the fourth pillar was supporting research on disinformation.

During the European Parliament election process in 2019, we collaborated within the European Commission as well as with the technology platforms to monitor and make people aware of possible disinformation. It seemed to work. Disinformation was less prominent.

Now, our work focuses very much on some parts of these four pillars, especially collaborating with the platforms on a Code of Practice. The platforms commit to taking down fake accounts and fake websites and to preventing or reducing the flow of advertising money to them.

Q: What’s been working particularly well? What hasn’t been working?

A: Everything is working, but not as effectively as we would have wanted, for many reasons. It’s not a simple thing. Many of the reasons people share and believe disinformation are beyond the logic of, “Here is a fake text. We will explain to you that this text is fake, and that way, you will not share it, you will not believe it, and you will be better aware.” This logic just doesn’t always work, as we’ve seen in some concrete cases in your country and in Europe. People do believe even the most outrageous statements if those statements are said or shared or written by somebody they strongly believe in. This is the part where the current actions, as good as they are, may not work very well.

Q: That’s what you’ve been researching at Berkeley, right?

A: Yes, partly. What I’ve been looking at regarding disinformation is what influences online behavior besides the content itself. My belief is that we all want to belong somewhere. We’ve always belonged to our villages, which were our whole framework of life, from healthcare to religion. Our village was our safety net. Those who came from outside could also be a threat.

Especially with COVID, we had to transfer a big part of our lives online. So this online life has become our online village, or tribe, of people we somehow feel close to. It can be that they have the same passion for, I don’t know, cats, or the same passion for religion, or similar nationalistic views. When we are there, we feel that we are among our people. And if those people, especially somebody we consider a leader in that tribe, say something online, our first thought is not, “Well, let’s fact-check this.” No. Liking it online means showing that we like the person and like belonging to that tribe. Our shares and likes are membership fees for belonging to that tribe. Because I want to belong to this tribe, I want to prove to myself and others that I belong, so I share. I don’t really care whether it’s true.

Research has shown that sometimes when a person you believe in is caught lying and is corrected, the correction actually strengthens your belief in and commitment to that person. This comes from our feelings and our need to keep our belief in the tribe whole. We should look at this much more. I will continue to research this topic, with the aim of employing mathematics to better understand how the desire to belong drives the spread of disinformation. There could be elements in there to help battle disinformation.
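To make that idea concrete, here is a minimal toy sketch, in Python, of the kind of model Hellman alludes to. It is purely illustrative, not her actual research: the share_probability function, its inputs and its weights are all hypothetical assumptions, chosen only to show how tribal alignment can dominate perceived accuracy when someone decides whether to share a post.

# A toy model of sharing behavior (illustrative only, not Hellman's research).
# Assumption: the probability of sharing a post is a weighted mix of how
# accurate the user judges it to be and how strongly it signals membership
# in the user's "tribe." The weights below are hypothetical.

def share_probability(accuracy: float, alignment: float,
                      w_accuracy: float = 0.2, w_alignment: float = 0.8) -> float:
    """Return a sharing probability in [0, 1] from two scores in [0, 1]."""
    return w_accuracy * accuracy + w_alignment * alignment

# A false post from a trusted tribal leader vs. an accurate post from a stranger:
false_but_tribal = share_probability(accuracy=0.1, alignment=0.9)  # 0.74
true_but_neutral = share_probability(accuracy=0.9, alignment=0.1)  # 0.26
print(f"false but tribal: {false_but_tribal:.2f}")
print(f"true but neutral: {true_but_neutral:.2f}")

Under these assumed weights, the false but identity-affirming post is almost three times as likely to be shared as the accurate but tribally neutral one, which is the dynamic that fact-checking alone does not address.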

Q: How does that translate into action that limits disinformation’s spread?

A: That is difficult. That’s the part I’m saying we should really look at much more seriously. I think one of the weaknesses in our approach so far is treating it as a technical problem, when sharing or spreading of disinformation is not an IT question. We can understand how it spreads, and we can do some takedowns of websites. But this problem is not solved through those actions. This is a social, psychological issue. So I think we need to start approaching it from those angles.

I admit that making a change there is really difficult. If we think about a “tribe” with a very strong leader, and we’ve seen these tribes all throughout history, what would it take for these people to realize that their tribe is wrong? The problem is how to tackle people’s emotions, because you make them vulnerable if you take away their safety net by saying their tribe is fake.

We’ve seen a number of cults, and they are horrible. There’s apparently a cult around Putin, for example. I wonder what the people in that cult do now. Probably, they will choose to believe the “truth” that strengthens their belief in their cult. How do you tackle that? How many pictures of dead children do you need to understand that this is not good? Because what I’ve heard from Russia is that “Putin-believers” just say these pictures are fake. What else could they do? Otherwise their cult would crumble and fall. So this is complicated, and I don’t have an answer. But what I would like to do, and what I would like more research on, is to understand it so well that we can eventually, somehow, address it.

Another challenge is that some disinformation is illegal, and there is a clear obligation for the platforms to take it down within a certain amount of time. That’s clear. But then there is disinformation that is harmful but not illegal per se. Tackling that through any regulatory measure is also very difficult, because what is the law against it? There’s no law against lying or being ignorant or stupid. Very soon, if we start to say, “Well, you cannot say that to your people, dear leader, because that is not true,” we will be going against the basic principles of every free country. So it is complicated. I don’t have the solution. But I think we should do our very best to find one.

"Sharing or spreading of disinformation is not an IT question."

Q: As you prepare to head back to the European Commission, what are some action items that you’re taking with you from your time at Berkeley?

A: One thing that we haven’t discussed, which is always interesting, is the different approaches on different continents. The U.S. approach and the European approach to disinformation are slightly different. In Europe, we are risk-averse and careful, and because of that, we take time to act. We want to be sure that the action is correct. (Admittedly, in many of our disinformation interventions we have been fantastically fast.)

Being risk-averse is difficult in the online world because everything moves so fast. Currently, what we should start looking at vigorously, and with some fact-checkers we’re looking at it already, is deepfakes, and the recognition that disinformation is increasingly not just the written word. It’s pictures. It’s videos. It’s deepfakes. It’s a phenomenon we are not yet used to, and that phenomenon is directly exploited by people who want to benefit from it. Then we are talking about the Metaverse and AR/VR [augmented and virtual reality]. What about disinformation there? I think that is an area where Europe may be slightly lagging behind the U.S. We recognize this.

In the U.S., freedom of speech is very, very important. On the other hand, you can be more entrepreneurial, and in that sense solutions can be found. The U.S. is more, “Let’s do it and think about the consequences later.”

Q: How much of addressing online disinformation needs a global solution, rather than solutions by individual governing bodies?

A: It definitely is a global phenomenon. Increasingly, European legislation is becoming global. The General Data Protection Regulation has become a global regulatory baseline because many of the players are global. And, of course, what the platforms do regarding the obligation to take down fake accounts and to keep advertising money from flowing to them is also global. The starting point was, “You need to do this in Europe.” But I think they are increasingly applying these corrective measures globally because it’s beneficial for them. It’s a very good profile-raising operation as well. So it is definitely a global phenomenon.

We very much need to work globally. We are already working with and for 27 member states in Europe, so that’s already a large number of people with different cultural backgrounds, different languages, different political environments, and so on. Sometimes that’s a challenge, because the disinformation sources sometimes seem to be close to some EU governments, and we want to work in agreement with all member states. Of course, we should aim for a global agreement on what disinformation is and how to limit it collaboratively, but again, there are different understandings and approaches even among the big countries: China, the U.S., Russia. Russia has a very different approach to disinformation, in so many ways, today. So, it should be global. But it’s not easy.

Q: How can action by more powerful countries help less powerful countries in this space?

A: That’s the core of the challenge, because any action by another country is often perceived very negatively, like, “You are controlling what we can say and what we cannot say.” That’s part of the dilemma. If a bigger country were to help a smaller country regulate its output (what counts as disinformation there?), it’s the same thing: “So, you are coming to us, telling us what we can and cannot say.” We don’t want to be the “Tower of Truth.” What we can do is offer any means that we have to help smaller countries develop their own safety mechanisms.

Disinformation is very powerful. I mean, think: If there weren’t such widespread disinformation, would Russians have accepted a war in Ukraine? We have to do something quickly, for all the different countries and for the world. One thing that will be important is for research communities to work together more across borders. The wonderful work that has been done in the U.S. can be shared with researchers in smaller countries. That’s a more peaceful way of going ahead.