In the wake of the 2020 election, we’ve witnessed the SolarWinds cyberattack by Russian nationals against US infrastructure, deep mistrust of US election fidelity, and an attack on the US Capitol Building spurred on by President Trump’s tweets and a plethora of extreme right social media accounts. Cyberspace has been a fraught place here and abroad, and several Stanford groups have worked to make the internet a safer place for us all. One such group is the Internet Observatory, a branch of the Freeman Spogli Institute’s Cyber Policy Center.
The Observatory’s goal, in the words of its research manager, Renée DiResta, is to study and develop solutions for the “misuse of the internet,” including election disinformation, trust and safety engineering issues, and the social impact of end-to-end encryption. The group has recently focused on the 2020 United States election and disinformation campaigns. I spoke to DiResta and two other researchers at the Observatory, Shelby Grossman and Samantha Bradshaw, about the nature of disinformation, election security and the Observatory’s role in preserving democracy and truth around the world.
Stanford Politics (SP): In broad strokes, how would you describe the impact of foreign election disinformation in the 2016 U.S. presidential election?
Samantha Bradshaw: I think this is a little bit of a complicated question, because we still don’t have very good evidence that what the Russians did actually had an impact. And when we do look to the things that people remember about the 2016 election – take the Pizzagate conspiracy, for example – we can say that these narratives worked in a sense, because a proportion of the American population still believes that Hillary Clinton and her campaign managers were involved in a pedophile ring.
What we do know in terms of impact [on the 2016 election] is very, very little. It’s really hard to draw the connection between someone seeing a post on Facebook and then changing their mind and going to vote Trump or deciding that they’re going to stay home and not participate in the election. There are lots of different factors that go into how people formulate their political opinions. It’s not something that just happens after seeing one piece of news or seeing one story. And, you know, beyond seeing that content online, it’s a mix of factors: our environment, the other kinds of media that we’re consuming, the friends that we have, the conversations that we have, the radio pieces we listen to, the newspapers we read, the shows that we watch on TV. Measuring actual impact is really hard.
The second challenge is that even if we do have a concrete example of something like the Pizzagate conspiracy, it wasn’t necessarily a Russian-based disinformation campaign. It didn’t come directly from them; it started on 4chan and 8chan and some of those other more domestic channels. There’s an overlap in what the Russians do and what is happening domestically. [Russia] will pick up certain narratives that fit what they’re trying to do in terms of polarizing the US electorate and creating resentment toward the other side of the political spectrum.
There are bits of evidence that I would cite to say that, yes, Russia did have an impact, including some of the work that we’ve done on the computational propaganda project at Oxford. We were analyzing what people were sharing in the lead-up to the vote. On Twitter, we specifically looked at the quality of news that people were sharing. We found that in swing states, there was a higher proportion of what we call ‘junk news’ being shared by users. One hypothesis could be that these were the targeted areas where disinformation campaigns by domestic actors and by the Russians were being coordinated, because in the battleground states one or two votes can make a difference. So there are hypotheses about the impact and whether or not the Russians were successful in their campaigns. But there isn’t really any good concrete evidence. You don’t have a control group, let alone a clear outline of who saw what and what else was going on in their media environment.
SP: Do you think social media influence from Russia and other foreign actors had the same role in the 2020 election that it did in 2016?
Renée DiResta: What we saw in 2016 were influence operations that involved the creation of pages, fake accounts and dynamics in which the adversary created audiences of hundreds of thousands for its subversive propaganda accounts. And nothing on that scale was discovered this time around. What we saw instead were attempts to use residual assets that remained on the platform from previously disrupted networks, but they didn’t get any traction and their content didn’t really go anywhere.
There were still a lot of myths and disinformation that made the rounds, and there were still a lot of false stories and narratives and wild claims and allegations, but most of them came from hyper-partisan actors that were incentivized to participate in the process. That dynamic — of the allegations coming from somebody who is demonstrably American — means that it’s treated a little bit differently than a fake account owned by Russia, China, etc. With a state actor account, that content will come down under inauthenticity rules. If an American expresses the same opinion, particularly an influential American, there’s nothing inauthentic about that: it is a free expression issue at that point. And the question for the platforms is how to label it or address it without saying that it was an inauthentic account.
SP: Who is targeted in foreign social media disinformation campaigns during U.S. election seasons? Is the disinformation applied evenly across demographics or concentrated on certain groups?
Bradshaw: It’s definitely right-heavy. If we look at the content coming out of Russia and its disinformation campaigns, it was all about promoting Trump and attacking Hillary to undermine her campaign. We also saw a lot of campaigns targeting Black American voters in order to suppress their participation. There were a lot of demobilization messages directed at Black American voters, particularly because these are groups that would tend to vote Clinton or be Democrats. A large part of the Russian campaign was tying into pre-existing racial tensions within the United States and demobilizing support among Black Americans.
SP: Can you describe the nature of disinformation campaigns beyond the American context?
Shelby Grossman: You can think of it as two buckets: there are foreign disinformation campaigns where a foreign actor is trying to interfere in another country’s politics. And then there are domestic operations with domestic actors trying to influence things in their own country. And you see both of these in much of the world.
In Africa, both types of operations have been found. At the end of last year, my team put out a report on Russian interference in a number of African countries. We found, in cooperation with Facebook, an operation linked to Yevgeny Prigozhin, who has been linked to social media meddling in the United States in 2016. This was largely a Facebook network targeting Sudan, Mozambique, the Central African Republic, the Democratic Republic of the Congo and a bunch of other countries. These were Facebook pages and accounts that were pushing content supportive of the ruling party in those countries. In Mozambique, the pages were active during last year’s election and pushed content in favor of the ruling party. It wasn’t fake news, per se, it was just hyper-partisan cheerleading content — ‘the current president has done so much to bring stability to Mozambique, that’s why he should be reelected’ — that kind of stuff. That kind of content is problematic not because it’s untrue (it isn’t even falsifiable), but because the people who are running the pages are being deceptive about who they are, and they’re linked to another country.
But there have also been domestic operations in Africa and elsewhere. There was a Facebook takedown of a network of accounts, maybe a year and a half ago, that targeted Nigeria and was linked to an Israeli digital marketing firm. But it seems like the operation was run by actors within Nigeria, who had just outsourced their activity to this Israeli company. And then you have good old-fashioned domestic troll networks that are being paid by the government and that kind of stuff.
SP: In non-democratic societies where elections do not occur, what do foreign actors hope to achieve with social media disinformation?
Grossman: I was just working on a Saudi disinformation campaign that targeted Qatar. Saudi Arabia and Qatar are regional rivals, and there are dissident members of the Qatari ruling family who are living in exile in Saudi Arabia. So the Saudi government created fake Twitter accounts for these individuals, and then used those sock puppet accounts to spread rumors of a coup in Qatar in May. These accounts were like, ‘Hey, I just heard gunshots.’ They were basically using these accounts to try to make it seem like the Qatari government was really weak. They would also try to mock Qatar’s standing in the international community.
In the parts of the world that I study, it’s about trying to embarrass some political actor or bolster another political actor. Do they have really defined goals in mind? I think often they don’t. I think often they’re just hoping to sway opinion and that it will have some long-term effects.
SP: Among the speculated goldmines of artificial intelligence (AI) development is content moderation on large social media networks. Do you think AI will be able to solve disinformation and election hacking problems on social media, and to what extent can it help?
Bradshaw: I think we definitely need AI to be a part of the solution. Given the extent to which content is uploaded and shared on the internet, we absolutely need better AI: better machine learning algorithms to be able to detect this content and take it down or prevent it from even being uploaded or shared in the first place. But I don’t think it’s the silver bullet solution to our problems. AI and the [machine learning] models right now are still really, really messy. We’re getting better at doing textual analysis and looking for patterns or identifying certain disinformation narratives, but it’s still not great. A perfect example of this is the Christchurch shootings. A lot of the video that the shooter had uploaded was supposed to be blocked by the platforms, but it kept reappearing and reappearing. People would slice and dice the video and insert segments of it into other clips to mask it so that it could continue to be shared. This is a good example of how AI isn’t good enough to stop the stuff that we have already identified as being bad, let alone the stuff that is new and not yet identified.
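Bradshaw’s Christchurch example points at a concrete technical gap: exact fingerprinting only flags byte-identical copies, so spliced or re-encoded uploads slip past it, which is why platforms layer fuzzier perceptual matching on top. The sketch below is purely illustrative; the byte-histogram “perceptual hash” is a toy stand-in for real media-matching systems, not anything a platform actually runs.

```python
import hashlib

def exact_fingerprint(data: bytes) -> str:
    """Cryptographic hash: any single changed byte yields a completely new value."""
    return hashlib.sha256(data).hexdigest()

def toy_perceptual_hash(data: bytes, buckets: int = 64) -> list[int]:
    """Toy stand-in for a perceptual hash: a coarse histogram of byte values,
    so lightly edited content still produces a similar signature."""
    hist = [0] * buckets
    for b in data:
        hist[b % buckets] += 1
    total = max(sum(hist), 1)
    return [round(1000 * h / total) for h in hist]

def similarity(a: list[int], b: list[int]) -> float:
    """1.0 means identical histograms, 0.0 means maximally different."""
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / 2000

original_clip = b"frame-bytes-of-the-original-video" * 1000
reuploaded_clip = original_clip[500:] + b"spliced-into-another-clip" * 20

# Exact matching misses the edited re-upload entirely...
print(exact_fingerprint(original_clip) == exact_fingerprint(reuploaded_clip))  # False

# ...while the fuzzy signature still shows the two are nearly the same content.
print(similarity(toy_perceptual_hash(original_clip),
                 toy_perceptual_hash(reuploaded_clip)))  # close to 1.0
```

Real systems use far more robust signatures for images, video and audio, but the failure mode Bradshaw describes is the same: small edits are enough to defeat naive matching, and even fuzzy matching can be gamed by aggressive splicing.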
SP: How is content moderation handled in countries other than the United States? Do companies like Facebook successfully moderate content in other languages, especially rare ones?
DiResta: It really runs the gamut. The American election is unique in terms of its platform investigation processes. I don’t think that that’s been done for every election everywhere in the world with that degree of care. The challenge becomes: do the platforms have enough people who speak or read the local language to detect operations? Do they prioritize elections far enough in advance that they can find and mitigate things related to those particular elections? Do they apply the same standard? Our team looked at some actions in Guinea related to astroturfing that were directly connected to the ruling party. It was framed as ‘Oh, these are just volunteers who have made some pages, and they’re just passionate locals who are producing this content,’ when in reality that sort of behavior would be seen as highly manipulative in the United States. That’s also potentially in part because of how our laws work here.
Grossman: Facebook and Twitter, especially over the past two years, have put a huge amount of effort into finding disinformation campaigns in other countries and making them public. Twitter is really the leader here. When Twitter does a takedown of a network linked to a state actor, they actually make all, or almost all, of the tweets public, so you can go on a website right now and download data sets that are linked to the Saudi government, or linked to a digital marketing firm in the United Arab Emirates (UAE) that has ties to the government of the UAE. Every month Facebook is publicizing stuff about these takedowns. Just a few weeks ago, they did a takedown of a network linked to the Islamic Movement in Nigeria, which has ties to Iran. There’s increasingly a lot of attention to finding disinformation campaigns in other countries, which I think is awesome. My hope is that Google and YouTube move in that direction as well.
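For readers who want to dig into these releases, the published takedown archives are typically large CSV files of tweets plus account metadata. Below is a minimal sketch of how one might start exploring such a file with pandas; the filename and column names (tweet_time, hashtags) are assumptions for illustration, since each release documents its own schema.

```python
import pandas as pd

# Hypothetical filename and columns -- adjust to whatever the downloaded release contains.
df = pd.read_csv("saudi_linked_takedown_tweets.csv", parse_dates=["tweet_time"])

# How active was the network over time?
monthly_volume = df.set_index("tweet_time").resample("M").size()
print(monthly_volume.tail(12))

# Which hashtags did the operation push hardest?
top_hashtags = (
    df["hashtags"]
    .dropna()
    .str.strip("[]")          # the field is often a stringified list
    .str.split(",")
    .explode()
    .str.strip(" '\"")
    .value_counts()
    .head(20)
)
print(top_hashtags)
```

Because the hashtag field is often stored as a stringified list, the splitting above is only a rough heuristic; checking the release’s accompanying documentation for the exact format is the safer route.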
SP: What roles should be played by the U.S. government and tech companies when it comes to social media disinformation and influence by both foreign and domestic actors?
DiResta: Interestingly, it’s arguably harder for the government to step in and create [rules] because the government in the United States can’t regulate speech. Other countries have been a little more proactive in creating laws related to what can appear within their borders. Twitter has certain restrictions on speech related to Nazism and things that are illegal under Germany’s hate speech laws. That same speech is legal in the U.S., so there are certain tweets that will appear in one country but not in another because the platforms try to adhere to the laws of the land. That said, when we think about moderation, the challenge is that the platforms are quite powerful, and there is no democratic process that goes into the determination of how these moderation frameworks work. There’s no point at which users really weigh in on policy changes or actions that they don’t agree with. So the question to ask about moderation right now is: when there are instances in which people believe the platform has overstepped, is there redress for people whose accounts have been taken down? Facebook has begun to create this ‘Facebook Supreme Court’ dynamic in which these cases will be adjudicated.
SP: What would you like to see change within social media companies, government or society to make the internet safer and more protective of democracies everywhere?
Bradshaw: Working more with researchers is a really good thing, but the platforms have still fallen really, really short. The datasets that the platforms curate for academics contain very little information about whether or not they’re representative samples. There are also restrictions on who gets access to do these kinds of studies. While the platforms are saying they’re working more with academic partners, they’re simultaneously closing access to their Application Programming Interfaces, which would allow for more independent data collection and analysis that doesn’t come from these predefined platform data sets. I see a lot of problems with these kinds of solutions to the disinformation problem. They’re doing the bare minimum, and it’s still very much in the interest of their shareholders rather than in the interest of making our democracies healthier and our platforms a better place for democracy.
Grossman: I think in general Facebook and Twitter are leading the way and being transparent about it. Transparency is the most important thing, so that people know what’s being taken down and what we can learn from these operations [to take down inauthentic behavior]. My hope would be that Google and YouTube move in that direction as well. I think YouTube is taking down stuff related to disinformation campaigns, but we don’t always know what is going on. They’ve started being a little more transparent, but it would be great if they could go further and work with independent researchers who can analyze this kind of content before it comes down.
Avery Rogers is a Stanford student studying Economics and Computer Science.