Misinformation and social media in the US election: comparing 2016 and 2020

Written by Dr Julio Amador Diaz Lopez, Research Fellow at Imperial College Business School

Part of the narrative used to explain the Trump presidency has been foreign misinformation in the 2016 election. A great deal of research, investigative work such as the Mueller report as well as academic work such as ours at Imperial, has centred on foreign influence operations. The FBI concluded that the Internet Research Agency (IRA) conducted active measures to influence public opinion well before the election, and constantly in the days leading up to Election Day. These measures included well-coordinated efforts to imitate Americans and to spread information designed to polarise the public, with the objective of discouraging people from engaging in healthy democratic practices, from maintaining civil discussions to voting. In fact, our own research has shown that these measures included pushing disinformation about voting itself. In particular, we identified that these agents tried to cast doubt on the number of people voting, suggesting that many people who were not allowed to vote were doing so. This may sound familiar.

Voting rights (who can vote, what is required to cast a vote and, now, in the time of coronavirus, which absentee ballots will be valid) have been part of the mainstream political debate ever since the Bush administration. The rationale, or at least a blunt assessment of it, is that minorities in the US have increasingly become a political force, so making it harder for them to vote will benefit Republicans, as these groups tend to vote for Democrats. Or, if we follow the GOP's rationale: Democrats are recruiting people who are not allowed to vote to cast ballots for them.

In our 2016 data, we found that many of the social media posts pushed by the IRA did indeed use this narrative. However, because this information was being posted by foreigners, we were able to exploit misspellings and semantics to identify which messages came from a foreign influence campaign and which came from within the US. (Remember that, regardless of your point of view, it is not illegal to post these messages; the harm arises when a foreign actor uses them to influence US domestic politics.)
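To give a flavour of the intuition, not our published pipeline: assuming a supervised set-up with posts already labelled by origin, character-level n-grams can pick up spelling idiosyncrasies while word n-grams stand in for semantics, and a simple linear model can combine the two. The texts, labels and model choice below are illustrative placeholders only.

```python
# Minimal sketch: combine misspelling-sensitive character n-grams with
# word n-grams to separate foreign-origin from domestic posts.
# All texts and labels here are hypothetical toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

texts = [
    "Thousands of ilegal voters was bussed across state lines!",      # toy IRA-style post
    "Election officials confirmed no evidence of widespread fraud.",  # toy domestic post
]
labels = [1, 0]  # 1 = foreign influence campaign, 0 = domestic (toy labels)

features = FeatureUnion([
    # character n-grams capture misspellings and unusual orthography
    ("chars", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
    # word n-grams act as a crude proxy for the message's semantics
    ("words", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
])

model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(texts, labels)  # a real model would need thousands of labelled posts

print(model.predict(["Milions of votes from people not alowed to vote!"]))
```

The design choice worth noting is the character n-grams: a native speaker and a non-native operator tend to make different spelling and phrasing errors, and character-level features surface those differences without any hand-built dictionary.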

Unlike in the 2016 election, this time most of the misinformation related to voting rights (from slanted opinions to outright lies) is being pushed by the president of the United States. As such, much of the disinformation being discussed in the US is now created and propagated by actors within the US, so we cannot effectively follow the same strategy to identify it. Most importantly, within the context of free speech in the United States, this misinformation (the kind created and pushed by American citizens) is allowed and, some argue, even in the public interest; not because of the content itself but because, the argument goes, citizens would be able to identify who is engaging in bad behaviour and punish them at the ballot box. Hence, the attention of policymakers has shifted from identifying and banning misinformation to contextualising it; for example, Twitter has opted not to track and erase all such posts but to place them behind a warning and limit their diffusion.

This seems a very reasonable, promising approach. As of now, our understanding of misinformation, from providing a unified definition to characterising it, is limited; even more so our ability to catch every piece of misinformation on the web. Therefore, identifying prominent influencers capable of affecting political discourse and concentrating efforts on contextualising their posts every time they push blatant lies may be reasonable. But this opens another can of worms: do we want social media firms doing this? Do we want governments to do so?
