AI and Election Misinformation

Introduction

Artificial intelligence (AI) has entered every aspect of our lives. From smart home assistants like Alexa to reading, shopping, or viewing suggestions we receive on the likes of Amazon and Netflix, AI is working in the background. Arguably, many of those innovations have made our lives better.

However, there is another side to the application of AI. The technology has been used to create and disseminate election misinformation. Deepfake social media profiles and sophisticated bots have influenced voters’ decisions and changed the way political campaigns are run almost beyond recognition.

In that sense, AI has contributed to the problem of election misinformation. The same technology, however, holds the key to solving the problem and preventing the spread of misinformation in the future. With the right policies in place, AI can become our most powerful tool for preventing the spread of fake news.

Elections are an essential part of our democracy, allowing each eligible citizen to participate in the choice of political leaders. In recent years, presidential and midterm elections in the United States and beyond have become increasingly influenced by election misinformation. Artificial intelligence (AI) technology has contributed to the problem but also offers solutions.

AI has entered every part of human life, including news distribution and sourcing. On social media networks, deepfake accounts utilize AI to mimic human accounts through imagery and human-like posting, making it hard to distinguish between real and fake information.

Deepfake content has influenced voter behavior in recent presidential and midterm elections in the United States, putting our democracy in danger.

AI-powered bots multiply fake news faster than any human could, sharing, re-posting, and commenting on deepfake content until it crosses over into the feeds of real users. AI may be one of the causes of misinformation, but the technology is also part of the solution because it can supercharge and scale human efforts to spot and eliminate fake news and other detrimental information.

Efforts to combat election misinformation are being made at several levels, by governments as well as economic organizations. Social media networks themselves are also exploring options to limit the amount of fake content distributed through their channels. Attempts at regulation need to be balanced carefully against freedom of speech and the pluralism of traditional and social media. Like free and fair elections, freedom of speech and media pluralism are hallmarks of modern democracies.


Also Read: Artificial Intelligence and disinformation.

Why Are Elections Necessary in a Democracy?

Modern democracies are built on the principle of representation. Because it is simply not possible for each individual citizen to make decisions at a state or even national level, we choose people we trust to represent our values, beliefs, and ideas on our behalf. Elections are the tool that facilitates that choice.

During each election cycle, whether it is a presidential election or midterm election, candidates present themselves and their policies to the public during their election campaigns. Classic campaigning still includes knocking on doors and speaking in town squares, but technology is becoming increasingly influential in campaigns and democratic elections.

Social media platforms allow candidates to reach much larger audiences than campaigning door-to-door or traveling across the country could. These platforms are essential to help multiply campaign messages so that they can be heard by more potential voters.

To win an election, candidates need to reach as many voters as possible and persuade them that their position is preferable to their opponent’s. If they succeed, their likelihood of being elected increases markedly. In an ideal world, citizens would hear a similar amount of information from each candidate.

They would be able to weigh the validity of the candidates’ arguments before making their decision on election day. However, Americans are surrounded by a growing amount of misinformation on several levels:

Candidates deliberately misrepresent their opponents’ views, leading to voter confusion

Misinformation about the election process itself contributes to uncertainty about the validity of elections

As misrepresentations and misinformation are spreading faster than ever before, they are also becoming harder to identify and distinguish from real information. That is how misinformation endangers elections and democracies as a whole.

Implications of Artificial Intelligence in Political Campaigns

Artificial intelligence has the potential to supercharge the reach of political campaigns in a way that benefits our democracy. However, when we read about AI and election misinformation, it is more often in the context of voter manipulation and the spread of misinformation. This double-edged character of AI applications has been the subject of much research.

Allegations like these first arose around the 2016 presidential campaign, when Russian-operated accounts and AI-driven bots were suspected of influencing American voters and the outcome of that year’s election.

The World Economic Forum (WEF) has previously referred to the “manipulative influence of artificial intelligence” and called for a new “ethical agenda” to help govern political advertising and content on online platforms. But how could AI manipulate election outcomes?

Deepfakes have become one of the main emerging threats, and we will analyze them in more detail below. In addition, one of the most powerful features of AI applications is their ability to target audiences precisely and at the most critical time, for example, on the morning of election day.

How Does AI Contribute to the Spread of Election Misinformation?

To understand how AI contributes to the spread of election misinformation, we need to take a closer look at how American elections work. In any given election cycle, there are certain areas where competition is stronger than in others.

Presidential elections, for example, are won or lost in swing states. These states are more interesting for anyone trying to manipulate the outcome of the election because their results can change – swing – who ends up in the White House for the next four years.

In midterm elections, incumbent representatives and senators tend to have an advantage over their opponents. However, when there are open seats or where an opponent is gaining momentum against an incumbent, political races become more interesting for potential attackers.

AI-based manipulation is generally rooted in a long-term strategy. As it becomes clear which races will be harder fought, the groundwork is laid for the spread of misinformation. Social media networks have become the leading source of news for many Americans, making them an ideal platform to distribute content.

In order to spread misinformation fast, attackers create fake accounts. Just a few years ago, it was fairly simple to distinguish between human accounts and fake ones. Today, AI technology makes it easier and faster to imitate legitimate accounts, generating convincing profile images and sharing just enough content to create an air of authenticity.

Once an election campaign reaches its critical stages, attacker-controlled accounts are ready to disseminate targeted misinformation, for example about the opposing candidate. Because all of those accounts are controlled from a single source, the information multiplies fast. Social media algorithms then pick up the content and prioritize it even further because of its apparent popularity.
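To see why single-source amplification works, consider a toy feed-ranking function. Everything below is invented for illustration; real ranking systems weigh far more signals. The sketch simply shows that a ranker counting raw interactions cannot tell sixty coordinated bot shares from sixty enthusiastic readers, while discounting by the number of distinct sharers blunts the attack.

```python
# Toy illustration (all numbers invented): engagement-count ranking
# rewards coordinated amplification; a source-aware variant resists it.
posts = [
    {"id": "organic", "likes": 40, "shares": 5,  "unique_sharers": 45},
    {"id": "botnet",  "likes": 30, "shares": 60, "unique_sharers": 4},
]

def naive_score(post):
    # Counts raw interactions, regardless of who produced them.
    return post["likes"] + 3 * post["shares"]

def source_aware_score(post):
    # Discounts posts whose shares come from very few distinct accounts.
    diversity = min(1.0, post["unique_sharers"] / (post["shares"] + 1))
    return naive_score(post) * diversity

for score in (naive_score, source_aware_score):
    ranked = sorted(posts, key=score, reverse=True)
    print(score.__name__, "->", [p["id"] for p in ranked])
# naive_score ranks the bot-amplified post first; the source-aware
# variant restores the organically shared post to the top.
```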

As a result, voters may be put off from voting for a specific candidate if they believe in the authenticity of a piece of misinformation. If this happens during the early stages of a campaign, the team may be able to mitigate it. However, repeated, well-orchestrated attacks are harder to ward off and disprove, especially if the campaign has reached its final stages.

Deepfake Democracy

The WEF fears that the world is headed for an “infodemic”: a situation in which misinformation spreads so fast and so widely that it becomes almost impossible to avoid. The organization believes that the most recent advances in AI have made this transition possible. In fact, it is the democratization of AI that has led to some of the scarier recent developments.

Thanks to advanced, open-source software products, potential attackers no longer need specialist qualifications to deploy machine learning processes. There is also no longer a need for time-consuming graphical rendering to create deepfake content. This development is especially frightening in relation to video creation.

Just a few years ago, it was obvious when you were looking at real footage as opposed to content that had been manipulated in some way. AI subfields such as generative adversarial networks (GANs) are on course to erase these distinctions between real and fake entirely. The consequences could be dire: how can electoral candidates prove that video content about them is fake?

And, perhaps more importantly, how can voters recognize what is real and what is fake? Experts suggest that, to gain credibility, a fake narrative only needs to be backed by a source that carries a semblance of authority, such as an academic title or a list of publications. Unfortunately, the same AI that blends facts with misinformation is also capable of faking much of the identity of the authority used to back up the manipulative content.
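To make the GAN idea concrete, here is a minimal sketch of the adversarial training loop in Python with PyTorch. The network sizes, learning rates, and toy one-dimensional data are illustrative assumptions, nothing like a production deepfake model, but the tug-of-war between generator and discriminator is the same mechanism that makes synthetic media increasingly hard to distinguish from real footage.

```python
# A minimal GAN training loop on toy 1-D data. Architectures and
# hyperparameters are illustrative assumptions, not a deepfake model.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: turns random noise into a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: estimates the probability that a sample is real.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_label = torch.ones(64, 1)
fake_label = torch.zeros(64, 1)

for step in range(2000):
    # "Real" data: a Gaussian the generator must learn to imitate.
    real = torch.randn(64, 1) * 0.5 + 2.0
    fake = G(torch.randn(64, latent_dim))

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = bce(D(real), real_label) + bce(D(fake.detach()), fake_label)
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the updated discriminator.
    opt_g.zero_grad()
    g_loss = bce(D(fake), real_label)
    g_loss.backward()
    opt_g.step()

print(f"generated mean: {G(torch.randn(1000, latent_dim)).mean().item():.2f} (target 2.0)")
```

As the loop runs, the generator's output drifts toward the real distribution precisely because the discriminator keeps punishing anything that looks fake; scaled up to images and video, that pressure is what erodes the visible tells of manipulation.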


Bot Proliferation of Fake News

Cast your mind back just a few years, and most people’s idea of a bot or robot would have resembled Star Wars’ R2-D2. In the 21st century, bots are no longer so quaint, and they are not physical robots either.

Instead, today’s bots reside on social media platforms alongside human accounts, spreading fake news and inflating its popularity and reach even further. In an article about the spread of fake news, researchers at the University of California, Santa Barbara cite a 2017 estimate according to which between 5% (Facebook) and 8% (Twitter and Instagram) of all social media accounts are bot-based.

As opposed to the somewhat time-consuming upkeep of a human account, bots work according to an algorithm. Once programmed, they can work repetitively and autonomously. Plus, AI technology makes it possible for them to learn from their results and improve their performance.
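That algorithmic regularity is also a weakness that detection tools can exploit. The sketch below is a deliberately simple, hypothetical heuristic: the function name, thresholds, and input format are assumptions made for this illustration, and real bot-detection systems combine many stronger signals. It flags accounts that post either inhumanly fast or with machine-like regularity.

```python
# A hypothetical heuristic for flagging bot-like posting cadence.
# Function name, thresholds, and input format are assumptions;
# real detection systems combine many stronger signals.
from datetime import datetime
from statistics import pstdev

def looks_automated(post_times, max_posts_per_hour=30, min_jitter_s=2.0):
    """Flag accounts that post inhumanly fast or with machine-like regularity."""
    times = sorted(post_times)
    if len(times) < 3:
        return False  # not enough history to judge
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    span_hours = (times[-1] - times[0]).total_seconds() / 3600
    rate = len(times) / max(span_hours, 1e-9)
    # Humans post in irregular bursts; scripts are fast or metronomic.
    return rate > max_posts_per_hour or pstdev(gaps) < min_jitter_s

# One post every five seconds, with zero jitter: flagged on both counts.
posts = [datetime(2024, 1, 1, 12, 0, s) for s in range(0, 50, 5)]
print(looks_automated(posts))  # True
```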

One of the most highly publicized deployments of bots happened during the 2016 presidential election, when Russian Twitter bots posed as Midwestern swing-voter Republicans. Their influence on real, human swing-voter Republicans was enormous simply because the bots were so relatable. Their lifestyle, values, and concerns mirrored those of real voters, automatically creating a relationship of trust.

That trust inspired social interactions with real voters, who shared the misinformation propagated by the bots into their own networks. Cybersecurity experts believe that the Russian influence did not stop when now-former President Trump won the election. The bots continued to engage with the administration by reacting to or sharing posts.

Also Read: How to Spot a Deepfake: Tips for Combatting Disinformation

They also planted some of their own misinformation in the hope that the administration would pick it up and legitimize it. This is what happened when a conspiracy theory about the wiretapping of Trump Tower started circulating. Bots planted a story accusing the administration of President Barack Obama of having initiated the wiretap. Once the Trump administration picked up on the narrative, bots continued to fuel the fire with more interactions on social platforms.


Microtargeting

Microtargeting is one of the strategies employed by bot operators to reach the audience most likely to be open to their messages. So, how do bots know who might be receptive to fake news? Cookies are among the most effective ways of determining the interests of internet users. These small files, which websites install in a user’s web browser, save the user’s preferences. The next time the website is opened, the browser knows which settings to apply. So far, so useful.

However, cookies also track users’ actions across websites, and they may also come from third parties, including websites a user has not even visited. These trackers are often deployed by companies that analyze a user’s movements for personalization purposes. Social media networks have become masters at tracking to help put the most relevant adverts in front of their users.

Social media tracking cookies may be placed on your browser every time you visit a website that features an icon of the relevant network. It is important to understand the extent of tracking. Most trackers not only record that a user visited a site. They also save information on time spent, links followed, and movements across certain pages.

This information is then sold to companies interested in targeting internet users at this level of detail. The richer and more detailed the information, the higher its value. By gaining access to such detailed information, anyone looking to propagate fake news or conspiracy theories can target their audience minutely.
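The mechanism is simple enough to sketch. In the hypothetical model below (all names and sites are invented), a third-party tracker assigns a browser a random ID on first contact and then accumulates a cross-site profile keyed to that ID. It is this kind of profile that microtargeters buy.

```python
# A toy model (all names invented) of how a third-party tracking
# cookie builds a cross-site interest profile.
import uuid
from collections import defaultdict

profiles = defaultdict(list)  # cookie_id -> [(site, page, seconds), ...]

def get_or_set_tracker_id(browser_cookies):
    """Return the tracker's ID for this browser, minting one on first contact."""
    return browser_cookies.setdefault("tracker_id", str(uuid.uuid4()))

def record_visit(browser_cookies, site, page, seconds_on_page):
    tracker_id = get_or_set_tracker_id(browser_cookies)
    profiles[tracker_id].append((site, page, seconds_on_page))

# The same browser visits two unrelated sites that both embed the tracker;
# the shared cookie ties the visits together into one profile.
browser = {}  # stands in for the browser's cookie jar
record_visit(browser, "news.example", "/swing-state-polls", 140)
record_visit(browser, "shop.example", "/camping-gear", 35)
print(profiles[browser["tracker_id"]])
```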

AI-Generated Misinformation

To analyze the impact of AI-generated misinformation further, let us return to our example of disinformation placed on the morning of election day. Once polls open in a swing state, bots programmed by malicious actors might post a tweet claiming that the polls are closed and that voters are being redirected to a different, fake location.

Immediately, other bots pick up this piece of online disinformation and continue to spread it. It is only a question of time until a story initiated by a bot is picked up by a human and placed into their social network. Next, stories about supposedly closed polling places are picked up by traditional broadcasters, most likely as breaking news. At this point, social media-based fake news has transitioned into mainstream media.

Of course, a short time later, posts will start appearing that discredit the fake story. However, as those posts are not powered by thousands of AI-enabled bots, they will take much longer to spread. As John Villasenor puts it in his report for the Brookings Institution, “the damage is already done.” No matter how many corrections are being broadcast throughout the day, the early rumors will continue to course through the system of social and traditional media alike. Voters may decide to leave voting until later or avoid getting drawn into potential voting chaos altogether.

These rapid disinformation attacks are particularly hard to address as the damage is done so quickly. Any actions put in place to minimize the effects of online disinformation most likely come too late.
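A back-of-the-envelope model shows why the corrections arrive too late. In the sketch below, the growth rates and the two-hour head start are invented assumptions, chosen only to illustrate the dynamic Villasenor describes: a bot-amplified rumor compounds from minute one, while the human-driven correction starts later and grows more slowly.

```python
# Back-of-the-envelope model of why corrections lag rumors. Growth
# rates and the two-hour head start are invented assumptions.
def reach(seed, hourly_growth, hours):
    """Simple compound-growth model of audience reached."""
    return seed * (1 + hourly_growth) ** max(hours, 0)

for hour in range(0, 13, 2):
    rumor = reach(1_000, 0.9, hour)    # bot-amplified from hour 0
    fix = reach(1_000, 0.4, hour - 2)  # human-driven, starts 2 hours later
    print(f"hour {hour:2d}: rumor {rumor:>12,.0f}   correction {fix:>12,.0f}")
# By hour 12 the rumor has reached roughly 2.2 million accounts in this
# model, while the correction has reached fewer than 30,000.
```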

How Can AI Help Spot Misinformation?

If AI is instrumental in the spread of misinformation, perhaps the technology can be equally effective in identifying and eradicating fake news and conspiracy theories. The answer is yes, according to the WEF, which is calling for a new ethical agenda for the use of technology.

The question remains how this agenda can be shaped. A few years ago, scientists at the Pew Research Center asked technology experts, corporate representatives, government leaders, and academics where they saw the future of online discourse. Nearly four out of ten respondents believed that bad actors and negative narratives would shape the online discourse.

They identified the anonymity afforded by online platforms as a source of continued bad behavior. Limiting anonymity and holding users accountable for their online behavior could be one way of creating a more fruitful online discourse.

Artificial intelligence and misinformation (Source: CNN via YouTube).

Policy Considerations

Malicious actors are a concern beyond election misinformation. They can be found in every form of online discourse, and finding a formal way to police online interactions might be difficult or even unnecessary.

The WEF suggests that disarming deepfakes is down to communities and individuals who take action to raise the standards of how they create and interact with online content. The organization is calling on voters to “stand up for facts and truth in online discourse” as a means of changing interactions. The WEF also highlights the subjectivity of online platforms and the content shared on them as a potential source of problems. With so many different voices sharing information, it is hard to determine what is real and what is fake.

How Do We Protect Future Elections from AI Misinformation?

Whilst the WEF suggests an approach that amounts to a kind of self-policing of deepfake accounts, social media organizations themselves are starting to fight disinformation more proactively.

Meta, the parent company of Facebook, is deploying AI for good. According to the social network, artificial intelligence is being used to augment the work of humans when it comes to detecting and removing fake content or highlighting posts that may otherwise be harmful.

Their approach is relatively simple. Under a new policy, independent third-party fact-checkers add warnings to content and limit its distribution. Their efforts are constrained, however, by the sheer number of users and the volume of content being published around the clock. That is where artificial intelligence technology comes in to step up the effort.

With the help of machine learning algorithms, AI can automatically catch new versions of previously disabled content and prevent their spread. At the same time, Meta’s fact-checkers focus on new content that may be potentially harmful.
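One general technique behind catching “new versions of previously disabled content” is near-duplicate matching. The sketch below uses word-shingle Jaccard similarity; the shingle size, the 0.6 threshold, and the example strings are assumptions made for this illustration, and Meta’s production systems are undoubtedly far more sophisticated. The point is that a re-worded re-upload still shares most of its word n-grams with the original.

```python
# Hypothetical near-duplicate matcher: shingle size, threshold, and
# examples are invented; shown only to illustrate the general technique.
import re

def shingles(text, n=3):
    """Overlapping word n-grams; small edits leave most shingles intact."""
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# A previously removed claim (invented example).
removed = ["polls in riverside county are closed vote tomorrow instead"]

def is_known_variant(post, threshold=0.6):
    return any(jaccard(post, banned) >= threshold for banned in removed)

print(is_known_variant(
    "BREAKING: polls in Riverside County are CLOSED, vote tomorrow instead!"
))  # True: the re-worded re-upload still shares most of its shingles
```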

On the other side of the Atlantic, the European Parliament is considering regulatory initiatives to combat the use of AI for the spread of misinformation. A 2019 study published by the Scientific Foresight Unit of the European Parliamentary Research Service (EPRS) highlights the potential of using AI to reverse or minimize the spread of disinformation.

Whilst the study’s authors consider regulatory policy options, they are also aware of having to navigate the overlap between technology, media pluralism, and freedom of expression. The latter two are hallmarks of advanced Western democracies and need to be balanced carefully against any regulation.

As far as policy options are concerned, the study begins by highlighting the need for media literacy training that includes information about how social media feeds are curated. The EPRS researchers believe that using AI as a regulatory tool should only be considered if it is combined with human review processes and opportunities to appeal.

They also point out that without greater transparency and more detailed research, regulatory policies are likely to cause more harm than support the needs of citizens.

The WEF’s suggestion of self-policing, the in-house efforts of social media networks, and the considerations of global governments all provide foundations for a solution to the deepfake problem. However, at the time of writing, there is not yet a single obvious solution that can rule out misinformation being spread during future elections.

Also Read: Democracy will win with improved artificial intelligence.

Conclusion

Artificial intelligence technology is here to stay, and we humans will interact with these technologies in all aspects of our lives. Whilst it is true that AI has caused some of the problems we are currently facing concerning deliberately false information, artificial intelligence will also be critical in deploying a solution.

Combined with and augmented by human intelligence, AI has the power to scale human efforts to identify and disable deepfakes before they cause damage. To prevent election misinformation from influencing voter behavior, efforts to police deepfakes need to be combined with consumer education about how information travels.

Humans are wired to interact socially. We learn by sharing stories, so it is only natural to interact on social media and share content with others. These behaviors are deeply ingrained in our DNA. Becoming more aware of where we source our information and how we share it will be critical to combat deepfake news with the help of human intelligence.

References

Puutio, Alexander, and David Alexandru Timis. “Deepfake Democracy: Here’s How Modern Elections Could Be Decided by Fake News.” World Economic Forum, 5 Oct. 2020, https://www.weforum.org/agenda/2020/10/deepfake-democracy-could-modern-elections-fall-prey-to-fiction/. Accessed 20 Feb. 2023.

CNN Business. “How to Spot Misinformation Online.” YouTube, video, 26 Oct. 2020, https://www.youtube.com/watch?v=0LCzu8pEN4M. Accessed 20 Feb. 2023.

CNN. “This Fake News Machine Gears up for 2020.” YouTube, video, 15 Sept. 2017, https://www.youtube.com/watch?v=tNsbKtckzcM. Accessed 20 Feb. 2023.

Yahoo Finance. “Lawmakers Raise Concerns about Election Interference and Deep Fake Videos.” YouTube, video, 13 June 2019, https://www.youtube.com/watch?v=OYCk_FfgVgM. Accessed 20 Feb. 2023.

“How Is Fake News Spread? Bots, People like You, Trolls, and Microtargeting.” Center for Information Technology and Society – UC Santa Barbara, https://www.cits.ucsb.edu/fake-news/spread. Accessed 20 Feb. 2023.

O’Connor, Gabe, and Avie Schneider. “How Russian Twitter Bots Pumped Out Fake News During The 2016 Election.” NPR, 3 Apr. 2017, https://www.npr.org/sections/alltechconsidered/2017/04/03/522503844/how-russian-twitter-bots-pumped-out-fake-news-during-the-2016-election. Accessed 20 Feb. 2023.


Rainie, Lee, et al. “The Future of Free Speech, Trolls, Anonymity and Fake News Online.” Pew Research Center: Internet, Science & Tech, 29 Mar. 2017, https://www.pewresearch.org/internet/2017/03/29/the-future-of-free-speech-trolls-anonymity-and-fake-news-online/. Accessed 20 Feb. 2023.

“Regulating Disinformation with Artificial Intelligence : Effects of Disinformation Initiatives on Freedom of Expression and Media Pluralism.” Publications Office of the EU, 28 May 2019, https://op.europa.eu/publication/manifestation_identifier/PUB_QA0119194ENN.

Villasenor, John. “How to Deal with AI-Enabled Disinformation.” Brookings, 23 Nov. 2020, https://www.brookings.edu/research/how-to-deal-with-ai-enabled-disinformation/. Accessed 20 Feb. 2023.