

Geoeconomic Briefing No.178 February 13, 2024

The battle to tackle U.S. election propaganda heats up

Yoshiyuki Sagara
Senior Research Fellow


More than half of the world’s population will elect their political leaders in 2024, with votes taking place in over 60 countries, from Indonesia and South Korea to India and the United Kingdom, as well as for the European Parliament. Among those, the most important election in terms of global implications is the November presidential election in the United States, the world’s oldest continuing democracy. The U.S. presidential race is growing ever more intense amid deepening political polarization and increasingly divergent values. With the winds of authoritarianism and populism strengthening in many democratic countries, will democracies be forced to retreat further, or will they show their resilience in the face of these headwinds?

Whether the U.S. presidential and congressional polls are conducted as “free and fair elections” will have a major impact on the future of the international order. Elections around the world, and those in the U.S. in particular, are threatened by “election propaganda,” meaning election campaigns that rely on misinformation, disinformation and malinformation (MDM), as well as conspiracy theories. While election interference by authoritarian states remains a persistent concern, unjustified domestic election propaganda deployed to garner votes within American society is itself an enemy of free and fair elections.

Election propaganda 2024

Former U.S. President Donald Trump’s administration lasted only one term, and in the 2022 midterm elections many Republican candidates he backed were defeated while Democrats fared better than expected. In the 2024 presidential race, however, the hard-line conservative wing of the Republican Party, and Trump in particular, is in the lead. Eight years have passed since Trump first won the presidency, and the situation regarding election propaganda has grown considerably more serious since then.

First, the dramatic development of artificial intelligence (AI) has amplified the ripple effects of disinformation. The use of misinformation and malinformation in negative campaigns against opponents in U.S. presidential elections is not new in itself. During the 2016 presidential election, Cambridge Analytica, a now-defunct British political consulting firm that worked closely with Trump’s campaign, repeatedly posted a video clip of his opponent Hillary Clinton coughing and leaving a Sept. 11 memorial ceremony early, giving the impression that her health was failing. The clip, however, was a fragment of genuine footage taken out of context.

The widespread deployment of generative AI, however, has made it easy to create deepfakes, including entirely fabricated videos and audio. In 2023, the Republican National Committee’s dramatic negative commercial criticizing U.S. President Joe Biden’s administration attracted attention because it was created entirely with generative AI. If elaborate fake videos and fake news become topics of conversation not only on social media but also in mass media such as newspapers, television and cable TV, and if they are used repeatedly in election campaigns, the electorate’s voting behavior could well be affected.

In particular, once the parties have chosen their candidates and the presidential debates begin in September, election campaigns are expected to turn into county-by-county contests in swing states. With generative AI, there is a risk of deepfake videos and fake news stories spreading among voters in swing states and swing counties, targeting specific issues that are of particular concern in those places. The number of such fake videos and fake articles could be on the order of thousands. Such a “deepfake saturation attack” could seriously affect the outcome in every swing county, posing a major threat.

Second, the use of major social media platforms such as Facebook, YouTube, X, Instagram and TikTok as channels for disseminating disinformation has been steadily increasing. Partisanship in major news outlets such as Fox News and CNN, and in cable TV news more broadly, has been cited as a factor accelerating political polarization in the U.S. However, a Pew Research Center study released in November shows that while only a third of Americans said they often get news from television, more than half of respondents said they often turn to digital services for news, a share that is increasing year by year.

In terms of monthly active users (MAU), a common indicator of a social media platform’s size, Twitter counted 310 million users worldwide in January-March 2016, when Trump was using the platform for his presidential campaign. After it was acquired by Elon Musk and became X, its MAU rose to 500 million worldwide as of December, including 95 million in the U.S. alone. X and Facebook are still the most popular social media sites for regular news consumption, but their use for news browsing is slowly declining. TikTok usage, by contrast, has skyrocketed, with 43% of TikTok users saying they regularly get news via the app, according to the Pew Research Center survey.

U.S. voters are increasingly getting their news through social media rather than through mass media such as newspapers and television. This means voters are spending more time looking at information sent directly from politicians, political action committees and unspecified individuals, rather than information reported and edited by the mass media. Furthermore, the “news feed” pioneered by Facebook is addictive, with new items appearing one after another through simple vertical scrolling and “likes” motivating users to post and share content. Many users are unwittingly drawn into an echo chamber, where certain opinions and ideas are amplified and gain influence as people with similar values interact and empathize with one another.

Third, there are clear differences among platform companies in their policies on removing inappropriate content. The platform companies themselves are responsible for monitoring and taking down such content. If harmful content is left unchecked, they risk losing users, and if they fail to take appropriate action, they risk being forced to shut down their platforms.

However, even when police or a court notifies a platform company that content is illegal and should be taken down, there are cases in which the company decides, at its own discretion, not to remove it. This is partly because governments in authoritarian countries sometimes demand the removal of information that is merely unfavorable to the authorities. How to keep an appropriate distance from politics is a difficult question for platform companies.

Nevertheless, on May 26, 2020, Twitter attached a fact-check warning label to a series of tweets by Trump containing false claims about mail-in ballots and provided a link to a page describing the claims as “unsubstantiated.” As a countermeasure against disinformation, such “redirecting” to accurate information is an effective technique that can help correct misperceptions. Twitter later froze Trump’s account following the Jan. 6, 2021, attack on the U.S. Capitol by a mob of his supporters. Musk, however, reinstated Trump’s account and rebranded Twitter as X, and the link to the verification page disappeared.

Major social media companies have also released transparency reports on how much inappropriate content they have removed. YouTube, for example, removed more than 8 million videos between July and September. The majority were taken down over child safety concerns or because they included dangerous content, and more than 160,000 videos promoting violent extremism were also removed. Twitter, on the other hand, stopped publishing transparency reports after Musk acquired the platform in 2022. It is clear that usage of the platform has expanded since it became X, but it is not clear how much illicit content has actually been removed.

Given the growing power of disinformation, the risk of deepfake saturation attacks, the increasing use of social media as a news source and the uneven responses to removing inappropriate content, election propaganda in this year’s U.S. presidential election may affect voter behavior more seriously than in the past. Of course, voters do not rely on social media alone. Hearing a candidate’s speech in person or shaking their hand may still change how people vote.

Nonetheless, electoral propaganda in the U.S. is troubling because of its impact on democracy not only in that country but around the world. In the past, U.S. and European governments, as well as nongovernmental organizations, played a major role in ensuring that free and fair elections were held in emerging and developing countries. Now, however, the U.S., the standard-bearer of free and fair elections, and European states are themselves suffering from disinformation and populism. They are losing their ability to persuade emerging and developing countries, where authoritarianism is creeping in, to curb electoral propaganda.

 

Tackling election propaganda

At a time when there are fears that democracy is in retreat around the world, the U.S. must show the world through its presidential election that it is a resilient democracy, not succumbing to unjustifiable electoral propaganda that is rife with disinformation and misinformation.

To deal with electoral propaganda, the following measures should be taken:

  • Political leaders should demonstrate a strong will to counter unjustifiable election propaganda.
  • Electoral management bodies (EMBs) should monitor election propaganda and work with law enforcement and cybersecurity agencies, fact-checking centers and social media platforms to quickly detect MDM.
  • Social media platforms should take down fraudulent content as quickly as possible, based on their own monitoring and on requests from public agencies, and freeze accounts when necessary.
  • Governments should practice strategic communication, flexibly disseminating correct information with a strong message to the public through mass media and social media.
  • Authorities should crack down on candidates, political fundraisers and political parties that conduct election campaigns in violation of laws and regulations.

That said, in the U.S. the Federal Election Commission oversees election financing, while the determination of whether free and fair elections are being conducted is left to state election officials. In other words, the functions of the EMB and the administration of elections are decentralized to state and local governments.

Such a structure makes it difficult for the U.S. federal government to mount a centralized response to election propaganda. It is also inevitable that responses will vary among states and counties, reflecting partisan differences at the state level.

 

Learning from counterterrorism operations

Despite these structural challenges, the principle remains that the federal and state governments should work with social media platforms to counter election propaganda. An example of good practice worth recalling here comes from the counterterrorism operations of the 2010s. To counter the dissemination of content produced by terrorist organizations, public-private partnerships proved effective in preventing radicalization through the use of so-called counternarratives. Terrorist organizations such as al-Qaida and the Islamic State (IS) group have excelled at propaganda via the web and social media; videos spread by IS on sites such as Twitter and YouTube, and via messaging services such as WhatsApp, captured the hearts and minds of young people in Europe and the Maghreb countries and drove them to Syria and Iraq.

Terrorist groups, of course, do not exist only in Muslim societies. In March 2019, a white supremacist in his 20s opened fire in mosques in Christchurch, New Zealand, killing 51 people. Not only had the perpetrator become steeped in extremist ideology in cyberspace, but the fact that he livestreamed the attack shocked social media operators. In light of this situation surrounding terrorist activity and radicalization, YouTube implemented the Redirect Method. Together with Jigsaw, a Google-affiliated think tank, YouTube redirected users who searched for videos using keywords related to terrorist acts or extremist ideology, or who accessed videos that may have been uploaded by terrorist organizations, to counternarratives: videos that expose the intentions and fallacies of such organizations’ narratives.
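The core of the Redirect Method is straightforward: when a search query matches terms associated with extremist content, counternarrative videos are surfaced rather than the query being amplified. The short Python sketch below is a deliberately minimal illustration of that idea; the watchlist terms, video identifiers and matching logic are hypothetical placeholders, not the actual Jigsaw or YouTube implementation, which reportedly relied on advertising-targeting tools and curated playlists of existing videos.

    # Minimal, hypothetical sketch of the redirect idea described above.
    # Watchlist terms, video IDs and matching logic are illustrative placeholders,
    # not the actual Jigsaw/YouTube implementation.

    # Hypothetical search terms associated with extremist content.
    WATCHLIST = {"join the caliphate", "martyrdom operation", "foreign fighter travel"}

    # Hypothetical counternarrative videos (placeholder IDs).
    COUNTERNARRATIVES = ["counter_vid_001", "counter_vid_002"]

    def recommend(query, organic_results):
        """Put counternarrative videos ahead of the organic results when the query
        matches the watchlist; otherwise return the organic results unchanged."""
        normalized = query.lower()
        if any(term in normalized for term in WATCHLIST):
            # Redirect: surface counternarratives first rather than amplifying the query.
            return COUNTERNARRATIVES + organic_results
        return organic_results

    if __name__ == "__main__":
        print(recommend("how to join the caliphate", ["vid_a", "vid_b"]))
        print(recommend("cooking videos", ["vid_a", "vid_b"]))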

Echo chambers on social media can amplify the risk of radicalization. To stop radicalization, redirecting people seeking ever more radical content to content that cools them down, even for just a moment, is effective. According to a study by the Rand Corp., a U.S. research institution, YouTube’s Redirect Method reached more than 320,000 people in just eight weeks, with each person viewing an average of more than 90 seconds of counternarrative video. Some young people may still have joined IS, but the effort was effective at the very least in that it directly appealed to young people who might have contemplated joining a terrorist organization.

In addition, consortiums of governments, social media platforms, the United Nations secretariat and think tanks have been formed to enhance the safety of cyberspace by eliminating content linked to terrorism and violent extremism. Major examples include the French-led Paris Call, the New Zealand-led Christchurch Call, the platforms-led Global Internet Forum to Counter Terrorism and Tech Against Terrorism (TAT). These public-private consortiums have continuously been working on automatic reporting of terrorism-related sites, and their efforts were mentioned in the outcome document released at the Group of Seven ministerial meeting on home affairs and security held in Mito, Ibaraki Prefecture, in December. In addition, TAT is working with Microsoft to accelerate the use of AI in order to quickly detect terrorism-related content that exploits generative AI.

 

Democracy confronting election propaganda

Social media platforms have presumably strengthened their capacity to deal with MDM through such deradicalization countermeasures, as well as through tackling the massive “infodemic” (an overabundance of information, including false or misleading information, during a disease outbreak) of the COVID-19 pandemic. The social responsibility of platform companies has never been greater amid the growing power of disinformation and the increasing weight of political campaigning conducted via social media. In the U.S. presidential election, government authorities and social media platforms should work together to combat election propaganda. Unjustifiable election propaganda is a challenge to democracies around the world.

In Europe, which has been fighting an information war with Russia over Ukraine, the European External Action Service has run a disinformation monitoring project called EUvsDisinfo since 2015. Taiwan has been on the front line of China’s influence operations, and its recent presidential election was the target of a multiyear cognitive warfare offensive. Just before the vote in January, a large number of fake videos were released on YouTube, making the “deepfake saturation attack” a real threat. We should take into account the risk that the methods deployed against the Taiwanese presidential election could be used in the U.S. presidential election as well. The Japanese government has designated “technology for false information analysis” as one of its critical technologies and will support its research and development with a budget of up to ¥6 billion ($40.5 million) over four years.

The only thing Japan, European nations and other U.S. allies and like-minded countries can do is to trust the resilience of the U.S. electoral system and the judgment of the American people as to who will take office as U.S. president in January 2025. Regardless of the outcome of the election, the U.S. should conduct free and fair elections and carry out an anti-election-propaganda operation that will serve as a model for democracies around the world. Not only the U.S. but also its allies and like-minded countries that believe in democracy must unite in the fight against electoral propaganda. In the year of elections, the battle to defend the authority of democracy and demonstrate its resilience has only just begun.

 

 
[Note] This article was posted to the Japan Times on February 13, 2024:
https://www.japantimes.co.jp/commentary/2024/02/13/world/us-election-propaganda/

 

Geoeconomic Briefing

Geoeconomic Briefing is a series featuring researchers at the Institute of Geoeconomics (IOG) focused on Japan’s challenges in the field of geoeconomics. It also provides analyses of the state of the world and of trade risks, as well as technological and industrial structures. (Editor-in-chief: Dr. Kazuto Suzuki, Director, Institute of Geoeconomics (IOG); Professor, The University of Tokyo)
