AI Law - International Review of Artificial Intelligence Law
G. Giappichelli Editore

03/06/2024 - The Growing Threat of AI Voice Cloning in the 2024 Elections (USA)

Topic: Notizie/News - Personal Data Protection Law

According to a report by The Washington Post on May 31, 2024, the increasing use of AI-generated audio and voice cloning technologies is raising significant concerns about their potential impact on the upcoming 2024 elections in the United States.

Voice cloning technology has advanced to the point where it can create highly realistic replicas of individuals' voices, including political figures, with minimal input data. This poses a substantial risk for misinformation and manipulation, as malicious actors can generate fake audio clips that sound authentic to the untrained ear.

In a series of tests conducted by organizations including the Center for Countering Digital Hate, researchers found that popular AI voice-cloning tools such as ElevenLabs, Speechify, and PlayHT could generate convincing fake audio clips of prominent politicians. These tools were used to produce false statements capable of deceiving voters, such as warnings about bomb threats at polling stations or fabricated confessions of misconduct by candidates. The tests revealed that while some tools, such as ElevenLabs, had stronger safeguards in place, others performed poorly, generating believable fake audio in numerous test runs.

The Federal Communications Commission (FCC) has responded to these threats by banning the use of AI-generated voices in robocalls, a regulation aimed at curbing the misuse of this technology to mislead and deceive voters. This new rule empowers the FCC to fine companies that use AI voices in their calls and to block service providers that carry such calls.

Despite these measures, experts warn that the technology is evolving rapidly, often outpacing the development of effective detection and regulation mechanisms. They emphasize the need for comprehensive legal frameworks and technological safeguards to prevent the misuse of AI voice cloning tools. Recommendations include implementing audio watermarks or digital signatures and ensuring these tools are only available to verified users.
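The digital-signature recommendation above can be illustrated with a minimal sketch. This is a hypothetical example, not a description of any existing tool's implementation: it assumes a voice-generation service holds a signing key, attaches a signature to every clip it produces, and lets platforms verify a clip's provenance before distributing it. For simplicity it uses a symmetric HMAC from the Python standard library; a real provenance scheme would use asymmetric signatures (so verifiers need no shared secret) and embed the mark in the audio itself.

```python
import hashlib
import hmac

# Hypothetical signing key held by the voice-generation service.
SECRET_KEY = b"service-signing-key"  # illustrative only

def sign_audio(audio_bytes: bytes) -> str:
    """Return a hex signature to distribute alongside the generated clip."""
    return hmac.new(SECRET_KEY, audio_bytes, hashlib.sha256).hexdigest()

def verify_audio(audio_bytes: bytes, signature: str) -> bool:
    """Check that the clip matches its signature (constant-time compare)."""
    expected = hmac.new(SECRET_KEY, audio_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

clip = b"\x00\x01fake-pcm-samples"   # stand-in for real audio data
sig = sign_audio(clip)
print(verify_audio(clip, sig))          # True: untampered clip verifies
print(verify_audio(clip + b"x", sig))   # False: any modification breaks it
```

The point of such a scheme is that an unsigned or tampered clip fails verification, giving platforms and fact-checkers an automated signal that audio did not come unaltered from a verified source.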

The proliferation of AI-generated audio content complicates an already polarized political landscape, making it harder for voters to distinguish between real and fake information. This can undermine public trust in the electoral process and democratic institutions. Therefore, it is crucial for policymakers to act swiftly to establish protections against the potential chaos that AI-generated misinformation can cause in elections.

For more detailed information, refer to the original article on The Washington Post.