Tests find AI tools readily create election lies from the voices of well-known political leaders

    NEW YORK — As high-stakes elections approach in the U.S. and European Union, publicly available artificial intelligence tools can easily be deployed to produce persuasive election lies in the voices of leading political figures, a digital civil rights group said Friday.

    Researchers at the Washington, D.C.-based Center for Countering Digital Hate tested six of the most popular AI voice-cloning tools to see if they would generate audio clips of five false election statements in the voices of eight prominent US and European politicians.

    In a total of 240 tests, the tools generated convincing voice clones 193 times, or 80% of the time, the group found. In one clip, a fake US president, Joe Biden, says election officials are counting each of his votes twice. In another, a fake French President Emmanuel Macron warns citizens not to vote because of bomb threats at the polls.
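The study's numbers follow directly from its design: six tools, each asked to clone eight voices reading five false statements. A minimal sketch of that arithmetic (the counts are from the report; the variable names are mine):

```python
# The center's test matrix: 6 voice-cloning tools, each prompted to
# clone 8 politicians' voices reading 5 false election statements.
tools, voices, statements = 6, 8, 5

total_tests = tools * voices * statements   # 240 runs in all
runs_per_tool = voices * statements         # 40 runs per tool

convincing_clones = 193                     # successes reported by the group
success_rate = convincing_clones / total_tests

print(total_tests, runs_per_tool, round(success_rate * 100))  # 240 40 80
```

The 40-runs-per-tool figure also explains the later finding that Speechify and PlayHT failed "all 40 of their test runs."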

    The findings reveal a notable gap in security against the use of AI-generated audio to mislead voters, a threat that has experts increasingly concerned as the technology has become both advanced and accessible. While some of the tools include rules or technical barriers to prevent election disinformation from being generated, the researchers found that many of these barriers were easily circumvented with quick fixes.

    Only one of the companies whose tools were used by the researchers responded after multiple requests for comment. ElevenLabs said it was continuously looking for ways to improve its security measures.

    With few laws to prevent misuse of these tools, the companies’ lack of self-regulation leaves voters vulnerable to AI-generated deception in a year of key democratic elections around the world. In less than a week, EU voters will go to the polls for parliamentary elections, and US presidential primaries are underway ahead of this fall’s election.

    “It is so easy to use these platforms to create lies and force politicians onto the back foot, denying lies over and over again,” said the center’s CEO Imran Ahmed. “Unfortunately, our democracies are being sold out due to sheer greed by AI companies desperate to be first to market… despite knowing their platforms are simply not safe.”

    The center – a non-profit organization with offices in the US, Britain and Belgium – conducted the study in May. Researchers used the online analytics tool Semrush to identify the six publicly available voice cloning AI tools with the most monthly organic web traffic: ElevenLabs, Speechify, PlayHT, Descript, Invideo AI, and Veed.

    They then submitted real audio clips of the politicians speaking and prompted the tools to mimic the politicians’ voices making five unsubstantiated statements.

    One statement warned voters to stay home amid bomb threats at the polls. The other four were various admissions – to election manipulation, lying, using campaign funds for personal expenses and taking strong pills that cause memory loss.

    In addition to Biden and Macron, the tools made lifelike copies of the voices of US Vice President Kamala Harris, former US President Donald Trump, British Prime Minister Rishi Sunak, British Labour leader Keir Starmer, European Commission President Ursula von der Leyen and EU Internal Market Commissioner Thierry Breton.

    “None of the AI voice-cloning tools had sufficient safeguards to prevent the cloning of politicians’ voices or the production of election disinformation,” the report said.

    Some tools – Descript, Invideo AI and Veed – require users to upload a unique audio sample before cloning a voice, a safeguard to prevent people from cloning a voice that isn’t theirs. Still, the researchers found that this barrier could easily be circumvented by generating a unique sample using another AI voice cloning tool.

    One tool, Invideo AI, not only created the false statements the center requested, but extrapolated them to create even more disinformation.

    In producing the audio clip in which Biden’s voice clone warns people of a bomb threat at the polls, it added some phrases of its own.

    “This is not a call to give up on democracy, but a plea to ensure security first,” the fake audio clip said in Biden’s voice. “The elections, the celebration of our democratic rights, are only postponed and not denied.”

    Overall, Speechify and PlayHT performed the worst of the tools in terms of security, generating credible fake audio in all 40 of their test runs, the researchers found.

    ElevenLabs performed the best and was the only tool that blocked the cloning of British and American politicians’ voices. However, the tool still allowed for the creation of fake audio in the voices of prominent EU politicians, the report said.

    Aleksandra Pedraszewska, head of AI safety at ElevenLabs, said in an emailed statement that the company welcomes the report and the awareness it creates about generative AI manipulation.

    She said ElevenLabs recognizes that more work needs to be done and is “continually improving the capabilities of our protections,” including the company’s blocking feature.

    “We hope that other audio AI platforms will follow suit and roll out similar measures without delay,” she said.

    The other companies mentioned in the report did not respond to emailed requests for comment.

    The findings come after AI-generated audio clips have already been used in attempts to influence voters in elections around the world.

    In the fall of 2023, just days before parliamentary elections in Slovakia, audio clips resembling the voice of a liberal party leader were widely shared on social media. The deepfakes purportedly caught him talking about raising beer prices and rigging the vote.

    Earlier this year, AI-generated robocalls mimicked Biden’s voice and told New Hampshire primary voters to stay home and “save” their votes for November. A New Orleans magician who created the audio for a Democratic political consultant demonstrated to the AP how he made it, using ElevenLabs software.

    Experts say AI-generated audio has been an early preference for bad actors, in part because the technology has improved so quickly. It only takes a few seconds of real audio to create a lifelike fake.

    But other forms of AI-generated media are also raising concerns among experts, lawmakers, and tech industry leaders. OpenAI, the company behind ChatGPT and other popular generative AI tools, revealed Thursday that it had spotted and disrupted five online campaigns that used its technology to influence public opinion on political issues.

    Ahmed, the CEO of the Center for Countering Digital Hate, said he hopes AI voice-cloning platforms will tighten security measures and be more proactive about transparency, including publishing a library of audio clips they have created so they can be checked when suspicious audio is detected spreading online.

    He also said lawmakers must take action. The US Congress has not yet passed legislation regulating AI in elections. Although the EU has passed a comprehensive artificial intelligence law that will come into effect over the next two years, it does not specifically address voice cloning tools.

    “Lawmakers must ensure that there are minimum standards,” Ahmed said. “The threat that disinformation poses to our elections is not just the potential to cause a minor political incident, but to make people distrust what they see and hear.”

    ___

    The Associated Press receives support from several private foundations to improve its explanatory reporting on elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.
