AI-powered voice cloning tools can be used to create disinformation using the voices of prominent political figures, an online safety group has warned.
The Centre for Countering Digital Hate (CCDH) said its researchers used six different AI voice cloning tools to attempt to create false statements in the voices of well-known political leaders, with around 80% of the attempts producing what it called a convincing piece of content.
The CCDH said all but one of the tools it tested claimed to have built-in safeguards to prevent misuse for the production of disinformation, but that its report found these measures to be “ineffective” and easy to circumvent.
The online safety organisation said its testing included the voices of both Prime Minister Rishi Sunak and Labour leader Sir Keir Starmer, and that AI and social media companies needed to do more to protect the integrity of the upcoming General Election from such content.
The group said its researchers were also able to create audio-based disinformation of other global figures including former US President Donald Trump, US President Joe Biden and French President Emmanuel Macron.
The examples included various political figures warning people not to vote because of bomb threats, declaring election results had been manipulated and “confessing” to the misuse of campaign funds.
The organisation said AI companies need to introduce specific safeguards to prevent users from generating and sharing false or misleading content about geopolitical events and elections, backed up by more work from social media firms to detect and stop such content from spreading.
The CCDH said existing election laws should be updated to take into account AI-generated content.
Imran Ahmed, chief executive of the CCDH, said: “AI tools radically reduce the skill, money and time needed to produce disinformation in the voices of the world’s most recognisable and influential political leaders.
“This could prove devastating to our democracy and elections.
“By making these tools freely available with the flimsiest guardrails imaginable, irresponsible AI companies threaten to undermine the integrity of elections across the world at a stroke – all so they can steal a march in the race to profit from these new technologies.”
Mr Ahmed added that it was vital that social media platforms do more to stop the spread of AI-powered disinformation, particularly during such a busy year of elections around the world.
He said: “Disinformation this convincing unleashed on social media platforms – whose track record of protecting democracy is abysmal – is a recipe for disaster.
“This voice cloning technology can and inevitably will be weaponised by bad actors to mislead voters and subvert the democratic process.
“It is simply a matter of time before Russian, Chinese, Iranian and domestic anti-democratic forces sow chaos in our elections.
“Hyperbolic AI companies often claim to be creating and guarding the future, but they can’t see past their own greed.
“It is vital that in the crucial months ahead they address the threat of AI election disinformation and institute standardised guardrails before the worst happens.”