A video using artificial intelligence to clone the voice of Vice President Kamala Harris has sparked concerns about the potential for AI to mislead voters as the election nears.
The video, which gained significant attention after Elon Musk shared it on his social media platform X, mimics Harris’s voice, making statements she never said.
Initially, Musk shared the video without noting it was a parody, leading to confusion. He later clarified that the video was meant as satire, but the incident highlighted the potential dangers of AI-generated content in politics, demonstrating the power of AI to create realistic but false representations of public figures.
With the 2024 elections approaching and financial losses from deepfake fraud mounting, digital authentication specialists are enhancing their defenses against AI-generated media. The increasing accessibility of deepfake creation tools has raised concerns about spoofing attacks and synthetic identities, which can bypass traditional fraud detection.
Experts like FacePhi’s Miguel Santos Luparelli Mathieu and iProov’s Campbell Cowie emphasize the need for robust, multi-layered security measures. Technologies such as liveness detection and 3D imaging are crucial in mitigating these threats, as organizations continue to adapt to the evolving landscape of generative AI and deepfake risks.
In the voice-based deepfake arena, Reality Defender, a New York-based startup focused on detecting deepfake media, announced last month that it had partnered with ElevenLabs to enhance its platform’s capabilities in detecting synthetic voices.
ElevenLabs, known for its AI voice cloning technology, gained attention in early 2023 when a Vice journalist used its voice cloning tools to bypass a bank’s voice-based biometric authentication system. This notoriety helped ElevenLabs secure a $19 million Series A funding round and establish strategic partnerships, including one with Pika on text-to-animated speech technology.
And earlier this summer, ID R&D revealed that it received a US patent for a system designed to secure voice-based device interactions using voice biometrics and anti-spoofing techniques. Credited to co-founder and Chief Scientific Officer Konstantin Simonchik, this technology aims to protect against voice deepfakes, which are increasingly problematic due to advancements in generative AI, as the latest Kamala Harris clone demonstrates.
Source: AP
August 1, 2024 – by Tony Bitzionis