With an election looming and millions of dollars already being lost to fraud, digital authentication specialists are fighting back against deepfake-powered cybercriminals.
With 2024 being an election year in the U.S. and elsewhere, there has been growing concern about the threat of AI-generated media compromising the integrity of the information landscape. But this rapidly advancing technology arguably poses a more immediate threat to organizations and their employees. To give one illustrative example, a multinational company’s Hong Kong office lost about $25 million after a deepfake of its Chief Financial Officer instructed a subordinate to arrange multiple wire transfers.
Worse yet, those orders were given on a video call that featured multiple other corporate officers—who were themselves deepfakes. The employee who fell victim to the scam was the only “real” person on the call.
For fraudsters, the returns are poised to dwarf that $25 million haul. According to a recent report from Deloitte, fraud losses attributable to generative AI in the financial services sector alone could skyrocket from $12.3 billion in 2023 to $40 billion by 2027.
The Deepfake Toolkit Opens
It’s the sort of threat that some tech companies in the biometrics and identity security space have been preparing for. Biometric authentication, long touted as a sophisticated leap forward from password-based authentication, has always faced the threat of “spoofing” or presentation attacks, in which legitimate biometric credentials (such as a person’s face) might be mimicked in order to fool the authentication system. And some high-profile systems have indeed been fooled. But the companies at the vanguard of biometric tech have been pouring considerable time and energy into addressing the issue.
It is not a simple problem to solve. Danielle VanZandt, the head of market research for Frost & Sullivan’s Commercial & Public Security division, says that one of the most difficult challenges is “how easy the tools needed to create many of these deepfakes are to acquire,” which puts this rapidly advancing tech directly in fraudsters’ hands.
“Downloading emulators, face swapping apps, or applications to mirror a mobile device are all things that can be acquired through normal open web downloads,” she said. “It’s no longer hidden behind the deep or dark web traffic.”
These tools not only enable someone to impersonate another individual in video media, but to forge entire fraudulent identities—an approach that Campbell Cowie calls “silent, synthetic identity.” The Head of Policy, Standards & Regulatory Affairs for iProov, Cowie explains that “synthetic identities are a blend of real and fabricated data and are far more difficult to detect using traditional fraud detection methods.”
For example, a synthetic face can be attached to genuine identity information—such as Personally Identifiable Information stolen through a data breach—and passed off as a real person in online channels.
Social Engineering on Steroids
This points to an even more sinister threat, and a deeper problem. These AI tools can not only create a synthetic identity that might, for example, pose as a real person opening a bank account or claiming government benefits; deepfake technology can also be used to hack the very heart of an identity verification system.
As FaceTec’s Jay Meier explains, deepfakes and deepfake injection attacks – in which synthetic media is plugged directly into a system’s data stream, tricking it into “thinking” a real user is present – can be used during the user enrollment process itself, when someone is first vetted against their biographic data claims.
“Once approved for access into the system as a legitimate enrollee, they are considered safe to exercise approved privileges, including domain/network access, services access, asset access, etc.,” explains Meier, who serves as FaceTec’s Senior Vice President of North American Operations. “Once approved, the relying party will regard the fraudster as legitimate every time. This is the definition of an advanced persistent threat.”
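How might a system defend against that kind of injection? One common countermeasure is challenge-response: the server issues a fresh, one-time value that the capture session must incorporate, so media rendered in advance cannot answer it. The Python sketch below is a simplified illustration of that general idea, with hypothetical names, parameters, and flow; it is not FaceTec’s or any other vendor’s actual implementation.

```python
# Minimal sketch of a challenge-response check against injected media.
# All names and parameters are illustrative assumptions, not a real API.
import secrets
import time

CHALLENGE_TTL = 30  # seconds a challenge stays valid (assumed value)
_sessions: dict[str, tuple[str, float]] = {}  # session_id -> (nonce, issued_at)

def issue_challenge(session_id: str) -> str:
    """Server generates a one-time nonce that the capture client must
    incorporate into its response (e.g., echoed in signed capture
    metadata or encoded as a prompted on-screen action)."""
    nonce = secrets.token_hex(16)
    _sessions[session_id] = (nonce, time.time())
    return nonce

def verify_capture(session_id: str, echoed_nonce: str) -> bool:
    """Pre-rendered deepfake footage injected into the data stream was
    created before the nonce existed, so it cannot echo it; replayed
    and stale responses fail here even if the imagery looks genuine."""
    record = _sessions.pop(session_id, None)  # one-time use
    if record is None:
        return False  # unknown or already-consumed session
    nonce, issued_at = record
    if time.time() - issued_at > CHALLENGE_TTL:
        return False  # challenge expired; possible delayed replay
    return secrets.compare_digest(nonce, echoed_nonce)
```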
Even when human interviewers are involved in such digital identity systems, deepfake technology can easily fool them, Meier says, making it “effectively a social engineering attack, but automated and on steroids.”
VanZandt, the Frost & Sullivan researcher, concurs that these “new attack types, like mobile injection attacks, are particularly prevalent,” and represent a serious and growing threat. “We’re already seeing triple-digit growth in these attack types since 2022, so it makes me cautionary as to what those numbers could look like as these AI models continue to improve,” she said.
Multi-layered Answers
As for how to address this threat, an ongoing cybersecurity “arms race” has produced some compelling answers. iProov’s Campbell Cowie emphasizes the need for “robust security controls and advanced authentication technologies that take a multi-layered approach to defending against the various strains of deepfake attacks.”
It’s a sentiment echoed by Miguel Santos Luparelli Mathieu, the Product Innovation Director at FacePhi, another biometric identity assurance firm. He says FacePhi recommends “a several-layer approach to protect against this threat and to let our clients achieve cyber resiliency,” which includes forensic analysis and measures to determine content provenance, in addition to deep learning systems that can detect the signs of synthetic media.
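As a rough illustration of what a “several-layer approach” can mean in practice, the sketch below chains independent checks – content provenance, forensic analysis, and a deep-learning classifier – and requires all of them to pass. The layer functions are placeholders standing in for real detectors; none of this reflects FacePhi’s actual stack.

```python
# Illustrative multi-layered verification pipeline. Each check is a
# placeholder for a real detector; structure, not detection logic,
# is the point here.
from dataclasses import dataclass
from typing import Callable

@dataclass
class LayerResult:
    layer: str
    passed: bool

def check_provenance(media: bytes) -> LayerResult:
    # Placeholder: a real layer might validate content credentials
    # (e.g., C2PA-style provenance metadata).
    return LayerResult("provenance", passed=len(media) > 0)

def check_forensics(media: bytes) -> LayerResult:
    # Placeholder: a real layer might look for compression, lighting,
    # or frequency artifacts characteristic of synthetic media.
    return LayerResult("forensics", passed=True)

def check_deepfake_model(media: bytes) -> LayerResult:
    # Placeholder: a real layer would run a trained deepfake classifier.
    return LayerResult("deepfake_model", passed=True)

LAYERS: list[Callable[[bytes], LayerResult]] = [
    check_provenance,
    check_forensics,
    check_deepfake_model,
]

def verify(media: bytes) -> bool:
    """Every layer must pass: because the checks are independent, a
    forgery that fools one detector still has to fool all the others."""
    return all(layer(media).passed for layer in LAYERS)
```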
On a more technical level, FaceTec’s Meier emphasizes the need for liveness detection technology that can ascertain whether a subject truly is present during the enrollment process—ideally one based on three-dimensional imaging.
“In an automated system, liveness-proven and highly accurate biometric matching are effectively required, removing the human mistake engine from the system altogether,” Meier explained. “To maximize the effectiveness of the liveness-proven biometric match, 3D systems must be utilized.”
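Meier’s point reduces to a simple decision rule: neither a high-confidence face match nor a liveness proof is sufficient on its own; access requires both. The toy gate below makes that explicit, with thresholds chosen purely for illustration.

```python
# Toy decision gate: access requires BOTH a liveness proof and a
# high-confidence biometric match. Thresholds and score sources are
# assumptions for illustration only.
LIVENESS_THRESHOLD = 0.98  # confidence a live, 3D subject is present
MATCH_THRESHOLD = 0.99     # confidence the face matches the enrollee

def authorize(liveness_score: float, match_score: float) -> bool:
    """Both conditions must hold; neither alone is sufficient."""
    return (liveness_score >= LIVENESS_THRESHOLD
            and match_score >= MATCH_THRESHOLD)

# A deepfake that matches the victim's face perfectly but fails the
# liveness check is denied:
assert authorize(liveness_score=0.12, match_score=0.999) is False
# A live impostor who doesn't match the enrolled face is also denied:
assert authorize(liveness_score=0.99, match_score=0.40) is False
```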
To much of the public, these terms and concepts will be unfamiliar, but a growing number of cybersecurity professionals are starting to encounter and understand them. They will need to: as analysts have made clear, the threat of generative AI and deepfakes is only accelerating.
–
July 8, 2024 – by Tony Bitzionis and Alex Perala