Selfie-based biometric authentication is booming right now, and among the vendors of such technological solutions, FaceTec is a company that really stands out. That’s thanks in large part to FaceTec’s sophisticated liveness detection technology, which is designed to thwart presentation attacks, in which phony biometric credentials – such as a recorded video of a genuine user or even an AI-generated deepfake – are used in an attempt to trick the authentication system. FaceTec was the first company to attain lab-tested certification to Levels 1 and 2 of the ISO 30107 presentation attack detection standard, and went on to further demonstrate its confidence in its technology with a huge, $100,000 spoof bounty program that includes additional provisions for advanced video injection and template tampering attacks.
The value of that kind of technology is all the more clear in light of a recent deepfake-driven attack in China that resulted in the theft of tens of millions of dollars – and that’s where the conversation starts in a new interview with FaceTec CEO Kevin Alan Tussy. Speaking to FindBiometrics Founder Peter O’Neill, Tussy breaks down the mechanics of the attack before going on to explain why 2D liveness detection isn’t sufficient to resist such attacks. The discussion also touches on lessons from CAPTCHA security, FaceTec’s expanding partner network, and the critical importance of a “high-stakes spoof bounty program” like FaceTec’s.
Read on to get some in-depth insights from a trailblazer in face biometrics…
Peter O’Neill, Founder, FindBiometrics: I want to talk with you about the recent liveness hack that made international news. Attackers in China stole 500 million yuan, or about 76 million dollars, using a deepfake injection technique. How was this attack different from the traditional presentation attacks that the biometric industry tests against?
Kevin Alan Tussy, CEO, FaceTec: This attack utilized two vectors that are often woven together. The hackers used what we refer to as video injection: rather than the camera capturing live video footage collected in real-time, the attackers went right around the camera hardware and injected a rendered-in-real-time deepfake puppet.
This attack seems complex, but it’s actually not exceedingly difficult to perpetrate because deepfake puppet software is easily accessible, and often free. And the video injection aspect is even easier, as long as you can bypass the device’s camera, whether with a smart device emulator or with a virtual webcam program like ManyCam. If you can inject previously recorded or real-time-rendered deepfake frames, you can fool the 2D liveness detection and then the face matching, because the source biometric data is only 2D images. And despite our innate human ability to correlate them to a real person, 2D photos are not true, accurate, or consistent representations of 3D human faces. They are convenient derivatives that are unfortunately highly affected by pose, angle, capture distance, and lighting, and they aren’t suitable for mission-critical identity verification applications.
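To make the injection vector concrete, here is a minimal sketch – not FaceTec’s method, and with a purely illustrative, hypothetical blocklist – of the kind of browser-side heuristic that tries to flag known virtual webcams before a liveness session starts:

```typescript
// Heuristic check for known virtual-camera drivers before starting capture.
// Labels can be renamed and emulators can present fake hardware, so this
// can only raise suspicion; it cannot prove the feed is from a real camera.
const VIRTUAL_CAMERA_HINTS = ["manycam", "obs", "virtual", "emulator"]; // illustrative list

async function hasSuspiciousCamera(): Promise<boolean> {
  // Device labels are only populated after the user grants camera permission.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  stream.getTracks().forEach((t) => t.stop()); // release the camera again
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices
    .filter((d) => d.kind === "videoinput")
    .some((d) => VIRTUAL_CAMERA_HINTS.some((h) => d.label.toLowerCase().includes(h)));
}
```

A virtual webcam registered under an innocuous name sails straight through a check like this, which is exactly why the pixels themselves, not the device list, have to carry evidence of a live 3D capture.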
Deepfakes started as 2D mugshot-style photos, and they were then animated into puppets to fool the liveness checks. Since only 2D-to-2D face matching algorithms were used, those animated photos contained sufficient data to make the comparison succeed. The 2D photo “cat” is already out of the bag, and is everywhere online and on the dark web. So at FaceTec we use 3D FaceMaps: they are encrypted, they are proprietary, and they aren’t all over the internet like 2D photos. 3D FaceMaps provide our customers and their users with important security layers that 2D liveness and matching will never have.
Peter O’Neill: Oh, so these are much closer to cybersecurity attacks than traditional presentation attacks. That makes me wonder if we need testing that evaluates these biometric security systems against more traditional cyber threats. What is missing from the liveness detection discourse in our industry that’s leaving these gaps for hackers to exploit?
Kevin Alan Tussy: Well, I think the biggest gap in understanding for both customers and vendors is the lack of awareness of the very scalable attack methods that can beat every 2D Liveness tech we’ve ever seen. We all know about presentation attacks, where photos or masks are held up to the camera, but the other vectors I’ve been describing are not well understood. So much so that we see inexperienced companies creating “server-side-only liveness checks” because it’s convenient for customer integration, and customers who lack understanding accept it because it checks the “Liveness” box and they don’t know any better. But purely server-side PAD checks have absolutely no way of stopping these video injection attacks. In my opinion, if server-side Liveness vendors aren’t disclaiming that they can’t defend against deepfakes and video injection, it’s negligence at this point. Their customers’ applications will be wide open to rendered-in-real-time deepfake puppets, or even previously recorded video being replayed.
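To illustrate the gap: a server-side-only liveness endpoint only ever receives pixels, and pixels carry no proof of provenance. A generic sketch – a hypothetical endpoint with a placeholder model, not any vendor’s actual API:

```typescript
import express from "express";

const app = express();
app.use(express.raw({ type: "image/jpeg", limit: "5mb" }));

// Placeholder for a 2D presentation-attack-detection model.
function runPadModel(frame: Buffer): number {
  return 0.9; // a real model would score the pixels for spoof artifacts
}

// A server-side-only liveness check sees nothing but an image buffer.
// A frame from a real camera and an injected deepfake frame arrive here
// byte-for-byte indistinguishably, so this endpoint cannot verify the
// capture device, the capture time, or that a camera was involved at all.
app.post("/liveness", (req, res) => {
  const frame = req.body as Buffer;
  res.json({ live: runPadModel(frame) > 0.5 });
});

app.listen(3000);
```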
One of the biggest reasons few know about these less obvious attack vectors is that they weren’t even obvious at the time to the people who wrote ISO 30107; or, if they did understand them at the time, they didn’t stress that PAD without the assurance of video feed integrity is essentially worthless. Testing labs had another opportunity to add these threat vectors, but they didn’t. So I blame the ISO 30107 authors, and especially the testing labs, for allowing the “PAD is enough” narrative to persist. For more information on this, we publish Liveness.com, where you can learn more about 2D versus 3D Liveness and Level 4 and 5 attack vectors.
The organizations that chose Liveness Detection because it was a “single-frame selfie” or it was “passive” need to understand that these approaches are far weaker than what has already been broken by the Chinese hackers. Too many in the Liveness business are under a false sense of security because they had some PAD testing done, but it was probably on a high-resolution phone handpicked by the vendor, while real hackers will use lower-end devices and cameras with lower resolutions. And in some tests, vendors have only blocked maybe 150 to 300 spoof attacks – and that’s nothing. That’s not a real test; it’s a walk in the park compared to what liveness detection systems endure in the real world.
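To see why a few hundred blocked attempts prove so little, consider the statistician’s “rule of three” (illustrative arithmetic, not figures from the interview): if all n spoof attempts in a test fail, the 95 percent upper confidence bound on the true spoof-success rate is still roughly 3/n.

```latex
p_{\text{upper}} \approx \frac{3}{n}
\qquad\Longrightarrow\qquad
n = 300:\quad p_{\text{upper}} \approx \frac{3}{300} = 1\%
```

So even a flawless 300-attack lab test can’t rule out a one-in-a-hundred spoof-success rate, while 50,000-plus bounty attacks with no successes would bound it closer to 0.006 percent.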
So, the 2D active and server-side Liveness vendors are going to learn the hard way that when real value is stored behind these weak 2D liveness methods, as has now started happening abroad, their systems are going to get ripped apart. And the more value you sequester behind liveness checks, the harder bad actors will try to break in. Someone is going to expose the weaknesses, and before long every bad actor knows about them, and fraud goes up very fast, like we just saw with the Exchange Server hacks.
Peter O’Neill: And in its reporting, the South China Morning Post mentioned affordable spoofing services that could be purchased to defeat face liveness detection systems. Meanwhile, the cost of the specialized software and hardware used for these attacks is relatively low, as you mentioned in answering the first question. With deepfake creation software often being free to use, how can our industry respond to dark web services dedicated to bypassing biometric user authentication?
Kevin Alan Tussy: Yes, this is something we’ve seen in the CAPTCHA world, with services like deathbycaptcha.com. Whenever platforms put hoops in place attempting to block bots or stop multiple accounts from being created, services that help bad actors work around them pop up. So, there are all these CAPTCHA services that either use AI or send the image to a person sitting at a computer in a geography where their time is very inexpensive, with that person getting two to ten cents for every CAPTCHA they solve. And when biometric liveness detection is put in place, the CAPTCHA service must evolve into beating or bypassing liveness detection.
To protect our customers and their users, we must have liveness stronger than these spoof services, and stronger than what these unregulated dark web service providers can conjure up in the future. And that means defending threat Levels 1 through 5.
Peter O’Neill: Well, this is critical right now, Kevin, and this has to get resolved right away. Right now, there seems to be a strict demarcation between identity verification for onboarding, and then subsequent authentication. Does this type of siloing of biometric use cases feed into the growing risk for biometric hacks?
Kevin Alan Tussy: Yes. So, there’s absolutely a different use case for using liveness for onboarding and using liveness for subsequent authentication, logging in, or reverification. When someone takes a photo of you, that photo is a derivative of you. And in our case at FaceTec, we use another type of derivative: we use the same 2D cameras to capture it, but we actually create a 3D FaceMap, a much higher quality, much-closer-to-the-real-you derivative than a 2D photo.
Back in 2001, when Dorothy Denning coined the term “liveness”, she made some brilliant observations about biometric systems and how they should not depend on secrecy for security. She was absolutely right. But secrecy is still a layer of security, and companies like Facebook and Google don’t store our biometric modality, the 3D FaceMap. They don’t have a copy of your 3D FaceMap that hackers can search for, and they don’t have the data needed to derive one. That’s a huge advantage over the hackers. That true 3D data provides FaceTec with increased security over other modalities. If the matching data in the recent hacks had been 3D FaceMaps, the attackers wouldn’t have been able to get that data from anywhere in a scalable way, because it doesn’t exist publicly on the internet. They would likely just move on to another, easier target.
However, even during onboarding, where Liveness-proven users will be matched to a 2D photo from an ID document, it’s still significantly better to have 3D data on one side of the matching equation. And this is why we have such high matching accuracy across the board.
For matching a 3D FaceMap to a 2D photo from an ID document, we achieve up to a one-in-500,000 false accept rate at a 0.99 percent false reject rate. And when we’re matching to a portrait photo, like what we would see in a passport chip or a government database, FaceTec achieves a one-in-950,000 false accept rate at a 0.99 percent false reject rate. These are orders of magnitude better than the best 2D matching algorithms in real-world usage, and they add yet another layer of security for both onboarding and ongoing authentication.
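Read as per-comparison rates, those figures work out as follows (illustrative arithmetic, not from the interview):

```latex
\mathrm{FAR}_{\text{ID photo}} = \tfrac{1}{500{,}000} = 2\times10^{-6}, \qquad
\mathrm{FAR}_{\text{portrait}} = \tfrac{1}{950{,}000} \approx 1.05\times10^{-6}, \qquad
\mathrm{FRR} = 0.99\% \approx \tfrac{1}{101}
```

In other words, roughly two impostors accepted per million impostor comparisons against ID-document photos, while about one genuine user in a hundred is asked to retry.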
Peter O’Neill: Now, while all this nefarious activity might seem bleak – especially considering how much we’ve relied on remote onboarding and authentication during the pandemic, as you were referring to earlier – we think of it as: wherever there’s a challenge, there’s an opportunity. What’s the next step for the biometric industry in response to this new threat level?
Kevin Alan Tussy: Well, I think for us, the opportunity has definitely been to partner with the many companies in the Identity Proofing space. We now have over 60 partners around the world using our technology for onboarding, as well as ongoing authentication. Once organizations realize that they need liveness detection to determine the user is in front of the camera in real-time, and that the person then needs to be matched to the corresponding data from an official document, a passport chip, or a government database, they also realize that if they can’t trust the data collected, the entire process is untrustworthy. So they need strong 3D Liveness and 3D matching.
What everyone in biometrics who is trying to use face matching for onboarding or ongoing authentication needs to understand is this: if you can’t defend against Levels 1 through 5 – all five of these attack levels – you’re going to have large holes in your security, and you won’t be able to safely store valuable data behind these access gates.
To determine whether our technology was strong enough for these very important, critical use cases – ones that store and protect a lot of value behind a liveness check – we felt it was best to build our own spoof bounty program. Our $100,000 spoof bounty program, in operation for well over a year, has seen over 50,000 attacks. I would have to think that others in the industry, those who want to prove they are robust to Level 1 through 5 attacks, need to stand up and maintain an ongoing spoof bounty program as well.
Peter O’Neill: Well, like it or not, a digital arms race is a characteristic of what I would say is a healthy security industry. Is this a sign of biometric maturity? Are these issues the industry will grow out of, or is this really a call to action now?
Kevin Alan Tussy: I think it’s all of the above. Now that liveness detection is being used to prove the user is live and in front of the camera, and the technology is being trusted to the point where it’s managing access to real value, I think that shows a sign of maturity. Still, the lessons are going to be learned the hard way by many organizations because, while lab tests are a great first step, without a spoof bounty program and major real-world deployments that hackers have had time to try to work their way around, there are likely blind spots that can be leveraged, like we saw in these recent hacks. As these and other hackers get even more skilled, I think we will see an increase in the fallout from using these weaker liveness systems.
Peter O’Neill: Who is threatened most by these new biometric hacking techniques, and what should an organization look for in a biometric security system if they want to stay protected?
Kevin Alan Tussy: The users and the companies employing inadequate liveness detection are most under threat because they have a false sense of security. Lab test results are thought to be far more meaningful in the real world than they actually are, in part because lab tests don’t cover all the different types of devices. As far as I’ve seen, there has been no lab PAD testing with browsers, and browsers are the most vulnerable to Level 4 and 5 bypasses.
For organizations looking to employ liveness detection that can effectively protect a lot of stored value – that can provide a high level of confidence they are interacting with the actual user during account recovery, new account setup, and changes to important data, whenever we want some step-up authentication – organizations need to be certain all the threat vectors we’re aware of have been mitigated. In my opinion, that can only be done with Level 1 through 5 liveness threat and attack vector mitigation.
And the only way you can really test when “you don’t know what you don’t know” is with a very high-stakes spoof bounty program. When you put your Liveness technology into the wild and let attackers from all over the planet throw their best efforts at beating it, you learn a lot. But much of the current technology is not robust enough to survive that, so vendors won’t stand up a bounty program. In my opinion, if a vendor won’t, they should not be selling liveness technology.
Unfortunately, many Liveness vendors simply picked the wrong horse, and their 2D technology will not provide any value once the majority of hackers understand deepfakes and video injection that can go right around, or through, their security. I also recommend anyone interested in evaluating Liveness review the Liveness Vendor Report from Acuity Market Intelligence. I won’t spoil it, but let’s just say that we felt they very accurately ranked the vendors in the space.
We appreciate FindBiometrics, and you Peter, for allowing us to get this message out, and helping companies get headed in the right direction with robust, future-proof 3D Liveness Detection.
Peter O’Neill: Well Kevin, for all the reasons that we spoke about during this interview, it’s critical right now. And I’d like to say that I really appreciate the way that you explain these very complex issues in such a clear and concise manner. And congratulations on the $100,000 spoof bounty program that’s been in place now for a while. I think it’s needed in our industry, so congratulations on that. And thank you again for carving out some time today to speak with us about these critical issues.
Kevin Alan Tussy: Any time, Peter. Thanks for having us as always, and we look forward to talking again soon.