Amid booming interest in mobile biometric identity verification solutions, FaceTec has made a name for itself as an industry leader with its technological innovation and its transparency. With its 3D face authentication technology, FaceTec became a pioneer in the realm of liveness detection as the first vendor to achieve lab-tested compliance with Level 2 of the ISO/IEC 30107-3 presentation attack detection standard. Since then, the company has gone further, backing up its own claims about the efficacy of its technology with an ongoing spoof bounty program that can pay out up to $100,000.
The company’s transparency can also be seen in its vocal industry leadership. FaceTec has sought to clearly explain its philosophical approach to identity verification and authentication, and laid out grounded arguments about why this approach is superior to certain others promoted in the identity space.
This proselytizing effort continues in a new interview with FaceTec’s Senior Vice President of North American Operations, Jay Meier. The conversation touches on important topics including the aforementioned booming market growth and the impact of the COVID-19 pandemic, but its central focus is on what FaceTec’s leadership have dubbed ‘the PKI Fallacy’ – an approach to authentication that Meier finds deeply problematic. Read on to find out exactly why, and for other insights from one of the industry’s leading voices.
Read the full interview with Jay Meier, Senior Vice President of North American Operations at FaceTec:
FindBiometrics: Activity in the market is the highest it’s ever been, and every indication is it’s likely to continue to grow at a rapid pace. How has this been affecting FaceTec?
FaceTec: It’s no surprise to me, as I’ve been watching companies in this space for two decades waiting for this wave to crash. The time has finally come. Biometrics can deliver remote digital identity with high confidence. I think the big surprise is that folks just don’t understand how big the market is for this technology. It’s going to be like this for a long time.
FaceTec has grown tremendously since our first product was launched in 2018. However, the pandemic caused a broad global revelation that most credentialing and logical access control systems simply don’t work in 100 percent remote scenarios. It’s interesting because I have known what was wrong for a long time, but stakeholders ignored those of us who were saying you are not your device, and your device is not you. Suddenly, after the pandemic exposed how much these identity systems depend on in-person interactions to function, stakeholders have learned that identity verification and authentication are at the heart of cybersecurity, and of data breach, fraud, and identity theft prevention.
FaceTec’s software solves critical problems that were exposed during the pandemic and won’t be going away because remote access is now a necessity. We have hundreds of customers, and their success proving liveness remotely for over 250 million people demonstrates that our architecture is the correct one. And as a result of that architecture, our growth rate has been shocking, frankly. We recently recorded quarterly annualized revenue and user growth rates of close to 300 percent.
FindBiometrics: From what you can see, do you think there is any correlation between the increased levels of biometrics education and awareness, and increased biometrics-related activity levels, now that we’re all spending more time in digital environments?
FaceTec: I suspect that the correlation coefficient is approaching 1.0. Truly strong, liveness-proven face biometrics are clearly proving to be the best solution to hacking, data theft, online fraud, and identity theft. However, governments, enterprise, and consumers have come to realize this in a short time period, only about 18 months. Not only must they come to understand what’s wrong with what they thought was an adequate identity solution, but they also must learn exactly why strong, liveness-proven face biometrics is the only future-proof solution, and how to implement it.
You know, for decades the enterprise stakeholders have insisted that user friction cannot impede revenue. So they purchased solutions that were easy and cheap to deploy, but that focused more on an invisible user experience than the necessary security. And, of course, suppliers happily sold them what they wanted, and still do. But that’s not going to cut it anymore; companies don’t want to have to explain to their customers and regulators how they got hacked so easily. The truth is that most of them couldn’t understand why it wasn’t working. But, boy oh boy, they now know. Banks can no longer buy an identity verification system that just checks the regulatory compliance boxes, like the ones so many of today’s IDV vendors peddle. Seriously, most of these systems do little more than satisfy a regulatory requirement and create a honeypot. When the laws were written, the regulators didn’t know what was missing from their compliance requirements, and that’s why money laundering hasn’t been curbed. But on the other hand, it’s exciting because now we have the tech to solve the problems and a great opportunity to educate, which we are doing at the highest levels with groups like ENISA, NIST, ISO, FinCEN, and many others.
FindBiometrics: Not long ago, you presented a critical analysis of what has become an accepted identity verification approach, and it has gotten some notice and has started some interesting conversations within the industry. Can you please recap the PKI Fallacy and why you believe it requires the industry’s immediate attention?
FaceTec: The PKI Fallacy is a false logic progression suggesting that if we can deterministically authenticate a device, somehow the device user/holder is also known. We use cryptography to secure data at rest and in transit. And cryptography works great for its intended use. However, the devices that cryptography secures are really just portals through which living human beings access valuable privileges, assets, services, and data. So it’s not enough to ensure the integrity of the connection between devices. We need to ensure that the correct, living human being is holding the device before granting access. The problem is that unless you can actually prove, with high confidence, that it’s the correct living person holding the device, you are merely presuming it’s the right person. Well, cryptography wasn’t designed to do this. It cannot do this. Yet, we’ve largely been convinced it does.
FindBiometrics: Does the general adherence to the legacy approach to digital identification that underlies the PKI Fallacy tell us anything about how it might be contributing to growing data breaches?
FaceTec: This industry is too wrapped up in the mathematical proofs to see the vulnerability between the device and the person holding the device. There are “verifiers” which are used to vet the privilege holder. If someone applies for a privilege, like unemployment insurance as John Smith, we need to verify that this particular John Smith actually exists and that this person is, in fact, the corresponding John Smith. Then we bind a strong, liveness-proven 3D FaceMap to the identity profile for this John Smith, and can then use the bound biometric data to verify that this is the correct John Smith for every subsequent access attempt. Of course, this is in the digital realm so we must also ensure the integrity of the devices that collect and process the biometric data. So we verify the device is real, the camera feed is real, perhaps that we’ve seen the device before, and only then will we authenticate the user as the correct John Smith.
But what if we had only bound John Smith to a device? If we register a computer as “John Smith’s Computer”, and then that computer shows up, does that prove that John Smith is using it? Of course not, but that is literally what is being assumed with PKI. That’s the PKI Fallacy, and it’s obviously not going to solve the challenges we have with remote digital identity verification. Unless the user’s identity is verified, anyone with the device or the user’s credentials can access the system masquerading as the real user. So any authentication system is only as secure as its user authentication component, stand-alone. Failing to prove the identity of the user accessing the systems is exactly what happened in the SolarWinds breach, which shows, without question, that these attacks scale.
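The gap Meier describes can be sketched in a few lines of code: a standard challenge-response check proves only possession of a device key, never the identity of the person holding it. (This is an illustrative sketch; the key names and flow are hypothetical, not any vendor’s actual protocol.)

```python
import hashlib
import hmac
import secrets

# Hypothetical device key provisioned at enrollment (illustrative only).
DEVICE_KEY = secrets.token_bytes(32)

def answer_challenge(key: bytes, challenge: bytes) -> bytes:
    """Whoever holds the key, owner or thief, can compute this response."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def device_is_authentic(challenge: bytes, response: bytes) -> bool:
    """Proves possession of DEVICE_KEY, and nothing about the human holder."""
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)
response = answer_challenge(DEVICE_KEY, challenge)
# The check passes identically whether the enrolled user or an imposter
# holds the device: nothing in the exchange identifies the human.
assert device_is_authentic(challenge, response)
```

The cryptography here is sound, which is exactly the point: the math deterministically authenticates the key, while the human behind it remains an assumption.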
The truth is, until recently biometrics weren’t accurate enough and liveness wasn’t strong enough, so other methods were used in lieu of strong user authentication. Organizations started using the word “identity” in their names, but they had literally never verified an actual human, only devices. It was a slippery slope of semantics, and it’s caused a lot of confusion. A device, a token, a password, a PIN, etc., are all at least one degree separated from the entitled human privilege holder and, therefore, can be possessed by a different human who was not assigned the privilege. But because the biometrics weren’t where they needed to be, we all had to rely on device authentication that was posing as user authentication. Today, the two are literally conflated. This enabled rampant pandemic stimulus payment and unemployment fraud, and more and more breaches will happen until human users are verified as a prerequisite for access. It’s so very bad that standards groups purporting to advocate strong user authentication literally describe authenticators, like passwords, SMS messages, and in-device biometric sensors, as “verifying your identity”!
Let’s touch on why in-device biometrics are anonymous. As of right now, Apple’s Touch ID and Face ID biometric systems are not bound to an actual legal identity, for example. Neither are Samsung’s, Huawei’s, or any other in-device biometrics. A good rule of thumb is, if the biometric sensor unlocks the device, or there is a PIN that overrides it, then it’s not suitable for legal user identity verification. The sensors don’t know whose biometrics are enrolled during registration. If the phone doesn’t know what legal identity the biometric data is collected from, the relying party cannot actually know who’s holding the phone. I’m not saying they’re useless. They’re great for convenience. But I am saying they don’t perform an identity verification like most people think they do.
FindBiometrics: Being truly successful in this business is largely predicated on staying ahead of the bad guys. Since they are becoming more sophisticated, more organized, and have better resources, strongly authenticating a device while weakly authenticating the legitimate user has already proven to be eminently exploitable, as with the now-infamous SolarWinds breach you mentioned. What is the solution to the issues highlighted by the PKI Fallacy?
FaceTec: Well, as noted, the problem is that we use non-existent user authentication, but think that strong device authenticators have us covered. It’s comical that many solution providers advocate more and more device authentication to solve the problem of weak user authentication. It’s doing the same thing over and over while expecting a different result. The answer is to use truly strong user authentication first, and then for even more confidence add a layer of strong device authentication.
We already know that if an authenticator is one degree separated from the user, it’s inherently vulnerable to being shared or stolen. The only authentication method that cannot be used by an imposter is an accurate, liveness-proven biometric match, so let’s start there. Consider how humans communicate with one another: we see, are seen, talk, and listen. That’s the natural human interface and where computers are evolving toward. “Hey Siri!” or “Alexa!” are good examples. However, we need to see what we’re doing to interact with a camera, so for face matching we need a screen. I see biometric authenticators leveraging and binding our face and voice biometrics to our legal identities. Legal identity databases of face images already exist for exactly this purpose. Passports, driver licenses, national ID cards, employee badges, and other credentialing systems already show faces. So face is going to be the primary authenticator, as is now evident in governments’ increasing focus on face. And, given the right environmental circumstances, voice biometrics could be bound to the trusted face biometric and then be used in the background to continuously re-authenticate the user while they speak. Microphones, speakers, cameras, and screens are ubiquitous.
How does the biometric need to work? First and foremost, we must ensure the subject is, in fact, alive and literally physically present in front of the camera. Liveness was the biggest hurdle, and the missing piece. Today, it seems like everyone has whipped up their own liveness checks, but not all liveness systems are equal. Most are merely a nuisance for fraudsters, but others can really provide a brick wall for bad actors. FaceTec has been developing liveness AI for over seven years, and to achieve the security levels required, a new modality needed to be developed.
The process works like this: first we need the liveness data and face matching data to be captured concurrently from the same data feed. So the 3D Liveness data and 3D face data are captured together in our architecture. This limits potential attacks, speeds capture to reduce friction and session abandonment, and provides a much stronger result confidence. Biometric matching outcomes are “probabilistic” predictions, so there is always a 1-in-X chance that the comparison is wrong at the chosen false reject rate. Inarguably, measuring more and more diverse data increases the match confidence. More data collected informs the AI’s decision better, allowing us to trust the comparison results more. This is why FaceTec captures three-dimensional face data that includes both the liveness signal and matching data. We call this a 3D FaceScan, which contains orders of magnitude more data than a 2D face template, providing matching confidence far higher than any 2D system.
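To make the “more data raises confidence” point concrete, here is a toy model, not FaceTec’s actual math: if two match signals were statistically independent (a simplifying assumption), their false-accept probabilities would multiply, so each added signal shrinks the odds of a wrong acceptance.

```python
def combined_false_accept_rate(far_a: float, far_b: float) -> float:
    """Toy model: independent signals' false-accept probabilities multiply."""
    return far_a * far_b

# A hypothetical 1-in-100,000 face match combined with an independent
# 1-in-1,000 liveness signal would give roughly 1-in-100,000,000 odds
# of a wrong acceptance.
combined = combined_false_accept_rate(1 / 100_000, 1 / 1_000)
print(combined)  # roughly 1e-08
```

Real biometric signals are correlated, so the true gain is smaller than this idealized product, but the direction of the effect is the same: richer, more diverse data yields higher match confidence.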
In the past, groups like FIDO argued that biometric data should never leave the device, suggesting it’s too risky to transmit to a server. Moreover, they argued that anonymous biometrics are good enough. Others, like FaceTec, advocated and continue to advocate encrypting and then transmitting the biometric data over HTTPS to be processed behind the service provider’s firewall. This model requires less trust in the user-provided device, provides unlimited computational horsepower and storage for unlimited neural network models, and thus provides a higher match confidence than any in-device biometric sensor can provide.
Sending the biometric data to the server also allows us to bind the liveness-proven 3D FaceMap to a user profile, account, or identity, and authenticate to that verified and proven identity again and again, creating a chain of trust. By binding biometric data to an actual legal identity, rather than a device, we actually know who we are providing access to, and we can check to see if that 3D FaceMap is already in the database, potentially under a different name. You can’t do that on the device.
At FaceTec, we understand the risks of centralized systems, so we ensure that our partners and customers have data sovereignty, meaning that FaceTec never receives, stores, or processes any biometric data or personally identifiable information (PII). FaceTec’s software runs inside the customer’s firewall and their data stays 100 percent under their control. Our Device SDK software encrypts the 3D FaceScan file on the device to ensure its security in transit, and only once the encrypted biometric data is safely behind the service provider’s firewall is it decrypted and the liveness confirmed or denied. Our best practices have the servers immediately deleting that liveness data, preventing it from ever being replayed in the future. Lacking the required liveness data, the remaining 3D FaceMap cannot be successfully resubmitted if, for example, it was stolen. This mitigates proverbial honeypot risks, and protects users from their data being reused to gain unauthorized access.
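The server-side lifecycle described above, verify liveness, delete the single-use liveness data, then bind the remaining FaceMap to the verified identity, can be sketched as follows. The class and function names are hypothetical stand-ins, not FaceTec’s actual SDK, and a real deployment would add encryption in transit and at rest.

```python
from dataclasses import dataclass

@dataclass
class FaceScan:
    liveness_data: bytes  # single-use; deleted right after the liveness check
    facemap: bytes        # 3D matching data bound to the verified identity

def check_liveness(liveness_data: bytes) -> bool:
    """Stub for the server-side liveness AI (trivially passes in this sketch)."""
    return len(liveness_data) > 0

def process_scan(scan: FaceScan, facemap_store: dict, user_id: str) -> bool:
    alive = check_liveness(scan.liveness_data)
    # Delete the single-use liveness data immediately: without it, a stolen
    # FaceMap cannot be replayed as a fresh session.
    scan.liveness_data = b""
    if not alive:
        return False
    # Bind the remaining FaceMap to the user's verified identity profile.
    facemap_store[user_id] = scan.facemap
    return True

store: dict = {}
scan = FaceScan(liveness_data=b"sensor-frames", facemap=b"3d-geometry")
process_scan(scan, store, "john.smith")
# After processing, the liveness data is gone and only the FaceMap remains
# bound to the identity, so the captured session cannot be replayed.
```

The design choice worth noting is the unconditional deletion: the liveness data is wiped whether or not the check passes, so a compromised database holds nothing that can impersonate a live session.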
FindBiometrics: A lot of this seems related to foundational components of maturing mobile ID technologies. With broadly available mobile ID on the horizon, how do you expect the identity landscape to change in the near future?
FaceTec: You are who your government says you are. They issue an official birth certificate when you’re born, and it includes the name your parents chose for you and other attributes of who you are. In the U.S., the government also issues a Social Security number at birth. Other governments issue national ID cards. They also issue a driver license, a de facto national ID, after passing a test, allowing you to drive a car. If you want to change your name, you need government approval. Government issues an official death certificate when you die. The government is the original issuer and arbiter of who we are identified as. This is not something an organization of any other kind can do, particularly if these identifiers are to be used with any international credibility.
We use all of this to verify and authenticate ourselves in the real world…except on the internet, the wild-west of identity management. We can’t tell if you are who you say you are online.
How do we prove who we are in the digital world when our identity credentials are physical documents? A digital copy of the physical credential? Well, that’s pretty easily faked. The U.S. Department of Homeland Security intercepts hundreds of thousands of factory-made fake driver licenses every year; and compounding this problem even further, mobile driver licenses (mDL) will soon be widely available and used online to verify the existence of a legal identity. Those credentials are secured by strong cryptography, just like devices. However, this still leaves that pesky PKI Fallacy unaddressed.
Your biological identity is bound to your legal identity by your government, so you are who the government says you are, in person and online. And binding a liveness-proven biometric to a user profile is the best way to verify who is actually using the device. It seems logical to bind our biometrics to an official government issued and verified legal identity and authenticate to it.
At FaceTec, we see the future of identity verification and authentication going completely digital, meaning no physical documents are required, as more legal identity issuers become digital verifiers. This will allow identity verification from any device, from anywhere. When you go to access your privileges, assets, services, etc., your liveness-proven 3D FaceMap will be compared to the trusted photos of you in the identity issuer’s database, and verified. Then, all subsequent sessions won’t just be user authentications, they will be legal identity verifications each time, as well.