PART TWO: The Problem
A Lack of Standards Promotes Hype and Vulnerability
Above, we used the simple use case of password replacement to illustrate the value proposition of biometrics. While biometrics promise both convenience and security, it is convenience that has driven decentralized, on-device adoption. Device unlocking happens dozens of times a day, and from a user-experience standpoint, trading some security for speed just makes sense. That same lenient security, however, has prevented centralized adoption.
Conventional wisdom says that your biometric data is yours and yours alone, so only you should be able to use it. But when a biometric protects something of value, hackers will attempt to fool the system with presentation attack artifacts (non-human representations of the real thing), otherwise known as "spoofs". In a spoofing attempt, a derivative of the user's biometric data, such as a face photo or video, is presented to the camera to trick the algorithms into accepting a facsimile. The best biometric authenticators are built to thwart exactly these attempts.
It seems an obvious security requirement that a user be physically present in order to gain access, but biometric authentication is only as strong as its ability to concurrently match unique human characteristics and verify living human traits in real time. As it stands, truly verifying human liveness is hard. Really hard. This decades-long need for a "Turing Test in reverse" has spawned such non-biometric attempts as CAPTCHA, reCAPTCHA and the "I'm not a robot" checkbox: steps in the right direction, but still fallible.
Even highly publicized biometric systems can fall. Apple's Face ID leverages a 3D infrared camera system and is promoted as the most secure biometric security in the consumer mobile market, yet it was publicly spoofed by, among others, security firm Bkav, which bypassed the 3D feature fairly easily using a mask with a paper face glued to it. This scenario is particularly illustrative of the need for standardized third-party testing: Bkav did not publish a detailed methodology for its Face ID spoofing process (though others published their own), and Apple never formally responded to the attack with a fix. In a self-attested security market, a consumer can only become confused. While this was the highest-profile example, dozens of solutions on the market continue to make unsubstantiated claims of immunity against presentation attacks.
Biometric hubris is not the solution to our password problems.
A data breach will tarnish a company's reputation, generate lawsuits, and now, thanks to Europe's General Data Protection Regulation (GDPR), result in massive fines. Not surprisingly, the number-one cause of data breaches is compromised password credentials. High-definition images of faces, irises and fingerprints can be captured by bad actors at a distance, and many social media users make even that unnecessary by posting this biometric data themselves. So despite biometric vendors' eagerness to promote solutions without fully developed liveness detection, replacing passwords with algorithms that will accept artifacts created from commonly available sources as the genuine article simply will not work.
Thankfully, trust is on the way.
Continue to PART THREE: Putting Trust to the Test
“Standardized Testing for Biometrics: Cutting Through the Hype and Finding Integrity in Digital Identity” is a FaceTec white paper. This version has been optimized for the web for educational purposes and published here with permission from the author.
Learn more about our educational initiatives by contacting Peter O’Neill or Susan Stover, and asking about our Leadership Program.