Guest Essay by Neville Pattinson
More than our signature, fingerprint or even voice, our face is our most fundamental form of identification, and it is in plain view most of the time. With technology now able to capture, analyze and compare faces, we must take the application of facial recognition very seriously.
In the online economy, reliable authentication and identification can be hard to achieve. Email is not surefire proof of identity, and people forget passwords and PINs, which is why attention has turned to biometric alternatives and the potential of facial recognition.
Facial recognition technology is already becoming more commonplace. In October, the US government’s General Services Administration (GSA) announced that its facial matching login service is generally available to the public and other federal agencies.
But here’s the problem: This announcement came a mere week after the GSA released a report stating that remote ID verification is unreliable. Despite our progress, there is still a lack of clarity on where the industry stands with facial recognition technology. Fear and caution persist among the public about its viability and about its information-security and privacy implications. In the process, a jaw-dropping amount of misinformation and many subjective conclusions are disseminated.
Let’s take a look at some primary concerns surrounding facial recognition technology to determine whether caution is warranted, or whether those concerns are simply misconceptions to dispel.
Facial Recognition for Data Mining and Abuse
For 25 years, Customs and Border Protection (CBP) has used biometrics, including facial recognition, for identification to maintain border security. The Transportation Security Administration (TSA) can now take a photo of a traveler’s face for a 1:1 biometric facial match with the face on their driver’s license or passport.
But in this case, as in many others, the data is used only for identity verification, to ensure that the person presenting the identity document is its rightful holder; the photo and any scans of the document are then deleted rather than stored and collected for reuse.
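To make that verify-then-delete flow concrete, here is a minimal sketch of a 1:1 facial match in Python. It assumes a face-embedding model (not shown) has already turned each image into a fixed-length vector; the function name and threshold are illustrative assumptions, not the TSA’s or any vendor’s actual implementation. The point is that only the yes/no outcome is kept.

```python
import numpy as np

def verify_traveler(live_embedding: np.ndarray, document_embedding: np.ndarray,
                    threshold: float = 0.6) -> bool:
    """1:1 verification: does the live capture match the single photo on the
    presented document? The embeddings exist only in memory; nothing is
    persisted, and only the match decision leaves this function."""
    score = float(np.dot(live_embedding, document_embedding) /
                  (np.linalg.norm(live_embedding) * np.linalg.norm(document_embedding)))
    return bool(score >= threshold)
```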
Contrary to popular opinion, not everybody is out to get our data. Industries already plagued by frequent data breaches, like law enforcement, absolutely don’t want to store any more personally identifiable information (PII) than they need to. Not only does that incur greater cost and IT labor, but it also increases the risk of liability should the data get compromised.
In other words, while a fear of data collection for mining and misuse is understandable, when it comes to facial recognition, it’s just a myth.
‘Big Brother’ Surveillance
George Orwell’s 1984 wasn’t written in a vacuum, but there are some key differences between that world and the one we live in today. Legislative guardrails prevent the government from overstepping its bounds, and growing consumer demand for data transparency and privacy protection is making enterprises more cautious about how they handle data.
Federal, state and company policies can vary, but transparency and open communication are common requirements to prevent any conflicts or threat of ‘Big Brother’ surveillance. For example, U.S. citizens and permanent residents can opt out of any program using facial recognition when boarding international flights.
There’s also a very distinct difference between facial verification and surveillance. Facial verification is used to ensure the rightful owner of a document is actually presenting it. Facial surveillance involves scanning faces in a field of view and matching them in real time against a database of known faces to identify individuals. This practice should be tightly restricted commercially, and legislative guardrails should be created to ensure appropriate use even for law enforcement purposes. Where facial recognition technology legitimately helps is after an incident, when it can analyze video footage and generate leads for law enforcement to identify persons of interest.
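To illustrate the distinction under the same illustrative assumptions as the earlier sketch, surveillance-style 1:N identification searches a probe face against an entire gallery of known faces rather than against a single document photo. The gallery structure and threshold here are hypothetical.

```python
import numpy as np

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    """1:N identification: scan a whole gallery of known-face embeddings for the
    best match above a threshold; this is the surveillance-style mode, as
    opposed to the 1:1 document check sketched earlier."""
    best_id, best_score = None, threshold
    for person_id, reference in gallery.items():
        score = float(np.dot(probe, reference) /
                      (np.linalg.norm(probe) * np.linalg.norm(reference)))
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None means nobody in the gallery cleared the threshold
```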
Law enforcement use is subject to some of the greatest misconceptions about how the technology works. Agencies rely on mugshot databases, running captured images through them for a system match, to identify perpetrators quickly and accurately and to catch and convict criminals. But the technology is used to build out real evidence, or is used after evidence is collected, not to malevolently and intentionally target and prosecute innocent people without cause or evidence.
Ergo, the fear that facial recognition technology is used for surveillance in the USA is a myth.
Replacing Other Security Measures
The idea that facial recognition technology could replace all other security measures is hard to believe in the age of multi-factor authentication (MFA). Too many data breaches have made it necessary for companies to adopt MFA to keep customers’ accounts and data secure from theft or fraud. With deepfake technology, our own faces are no longer enough to prove we are who we say we are.
For agencies like TSA, biometric-based authentication is more common, but even so, other secure documents, like traditional IDs or passports, are still involved in the traveler verification process, and will continue to be for some time.
Involving Untrustworthy Technology
Like all tools, facial recognition technology is not immune to manipulation. Criminals try to spoof it with photographs, 3D masks and video clips, or to bypass it altogether by stealing numeric codes.
Fortunately, technological improvements and a smarter approach to the user interface have made it harder for hackers or imposters to succeed. However, this depends on the level of established security. In places not considered high-risk, facial recognition technology is usually all that is required for identification and authentication. But where the risk is high, the system might demand MFA such as passwords and fingerprint scanning. In most high-risk cases, there is a human expert involved too.
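As a rough sketch of that risk-tiered approach, a step-up policy might look something like the following; the factor names and risk tiers are assumptions for illustration, not any agency’s actual rules.

```python
from enum import Enum, auto

class Risk(Enum):
    LOW = auto()
    HIGH = auto()

def required_extra_factors(risk: Risk, face_match_passed: bool) -> list[str]:
    """Decide what, if anything, is demanded beyond the facial match."""
    if not face_match_passed:
        return ["manual_review"]  # a failed match escalates to a human expert
    if risk is Risk.LOW:
        return []  # low risk: the face match alone suffices
    # high risk: step up to multi-factor authentication plus human review
    return ["password", "fingerprint_scan", "human_review"]
```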
While we are not quite in the realm of myth, neither are we actively using untrustworthy technology for verification.
The Truth Behind Facial Recognition Technology
The biggest truth behind facial recognition technology is that it is badly misunderstood. To much of the public, it is a tool used malevolently to invade the average Joe’s privacy and put everyday citizens in jail, one capable of identifying anyone’s emotions, age or other distinguishing features. Privacy groups that cite its use as a violation of personal rights, and Hollywood’s dramatic depictions, do little to dispel these myths and misconceptions.
In truth, the technology is intended to expedite and simplify authentication and accessibility to provide an overall positive experience. Many guardrails are already in place, and organizations spend time thoroughly reviewing their use of the technology and ensuring compliance.
The recent change in administration and the possibility of Project 2025 create uncertainty around biometric use and abuse. Right now, it’s still early days, and hard to say what direction the next four years will take.
But whatever lies ahead, we are stronger together than we are apart. The onus is on everyone, from the individual experiencing the technology to the company creating it, to ensure we do not lose our way as we wade through sociopolitical and economic obstacles, and do not compromise our principles and ethics for the safe, consensual use of biometric technology.
About the Author
Neville Pattinson is the Head of Business Development for Identity & Biometric Solutions at Thales North America and is the Chairman of the International Biometrics + Identity Association. Pattinson is a leading expert and thought leader on digital identity solutions such as smart cards, electronic passports, various biometric technologies and mobile digital identity that keep identity credentials secure, private and trusted.