A new NIST report suggests that facial recognition providers may be overstating their ability to identify people wearing masks. The organization found that the error rates of the top platforms ranged from 5 to 50 percent when comparing photos of people without masks to photos of the same people with digitally applied face masks.
The Interagency Report examined 89 of the world’s top facial recognition algorithms, and is the first in a planned series of reports on the efficacy of facial recognition in a masked environment. Mask wearing has increased dramatically since the onset of COVID-19, and while that would seem to pose a problem for facial recognition, many of the leading providers have released updates or otherwise assured users that their engines remain highly accurate (above 95 percent) when used on mask wearers.
The NIST results call those claims into question. However, the organization stressed that its results are only preliminary, and that a few caveats are worth considering. For one thing, the algorithms used in the study were not designed with masks in mind, and their performance would be expected to improve once providers adjust their software to account for the new reality.
Meanwhile, the masks used in the study were digitally created. As a result, they lacked some of the texture and contours of an actual mask worn on a three-dimensional face.
“With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces,” said the NIST’s Mei Ngan, who was one of the writers of the report. “We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks. Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind.”
To conduct the test, the NIST used a dataset of roughly 6 million images from prior rounds of its Face Recognition Vendor Test (FRVT). The researchers looked only at one-to-one matching scenarios, and examined nine digitally applied masks of different shapes and colors. The algorithms generally performed better with round masks than with wide ones, while black masks were more disruptive than blue ones designed to mimic the color of a traditional surgical mask.
The NIST also found that the algorithms had more trouble when a mask covered the wearer’s nose, and that failure-to-capture incidents, in which an algorithm is unable to extract a usable face from the image, were far more common than with unmasked photos. Thankfully, the number of false positives either declined or stayed the same, which suggests that masks should not lead to any additional false identifications or security incidents with existing technology.
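In a one-to-one evaluation of this kind, each comparison produces a similarity score that is checked against a threshold: a false non-match occurs when two images of the same person score below the threshold (or when the masked image yields no usable template at all), while a false positive occurs when images of two different people score above it. The short sketch below shows how those error rates are tallied for hypothetical similarity scores; the function, variable names, and numbers are illustrative only and are not taken from the NIST report or FRVT tooling.

# Illustrative sketch (not NIST or FRVT code): tallying one-to-one error rates
# from hypothetical similarity scores produced by a face recognition engine.
from typing import List, Optional, Tuple

def error_rates(
    mated_scores: List[Optional[float]],   # same-person comparisons (unmasked vs. masked);
                                           # None marks a failure to capture a face
    nonmated_scores: List[float],          # different-person comparisons
    threshold: float,
) -> Tuple[float, float]:
    """Return (false non-match rate, false match rate) at the given threshold."""
    # A mated pair is a false non-match if the masked image produced no template
    # (failure to capture) or the similarity score falls below the threshold.
    false_non_matches = sum(1 for s in mated_scores if s is None or s < threshold)
    # A non-mated pair is a false match (false positive) if its score clears the threshold.
    false_matches = sum(1 for s in nonmated_scores if s >= threshold)
    return false_non_matches / len(mated_scores), false_matches / len(nonmated_scores)

# Toy data: a few masked comparisons fail to capture a face at all (None).
mated = [0.91, 0.62, None, 0.48, 0.88, None, 0.73]
nonmated = [0.12, 0.35, 0.07, 0.41, 0.22]

fnmr, fmr = error_rates(mated, nonmated, threshold=0.60)
print(f"FNMR: {fnmr:.2%}, FMR: {fmr:.2%}")  # FNMR: 42.86%, FMR: 0.00%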
The Department of Homeland Security and US Customs and Border Protection collaborated with the NIST on the report. In internal documents, both agencies complained that their surveillance tech has not performed as well since people started wearing masks.
July 28, 2020 – by Eric Weiss