A study by the National Institute of Standards and Technology (NIST) released last Thursday shows that many of the facial recognition algorithms used today misidentify people of color far more often than middle-aged white men.
The study used roughly 18 million photos of more than 8 million people taken from databases run by the State Department, the Department of Homeland Security, and the FBI. It evaluated 189 algorithms, representing most of the facial recognition industry's leading systems, voluntarily submitted by 99 companies, academic institutions, and other developers. Major tech companies were among them, including Intel, Microsoft, Panasonic, and SenseTime.
Amazon — which develops Rekognition, its own software used by law enforcement to track criminal suspects — did not submit its algorithm for the study, saying that its cloud-based service could not be easily examined by the NIST test.
The results of the study showed that, depending on the algorithm being tested and the type of search being conducted, Asian and African American people were up to 100 times more likely to be misidentified than white men.
For the kinds of searches most often used by police investigators — "one-to-many" searches, in which a single image is compared to thousands or millions of others to find a match — the faces of African American women were the most likely to receive a false positive match.
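For context, a one-to-many system typically reduces each face to a numeric embedding and scores the probe image against every enrolled record. The Python sketch below is purely illustrative (the function names and threshold are assumptions, not NIST's benchmark code or any vendor's implementation), but it shows where a false positive can enter:

```python
# Illustrative sketch of a "one-to-many" search (assumed names and
# threshold; not NIST's benchmark code or any vendor's system).
# A probe face is reduced to a numeric embedding and compared against
# every embedding in an enrolled gallery; anything scoring above the
# threshold comes back as a candidate match.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score in [-1, 1]; higher means more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_to_many_search(probe, gallery, threshold=0.6):
    """Return enrolled identities scoring above the threshold, best first.

    gallery: dict mapping identity name -> embedding (np.ndarray).
    Any returned identity that is not actually the person in the probe
    image is a false positive, the error type the NIST study measured.
    """
    hits = [(name, cosine_similarity(probe, emb))
            for name, emb in gallery.items()]
    return sorted((h for h in hits if h[1] >= threshold),
                  key=lambda h: h[1], reverse=True)
```

Lowering the threshold surfaces more candidates, which raises the risk of exactly the kind of false positive matches the study measured.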
Native Americans experienced the highest false positive rates of any ethnicity overall, though researchers found that accuracy varied widely from algorithm to algorithm.
The study also showed that in "one-to-one" searches, the kind used to verify a single claimed identity, algorithms developed in the U.S. produced high error rates for Asians, African Americans, Native Americans, and Pacific Islanders. These searches form the backbone of rapidly expanding services like cellphone sign-ins and airport boarding systems.
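A one-to-one search, by contrast, verifies a single claimed identity rather than trawling a database, which is why it underpins sign-in and boarding applications. Again, the sketch below is an illustrative assumption rather than any deployed system's code:

```python
# Illustrative sketch of a "one-to-one" verification check (an
# assumption, not any deployed sign-in or boarding system's code).
# A single claimed identity is accepted or rejected by comparing the
# live probe against that one enrolled template.
import numpy as np

def verify(probe: np.ndarray, enrolled: np.ndarray,
           threshold: float = 0.7) -> bool:
    """Accept only if similarity clears the threshold.

    A false positive here means accepting an impostor; a false
    negative means locking out the genuine user.
    """
    score = np.dot(probe, enrolled) / (
        np.linalg.norm(probe) * np.linalg.norm(enrolled))
    return bool(score >= threshold)
```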
Alarmed lawmakers reacted to the results by calling on the Trump administration to revisit its plans to expand the country's use of facial recognition technology.
“[F]acial recognition systems are even more unreliable and racially biased than we feared,” said Rep. Bennie G. Thompson (D-Miss.), chairman of the Committee on Homeland Security.
In a statement released after the findings were made public, Sen. Ron Wyden (D-Ore.) said that “algorithms often carry all the biases and failures of human employees, but with even less judgment,” and added that “[a]ny company or government that deploys new technology has a responsibility to scrutinize their product for bias and discrimination at least as thoroughly as they’d look for bugs in the software.”
NIST's study comes after multiple U.S. cities and states have placed bans or restrictions on law enforcement's use of facial recognition technology, including California, which banned its use in body cameras worn by police officers.
Facial recognition is also facing vocal criticism on the international stage. In China's northwestern Xinjiang region, its use against Uighur Muslims, along with other forms of biometric surveillance, is the subject of increasing political and public scrutiny.
Source: The Washington Post
—
December 23, 2019 – by Tony Bitzionis