Corsight AI is setting the benchmark for the rest of the facial recognition industry. In that regard, the company is touting its performance in its most recent NIST evaluation, in which it displayed no bias across several key demographics.
On that front, Corsight explained that bias is measured by comparing an algorithm’s False Match Rate (FMR) across demographic groups in the NIST test. If an algorithm has a higher FMR for Black individuals than it does for white individuals, it means that the algorithm is less accurate when applied to Black people, and that there is therefore a higher chance of a matching error.
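To make that concrete, the sketch below shows how a per-group FMR comparison can be computed. It is an illustrative example only, not NIST's or Corsight's actual evaluation code; the function name, data format, and threshold value are all assumptions for the sake of the example.

```python
# Illustrative sketch: computing a False Match Rate (FMR) per demographic group.
# An "impostor" comparison pairs images of two different people; a false match
# occurs when the algorithm's similarity score clears the decision threshold anyway.
# (Hypothetical helper and data; not NIST's or Corsight's evaluation code.)

from collections import defaultdict

def fmr_by_group(impostor_scores, threshold):
    """impostor_scores: list of (group_label, similarity_score) tuples,
    one per impostor comparison. Returns the FMR for each group."""
    totals = defaultdict(int)
    false_matches = defaultdict(int)
    for group, score in impostor_scores:
        totals[group] += 1
        if score >= threshold:  # the algorithm wrongly declares a match
            false_matches[group] += 1
    return {g: false_matches[g] / totals[g] for g in totals}

# Hypothetical scores: equal FMRs across groups indicate no measured bias,
# while a higher FMR for one group means more matching errors for that group.
scores = [("Black", 0.31), ("Black", 0.72), ("white", 0.28), ("white", 0.69)]
print(fmr_by_group(scores, threshold=0.6))  # {'Black': 0.5, 'white': 0.5}
```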
Corsight’s algorithm, on the other hand, had the exact same FMR for both Black and white test subjects, a trend that held true for men and women alike. That was good enough for the top score on the test, with the NIST even going so far as to set Corsight as its reference point in reports about bias.
In doing so, the NIST effectively established Corsight as the gold standard for unbiased performance in the facial recognition industry. Corsight posted those results in the latest iteration of the NIST’s FRVT (Face Recognition Vendor Test) 1:1 Verification test.
“We’re absolutely thrilled with these results,” said Corsight Chief Privacy Officer Tony Porter. “This is another step forward in countering claims that bias is damaging the effectiveness of facial recognition technology. The argument that facial recognition software is not capable of being fair is frozen in time and the performance of Corsight’s latest submission demonstrates that.”
While Porter was proud of the company’s results, he stressed that bias has not yet been “solved,” and that Corsight is still working to develop more egalitarian facial recognition tech. The company’s algorithm also showed a significant reduction in bias in a NIST evaluation in December, and Corsight updated its platform in June in an effort to proactively comply with future privacy legislation.
July 26, 2022 – by Eric Weiss