The NIST is advocating for a more nuanced understanding of bias in AI. The organization noted that bias has historically been treated as a purely technological concern, defined solely in terms of accuracy. For example, a facial recognition system is considered biased if it does a better job of identifying people in one demographic group than it does those in another.
The problem, according to the NIST, is that this framing does not capture all of the ways in which AI can be biased. If bias is simply a question of accuracy, the solution is relatively straightforward: developers need to train their systems with better datasets to ensure consistent performance across all demographic groups.
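To make that accuracy-only framing concrete, the snippet below is a minimal Python sketch of how a per-group accuracy gap might be measured. The data, group labels, and helper function are hypothetical illustrations and are not drawn from the NIST report.

```python
# A minimal sketch of the accuracy-centric view of bias described above:
# checking whether a classifier performs equally well across demographic
# groups. All data here is invented for illustration.

from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return the accuracy of the predictions within each group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical match / non-match decisions from a face recognition system.
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]

per_group = accuracy_by_group(y_true, y_pred, groups)
print(per_group)                                           # {'A': 1.0, 'B': 0.6}
print(max(per_group.values()) - min(per_group.values()))   # accuracy gap: 0.4
```

Under this view, a non-zero gap signals bias and a balanced training set is the remedy; the NIST's point, elaborated below, is that this measurement alone is not the whole story.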
There are many indications that this work is already underway, and the NIST stressed that accuracy is indeed an important concern for AI developers. However, the organization's Special Publication 1270 argues that the job will not be finished once statistical parity is achieved. AI systems are used by people who exist within complex social systems, and those people and systems can be biased in the ways they treat different individuals. For instance, the NYPD recently came under fire for placing more facial recognition cameras in neighborhoods with more people of color, a practice that exposes those populations to more invasive policing and reinforces biased assumptions about who is and is not likely to be a criminal.
With that in mind, the NIST wants the tech industry to take those human and social factors into account when discussing bias in biometrics, arguing that any approach that fails to recognize context will lead to solutions that do not address the full scope of the problem. The organization encouraged the tech industry to welcome input from experts in other fields to gain a better understanding of the ways in which AI can have a tangible impact on a community.
“Context is everything,” said Reva Schwartz, one of the authors of the NIST’s latest publication. “AI systems do not operate in isolation. They help people make decisions that directly affect other people’s lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI.”
NIST Special Publication 1270 seeks to create a standard for identifying bias, and incorporates public feedback on a draft version of the publication that was released last summer. The information detailed in the report will inform the creation of the organization’s new AI Risk Management Framework, which is currently in development. The NIST will continue to solicit public feedback through a series of public workshops in the coming months.
March 23, 2022 – by Eric Weiss