Some members of the academic community are starting to question the ethical foundations of facial recognition research. Critics are particularly concerned about how biometric data is collected, and about facial recognition projects whose findings could be turned against vulnerable populations.
Those concerns were chronicled in a recent Nature article based on the survey responses of 480 researchers whose work involves facial recognition in some capacity. The survey revealed little consensus: some respondents were sharply critical of current practices, while others defended them.
One of the primary points of contention was the collection of biometric data. Facial recognition research typically requires a large dataset, and several private companies and academic institutions have made datasets publicly available over the years. Many of the images in those datasets were gathered without consent, and while some datasets have since been taken down, they still circulate within the academic community and are still used in research. Even so, only 40 percent of the Nature respondents felt that researchers needed to obtain informed consent from the people in a database, while nearly 20 percent believed they should be able to use any photos they could find online.
There are similar concerns about how facial recognition can be used. The Nature article called particular attention to research involving the minority Uyghur population in Xinjiang, noting that even when the research itself is framed as neutral, its findings could perpetuate human rights abuses if incorporated into a surveillance scheme. Other researchers have tried to use facial recognition to predict criminality, an application that could similarly exacerbate racial bias.
Some respondents tried to draw a distinction between the research itself and the potential applications of the technology. Critics were quick to dismiss that argument, countering that researchers who do not consider the implications of their work are simply shirking their moral and scientific responsibilities.
“The AI community suffers from not seeing how its work fits into a long history of science being used to legitimize violence against marginalized people,” said MIT researcher Chelsea Barabas. “If you design a facial-recognition algorithm for medical research without thinking about how it could be used by law enforcement, for instance, you’re being negligent.”
In light of those concerns, critics are asking academic journals to retract papers that relied on questionable methodology, and to create ethical oversight boards to try to stop such projects before they make it to publication. They also want researchers to stop associating with technology providers who have been accused of enabling human rights abuses.
Source: Nature
November 23, 2020 – by Eric Weiss