Stanford researcher Michal Kosinski has published a study suggesting that facial recognition can be used to predict someone’s political affiliation. Kosinski built his algorithm with open-source facial recognition software and trained it on images pulled from dating sites in Canada, the US, and the UK, and from Facebook users in the United States.
The faces in Kosinski’s database were labeled as liberal or conservative based on each person’s responses to a questionnaire on their chosen site. Once trained, the algorithm was able to determine which of two new faces was liberal and which was conservative nearly three-quarters of the time. It was 71 percent accurate when the two faces were demographically similar, and 73 percent accurate when variables like age, gender, and ethnicity were introduced. Human beings, by comparison, are only around 55 percent accurate.
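Kosinski’s exact pipeline isn’t reproduced here, but a minimal sketch of this kind of approach could look like the following. It assumes off-the-shelf face embeddings from the open-source face_recognition library, a scikit-learn logistic regression classifier, and a hypothetical labels.csv of image paths paired with self-reported labels; all of those specifics are illustrative assumptions rather than details from the study.

```python
# Illustrative sketch only: not Kosinski's code. Assumes the open-source
# `face_recognition` and scikit-learn packages, plus a hypothetical
# labels.csv listing image paths and self-reported political labels.
import csv

import face_recognition
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def load_embeddings(csv_path):
    """Read `path,label` rows and return 128-d face encodings with binary labels."""
    X, y = [], []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            image = face_recognition.load_image_file(row["path"])
            encodings = face_recognition.face_encodings(image)
            if encodings:  # skip images where no face was detected
                X.append(encodings[0])
                y.append(1 if row["label"] == "liberal" else 0)
    return X, y


X, y = load_embeddings("labels.csv")  # hypothetical input file
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```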
Of course, those results still leave plenty of room for error, and it’s worth noting that the algorithm was making head-to-head comparisons in which one face was always liberal and the other always conservative. It’s unclear how it would perform in an open scenario, such as being asked to pick out people of one political leaning in a crowd.
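That head-to-head setup amounts to asking, for every liberal/conservative pair, whether the model gives the liberal face the higher score, which is equivalent to the area under the ROC curve. A small illustration of how that pairwise accuracy could be computed, using random placeholder scores rather than real model outputs (the pairing scheme and numbers are assumptions for illustration, not the paper’s exact protocol):

```python
# Sketch of the head-to-head evaluation: for each liberal/conservative pair,
# the model "wins" if it assigns the liberal face the higher score.
# The scores below are random placeholders, not outputs of the real model.
import numpy as np

rng = np.random.default_rng(0)
liberal_scores = rng.normal(0.6, 0.2, size=500)       # placeholder scores for liberal faces
conservative_scores = rng.normal(0.4, 0.2, size=500)  # placeholder scores for conservative faces


def pairwise_accuracy(lib, con):
    """Fraction of all liberal/conservative pairs ranked correctly (equals the AUC)."""
    return (lib[:, None] > con[None, :]).mean()


print(f"pairwise accuracy: {pairwise_accuracy(liberal_scores, conservative_scores):.2f}")
```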
Kosinski nevertheless argues that his study carries troubling implications for facial recognition, since it demonstrates that the technology can in fact be used to profile people based on their political affiliation. Once that possibility exists, it becomes increasingly likely that someone will use the technology in bad faith for their own political ends; oppressive regimes, for example, could use facial scans as evidence to target political opponents.
There is already some evidence to suggest that governments are interested in using facial recognition for such purposes. In China, Huawei and Megvii have come under fire for developing a system that could identify members of the country’s Uighur Muslim population, and privacy advocates have raised similar bias concerns in the United States.
For his part, Kosinski said that his study is intended as a warning for privacy advocates. While most would prefer to dismiss physiognomy as a pseudoscience, his study shows that this is not entirely the case: algorithms really can infer intimate traits from faces. Civil rights groups need to be aware that the technology exists if they are going to be prepared to fight against it.
“Don’t shoot the messenger,” said Kosinski. “In my work, I am warning against widely used facial recognition algorithms. Worryingly, those AI physiognomists are now being used to judge people’s intimate traits – scholars, policymakers, and citizens should take notice.”
While Kosinski was able to highlight some patterns – liberals were more likely to face the camera, and to express surprise rather than disgust – he was not able to fully account for his algorithm’s accuracy. He explained that he has also published a similar study that used facial recognition to predict people’s sexual orientation.
Source: Nature Research (via TechCrunch)
January 14, 2021 – by Eric Weiss