A team of South Korean researchers has published a study suggesting that some of the world's leading facial recognition APIs may be far more vulnerable to deepfakes than previously suspected. The findings have troubling implications for the future of facial recognition technology, since face biometrics would no longer be an effective form of authentication if algorithms cannot distinguish real faces from fakes.
The study itself was conducted by researchers at Sungkyunkwan University in Suwon, and was published on the preprint server Arxiv.org. The researchers looked at Microsoft's Azure Cognitive Services and Amazon Rekognition, both of which offer tools to identify the faces of celebrities. From there, the team carried out deepfake tests with celebrity faces, primarily because the ready availability of celebrity images made it easier to generate the fakes.
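For context (and not as part of the study itself), the sketch below shows how one might query Rekognition's celebrity recognition endpoint with Amazon's boto3 SDK. The image path and AWS region are placeholder assumptions; the recognize_celebrities call and its response fields are part of the published API.

```python
# Minimal sketch: ask Amazon Rekognition which celebrity (if any) it sees
# in an image. Requires AWS credentials; the file name and region are
# placeholders, not values from the study.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("candidate_face.jpg", "rb") as f:
    response = client.recognize_celebrities(Image={"Bytes": f.read()})

for celeb in response["CelebrityFaces"]:
    # MatchConfidence is Rekognition's 0-100 score for the match.
    print(celeb["Name"], celeb["MatchConfidence"])
```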
To that end, the researchers used five celebrity data sets (three public and two built in-house) to train AI models that could fool the two APIs. They ended up creating 8,119 deepfakes in total, and were able to trick Azure Cognitive Services and Rekognition 78 percent and 68.7 percent of the time, respectively. Both APIs also had a tendency to be quite confident when they were wrong, with Rekognition in particular giving 902 out of 3,200 deepfakes a higher confidence score than a real image of the celebrity in question.
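The paper's own evaluation code is not reproduced here, but the logic it describes reduces to a simple loop: submit each deepfake, count a success whenever the API returns the targeted celebrity, and compare the fake's confidence score against that of a genuine reference photo. The sketch below is a hypothetical illustration of that logic; query_api is a stand-in for a call like the Rekognition query above, assumed to return a (name, confidence) pair.

```python
# Hypothetical evaluation helpers mirroring the study's reported metrics.
# query_api(image) is assumed to return (celebrity_name, confidence).

def fool_rate(deepfakes, target_name, query_api):
    """Fraction of deepfakes the API matches to the targeted celebrity."""
    hits = sum(1 for img in deepfakes if query_api(img)[0] == target_name)
    return hits / len(deepfakes)

def scores_above_genuine(deepfakes, genuine_image, target_name, query_api):
    """Count deepfakes that earn a higher confidence score than a real
    photo of the same celebrity -- the comparison behind the
    902-of-3,200 Rekognition figure cited above."""
    _, real_conf = query_api(genuine_image)
    count = 0
    for img in deepfakes:
        name, conf = query_api(img)
        if name == target_name and conf > real_conf:
            count += 1
    return count
```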
The researchers noted that different facial recognition systems take different approaches to deepfakes, and that some are better at dealing with them than others. However, they still framed the vulnerability as a pressing problem for everyone in the space.
“Assuming the underlying face recognition API cannot distinguish the deepfake impersonator from the genuine user, it can cause many privacy, security, and repudiation risks, as well as numerous fraud cases,” the researchers wrote. “If the commercial APIs fail to filter the deepfakes on social media, it will allow the propagation of false information and harm innocent individuals.”
For its part, Microsoft recently released a new deepfake detection solution to help spot videos and images that have been manipulated. In the meantime, iProov has established a new Security Operations Centre to monitor attacks at scale, while researchers have tried using heartbeat biometrics to separate deepfakes from originals.
Source: VentureBeat
March 8, 2021 – by Eric Weiss