A team of researchers at the University of Southern California has released a report suggesting that deepfake detection systems exhibit many of the same biases as traditional facial recognition systems. The researchers evaluated three deepfake detection systems, each of which was trained on the FaceForensics++ dataset and was reported to identify deepfake videos with high accuracy.
Unfortunately, systems that could detect deepfake videos in the sample group did not fare as well when applied to the general population. The researchers found that the error rate varied by as much as 10.7 percent depending on the race of the person in the video, which suggests that racial bias has been encoded into deepfake detectors in the same way it has been encoded into other facial recognition algorithms.
The reasons for that bias are the same in both cases: the deepfake detectors were trained on a non-representative dataset that does not capture the diversity of the population at large. The FaceForensics++ videos contained more women than men (58 percent versus 41.7 percent), and white people were similarly overrepresented. Less than five percent of the real videos featured Black or Indian faces.
The researchers also called attention to the dataset’s “irregular swaps,” which place one person’s face onto that of a person of a different race. While the swaps are ostensibly meant to reduce racial bias, they tend to exacerbate it in practice because they teach the algorithm to associate altered videos with people of certain races and facial features.
“In a real-world scenario, facial profiles of female Asian or female African are 1.5 to 3 times more likely to be mistakenly labeled as fake than profiles of the male Caucasian,” wrote the researchers. “The proportion of real subjects mistakenly identified as fake can be much larger for female subjects than male subjects.”
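The metric the researchers cite is, in effect, a per-group false-positive rate: the share of real videos mistakenly flagged as fake within each demographic group. The sketch below is a rough illustration of how that kind of disparity is measured, using hypothetical data and labels; it is not the study’s actual code.

```python
from collections import defaultdict

def false_positive_rates_by_group(records):
    """Compute the per-group false-positive rate: the share of real
    videos mistakenly labeled fake within each demographic group.

    records: iterable of (group, is_fake_truth, predicted_fake) tuples.
    (Hypothetical data layout, for illustration only.)
    """
    real_total = defaultdict(int)    # real videos seen per group
    real_flagged = defaultdict(int)  # real videos wrongly flagged as fake

    for group, is_fake, predicted_fake in records:
        if not is_fake:              # only real videos can be false positives
            real_total[group] += 1
            if predicted_fake:
                real_flagged[group] += 1

    return {g: real_flagged[g] / real_total[g] for g in real_total}

# Made-up example: (group, ground truth, detector output)
records = [
    ("female_asian", False, True),
    ("female_asian", False, False),
    ("male_caucasian", False, False),
    ("male_caucasian", False, False),
]
print(false_positive_rates_by_group(records))
# e.g. {'female_asian': 0.5, 'male_caucasian': 0.0}
```

Disaggregating error rates this way, rather than reporting a single aggregate accuracy figure, is what surfaces the kind of bias the study describes: a detector can look highly accurate overall while disproportionately misclassifying real videos of particular groups.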
Based on those findings, the researchers argued that deepfake detection systems are not ready for commercial deployment because the potential impact of biased systems is not yet fully understood. That remains true despite the growing threat that deepfakes pose to international security. In the meantime, many organizations have emphasized the need for representative datasets in AI development, whether for facial recognition or deepfake detection.
Source: VentureBeat
May 12, 2021 – by Eric Weiss