OpenAI has taken measures to prevent its AI tool, GPT-4, from being used broadly for facial recognition. In addition to text-based interactions, GPT-4 can describe images. One participant in a trial of this feature, Jonathan Mosen, who is blind, called it an “extraordinary” tool for helping him to understand and interpret the visual world. But he was disappointed to find that the app had recently stopped giving him information about people’s faces: OpenAI had changed the feature so that it would identify only public figures.
ChatGPT’s ability to “interrogate images” has been particularly valuable for Mosen. Speaking to The New York Times, he recounted an instance in which a social media image was described simply as a “woman with blond hair looking happy,” but when analyzed by ChatGPT, the chatbot identified the woman as wearing a dark blue shirt and taking a selfie in a full-length mirror. Mosen could then ask follow-up questions about the shoes she was wearing and other details visible in the image.
OpenAI’s decision to obscure people’s faces within the visual analysis tool is a response to privacy concerns. While the technology can primarily identify public figures, such as those with Wikipedia pages, it does not match the comprehensive capabilities of controversial tools like Clearview AI and PimEyes, which are designed for widespread facial recognition. Making facial recognition publicly available would challenge the accepted practices of U.S. technology companies and could raise legal issues in jurisdictions with biometric information consent requirements. (OpenAI and Microsoft are already facing a lawsuit under Illinois’s Biometric Information Privacy Act (BIPA) concerning their collection of biometric data from the internet for AI training.)
Another concern for OpenAI is the potential for the tool to make inappropriate or inaccurate assessments about people’s faces, including gender or emotional state. OpenAI is actively working to address these safety concerns and seeks public input for responsible deployment.
OpenAI acknowledges that the development of visual analysis was expected, given that the model was trained on images and text collected from the internet. The company also recognizes that similar facial recognition software exists, such as Google’s tool, which offers an opt-out option for well-known individuals, and it is considering similar approaches to protect privacy.
Beyond privacy and misreading concerns, OpenAI must also grapple with “hallucinations” — the false assertions that AI tools like ChatGPT have been known to produce from time to time. These have also cropped up in GPT-4’s visual analysis efforts, with the system assigning the wrong name to a famous tech CEO, or telling the visually impaired Mosen that there were buttons on a remote control that weren’t actually there.
As for OpenAI’s major investor, Microsoft, the company has implemented a face-blurring tool to prevent its Bing chatbot, which runs on OpenAI technology, from identifying people in photos that users upload to the platform.
Source: The New York Times
–
July 18, 2023 – by the FindBiometrics Editorial Team