Microsoft has strengthened its restrictions on the use of its Azure OpenAI Service for facial recognition by law enforcement, with the updated terms of service explicitly banning US police departments from using the service's integrations for any form of facial recognition. The move also adds a global restriction on using "real-time facial recognition technology" with mobile cameras.
The policy shift comes as the use of generative AI in law enforcement faces increasing scrutiny. For example, Axon, a tech-focused weapons company, recently announced a product that leverages OpenAI’s powerful GPT-4 model to generate summaries from body camera footage, fuelling concerns about the technology’s use.
Critics highlight the potential for AI-generated reports to introduce inaccuracies and biases, especially given racial disparities in policing practices.
While focused on US police, Microsoft's ban leaves room for international law enforcement agencies to potentially use the service for facial recognition. Additionally, the restrictions don't explicitly rule out the use by US police of stationary cameras in controlled environments. This aligns with an evolving stance on AI applications in defense and law enforcement at both Microsoft and its close partner OpenAI.
OpenAI has reportedly begun working with the Pentagon on AI capabilities, and Microsoft has proposed leveraging its DALL-E image generation tool for military operations. This highlights a shifting landscape regarding AI ethics and deployment by major technology companies.
Azure OpenAI Service, which is designed with government compliance features, was made available on Microsoft's Azure Government platform in February, further signalling the trend.
Source: TechCrunch
—
May 3, 2024 — by Ali Nassar-Smith