The Biometrics Institute has released a report titled “Members’ Viewpoints: The Relationship Between Biometrics and Artificial Intelligence (AI)”, addressing the complex and often contentious interplay between these two rapidly advancing technologies. Based on consultations throughout 2024, the report highlights starkly differing opinions within the global biometrics community regarding whether biometrics and AI should be seen as inherently linked or as distinct technological fields.
“Rarely has the biometrics community disagreed on an issue at this level before,” said Isabelle Moeller, CEO of the Biometrics Institute. “This paper reflects the conflicting perspectives of our global community on an evolving topic that is critical to biometric success. Understanding the relationship between biometrics and AI is essential for responsible innovation and the development of ethical guidelines for their use.”
Key findings include challenges in defining biometrics and AI, as there are no universal definitions. While ISO and other entities offer technical definitions, public and media discourse often conflates or oversimplifies the terms. The Institute has responded by updating its Explanatory Dictionary of Biometrics to provide clarity, aiming to bridge the gap between technical definitions and public understanding.
The Interaction Between Biometrics and AI
The report outlines several ways in which AI interacts with biometrics, both positively and negatively. AI is often employed in biometric systems for tasks such as data processing, decision-making, quality assurance, and security enhancements. For instance, AI can streamline the fusion of multiple biometric inputs or aid in detecting and countering presentation attacks. However, the pervasive integration of AI also introduces risks, including vulnerabilities to cyberattacks and exploitation through tools like generative AI.
Some members of the Institute view biometrics as inherently intertwined with AI, suggesting that all biometric processes involving machine learning or neural networks should be classified as AI. Others emphasize that certain biometric applications operate independently of AI, relying instead on traditional algorithms or human intervention. The inclusion of AI within a biometric system often depends on the use case and not necessarily the biometric modality itself. For example, systems using facial recognition for access control may incorporate AI, while forensic fingerprint analysis might still depend heavily on human operators.
This nuanced interaction raises important questions about risk assessment and the ability to distinguish AI components from other system elements. As AI continues to be integrated into broader technological ecosystems, the report highlights the growing difficulty in isolating AI from non-AI components within a given system.
Definitions and Public Perception
A major theme of the report is the ongoing struggle to define biometrics and AI in ways that are accessible to both experts and the general public. Existing definitions, such as those from ISO, are often seen as too technical or misaligned with one another. Public and media narratives further complicate the issue, frequently conflating biometrics and AI or using terms like “face recognition” interchangeably with “AI.”
The Biometrics Institute’s Explanatory Dictionary of Biometrics aims to address this gap by offering definitions that capture both formal meanings and common perceptions. The updated dictionary includes a new entry for AI that reflects its evolving role within biometrics and other technologies. By contextualizing these definitions, the Institute hopes to demystify these terms and improve public understanding, which is often shaped by fragmented and sometimes misleading media portrayals.
Regulatory Challenges
The report also highlights the complexities of regulating biometrics and AI, particularly when the two are conflated in legal frameworks. For instance, the EU AI Act treats live and remote biometric surveillance under the broader umbrella of AI, potentially leading to overly broad regulations that may stifle innovation. Members expressed concern that such regulations could inadvertently constrain the development of beneficial biometric technologies, such as advanced security measures and identity verification systems.
The report underscores the need for balanced legislation that distinguishes between biometrics and AI while addressing their unique risks. Misaligned regulations, such as those equating all remote biometric surveillance with AI, could have unintended consequences for industries relying on these technologies. The Biometrics Institute advocates for policies that protect civil liberties without hindering technological progress, emphasizing the importance of tailoring regulatory approaches to specific applications.
Broader Implications and Future Discussions
The findings in this report contribute to ongoing debates about the ethical and practical implications of AI in biometrics. The Biometrics Institute’s research stresses the importance of responsible innovation and public trust. Members called for further exploration of the interplay between AI and biometrics at upcoming events, including the Asia-Pacific Conference in Sydney in May and the Impact of AI on Biometric Vulnerabilities Workshop in New York in June.
The full paper is publicly available on the Biometrics Institute’s website, offering in-depth insights into these critical issues. Policymakers, industry stakeholders, and the general public are encouraged to engage with the findings to better understand the challenges and opportunities presented by the convergence of biometrics and AI.
Source: Biometrics Institute
January 23, 2025 – by Cass Kennedy and Alex Perala