A coalition of more than two dozen civil society organizations and individuals has called on the European Union to prioritize human rights in upcoming guidelines for implementing the EU AI Act. The guidelines, to be issued by the newly established AI Office, will help interpret the Act’s scope and prohibited practices following its adoption in 2024.
The AI Act establishes definitions for artificial intelligence systems and outlines prohibited practices, including remote biometric identification, social scoring, predictive policing, and emotion recognition. This follows the EU’s extensive consultation process on defining AI systems and their prohibited uses. The coalition emphasizes that simpler AI systems must also fall under the Act’s scope, to close potential loopholes for systems that would otherwise escape regulation merely because they are technically simple.
The statement’s signatories include prominent organizations such as Amnesty International, Privacy International, Access Now, and Statewatch, along with various academic experts. These stakeholders emphasize that the EU Charter of Fundamental Rights should serve as the central guiding basis for the Act’s implementation, particularly in light of growing concerns about biometric surveillance and predictive policing technologies.
The organizations advocate for a broad interpretation of the Act’s prohibitions to prevent various forms of harm, including discrimination, racism, and prejudice. They specifically highlight the importance of clearly defining and robustly enforcing restrictions on social scoring and biometric surveillance across multiple contexts, including welfare, migration, education, and law enforcement. This stance builds on recent European Court of Justice rulings that have emphasized strict justification requirements for the collection of biometric data by authorities.
The coalition’s recommendations include expanding the definition of biometric categorization to encompass inferences about ethnicity, gender identity, and other personal characteristics. They also emphasize the need to address loopholes that might permit retrospective remote biometric identification or emotion recognition systems, a concern informed by recent research demonstrating both the capabilities and limitations of emotion recognition technologies.
Regarding the consultation process, participating organizations have expressed concerns about transparency, limited timeframes, and insufficient inclusion of diverse perspectives. They recommend more comprehensive involvement of civil society stakeholders in future AI Act-related consultations, building on the framework established by COMPL-AI, the first compliance evaluation framework for Generative AI models under the Act.
Sources: Statewatch
January 16, 2025 – by the ID Tech Editorial Team