The European Union has launched a consultation seeking input on key aspects of its artificial intelligence regulations, particularly focusing on the definition of AI systems and banned uses of AI. The consultation, running until December 11, 2024, is part of the EU’s broader effort to develop compliance guidance for its new AI law, following the landmark agreement reached last December.
The EU Artificial Intelligence Act (AI Act), which entered into force on August 1, 2024, will be implemented gradually over a 36-month period. The Act introduces a comprehensive regulatory framework affecting businesses worldwide across various sectors, with the goal of promoting human-centric and trustworthy AI while maintaining high safety standards and protecting fundamental rights.
Under the AI Act, AI systems are classified into four risk levels: unacceptable risk, high risk, specific transparency risk, and minimal risk. High-risk systems include safety-critical applications in critical infrastructure, employment, law enforcement, and judicial processes. Systems in the specific transparency risk category, such as chatbots and digital assistants, must meet dedicated transparency requirements. This tiered approach builds on earlier EU initiatives in digital regulation, including the GDPR and the Digital Services Act.
Starting August 2, 2026, providers must inform users when they are interacting with AI systems, unless this is obvious from the context. Additional transparency obligations apply to systems involving emotion recognition, biometric categorization, and deepfakes, reflecting growing concerns about AI-powered manipulation of media and personal data.
The Act explicitly prohibits certain AI applications deemed to pose unacceptable risks, including China-style social scoring systems and unrestricted facial recognition in public spaces. The current consultation seeks detailed feedback on these banned uses, with the European Commission planning to publish guidance on the definition of AI systems and prohibited applications in early 2025. The consultation also follows the release of COMPL-AI, the first compliance evaluation framework for generative AI models under the Act.
For implementation, EU Member States must establish or designate three types of authorities: market surveillance authorities, notifying authorities, and national public authorities responsible for protecting fundamental rights. Member States have flexibility in structuring these authorities, as demonstrated by Spain's centralized approach and Finland's proposed decentralized model. This flexibility aims to accommodate varying national regulatory frameworks while ensuring consistent enforcement across the EU.
The consultation welcomes input from stakeholders across the AI industry, business sector, academia, and civil society. Participants can provide feedback on the clarity of the Act’s key definitions and suggest examples of software that should be excluded from its scope.
Source: TechCrunch
November 13, 2024 – by the ID Tech Editorial Team