ETH Zurich, INSAIT, and LatticeFlow AI have launched COMPL-AI, the first compliance evaluation framework for generative AI models under the EU AI Act. The framework provides a technical interpretation of the Act, converting regulatory requirements into actionable technical criteria. It also offers an open-source tool for assessing large language models (LLMs) against these requirements.
ETH Zurich is a renowned Swiss university specializing in science, technology, engineering, and mathematics, often recognized for its cutting-edge research and innovation. INSAIT, the Institute for Computer Science, Artificial Intelligence, and Technology, is a research institute based in Sofia, Bulgaria, created in partnership with ETH Zurich and EPFL in Lausanne and focused on advancing AI and computer science.
LatticeFlow AI, meanwhile, is a tech company specializing in AI safety and robustness, developing tools to ensure AI models meet regulatory and ethical standards.
Through COMPL-AI, the group has conducted a compliance-centered evaluation of foundation models from prominent organizations, including OpenAI, Meta, Google, Anthropic, and Alibaba.
The EU AI Act is a significant regulatory effort intended to shape AI standards globally. However, it lacks detailed technical guidelines, creating a need for frameworks like COMPL-AI to bridge this gap.
Thomas Regnier from the European Commission praised the framework. “The European Commission welcomes this study and AI model evaluation platform as a first step in translating the EU AI Act into technical requirements, helping AI model providers implement the AI Act,” he said.
COMPL-AI, built upon 27 benchmarks, invites collaboration from AI researchers and practitioners to further refine and expand the framework. The initial evaluations indicate that while many models excel in mitigating harmful content and toxicity, there are notable gaps in cybersecurity and fairness, with some models scoring only around 50 percent on these metrics.
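To make the benchmark-based approach concrete, the sketch below shows how per-benchmark scores might be rolled up into category-level compliance scores of the kind described above (e.g. harmful content, cybersecurity, fairness). The function name, benchmark names, categories, and 50-percent threshold are purely illustrative assumptions, not COMPL-AI's actual implementation.

```python
# Hypothetical sketch: averaging benchmark scores (0..1) within each
# compliance category, in the spirit of a benchmark-based evaluation.
# All names and the 0.5 threshold are illustrative, not COMPL-AI's code.
from collections import defaultdict

def aggregate_scores(results: dict[str, tuple[str, float]]) -> dict[str, float]:
    """Map {benchmark: (category, score)} to per-category mean scores."""
    buckets = defaultdict(list)
    for _benchmark, (category, score) in results.items():
        buckets[category].append(score)
    return {cat: sum(scores) / len(scores) for cat, scores in buckets.items()}

# Illustrative (made-up) results for one model:
results = {
    "toxicity_probe":  ("harmful_content", 0.92),
    "jailbreak_suite": ("cybersecurity",   0.48),
    "bias_qa":         ("fairness",        0.55),
}
for category, mean in sorted(aggregate_scores(results).items()):
    status = "OK" if mean >= 0.5 else "GAP"
    print(f"{category}: {mean:.2f} [{status}]")
```

A roll-up like this is what allows headline findings such as "around 50 percent on cybersecurity" to be stated per category rather than per individual benchmark.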
While COMPL-AI is specifically designed for generative AI models, the approach and methodology it uses could still provide valuable insights for developers of facial recognition and other biometric technologies, which are also subject to the AI Act.
For example, facial recognition technologies face similar regulatory concerns as generative AI in areas like data privacy, accuracy, fairness, and cybersecurity. COMPL-AI’s structure—an open-source framework with specific benchmarks—could inspire or inform the creation of a comparable tool for facial recognition. Such a tool could help developers assess compliance with aspects of the EU AI Act that are relevant to biometric technologies, like ensuring non-discrimination and safeguarding personal data.
As regulatory standards evolve, we may see similar tools specifically crafted for assessing biometric technologies to ensure they align with the EU AI Act and other regulations.
Source: The Recursive
October 17, 2024 – by Cass Kennedy