Tuesday, October 22, 2024

GPT-4 Turbo leads EU’s AI compliance landscape

  • Overall shortcomings highlight significant work ahead for the AI community in meeting the ethical and regulatory expectations outlined by the EU.
  • Study underscores the necessity for ongoing dialogue between AI developers and regulatory frameworks to ensure the responsible development of AI technologies.

The rapid advancement of artificial intelligence (AI) technologies has prompted regulatory bodies, particularly in Europe, to establish frameworks governing the ethical deployment of AI systems.

A recent study by researchers from ETH Zurich, the Bulgarian AI research institute INSAIT, and the ETH spin-off LatticeFlow AI evaluated the compliance of twelve leading large language models (LLMs) with the European Union’s AI regulations, specifically the EU AI Act.

The findings indicate that while OpenAI’s GPT-4 Turbo is the frontrunner in compliance, it still falls short of full adherence to these vital regulations.

The researchers developed a tool named COMPL-AI, which serves as a compliance checker through a series of benchmarks aimed at quantifying how well AI models align with EU standards.

Evaluation criteria

Central to this assessment are six ethical principles established in the EU AI Act: human agency, data protection, transparency, diversity, non-discrimination, and fairness.

From these principles, the study derives twelve concrete technical requirements, which are in turn assessed through twenty-seven evaluation criteria.

The framework aims to translate the abstract legal language of the EU AI Act into measurable and verifiable standards for AI development.
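To make this hierarchy concrete, the mapping from principles to requirements to measurable criteria can be pictured with a small sketch. This is purely illustrative: the benchmark names and scoring logic below are hypothetical, not the actual COMPL-AI implementation (which is published on GitHub).

```python
# Illustrative sketch only -- hypothetical names, not the real COMPL-AI code.
from statistics import mean

# A tiny slice of the principle -> requirement -> criterion hierarchy
# the study describes (6 principles, 12 requirements, 27 criteria).
BENCHMARKS = {
    "transparency": {
        "traceability": ["logging_disclosure", "training_data_disclosure"],
    },
    "non-discrimination": {
        "absence_of_bias": ["bias_qa", "stereotype_probe"],
    },
}

def score_model(criterion_scores: dict) -> dict:
    """Aggregate per-criterion scores (0..1) into per-principle scores
    by averaging over each principle's requirements and criteria."""
    result = {}
    for principle, requirements in BENCHMARKS.items():
        req_scores = [
            mean(criterion_scores[c] for c in criteria)
            for criteria in requirements.values()
        ]
        result[principle] = mean(req_scores)
    return result

scores = score_model({
    "logging_disclosure": 0.9,
    "training_data_disclosure": 0.7,
    "bias_qa": 0.5,
    "stereotype_probe": 0.6,
})
print(scores)  # one aggregate score per principle
```

Aggregating in this bottom-up way is what lets an abstract legal principle such as transparency be reported as a single verifiable number per model.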

The analysis revealed that the twelve models, including well-known systems such as ChatGPT, Claude, Mistral and Llama, exhibited varied levels of compliance.

Notably, while some demonstrated adherence to data protection regulations, significant deficiencies emerged in other areas, particularly regarding diversity, non-discrimination and fairness.

COMPL-AI available as open-source

As co-author Robin Staab remarked, the evaluation illuminated critical shortcomings in the models, emphasising the need for improvement in robustness and social equity.

Furthermore, the concept of explainability—a fundamental aspect of ethical AI—remains inadequately addressed, suggesting a tendency among developers to prioritise model performance over ethical considerations.

Martin Vechev, a professor at ETH and a founding member of INSAIT, articulated that although the EU AI Act signifies a movement towards responsible AI, clarity in the technical interpretation of its regulations has been lacking until now.

The researchers presented their findings to the EU AI Office and made COMPL-AI available as an open-source resource on GitHub, inviting further collaboration and development.

The European Commission welcomed these initiatives as a constructive step towards translating the EU AI Act into actionable technical requirements.

As the Act is gradually implemented—having come into force in August 2024, with strict enforcement for high-risk AI models delayed for two years—developers are urged to align their technologies with these essential standards.


