EU AI Act: Pioneering responsible artificial intelligence


The EU Artificial Intelligence Act is the world's first binding horizontal regulation on AI. It sets new rules for how AI systems are used and managed.


As the world's first binding horizontal regulation on artificial intelligence (AI), the EU Artificial Intelligence Act creates a common framework for developing, supplying, and using AI systems across the European Union (EU). The legislation entered into force on 1 August 2024, and with key compliance deadlines approaching in 2025, understanding its provisions is vital for businesses and organisations.

What is the EU AI Act?

The EU AI Act creates a risk-based system to regulate AI. It divides AI systems into four risk levels: unacceptable, high, limited, and minimal. This approach ensures that higher-risk applications face stricter requirements while lower-risk systems can operate with fewer restrictions.

Key risk categories

  • Unacceptable risk: AI systems that pose severe threats to safety or fundamental rights, such as those enabling social scoring or behavioural manipulation, are strictly prohibited.

  • High risk: Systems used in critical areas like employment, healthcare, and law enforcement must comply with stringent requirements for transparency, accuracy, and oversight.

  • Limited risk: Systems with moderate risk levels, such as AI chatbots, must disclose to users that they are interacting with an AI system.

  • Minimal risk: Common systems like spam filters are not subject to additional regulations under the Act.

Building on this risk-based approach, the AI Act also addresses General-Purpose AI (GPAI) models, such as OpenAI's GPT-4 and Google's Gemini, which have broad applications across domains and present distinct challenges. To regulate these systems effectively, the Act introduces a two-tier framework that distinguishes standard GPAI models from those posing systemic risk, with tailored obligations for each tier.
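
To make the tiering concrete, here is a minimal Python sketch of the four-tier classification. It is purely illustrative: the use-case-to-tier mapping and all names are assumptions made for this example, and real classification under the Act depends on its annexes and proper legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict compliance obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations under the Act

# Hypothetical mapping of example use-cases to tiers, for illustration only;
# actual classification requires legal review against the Act's annexes.
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a known use-case."""
    if use_case not in EXAMPLE_USE_CASES:
        raise ValueError(f"Unmapped use-case {use_case!r}: needs legal review")
    return EXAMPLE_USE_CASES[use_case]

for case in EXAMPLE_USE_CASES:
    print(f"{case}: {triage(case).value} risk")
```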

 

Timeline of implementation

February 2025

The Act's prohibitions on unacceptable-risk AI practices, such as social scoring and real-time remote biometric identification, become enforceable. Additionally, companies using AI must ensure their employees have adequate AI literacy.

August 2025

Rules regarding general-purpose AI models, transparency requirements, penalties, and governance systems take effect.

August 2026

High-risk and limited-risk AI systems must fully comply with the regulations.

August 2027

Final provisions, including those for AI systems integrated into regulated products, become enforceable.

 



Why the EU AI Act matters

The Act builds on the success of the General Data Protection Regulation (GDPR), which set global standards for data privacy. Similarly, the EU AI Act aims to ensure AI development aligns with European values, including human dignity, democracy, and non-discrimination. Its extraterritorial scope requires companies outside the EU whose AI systems are used within the EU to comply with the regulation. Note that the GDPR and the EU AI Act are complementary: they address different aspects of data and AI governance.

Implications for businesses

AI Providers

Entities developing high-risk AI systems must proactively ensure compliance, quality, and transparency. Here are the key obligations:

  • Conformity assessments: Providers must conduct thorough assessments to confirm their systems meet regulatory standards.

  • Technical documentation & monitoring: Maintaining detailed records and continuously monitoring system performance are mandatory.

  • Quality management: Providers must establish robust quality management systems and post-market monitoring processes.

  • Transparency & usability: AI systems must come with clear instructions for use and be transparent to deployers.

  • Human oversight: Measures to ensure human oversight, accuracy, robustness, and cybersecurity must be in place.

Providers are also required to:

  • Take corrective actions if their systems are found to be non-compliant.

  • Provide requested information to authorities.

  • Ensure accessibility requirements are met for high-risk systems.

Providers must repeat the conformity assessment process if substantial modifications are made to a high-risk AI system.

Special obligations for general-purpose AI providers

Providers of general-purpose AI models face additional responsibilities:

  • Maintain technical documentation and collaborate with high-risk AI system providers.

  • Address systemic risks through model evaluations, risk mitigation strategies, and incident reporting.

AI Deployers

Organisations deploying AI systems, especially high-risk ones, are subject to strict operational and ethical guidelines:

  • Impact assessments: Deployers must evaluate the system's potential impact on fundamental rights, especially for public-facing or sensitive applications.

  • Usage logs: Maintaining detailed logs of system operations is essential for accountability (see the sketch after this section).

  • Human oversight: Assign qualified personnel to oversee AI operations and ensure proper use as per the provider’s instructions.

  • Transparency: Inform individuals when high-risk AI systems affect them directly and submit reports to market surveillance authorities when required.

  • Data relevance: Ensure input data used in AI systems is appropriate and accurate.

Additional responsibilities

  • Modifications to AI use: If a deployer repurposes a non-high-risk system into a high-risk one, they assume the obligations of a provider.

  • Worker notifications: Employers using high-risk AI systems must inform employees and their representatives before implementation.
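
As a concrete illustration of the record-keeping that the usage-log and accountability obligations point towards, below is a minimal Python sketch of an append-only audit log for AI system invocations. The schema, field names, and file format are assumptions made for this example; the Act does not prescribe any particular logging format.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class AIUsageRecord:
    """One audit-log entry for an AI system invocation (illustrative schema)."""
    record_id: str
    timestamp: float
    system_name: str
    purpose: str
    human_reviewer: str  # person responsible for oversight of this use
    input_summary: str   # summary only: avoid logging raw personal data

def log_usage(record: AIUsageRecord, path: str = "ai_usage_log.jsonl") -> None:
    """Append the record to a JSON Lines audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_usage(AIUsageRecord(
    record_id=str(uuid.uuid4()),
    timestamp=time.time(),
    system_name="cv-screening-model",
    purpose="shortlisting job applicants",
    human_reviewer="hr.lead@example.com",
    input_summary="application batch 2025-06-01",
))
```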

Penalties for non-compliance


Fines depend on the type of violation and the entity involved:

  • Prohibited AI practices (e.g., manipulative techniques, biometric surveillance) can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher.

  • General violations (e.g., failing to meet risk management or transparency requirements) carry fines of up to €15 million or 3% of turnover, whichever is higher.

  • Providing false or misleading information may incur fines of up to €7.5 million or 1% of turnover, whichever is higher.

  • GPAI providers face fines of up to €15 million or 3% of turnover for non-compliance.

  • EU institutions and agencies are subject to lower maximum fines of up to €1.5 million.

  • For SMEs and startups, each cap applies at the lower of the percentage or the fixed amount (see the worked example below).
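
To show how these dual caps interact with company size, here is a minimal sketch, assuming the Act's "whichever is higher" rule for standard undertakings and "whichever is lower" rule for SMEs. The function name and the turnover figures are illustrative only; actual fines are set case by case by regulators.

```python
def fine_cap(fixed_eur: float, turnover_pct: float,
             annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum possible fine under the Act's dual cap.

    Standard undertakings: whichever of the two amounts is HIGHER.
    SMEs and startups: whichever is LOWER.
    """
    pct_amount = turnover_pct * annual_turnover_eur
    return min(fixed_eur, pct_amount) if is_sme else max(fixed_eur, pct_amount)

# Prohibited-practice violation (EUR 35M / 7% cap) by a firm with
# EUR 1 billion worldwide annual turnover: cap is EUR 70 million.
print(fine_cap(35_000_000, 0.07, 1_000_000_000))
# The same violation by an SME with EUR 10 million turnover: cap is EUR 700,000.
print(fine_cap(35_000_000, 0.07, 10_000_000, is_sme=True))
```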

Opportunities for innovation


While the Act imposes new obligations, it also supports innovation:

  • AI regulatory sandboxes: Controlled environments allow companies to develop, train, test, and validate AI systems under regulatory supervision before bringing them to market.

  • Support for startups and SMEs: Tailored guidance and reduced regulatory burdens aim to foster innovation among smaller enterprises.

  • Trust-building: The Act promotes transparency and ethical practices, enhancing public trust in AI.

  • Level playing field: A single, harmonised framework applies to all actors across the EU, promoting cross-border trade in AI-based goods and services.

Challenges and criticisms

  • Market barriers: There is growing concern that the Act's stringent requirements may discourage non-EU AI companies, wary of high compliance costs and regulatory complexity, from entering the European market. This could limit competition and slow innovation within the region.

  • Innovation concerns: Some industry leaders fear that the stringent regulations may hinder the competitiveness of European startups compared to those in less-regulated regions.

  • Enforcement complexity: The rapid pace of AI development poses challenges for monitoring and ensuring compliance, leading to concerns about regulatory bottlenecks and inconsistent application across member states.

Despite these challenges, the EU AI Act is widely regarded as a critical step towards building a more trustworthy and ethical AI ecosystem.

 

How to prepare for compliance in 2025


With key deadlines approaching, businesses must take proactive measures to align with the Act:

  • AI literacy: Foster AI literacy within your organisation to ensure that all employees, particularly those involved in developing, deploying, or monitoring AI systems, have a clear understanding of the EU AI Act's requirements.

  • Assess AI systems: Determine whether your AI systems fall into high-risk categories and understand the associated obligations.

  • Implement transparency: Ensure users are informed when interacting with AI systems.

  • Establish monitoring protocols: Develop frameworks to continuously evaluate and mitigate risks throughout the AI system’s lifecycle.

Conclusion

The EU AI Act is a transformative milestone for AI governance. As the regulation takes effect, 2025 will be a pivotal year for organisations to adapt and embrace responsible AI practices. By ensuring compliance, businesses can mitigate risks, build public trust, and gain a competitive edge in the rapidly evolving AI landscape.

For more insights on the EU AI Act and its implications, stay updated with reliable resources and expert commentary.

FAQ

What is the EU AI Act?
The EU AI Act is a regulatory framework designed to ensure that AI technologies are safe, ethical, and trustworthy. It introduces a risk-based classification system, with stricter rules for high-risk applications such as biometric identification and healthcare technologies.

Is the EU AI Act passed?
Yes, the EU AI Act officially entered into force on 1 August 2024. It sets a global benchmark for regulating artificial intelligence, focusing on safety, transparency, and accountability.

Who does the AI Act apply to?
The EU AI Act applies to any organisation that develops, deploys, or uses AI systems within the EU, regardless of where the organisation is based. It covers providers, deployers, importers, and distributors of AI technologies.

What is the difference between the GDPR and the AI Act?
The GDPR focuses on personal data protection and privacy, while the AI Act regulates the development and deployment of AI systems to ensure safety and transparency. Together, they form a comprehensive framework for digital rights and responsible technology use within the EU.