About the EU AI Act

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Adopted by the European Parliament and Council in 2024, it establishes harmonised rules for the development, placement on the market, putting into service, and use of AI systems within the European Union.

This is not a soft-law guideline or a set of voluntary principles. It is a binding regulation with direct effect across all 27 EU member states, carrying enforcement mechanisms that include fines of up to 35 million EUR or 7% of global annual turnover.

What Is the EU AI Act

The EU AI Act is a regulation — the strongest form of EU legislation. Unlike a directive, which must be transposed into national law by each member state, a regulation applies directly and uniformly across the entire EU from the moment it enters into force.

The regulation's formal citation is Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).

It was published in the Official Journal of the European Union on July 12, 2024, and entered into force on August 1, 2024. Its obligations phase in over roughly three years: the prohibitions applied from February 2, 2025, most provisions apply from August 2, 2026, and the remaining obligations for high-risk systems embedded in Annex I regulated products apply from August 2, 2027.

History and Legislative Process

The AI Act has its origins in the European Commission's broader strategy on AI, which began taking shape in 2018.

April 2018 — The Commission publishes its Communication on "Artificial Intelligence for Europe," setting out the EU's approach to AI.

April 2019 — The High-Level Expert Group on AI publishes its Ethics Guidelines for Trustworthy AI, establishing the principles of lawful, ethical, and robust AI that would influence the regulation.

February 2020 — The Commission publishes the White Paper on AI, launching a public consultation on the regulatory framework for AI. It receives over 1,200 responses.

April 21, 2021 — The Commission adopts its legislative proposal for the AI Act (COM/2021/206). This original proposal focuses on a risk-based approach with four tiers of risk.

December 2022 — The Council of the EU (representing member state governments) adopts its general approach, introducing significant amendments including provisions on general-purpose AI and biometric identification.

June 14, 2023 — The European Parliament adopts its negotiating position, adding further amendments including broader prohibitions on biometric identification and extensive provisions on general-purpose AI models.

December 8, 2023 — After marathon trilogue negotiations between the Parliament, Council, and Commission, a political agreement is reached on the final text. The negotiations lasted over 36 hours in the final session.

March 13, 2024 — The European Parliament formally adopts the AI Act with 523 votes in favour, 46 against, and 49 abstentions.

May 21, 2024 — The Council formally adopts the AI Act.

July 12, 2024 — The AI Act is published in the Official Journal of the European Union.

August 1, 2024 — The AI Act enters into force.

The legislative process took over three years from proposal to adoption — reflecting both the complexity of the subject matter and the intense political negotiations involved, particularly around biometric identification, law enforcement uses of AI, and the regulation of general-purpose AI models.

Purpose and Objectives

The AI Act pursues several interconnected objectives:

Protect fundamental rights. The regulation is grounded in the EU Charter of Fundamental Rights. It aims to prevent AI systems from undermining human dignity, privacy, non-discrimination, freedom of expression, and other rights protected under EU law.

Ensure safety. AI systems should not pose unacceptable risks to health, safety, or the environment. The regulation applies the EU's established product safety framework to AI.

Foster innovation. The regulation seeks to provide legal certainty for AI developers and deployers, creating a predictable regulatory environment that supports innovation. Regulatory sandboxes, SME provisions, and proportionate requirements for lower-risk systems reflect this objective.

Establish a single market. By creating harmonised rules across all member states, the AI Act prevents fragmentation of the internal market. Instead of 27 different national AI regulations, there is one uniform framework.

Build trust. The regulation aims to increase public trust in AI by ensuring that AI systems placed on the EU market meet baseline requirements for transparency, safety, and accountability.

Scope and Territorial Application

Who Does It Apply To

The AI Act applies to:

  • Providers who place AI systems on the EU market or put them into service in the EU, regardless of whether they are established in the EU or a third country
  • Deployers of AI systems who are located within the EU
  • Providers and deployers located in third countries, where the output produced by the AI system is used in the EU
  • Importers and distributors of AI systems
  • Product manufacturers who place AI systems on the market as part of their products

The extraterritorial reach is significant. A company headquartered in the United States, China, or anywhere else that places an AI system on the EU market or whose AI system produces outputs used in the EU is subject to the regulation.

What Is Excluded

The AI Act does not apply to:

  • AI systems used exclusively for military, defence, or national security purposes
  • AI systems used by public authorities in third countries or international organisations for law enforcement or judicial cooperation, subject to certain conditions
  • AI research and development activities before a system is placed on the market or put into service (provided testing does not affect individuals)
  • Individuals using AI purely for personal, non-professional purposes
  • AI systems released under free and open-source licences, unless they are high-risk, prohibited, or subject to the transparency obligations of Article 50

Key Definitions

Understanding the AI Act requires precision about its terminology.

AI system (Article 3(1)): A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Provider (Article 3(3)): A natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark.

Deployer (Article 3(4)): A natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

General-purpose AI model (Article 3(63)): An AI model, including where such a model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks.

High-risk AI system: An AI system that falls within one of the categories specified in Annex I (safety components of regulated products) or Annex III (standalone high-risk uses), subject to certain exceptions.

Operator: An umbrella term covering providers, deployers, authorised representatives, importers, and distributors.

The Risk-Based Approach

The cornerstone of the AI Act is its classification of AI systems into risk categories, with obligations proportionate to the level of risk.

Unacceptable Risk (Prohibited Practices — Article 5)

Certain AI practices are outright banned because they pose an unacceptable risk to fundamental rights. These include:

  • AI systems using subliminal, manipulative, or deceptive techniques to distort behaviour
  • AI systems exploiting vulnerabilities of specific groups (age, disability, social or economic situation)
  • Social scoring that leads to detrimental or unfavourable treatment (the final text covers both public and private actors)
  • Individual predictive policing based solely on profiling
  • Untargeted scraping of facial images from the internet or CCTV for facial recognition databases
  • Emotion recognition in workplaces and educational institutions (with narrow exceptions)
  • Biometric categorisation to infer sensitive characteristics (race, political opinions, sexual orientation, etc.)
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)

These prohibitions took effect on February 2, 2025.

High Risk (Articles 6-49)

AI systems classified as high-risk face the most comprehensive obligations. High-risk systems include:

  • Safety components of products regulated under existing EU harmonisation legislation (Annex I)
  • AI systems in specified domains (Annex III): biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice

Providers of high-risk AI systems must implement risk management systems, ensure data governance, prepare technical documentation, enable logging and traceability, provide transparency and instructions, design for human oversight, and achieve appropriate accuracy and robustness. They must also undergo conformity assessment before placing systems on the market.

Limited Risk (Article 50 — Transparency Obligations)

Certain AI systems carry transparency obligations regardless of whether they are high-risk:

  • AI systems interacting with natural persons (chatbots) must disclose that the person is interacting with AI
  • AI-generated or manipulated content (deepfakes) must be labelled as artificially generated or manipulated
  • Emotion recognition and biometric categorisation systems must inform individuals that such systems are in operation
  • AI-generated text published to inform the public on matters of public interest must be labelled as AI-generated

Minimal Risk

AI systems that do not fall into the above categories face no mandatory obligations under the AI Act. The regulation encourages voluntary adherence to codes of conduct but does not impose specific requirements.
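The tiering above can be sketched as a simple decision procedure. This is an illustration only, not the Act's legal tests: the input flags and the strict cascade are assumptions (in reality, for example, Article 50 transparency duties can apply on top of a high-risk classification, and Annex III carries carve-outs under Article 6(3)).

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Articles 6-49)"
    LIMITED = "transparency obligations (Article 50)"
    MINIMAL = "no mandatory obligations"

def classify(uses_prohibited_practice: bool,
             annex_i_safety_component: bool,
             annex_iii_use_case: bool,
             interacts_or_generates_content: bool) -> RiskTier:
    """Toy first-pass triage of an AI system into AI Act risk tiers.

    A real assessment requires legal analysis of Articles 5 and 6
    and Annexes I and III; these boolean flags are a simplification.
    """
    if uses_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if annex_i_safety_component or annex_iii_use_case:
        return RiskTier.HIGH
    if interacts_or_generates_content:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A CV-screening tool falls under Annex III (employment):
print(classify(False, False, True, True).value)  # high-risk (Articles 6-49)
```

The ordering matters: prohibition trumps everything, and the high-risk annexes are checked before the transparency tier, mirroring how obligations escalate in the regulation.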

Understand your AI risk classification

Ctrl AI helps organisations map their AI systems to EU AI Act risk categories — with automated documentation, execution traces, and trust-tagged outputs that demonstrate compliance.

Learn About Ctrl AI

Governance Structure

The AI Office

Housed within the European Commission's Directorate-General for Communications Networks, Content and Technology (DG CNECT), the AI Office has a central role in:

  • Enforcing rules for general-purpose AI models (this is done at EU level, not by national authorities)
  • Developing guidelines, codes of practice, and delegated acts
  • Coordinating with national authorities through the AI Board
  • Monitoring implementation and compliance across the EU
  • International cooperation on AI governance

The European Artificial Intelligence Board

Composed of one representative from each member state, the AI Board:

  • Advises and assists the Commission and member states in consistent application of the regulation
  • Issues recommendations on matters related to AI Act implementation
  • Collects and shares best practices among member states
  • Contributes to uniform administrative practices in member states

National Competent Authorities

Each member state must designate at least one notifying authority and at least one market surveillance authority. These authorities are responsible for:

  • Market surveillance and enforcement at the national level
  • Registering and overseeing notified bodies (for conformity assessment)
  • Receiving and investigating complaints
  • Imposing penalties for non-compliance

National Market Surveillance Authorities

Market surveillance authorities are responsible for ensuring that AI systems on the market comply with the regulation. They can:

  • Request access to documentation, source code, and data
  • Conduct evaluations and tests
  • Order corrective actions or withdrawal from the market
  • Impose fines and penalties

Relationship to Other Regulations

The AI Act does not exist in isolation. It interacts with a complex web of existing EU legislation.

GDPR (Regulation 2016/679)

The AI Act and GDPR are complementary. GDPR governs the processing of personal data, including by AI systems. The AI Act governs AI systems themselves, including requirements that go beyond data protection (such as technical robustness, human oversight design, and conformity assessment). Where both apply, both must be complied with.

Article 10(5) of the AI Act includes a specific provision allowing the processing of special categories of personal data (sensitive data under GDPR) for bias detection and correction in high-risk AI systems, subject to strict conditions.

Product Safety Legislation

The AI Act is designed to integrate with existing EU product safety frameworks. For AI systems that are safety components of products covered by sectoral legislation (medical devices under the MDR, machinery under the Machinery Regulation, vehicles, aviation, etc.), the AI Act requirements apply alongside the sector-specific requirements. The conformity assessment for such systems is typically integrated into the existing product assessment process.

Digital Services Act and Digital Markets Act

The DSA and DMA regulate online platforms and gatekeepers. Where AI systems are used in the context of online platforms (content recommendation, content moderation, advertising targeting), both the AI Act and the DSA/DMA apply.

Sector-Specific Regulation

The AI Act defers to sector-specific regulation where it provides equivalent or more specific requirements. For instance, in financial services, the AI Act applies alongside the Capital Requirements Regulation, PSD2, insurance regulations, and EBA guidelines.

Global Context

The EU AI Act is the first comprehensive AI regulation, but it exists within a broader global landscape of AI governance.

United States

The US has taken a more fragmented approach, with sector-specific guidance, executive orders, and state-level legislation rather than a single comprehensive federal law. The NIST AI Risk Management Framework provides voluntary guidance. Several states, including Colorado and Connecticut, have enacted AI-related legislation focused on specific use cases.

United Kingdom

Post-Brexit, the UK has pursued a "pro-innovation" approach to AI regulation, initially favouring sector-specific guidance over horizontal legislation. However, the UK has also introduced AI safety measures, including the AI Safety Institute, and is developing its regulatory approach through existing sector regulators.

China

China has enacted several AI-related regulations, including rules on recommendation algorithms, deep synthesis (deepfakes), and generative AI. China's approach is more prescriptive in certain areas than the EU's but differs significantly in its scope and objectives.

Canada

Canada's proposed Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27, set out a framework for AI regulation that shares some features with the EU approach, including a focus on high-impact systems. Bill C-27 lapsed when Parliament was prorogued in January 2025, leaving Canada's federal framework unsettled.

International Coordination

The EU has been active in international AI governance forums, including the G7, OECD, and the Council of Europe. The Council of Europe's Framework Convention on AI (adopted in 2024) represents the first binding international treaty on AI governance, though it is less prescriptive than the EU AI Act.

The "Brussels Effect" — the tendency for EU regulation to set global standards — is already visible in AI governance. Many multinational companies are using the EU AI Act as their baseline compliance standard worldwide, finding it more efficient to comply with the strictest framework globally than to maintain different compliance levels for different markets.

Enforcement and Penalties

The AI Act establishes a tiered penalty structure:

  • Up to 35 million EUR or 7% of global annual turnover for violations of prohibited practices (Article 5)
  • Up to 15 million EUR or 3% of global annual turnover for violations of most other obligations, including high-risk system requirements
  • Up to 7.5 million EUR or 1% of global annual turnover for supplying incorrect, incomplete, or misleading information to authorities

For SMEs and startups, the lower of the two figures (fixed amount or percentage of turnover) applies, providing some proportionality.

National market surveillance authorities are responsible for enforcement, with the AI Office handling enforcement for general-purpose AI model obligations.

Why This Matters

The EU AI Act represents a fundamental shift in how societies govern artificial intelligence. It moves AI from a largely unregulated space into a structured legal framework with clear requirements, enforcement mechanisms, and accountability.

For organisations that develop or deploy AI systems, the regulation creates both obligations and opportunities. Compliance is mandatory, but the discipline of risk management, transparency, and human oversight that the regulation requires also builds better AI systems — systems that are more reliable, more trustworthy, and more aligned with the values of the people they affect.

The organisations that engage with the AI Act seriously — not as a bureaucratic burden but as a framework for responsible AI — will be better positioned in a world where trust in AI is becoming a competitive advantage.

Start your EU AI Act compliance journey

Ctrl AI makes AI decision-making transparent, auditable, and compliant — helping organisations meet EU AI Act requirements with automated execution traces and trust-tagged outputs.

Learn About Ctrl AI

Make Your AI Auditable and Compliant

Ctrl AI provides expert-verified reasoning units with full execution traces — the infrastructure you need for EU AI Act compliance.

Explore Ctrl AI