EU AI Act Timeline: Key Dates from 2024 to 2027
Complete timeline of EU AI Act enforcement milestones — from entry into force in August 2024 to full high-risk compliance by August 2027. Know exactly when each requirement applies.
The EU AI Act (Regulation 2024/1689) does not apply all at once. Instead, the European Union designed a phased enforcement schedule that gives organisations time to prepare for increasingly stringent requirements. Understanding this timeline is critical for prioritising compliance efforts and allocating resources effectively.
This article provides a comprehensive, date-by-date breakdown of every major enforcement milestone from the regulation's entry into force through full application.
Several key deadlines have already passed. If your organisation has not yet begun compliance work, it is essential to understand which obligations are already in effect and which are approaching.
The Complete Enforcement Timeline
Phase 1: Entry into Force and Preparation (August 2024 - January 2025)
The period between 1 August 2024 and 2 February 2025 served as the initial preparation window. While no substantive obligations were yet enforceable during this phase, several important processes began.
Establishing Governance Bodies
The European Commission began the process of establishing the AI Office, which sits within the Commission's Directorate-General for Communications Networks, Content and Technology (DG CONNECT). The AI Office is responsible for overseeing general-purpose AI models and supporting consistent application of the regulation across Member States.
AI Literacy Obligation
Article 4 of the AI Act requires providers and deployers to ensure that their staff and other persons dealing with the operation and use of AI systems have a sufficient level of AI literacy. This obligation formally applies from 2 February 2025, alongside the prohibitions, so organisations were encouraged to begin training programmes during this preparatory phase.
AI literacy under Article 4 is not a one-time training. It requires ongoing education that accounts for the technical knowledge, experience, education, and context in which AI systems are used. This is a proportionate requirement — more complex or higher-risk uses demand deeper literacy.
Phase 2: Prohibited Practices (February 2025)
The first substantive enforcement date was 2 February 2025, when the prohibitions in Article 5 became applicable. This was the most urgent deadline because violations carry the highest penalties.
What Was Banned
From 2 February 2025, the following AI practices became prohibited:
Subliminal, manipulative, and deceptive techniques (Article 5(1)(a)): AI systems that deploy techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort behaviour and cause significant harm.
Exploitation of vulnerabilities (Article 5(1)(b)): AI systems that exploit vulnerabilities of individuals due to their age, disability, or specific social or economic situation to materially distort their behaviour and cause significant harm.
Social scoring (Article 5(1)(c)): AI systems, whether operated by public or private actors, that evaluate or classify natural persons or groups over a period of time based on their social behaviour or known, inferred or predicted personal characteristics, where the resulting social score leads to detrimental or unfavourable treatment that is unjustified or disproportionate.
Individual risk assessment for criminal offences (Article 5(1)(d)): AI systems that assess the risk of natural persons committing criminal offences based solely on profiling or personality traits, unless used to augment human assessments based on objective, verifiable facts directly linked to criminal activity.
Untargeted scraping for facial recognition databases (Article 5(1)(e)): AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
Emotion recognition in workplaces and education (Article 5(1)(f)): AI systems that infer emotions in the workplace and educational institutions, except where the AI system is intended for medical or safety reasons.
Biometric categorisation for sensitive attributes (Article 5(1)(g)): AI systems that categorise natural persons based on biometric data to deduce race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation, with limited exceptions for law enforcement.
Real-time remote biometric identification in public spaces (Article 5(1)(h)): Real-time remote biometric identification systems used in publicly accessible spaces for law enforcement purposes, with narrow exceptions for targeted searches related to serious crimes, imminent threats, or terrorist attacks.
Compliance Actions Required
Organisations needed to audit their AI systems before this date and discontinue any prohibited practices. There was no grace period: as of 2 February 2025, non-compliance with Article 5 carries penalties of up to 35 million EUR or 7% of worldwide annual turnover, whichever is higher.
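As one way to operationalise such an audit, the sketch below shows how an internal Article 5 screening checklist might be recorded. This is an assumption about internal tooling, not a format prescribed by the regulation; all names are illustrative.

```python
from dataclasses import dataclass, field
from enum import Enum

class ProhibitedPractice(Enum):
    """Article 5(1) categories used as a screening checklist."""
    SUBLIMINAL_MANIPULATION = "5(1)(a)"
    VULNERABILITY_EXPLOITATION = "5(1)(b)"
    SOCIAL_SCORING = "5(1)(c)"
    CRIMINAL_RISK_PROFILING = "5(1)(d)"
    UNTARGETED_FACIAL_SCRAPING = "5(1)(e)"
    EMOTION_RECOGNITION_WORK_EDU = "5(1)(f)"
    BIOMETRIC_CATEGORISATION = "5(1)(g)"
    REALTIME_REMOTE_BIOMETRIC_ID = "5(1)(h)"

@dataclass
class ScreeningResult:
    """Outcome of screening one AI system against the Article 5 prohibitions."""
    system_name: str
    flagged: list[ProhibitedPractice] = field(default_factory=list)
    reviewer: str = ""
    notes: str = ""

    @property
    def requires_escalation(self) -> bool:
        # Any flagged category means legal review and, if confirmed,
        # discontinuation of the practice before further operation.
        return bool(self.flagged)

result = ScreeningResult(
    system_name="hr-engagement-monitor",
    flagged=[ProhibitedPractice.EMOTION_RECOGNITION_WORK_EDU],
    reviewer="compliance-team",
)
print(result.requires_escalation)  # True
```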
Phase 3: GPAI and Governance (August 2025)
The next major milestone is 2 August 2025, when obligations for general-purpose AI (GPAI) models and the governance framework take effect.
General-Purpose AI Model Obligations
Chapter V of the AI Act establishes requirements for all GPAI models. From 2 August 2025, providers of GPAI models must:
Maintain technical documentation (Article 53(1)(a)): Providers must draw up and keep up to date technical documentation of the model, including its training and testing process and results, which must be made available to the AI Office and national competent authorities upon request.
Provide information to downstream providers (Article 53(1)(b)): When a GPAI model is integrated into an AI system, the GPAI provider must supply sufficient information and documentation to enable the downstream AI system provider to understand the model's capabilities and limitations and comply with their own obligations.
Comply with copyright rules (Article 53(1)(c)): GPAI providers must put in place a policy to comply with Union copyright law, in particular to identify and respect reservations of rights expressed by rights holders under Article 4(3) of the Copyright Directive.
Publish training data summary (Article 53(1)(d)): Providers must draw up and make publicly available a sufficiently detailed summary of the content used for training the GPAI model, following a template provided by the AI Office (an illustrative record sketch follows below).
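The binding content of that summary is defined by the AI Office template. Purely as an illustration of the kind of record a provider might keep internally, and with every field name being a hypothetical assumption, such a summary could be structured along these lines:

```python
from dataclasses import dataclass

@dataclass
class TrainingDataSummary:
    """Illustrative internal record backing the public summary under Article 53(1)(d).
    Field names are assumptions; the AI Office template governs the actual content."""
    model_name: str
    data_sources: list[str]      # e.g. licensed corpora, public web crawls
    collection_period: str       # time range the training data covers
    copyright_policy_url: str    # policy required under Article 53(1)(c)
    opt_out_handling: str        # how reservations of rights (TDM opt-outs) are respected

summary = TrainingDataSummary(
    model_name="example-gpai-1",
    data_sources=["licensed news archive", "public web crawl"],
    collection_period="2019-2024",
    copyright_policy_url="https://example.com/copyright-policy",
    opt_out_handling="robots.txt and metadata-based reservations honoured at crawl time",
)
```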
GPAI Models with Systemic Risk
GPAI models that are classified as presenting systemic risk under Article 51, including models whose cumulative training compute exceeds 10^25 floating-point operations (FLOPs), face additional obligations from this date:
- Perform model evaluations, including adversarial testing
- Assess and mitigate systemic risks
- Track and report serious incidents to the AI Office and relevant national competent authorities (an illustrative incident record follows this list)
- Ensure adequate cybersecurity protections
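To make the incident-tracking duty concrete, here is a minimal sketch of an internal record that could feed a report to the AI Office. The structure and field names are assumptions, not a reporting format defined by the Act.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SeriousIncident:
    """Minimal internal record for a serious incident involving a GPAI model."""
    incident_id: str
    detected_at: datetime
    description: str
    affected_systems: list[str]
    mitigation: str
    reported_to_ai_office: bool = False

incident = SeriousIncident(
    incident_id="INC-2025-001",
    detected_at=datetime.now(timezone.utc),
    description="Model output facilitated a clearly prohibited use case",
    affected_systems=["downstream-chat-assistant"],
    mitigation="Safety filter updated; affected downstream providers notified",
)
```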
Governance Framework
Several governance provisions also take effect on 2 August 2025:
- The European Artificial Intelligence Board (Article 65) becomes operational, with representatives from each Member State.
- Member States must designate national competent authorities (Article 70), including at least one notifying authority and one market surveillance authority.
- The Advisory Forum (Article 67) is established to provide stakeholder expertise.
Phase 4: The Major Compliance Deadline (August 2026)
2 August 2026 is the most consequential date for the majority of organisations. This is when most of the regulation's provisions become applicable, including the full set of requirements for high-risk AI systems listed in Annex III.
High-Risk AI System Requirements
From this date, providers of high-risk AI systems listed in Annex III must comply with all requirements under Articles 8 through 15:
- Risk management system (Article 9)
- Data and data governance (Article 10)
- Technical documentation (Article 11)
- Record-keeping and logging (Article 12), illustrated in the sketch after this list
- Transparency and provision of information (Article 13)
- Human oversight (Article 14)
- Accuracy, robustness, and cybersecurity (Article 15)
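Article 12 requires high-risk systems to log events automatically over their lifetime so that their operation can be traced. As a rough sketch of what such logging might look like in practice, the snippet below writes one structured audit event per decision; the field names, file path, and system identifiers are assumptions rather than requirements of the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal audit logger for a high-risk AI system (illustrative only).
logger = logging.getLogger("high_risk_ai.audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_audit.log"))

def log_decision_event(system_id: str, input_ref: str, output_ref: str,
                       operator: str, confidence: float) -> None:
    """Record each use of the system so its operation can be traced (Article 12)."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_reference": input_ref,   # pointer to the input, not the data itself
        "output_reference": output_ref,
        "human_operator": operator,     # supports human-oversight review (Article 14)
        "model_confidence": confidence,
    }
    logger.info(json.dumps(event))

log_decision_event("credit-scoring-v3", "application/48121", "decision/48121",
                   operator="analyst-17", confidence=0.82)
```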
Deployer Obligations
Deployers of high-risk AI systems must also comply from this date. Under Article 26, deployers must:
- Use AI systems in accordance with instructions
- Ensure human oversight by trained individuals
- Monitor the operation of the AI system
- Keep logs generated by the AI system for the prescribed period (a minimal retention check follows this list)
- Inform workers and their representatives about high-risk AI system use
- Conduct fundamental rights impact assessments (for public bodies and certain private entities under Article 27)
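On the log-retention point, Article 26(6) sets a baseline retention period of at least six months, with longer periods possible depending on the system's purpose and other applicable law. The sketch below is an assumed piece of internal tooling that enforces that floor before logs may be deleted.

```python
from datetime import datetime, timedelta, timezone

# Statutory floor of roughly six months under Article 26(6); the deployer may
# need to keep logs longer depending on the system's purpose and other law.
MINIMUM_RETENTION = timedelta(days=183)

def may_delete_log(created_at: datetime, purpose_specific_retention: timedelta) -> bool:
    """Allow deletion only after both the statutory floor and the deployer's
    own purpose-specific retention period have elapsed (illustrative helper)."""
    retention = max(MINIMUM_RETENTION, purpose_specific_retention)
    return datetime.now(timezone.utc) - created_at >= retention

print(may_delete_log(datetime(2025, 1, 10, tzinfo=timezone.utc), timedelta(days=365)))
```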
Transparency Obligations for Limited-Risk Systems
Article 50 transparency obligations apply from 2 August 2026. These include:
- AI systems intended to interact directly with persons must disclose that the person is interacting with an AI system
- Providers of AI systems that generate synthetic audio, image, video, or text content must ensure outputs are marked in a machine-readable format as artificially generated or manipulated; deployers of deepfakes must additionally disclose that the content has been artificially generated or manipulated (a minimal marking sketch follows this list)
- Deployers of emotion recognition or biometric categorisation systems must inform the persons exposed
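The Act does not prescribe a specific marking technology. As one simplistic illustration, assuming the Pillow library and a locally generated PNG file, the sketch below embeds a machine-readable flag in image metadata; real deployments would more likely rely on robust provenance standards such as C2PA, and the key names used here are assumptions.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed a simple machine-readable marker in a PNG's metadata (illustrative)."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")              # machine-readable flag
    metadata.add_text("generator", "example-image-model")  # illustrative provenance field
    image.save(dst_path, pnginfo=metadata)

mark_as_ai_generated("output.png", "output_marked.png")
```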
Regulatory Sandboxes
Under Article 57, each Member State must establish at least one AI regulatory sandbox at the national level by 2 August 2026. These sandboxes provide a controlled testing environment where innovative AI systems can be developed and validated with regulatory oversight.
Organisations that are providers or deployers of high-risk AI systems listed in Annex III — covering areas such as employment, credit scoring, education, critical infrastructure, and law enforcement — must be fully compliant by 2 August 2026. Non-compliance carries fines of up to 15 million EUR or 3% of worldwide annual turnover, whichever is higher.
Phase 5: Full Application (August 2027)
The final enforcement date is 2 August 2027, when the remaining provisions apply.
Annex I High-Risk AI Systems
This phase primarily affects high-risk AI systems that serve as safety components of products already regulated under EU harmonisation legislation listed in Annex I. These include products governed by:
- Regulation (EU) 2017/745 — Medical devices
- Regulation (EU) 2017/746 — In vitro diagnostic medical devices
- Directive 2006/42/EC / Regulation (EU) 2023/1230 — Machinery
- Directive 2009/48/EC — Safety of toys
- Directive 2014/33/EU — Lifts
- Directive 2014/34/EU — Equipment for use in explosive atmospheres
- Directive 2014/53/EU — Radio equipment
- Regulation (EU) 2016/425 — Personal protective equipment
For these systems, the existing conformity assessment procedures under sectoral legislation will integrate AI Act requirements. This additional year gives manufacturers time to update their conformity procedures.
Conformity Assessment and Notified Bodies
The framework for notifying authorities and notified bodies applies from 2 August 2025 so that bodies can be designated in good time, and by 2 August 2027 the system must be fully operational for these Annex I products. Notified bodies are organisations designated by Member States to carry out conformity assessments for certain high-risk AI systems where a third-party assessment is required.
Building Your Compliance Roadmap
Given the phased timeline, organisations should take a structured approach to compliance.
Immediate Priorities (Now)
- Verify that no prohibited AI practices remain in operation
- Begin AI literacy programmes under Article 4
- Start inventorying all AI systems and classifying their risk level (a minimal inventory sketch follows this list)
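One lightweight way to start that inventory is a structured record per system, as in the sketch below. The fields, example entries, and deadline strings are illustrative assumptions rather than a prescribed format.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Annex I / Annex III)"
    LIMITED = "limited risk (Article 50 transparency)"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    """One row in an internal AI inventory; all fields are illustrative."""
    name: str
    owner: str
    purpose: str
    role: str                 # "provider" or "deployer"
    risk_level: RiskLevel
    compliance_deadline: str

inventory = [
    AISystemRecord("cv-screening-tool", "HR", "rank job applicants",
                   "deployer", RiskLevel.HIGH, "2026-08-02"),
    AISystemRecord("support-chatbot", "Customer Service", "answer customer queries",
                   "deployer", RiskLevel.LIMITED, "2026-08-02"),
]
```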
Near-Term (Before August 2025)
- GPAI model providers must finalise technical documentation and training data summaries
- Establish relationships with downstream providers for information sharing
- Monitor the publication of codes of practice by the AI Office
Medium-Term (Before August 2026)
- Implement the full set of requirements for high-risk AI systems under Articles 8-15
- Establish or update quality management systems
- Prepare deployer obligations including fundamental rights impact assessments
- Update transparency measures for limited-risk AI systems
Long-Term (Before August 2027)
- Integrate AI Act requirements into sectoral conformity assessment procedures
- Engage with notified bodies for third-party assessments where required
- Ensure all remaining product safety AI systems are fully compliant
The phased timeline is designed to give organisations adequate preparation time, but that time is finite. Organisations that treat compliance as a strategic priority rather than a last-minute exercise will be better positioned to navigate the transition smoothly and maintain a competitive advantage.
Conclusion
The EU AI Act's enforcement timeline spans three years, from its entry into force on 1 August 2024 to full application on 2 August 2027. Each phase introduces new obligations, and the cumulative effect is a comprehensive regulatory framework that will shape how AI is developed and deployed across Europe and beyond.
The most critical period is between now and August 2026, when the bulk of the regulation's requirements become applicable. Organisations should use the timeline above to prioritise their compliance efforts, starting with an inventory of their AI systems and a thorough risk classification exercise.
The timeline is not just a series of deadlines — it is a roadmap. Organisations that follow it systematically will not only avoid penalties but will build the governance structures needed for responsible and trustworthy AI deployment.