EU AI Act Penalties: Fines Up to €35 Million Explained
Complete breakdown of EU AI Act penalties and fines — from €35 million for prohibited practices to €7.5 million for incorrect information. Understand the enforcement regime and how to avoid penalties.
The EU AI Act (Regulation 2024/1689) introduces one of the most significant enforcement regimes in technology regulation. With fines reaching up to €35 million or 7% of global annual turnover, the financial stakes are comparable to — and in some cases exceed — those of the GDPR.
Understanding the penalty structure is not just a legal exercise. It is a strategic imperative for any organisation developing, deploying, or distributing AI systems in the European Union.
The Three-Tier Penalty Structure
Article 99 of the EU AI Act establishes a graduated system of administrative fines, calibrated to the severity of the infringement. The three tiers reflect the regulation's risk-based approach: the more dangerous the violation, the steeper the penalty.
Tier 1: Prohibited AI Practices — Up to €35 Million or 7% of Turnover
The highest penalties are reserved for violations of Article 5 — the outright banned AI practices. These include:
- Social scoring that leads to detrimental or disproportionate treatment (by public authorities and private actors alike)
- Manipulative or deceptive AI techniques that distort behaviour and cause significant harm
- Exploitation of vulnerabilities of specific groups (age, disability, social or economic situation)
- Untargeted scraping of facial images from the internet or CCTV for facial recognition databases
- Emotion recognition in workplaces and educational institutions (with limited exceptions)
- Biometric categorisation systems that infer sensitive attributes like race, political opinions, or sexual orientation
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)
For companies, the fine is up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year, whichever is higher. This is deliberately set above the GDPR's maximum of 4% of turnover to signal the seriousness with which the EU treats these practices.
Tier 2: Non-Compliance with Core Obligations — Up to €15 Million or 3% of Turnover
The second tier covers violations of most other substantive requirements in the regulation. This includes failure to comply with:
- High-risk AI system requirements (Articles 8–15) — data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity
- Obligations of providers of high-risk AI systems (Articles 16–22) — quality management, conformity assessment, registration, post-market monitoring
- Obligations of deployers (Article 26) — using systems in accordance with instructions, monitoring, and record-keeping
- Requirements for general-purpose AI models (Articles 51–55) — technical documentation, copyright compliance, and systemic risk management
- Notified body obligations and other procedural requirements
For companies, the fine is up to €15 million or 3% of the total worldwide annual turnover, whichever is higher.
Tier 3: Incorrect, Incomplete, or Misleading Information — Up to €7.5 Million or 1% of Turnover
The lowest — but still substantial — tier targets the supply of incorrect, incomplete, or misleading information to national competent authorities or notified bodies. This covers:
- Providing false data during conformity assessments
- Failing to disclose relevant information in response to regulatory requests
- Misleading representations in technical documentation or EU declarations of conformity
For companies, fines can reach €7.5 million or 1% of the total worldwide annual turnover, whichever is higher.
The "whichever is higher" clause is critical. For a company with €5 billion in annual revenue, a Tier 1 violation could result in a fine of €350 million — ten times the nominal €35 million cap. The percentage-of-turnover calculation applies to the entire corporate group, not just the subsidiary operating the AI system.
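The "whichever is higher" arithmetic can be sketched in a few lines of Python. The tier table reflects the amounts described above; the function itself is illustrative, not part of the regulation:

```python
# Sketch of the Article 99 "whichever is higher" rule. The tier table
# reflects the amounts described above; the function is illustrative.

TIERS = {
    1: (35_000_000, 7),   # prohibited practices (Article 5): €35M or 7%
    2: (15_000_000, 3),   # core obligations: €15M or 3%
    3: (7_500_000, 1),    # incorrect or misleading information: €7.5M or 1%
}

def max_fine(tier: int, worldwide_turnover: int) -> int:
    """Maximum possible fine for a company: the fixed cap or the
    turnover percentage, whichever is higher (integer euros)."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, worldwide_turnover * pct // 100)

# Worked example from the text: €5 billion turnover, Tier 1 violation.
print(max_fine(1, 5_000_000_000))  # 350000000, ten times the €35M cap
```

For smaller companies the fixed cap dominates: at €100 million turnover, 7% is only €7 million, so the Tier 1 maximum stays at €35 million.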
How Fines Are Calculated
Article 99(7) sets out the factors that national competent authorities must consider when deciding whether to impose a fine and determining its amount. These factors ensure proportionality while maintaining deterrence:
Aggravating Factors
- Nature, gravity, and duration of the infringement
- Intentional or negligent character of the infringement
- Previous infringements by the same operator
- Financial benefits gained or losses avoided as a result of the infringement
- Number of persons affected and the level of damage suffered
- Size of the undertaking — larger companies face proportionally larger fines
Mitigating Factors
- Actions taken to mitigate the damage suffered by affected persons
- Degree of cooperation with supervisory authorities
- Manner in which the infringement became known — self-reporting is viewed favourably
- Degree of responsibility considering technical and organisational measures implemented
- Adherence to approved codes of conduct or approved certification mechanisms
Authorities must ensure that fines are "effective, proportionate, and dissuasive" in each individual case. This means a small company and a multinational will not be treated identically for the same infringement — but neither will escape meaningful consequences.
Special Provisions for SMEs and Startups
Recognising that the penalty regime could disproportionately burden smaller organisations, Article 99(6) introduces important safeguards for SMEs, including startups:
- Each fine is capped at the fixed amount or the turnover percentage of the relevant tier, whichever is lower (the reverse of the rule for larger companies)
- Authorities must take the economic viability and interests of the SME into account when setting fines
- Member States are encouraged to provide guidance and support to SMEs in understanding and meeting their obligations
- The European AI Office is tasked with developing templates and simplified procedures for smaller organisations
Stay ahead of enforcement
Ctrl AI helps organisations build audit-ready AI systems with full execution traces, expert verification, and compliance documentation — before regulators come knocking.
Explore Ctrl AI
Who Enforces the EU AI Act?
The enforcement architecture of the EU AI Act is multi-layered, reflecting the complexity of the AI ecosystem:
National Competent Authorities
Each EU Member State must designate one or more national competent authorities to supervise the application and implementation of the regulation (Article 70). These authorities are responsible for:
- Market surveillance of AI systems within their territory
- Investigating complaints and potential infringements
- Imposing administrative fines and other corrective measures
- Cooperating with authorities in other Member States
Each Member State must also designate a single point of contact among its market surveillance authorities, which in practice is often the data protection authority or a dedicated AI regulator.
The European AI Office
Established under Article 64, the European AI Office plays a central coordinating role:
- It directly supervises providers of general-purpose AI models (including large language models)
- It can investigate potential infringements of the rules on GPAI models
- It can impose fines on GPAI model providers at the EU level
- It facilitates cooperation between national authorities and provides technical expertise
The European AI Board
The AI Board (Article 65) brings together representatives from each Member State to ensure consistent application of the regulation across the EU. It issues recommendations, shares best practices, and advises the Commission on emerging issues.
For general-purpose AI model providers, enforcement is handled primarily at the EU level by the AI Office — not by individual Member States. This creates a single point of regulatory contact for models like GPT, Claude, or Gemini.
Corrective Measures Beyond Fines
Fines are only one tool in the enforcement toolkit. National competent authorities also have the power to impose a range of corrective measures under the market surveillance provisions of Chapter IX:
- Requiring corrective actions within a specified timeframe
- Restricting or prohibiting the making available of an AI system on the market
- Ordering the withdrawal or recall of an AI system from the market
- Issuing public warnings about non-compliant AI systems or providers
- Requiring providers to provide information and access to systems for investigation
These non-financial measures can be equally damaging to an organisation. A public withdrawal order or warning can cause significant reputational harm and loss of customer trust — consequences that may exceed the financial impact of a fine.
The Enforcement Timeline
Not all provisions of the EU AI Act become enforceable at the same time. The regulation follows a phased implementation timeline:
- 1 August 2024: entry into force
- 2 February 2025: prohibitions on unacceptable-risk practices (Article 5) and AI literacy obligations apply
- 2 August 2025: obligations for general-purpose AI models and the penalty provisions (Chapter XII) apply
- 2 August 2026: most remaining provisions apply, including the requirements for Annex III high-risk systems
- 2 August 2027: extended transition ends for high-risk AI systems embedded in products regulated under Annex I
Interaction with the GDPR and Other Regulations
Article 99(8) explicitly addresses the overlap between the AI Act and the GDPR. Where a single act or omission infringes both regulations:
- The total fine cannot exceed the maximum prescribed for the most serious infringement
- Authorities must coordinate to avoid double penalisation for the same conduct
- However, separate infringements of separate rules can still be penalised independently
This coordination requirement is particularly relevant for high-risk AI systems that process personal data — which is the vast majority of them.
Practical Steps to Minimise Penalty Risk
While understanding the penalty structure is important, the goal should be to never face enforcement action in the first place. Here are concrete steps organisations should take:
1. Classify Your AI Systems Accurately
The foundation of compliance is knowing which tier your systems fall into. Conduct a thorough inventory and risk classification of all AI systems you develop, deploy, or distribute. Misclassification — whether intentional or through negligence — is itself a compliance failure.
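As a sketch of what such an inventory might capture, here is a hypothetical record structure in Python; the tier names mirror the Act's risk categories, but every field name is our own, not taken from the regulation:

```python
# A minimal, hypothetical inventory record for AI-system classification.
# The RiskTier names mirror the Act's categories; field names are ours.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk (Article 5)"
    HIGH = "high risk (Annex I / Annex III)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    role: str             # "provider", "deployer", "importer", "distributor"
    intended_purpose: str
    tier: RiskTier
    rationale: str        # why this tier was chosen: keep it for audits

inventory = [
    AISystemRecord(
        name="cv-screening-model",
        role="deployer",
        intended_purpose="ranking job applicants",
        tier=RiskTier.HIGH,  # employment uses are listed in Annex III
        rationale="Annex III: employment and worker management",
    ),
]
```

Recording the rationale alongside the classification matters: it is the evidence that a misclassification, if one occurs, was a documented judgment rather than negligence.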
2. Implement Robust Documentation
Technical documentation, risk assessments, and conformity declarations are central requirements for high-risk AI systems. Incomplete or inaccurate documentation can trigger both Tier 2 (for non-compliance) and Tier 3 (for misleading information) fines.
3. Establish Internal Governance
Designate clear responsibilities for AI compliance within your organisation. Ensure that the people accountable for AI systems understand their obligations under the regulation and have the resources to meet them.
4. Build Audit Trails
Regulators will assess not just compliance at a point in time, but the processes and controls you have in place. Automated logging, version control, and decision traceability for AI systems provide the evidence base that demonstrates ongoing compliance.
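A minimal sketch of such a decision trace, assuming an append-only JSON-lines log; the helper and its field names are illustrative, not mandated by the Act:

```python
# Hypothetical decision-trace logger: append-only JSON lines capturing
# model version, a hash of the input, and the outcome. Illustrative only.
import datetime
import hashlib
import json

def log_decision(path, model_version, input_payload, decision, overseer=None):
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store the raw input, to limit personal-data exposure
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()).hexdigest(),
        "decision": decision,
        "human_overseer": overseer,  # who reviewed or could intervene
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Even a simple log like this gives an investigator the three things they typically ask for: which model version acted, on what input, and whether a human was in the loop.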
5. Monitor Regulatory Guidance
The AI Office, national authorities, and the AI Board will issue implementing acts, guidelines, and codes of practice throughout 2025 and 2026. Stay current with this evolving landscape.
Organisations that demonstrate proactive compliance efforts — documented risk assessments, internal audits, cooperation with authorities — are far more likely to receive mitigated penalties in the event of an infringement. The regulation explicitly rewards good-faith compliance efforts.
Conclusion
The EU AI Act penalty regime is designed to be proportionate but impactful. With fines reaching 7% of global turnover for the most serious violations, the regulation sends a clear message: AI governance is not optional.
The graduated structure — from €7.5 million for information failures to €35 million for prohibited practices — reflects a nuanced understanding of the different ways AI systems can cause harm. Combined with non-financial corrective measures and a multi-layered enforcement architecture, the regime gives regulators significant tools to ensure compliance.
For organisations operating in the EU market, the question is not whether to comply, but how quickly and thoroughly they can build the systems, processes, and culture that compliance requires. The enforcement clock is already ticking.
Make Your AI Auditable and Compliant
Ctrl AI provides expert-verified reasoning units with full execution traces — the infrastructure you need for EU AI Act compliance.
Explore Ctrl AI
Related Articles
EU AI Act Timeline: Key Dates from 2024 to 2027
Complete timeline of EU AI Act enforcement milestones — from entry into force in August 2024 to full high-risk compliance by August 2027. Know exactly when each requirement applies.
High-Risk AI Systems: Complete Requirements Under the EU AI Act
Detailed guide to the requirements for high-risk AI systems under the EU AI Act — risk management, data governance, documentation, human oversight, accuracy, and cybersecurity.
EU AI Act Risk Classification: Four Levels Explained
Deep dive into the EU AI Act's four-tier risk classification system — unacceptable, high, limited, and minimal risk. Learn which category your AI system falls into and what's required.