EU AI Act Risk Classification: Four Levels Explained
Deep dive into the EU AI Act's four-tier risk classification system — unacceptable, high, limited, and minimal risk. Learn which category your AI system falls into and what's required.
The EU AI Act's risk-based classification system is the architectural foundation of the entire regulation. Rather than treating all AI systems equally, Regulation (EU) 2024/1689 establishes four tiers of risk — unacceptable, high, limited, and minimal — each carrying proportionate obligations. The logic is straightforward: the greater the potential harm an AI system can cause, the stricter the rules that govern it.
Understanding which tier your AI system falls into is the essential first step toward compliance. Misclassification can lead to either unnecessary burden or, more dangerously, non-compliance with requirements that carry significant penalties.
The Four-Tier Risk Framework
The EU AI Act organises AI systems into four categories based on the level of risk they pose to health, safety, and fundamental rights. Each level carries distinct regulatory requirements.
Unacceptable Risk: Prohibited AI Practices
At the top of the risk pyramid are AI practices deemed so harmful that they are banned outright. Article 5 of the AI Act lists these prohibited practices, which represent a clear line that no organisation may cross.
Subliminal, manipulative, and deceptive techniques (Article 5(1)(a)): AI systems that deploy techniques operating below the threshold of consciousness, or that are purposefully manipulative or deceptive, with the objective or effect of materially distorting behaviour and causing or being reasonably likely to cause significant harm.
Exploitation of vulnerabilities (Article 5(1)(b)): AI systems that target specific vulnerabilities of individuals — whether due to age, disability, or a particular social or economic situation — to materially distort their behaviour in a manner that causes or is likely to cause significant harm.
Social scoring (Article 5(1)(c)): AI systems that evaluate or classify natural persons or groups of persons over a period of time based on their social behaviour or known, inferred, or predicted personal characteristics, where the resulting social score leads to detrimental or unfavourable treatment in social contexts unrelated to the one in which the data was generated, or treatment that is unjustified or disproportionate. The final text covers both public and private actors, not only public authorities.
Predictive policing based solely on profiling (Article 5(1)(d)): AI systems that assess the risk of a natural person committing a criminal offence based solely on profiling or the assessment of personality traits and characteristics. This prohibition does not apply where AI is used to augment human assessments that are already based on objective, verifiable facts directly linked to criminal activity.
Untargeted facial recognition scraping (Article 5(1)(e)): AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
Emotion recognition in workplaces and education (Article 5(1)(f)): AI systems that infer emotions of natural persons in workplaces or educational institutions, except where used for medical or safety reasons.
Biometric categorisation for sensitive characteristics (Article 5(1)(g)): AI systems that categorise natural persons individually based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. Narrow exceptions apply for law enforcement purposes.
Real-time remote biometric identification in public spaces for law enforcement (Article 5(1)(h)): Use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement, except in strictly necessary situations involving targeted searches for victims, prevention of specific imminent threats, or investigation of serious criminal offences.
Prohibited practices carry the highest penalties under the AI Act: up to 35 million EUR or 7% of total worldwide annual turnover, whichever is greater. These prohibitions have been in force since 2 February 2025.
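To see how the "whichever is greater" cap works in practice, here is a minimal arithmetic sketch (the figures come from the Act's penalty provisions; the function name is illustrative):

```python
def max_fine_for_prohibited_practice(worldwide_annual_turnover_eur: float) -> float:
    """Ceiling of a fine for an Article 5 violation: EUR 35 million or
    7% of total worldwide annual turnover, whichever is greater."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# An undertaking with EUR 2 billion in turnover faces a ceiling of
# EUR 140 million, since 7% of turnover exceeds the EUR 35 million floor.
print(max_fine_for_prohibited_practice(2_000_000_000))  # 140000000.0
```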
Practical Implications
Organisations should conduct an immediate audit to ensure they do not operate any system that falls within Article 5. While many of these practices are uncommon in mainstream business applications, some edge cases merit careful analysis: personalisation algorithms that cross the line into manipulative techniques, for example, or employee monitoring tools that could be construed as workplace emotion recognition.
High Risk: Comprehensive Requirements
High-risk AI systems form the core focus of the regulation. These are systems that are permitted but subject to extensive requirements before they can be placed on the market or put into service.
The AI Act defines high-risk AI systems through two pathways under Article 6:
Pathway 1 — Safety components of regulated products (Article 6(1)): An AI system is high-risk if it is a safety component of a product, or is itself a product, covered by EU harmonisation legislation listed in Annex I, and the product is required to undergo a third-party conformity assessment under that legislation. This covers AI embedded in medical devices, machinery, toys, lifts, automotive vehicles, aviation systems, and other regulated products.
Pathway 2 — Standalone systems in sensitive areas (Article 6(2) and Annex III): An AI system is high-risk if it falls within one of the use cases listed in Annex III. The eight areas defined in Annex III are:
1. Biometrics (Annex III, point 1): Remote biometric identification systems (excluding those prohibited under Article 5), AI systems for biometric categorisation by sensitive or protected attributes, and AI systems for emotion recognition.
2. Critical infrastructure (Annex III, point 2): AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.
3. Education and vocational training (Annex III, point 3): AI systems used to determine access to or admission to educational institutions, to evaluate learning outcomes, to assess the appropriate level of education, and to monitor prohibited behaviour during tests.
4. Employment, workers management, and access to self-employment (Annex III, point 4): AI systems used for recruitment (filtering applications, evaluating candidates), making decisions affecting terms of work relationships (promotion, termination, task allocation), and monitoring or evaluating worker performance and behaviour.
5. Access to essential private services and public services and benefits (Annex III, point 5): AI systems used to evaluate eligibility for public assistance benefits or services, assess creditworthiness (except for detecting financial fraud), evaluate and classify emergency calls, and assess risks in life and health insurance pricing.
6. Law enforcement (Annex III, point 6): AI systems used by law enforcement for individual risk assessment, polygraph or similar tools, evaluation of evidence reliability, assessing risk of offending or reoffending, profiling during detection or investigation, and crime analytics.
7. Migration, asylum, and border control management (Annex III, point 7): AI systems used for polygraph-type tools in migration proceedings, assessment of irregular migration risk, security or health risks posed by individuals, examination of asylum, visa, or residence permit applications, and detection of persons in border management.
8. Administration of justice and democratic processes (Annex III, point 8): AI systems used by judicial authorities to research and interpret facts and law and apply the law to concrete facts, or to be used in alternative dispute resolution.
Not every AI system used in these areas is automatically high-risk. Article 6(3) provides an important exception: an AI system listed in Annex III shall not be considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights. The provider must document this assessment, and it is subject to review by market surveillance authorities.
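The structure of the Article 6(3) derogation can be captured as a simple predicate. A minimal sketch, assuming the three inputs have already been established through a documented assessment (the function and flag names are my own, not from the Act; note the profiling carve-out covered in Step 4 below):

```python
def escapes_high_risk_via_article_6_3(
    listed_in_annex_iii: bool,
    poses_significant_risk: bool,
    performs_profiling: bool,
) -> bool:
    """Rough Article 6(3) check: an Annex III system avoids the
    high-risk classification only if it poses no significant risk of
    harm to health, safety, or fundamental rights, and never if it
    performs profiling of natural persons."""
    if not listed_in_annex_iii:
        return False  # the derogation is only relevant to Annex III systems
    if performs_profiling:
        return False  # profiling systems remain high-risk in all cases
    return not poses_significant_risk
```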
Requirements for High-Risk Systems
High-risk AI systems must comply with the requirements set out in Articles 8 through 15, which cover:
- Risk management system (Article 9)
- Data and data governance (Article 10)
- Technical documentation (Article 11)
- Record-keeping and automatic logging (Article 12)
- Transparency and provision of information to deployers (Article 13)
- Human oversight measures (Article 14)
- Accuracy, robustness, and cybersecurity (Article 15)
Providers must also implement a quality management system (Article 17), undergo conformity assessment (Article 43), register their systems in the EU database (Article 49), and conduct post-market monitoring (Article 72).
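For tracking purposes, these obligations map naturally onto a checklist. A hypothetical sketch (the article references mirror the list above; the data layout and helper function are my own):

```python
HIGH_RISK_OBLIGATIONS = {
    "Art. 9": "Risk management system",
    "Art. 10": "Data and data governance",
    "Art. 11": "Technical documentation",
    "Art. 12": "Record-keeping and automatic logging",
    "Art. 13": "Transparency and information to deployers",
    "Art. 14": "Human oversight measures",
    "Art. 15": "Accuracy, robustness, and cybersecurity",
    "Art. 17": "Quality management system",
    "Art. 43": "Conformity assessment",
    "Art. 49": "Registration in the EU database",
    "Art. 72": "Post-market monitoring",
}

def outstanding_obligations(completed: set[str]) -> list[str]:
    """List the obligations not yet marked complete."""
    return [f"{ref}: {desc}"
            for ref, desc in HIGH_RISK_OBLIGATIONS.items()
            if ref not in completed]
```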
Limited Risk: Transparency Obligations
The limited-risk category covers AI systems that interact with people or generate content in ways that could be misleading if the AI nature of the system or its outputs is not disclosed. Article 50 establishes specific transparency obligations for these systems.
Who Must Comply
Providers of AI systems designed to interact with natural persons (Article 50(1)): Systems such as chatbots must be designed to ensure that the natural person is informed that they are interacting with an AI system, unless this is obvious from the circumstances and context of use.
Providers of AI systems generating synthetic content (Article 50(2)): AI systems that generate synthetic audio, image, video, or text content must mark the output in a machine-readable format that discloses it has been artificially generated or manipulated. The technical implementation must be effective, interoperable, robust, and reliable.
Deployers of emotion recognition or biometric categorisation systems (Article 50(3)): Where these systems are not classified as high-risk, deployers must inform the natural persons exposed to the system about its operation and process personal data in accordance with applicable data protection law.
Deployers of deepfake systems (Article 50(4)): Deployers who publish or enable access to AI-generated or manipulated content (deepfakes) must disclose that the content has been artificially generated or manipulated. An exception exists for content that is part of an obviously artistic, creative, satirical, or fictional work, though even then the disclosure must not prevent the display of the content.
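Deciding which of these duties attach can be reduced to four yes/no questions about the system. A sketch under that assumption (flag names are illustrative; exceptions such as "obvious from the context" are deliberately not modelled):

```python
def article_50_duties(
    interacts_with_persons: bool,
    generates_synthetic_content: bool,
    emotion_or_biometric_categorisation: bool,
    publishes_deepfakes: bool,
) -> list[str]:
    """Map system characteristics to the Article 50 transparency
    obligations described above (simplified)."""
    duties = []
    if interacts_with_persons:
        duties.append("Art. 50(1): inform users they are interacting with AI")
    if generates_synthetic_content:
        duties.append("Art. 50(2): machine-readable marking of outputs")
    if emotion_or_biometric_categorisation:
        duties.append("Art. 50(3): inform exposed persons (deployer duty)")
    if publishes_deepfakes:
        duties.append("Art. 50(4): disclose artificial generation (deployer duty)")
    return duties
```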
Practical Examples
- A customer service chatbot on an e-commerce website must tell users they are chatting with an AI, not a human.
- An AI tool that generates marketing images must embed machine-readable metadata indicating the images are AI-generated.
- A company using AI to analyse the emotional tone of customer calls (where not high-risk) must inform callers.
- A media organisation publishing AI-generated articles must label them as such.
The transparency obligations may seem straightforward, but the technical requirements — particularly around machine-readable marking of AI-generated content — require implementation effort. Organisations should plan for these requirements well before the August 2026 deadline.
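As one illustration of what machine-readable marking can look like, image outputs can carry provenance metadata. A minimal sketch using Pillow's PNG text chunks follows. On its own this would likely not satisfy the "effective, interoperable, robust, and reliable" bar (production systems tend to rely on standards such as C2PA), but it shows the mechanism:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_disclosure(img: Image.Image, path: str) -> None:
    """Embed a machine-readable disclosure that the image is
    AI-generated into PNG text chunks before saving."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-model")  # hypothetical identifier
    img.save(path, pnginfo=meta)

# Reading the marking back:
# Image.open(path).text  ->  {'ai_generated': 'true', 'generator': 'example-model'}
```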
Minimal Risk: Voluntary Compliance
The vast majority of AI systems fall into the minimal-risk category. These are AI applications that pose little or no risk to fundamental rights and safety. The EU AI Act does not impose mandatory requirements on these systems beyond existing legislation.
Examples of Minimal-Risk AI
- Spam filters
- AI-powered recommendation systems (for non-essential services)
- AI-enhanced video games
- Inventory management systems
- AI-assisted spell checkers and grammar tools
- Predictive maintenance systems for manufacturing equipment
- AI-powered search engines (general purpose)
Voluntary Codes of Conduct
While minimal-risk AI systems face no mandatory requirements under the AI Act, Article 95 encourages providers and deployers to voluntarily apply the requirements for high-risk systems or develop their own codes of conduct. These codes may address issues such as:
- Environmental sustainability of AI systems
- AI literacy among stakeholders
- Inclusive and diverse design
- Accessibility for persons with disabilities
The Commission and Member States facilitate the development of these voluntary codes, and adherence to them can serve as a market differentiator for organisations that want to demonstrate responsible AI practices.
How to Classify Your AI System
Classifying your AI system correctly requires a systematic approach. Here is a practical framework for working through the classification process; a code sketch of the full decision flow follows Step 6.
Step 1: Check Against Prohibited Practices
Review your AI system against each prohibited practice in Article 5. If your system falls within any of these categories, it must be discontinued immediately — no exceptions or workarounds will achieve compliance.
Step 2: Check Annex I (Product Safety Legislation)
If your AI system is a safety component of a product, or is itself a product, governed by EU harmonisation legislation listed in Annex I, check whether that product requires a third-party conformity assessment. If yes, the AI system is high-risk under Article 6(1).
Step 3: Check Annex III (Sensitive Use Cases)
Review whether your AI system's intended purpose falls within any of the eight categories in Annex III. This is where most standalone high-risk classifications arise.
Step 4: Apply the Article 6(3) Exception
If your system is listed in Annex III but does not pose a significant risk of harm, you may argue that it is not high-risk under the Article 6(3) exception. However, this requires documented justification and is subject to regulatory review. The exception does not apply if the AI system performs profiling of natural persons.
Step 5: Check Transparency Obligations
If your system is not high-risk, determine whether it interacts with natural persons, generates synthetic content, performs emotion recognition, or produces deepfake content. If so, it falls under limited-risk transparency obligations.
Step 6: Default to Minimal Risk
If your system does not fall into any of the above categories, it is classified as minimal risk with no mandatory obligations under the AI Act.
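Putting the six steps together, the decision flow can be sketched as a single function. This is an illustrative simplification, not logic you can automate away: each input flag stands in for an assessment that requires documented human judgment.

```python
def classify(
    prohibited_under_article_5: bool,                       # Step 1
    safety_component_needing_third_party_assessment: bool,  # Step 2 (Annex I)
    annex_iii_use_case: bool,                               # Step 3
    poses_significant_risk: bool,                           # Step 4
    performs_profiling: bool,                               # Step 4 carve-out
    triggers_article_50: bool,                              # Step 5
) -> str:
    """Walk the six-step classification framework described above."""
    if prohibited_under_article_5:
        return "unacceptable (prohibited; discontinue)"
    if safety_component_needing_third_party_assessment:
        return "high-risk (Article 6(1), Annex I)"
    if annex_iii_use_case and (poses_significant_risk or performs_profiling):
        return "high-risk (Article 6(2), Annex III)"
    if triggers_article_50:
        return "limited risk (Article 50 transparency duties)"
    return "minimal risk (no mandatory AI Act obligations)"
```

Note that an Annex III system which qualifies for the Article 6(3) exception falls through to the Article 50 check, since transparency duties such as Article 50(3) apply precisely where emotion recognition or biometric categorisation systems are not high-risk.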
Risk classification is not a static exercise. If you modify your AI system's intended purpose, expand its scope, or change the context in which it operates, you must reassess its classification and update the documented Article 6(3) assessment accordingly.
Special Cases and Considerations
General-Purpose AI Models
General-purpose AI models (GPAI) are regulated separately under Chapter V of the AI Act. A GPAI model is not itself classified under the four-tier risk system. However, when a GPAI model is integrated into an AI system, that AI system is classified according to its intended purpose and use case.
For example, a large language model is governed by the GPAI provisions. But if that model is used to power a recruitment screening tool, the resulting AI system would be classified as high-risk under Annex III, point 4.
AI Systems with Multiple Uses
Some AI systems can serve multiple purposes. The classification should be based on each intended purpose individually. An AI system that is used for both minimal-risk applications (such as general customer analytics) and high-risk applications (such as creditworthiness assessment) must comply with high-risk requirements for the latter use.
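Following the sketch after Step 6 above, each intended purpose is classified independently. A brief usage example mirroring the scenario just described (the flag values are illustrative assumptions):

```python
analytics = classify(False, False, False, False, False, False)
credit = classify(
    prohibited_under_article_5=False,
    safety_component_needing_third_party_assessment=False,
    annex_iii_use_case=True,   # Annex III, point 5: creditworthiness
    poses_significant_risk=True,
    performs_profiling=True,
    triggers_article_50=False,
)
print(analytics)  # minimal risk (no mandatory AI Act obligations)
print(credit)     # high-risk (Article 6(2), Annex III)
```

Each use must satisfy its own tier's requirements: the high-risk obligations attach to the creditworthiness use regardless of how the analytics use is classified.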
Changes After Market Placement
If a provider substantially modifies an AI system after it has been placed on the market, the modified system may need to be reclassified and undergo a new conformity assessment. Similarly, deployers who use an AI system for a purpose that was not intended by the provider should conduct their own risk assessment.
Conclusion
The EU AI Act's four-tier risk classification system provides a proportionate regulatory framework that concentrates compliance obligations where they matter most. By distinguishing between unacceptable, high, limited, and minimal-risk AI systems, the regulation avoids imposing unnecessary burden on low-risk applications while ensuring robust protections where AI can significantly affect people's lives.
For organisations navigating this framework, accurate classification is the foundation of every compliance decision that follows. It determines which requirements apply, which deadlines to prioritise, and what resources to allocate. Taking the time to classify your AI systems correctly — and documenting that classification thoroughly — will save significant effort and uncertainty down the line.