Annex III Explained: Standalone High-Risk AI Systems Under the EU AI Act
Detailed breakdown of all eight categories of standalone high-risk AI systems in Annex III of the EU AI Act — biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice.
Annex III of the EU AI Act is the single most consequential annex in the regulation. It lists eight categories of AI systems classified as high-risk under Article 6(2) — not because of any underlying product safety regulation, but because of the sensitive areas in which they are used.
If your AI system falls within an Annex III category and does not qualify for the Article 6(3) carve-out, you face the full set of high-risk obligations under Articles 8–15, including risk management, technical documentation, data governance, human oversight, accuracy and cybersecurity measures, and conformity assessment.
This article walks through each of the eight Annex III categories in detail, explaining what is covered, what is not, and how the Article 6(3) carve-out applies in practice.
Annex III is dynamic. Article 7 of the regulation empowers the Commission to add new use cases via delegated act when AI use in a new area creates a comparable risk to fundamental rights. The list reproduced below reflects the text of Regulation (EU) 2024/1689 as adopted; future amendments may broaden it.
The Annex III Framework
Before walking through the eight categories, it helps to understand how Annex III interacts with the rest of the regulation (a short code sketch after this list shows how the pieces combine):
- Article 6(2) declares AI systems referred to in Annex III to be high-risk
- Article 6(3) provides a narrow carve-out: a system in an Annex III area is not high-risk if it does not pose a significant risk of harm. The provider must document the assessment.
- Article 6(3), second subparagraph, explicitly states that AI systems performing profiling of natural persons are always high-risk, regardless of any carve-out argument
- Article 49 requires providers and certain deployers of Annex III high-risk systems to register them in the EU database
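These moving parts are easier to hold together as a decision procedure. The sketch below is purely illustrative: the AISystem fields and the simplified boolean inputs are assumptions made for exposition, not terms defined in the regulation, and any real classification requires legal analysis of the actual text.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical, simplified representation of a system under assessment."""
    in_annex_iii_area: bool       # falls within one of the eight Annex III categories
    performs_profiling: bool      # performs profiling of natural persons
    poses_significant_risk: bool  # result of the provider's documented risk assessment

def classify(system: AISystem) -> str:
    """Illustrative triage following the Article 6(2)/6(3) logic described above."""
    if not system.in_annex_iii_area:
        return "not high-risk under Annex III (check Article 5, Annex I, and Article 50 separately)"
    if system.performs_profiling:
        # Article 6(3), second subparagraph: profiling systems are always high-risk
        return "high-risk (profiling: carve-out unavailable)"
    if not system.poses_significant_risk:
        # Article 6(3) carve-out: document the assessment (Article 6(4))
        # and register under Article 49(2)
        return "not high-risk (carve-out: document assessment and register)"
    return "high-risk (Articles 8-15 apply; register under Article 49)"
```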
Category 1: Biometrics
Annex III, point 1 covers AI systems intended to be used for biometric purposes, to the extent their use is permitted under Union or national law:
(a) Remote biometric identification systems. This does not include AI systems intended for biometric verification whose sole purpose is to confirm that a specific natural person is the person they claim to be. The carve-out for verification (1-to-1 matching) is important: most authentication systems used for accessing devices or services do not fall within this Annex III category.
(b) AI systems intended for biometric categorisation according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics.
(c) AI systems intended to be used for emotion recognition. Emotion recognition in workplaces and educational institutions is prohibited under Article 5(1)(f) (with limited exceptions for medical and safety reasons); elsewhere it is high-risk under this Annex III provision and additionally subject to Article 50 transparency obligations.
The biometrics category overlaps significantly with the prohibitions in Article 5. Practical compliance often requires checking the prohibitions first (does any aspect of the system fall under Article 5?) before assessing the high-risk obligations.
Category 2: Critical Infrastructure
Annex III, point 2 covers AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity.
The key term is safety component (Article 3(14)): a component that fulfils a safety function, or whose failure or malfunctioning endangers the health and safety of persons or property. Generic optimisation software for, say, a power grid is not automatically captured; an AI system whose decisions could trigger physical safety incidents is.
This category includes:
- AI-based traffic management systems whose outputs control signalling, ramp metering, or speed limits
- AI systems controlling the supply or distribution of utilities where failure could cause supply interruption or safety hazards
- AI systems managing critical digital infrastructure such as Domain Name System (DNS) resolvers or major cloud platforms classified as critical under national law
By contrast, traffic management in aviation, rail, and maritime transport falls under the sectoral legislation listed in Annex I rather than under this Annex III point, which is limited to road traffic.
Cybersecurity intrusion-detection AI used to protect critical infrastructure is generally not a safety component of the infrastructure itself, but providers should assess case-by-case.
Category 3: Education and Vocational Training
Annex III, point 3 covers AI systems used in education and vocational training contexts, specifically:
(a) Determining access, admission, or assignment of natural persons to educational and vocational training institutions at all levels.
(b) Evaluating learning outcomes, including when those outcomes are used to steer the learning process.
(c) Assessing the appropriate level of education that an individual will receive or will be able to access in the context of or within educational and vocational training institutions at all levels.
(d) Monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions.
Education AI raises particular concerns because errors can shape life trajectories. Universities using AI to screen applications, schools using AI to assess essays at scale, or vocational programmes using AI to allocate students to streams all fall within Annex III.
Note that Article 5(1)(f) prohibits emotion recognition in educational institutions, so attention-monitoring AI in classrooms — even if framed as engagement analytics — is generally not just high-risk but prohibited.
Category 4: Employment, Workers Management, and Access to Self-Employment
Annex III, point 4 covers AI systems used in employment contexts:
(a) Recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates.
(b) Decisions affecting terms of work-related relationships, the promotion or termination of work-related relationships, the allocation of tasks based on individual behaviour or personal traits or characteristics, or the monitoring and evaluation of performance and behaviour of persons in such relationships.
This category is broad. It captures:
- AI hiring systems and applicant tracking systems with predictive features
- AI used to schedule shifts based on worker performance scores
- AI-driven task allocation in gig-economy platforms
- AI-based performance scoring or productivity-monitoring systems
- AI used in promotion and termination decisions
Purely administrative tools that do not influence decisions about workers may escape via the Article 6(3) carve-out, but profiling systems are explicitly high-risk regardless of any carve-out argument.
Category 5: Access to and Enjoyment of Essential Private and Public Services
Annex III, point 5 covers AI systems used to grant or deny access to essential services. It has four sub-points:
(a) Public assistance benefits and services. AI systems intended to be used by public authorities or on their behalf to evaluate the eligibility of natural persons for public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services.
(b) Creditworthiness and credit scoring. AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud. This is the basis for the high-risk classification of credit-scoring AI.
(c) Life and health insurance risk assessment and pricing. AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance. See AI in insurance compliance.
(d) Emergency calls and dispatch. AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems.
This is one of the most economically significant Annex III categories. It captures large parts of financial services AI and government-services AI.
Category 6: Law Enforcement
Annex III, point 6 covers AI systems used by law enforcement authorities (or by other authorities on their behalf) in support of law enforcement:
(a) AI systems used as polygraphs or similar tools (emotion recognition systems are covered separately under the biometrics category, point 1(c)).
(b) Evaluating the reliability of evidence in the course of investigation or prosecution of criminal offences.
(c) Profiling of natural persons for the purposes of analysing or predicting criminal behaviour — note that pure profile-based individual risk assessment is prohibited under Article 5(1)(d).
(d) Profiling of natural persons in the course of detection, investigation, or prosecution of criminal offences.
(e) Assessing the risk of a natural person becoming a victim of criminal offences.
This category has been narrowed compared to earlier drafts. Article 5 prohibitions exclude the worst practices (pure profile-based predictive policing, untargeted facial-recognition scraping, real-time remote biometric ID in public spaces with narrow exceptions). What remains in Annex III is still tightly regulated.
Category 7: Migration, Asylum, and Border Control Management
Annex III, point 7 covers AI systems used by or on behalf of competent public authorities in the area of migration, asylum, and border control:
(a) AI systems used as polygraphs or similar tools.
(b) Assessing risks posed by a natural person entering or having entered the territory of a Member State, including security risks, irregular migration risks, or health risks.
(c) Examining applications for asylum, visa, or residence permits, including assessing the reliability of evidence.
(d) Detecting, recognising, or identifying natural persons in the context of migration, asylum, and border control management, other than for the purposes of verifying travel documents.
This area is politically sensitive. The high-risk classification reflects the severity of decisions involved (denial of asylum, deportation, detention) and the vulnerability of the populations affected.
Category 8: Administration of Justice and Democratic Processes
Annex III, point 8 has two sub-points:
(a) Administration of justice. AI systems intended to be used by a judicial authority or on its behalf to assist a judicial authority in researching and interpreting facts and the law, and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution.
(b) Democratic processes. AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referendums. This does not include AI systems whose output natural persons are not directly exposed to (such as tools used to organise, optimise, or structure political campaigns from an administrative or logistical point of view).
This is the most recently added Annex III category, reflecting concern about AI-driven electoral manipulation. The carve-out for back-office political tools is important — campaign-management software is not high-risk just because it uses AI.
The Article 6(3) Carve-Out
A system that falls within an Annex III category is not automatically high-risk. Article 6(3) provides four alternative scenarios in which the classification does not apply (meeting any one of them suffices), unless the system performs profiling of natural persons:
- Narrow procedural task. The AI system is intended to perform a narrow procedural task (for example, a system that converts unstructured text into structured data).
- Improving previously completed human activity. The system is intended to improve the result of a previously completed human activity (for example, AI that polishes drafts already created by a human evaluator).
- Detecting decision-making patterns. The system is intended to detect decision-making patterns or deviations from prior decision-making patterns, and is not meant to replace or influence the previously completed human assessment without proper human review.
- Preparatory tasks. The system is intended to perform a preparatory task to an assessment relevant for the purposes of the Annex III use cases (for example, smart solutions for file management that classify documents).
The carve-out is narrower than it sounds. The default presumption is that an Annex III system is high-risk; under Article 6(4), a provider relying on the carve-out must document its assessment before placing the system on the market, and a regulator can challenge that assessment.
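To make the "any one condition suffices" structure concrete, here is a minimal sketch of how a provider might record which carve-out condition it relies on. The condition summaries paraphrase Article 6(3); the function and field names are hypothetical:

```python
# Paraphrased summaries of the four Article 6(3) conditions (not official text)
CARVE_OUT_CONDITIONS = {
    "a": "performs a narrow procedural task",
    "b": "improves the result of a previously completed human activity",
    "c": "detects decision-making patterns or deviations without replacing or "
         "influencing the prior human assessment absent proper human review",
    "d": "performs a preparatory task to an Annex III-relevant assessment",
}

def carve_out_available(conditions_met: set[str], performs_profiling: bool) -> bool:
    """Any single condition (a)-(d) suffices, but never for profiling systems."""
    if performs_profiling:
        return False
    return bool(conditions_met & set(CARVE_OUT_CONDITIONS))

# A document-classification tool relying on condition (d) can argue the carve-out;
# the same tool loses the argument entirely if it profiles natural persons
assert carve_out_available({"d"}, performs_profiling=False)
assert not carve_out_available({"d"}, performs_profiling=True)
```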
Common Misclassifications
We see several recurring errors when companies classify their systems (two are illustrated as test cases after the list):
1. Assuming customer-service chatbots are high-risk. They are usually limited-risk under Article 50 unless deployed in a high-risk use case (for example, a chatbot screening job applications).
2. Assuming recommendation systems are high-risk. Standard product or content recommendation systems are usually minimal-risk. They become high-risk only in specific Annex III contexts.
3. Missing the biometric-verification carve-out. AI used solely for 1-to-1 verification (proving someone is who they claim to be) is excluded from the biometric high-risk category — though it remains subject to GDPR.
4. Treating credit-fraud detection as high-risk. Annex III, point 5(b) explicitly excludes fraud detection from the credit-scoring high-risk category.
5. Misapplying the Article 6(3) carve-out. The carve-out does not apply to systems performing profiling, and it is not self-executing: Article 6(4) requires providers to document their assessment before placing the system on the market.
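Reusing the illustrative AISystem and classify sketch from the framework section, the biometric-verification and chatbot errors above reduce to simple test cases (the boolean inputs remain assumptions):

```python
# Error 3: 1-to-1 biometric verification is excluded from the Annex III
# biometrics category altogether (hypothetical inputs)
verification = AISystem(in_annex_iii_area=False, performs_profiling=False,
                        poses_significant_risk=False)
assert classify(verification).startswith("not high-risk under Annex III")

# Error 1 (inverted): a chatbot screening job applications sits in the
# employment category and profiles candidates, so no carve-out is available
cv_screening_bot = AISystem(in_annex_iii_area=True, performs_profiling=True,
                            poses_significant_risk=True)
assert classify(cv_screening_bot) == "high-risk (profiling: carve-out unavailable)"
```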
What to Do If Your System Is in Annex III
If your system falls within an Annex III category and you cannot rely on the Article 6(3) carve-out, you need to do the following (a planning sketch follows the list):
- Build a risk management system under Article 9
- Establish data governance that meets Article 10 (relevance, representativeness, statistical properties of training, validation, and testing data)
- Prepare technical documentation per Annex IV under Article 11
- Implement automatic event logging under Article 12
- Provide transparency to deployers under Article 13
- Implement human oversight measures under Article 14
- Document accuracy, robustness, and cybersecurity under Article 15
- Run a conformity assessment under Article 43
- Issue an EU declaration of conformity under Article 47 and affix CE marking under Article 48
- Register the system in the EU database under Article 49
- Operate a post-market monitoring system under Article 72
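For planning, the obligations above translate naturally into a tracked checklist. The sketch below is a hypothetical project-plan skeleton, not an official artefact; the article-to-task mapping simply mirrors the list:

```python
HIGH_RISK_OBLIGATIONS = [
    ("Article 9",  "Risk management system"),
    ("Article 10", "Data governance for training, validation, and testing data"),
    ("Article 11", "Technical documentation per Annex IV"),
    ("Article 12", "Automatic event logging"),
    ("Article 13", "Transparency and instructions for deployers"),
    ("Article 14", "Human oversight measures"),
    ("Article 15", "Accuracy, robustness, and cybersecurity"),
    ("Article 43", "Conformity assessment"),
    ("Article 47", "EU declaration of conformity"),
    ("Article 48", "CE marking"),
    ("Article 49", "EU database registration"),
    ("Article 72", "Post-market monitoring system"),
]

def open_items(done: set[str]) -> list[str]:
    """Return the obligations not yet marked complete, keyed by article."""
    return [f"{art}: {desc}" for art, desc in HIGH_RISK_OBLIGATIONS if art not in done]

# Example: risk management and documentation finished, everything else open
print(open_items(done={"Article 9", "Article 11"}))
```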
This is substantial work. Plan for six to twelve months of compliance build-out before the 2 August 2026 deadline for new systems.
Conclusion
Annex III is the heart of the high-risk regime. Every organisation deploying AI in the EU needs to know whether its systems fall within these eight categories, whether the Article 6(3) carve-out applies, and what obligations attach if it does not.
For a broader framing of how Annex III fits alongside the rest of the regulation, see the risk classification system article and the complete EU AI Act overview. For practical execution, the compliance checklist for CTOs and CIOs translates these obligations into a project plan.
Frequently Asked Questions
What is Annex III of the EU AI Act?
Annex III lists the standalone AI use cases that Article 6(2) classifies as high-risk because of the sensitive areas in which they are deployed, independently of any product safety legislation.

How many categories are in Annex III?
Eight: biometrics, critical infrastructure, education and vocational training, employment, essential private and public services, law enforcement, migration and border control, and administration of justice and democratic processes.

Is every AI system in these areas high-risk?
No. Article 6(3) provides a narrow carve-out for systems that do not pose a significant risk of harm, but it never applies to systems performing profiling of natural persons, and the provider must document the assessment.

When do Annex III obligations apply?
The high-risk obligations for new Annex III systems apply from 2 August 2026.

Can the EU change Annex III?
Yes. Article 7 empowers the Commission to amend Annex III by delegated act when AI use in a new area poses a comparable risk to fundamental rights.
Related Articles
Annex I Explained: AI in Regulated Products Under the EU AI Act
How Annex I of the EU AI Act classifies AI systems embedded in regulated products — medical devices, machinery, toys, vehicles, aviation, marine, and more. Conformity assessment, deadlines, and the MDR/IVDR interaction.
High-Risk AI Systems: Complete Requirements Under the EU AI Act
Detailed guide to the requirements for high-risk AI systems under the EU AI Act — risk management, data governance, documentation, human oversight, accuracy, and cybersecurity.
EU AI Act Risk Classification: Four Levels Explained
Deep dive into the EU AI Act's four-tier risk classification system — unacceptable, high, limited, and minimal risk. Learn which category your AI system falls into and what's required.