
High-Risk AI Systems: Complete Requirements Under the EU AI Act

Detailed guide to the requirements for high-risk AI systems under the EU AI Act — risk management, data governance, documentation, human oversight, accuracy, and cybersecurity.

February 1, 2025 · 18 min read

High-risk AI systems are the centrepiece of the EU AI Act's regulatory framework. While the regulation establishes four tiers of risk, it is the high-risk category that carries the most detailed and demanding set of compliance requirements. Articles 8 through 15 of Regulation 2024/1689 lay out a comprehensive set of obligations that providers must meet before placing a high-risk AI system on the market or putting it into service.

This article provides a detailed examination of each requirement, with practical guidance on what compliance looks like in practice.

The requirements in Articles 8-15 apply to providers of high-risk AI systems. Deployers have separate obligations under Article 26, but they depend heavily on the documentation and design choices made by providers. Both roles should understand these requirements thoroughly.

Article 8: Compliance with Requirements

Article 8 establishes the overarching principle: high-risk AI systems must comply with the requirements laid down in Articles 9 to 15, taking into account the intended purpose of the system as well as the generally acknowledged state of the art.

This article also introduces the concept of proportionality. The requirements must be met "taking into account the intended purpose of the high-risk AI system and the risk management system referred to in Article 9." This means that the depth and rigour of compliance measures should be proportionate to the specific risks posed by the system in question.

The State of the Art Standard

The reference to "the generally acknowledged state of the art" in Article 8(1) is significant. It means that compliance is not judged against a fixed technical standard but against evolving best practices. What constitutes adequate risk management or sufficient accuracy today may not be sufficient tomorrow as the field advances. Providers must stay current with developments in AI safety, fairness, and security.

Article 9: Risk Management System

Article 9 is arguably the most foundational requirement. It mandates that a risk management system be established, implemented, documented, and maintained for every high-risk AI system.

What the Risk Management System Must Include

The risk management system is defined as a continuous, iterative process that runs throughout the entire lifecycle of the high-risk AI system. Under Article 9(2), it must include:

Identification and analysis of known and reasonably foreseeable risks that the high-risk AI system can pose to health, safety, or fundamental rights when used in accordance with its intended purpose.

Estimation and evaluation of risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.

Evaluation of risks arising from the analysis of data gathered from the post-market monitoring system referred to in Article 72.

Adoption of appropriate and targeted risk management measures designed to address the identified risks.
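
To make these steps concrete, the sketch below shows one way a provider might capture them in a living risk register. It is a minimal illustration only: the class names, fields, and scoring scales are assumptions, not anything prescribed by the Act.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskSource(Enum):
    """Where a risk was identified, mirroring the Article 9(2) steps."""
    INTENDED_USE = "intended use"
    FORESEEABLE_MISUSE = "reasonably foreseeable misuse"
    POST_MARKET_DATA = "post-market monitoring (Article 72)"


@dataclass
class RiskEntry:
    hazard: str                        # e.g. "higher error rates for one age group"
    source: RiskSource
    severity: int                      # 1 (negligible) to 5 (critical); scale is illustrative
    likelihood: int                    # 1 (rare) to 5 (frequent)
    mitigations: list[str] = field(default_factory=list)
    residual_acceptable: bool = False  # judged against Article 9(5)


def open_risks(register: list[RiskEntry]) -> list[RiskEntry]:
    """Risks whose residual level has not yet been judged acceptable."""
    return [r for r in register if not r.residual_acceptable]
```

Because the process is iterative, such a register would be revisited whenever post-market monitoring surfaces new data, not completed once at design time.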

Risk Mitigation Approach

Article 9(4) requires that risk management measures give due consideration to the effects and possible interactions resulting from the combined application of the requirements. Under Article 9(5), they must be such that the relevant residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI system, is judged to be acceptable.

The risk management measures must take into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specifications.

Testing Requirements

Article 9(6)-(8) addresses testing specifically. High-risk AI systems must be tested to identify the most appropriate and targeted risk management measures. Testing must ensure that the system performs consistently for its intended purpose and complies with the requirements. Testing procedures must be suitable to fulfil the intended purpose of the AI system and need not go beyond what is necessary to achieve that purpose.

Testing must be performed against prior-defined metrics and probabilistic thresholds, and must take into account the specific characteristics of the intended purpose.
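
As an illustration of what testing against prior-defined thresholds can look like, the sketch below fixes the pass criteria before the test run and evaluates a binary classifier against them. The metric names and threshold values are assumptions; the Act leaves their choice to the provider.

```python
# Thresholds fixed before the test run; the values are illustrative.
THRESHOLDS = {"accuracy": 0.95, "false_positive_rate": 0.02}


def evaluate(predictions: list[int], labels: list[int]) -> dict[str, float]:
    """Compute the metrics named in THRESHOLDS for a binary classifier."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    false_positives = sum(p == 1 and y == 0 for p, y in zip(predictions, labels))
    negatives = sum(y == 0 for y in labels)
    return {
        "accuracy": correct / len(labels),
        "false_positive_rate": false_positives / negatives if negatives else 0.0,
    }


def test_passes(metrics: dict[str, float]) -> bool:
    """The system passes only if every pre-defined threshold is met."""
    return (metrics["accuracy"] >= THRESHOLDS["accuracy"]
            and metrics["false_positive_rate"] <= THRESHOLDS["false_positive_rate"])
```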

The risk management system is not a document you create once and file away. Article 9(1) explicitly states it must be a "continuous iterative process planned and run throughout the entire lifecycle." This means regular updates as the system evolves, as new risks emerge from real-world use, and as the state of the art advances.

Article 10: Data and Data Governance

Article 10 establishes requirements for the data used to train, validate, and test high-risk AI systems. Data quality is treated as a cornerstone of trustworthy AI.

Training, Validation, and Testing Data

Under Article 10(2), training, validation, and testing datasets must be subject to data governance and management practices appropriate for the intended purpose of the AI system. These practices concern:

  • The relevant design choices
  • Data collection processes and their origin, and in the case of personal data, the original purpose of data collection
  • Relevant data-preparation processing operations such as annotation, labelling, cleaning, updating, enrichment, and aggregation
  • The formulation of relevant assumptions, notably with respect to the information that the data are supposed to measure and represent
  • An assessment of the availability, quantity, and suitability of the data sets needed
  • Examination in view of possible biases likely to affect the health and safety of persons, have a negative impact on fundamental rights, or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations (feedback loops)
  • Appropriate measures to detect, prevent, and mitigate possible biases (a minimal detection sketch follows this list)
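
The bias-examination points lend themselves to automated checks. The sketch below computes per-group error rates as one simple starting signal; the grouping variable and data format are hypothetical, and a real examination would go well beyond a single metric.

```python
from collections import defaultdict


def error_rate_by_group(records: list[tuple[str, int, int]]) -> dict[str, float]:
    """records holds (group, prediction, label) triples, grouped by an
    attribute relevant to the intended purpose. Large gaps between groups
    are the kind of signal the Article 10(2) examination is meant to surface."""
    totals: dict[str, int] = defaultdict(int)
    errors: dict[str, int] = defaultdict(int)
    for group, prediction, label in records:
        totals[group] += 1
        errors[group] += int(prediction != label)
    return {group: errors[group] / totals[group] for group in totals}
```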

Dataset Requirements

Article 10(3) specifies that training, validation, and testing datasets must be relevant, sufficiently representative, and to the best extent possible free of errors and complete in view of the intended purpose. They must have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used.

Special Provisions for Bias

Article 10(5) permits the processing of special categories of personal data (as defined in Article 9(1) of the GDPR and Article 10 of the Law Enforcement Directive) to the extent that it is strictly necessary for bias detection and correction. This is an important carve-out that allows providers to process sensitive data specifically for the purpose of ensuring fairness.

Article 11: Technical Documentation

Article 11 requires providers to draw up technical documentation before the high-risk AI system is placed on the market or put into service. The documentation must be kept up to date.

Content Requirements

The technical documentation must be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in Articles 8 to 15 and to provide national competent authorities and notified bodies with the necessary information to assess the compliance of the AI system. It must contain, at a minimum, the elements set out in Annex IV.

Annex IV specifies extensive documentation requirements including:

  • A general description of the AI system, its intended purpose, and the provider
  • A detailed description of the elements of the AI system and its development process
  • Detailed information about the monitoring, functioning, and control of the AI system
  • A description of the risk management system
  • Information about the data used (training, validation, testing datasets)
  • A description of the human oversight measures
  • A description of pre-determined changes to the system
  • A list of harmonised standards or common specifications applied
  • A copy of the EU declaration of conformity
  • A detailed description of the system for evaluating the AI system's performance in the post-market phase

The European Commission may adopt implementing acts establishing a common template for the technical documentation (Article 11(3)). When available, using this template will simplify compliance and ensure consistency. Until then, providers should follow Annex IV closely.

Article 12: Record-Keeping

Article 12 requires that high-risk AI systems be designed and developed with capabilities enabling the automatic recording of events (logs) while the system is operating.

Logging Capabilities

The logging capabilities must enable the recording of events relevant for identifying situations that may result in the AI system presenting a risk or in a substantial modification, and must facilitate the post-market monitoring referred to in Article 72.

Under Article 12(3), for high-risk AI systems intended to be used for remote biometric identification (Annex III, point 1(a)), the logging capabilities must provide at a minimum the following (a minimal logging sketch follows the list):

  • Recording of the period of each use of the system (start date and time, end date and time)
  • The reference database against which input data has been checked
  • Input data for which the search has led to a match
  • Identification of the natural persons involved in the verification of the results (as referred to in Article 14(5))
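
As an illustration, a JSON-lines logger covering these four minimum elements might look like the sketch below; the field names are assumptions, not mandated by the Act.

```python
import json
from datetime import datetime, timezone


def log_use(logfile, reference_db: str, matched_inputs: list[str],
            verifier_ids: list[str], started: datetime) -> None:
    """Append one JSON-lines record per use, covering the Article 12(3)
    minimum: period of use, reference database, matched inputs, and the
    natural persons who verified the result (Article 14(5))."""
    record = {
        "use_started": started.isoformat(),
        "use_ended": datetime.now(timezone.utc).isoformat(),
        "reference_database": reference_db,
        "matched_inputs": matched_inputs,
        "verified_by": verifier_ids,
    }
    logfile.write(json.dumps(record) + "\n")
```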

Traceability

The purpose of these logging requirements is traceability. When an incident occurs or a complaint is raised, logs must allow authorities and providers to reconstruct what the system did, when, and based on what inputs. This is essential for effective post-market monitoring and for investigations by market surveillance authorities.


Article 13: Transparency and Provision of Information to Deployers

Article 13 requires that high-risk AI systems be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system's output and use it appropriately.

Instructions for Use

Under Article 13(3), providers must supply deployers with instructions for use that include, at a minimum, the following (a structured sketch follows the list):

  • The identity and contact details of the provider
  • The characteristics, capabilities, and limitations of performance of the high-risk AI system, including its intended purpose, the level of accuracy, robustness, and cybersecurity, and any known or foreseeable circumstance that may lead to risks to health, safety, or fundamental rights
  • Changes to the system that have been pre-determined by the provider
  • Human oversight measures, including the technical measures designed to facilitate the interpretation of AI system outputs by deployers
  • Computational and hardware resources needed, expected lifetime, and maintenance and care measures to ensure proper functioning
  • Where relevant, a description of input data and any measures to ensure appropriate data quality
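
One way to keep these elements consistent with the technical documentation is to hold them in a structured record. The sketch below mirrors the Article 13(3) list; all field names are illustrative, not prescribed by the Act.

```python
from dataclasses import dataclass


@dataclass
class InstructionsForUse:
    """Structured mirror of the Article 13(3) minimum content."""
    provider_identity: str
    provider_contact: str
    intended_purpose: str
    performance_characteristics: str      # capabilities and limitations
    declared_accuracy: dict[str, float]   # see Article 15 below
    known_risk_circumstances: list[str]
    predetermined_changes: list[str]
    oversight_measures: list[str]
    resource_requirements: str            # computational and hardware resources
    expected_lifetime: str
    maintenance_measures: str
    input_data_description: str = ""      # where relevant
```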

Interpretability

The transparency requirement goes beyond mere documentation. Article 13(1) emphasises that AI systems must be designed so that deployers can actually interpret outputs and use them appropriately. For high-risk AI systems that make or inform decisions about individuals, this means providing information about the main factors, and where appropriate, the main parameters, contributing to a particular output.

Article 14: Human Oversight

Article 14 is one of the most distinctive features of the EU AI Act. It requires that high-risk AI systems be designed and developed so as to be effectively overseen by natural persons during the period in which they are in use.

Design for Oversight

Human oversight must be designed into the system by the provider and implemented by the deployer. Under Article 14(3), the oversight measures must be commensurate with the risks, level of autonomy, and context of use; under Article 14(4), they must enable the individuals who oversee the system to (a sketch of points (d) and (e) follows the list):

(a) Properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions, and unexpected performance.

(b) Remain aware of the possible tendency of automatically relying or over-relying on the output produced by the system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons.

(c) Be able to correctly interpret the high-risk AI system's output, taking into account, for example, the interpretation tools and methods available.

(d) Be able to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override, or reverse the output of the high-risk AI system.

(e) Be able to intervene in the operation of the high-risk AI system or interrupt the system through a "stop" button or a similar procedure that allows the system to come to a halt in a safe state.
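
Points (d) and (e) translate most directly into system design. The sketch below is a minimal illustration, assuming a callable model; the chosen safe-state behaviour (refusing further outputs once halted) is one design option, not a requirement of the Act.

```python
class OversightGate:
    """Wraps a model so that a human overseer can override any single
    output (point (d)) or halt the system entirely (point (e))."""

    def __init__(self, model):
        self.model = model
        self.halted = False

    def decide(self, inputs, human_override=None):
        if self.halted:
            raise RuntimeError("system halted by human overseer")
        output = self.model(inputs)
        # Point (d): the overseer may disregard, override, or reverse the output.
        return human_override if human_override is not None else output

    def stop(self) -> None:
        # Point (e): a 'stop button' bringing the system to a halt in a safe
        # state; here, the safe state is simply to refuse further outputs.
        self.halted = True
```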

Human oversight is not a rubber stamp. Article 14(4)(b) explicitly addresses automation bias, the tendency of humans to defer to automated outputs. Providers must design systems that actively counteract this tendency, and deployers must ensure that oversight personnel are properly trained and empowered to override the system.

Two-Person Verification for Biometric Identification

Under Article 14(5), for high-risk AI systems intended for remote biometric identification (Annex III, point 1(a)), the system must be designed so that no action or decision is taken by the deployer on the basis of an identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training, and authority. This requirement does not apply where Union or national law considers it disproportionate for systems used in the areas of law enforcement, migration, border control, or asylum.
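
The rule reduces naturally to a check on distinct confirmations. A minimal sketch, with illustrative names:

```python
class TwoPersonVerification:
    """Sketch of the Article 14(5) rule: no action on a biometric match
    until at least two distinct natural persons have separately confirmed it."""

    def __init__(self) -> None:
        self.confirmed_by: set[str] = set()

    def confirm(self, verifier_id: str) -> None:
        self.confirmed_by.add(verifier_id)  # a set deduplicates repeat confirmations

    def may_act(self) -> bool:
        return len(self.confirmed_by) >= 2
```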

Article 15: Accuracy, Robustness, and Cybersecurity

Article 15 establishes the technical performance requirements for high-risk AI systems, covering three distinct dimensions.

Accuracy

Article 15(1) requires that high-risk AI systems be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.

Under Article 15(3), the levels of accuracy and the relevant accuracy metrics must be declared in the accompanying instructions for use. This means providers must define, measure, and communicate the accuracy of their systems in concrete, measurable terms.
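
One illustrative way to make a declaration concrete is to report a confidence bound rather than a bare point estimate, so the declared figure accounts for the size of the test set. The Act does not mandate any particular method; the Wilson bound below is simply one common choice.

```python
from math import sqrt


def wilson_lower_bound(correct: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the 95% Wilson score interval for accuracy measured
    as correct/total; a conservative figure to declare in instructions for use."""
    if total == 0:
        return 0.0
    p = correct / total
    denominator = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    return (centre - margin) / denominator
```

For example, 950 correct outcomes out of 1,000 test cases give a point estimate of 95.0% but a declared lower bound of roughly 93.5%.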

Robustness

Article 15(4) addresses robustness, requiring that high-risk AI systems be as resilient as possible regarding errors, faults, or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. This includes:

  • Technical redundancy solutions, which may include backup or fail-safe plans (a fail-safe sketch follows this list)
  • Resilience against attempts by unauthorised third parties to alter the use or performance of the high-risk AI system by exploiting its vulnerabilities (adversarial robustness)
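
As an illustration of the first point, a simple fail-safe wrapper can route around a failing primary model. The sanity check and the fallback path below are assumptions; in practice the fallback might be a simpler redundant model or escalation to a human.

```python
def predict_with_failsafe(model, fallback, inputs):
    """Return the primary model's score, falling back to a backup path if
    the model raises or produces an out-of-range result."""
    try:
        score = model(inputs)
        if not 0.0 <= score <= 1.0:  # basic consistency check on the output
            raise ValueError(f"score out of range: {score}")
        return score
    except Exception:
        # Fail safe rather than fail silent: take the conservative backup path.
        return fallback(inputs)
```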

Cybersecurity

Article 15(5) requires that high-risk AI systems be resilient against attempts by unauthorised third parties to exploit vulnerabilities of the system. The cybersecurity measures must be appropriate to the relevant circumstances and risks, and may include measures to prevent and control for:

  • Attacks attempting to manipulate training datasets (data poisoning)
  • Attacks attempting to manipulate pre-trained components used in training (model poisoning)
  • Inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion)
  • Confidentiality attacks
  • Model flaws

Beyond Articles 8-15: Additional Provider Obligations

While Articles 8-15 define the core technical and process requirements, providers of high-risk AI systems face several additional obligations.

Quality Management System (Article 17)

Providers must implement a quality management system that ensures compliance with the regulation. The QMS must include:

  • A strategy for regulatory compliance, including conformity assessment procedures and management of modifications
  • Techniques, procedures, and systematic actions for design, design control, and design verification
  • Techniques, procedures, and actions for development, quality control, and quality assurance
  • Examination, test, and validation procedures to be carried out before, during, and after development, and the frequency of those procedures
  • Technical specifications, including standards, to be applied
  • Systems and procedures for data management, including data acquisition, collection, analysis, labelling, storage, filtering, mining, aggregation, retention, and any other operation regarding the data
  • A risk management system as referred to in Article 9
  • Post-market monitoring, including Article 72 obligations
  • Procedures for incident reporting under Article 73
  • Communication with competent authorities, other relevant actors, and deployers
  • Systems and procedures for record-keeping
  • Resource management including security-of-supply measures
  • An accountability framework

Conformity Assessment (Article 43)

Before a high-risk AI system can be placed on the market, providers must undergo a conformity assessment to verify that the system meets all applicable requirements. The nature of this assessment depends on the type of AI system:

  • For most high-risk AI systems listed in Annex III, providers conduct the conformity assessment through internal control (Annex VI), based on the quality management system and an assessment of the technical documentation.
  • For high-risk biometric systems (Annex III, point 1), internal control is available only where the provider has applied harmonised standards or common specifications in full; otherwise the Annex VII procedure involving a notified body applies. For Annex I systems, the conformity assessment follows the relevant sectoral legislation, with compliance with Articles 8-15 checked as part of that procedure.

EU Declaration of Conformity (Article 47)

Following a successful conformity assessment, the provider must draw up an EU declaration of conformity for each AI system and keep it available for 10 years after the system has been placed on the market or put into service.

CE Marking (Article 48)

The CE marking must be affixed to the high-risk AI system or, where that is not possible, on its packaging or accompanying documentation, in a visible, legible, and indelible manner.

Registration (Article 49)

Providers must register their high-risk AI system in the EU database (established under Article 71) before placing it on the market or putting it into service.

Post-Market Monitoring (Article 72)

Providers must establish and document a post-market monitoring system that is proportionate to the nature and risks of the AI system. The system must actively and systematically collect, document, and analyse relevant data on the system's performance throughout its lifetime.

Meeting these requirements is a substantial undertaking, but it is also an opportunity to build more robust, reliable, and trustworthy AI systems. Organisations that embed these practices into their development processes will produce better products while ensuring compliance.

Deployer Obligations (Article 26)

While the requirements in Articles 8-15 are addressed to providers, deployers of high-risk AI systems have their own obligations that are tightly linked to provider compliance.

Key Deployer Requirements

Under Article 26, deployers must:

  • Use the system according to instructions: Deployers must use the high-risk AI system in accordance with the instructions for use provided by the provider.
  • Assign competent human oversight: Ensure that natural persons assigned to human oversight are competent, properly trained, and have the authority and resources to fulfil their role effectively.
  • Monitor operation: To the extent the deployer exercises control over the AI system, ensure that input data is relevant and sufficiently representative in view of the intended purpose.
  • Suspend or discontinue use: If the deployer has reasons to consider that the use of the AI system in accordance with the instructions may result in risks, they must suspend the system and inform the provider or distributor.
  • Keep logs: Retain the logs automatically generated by the high-risk AI system for a period of at least six months (unless provided otherwise in applicable Union or national law).
  • Conduct fundamental rights impact assessment: Under Article 27, deployers that are public law bodies or private entities providing public services must conduct a fundamental rights impact assessment before putting a high-risk AI system into use.
  • Inform workers: Under Article 26(7), deployers must inform workers' representatives and affected workers that they will be subject to the use of a high-risk AI system.

Building a Compliance Programme

Meeting these requirements demands a structured compliance programme. Here is a practical approach.

Phase 1: Assessment

Inventory your AI systems, classify them under the risk framework, and identify which systems are high-risk. For each high-risk system, conduct a gap analysis against Articles 8-15 and the additional obligations.

Phase 2: Implementation

Address identified gaps systematically. Establish the risk management system, implement data governance practices, create technical documentation, build logging capabilities, design human oversight mechanisms, and ensure adequate accuracy, robustness, and cybersecurity.

Phase 3: Verification

Conduct the conformity assessment, prepare the EU declaration of conformity, affix the CE marking, and register the system in the EU database.

Phase 4: Ongoing Compliance

Maintain the post-market monitoring system, update documentation as the system evolves, report incidents, and regularly review the risk management system.

Conclusion

The requirements for high-risk AI systems under the EU AI Act are comprehensive and demanding. They cover the entire lifecycle of an AI system, from initial design through deployment and beyond. Articles 8 through 15 establish a framework that, when properly implemented, ensures that high-risk AI systems are safe, transparent, accountable, and respectful of fundamental rights.

The key to compliance is recognising that these requirements are interconnected. The risk management system informs data governance decisions, which shape the technical documentation, which underpins transparency, which enables human oversight. Treating these requirements as isolated checklists will lead to fragmented compliance. Treating them as a coherent system will produce both compliance and better AI.

Organisations should begin their compliance work now, using the phased approach outlined above. The August 2026 deadline for Annex III systems and the August 2027 deadline for Annex I systems may seem distant, but the depth of these requirements means that preparation cannot be deferred.

