
EU AI Act Compliance for Healthcare and Medical AI

How the EU AI Act applies to clinical AI, medical devices, diagnostic systems, and healthcare decision support — classification, MDR interaction, and compliance requirements.

March 30, 2025 · 12 min read

Artificial intelligence is transforming healthcare at a remarkable pace. AI systems now assist with medical imaging analysis, clinical decision support, drug dosing optimisation, patient triage, and surgical robotics. The EU AI Act (Regulation 2024/1689) recognises the sensitivity of this domain by classifying many healthcare AI applications as high-risk, while also creating a complex interaction with existing medical device regulation that healthcare organisations must navigate carefully.

This article examines how the EU AI Act applies to healthcare and medical AI, how it interacts with the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR), and what compliance strategies healthcare organisations and medical AI providers should pursue.

How the EU AI Act Classifies Healthcare AI

Healthcare AI systems can be classified as high-risk under the EU AI Act through two distinct pathways, and understanding the difference is essential for compliance planning.

Pathway 1: Annex I — Medical Device Legislation

Annex I, Section A of the EU AI Act lists Union harmonisation legislation that, when applicable to a product, triggers high-risk classification for any AI system that is a safety component of that product or is itself such a product. This includes:

  • Regulation (EU) 2017/745 — the Medical Devices Regulation (MDR)
  • Regulation (EU) 2017/746 — the In Vitro Diagnostic Medical Devices Regulation (IVDR)

Any AI system that qualifies as a medical device or an in vitro diagnostic medical device, or that is a safety component of such a device, is automatically classified as high-risk under the EU AI Act if it is required to undergo a third-party conformity assessment under the relevant medical device legislation.

This pathway captures a wide range of healthcare AI applications, including AI-powered diagnostic imaging tools, AI-based clinical decision support software (Software as a Medical Device, or SaMD), AI systems embedded in surgical robots, and AI-driven laboratory diagnostics.

Pathway 2: Annex III — Standalone Classification

Even where healthcare AI does not fall under medical device legislation, Annex III of the EU AI Act independently classifies certain healthcare-adjacent AI systems as high-risk. Specifically, Annex III, point 5(a) covers AI systems intended to be used by or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services. AI systems used in triage, patient prioritisation, or allocation of healthcare resources may fall within this category.

Additionally, AI systems used for biometric identification in healthcare settings, or for HR and recruitment purposes within healthcare organisations, may trigger high-risk classification under other Annex III categories.

Not all healthcare AI is high-risk. A wellness app that provides general health tips using AI, or an administrative scheduling tool that uses machine learning to optimise appointment booking, would likely fall into the minimal or limited risk category. Classification depends on the specific function and impact of the AI system, not merely on its use within a healthcare setting.

The Critical Interaction Between the EU AI Act and the MDR/IVDR

The relationship between the EU AI Act and existing medical device regulation is one of the most important — and most complex — aspects of healthcare AI compliance.

Integrated Conformity Assessment

For AI systems that are medical devices, Article 43(3) of the EU AI Act provides that the AI Act conformity assessment can be integrated into the conformity assessment already required under the MDR or IVDR. This means that healthcare AI providers do not necessarily face two entirely separate assessment processes. Instead, the notified body conducting the MDR/IVDR conformity assessment will also verify compliance with the EU AI Act's requirements.

However, this integration does not reduce the substantive requirements. The AI system must still comply with all applicable requirements under Articles 8 through 15 of the EU AI Act in addition to the MDR/IVDR requirements. The integration is procedural, not substantive.

Where Requirements Overlap and Diverge

The MDR and the EU AI Act share several conceptual foundations — risk management, clinical evaluation, post-market surveillance, and technical documentation. But they are not identical, and healthcare AI providers must address the specific requirements of each framework:

Risk management: The MDR requires risk management in accordance with ISO 14971. The EU AI Act's Article 9 imposes its own risk management requirements, which include specific provisions for bias assessment, feedback loops, and fundamental rights impacts that go beyond traditional medical device risk analysis.

Clinical evidence: The MDR requires clinical evidence demonstrating safety and performance. The EU AI Act requires evidence of accuracy, robustness, and non-discrimination. For AI-based medical devices, both forms of evidence must be generated and maintained.

Post-market monitoring: Both frameworks require post-market surveillance, but the EU AI Act's Article 72 adds specific requirements for AI systems, including monitoring for bias drift, accuracy degradation, and emerging risks to fundamental rights.

Technical documentation: The MDR requires technical documentation per its Annexes II and III; the EU AI Act requires documentation per its own Annex IV. Healthcare AI providers must maintain documentation that satisfies both sets of requirements.

The European Commission and the Medical Device Coordination Group (MDCG) are expected to issue guidance on the practical integration of EU AI Act and MDR requirements. Healthcare AI providers should monitor these developments closely, as the guidance will shape how notified bodies assess compliance in practice.

Key Compliance Requirements for Healthcare AI

Data Governance in Healthcare Contexts

Article 10 of the EU AI Act imposes stringent data governance requirements that have particular implications for healthcare AI:

Representative datasets: Medical AI training data must be representative of the patient populations to which the system will be applied. A diagnostic imaging AI trained primarily on data from one demographic group may perform poorly on others, creating both a clinical risk and a compliance issue. Article 10(3) requires that datasets be "sufficiently representative" in view of the intended purpose.
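
As a concrete illustration, a representativeness check can be as simple as comparing subgroup shares in the training set against the intended patient population. The Python sketch below is illustrative only: the age bands, reference proportions, and tolerance are assumptions, not thresholds prescribed by the Act.

```python
# Minimal sketch: compare training-set demographic shares against the
# intended patient population. Group labels, reference proportions, and
# the tolerance are illustrative assumptions.
from collections import Counter

def representativeness_report(train_groups, reference_props, tolerance=0.05):
    """Flag groups whose share of the training data falls short of the
    intended-population share by more than `tolerance`."""
    n = len(train_groups)
    counts = Counter(train_groups)
    report = {}
    for group, ref_p in reference_props.items():
        train_p = counts.get(group, 0) / n
        report[group] = {
            "train_share": round(train_p, 3),
            "reference_share": ref_p,
            "underrepresented": (ref_p - train_p) > tolerance,
        }
    return report

# Example: age bands in an imaging training set vs. the target population.
train = ["18-39"] * 700 + ["40-64"] * 250 + ["65+"] * 50
reference = {"18-39": 0.35, "40-64": 0.40, "65+": 0.25}
print(representativeness_report(train, reference))
```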

Bias detection for protected groups: Healthcare AI systems must be assessed for biases that could lead to discriminatory outcomes. This is particularly important for diagnostic and triage systems, where biases based on age, gender, ethnicity, or socioeconomic status could lead to disparities in care.
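
In practice, such an assessment usually starts with per-subgroup performance metrics. The sketch below computes sensitivity for each demographic group of a binary diagnostic model and reports the gap between the best and worst group; the data and group labels are invented for illustration.

```python
# Minimal sketch: per-subgroup sensitivity for a binary diagnostic model,
# to surface performance gaps across patient groups. Data is illustrative.
def subgroup_sensitivity(y_true, y_pred, groups):
    """Return sensitivity (true-positive rate) for each subgroup."""
    stats = {}
    for g in set(groups):
        tp = fn = 0
        for t, p, grp in zip(y_true, y_pred, groups):
            if grp != g or t != 1:
                continue  # only positives belonging to this group count
            tp += p == 1
            fn += p == 0
        stats[g] = tp / (tp + fn) if (tp + fn) else None
    return stats

y_true = [1, 1, 0, 1, 1, 1, 0, 1]
y_pred = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
per_group = subgroup_sensitivity(y_true, y_pred, groups)
observed = [v for v in per_group.values() if v is not None]
print(per_group, "sensitivity gap:", round(max(observed) - min(observed), 2))
```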

Special category data: Healthcare data is special category data under the GDPR (Article 9). The EU AI Act's Article 10(5) permits processing of special category data strictly for bias detection and correction, subject to appropriate safeguards. Healthcare AI providers must navigate the interplay between the GDPR's restrictions on health data processing and the AI Act's requirements for bias assessment.

Data quality and annotation: Clinical data used for training must be accurately labelled and annotated. For medical imaging AI, this means ensuring that training data has been reviewed and annotated by qualified clinicians, and that the annotation process itself is documented and quality-controlled.
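
A common and simple quality check here is inter-annotator agreement. The sketch below computes Cohen's kappa between two clinician annotators; the labels are invented, and a real pipeline would also document annotator qualifications and how disagreements were adjudicated.

```python
# Minimal sketch: Cohen's kappa as a chance-corrected measure of agreement
# between two annotators labelling the same cases. Labels are illustrative.
def cohens_kappa(a, b):
    n = len(a)
    labels = set(a) | set(b)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    p_chance = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_observed - p_chance) / (1 - p_chance)

rater1 = ["lesion", "normal", "lesion", "normal", "lesion", "normal"]
rater2 = ["lesion", "normal", "normal", "normal", "lesion", "normal"]
print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")  # 0.67 here
```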

Transparency and Explainability for Clinical Users

Article 13 requires that high-risk AI systems be sufficiently transparent for deployers — in healthcare, this means clinicians — to interpret outputs and use them appropriately. For healthcare AI, this translates into several practical requirements:

  • Diagnostic AI systems must clearly communicate confidence levels, limitations, and the basis for their outputs. A radiology AI that identifies a potential lesion should indicate its confidence level and flag relevant limitations (such as reduced accuracy for certain patient populations or imaging conditions); a sketch of such an output follows this list.
  • Instructions for use must be written for clinical audiences, explaining when the system should and should not be relied upon, what clinical validation has been performed, and how to interpret the system's outputs in the context of clinical judgment.
  • Known failure modes and edge cases must be documented and communicated to clinical users.
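
One way to make the first point concrete is to carry confidence and limitations inside the output data structure itself, so they reach the clinician with every finding. The sketch below is a minimal illustration; the field names and limitation texts are assumptions, not a mandated format.

```python
# Minimal sketch: a structured diagnostic output in which confidence and
# known limitations travel with the finding. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class DiagnosticFinding:
    finding: str                 # e.g. "possible lesion, left upper lobe"
    confidence: float            # model confidence in [0, 1]
    advisory: bool = True        # output is advisory, never definitive
    limitations: list[str] = field(default_factory=list)

result = DiagnosticFinding(
    finding="possible lesion, left upper lobe",
    confidence=0.82,
    limitations=[
        "reduced sensitivity on low-dose CT protocols",
        "not validated for patients under 18",
    ],
)
print(result)
```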

Human Oversight in Clinical Settings

Article 14's human oversight requirements align naturally with the principle of clinical autonomy, but they impose specific obligations:

Clinician override capability: Healthcare AI systems must be designed so that clinicians can override, disregard, or reverse the system's outputs. A diagnostic AI that presents its output as definitive rather than advisory would be non-compliant.

Automation bias mitigation: Article 14(4)(b) specifically addresses automation bias — the tendency to over-rely on automated outputs. In healthcare, this is a well-documented concern. AI systems must be designed to counteract this tendency, for example by presenting differential diagnoses rather than single conclusions, or by requiring clinicians to document their own assessment before viewing the AI output.
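
The "document before viewing" pattern can be enforced in software. The following minimal sketch, with invented class and field names, refuses to reveal the AI output until an independent clinician read has been recorded, and keeps an auditable trail of both steps.

```python
# Minimal sketch of an "assess before reveal" workflow: the clinician's
# independent read must be recorded before the AI output becomes visible.
from datetime import datetime, timezone

class AssessBeforeReveal:
    def __init__(self, case_id, ai_output):
        self.case_id = case_id
        self._ai_output = ai_output
        self.clinician_read = None
        self.audit_log = []

    def record_clinician_read(self, clinician_id, assessment):
        self.clinician_read = assessment
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), clinician_id, "read_recorded")
        )

    def reveal_ai_output(self):
        if self.clinician_read is None:
            raise PermissionError("Record an independent read before viewing AI output.")
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), None, "ai_output_revealed")
        )
        return self._ai_output

case = AssessBeforeReveal("case-042", ai_output="possible lesion, confidence 0.82")
case.record_clinician_read("dr-jones", "no acute findings")
print(case.reveal_ai_output())
```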

Competence requirements: Deployers must ensure that clinical staff using AI systems are properly trained to understand the system's capabilities, limitations, and potential failure modes. This goes beyond general AI literacy to include system-specific training.

Accuracy and Robustness

Article 15 requires appropriate levels of accuracy, robustness, and cybersecurity. For healthcare AI, accuracy requirements are particularly demanding:

  • Accuracy metrics must be defined and declared in the instructions for use. For diagnostic AI, this means specifying sensitivity, specificity, positive and negative predictive values, and the populations and conditions under which these metrics were measured; a worked example follows this list.
  • Robustness must account for the variability of real-world clinical environments, including different imaging equipment, patient positioning, image quality, and clinical workflows.
  • Cybersecurity is critical given the sensitivity of health data and the potential patient safety implications of compromised AI systems.
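
For reference, all four headline metrics follow directly from confusion-matrix counts; the worked example below uses invented counts from a hypothetical 1,000-case validation study.

```python
# Worked example: headline metrics for a binary diagnostic model, computed
# from confusion-matrix counts. The counts are illustrative.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical validation study: 180 TP, 20 FP, 760 TN, 40 FN (1,000 cases)
for name, value in diagnostic_metrics(tp=180, fp=20, tn=760, fn=40).items():
    print(f"{name}: {value:.3f}")
```

Note that PPV and NPV depend on disease prevalence in the study population, which is one reason declared metrics are only meaningful alongside a description of the populations and conditions in which they were measured.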

Specific Healthcare AI Use Cases

Medical Imaging and Diagnostics

AI-powered diagnostic imaging — including radiology, pathology, dermatology, and ophthalmology applications — represents one of the largest categories of healthcare AI. These systems typically qualify as medical devices under the MDR and as high-risk AI under the EU AI Act.

Providers must ensure that their clinical validation studies are designed to demonstrate both clinical performance (as required by the MDR) and the absence of discriminatory bias (as required by the EU AI Act). This may require larger and more diverse study populations than has historically been standard for medical device clinical evaluations.

Clinical Decision Support Systems

Clinical decision support (CDS) systems range from simple rule-based alerts to sophisticated AI-driven diagnostic and treatment recommendation engines. The regulatory classification depends on the system's function:

  • CDS systems that provide information to clinicians but leave the final decision entirely to clinical judgment may not qualify as medical devices under the MDR, but could still be high-risk under the EU AI Act if they significantly influence clinical decisions.
  • CDS systems that are intended to provide specific diagnostic or therapeutic recommendations are likely to qualify as medical devices and be subject to both the MDR and the EU AI Act.

Patient Triage and Resource Allocation

AI systems used to prioritise patients for treatment, allocate hospital beds, or determine eligibility for specific healthcare services raise particular concerns about fairness and non-discrimination. These systems may be classified as high-risk under Annex III, point 5(a) of the EU AI Act.

The stakes in triage and resource allocation are exceptionally high. Providers and deployers must pay particular attention to bias detection, transparency, and human oversight to ensure that AI-driven triage does not systematically disadvantage vulnerable patient groups.

Healthcare organisations that deploy AI systems developed by third parties remain responsible for their deployer obligations under Article 26. This includes ensuring appropriate human oversight, monitoring system performance, retaining logs, and suspending use if risks emerge. The fact that a system has CE marking as a medical device does not relieve the deployer of its AI Act obligations.

Compliance Strategy for Healthcare Organisations

For Medical AI Providers (Manufacturers)

Integrate compliance frameworks: Rather than treating MDR and EU AI Act compliance as separate workstreams, build an integrated compliance programme. Use ISO 14971 risk management as a foundation and extend it to address the AI Act's specific requirements for bias assessment, data governance, and fundamental rights impact.

Invest in diverse clinical validation: Design clinical studies that are large enough and diverse enough to demonstrate both clinical performance and the absence of discriminatory bias across different patient populations.

Build explainability into the design process: Transparency and explainability are much easier to achieve when they are design requirements from the outset, rather than afterthoughts added to satisfy regulation.

Establish post-market monitoring for AI-specific risks: Supplement existing post-market surveillance systems with monitoring for AI-specific risks such as accuracy degradation, data drift, and emerging biases.
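
As a sketch of what such monitoring could look like, the following compares rolling accuracy on cases with confirmed ground truth against the accuracy declared at conformity assessment. The window size, tolerance, and feedback mechanism are assumptions that would need to fit the clinical context.

```python
# Minimal sketch: rolling-window accuracy monitoring for a deployed model,
# flagging degradation relative to the declared accuracy. Window and
# tolerance values are illustrative assumptions.
from collections import deque

class AccuracyMonitor:
    def __init__(self, declared_accuracy, window=200, tolerance=0.05):
        self.declared = declared_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction_correct):
        self.outcomes.append(int(prediction_correct))

    def degraded(self):
        """True once the window is full and rolling accuracy falls
        materially below the declared level."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.declared - self.tolerance

monitor = AccuracyMonitor(declared_accuracy=0.92)
for correct in [True] * 150 + [False] * 50:   # simulated ground-truth feedback
    monitor.record(correct)
print("degradation alert:", monitor.degraded())  # True: rolling 0.75 < 0.87
```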

For Healthcare Deployers (Hospitals, Clinics, Health Systems)

Conduct an AI inventory: Catalogue all AI systems in use across the organisation, including systems embedded in medical devices, standalone software, and administrative tools. Classify each system under the EU AI Act's risk framework.
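
A simple typed record is often enough to get the inventory started. The sketch below is one possible shape; its fields and risk categories are an assumption about what a deployer would want to track, not an official taxonomy.

```python
# Minimal sketch: an AI inventory entry supporting risk-based triage.
# Field names and enum values are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    HIGH_ANNEX_I = "high-risk (Annex I, e.g. MDR/IVDR device)"
    HIGH_ANNEX_III = "high-risk (Annex III)"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    intended_purpose: str
    embedded_in_device: bool      # AI inside a CE-marked medical device?
    risk_class: RiskClass
    oversight_owner: str          # who is accountable for this system

inventory = [
    AISystemRecord("ChestCAD", "Acme Imaging", "radiology lesion detection",
                   True, RiskClass.HIGH_ANNEX_I, "Head of Radiology"),
    AISystemRecord("SlotOptim", "SchedCo", "appointment scheduling",
                   False, RiskClass.MINIMAL, "Operations Lead"),
]
high_risk = [r.name for r in inventory if r.risk_class.name.startswith("HIGH")]
print("high-risk systems requiring deployer obligations:", high_risk)
```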

Update procurement processes: Ensure that procurement contracts for AI-powered medical devices and software include provisions for access to technical documentation, logs, and ongoing compliance information required under the AI Act.

Train clinical staff: Develop training programmes that equip clinicians to exercise effective human oversight of AI systems, including understanding system limitations, recognising potential failure modes, and knowing when and how to override AI outputs.

Establish governance structures: Designate clear accountability for AI governance within the organisation, including responsibility for monitoring deployed AI systems, reporting incidents, and conducting fundamental rights impact assessments where required.

Timeline Considerations

For healthcare AI that is classified as high-risk through Annex I (medical device legislation), the compliance deadline is 2 August 2027. For AI systems classified as high-risk through Annex III, the deadline is 2 August 2026.

However, given the complexity of healthcare AI compliance — particularly the need to integrate EU AI Act requirements with MDR/IVDR obligations — healthcare organisations and medical AI providers should be actively working on compliance now. The conformity assessment process for medical devices already takes considerable time, and adding AI Act requirements will extend this further.

Conclusion

The EU AI Act introduces a substantial new regulatory layer for healthcare AI, but one that is fundamentally aligned with the sector's core values of patient safety, clinical evidence, and transparency. For medical AI providers, the challenge lies in integrating AI Act compliance with existing MDR/IVDR requirements without creating duplicative processes. For healthcare deployers, the priority is ensuring that clinical staff can exercise meaningful oversight of AI systems and that governance structures are in place to monitor deployed systems over time.

Healthcare AI has the potential to improve diagnosis, treatment, and patient outcomes significantly. The EU AI Act's requirements, while demanding, provide a framework for ensuring that these benefits are delivered safely, fairly, and transparently. Organisations that embrace these requirements as part of good clinical and engineering practice — rather than treating them as regulatory burdens — will be best positioned to innovate responsibly within this framework.
