Annex I Explained: AI in Regulated Products Under the EU AI Act
How Annex I of the EU AI Act classifies AI systems embedded in regulated products — medical devices, machinery, toys, vehicles, aviation, marine, and more. Conformity assessment, deadlines, and the MDR/IVDR interaction.
Annex I of the EU AI Act covers AI systems that are integrated with — or constitute — products already regulated under EU product-safety legislation. It is the "vertical" half of the high-risk regime, complementing the "horizontal" Annex III categories that classify AI by use case rather than by product type.
Understanding Annex I matters most for manufacturers of regulated products: medical devices, machinery, toys, vehicles, aircraft, ships, and many others. If your product already requires CE marking and notified-body involvement, and you are adding AI to it, you are almost certainly in scope.
This article walks through the Annex I list, the Article 6(1) trigger, the interaction with sector-specific product legislation, and the practical conformity-assessment path.
How Annex I Differs from Annex III
The EU AI Act classifies high-risk AI systems under two distinct pathways:
- Article 6(1) and Annex I capture AI systems integrated with regulated products. The classification is product-centric: it follows the existing Union harmonisation legislation that governs the product.
- Article 6(2) and Annex III capture standalone AI systems used in sensitive areas. The classification is use-case-centric: it depends on where the AI is deployed, not what product it powers.
A single AI system can fall under both pathways. For example, AI-driven diagnostic software that meets the definition of a medical device falls under Annex I via the Medical Devices Regulation, but if the same software is also used to evaluate eligibility for healthcare services, it may additionally fall under Annex III, point 5(a).
The Article 6(1) Trigger
Article 6(1) classifies an AI system as high-risk under Annex I if two cumulative conditions are met:
- The AI system is intended to be used as a safety component of a product, or is itself a product, covered by Union harmonisation legislation listed in Annex I, Section A or Section B; and
- The product whose safety component the AI system is, or the AI system itself as a product, is required to undergo a third-party conformity assessment pursuant to that Union harmonisation legislation.
Both conditions matter. The first identifies which products are relevant. The second narrows the trigger to those products whose conformity already requires notified-body involvement.
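As a minimal sketch of that logic (the class and field names are illustrative shorthand, not drawn from the Act), the two conditions reduce to a conjunction:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Condition 1: the AI is a safety component of, or is itself, a product
    # covered by legislation listed in Annex I, Section A or Section B.
    covered_by_annex_i_legislation: bool
    # Condition 2: that legislation requires third-party conformity
    # assessment (notified-body involvement) for the product.
    needs_third_party_assessment: bool

def high_risk_under_article_6_1(s: AISystem) -> bool:
    """Article 6(1): the conditions are cumulative, so each must hold."""
    return s.covered_by_annex_i_legislation and s.needs_third_party_assessment

# A Class I medical device self-certifies, so the second condition fails:
print(high_risk_under_article_6_1(AISystem(True, False)))  # False
# A Class IIa AI diagnostic needs a notified body, so both conditions hold:
print(high_risk_under_article_6_1(AISystem(True, True)))   # True
```

The conjunction is the whole point: dropping either check would pull self-certifying products into the high-risk regime, which is exactly what the calibration described below avoids.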
Annex I, Section A lists harmonisation legislation that fully integrates with the AI Act's enforcement model; Section B lists additional legislation where the AI Act applies in a more limited, coordinated way. The distinction matters: Section A systems are directly subject to the AI Act's high-risk requirements through an integrated conformity assessment, while for Section B the requirements are channelled through the sector-specific frameworks.
Why the "Third-Party Conformity Assessment" Requirement?
The second condition is a calibration choice. EU product safety law treats different product classes with different rigour: Class I medical devices and machinery outside the high-risk categories of the machinery legislation can self-certify, while higher-risk classes need a notified body. The AI Act follows this gradient: AI added to a product that already requires third-party assessment is high-risk; AI added to a self-certifying product is not (under this pathway).
This can produce surprising results. The same AI feature may be high-risk in one device class and not in another, depending on the underlying product classification.
Annex I, Section A — Harmonisation Legislation
Section A lists Union harmonisation legislation that is closely integrated with the AI Act. The full list:
| Sector | Legislation |
|---|---|
| Machinery | Directive 2006/42/EC on machinery (replaced by Regulation (EU) 2023/1230 from January 2027) |
| Toy safety | Directive 2009/48/EC on the safety of toys |
| Recreational craft | Directive 2013/53/EU on recreational craft and personal watercraft |
| Lifts | Directive 2014/33/EU on lifts |
| Equipment in potentially explosive atmospheres | Directive 2014/34/EU on equipment for use in potentially explosive atmospheres (ATEX) |
| Radio equipment | Directive 2014/53/EU on radio equipment |
| Pressure equipment | Directive 2014/68/EU on pressure equipment |
| Cableway installations | Regulation (EU) 2016/424 |
| Personal protective equipment | Regulation (EU) 2016/425 |
| Gas appliances | Regulation (EU) 2016/426 |
| Medical devices | Regulation (EU) 2017/745 (MDR) |
| In vitro diagnostic medical devices | Regulation (EU) 2017/746 (IVDR) |
These are the legislative regimes where AI Act obligations are fully integrated. Conformity assessment, post-market surveillance, and incident reporting can be conducted as a single, coordinated process under both regimes.
Annex I, Section B — Additional Legislation
Section B adds further legislation, mainly governing transport and vehicles:
| Sector | Legislation |
|---|---|
| Civil aviation security | Regulation (EC) No 300/2008 |
| Two- or three-wheel vehicles and quadricycles | Regulation (EU) No 168/2013 |
| Agricultural and forestry vehicles | Regulation (EU) No 167/2013 |
| Marine equipment | Directive 2014/90/EU |
| Rail interoperability | Directive (EU) 2016/797 |
| Motor vehicles and their trailers | Regulation (EU) 2018/858 |
| Civil aviation safety | Regulation (EU) 2018/1139 |
| Motor vehicle general safety | Regulation (EU) 2019/2144 |
For Section B legislation, the AI Act applies in a limited, coordinated way: under Article 2(2), the high-risk requirements are to be taken into account when the relevant sector-specific rules are adopted or amended, rather than applying directly. Sector regulators (for instance, the European Union Aviation Safety Agency for civil aviation) retain primary authority in their domains.
Cross-Cutting Sectors
A few sectors deserve specific attention because they account for a large share of practical Annex I exposure:
Medical Devices and IVDs
Software qualifying as a medical device or in vitro diagnostic medical device falls within MDR or IVDR. Most software-only medical devices that incorporate AI require notified-body involvement (Class IIa, IIb, or III under MDR; Class B, C, or D under IVDR), which triggers high-risk classification under the AI Act.
The AI Act and MDR/IVDR share several concepts — risk management, clinical evaluation, technical documentation, post-market surveillance — but they are not identical. The AI Act adds requirements (data governance, bias assessment, human oversight) that go beyond MDR/IVDR. See AI in healthcare compliance for the integration in practice.
Machinery
Regulation (EU) 2023/1230 on machinery (replacing the 2006 Machinery Directive from January 2027) is the most far-reaching entry in Annex I, Section A. The new Machinery Regulation explicitly covers AI-based safety components and machinery with self-evolving behaviour ensuring safety functions, both of which appear in that regulation's own Annex I, Part A, for which third-party conformity assessment is mandatory.
This means industrial AI used in safety-critical machinery (collision avoidance in mobile robots, safety interlocks in automated production lines, AI-based hazard detection) falls within the AI Act's Annex I. The Machinery Regulation also expands the old directive's Annex IV list of high-risk machinery to include several new AI-relevant categories.
Toys
Directive 2009/48/EC covers AI features in toys, including increasingly common AI-driven interactive toys. Most toy compliance is self-certification against harmonised standards, so the AI Act's Annex I trigger does not apply to typical toy AI. However, where harmonised standards do not exist or cover the toy only in part, the directive requires EC-type examination by a notified body, which would trigger the AI Act if AI is integrated.
Automotive
Regulation (EU) 2019/2144 (the General Safety Regulation) sets type-approval requirements for motor vehicle safety, including the newer Advanced Driver Assistance Systems (ADAS) mandates. AI used in safety-critical driver-assistance functions, automated driving systems, and event data recorders falls within Section B. Most automotive AI compliance flows through the established type-approval procedure run by national type-approval authorities, with the AI Act adding specific obligations on top.
Aviation
Civil aviation AI (Regulation (EU) 2018/1139) is similarly governed primarily through EASA-supervised certification, with AI Act obligations layered on. The European Union Aviation Safety Agency has issued guidance materials clarifying how AI applications in aviation systems are assessed.
Conformity Assessment under Annex I
For Annex I high-risk systems, the conformity assessment procedure is integrated with the sectoral procedure, not run in parallel. Article 43(3) directs providers of systems covered by Section A legislation to follow the conformity assessment procedure required under the relevant sectoral act, with the AI Act's high-risk requirements (Chapter III, Section 2, Articles 8–15) verified as part of that same assessment.
In practice, this means:
- For MDR/IVDR products: the notified body conducting the medical-device conformity assessment also verifies AI Act compliance
- For Machinery Regulation products: the conformity assessment under the new Machinery Regulation incorporates AI Act elements
- For Section B legislation (aviation, automotive, etc.): primary sector procedures apply, with AI Act requirements supplemental
The single CE marking under Article 48 covers both the AI Act and the underlying product legislation. There is no separate AI Act marking.
The Annex I Timeline
Annex I high-risk systems benefit from a longer transition period than Annex III systems:
- Annex III standalone high-risk systems: most obligations apply from 2 August 2026
- Annex I product-integrated high-risk systems: obligations apply from 2 August 2027
The extra year acknowledges that integrating AI Act compliance into existing notified-body workflows takes time. Notified bodies must extend their designations to cover the AI Act, common specifications and harmonised standards must be finalised, and product certification cycles run on multi-year timelines.
This does not mean manufacturers should wait. The full conformity-assessment chain — risk management, data governance, technical documentation, post-market monitoring — takes 12–24 months to set up, particularly for highly regulated sectors. By the time you can start submitting under the AI Act's integrated procedure, your internal compliance system needs to be ready.
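As a back-of-the-envelope illustration of that lead time (the helper function is ours; the 12–24-month range is the estimate above), working backward from the deadline:

```python
from datetime import date

ANNEX_I_DEADLINE = date(2027, 8, 2)  # Annex I obligations apply

def months_before(deadline: date, months: int) -> date:
    """Shift a date back by a whole number of months (day kept as-is)."""
    year, month = deadline.year, deadline.month - months
    while month <= 0:
        year, month = year - 1, month + 12
    return date(year, month, deadline.day)

# Latest sensible start dates for a 24- or 12-month compliance build-out:
print(months_before(ANNEX_I_DEADLINE, 24))  # 2025-08-02
print(months_before(ANNEX_I_DEADLINE, 12))  # 2026-08-02
```

In other words, even the optimistic end of the range leaves little slack for a manufacturer starting today.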
What If My Product Is Covered by Both Annex I and Annex III?
This happens frequently. An AI clinical decision support system can fall under Annex I via MDR/IVDR and under Annex III, point 5(a) if it is used to evaluate eligibility for healthcare services.
Article 6 is cumulative: a system that meets either trigger is high-risk. There is no double regulation, however — the substantive requirements (Articles 8–15) apply once. The procedural pathways differ:
- For Annex I: conformity assessment integrated with sectoral procedure; registration under sectoral rules; CE marking
- For Annex III: conformity assessment per Article 43; EU database registration under Article 49
When both apply, the most demanding procedural path generally takes precedence, but with coordinated execution where possible.
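A small sketch of that cumulative logic (the procedure labels are illustrative shorthand, not the Act's wording):

```python
def classify(annex_i: bool, annex_iii: bool) -> dict:
    """Either trigger makes the system high-risk; Articles 8-15 apply once."""
    procedures = set()
    if annex_i:
        procedures |= {"sectoral conformity assessment", "CE marking"}
    if annex_iii:
        procedures |= {"Article 43 assessment", "Article 49 database registration"}
    return {"high_risk": annex_i or annex_iii, "procedures": procedures}

# A clinical decision support system caught by both pathways:
result = classify(annex_i=True, annex_iii=True)
print(result["high_risk"])  # True
print(sorted(result["procedures"]))
```

The union of the two procedure sets, rather than two full parallel runs, is the practical meaning of "no double regulation" here.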
Practical Checklist for Annex I Compliance
If you are placing an AI-enabled product on the EU market that may fall under Annex I:
- Identify the applicable Union harmonisation legislation. Is the product covered by anything in Annex I, Section A or B?
- Determine the conformity-assessment route under that legislation. Does it require notified-body involvement?
- If both yes, the AI is high-risk under Article 6(1). Plan for AI Act Articles 8–15 in addition to sectoral requirements.
- Engage a notified body that has been designated for both the sectoral legislation and the AI Act. Notified-body capacity is a known bottleneck.
- Coordinate technical documentation. Annex IV of the AI Act maps onto sectoral documentation requirements; structure documentation so the same evidence supports both.
- Plan post-market monitoring to satisfy both Article 72 of the AI Act and the sectoral post-market surveillance obligations.
- Affix a single CE marking covering all applicable legislation.
- Work backward from the 2 August 2027 deadline to design your compliance schedule.
Conclusion
Annex I is technically narrower than Annex III but commercially huge: it captures medical AI, industrial AI in machinery, automotive AI, aviation AI, and many other regulated-product sectors. The integration with existing CE-marking workflows is a feature, not a workaround — it lets manufacturers extend established compliance functions rather than building parallel ones.
For the use-case-driven half of the high-risk regime, see Annex III explained. For a step-by-step run through the full high-risk obligations once classification is settled, the high-risk AI systems requirements article is the reference.
Related Articles
Annex III Explained: Standalone High-Risk AI Systems Under the EU AI Act
Detailed breakdown of all eight categories of standalone high-risk AI systems in Annex III of the EU AI Act — biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice.
High-Risk AI Systems: Complete Requirements Under the EU AI Act
Detailed guide to the requirements for high-risk AI systems under the EU AI Act — risk management, data governance, documentation, human oversight, accuracy, and cybersecurity.
EU AI Act Risk Classification: Four Levels Explained
Deep dive into the EU AI Act's four-tier risk classification system — unacceptable, high, limited, and minimal risk. Learn which category your AI system falls into and what's required.