
Biometric AI and the EU AI Act: Identification, Verification, and Categorisation

How the EU AI Act regulates biometric AI — Article 5 prohibitions on real-time remote ID and sensitive-attribute categorisation, Annex III high-risk classification, and the practical compliance path.

May 12, 2026 · 11 min read

Biometric AI sits at the most heavily regulated end of the EU AI Act. The combination of Article 5 prohibitions, Annex III high-risk classifications, GDPR Article 9 restrictions on biometric data, and EU Charter fundamental-rights considerations creates a complex compliance landscape that most other AI use cases do not encounter.

This article maps the full regulatory picture for biometric AI: what is prohibited outright, what is high-risk, what falls within transparency obligations, and what is largely outside the regulatory scope. It also addresses the specific scenarios that most companies deploying biometric AI face — authentication, access control, surveillance, public-space identification, and emotion or attribute inference.

The Definitions That Drive Classification

Article 3 of the EU AI Act introduces several biometric-related definitions that determine which provisions apply. Getting these straight is essential.

Biometric data (Article 3(34)): personal data resulting from specific technical processing relating to the physical, physiological, or behavioural characteristics of a natural person, such as facial images or dactyloscopic data. The notion tracks the GDPR, but the AI Act definition is slightly broader: it does not require that the data allow or confirm the unique identification of the person.

Biometric identification (Article 3(35)): the automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing an individual's identity by comparing biometric data of that individual to biometric data of individuals stored in a database. This is 1-to-N matching.

Biometric verification (Article 3(36)): the automated, one-to-one verification, including authentication, of natural persons' identity by comparing their biometric data with previously provided biometric data. This is 1-to-1 matching.

Biometric categorisation (Article 3(40)): assigning natural persons to specific categories on the basis of their biometric data. The categories can be relatively benign (age bracket, hair colour) or sensitive (race, religion, sexual orientation); inferring the sensitive attributes triggers the Article 5(1)(g) prohibition.

Remote biometric identification system (Article 3(41)): an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person's biometric data with the biometric data contained in a reference database.

Real-time remote biometric identification (Article 3(42)): a remote biometric identification system whereby the capturing of biometric data, the comparison, and the identification all occur without significant delay. (As opposed to "post" remote biometric identification, where there is significant delay between capture and identification.)

The classification of any specific biometric AI system flows from which of these definitions it meets.
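
The 1-to-N versus 1-to-1 distinction that separates identification from verification is easiest to see in code. A minimal sketch, assuming a hypothetical embedding-based face matcher (the function names and the similarity threshold are illustrative assumptions, not anything the Act prescribes):

```python
import numpy as np

THRESHOLD = 0.8  # illustrative similarity cut-off, not a regulatory value

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled_template: np.ndarray) -> bool:
    """1-to-1 verification (Article 3(36)): compare the probe against the
    single template the person previously provided."""
    return cosine_similarity(probe, enrolled_template) >= THRESHOLD

def identify(probe: np.ndarray, database: dict[str, np.ndarray]) -> str | None:
    """1-to-N identification (Article 3(35)): search a reference database
    of many individuals for the best match above the threshold."""
    best_id, best_score = None, THRESHOLD
    for person_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

The regulatory consequences diverge sharply: identify against a reference database, used remotely, lands in Annex III, point 1(a), while verify against the user's own template falls within the verification carve-out discussed below.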

What Is Prohibited Under Article 5

Article 5 places three biometric-related practices on the outright-banned list (a fourth biometric-adjacent ban, emotion recognition in workplaces and educational institutions under Article 5(1)(f), is covered in the sections below):

Article 5(1)(e) — Untargeted Facial Recognition Database Building

The placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

This directly targets Clearview-AI-style operations. Practices captured:

  • Scraping social media platforms or public websites to harvest facial images for biometric database construction
  • Crawling CCTV footage or live video to extract identifying images without targeted basis
  • Any systematic, non-targeted collection of facial images to train or populate facial recognition systems

Targeted collection — for instance, mugshot databases maintained under specific legal authority, or databases built from individuals who provided informed consent — is not captured.

Article 5(1)(g) — Sensitive-Attribute Biometric Categorisation

The placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. This prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorising of biometric data in the area of law enforcement.

Banned practices include:

  • Facial-analysis systems that claim to infer ethnicity, religion, or sexual orientation
  • Voice-analysis tools designed to infer political beliefs
  • Gait-analysis systems categorising individuals by race
  • Any system that systematically deduces protected attributes from biometric data

The carve-out for "labelling or filtering of lawfully acquired biometric datasets" addresses certain forensic investigation tools, such as narrowing a database search by hair colour. It is intentionally narrow: it does not permit deploying sensitive-attribute inference systems in production.

Article 5(1)(h) — Real-Time Remote Biometric Identification in Public Spaces for Law Enforcement

The most-debated prohibition. Real-time remote biometric identification in publicly accessible spaces for law enforcement is banned, with three narrow exceptions:

  1. Targeted search for victims of abduction, trafficking, sexual exploitation, or missing persons
  2. Prevention of a specific, substantial, imminent threat to life or physical safety, including terrorism
  3. Identification of a suspect of one of the serious offences listed in Annex II (punishable by a custodial sentence with a maximum of at least four years)

Even where exceptions apply, Article 5(2) imposes strict procedural conditions: prior judicial or independent administrative authorisation, fundamental rights impact assessment, geographic and temporal limits, notification to market surveillance and data protection authorities, and registration in the EU database.

Importantly, the prohibition applies only to law enforcement. Private operators and other public bodies are not bound by Article 5(1)(h), though they are bound by other provisions including Annex III high-risk classification and GDPR.

What Is High-Risk Under Annex III, Point 1

Annex III, point 1 classifies several biometric AI categories as high-risk:

(a) Remote biometric identification systems — except for verification systems whose sole purpose is to confirm that a specific natural person is the person they claim to be.

(b) Biometric categorisation systems based on sensitive or protected attributes — though those inferring sensitive attributes are prohibited under Article 5(1)(g), so this provision captures the residual category of biometric categorisation that is allowed but high-risk.

(c) Emotion recognition systems — though those deployed in workplaces and educational institutions are prohibited under Article 5(1)(f), so this provision captures emotion recognition in other contexts.

These categories are subject to the full Articles 8–15 high-risk regime: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. Plus conformity assessment (Article 43), CE marking (Article 48), and EU database registration (Article 49).

The Verification Carve-Out

The verification carve-out from Annex III, point 1(a) is significant. Common authentication use cases — face-unlock on a phone, fingerprint authentication for an app, voice authentication for a banking call — fall outside the high-risk biometric category because they are 1-to-1 verification, not identification.

This means most enterprise-authentication biometric AI is not high-risk under the EU AI Act, though it remains subject to GDPR Article 9 (special categories of data including biometrics processed for unique identification).
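
When documenting the carve-out assessment (step 3 of the compliance checklist later in this article), it helps to make the statutory conditions explicit. A minimal sketch (a hypothetical helper; the condition names paraphrase Annex III, point 1(a)):

```python
def verification_carve_out_applies(
    identity_claimed_by_subject: bool,   # the person asserts who they are
    one_to_one_comparison: bool,         # probe vs. that person's own template
    sole_purpose_is_confirmation: bool,  # no secondary identification use
) -> bool:
    """Paraphrase of the Annex III, point 1(a) carve-out: verification whose
    sole purpose is to confirm that a specific natural person is the person
    they claim to be. If any condition fails, the carve-out does not apply
    and the system must be assessed as remote biometric identification."""
    return (identity_claimed_by_subject
            and one_to_one_comparison
            and sole_purpose_is_confirmation)
```

A face-unlock flow satisfies all three conditions; an access-control camera that picks employees out of a crowd fails the first two.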

What Is Limited-Risk Under Article 50

Article 50(3) imposes transparency obligations on certain biometric AI systems that are not high-risk:

Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable.

This applies to emotion recognition and biometric categorisation outside the prohibited contexts. Even where the deployment is permitted, deployers must inform affected individuals that they are being subjected to the system.

The GDPR Overlay

Biometric AI is one of the few areas where the GDPR is often more restrictive than the AI Act. Article 9(1) GDPR prohibits processing of biometric data for the purpose of uniquely identifying a natural person, with narrow exceptions in Article 9(2):

  • Explicit consent (Article 9(2)(a))
  • Necessary for employment, social security, or social protection law (Article 9(2)(b))
  • Necessary to protect vital interests (Article 9(2)(c))
  • Substantial public interest under Union or Member State law (Article 9(2)(g))
  • A handful of other exceptions (health care, research, legal claims) that rarely fit biometric identification

In practice, private deployments of facial recognition or other biometric identification systems often struggle to find a lawful basis. Several EU data protection authorities have taken enforcement actions against facial-recognition deployments — Clearview AI alone has faced fines from Italy, France, Greece, the UK, and the Netherlands.

For most biometric AI deployments, the practical compliance path requires:

  1. AI Act compliance for the system classification and substantive obligations
  2. GDPR compliance for the data-processing aspects, including a lawful basis under Article 6 and, for biometric identification, satisfying an Article 9(2) exception
  3. National laws that may impose additional restrictions (Italy, France, and several others have stricter national rules on biometrics)
  4. Sector-specific rules, such as eIDAS identity-verification requirements in financial services, works-council consultation in employment, and the Law Enforcement Directive for policing

Specific Deployment Scenarios

Authentication for Enterprise Applications

A typical face-unlock or fingerprint-authentication system for an enterprise application:

  • AI Act classification: 1-to-1 verification — falls outside Annex III, point 1(a) — not high-risk
  • GDPR: Article 9 special-category data; in practice, explicit consent under Article 9(2)(a), since contract is only an Article 6 basis and does not lift the Article 9 prohibition
  • Practical compliance: GDPR compliance, optional Article 50 disclosure as good practice, no AI Act high-risk obligations

Access Control to Physical Premises

A facial-recognition access-control system for an office building:

  • AI Act classification: depends on whether it is verification (employees enrolled, 1-to-1 matching) or identification (matching against a database of authorised personnel). The former is verification; the latter is identification and is high-risk under Annex III, point 1(a)
  • GDPR: Article 9 special-category data
  • Practical compliance: a defensible Article 9(2) basis (employee consent is rarely freely given, and legitimate interest alone does not lift the Article 9 prohibition), the full high-risk regime if identification, and employment-law considerations including works-council consultation

Identity Verification at Borders

eIDAS-style identity verification at borders or financial onboarding:

  • AI Act classification: 1-to-1 verification — not high-risk under Annex III, point 1(a). However, if used in migration or border-control contexts, it may fall under Annex III, point 7
  • GDPR: Article 9 special-category data; lawful basis under public-interest/substantial-public-interest exception
  • Practical compliance: comply with eIDAS, FATF-derived AML/KYC rules (for financial onboarding), and the AI Act for any high-risk classification

Public-Space Surveillance

Live facial recognition in public spaces by law enforcement:

  • AI Act classification: prohibited under Article 5(1)(h) except for narrow exceptions
  • GDPR / LED: subject to the Law Enforcement Directive (Directive (EU) 2016/680) for law-enforcement processing
  • Practical compliance: only deployable under the Article 5(1)(h) exceptions and only with prior authorisation, fundamental rights impact assessment, geographic and temporal limits, and notifications

Private use of similar systems is not banned by Article 5(1)(h) (which applies only to law enforcement) but is high-risk under Annex III, point 1(a) and is heavily restricted under GDPR Article 9.

Emotion Recognition in Workplaces

AI systems analysing facial expressions, voice tone, or physiological signals of employees:

  • AI Act classification: prohibited under Article 5(1)(f), except for medical or safety reasons
  • GDPR: Article 9 special-category data; lawful basis difficult to establish in workplace context
  • Practical compliance: in most cases, the system cannot be deployed at all. Medical and safety carve-outs (driver-fatigue detection, distress monitoring in heavy industry) are narrow.

Customer-Sentiment Emotion Recognition

AI systems analysing customer emotions in retail or call centres:

  • AI Act classification: high-risk under Annex III, point 1(c)
  • GDPR: Article 9 special-category data; lawful basis challenging
  • Practical compliance: full Article 8–15 regime, Article 50 disclosure, and GDPR compliance

Practical Compliance Steps

If your AI system processes biometric data:

  1. Determine which definitions apply. Is it identification (1-to-N), verification (1-to-1), categorisation, or emotion recognition? The classification flows from this (see the decision-logic sketch after this list).
  2. Check the Article 5 prohibitions first. If any aspect of your system falls under Article 5, redesign or do not deploy.
  3. Apply the verification carve-out for genuine 1-to-1 authentication — but document the assessment.
  4. For high-risk systems, plan the full Article 8–15 compliance: risk management, data governance, technical documentation, human oversight, accuracy, conformity assessment, CE marking, EU database registration.
  5. Comply with GDPR Article 9. Identify a lawful basis. Consider whether explicit consent or a national-law exception applies. Document the assessment.
  6. Engage your DPO and any works council. Biometric deployments in employment contexts require employee representative consultation in many EU Member States.
  7. Consider the Law Enforcement Directive if your deployment involves law-enforcement processing.
  8. Disclose under Article 50 for emotion recognition and biometric categorisation systems, even outside the high-risk regime.
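
The ordering of steps 1 to 4 can be written down as decision logic. A deliberately simplified sketch (the type and field names are my own, and the real statutory tests, including every exception and carve-out, are richer than these booleans):

```python
from dataclasses import dataclass
from enum import Enum, auto

class Tier(Enum):
    PROHIBITED = auto()     # Article 5
    HIGH_RISK = auto()      # Annex III, point 1
    TRANSPARENCY = auto()   # Article 50(3) only
    NOT_HIGH_RISK = auto()  # outside Annex III, point 1 (GDPR still applies)

@dataclass
class BiometricSystem:
    function: str                    # "identification" | "verification" | "categorisation" | "emotion"
    remote: bool = False             # works without the subject's active involvement
    real_time: bool = False
    public_space: bool = False
    law_enforcement: bool = False
    infers_art5_1g_attributes: bool = False  # race, political opinions, religion, sexual orientation, ...
    protected_attributes: bool = False       # sensitive/protected attributes outside the 5(1)(g) list
    workplace_or_education: bool = False

def classify(s: BiometricSystem) -> Tier:
    # Step 2: check the Article 5 prohibitions first. The statutory
    # exceptions (the Art. 5(1)(h) law-enforcement exceptions, the
    # medical/safety carve-out in Art. 5(1)(f)) are omitted for brevity.
    if s.function == "categorisation" and s.infers_art5_1g_attributes:
        return Tier.PROHIBITED                               # Art. 5(1)(g)
    if s.function == "emotion" and s.workplace_or_education:
        return Tier.PROHIBITED                               # Art. 5(1)(f)
    if (s.function == "identification" and s.remote and s.real_time
            and s.public_space and s.law_enforcement):
        return Tier.PROHIBITED                               # Art. 5(1)(h)
    # Steps 3 and 4: Annex III, point 1, with the verification carve-out.
    if s.function == "identification" and s.remote:
        return Tier.HIGH_RISK                                # Annex III, 1(a)
    if s.function == "categorisation" and s.protected_attributes:
        return Tier.HIGH_RISK                                # Annex III, 1(b)
    if s.function == "emotion":
        return Tier.HIGH_RISK                                # Annex III, 1(c), plus Art. 50 disclosure
    if s.function == "categorisation":
        return Tier.TRANSPARENCY                             # Art. 50(3)
    return Tier.NOT_HIGH_RISK  # e.g. 1-to-1 verification (the carve-out)

# The deployment scenarios above, re-run through the sketch:
assert classify(BiometricSystem("verification")) is Tier.NOT_HIGH_RISK             # face-unlock
assert classify(BiometricSystem("identification", remote=True)) is Tier.HIGH_RISK  # 1-to-N access control
assert classify(BiometricSystem("emotion")) is Tier.HIGH_RISK                      # call-centre sentiment
```

The sketch encodes one load-bearing design decision of the Act: prohibition checks run before risk classification, because a prohibited practice can never be cured by Annex III compliance.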

Conclusion

Biometric AI is the most heavily regulated AI category in the EU. The combination of Article 5 prohibitions, Annex III high-risk classifications, Article 50 transparency, and GDPR Article 9 restrictions creates a multi-layered compliance burden that no other AI use case faces in the same way.

Compliance is achievable but requires careful classification work, deliberate design choices (preferring verification over identification where possible), and integration with GDPR rather than separate AI Act compliance. The Article 5 lines are absolute — practices on the prohibited list cannot be deployed, regardless of safeguards.

For broader context, see the prohibited AI practices article for Article 5 in full, Annex III explained for the standalone high-risk categories, and transparency obligations for the Article 50 regime.

Frequently Asked Questions

Is facial recognition banned in the EU?

Not entirely. Article 5(1)(h) prohibits real-time remote biometric identification in publicly accessible spaces for law enforcement, with narrow exceptions. Untargeted scraping of facial images to build facial recognition databases is prohibited under Article 5(1)(e). Other uses of facial recognition — biometric verification for authentication, post-event identification with proper safeguards, identity verification at borders — are permitted but typically classified as high-risk under Annex III.

What is the difference between biometric identification and biometric verification?

Article 3(35) defines biometric identification as automated recognition by comparing a person's biometric data against a database of many individuals (1-to-N matching). Article 3(36) defines biometric verification as automated 1-to-1 comparison to confirm that a person is who they claim to be. The EU AI Act treats these differently: identification systems used remotely are tightly regulated and sometimes prohibited; verification systems for authentication are largely outside the high-risk regime.

Is using fingerprint or face-unlock for my phone covered by the EU AI Act?

Generally no. Article 3(36) verification systems for confirming an authenticated user's identity (the typical phone-unlock scenario) are excluded from the Annex III, point 1(a) high-risk biometric identification category. They remain subject to GDPR for the biometric-data processing aspects but are not high-risk under the AI Act.

Can a private business use facial recognition to identify shoplifters?

This is heavily regulated. The Article 5(1)(h) prohibition on real-time remote biometric identification in public spaces applies only to law enforcement, not to private operators. However, private facial-recognition deployments are high-risk under Annex III, point 1(a) if they meet the remote-identification definition, and they must comply with GDPR Article 9 prohibitions on processing biometric data for unique identification (with limited exceptions). In practice, most EU data protection authorities have taken a restrictive view of private facial recognition deployments.

What is biometric categorisation, and when is it banned?

Article 3(40) defines biometric categorisation as assigning natural persons to specific categories on the basis of their biometric data. Article 5(1)(g) prohibits biometric categorisation systems that infer or deduce sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation). Categorisation based on other sensitive or protected attributes is permitted but high-risk under Annex III, point 1(b); categorisation by non-sensitive traits attracts only the Article 50 transparency obligation.
