AI-Generated Content Labelling Under the EU AI Act

Article 50 of the EU AI Act requires machine-readable marking and user-facing disclosure of AI-generated content. Practical guidance on what to label, who is responsible, and the technical implementation.

May 12, 2026 · 11 min read

The proliferation of AI-generated content — from synthetic images and audio to AI-drafted articles and code — has driven one of the EU AI Act's more pragmatic interventions: a layered labelling regime that aims to preserve the integrity of the information environment without prohibiting AI generation outright.

This article walks through the labelling requirements step by step: what counts as AI-generated content, what marking and disclosure are required, who is responsible for each, and how compliant implementations work in practice.

The Two Layers of Article 50

Article 50 imposes labelling obligations on AI-generated content in two distinct paragraphs that operate independently:

Article 50(2) — Provider Machine-Readable Marking

Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards.

This applies to the provider of the AI system — the entity that places the generation tool on the market under its own name or trademark. It is a machine-readable mark — meant to be detected by automated tools, not necessarily visible to humans. And it applies to audio, image, video, and text content (all four modalities).

Article 50(4) — Deployer User-Facing Disclosure

Deployers of an AI system that generates or manipulates image, audio or video content constituting a deepfake, shall disclose that the content has been artificially generated or manipulated. […] Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated.

This applies to the deployer — the entity using the AI to generate or publish content. It is a user-facing disclosure — visible or perceivable to humans. And it applies specifically to deepfakes (image, audio, video content meeting the Article 3(60) definition) and to AI-generated text used in public-interest communications.

The two layers are complementary. The Article 50(2) machine mark enables platforms, fact-checkers, and downstream consumers to detect AI generation automatically. The Article 50(4) disclosure ensures individual users see explicit labelling at the point of consumption.

What Counts as "AI-Generated or Manipulated Content"

The scope of Article 50(2) is broad: any synthetic audio, image, video, or text content generated by an AI system. This includes:

  • Fully synthetic images from text-to-image models (Midjourney, Stable Diffusion, DALL-E, etc.)
  • AI-generated audio (voice synthesis, music generation, sound effects)
  • AI-generated video (text-to-video, video synthesis)
  • AI-generated text (LLM outputs)
  • AI-manipulated content (real images modified by AI tools, AI-cleaned audio, AI-edited video)

It does not include:

  • Standard photo editing (cropping, exposure adjustment, basic colour correction) that does not involve AI generation
  • Algorithmic processing that does not involve generative AI (e.g., spell-check, traditional image compression)
  • Content with only minor AI assistance (e.g., AI-suggested edits that a human accepts or rejects manually)

The line between "AI-assisted" and "AI-generated" is fact-specific. Text where a human writes the substance and uses AI for proofreading is generally not AI-generated; text where AI drafts the substance and a human reviews it lightly generally is. Article 50(2) itself exempts AI systems that perform an assistive function for standard editing or that do not substantially alter the input data provided by the deployer or the semantics thereof, and for published text the editorial-review carve-out in Article 50(4) (discussed below) covers output under genuine human editorial control.

What Counts as a Deepfake

Article 3(60) defines a deepfake narrowly:

AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.

Three elements:

  1. AI-generated or AI-manipulated
  2. Resembles existing persons, objects, places, entities, or events (read broadly: realistic-looking depictions count even where the specific subject does not exist)
  3. Would falsely appear to be authentic or truthful

This is narrower than "any AI-generated image." A stylised cartoon image generated by AI is not a deepfake (it does not appear authentic). A photorealistic image of a generic but non-existent person is a deepfake (it appears authentic even though the person is fictional). A realistic-looking video of a real political figure giving a speech they never gave is the paradigm case.

Provider Implementation: Machine-Readable Marking

For providers of generation tools, Article 50(2) compliance requires implementing a machine-readable mark on every output. Several approaches are now standard:

C2PA Content Credentials

The Coalition for Content Provenance and Authenticity (C2PA) has developed an open technical standard for cryptographic content provenance metadata. C2PA-compliant tools attach signed metadata to image, video, audio, and document outputs, indicating:

  • The tool that created or modified the content
  • The provenance chain of edits
  • The signing entity's identity (via certificate)

C2PA is widely adopted by major AI image-generation tools, Adobe products, news organisations, and camera manufacturers. It provides cryptographic verifiability and tamper-evidence.
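
As an illustration, the sketch below builds a minimal manifest flagging an image as AI-generated and embeds it with the open-source c2patool CLI. The service name is hypothetical, the flags follow c2patool's published usage at the time of writing, and the digitalSourceType URI is the IPTC term C2PA uses for AI-generated media; production deployments sign with their own certificate rather than the tool's bundled test credentials.

```python
# Sketch: flag an image as AI-generated via a C2PA manifest, then embed and
# sign it with the open-source `c2patool` CLI (assumed installed on PATH).
import json
import subprocess
import tempfile

manifest = {
    "claim_generator": "example-genai-service/1.0",  # hypothetical provider name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC digital source type identifying AI-generated media
                        "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(manifest, f)
    manifest_path = f.name

# Embeds the signed manifest; without a configured certificate, c2patool
# signs with its bundled test certificate (not suitable for production).
subprocess.run(
    ["c2patool", "generated.png", "-m", manifest_path, "-o", "marked.png"],
    check=True,
)
```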

Cryptographic Watermarks

Robust watermarking techniques embed imperceptible patterns into AI-generated content that survive common transformations (compression, cropping, format conversion). Modern approaches include:

  • SynthID (Google DeepMind) for images, audio, video, and text
  • Stable Signature for image diffusion models
  • Voice watermarks integrated into text-to-speech systems
  • Token-distribution watermarks for LLM outputs

Watermarks are more robust than metadata (which can be stripped on save) but harder for third parties to verify without access to the watermark detector.
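
To make the token-distribution idea concrete, here is a toy detector in the spirit of the "green list" scheme (Kirchenbauer et al., 2023). It is a sketch only: a real detector must share the generator's exact tokenizer, hash, and green-list fraction, none of which are standardised across vendors.

```python
# Toy detector for a "green list" token-distribution watermark. At generation
# time, the model softly favours a pseudo-random "green" half of the
# vocabulary, seeded by the previous token; detection then checks whether
# green tokens are statistically over-represented in a text.
import hashlib
import math

GAMMA = 0.5  # fraction of the vocabulary placed on the green list each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-random green-list membership, seeded by the preceding token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < GAMMA * 256

def watermark_z_score(tokens: list[str]) -> float:
    """z-score of the observed green count against the unwatermarked mean."""
    assert len(tokens) >= 2, "need at least two tokens"
    t = len(tokens) - 1
    greens = sum(is_green(tokens[i], tokens[i + 1]) for i in range(t))
    return (greens - GAMMA * t) / math.sqrt(GAMMA * (1 - GAMMA) * t)

# Unwatermarked text scores near 0; scores above ~4 are strong evidence of
# the watermark (assuming the detector matches the generator's parameters).
```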

Detector Models

Providers can also train classifiers that detect content generated by a specific model. These are less robust than watermarks (detection accuracy decreases with content transformations and against adversarial inputs) but easier to deploy retroactively.

Metadata Tags

Standard formats like EXIF, XMP, and ID3 can carry "AI-generated" flags. These are easy to implement but easy to strip; they should not be the only marking approach.
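
A minimal sketch of the metadata approach, writing an AI-generated flag into a PNG text chunk with Pillow (the key names are illustrative; there is no single standard key):

```python
# Minimal metadata tagging with Pillow: an "AI generated" flag in a PNG
# text chunk. Cheap to implement, but stripped by any re-save or screenshot,
# so it should complement watermarking or C2PA, never replace them.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("generated.png")
meta = PngInfo()
meta.add_text("ai_generated", "true")                    # illustrative key name
meta.add_text("generator", "example-genai-service/1.0")  # hypothetical
img.save("tagged.png", pnginfo=meta)
```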

Practical Implementation

A robust provider implementation typically combines the following layers (a brief composition sketch follows the list):

  1. C2PA Content Credentials for cryptographic provenance
  2. Cryptographic watermark for robustness to metadata stripping
  3. Detector model as a backup
  4. Acceptable-use restrictions in terms of service requiring users not to remove marks
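
A minimal composition sketch, reusing the c2patool invocation shown earlier; embed_watermark is a stand-in for whatever proprietary watermarking SDK the provider licenses:

```python
# Layered marking: watermark the pixels first, then attach a signed C2PA
# manifest, so the provenance claim covers the final (watermarked) bytes.
import shutil
import subprocess
from pathlib import Path

def embed_watermark(src: Path, dst: Path) -> None:
    # Placeholder: a real provider would call its watermarking SDK here.
    shutil.copy(src, dst)

def mark_output(raw: Path, manifest: Path) -> Path:
    watermarked = raw.with_name(raw.stem + "-wm.png")
    embed_watermark(raw, watermarked)   # layer 2: robust watermark
    marked = raw.with_name(raw.stem + "-marked.png")
    subprocess.run(                     # layer 1: C2PA Content Credentials
        ["c2patool", str(watermarked), "-m", str(manifest), "-o", str(marked)],
        check=True,
    )
    return marked  # layers 3 and 4 (detector, terms of service) sit outside this code path
```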

Under Article 50(7), the AI Office is to facilitate Union-level codes of practice on the detection and labelling of artificially generated content, which the Commission may approve by implementing act. These are likely to favour interoperability with established standards (notably C2PA) while leaving room for the state of the art to evolve.

Deployer Implementation: User-Facing Disclosure

For deployers, Article 50(4) compliance requires user-facing disclosure when publishing deepfakes or AI-generated public-interest text.

Disclosure Patterns for Deepfakes

Visible labelling options include:

  • Overlay or watermark on the image, video, or audio (e.g., "AI-generated" badge)
  • Caption or attribution below or alongside the content
  • Platform-level label automatically added by the hosting service based on detected machine-readable marks
  • End-card for video content
  • Spoken disclosure for audio (e.g., "This audio was generated by AI" at the start of a clip)

The disclosure must be made "in a clear and distinguishable manner" at the latest at the time of the first interaction or exposure (Article 50(5)).
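
As a sketch of the platform-level pattern, the helper below checks for the illustrative PNG flag written in the provider example above and produces a caption. A production pipeline would also verify C2PA manifests and run watermark detectors rather than trust a single strippable tag.

```python
# Deployer-side sketch: auto-caption uploads carrying a machine-readable
# AI flag stored in a PNG text chunk.
from PIL import Image

def disclosure_caption(path: str) -> str | None:
    img = Image.open(path)
    text_chunks = getattr(img, "text", {})  # present on PNG images in Pillow
    if text_chunks.get("ai_generated") == "true":
        return "This image was generated or manipulated by AI."
    return None  # no flag found; other detectors may still fire
```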

Disclosure Patterns for AI-Generated Text

For AI-generated text published to inform the public on matters of public interest:

  • Author attribution identifying the AI nature (e.g., "Drafted with AI, edited by [name]")
  • Header or footer disclosure ("This article contains AI-generated content")
  • Sentence-level marking for partially AI-generated articles
  • Platform-level metadata indicating AI generation

The editorial-review carve-out is significant: if AI-generated text undergoes a process of human editorial control and a natural or legal person holds editorial responsibility, the disclosure obligation does not apply. Newsroom workflows where editors review and sign off on AI-drafted material can rely on this carve-out.

The Artistic/Creative/Satirical Carve-Out

Article 50(4) provides that for deepfakes forming part of "evidently artistic, creative, satirical, fictional or analogous works," the disclosure is limited to "disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work."

Examples of compliant disclosure in artistic contexts:

  • Film and TV credits naming the AI generation tools used
  • Liner notes in music albums identifying AI-generated tracks
  • Hashtag or caption on satirical social media posts (e.g., #aigenerated, #deepfake)
  • Wall text or catalogue notation in art exhibitions

The carve-out adjusts the form of disclosure but does not waive it entirely.

Specific Deployment Scenarios

A Photo-Editing App with AI-Powered Image Generation

Provider implements C2PA marking on AI-generated outputs. Users posting those outputs on social media are deployers — they have Article 50(4) disclosure obligations if the output is a deepfake (realistic-looking depiction).

A News Organisation Using AI to Generate Article Drafts

Writers and editors review and edit AI drafts before publication. Editorial responsibility lies with named editors. The editorial-review carve-out applies — Article 50(4) disclosure for the text is not required, though many news organisations choose to disclose voluntarily.

A Marketing Agency Using AI to Generate Product Imagery

AI generates photorealistic product imagery for ad campaigns. The agency is the deployer; the AI tool's provider implements Article 50(2) marking. Article 50(4) deepfake disclosure may apply if the imagery depicts real people or specific real environments; otherwise generally does not apply. Advertising law (UCPD, national advertising standards) may add specific disclosure requirements separately.

A Political Campaign Using AI to Generate Social Media Posts

Article 50(4) text disclosure applies (public-interest communications). Editorial-review carve-out may apply if a human signs off. Annex III, point 8(b) may make the AI system high-risk if it is "intended to be used for influencing the outcome of an election." Article 5(1)(a) manipulation prohibition may apply if the content is manipulative and causes significant harm.

A Voice-Cloning Service for Audiobook Narration

Provider implements voice watermarking. Audiobook publisher (deployer) may need to disclose AI-narrated content; many platforms (Audible, etc.) provide platform-level labelling that satisfies this.

A Customer Service Chatbot

Article 50(1) (chatbot disclosure) applies. Article 50(4) text disclosure for AI-generated public-interest text may apply if the chatbot is used in public-information contexts (government services, public-interest advisory).

A Content-Creation Platform Allowing Users to Generate AI Images

Platform combines roles: it is a provider of the generation tool (Article 50(2) applies — must implement machine-readable marking) and a host of user content (DSA obligations apply for platform-level moderation and labelling).

Interaction with Other Regulations

Digital Services Act

The DSA imposes additional content-moderation and transparency obligations on platforms hosting AI-generated content. Very large online platforms (VLOPs) face systemic-risk assessment obligations that explicitly address AI-generated content risks.

GDPR

Generating content that depicts real, identifiable people involves processing personal data, so the GDPR applies independently: a lawful basis is needed under Article 6 (often consent for non-public figures, or legitimate-interests balancing for satire of public figures), and Article 9 applies if biometric data is involved.

Copyright and Image Rights

National copyright laws and image-rights laws apply to AI-generated content. Article 17 of Directive 2019/790 governs platform liability for copyright-protected content, and national personality and image rights apply to deepfakes of real individuals.

Sector-Specific Rules

Advertising laws (UCPD, national advertising standards), broadcasting laws (AVMSD), and electoral laws may impose additional disclosure or restrictions on AI-generated content in specific contexts.

Compliance Checklist

For Providers of AI Generation Tools

  1. Implement machine-readable marking. Choose an approach (C2PA, watermark, detector, metadata) or combination.
  2. Verify robustness. Test that marking survives common transformations.
  3. Document the approach. Maintain a brief technical note describing the marking and its limitations.
  4. Provide downstream documentation. Help deployers understand how to comply with Article 50(4) using your tool's outputs.
  5. Update as state of the art evolves. The Article 50(2) standard is "as far as technically feasible" — keep current.

For Deployers Using AI to Generate Content

  1. Inventory your AI content. What content do you publish that is AI-generated or AI-manipulated?
  2. Classify deepfakes. Apply the Article 3(60) test to image, audio, and video content.
  3. Classify public-interest text. Determine which AI-generated text is published for the purpose of informing the public.
  4. Apply the editorial-review carve-out where appropriate. Document the editorial control workflow.
  5. Implement disclosure. Choose a disclosure pattern appropriate to each content type and context.
  6. Address the artistic carve-out for genuinely creative or satirical works — but still disclose, in adjusted form.
  7. Train your content team. Make sure people creating, editing, and publishing AI-generated content understand the labelling obligations.
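
As a sketch of steps 2 to 5, the helper below encodes the two Article 50(4) triggers as a decision function. The underlying legal tests (Article 3(60) authenticity, "public interest", genuine editorial control) are fact-specific judgments; the boolean fields here are deliberate simplifications for illustration.

```python
# Simplified Article 50(4) triage mirroring checklist steps 2-5.
from dataclasses import dataclass

@dataclass
class ContentItem:
    modality: str               # "image" | "audio" | "video" | "text"
    appears_authentic: bool     # would a person take it for real? (Art. 3(60))
    public_interest_text: bool  # published to inform the public (text only)
    editorial_review: bool      # human editorial control + responsible person

def needs_user_facing_disclosure(item: ContentItem) -> bool:
    if item.modality in ("image", "audio", "video"):
        return item.appears_authentic  # deepfake test
    if item.modality == "text":
        # the editorial-review carve-out lifts the text obligation
        return item.public_interest_text and not item.editorial_review
    return False
```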

Conclusion

The Article 50 labelling regime is one of the more practical interventions in the EU AI Act. It does not prohibit AI generation; it requires that the AI nature of content be detectable and disclosed. The provider-side machine mark and the deployer-side user-facing disclosure together create a coordinated system that supports trust without preventing creative or commercial use.

For deeper coverage of the deepfake-specific provisions, see deepfakes and the EU AI Act. For the broader transparency framework that Article 50 establishes, see transparency obligations under the EU AI Act.

Frequently Asked Questions

What does the EU AI Act require for AI-generated content?

Article 50 imposes two distinct obligations. Article 50(2) requires providers of AI systems generating synthetic content (audio, image, video, or text) to ensure that outputs are marked in a machine-readable format and detectable as artificially generated or manipulated. Article 50(4) requires deployers to disclose to users that the content has been artificially generated or manipulated, with specific provisions for deepfakes and for AI-generated text used in public-interest communications.

Who has the labelling obligation — the AI provider or the company using the AI?

Both, in different ways. The AI system provider must implement machine-readable marking under Article 50(2). The deployer — the entity using the AI to generate or publish content — must implement user-facing disclosure under Article 50(4). The provider's technical mark enables the deployer's compliance and supports downstream detection.

What is a 'machine-readable format' for AI-generated content?

Article 50(2) requires marking in a 'machine-readable format' that allows the content to be detected as artificially generated or manipulated. Standard implementations include cryptographic provenance metadata (C2PA Content Credentials), robust watermarks, metadata tags, and detector models. Codes of practice under Article 50(7), which the Commission may approve by implementing act, are expected to specify acceptable technical approaches, with flexibility built in to accommodate the evolving state of the art.

Do I need to label AI-generated images on social media posts?

Yes, in two layers. The AI image generator's provider must implement machine-readable marking under Article 50(2). You, as the deployer (the person posting the image), must disclose to viewers that the image is AI-generated if it constitutes a 'deepfake' under Article 3(60) — that is, AI-generated or manipulated image content that resembles existing persons, objects, places, entities or events and would falsely appear authentic. Purely abstract AI art that is obviously not depicting reality is not a deepfake and is not subject to Article 50(4) disclosure, though many platforms encourage labelling regardless.

What is the penalty for failing to label AI-generated content?

Article 50 violations fall within the third penalty tier under Article 99: up to €15 million or 3% of total worldwide annual turnover, whichever is higher. SMEs benefit from the lower of the two amounts.
