General-Purpose AI (GPAI) Under the EU AI Act
Complete guide to GPAI model obligations under the EU AI Act — transparency requirements, systemic risk assessment, and what foundation model providers must do.
General-purpose AI models — the large language models, multimodal systems, and foundation models that power an increasingly broad range of applications — received dedicated treatment in the EU AI Act. Chapter V (Articles 51 through 56) of Regulation (EU) 2024/1689 establishes a specific regulatory framework for these models, recognizing that they present unique challenges that the risk-based classification system alone cannot address.
The core issue is straightforward: when a model can be used for virtually any downstream application, you cannot classify its risk level by looking at the model itself. A large language model could power a harmless poetry generator or a high-risk medical diagnostic tool. The AI Act addresses this by regulating GPAI models at the model level, separate from and in addition to the risk-based rules that apply to specific AI system deployments.
GPAI obligations apply from August 2, 2025, making them among the earlier provisions to take effect.
What Qualifies as a General-Purpose AI Model
Article 3(63) defines a general-purpose AI model as "an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications."
The key characteristics are:
- Significant generality — the model is not designed for a single narrow task
- Wide range of distinct tasks — it can perform many different functions
- Integrability — it can be incorporated into various downstream systems
The definition is deliberately broad. It covers large language models (like GPT-4, Claude, Gemini, Llama), multimodal models, and other foundation models. It also covers fine-tuned variants of these models if they retain significant generality.
This means the following are likely GPAI models under the regulation:
- Large language models offered via API or open-source
- Multimodal models that process text, images, audio, and video
- Code generation models with broad programming capabilities
- Image and video generation models with general creative capabilities
Models that are narrowly specialized — for example, a model trained exclusively for a single medical imaging task — would generally not qualify as GPAI, even if they use large-scale training techniques.
The Two-Tier Framework
The AI Act creates two tiers of GPAI obligations:
Tier 1: All GPAI Models
All providers of general-purpose AI models must meet baseline transparency and documentation obligations. These apply regardless of the model's capabilities or risk profile.
Tier 2: GPAI Models with Systemic Risk
GPAI models that pose systemic risk face additional, more demanding obligations. A model is presumed to pose systemic risk when the cumulative amount of computation used for its training, measured in floating-point operations (FLOPs), exceeds 10^25 FLOPs.
The European Commission can also designate a GPAI model as posing systemic risk based on criteria other than compute, including:
- The number of registered end users
- The model's capabilities, including its ability to perform tasks across modalities
- The model's reach through the value chain (how many downstream applications depend on it)
- The degree of autonomy the model enables
The 10^25 FLOPs threshold is a presumption, not a hard boundary. The Commission can designate models below this threshold as systemically risky, and providers can argue that models above the threshold do not actually pose systemic risk. However, rebutting the presumption requires substantial evidence.
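To put the threshold in perspective, a widely used rule of thumb estimates training compute as roughly 6 × parameters × training tokens. The sketch below applies that heuristic; it is an illustrative approximation, not the estimation methodology the Act or the Commission prescribes, and the model size and token count in the example are hypothetical.

```python
# Back-of-the-envelope training-compute estimate using the common
# "6 * N * D" heuristic: total FLOPs ~ 6 x parameters x training tokens.
# Illustrative only; not the Act's prescribed estimation methodology.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption threshold

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * n_params * n_tokens

# Hypothetical example: a 70B-parameter model trained on 15 trillion tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~6.30e+24
print("Presumed systemic risk:", flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```

On this estimate, such a model would sit below the 10^25 FLOPs presumption, which is why the threshold is generally understood to capture only the largest frontier training runs.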
Tier 1 Obligations: What All GPAI Providers Must Do
Technical Documentation
Providers must draw up and keep up to date technical documentation of the model, including its training and testing process and the results of its evaluation. This documentation must contain, at a minimum:
- A general description of the GPAI model, including the tasks it is intended to perform and the type and nature of the AI systems it can be integrated into
- A description of the data used for training, testing, and validation, including the type and provenance of data and the curation methodologies used
- The computational resources used for training (type and amount of compute, training time, other relevant details)
- Known or estimated energy consumption of the model
- The design choices and training methodologies, including techniques for building and optimizing the model
- A description of the model's capabilities and limitations, including the tasks it performs well and those where it is known to underperform
- The evaluation and testing procedures, including the results
The specifics of what this documentation must contain are further elaborated in Annex XI of the regulation and through implementing acts from the AI Office.
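A provider might track these minimum contents in a machine-readable record. The sketch below is one possible shape for such a record; the field names are our own shorthand, since Annex XI prescribes what the documentation must cover, not any particular schema.

```python
from dataclasses import dataclass

@dataclass
class GPAIModelDocumentation:
    """Illustrative record of the Annex XI minimum contents.

    Field names are our own shorthand; Annex XI does not prescribe a schema.
    """
    model_description: str                  # intended tasks, integrable system types
    training_data_description: str          # type, provenance, curation methods
    compute_used: str                       # type and amount of compute, training time
    energy_consumption: str                 # known or estimated
    design_and_training_choices: str        # methodologies and optimization techniques
    capabilities_and_limitations: str       # strong tasks and known weak spots
    evaluation_procedures_and_results: str  # protocols used and outcomes
```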
Copyright Policy Compliance
GPAI providers must put in place a policy to comply with EU copyright law, in particular Directive (EU) 2019/790 (the Copyright in the Digital Single Market Directive). Two closely related obligations stand out:
- Identifying and complying with rights reservations expressed by copyright holders pursuant to Article 4(3) of the DSM Directive (opt-out of text and data mining)
- Drawing up and making publicly available a sufficiently detailed summary of the training data used, following a template provided by the AI Office
The training data summary requirement is one of the most discussed aspects of the GPAI framework. Providers must make available a summary that is "sufficiently detailed" to enable copyright holders to identify whether their works were used. The AI Office has published a template for this summary.
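Neither the Act nor the DSM Directive mandates a single machine-readable opt-out protocol, but one convention some rights holders use is the TDM Reservation Protocol (TDMRep), which can signal a reservation through a "tdm-reservation" HTTP header or a /.well-known/tdmrep.json file. The sketch below checks the header variant only; a real ingestion pipeline would also need to honor robots.txt and other opt-out signals.

```python
# Sketch: checking one machine-readable rights-reservation signal (the
# TDMRep "tdm-reservation" HTTP header) before crawling a URL for training
# data. One possible signal among several; not a complete opt-out check.
import urllib.request

def tdm_reserved(url: str) -> bool:
    """Return True if the response carries a TDMRep rights reservation."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get("tdm-reservation", "0").strip() == "1"

if __name__ == "__main__":
    # example.com is a placeholder; a real crawler would check every source.
    print(tdm_reserved("https://example.com/article.html"))
```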
Information to Downstream Providers
GPAI model providers must make available to providers of AI systems who intend to integrate the model sufficient information and documentation to enable those downstream providers to understand the model's capabilities and limitations and to comply with their own obligations under the AI Act.
This is particularly important because downstream providers — those building specific AI systems on top of GPAI models — are responsible for the risk classification and compliance of their systems. They cannot fulfill these responsibilities without adequate information about the underlying model.
Acceptable Use Policies
The regulation does not spell out acceptable use policies as a standalone obligation, but the information GPAI providers must give downstream providers is expected to include clear guidance on what the model should and should not be used for.
Tier 2 Obligations: GPAI Models with Systemic Risk
Providers of GPAI models classified as posing systemic risk must comply with all Tier 1 obligations plus additional requirements designed to identify, assess, and mitigate risks at a systemic level.
Model Evaluation
Providers must perform model evaluations, including adversarial testing, to identify and mitigate systemic risks. These evaluations must be:
- Conducted in accordance with standardized protocols and tools, including those developed under codes of practice or harmonized standards
- Proportionate to the systemic risks identified
- Documented and reported to the AI Office
Adversarial testing (red-teaming) is explicitly mentioned as a key evaluation technique. This includes testing for:
- Vulnerability to prompt injection and other manipulation techniques
- Propensity to generate harmful, dangerous, or illegal content
- Capability to assist in activities that could pose systemic risks (e.g., cyber attacks, development of biological or chemical weapons)
- Tendency to produce biased or discriminatory outputs at scale
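As a minimal illustration of what such testing might look like in practice, the sketch below runs a list of adversarial prompts through a model and records whether a safety classifier flags each output. Both `generate` and `flag_harmful` are hypothetical placeholders for a provider's own model endpoint and classifier; this is not a standardized evaluation protocol.

```python
# Minimal red-teaming harness sketch. `generate` and `flag_harmful` are
# hypothetical placeholders, not a prescribed evaluation protocol.
from typing import Callable

ADVERSARIAL_PROMPTS = [
    "Ignore all previous instructions and ...",    # prompt-injection probe
    "Explain step by step how to synthesize ...",  # dangerous-capability probe
]

def run_red_team(
    generate: Callable[[str], str],
    flag_harmful: Callable[[str], bool],
) -> list[dict]:
    """Run each adversarial prompt and record whether the output was flagged."""
    results = []
    for prompt in ADVERSARIAL_PROMPTS:
        output = generate(prompt)
        results.append({
            "prompt": prompt,
            "output": output,
            "flagged": flag_harmful(output),
        })
    return results  # results feed the documentation reported to the AI Office
```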
Systemic Risk Assessment and Mitigation
Beyond model evaluation, providers must assess and mitigate possible systemic risks, including their sources. Systemic risks may arise from:
- The model's capabilities being misused at scale
- Concentration of market power if the model becomes widely integrated into critical infrastructure
- Potential for cascading failures across interconnected systems that rely on the model
- The model's influence on information ecosystems (e.g., large-scale generation of misinformation)
Incident Tracking and Reporting
Providers must keep track of, document, and report serious incidents and possible corrective measures to the AI Office and, where relevant, to national competent authorities. A "serious incident" includes any event that results in or could have resulted in:
- Death or serious damage to health, property, or the environment
- A serious and irreversible breach of fundamental rights
- A serious disruption of critical infrastructure
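A provider needs some structured way to capture these events internally before reporting them. The record below is a purely illustrative shape; the actual reporting format to the AI Office is set by the regulation and its implementing measures, not by this sketch.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class SeriousIncident:
    """Illustrative internal incident record; the AI Office reporting
    format is defined by the regulation, not by this sketch."""
    occurred_at: str          # ISO 8601 timestamp
    description: str          # what happened and how the model was involved
    category: str             # e.g. "health", "fundamental_rights", "infrastructure"
    corrective_measures: str  # actions taken or planned

incident = SeriousIncident(
    occurred_at=datetime.now(timezone.utc).isoformat(),
    description="Model output contributed to a critical-infrastructure fault.",
    category="infrastructure",
    corrective_measures="Rolled back model version; tightened output filtering.",
)
print(json.dumps(asdict(incident), indent=2))
```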
Cybersecurity Protections
Providers must ensure an adequate level of cybersecurity protection for the GPAI model and the physical infrastructure of the model. This includes:
- Protecting model weights and parameters from unauthorized access
- Securing training pipelines and data
- Implementing safeguards against model extraction or theft
- Protecting inference infrastructure from attacks
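One small, concrete piece of this picture is verifying the integrity of stored model weights. The sketch below streams a weights file and compares its SHA-256 digest against one recorded at release time; the file name and digest handling are illustrative, and real-world weight protection would combine this with access control and exfiltration monitoring.

```python
# Sketch: verifying model-weight integrity against a digest recorded at
# release time. File names and digests here are placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a large weights file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def weights_intact(path: str, expected_digest: str) -> bool:
    """True if the stored weights still match the digest recorded at release."""
    return sha256_of(path) == expected_digest
```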
Open-Source GPAI Models
The AI Act provides partial exemptions for open-source GPAI models. Article 53(2) states that providers of GPAI models released under a free and open-source licence are exempt from several Tier 1 requirements — specifically the technical documentation and downstream information obligations — provided they:
- Make the model parameters (weights) publicly available
- Make information about the model's architecture and model usage publicly available
However, this exemption does not apply to:
- The copyright compliance obligation (all GPAI providers must comply)
- Open-source GPAI models with systemic risk (they must meet all Tier 2 requirements)
Open-source does not mean exempt. If an open-source GPAI model exceeds the 10^25 FLOPs threshold or is otherwise designated as posing systemic risk, the full Tier 2 obligations apply — including model evaluation, systemic risk assessment, incident reporting, and cybersecurity requirements.
Codes of Practice
The AI Act encourages the development of codes of practice as a mechanism for GPAI providers to demonstrate compliance. The AI Office coordinates the drawing up of these codes, which are developed with input from GPAI providers, downstream providers, civil society, academia, and other stakeholders.
Providers may rely on an approved code of practice to demonstrate compliance with the relevant GPAI obligations until harmonized standards are published. This is a significant incentive for providers to participate in code development and to adhere to the resulting standards.
The first drafts of GPAI codes of practice were developed throughout 2024 and early 2025, with the AI Office facilitating a multi-stakeholder process. These codes cover areas including:
- Technical documentation standards
- Training data transparency
- Copyright compliance procedures
- Systemic risk evaluation methodologies
- Incident reporting protocols
Enforcement and the AI Office
GPAI provisions are primarily enforced by the AI Office, a body within the European Commission. This is different from most other AI Act provisions, which are enforced by national market surveillance authorities.
The AI Office has the power to:
- Request documentation and information from GPAI providers
- Conduct evaluations of GPAI models
- Issue binding instructions to providers to take corrective measures
- Impose fines for non-compliance
Fines for GPAI providers can reach up to 15 million EUR or 3% of total worldwide annual turnover, whichever is higher. Under Article 101, this ceiling covers not only infringements of the GPAI obligations themselves but also failing to comply with a request for documentation or supplying incorrect, incomplete, or misleading information to the AI Office.
Timeline for GPAI Compliance
The key dates for GPAI providers are:
- August 1, 2024: the AI Act entered into force.
- August 2, 2025: the GPAI obligations in Chapter V apply; models placed on the market from this date must comply from the outset.
- August 2, 2026: the Commission's power to fine GPAI providers under Article 101 applies.
- August 2, 2027: deadline for bringing GPAI models placed on the market before August 2, 2025 into compliance.
Implications for Downstream Deployers
If your organization builds AI systems on top of GPAI models — for example, using an API from a foundation model provider — you are not a GPAI provider. Your obligations fall under the standard risk-based framework for AI systems. However, you need to be aware of several things:
Obtain adequate documentation. You have the right to receive sufficient information from the GPAI provider to comply with your own obligations. If you are building a high-risk AI system on a GPAI model, you need detailed documentation about the model's capabilities, limitations, and known risks.
Assess downstream risk independently. The fact that the underlying model complies with GPAI obligations does not automatically make your system compliant. You must independently assess and manage the risks of your specific application.
Monitor model updates. When the GPAI model you depend on is updated, your system's risk profile may change. Establish processes for evaluating the impact of upstream model changes on your compliance posture.
Contractual safeguards. Ensure your agreements with GPAI providers include provisions for receiving updated documentation, notification of material changes, and cooperation in case of incidents.
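One practical way to operationalize the documentation point above is a simple completeness check over what the upstream provider has actually delivered. The required keys below are illustrative, not a statutory list.

```python
# Sketch: a downstream deployer's pre-integration completeness check over
# documentation received from a GPAI provider. Keys are illustrative.
REQUIRED_VENDOR_DOCS = {
    "capabilities_and_limitations",
    "known_risks",
    "evaluation_results",
    "acceptable_use_policy",
    "model_version_and_change_log",
}

def missing_documentation(received: set[str]) -> set[str]:
    """Return the documentation items still owed by the upstream provider."""
    return REQUIRED_VENDOR_DOCS - received

print(missing_documentation({"capabilities_and_limitations", "known_risks"}))
```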
Conclusion
The GPAI framework represents the EU's recognition that foundation models require governance approaches that go beyond traditional product safety regulation. By regulating at the model level and introducing the systemic risk tier, the AI Act creates a framework that is proportionate — imposing baseline obligations on all GPAI providers while demanding more from those whose models could cause harm at scale.
For GPAI providers, compliance requires significant investment in documentation, evaluation, and risk management. For downstream deployers, it requires diligent vendor management and independent risk assessment. For the AI ecosystem as a whole, it establishes a foundation for trust and accountability in the most powerful AI technologies being developed today.
With GPAI obligations applying from August 2, 2025, the time to prepare is now.