EU AI Act Implementation by Country
How different EU member states are implementing the AI Act — national competent authorities, regulatory sandboxes, and country-specific approaches to AI governance.
The EU AI Act (Regulation 2024/1689) is a regulation, not a directive — meaning it is directly applicable in all EU member states without requiring national transposition into domestic law. However, the regulation still requires significant national-level implementation. Each member state must designate national competent authorities and market surveillance authorities, may establish regulatory sandboxes, and can adopt certain national measures within the regulation's framework.
As of early 2025, member states are at different stages of this implementation process. Some have moved quickly to designate authorities and launch sandbox initiatives. Others are still working through their institutional arrangements. This article provides an overview of how key member states are approaching AI Act implementation.
Under Article 70, each member state must designate at least one notifying authority and at least one market surveillance authority for the purposes of the AI Act. These designations must be notified to the European Commission. Several member states had existing AI governance structures that they are now adapting to fulfil these roles.
EU-Level Governance Structure
Before examining individual member states, it is important to understand the EU-level architecture that sits above national implementation.
The AI Office
Established within the European Commission, the AI Office is the central body for AI Act implementation at the EU level. Its responsibilities include:
- Overseeing the implementation and enforcement of rules for general-purpose AI models
- Coordinating with national authorities through the AI Board
- Developing guidelines, codes of practice, and implementing acts
- Monitoring the AI landscape and emerging risks
- Facilitating the development of harmonised standards
The European Artificial Intelligence Board
The AI Board brings together representatives from all member states. It advises and assists the Commission and member states in the consistent application of the regulation, issues recommendations, and provides opinions on matters related to AI Act implementation.
Advisory Forum and Scientific Panel
The Advisory Forum includes representatives from industry, SMEs, civil society, and academia, providing stakeholder input. The Scientific Panel of independent experts provides technical expertise, particularly on general-purpose AI models and systemic risks.
Country-by-Country Implementation
Germany
Germany has taken a decentralised approach to AI Act implementation, reflecting its federal structure. The federal government has designated the Federal Network Agency (Bundesnetzagentur) as the primary market surveillance authority for AI systems. This choice leverages the agency's existing experience in digital regulation and market oversight.
Germany has been particularly active on regulatory sandboxes. Several Länder (federal states) have established or are developing AI testing environments, with notable initiatives in Bavaria and North Rhine-Westphalia. The federal government has also been working with industry associations, particularly in the automotive and manufacturing sectors, to develop sector-specific guidance for AI Act compliance.
Germany's approach reflects a pragmatic concern for industrial competitiveness — ensuring that AI regulation does not undermine the country's manufacturing and engineering sectors while maintaining strong fundamental rights protections.
France
France has positioned the CNIL (Commission nationale de l'informatique et des libertés) and Inria (Institut national de recherche en sciences et technologies du numérique) as key institutions for AI Act implementation. The CNIL, already France's data protection authority, brings deep experience in regulating algorithmic decision-making and will handle aspects of market surveillance related to personal data processing.
France has been one of the most proactive member states in establishing AI sandboxes, building on its existing "France AI" strategy. The government has emphasised the importance of supporting the French AI ecosystem — particularly companies developing large language models and generative AI — while implementing the regulation's requirements.
The French approach reflects a dual emphasis on innovation promotion and rights protection, with significant government investment in AI research and development alongside regulatory implementation.
The Netherlands
The Netherlands has designated the Authority for Digital Infrastructure (RDI, Rijksinspectie Digitale Infrastructuur) as the coordinating authority for AI Act implementation. The Dutch Algoritmeregister (Algorithm Register), which requires government organisations to publicly register the algorithms they use, has been highlighted as a model that anticipates several of the AI Act's transparency and registration requirements.
The Netherlands has also been active in the development of impact assessment frameworks for AI systems, building on its existing practice of requiring impact assessments for government use of algorithms. This work provides a practical foundation for implementing the AI Act's fundamental rights impact assessment requirement under Article 27.
Navigating AI Act compliance across jurisdictions
Ctrl AI helps organisations operating across multiple EU member states maintain consistent compliance documentation — with audit-ready traces and trust-tagged outputs that satisfy any national authority.
Spain
Spain was among the first EU member states to establish a dedicated AI governance body. The Spanish Agency for the Supervision of Artificial Intelligence (AESIA) was created in 2023, well before the AI Act's final adoption. AESIA is expected to serve as Spain's primary market surveillance authority and national competent authority for the AI Act.
Spain has also been an early mover on regulatory sandboxes, launching a pilot sandbox in 2022 that provided practical experience in AI testing and compliance support. This early start gives Spain a relative advantage in institutional readiness.
Italy
Italy has designated the Agency for Digital Italy (AgID) and the Agency for Cybersecurity (ACN) as key authorities in its AI Act implementation framework. The Italian data protection authority (Garante per la protezione dei dati personali) will also play a role, particularly for AI systems that process personal data.
Italy's approach has been notable for its early enforcement actions against specific AI systems — most prominently its temporary ban on ChatGPT in 2023 on data protection grounds. This signals a potentially more interventionist regulatory approach compared to some other member states.
Ireland
Ireland's implementation is particularly significant given that many major technology companies have their European headquarters there. The government has indicated that AI Act market surveillance will involve multiple existing regulators, with coordination mechanisms to ensure coherent oversight across sectors. The Data Protection Commission (DPC), already a key regulator for big tech, will play a role alongside other sector-specific authorities.
Ireland's regulatory approach will directly affect many global AI providers who operate in the EU market through Irish entities, making its implementation choices consequential beyond its borders.
Sweden
Sweden has taken a characteristically pragmatic approach, building on its existing regulatory infrastructure. The government has indicated that market surveillance responsibilities will be distributed across existing sector-specific authorities, with coordination at the national level. Sweden has also been active in standardisation work, contributing to the development of harmonised standards through CEN and CENELEC.
Poland
Poland is developing its AI Act implementation framework through its Ministry of Digital Affairs. The country has expressed particular interest in regulatory sandboxes as a tool for supporting its growing technology sector. Poland's approach reflects the concern shared by several Central and Eastern European member states about ensuring that AI regulation does not create disproportionate burdens for smaller companies and developing technology ecosystems.
Belgium
Belgium has designated the Centre for Cybersecurity Belgium (CCB) as one of its key authorities for AI Act implementation. The federal structure of Belgium — with competences divided between federal and regional governments — creates complexity similar to Germany's, requiring coordination across multiple levels of government.
Implementation timelines and institutional designations are evolving. Member states have until August 2, 2025 to designate their national competent authorities and notify the Commission. The information in this article reflects the state of play as of early 2025 and will continue to develop.
Regulatory Sandboxes
Article 57 of the AI Act requires each member state to establish at least one AI regulatory sandbox by August 2, 2026. Some member states have moved well ahead of this deadline.
What AI Regulatory Sandboxes Are
Sandboxes provide controlled environments where AI systems can be developed, trained, tested, and validated under regulatory supervision. They offer:
- Supervised flexibility in how certain requirements are tested and demonstrated, in a controlled setting
- Direct interaction with regulatory authorities during development
- Guidance on compliance requirements before market entry
- Opportunities to test novel AI applications where regulation is uncertain
Current Sandbox Activity
Several member states already have sandbox programmes operational or in development:
- Spain launched its sandbox pilot in 2022, one of the earliest in the EU
- France has integrated AI sandbox concepts into its broader innovation framework
- Germany has multiple sandbox initiatives at both federal and state levels
- The Netherlands has used sandbox approaches in fintech regulation and is extending them to AI
- Norway (an EEA member expected to apply the AI Act once it is incorporated into the EEA Agreement) has established an AI sandbox through the Norwegian Data Protection Authority
SME Provisions
The AI Act includes specific provisions to ensure that sandboxes are accessible to small and medium-sized enterprises and startups. Article 57 requires that sandbox conditions take into account the specific interests and needs of SMEs, including reducing administrative burdens and fees.
Regulatory sandboxes are not a compliance shortcut. Systems developed within sandboxes must still meet all applicable requirements before being placed on the market. However, the sandbox process provides valuable regulatory feedback that can reduce compliance costs and uncertainty.
Key Differences in National Approaches
Centralised vs. Distributed Authority
Member states are taking different approaches to institutional design:
- Centralised — some states (like Spain with AESIA) have created or designated a single primary authority for AI Act oversight
- Distributed — others (like Germany, Ireland, and Sweden) are distributing responsibilities across existing sector-specific regulators with coordination mechanisms
- Hybrid — several states designate a lead authority for coordination while distributing sector-specific oversight
Enforcement Posture
Early signals suggest variation in enforcement intensity:
- Some member states (notably Italy) have already taken enforcement actions related to AI systems, suggesting a more proactive approach
- Others emphasise guidance, education, and sandbox support before moving to enforcement
- The level of resources allocated to AI market surveillance varies significantly across member states
Innovation Focus
Member states with significant AI industries (France, Germany, Ireland, the Netherlands) tend to emphasise balancing regulation with innovation support. Those with developing AI ecosystems (Poland, several Central and Eastern European states) focus on ensuring regulation does not create disproportionate barriers.
Implementation Timeline
The national implementation steps discussed above sit within the AI Act's phased application schedule:
- August 1, 2024 — the AI Act entered into force
- February 2, 2025 — prohibitions on unacceptable-risk AI practices began to apply
- August 2, 2025 — deadline for member states to designate national competent authorities; obligations for general-purpose AI models begin to apply
- August 2, 2026 — most remaining provisions apply, and each member state must have at least one operational AI regulatory sandbox
What This Means for Organisations Operating Across Member States
For companies operating in multiple EU member states, the variation in national implementation creates both challenges and opportunities.
Challenges include:
- Different national authorities with potentially different interpretive approaches
- Varying enforcement timelines and priorities
- Multiple registration and notification requirements
- Different sandbox access criteria and processes
Practical recommendations:
- Align to the highest standard. Build your compliance programme to meet the strictest national interpretation. This avoids having to maintain different compliance levels for different markets.
- Engage with relevant authorities early. Identify the national competent authorities in the member states where you operate and establish communication channels before enforcement begins.
- Leverage sandboxes strategically. If you are developing a novel AI system, consider which member state's sandbox offers the best fit for your technology and use case.
- Monitor developments. National implementation is still unfolding. Assign someone in your organisation to track developments in the member states relevant to your business.
- Centralise documentation. Maintain a single, comprehensive compliance documentation set that satisfies the requirements across all jurisdictions where you operate.
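The "centralise documentation" recommendation can be made concrete with a simple data structure. The sketch below is purely illustrative — the class names, fields, and example authorities are assumptions, not a prescribed format or an official registration schema — but it shows the core idea: one dossier per AI system, with per-jurisdiction records, so gaps across member states are easy to detect.

```python
from dataclasses import dataclass, field

@dataclass
class JurisdictionRecord:
    """Per-member-state record for one AI system (illustrative fields only)."""
    country: str
    authority: str          # designated national competent authority
    registered: bool = False
    notes: str = ""

@dataclass
class AISystemDossier:
    """Single source of truth for one AI system's compliance documentation."""
    system_name: str
    risk_class: str         # e.g. "high-risk", "limited-risk"
    jurisdictions: list = field(default_factory=list)

    def missing_registrations(self) -> list:
        """Member states where the system is deployed but not yet registered."""
        return [j.country for j in self.jurisdictions if not j.registered]

# Hypothetical example: one high-risk system deployed in two member states
dossier = AISystemDossier("credit-scoring-v2", "high-risk")
dossier.jurisdictions.append(JurisdictionRecord("ES", "AESIA", registered=True))
dossier.jurisdictions.append(JurisdictionRecord("DE", "Bundesnetzagentur"))
print(dossier.missing_registrations())  # → ['DE']
```

The design choice here is the single dossier per system: rather than maintaining separate compliance files per country, one record set satisfying the strictest interpretation is annotated per jurisdiction, which matches the "align to the highest standard" recommendation above.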
Conclusion
The EU AI Act provides a uniform regulatory framework, but its implementation is inherently national. The choices member states make about institutional design, enforcement approach, and sandbox operation will shape the practical experience of AI regulation across Europe.
For organisations, the key takeaway is that compliance cannot be purely theoretical. Understanding which authorities will oversee your AI systems, what their priorities are, and how they interpret the regulation's requirements is essential for effective compliance planning. The organisations that engage early with their relevant national authorities — whether through sandbox participation, industry consultations, or direct dialogue — will be best positioned to navigate this evolving landscape.
Make Your AI Auditable and Compliant
Ctrl AI provides expert-verified reasoning units with full execution traces — the infrastructure you need for EU AI Act compliance.
Related Articles
General-Purpose AI (GPAI) Under the EU AI Act
Complete guide to GPAI model obligations under the EU AI Act — transparency requirements, systemic risk assessment, and what foundation model providers must do.
EU AI Act Transparency Obligations: What You Must Disclose
Guide to transparency requirements under the EU AI Act — disclosure obligations for AI systems, chatbots, deepfakes, and emotion recognition. Articles 50-52 explained.
Prohibited AI Practices Under the EU AI Act
Complete list of AI practices banned by the EU AI Act — social scoring, manipulative AI, real-time biometric surveillance, and more. Understand what's prohibited and why.