EU Approves General-Purpose AI Code of Practice: A Landmark Moment for AI Governance

The European Union has formally approved the world's first comprehensive framework for general-purpose AI compliance, setting a global precedent just hours before new obligations take effect.

In a dramatic last-minute development that underscores the EU's commitment to AI governance, the European Commission and European AI Board have officially approved the General-Purpose AI (GPAI) Code of Practice as an "adequate voluntary tool" for AI model providers to demonstrate compliance with the EU AI Act. This approval comes just one day before the Act's extensive provisions for GPAI models become applicable on August 2, 2025.

A Nine-Month Journey to Global AI Standards

The Code of Practice represents the culmination of an unprecedented multi-stakeholder effort that began in September 2024. Developed by 13 independent experts with input from over 1,000 stakeholders, including model providers, small and medium-sized enterprises, academics, AI safety experts, rightsholders, and civil society organizations, the framework establishes the world's first detailed compliance pathway for general-purpose AI systems.

The Safety and Security Chapter of the Code specifies obligations for "GPAI with Systemic Risk" (GPAISR), applying to the most advanced models on the EU market, such as OpenAI's o3, Anthropic's Claude 4 Opus, and Google's Gemini 2.5 Pro. Industry experts describe it as "the best framework of its kind in the world" and potentially "a starting point for global standards for the safe and secure development and deployment of frontier AI."

Three Pillars of AI Compliance

The approved Code establishes a comprehensive framework built on three critical chapters:

Transparency Requirements

All GPAI model providers must implement robust documentation standards, including a comprehensive Model Documentation Form that details technical specifications, training data characteristics, computational resources, and energy consumption. The transparency chapter emphasizes the importance of clear and accessible information about GPAI models and their risks, providing guidance on how critical details can be effectively documented for regulatory bodies and other organizations.
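
To make the documentation requirement more concrete, here is a minimal sketch of how a provider might represent these fields in code. The field names are illustrative assumptions drawn from the categories mentioned above (technical specifications, training data characteristics, compute, and energy consumption), not the official Model Documentation Form schema.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class ModelDocumentation:
    """Illustrative record mirroring the kinds of fields the Model
    Documentation Form asks for (hypothetical field names)."""
    model_name: str
    provider: str
    release_date: str                  # ISO 8601 date the model was placed on the market
    architecture_summary: str          # high-level technical specification
    parameter_count: int
    training_data_sources: list[str]   # characteristics and provenance of training data
    training_compute_flops: float      # total training compute
    energy_consumption_mwh: float      # estimated energy used during training
    intended_downstream_uses: list[str]

    def to_json(self) -> str:
        """Serialize for sharing with the AI Office or downstream providers."""
        return json.dumps(asdict(self), indent=2)


doc = ModelDocumentation(
    model_name="example-gpai-1",
    provider="Example AI Ltd.",
    release_date="2025-09-01",
    architecture_summary="Decoder-only transformer",
    parameter_count=70_000_000_000,
    training_data_sources=["licensed text corpora", "filtered public web crawl"],
    training_compute_flops=3.2e25,
    energy_consumption_mwh=1250.0,
    intended_downstream_uses=["chat assistants", "code generation"],
)
print(doc.to_json())
```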

Copyright Protections

The Code addresses one of the most contentious issues in AI development by requiring providers to adopt policies that respect intellectual property rights. Providers must prevent their models from generating near-verbatim reproductions of protected works from their training data, using measures such as filtering mechanisms, prompt constraints, or post-processing checks. They must also exclude websites notorious for copyright infringement when collecting training data.
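
As an illustration of the post-processing checks mentioned above, the sketch below flags outputs that share a long verbatim span with any text in a small set of protected works. This is a minimal, assumption-laden example using exact substring matching over an in-memory corpus; it is not the method the Code prescribes, and production systems would need scalable near-duplicate detection.

```python
def longest_shared_span(output: str, reference: str) -> int:
    """Length of the longest substring appearing verbatim in both texts
    (simple O(n*m) dynamic programming; fine for a short illustration)."""
    n, m = len(output), len(reference)
    prev = [0] * (m + 1)
    best = 0
    for i in range(1, n + 1):
        curr = [0] * (m + 1)
        for j in range(1, m + 1):
            if output[i - 1] == reference[j - 1]:
                curr[j] = prev[j - 1] + 1
                best = max(best, curr[j])
        prev = curr
    return best


def flag_verbatim_reproduction(output: str, protected_texts: list[str],
                               threshold_chars: int = 160) -> bool:
    """Return True if the output reproduces a long verbatim span from any
    protected text, signalling it should be blocked or rewritten."""
    return any(longest_shared_span(output, ref) >= threshold_chars
               for ref in protected_texts)


# Example: a draft that copies 200 characters of a protected text is flagged.
protected = ["All happy families are alike; each unhappy family is unhappy in its own way. " * 3]
draft = "The model says: " + protected[0][:200]
print(flag_verbatim_reproduction(draft, protected))  # True
```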

Safety and Security for Models with Systemic Risk

For the most advanced AI systems that could pose systemic risks, the Code establishes stringent safety and security protocols, including ongoing systemic risk assessment and mitigation, state-of-the-art model evaluations, reporting of serious incidents to the AI Office, and cybersecurity protections for model weights. These obligations apply to providers of the most advanced models on the EU market and represent the most comprehensive regulatory framework for frontier AI systems globally.

Industry Giants Already Committed

The Code already has 26 confirmed signatories, including major players like Google, Microsoft, OpenAI, and Anthropic. This early adoption by industry leaders signals broad acceptance of the framework and suggests it may become a de facto global standard for AI governance.

The timing of today's approval is particularly significant, as it provides crucial legal certainty for companies operating in the EU market. Following the endorsement, AI model providers who voluntarily sign the Code can demonstrate compliance with the AI Act, reducing their administrative burden and providing more legal certainty than alternative compliance methods.

Grace Periods and Implementation Timeline

Recognizing the complexity of compliance, the EU has built in practical transition periods. Signatories of the GPAI Code of Practice receive a de facto one-year grace period from the AI Office, and providers of GPAI models placed on the market before August 2, 2025, have until August 2, 2027 to bring those models into compliance.
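
The phased timeline can be summarized in a few lines of code. This is a minimal sketch using only the two dates named in this article (obligations applicable from August 2, 2025, and an August 2, 2027 deadline for models already on the market); it is illustrative only, not legal guidance.

```python
from datetime import date

OBLIGATIONS_APPLY = date(2025, 8, 2)   # GPAI obligations become applicable
LEGACY_DEADLINE = date(2027, 8, 2)     # deadline for models already on the market


def compliance_deadline(placed_on_market: date) -> date:
    """Return the date by which a GPAI model must comply, per the phased
    timeline described above (illustrative, not legal advice)."""
    if placed_on_market < OBLIGATIONS_APPLY:
        return LEGACY_DEADLINE          # pre-existing models have until August 2, 2027
    return placed_on_market             # new models must comply from day one


print(compliance_deadline(date(2024, 6, 1)))   # -> 2027-08-02
print(compliance_deadline(date(2025, 10, 1)))  # -> 2025-10-01
```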

This phased approach acknowledges the technical challenges involved in implementing comprehensive AI governance while ensuring that new models entering the market meet the highest standards from day one.

Global Implications and Future Outlook

The approval of the GPAI Code of Practice represents more than just European regulation—it establishes a global benchmark for AI governance. As the world's first comprehensive framework for general-purpose AI compliance, it's likely to influence regulatory approaches in other jurisdictions and may become the foundation for international AI safety standards.

For the EU's regulatory regime for GPAISRs to be successful, experts recommend a regular review-and-update mechanism, for example on a two-year cadence, to increase predictability for providers. The Commission has indicated it will periodically review the guidelines to reflect technological advances and enforcement experience.

The Bottom Line

As one observer noted, "If there is one thing you should not underestimate the EU on, it is their ability for successful last-minute deal-making on matters of intense political and regulatory complexity." Today's approval validates this assessment, demonstrating the EU's determination to lead global AI governance while maintaining its August 2025 implementation timeline.

The formal approval of the GPAI Code of Practice marks a watershed moment in AI regulation, establishing the European Union as the global leader in AI governance and providing a comprehensive framework that could shape the future of artificial intelligence development worldwide. With major industry players already signed on and implementation beginning tomorrow, the EU has successfully transformed the landscape of AI regulation in a single day.

As emphasized by the European Commission, there will be no meaningful delay to the EU AI Act's implementation, setting the stage for a new era of regulated AI development that prioritizes safety, transparency, and respect for fundamental rights.
