EU Publishes Final General-Purpose AI Code of Practice: A Landmark Step Toward AI Regulation

Bottom Line: The European Commission published the final General-Purpose AI Code of Practice on July 10, 2025, marking a crucial milestone just weeks before AI Act obligations for GPAI model providers become applicable on August 2, 2025. This voluntary framework provides critical guidance for AI companies to demonstrate compliance with Europe's groundbreaking AI regulations, potentially setting the global standard for AI governance.

After months of intense negotiations involving nearly 1,000 stakeholders from industry, civil society, and academia, the European Commission has delivered what many consider the most comprehensive regulatory framework for general-purpose artificial intelligence models to date. The publication comes at a pivotal moment, as companies race to prepare for the world's first comprehensive AI law.

The Three-Pillar Framework

The Code consists of three separately authored chapters: Transparency, Copyright, and Safety and Security, each designed to address specific obligations under the EU AI Act.

Chapter 1: Transparency Requirements

The transparency chapter provides detailed guidance on how GPAI model providers can meet their disclosure obligations under Article 53 of the AI Act. Central to this chapter is a standardized Model Documentation Form that covers aspects of model design including data sources, training, energy consumption, licensing, distribution, and acceptable use.

This documentation must include technical documentation covering the model's training and testing process, evaluation results, and a general description including intended tasks and AI systems where it can be integrated. The requirements extend to metadata attributes such as training time, computational resources, energy consumption, and detailed information about data collection and curation methods.
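The disclosure obligations above amount to a structured record per model. As a rough sketch of what such a record might look like in practice, the following dataclass collects the categories the chapter names; the field names and example values are illustrative assumptions, not the official Model Documentation Form:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelDocumentation:
    """Illustrative record of the kinds of information the transparency
    chapter asks for. Field names are a sketch, not the official template."""
    model_name: str
    intended_tasks: list[str]            # general description / integration targets
    distribution_channels: list[str]
    license: str
    acceptable_use_policy: str
    data_sources: list[str]              # provenance of training data
    data_curation_methods: str
    training_time_gpu_hours: float
    training_compute_flops: float
    energy_consumption_mwh: float
    evaluation_results: dict[str, float] = field(default_factory=dict)

# Hypothetical example entry:
doc = ModelDocumentation(
    model_name="example-gpai-7b",
    intended_tasks=["text generation", "summarization"],
    distribution_channels=["API", "weights download"],
    license="proprietary",
    acceptable_use_policy="https://example.com/aup",
    data_sources=["licensed news corpora", "public web crawl"],
    data_curation_methods="deduplication, toxicity filtering",
    training_time_gpu_hours=1.2e6,
    training_compute_flops=5e24,
    energy_consumption_mwh=850.0,
    evaluation_results={"mmlu": 0.71},
)
print(json.dumps(asdict(doc), indent=2))
```

Keeping the record as structured data rather than free-form prose makes it straightforward to regenerate the documentation as the model, its evaluations, or the form itself evolves.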

Chapter 2: Copyright

The copyright chapter addresses one of the most contentious aspects of AI development: the use of copyrighted material in training data. It provides practical guidance for developing and implementing copyright policies as mandated by the AI Act, enabling developers to demonstrate compliance with EU copyright and intellectual property laws.

This chapter has been particularly significant given ongoing litigation and concerns from publishers and content creators about unauthorized use of their material in AI training datasets.

Chapter 3: Safety and Security for Systemic Risk Models

The most stringent requirements apply to what the regulation terms "GPAI models with systemic risk" - models where the cumulative amount of compute used for training exceeds 10^25 floating point operations (FLOPs). This threshold represents roughly $7-10 million in training costs and currently captures only the most advanced models like GPT-4.
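The cost figure follows from back-of-the-envelope arithmetic. A minimal sketch, under assumed (not official) hardware numbers, of how training compute relates to the threshold and to dollar cost:

```python
# Rough sketch, not an official methodology. Hardware price, peak throughput,
# and utilization below are illustrative assumptions.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # AI Act systemic-risk criterion

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute via the common 6*N*D rule of thumb."""
    return 6.0 * n_params * n_tokens

def compute_cost_usd(flops: float,
                     peak_flops_per_gpu: float = 1e15,  # assumed H100-class peak
                     utilization: float = 0.4,          # assumed utilization
                     usd_per_gpu_hour: float = 2.0) -> float:
    gpu_seconds = flops / (peak_flops_per_gpu * utilization)
    return gpu_seconds / 3600 * usd_per_gpu_hour

# Hypothetical 70B-parameter model trained on 15T tokens:
flops = training_flops(70e9, 15e12)            # 6.3e24 FLOPs
print(flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False: just below the threshold
print(f"${compute_cost_usd(flops):,.0f}")      # ≈ $8.75M under these assumptions
```

Under these assumptions, a run that crosses 10^25 FLOPs lands in the low tens of millions of dollars, consistent with the rough range cited above.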

These providers must implement state-of-the-art practices for risk assessment, management, and mitigation, including model evaluations, adversarial testing, tracking and reporting serious incidents, and ensuring cybersecurity protections. The chapter outlines specific guidance on risk modeling, red-teaming, and safety mitigations that reflect current best practices in AI safety research.

Industry Reactions: Resistance and Concerns

The publication follows significant industry pressure and political maneuvering. More than 40 European companies, including Airbus, Mercedes-Benz, Philips and French AI startup Mistral, urged the bloc in an open letter to postpone the regulations for two years, citing concerns about "unclear, overlapping and increasingly complex EU regulations" that could undermine European competitiveness.

Several major US companies have actively lobbied to weaken the Code's provisions, with Meta announcing in February 2025 that it would not sign the Code, months before the text was finalized. This resistance reflects broader tensions between American tech giants and European regulators over digital governance.

However, European officials have stood firm. Executive Vice President Henna Virkkunen emphasized that "Today's publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent."

The Regulatory Timeline: No Delays Despite Pressure

Despite calls for postponement, the Commission confirmed last week that there will be no delay to the EU AI Act. The timeline remains firm:

  • August 2, 2025: AI Act rules for general-purpose AI models become applicable
  • Coming weeks: Member States and the Commission will assess the Code's adequacy
  • July 2025: Commission guidelines on key concepts related to general-purpose AI models will be published
  • Potential approval: If deemed adequate, the Code may be approved via Commission implementing act, giving it general validity within the Union

Providers of GPAI models placed on the market before August 2, 2025, have until August 2, 2027, to ensure full compliance.

Technical Thresholds and Global Implications

The 10^25 FLOPs threshold for systemic risk classification represents a carefully calibrated approach to AI regulation. It is an order of magnitude lower than the 10^26 FLOPs reporting threshold set by US Executive Order 14110 (rescinded in January 2025), meaning Europe will regulate a broader set of models than the United States did under that order.

However, experts note that FLOP-based thresholds may lose relevance over time: research shows that many current large language models are significantly undertrained, and smaller models trained on better-curated data can outperform larger ones. The Commission retains the authority to update these thresholds through delegated acts as the technology evolves.

What This Means for Companies

Immediate Actions Required

Companies developing or deploying GPAI models must now:

  1. Assess applicability: Review whether their models qualify as GPAI under the Act's definition of systems displaying "significant generality" and capable of performing "a wide range of distinct tasks"
  2. Prepare documentation: Begin assembling the technical documentation required under the transparency chapter, including model cards, training data summaries, and energy consumption estimates
  3. Evaluate systemic risk status: Providers must notify the Commission within 2 weeks if their model meets the 10^25 FLOPs criterion
  4. Implement governance structures: Establish dedicated AI governance teams responsible for mapping AI use within the company and assessing AI Act applicability
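
Step 3 above can be sketched as a small compliance helper. The 10^25 FLOPs criterion and the two-week notification window come from the AI Act's systemic-risk provisions; the function names and dates are illustrative, and none of this is legal advice:

```python
from datetime import date, timedelta

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # AI Act systemic-risk criterion

def requires_notification(training_flops: float) -> bool:
    """True if cumulative training compute meets the systemic-risk criterion."""
    return training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS

def notification_deadline(threshold_met_on: date) -> date:
    """Providers must notify the Commission within two weeks of the model
    meeting (or being expected to meet) the FLOPs criterion."""
    return threshold_met_on + timedelta(weeks=2)

print(requires_notification(3e25))              # True: above 1e25 FLOPs
print(notification_deadline(date(2025, 8, 2)))  # 2025-08-16
```

In practice the compute accounting itself (what counts toward "cumulative amount of compute used for training") is the hard part; the Commission's forthcoming guidelines are expected to address it.
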

Strategic Considerations

The Code's publication creates several strategic implications:

Compliance pathways: Adherence to the Code offers a streamlined way to demonstrate compliance with the AI Act and a reduced administrative burden, but it does not by itself guarantee conformity. Providers can also demonstrate compliance through alternative means.

Competitive dynamics: The Code may become the de facto global standard, meaning non-signatories will likely need to comply anyway if they wish to access European markets.

Innovation vs. regulation balance: The framework attempts to balance innovation with safety, but critics argue the minimum standards for smaller models are too weak, while others worry the requirements for larger models are too stringent.

Open Source Models: A Notable Exemption

Providers of GPAI models released under free and open licenses need only comply with the copyright obligations and publish training data summaries, unless their models present systemic risk. This exemption recognizes the different risk profile and innovation benefits of open source development, though it remains controversial: some experts argue that open source models above certain capability thresholds could enable malicious abuse.

Looking Ahead: Implementation and Enforcement

The publication of the Code marks the beginning, not the end, of EU AI regulation implementation. The AI Office has indicated it will work cooperatively with companies facing compliance challenges, inviting proactive engagement for those planning to launch GPAI models after August 2025.

Should the Code prove inadequate or if companies fail to adhere to it, the Commission has the authority to adopt more rigorous implementing acts, potentially making the Code's measures the official compliance framework.

The global implications cannot be overstated. As the world's first comprehensive AI regulation with extraterritorial reach, the EU AI Act - supported by this Code of Practice - is likely to influence AI governance frameworks worldwide, much as GDPR shaped global data protection standards.

Conclusion

The publication of the General-Purpose AI Code of Practice represents a watershed moment in AI governance. While industry resistance continues and implementation challenges lie ahead, the EU has demonstrated its commitment to establishing comprehensive AI oversight.

For companies operating in the AI space, the message is clear: the era of self-regulation is ending. Those who adapt quickly to this new regulatory landscape will be best positioned to thrive in an environment where safety, transparency, and accountability are no longer optional but legally mandated.

The success or failure of this regulatory experiment will be closely watched globally, potentially determining whether other jurisdictions adopt similar approaches or chart different courses in governing artificial intelligence. With just weeks until the August 2 deadline, the AI industry's response to Europe's regulatory challenge is about to be tested.
