Meta's Rejection of EU AI Code of Practice: Implications for Global AI Compliance Frameworks


Executive Summary

In a significant development for AI governance, Meta Platforms announced it will not sign the European Union's artificial intelligence code of practice, calling it an overreach that will stunt growth. This decision, made public by Meta's Chief Global Affairs Officer Joel Kaplan, highlights the growing tension between regulatory frameworks and industry compliance approaches in the rapidly evolving AI landscape.

The voluntary code of practice, drawn up by 13 independent experts, aims to provide legal certainty to signatories, who must publish summaries of the content used to train their general-purpose AI models and put in place a policy to comply with EU copyright law. Meta's refusal contrasts sharply with other major tech companies: Microsoft has indicated it is likely to sign, creating a divergent compliance landscape.

The EU AI Act and Code of Practice Framework

The EU AI Act, which entered into force in August 2024, represents the world's first comprehensive AI regulation. The act classifies AI applications into four risk categories — unacceptable, high, limited, and minimal — and imposes obligations accordingly. Any AI company whose services are used by EU residents must comply, and fines can reach up to 7% of global annual revenue.
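The four-tier structure can be sketched as a simple lookup. This is an illustrative summary only: the tier names come from the Act, but the obligation descriptions below are our own paraphrases, not legal text.

```python
# Hypothetical summary of the AI Act's four risk tiers; the obligation
# strings are illustrative paraphrases, not quotations from the Act.
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring)",
    "high": "conformity assessment, risk management, human oversight",
    "limited": "transparency duties (e.g. disclosing AI interaction)",
    "minimal": "no new obligations; voluntary codes encouraged",
}

def obligations_for(tier: str) -> str:
    """Look up the headline obligation for a given risk tier."""
    return RISK_TIERS[tier.lower()]

print(obligations_for("high"))
# conformity assessment, risk management, human oversight
```

The point of the tiering is that compliance effort scales with risk: a minimal-risk system carries no new duties, while an unacceptable-risk system cannot be deployed at all.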

The enforcement timeline is particularly crucial for compliance teams:

  • August 2, 2025: Obligations for providers of general-purpose AI models take effect; models placed on the market before this date benefit from an extended transition period
  • August 2, 2026: The Commission begins enforcing full compliance for providers of general-purpose AI models, with the power to impose fines
  • August 2, 2027: Models placed on the market before August 2, 2025 must be in full compliance with the AI Act's obligations
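The phased schedule above can be expressed as a small date helper. This is a hypothetical sketch for reasoning about deadlines, not an official tool; the three dates are taken directly from the timeline listed here.

```python
from datetime import date

# Key dates from the AI Act's phased GPAI enforcement schedule.
GPAI_OBLIGATIONS_APPLY = date(2025, 8, 2)   # obligations take effect
ENFORCEMENT_WITH_FINES = date(2026, 8, 2)   # Commission may impose fines
LEGACY_MODEL_DEADLINE = date(2027, 8, 2)    # pre-Aug-2025 models must comply

def compliance_deadline(placed_on_market: date) -> date:
    """Return when a general-purpose AI model must comply with the Act."""
    if placed_on_market < GPAI_OBLIGATIONS_APPLY:
        # Models already on the market get the extended transition period.
        return LEGACY_MODEL_DEADLINE
    # Models placed on the market on or after 2 Aug 2025 must comply
    # from the moment they are placed on the market.
    return placed_on_market

print(compliance_deadline(date(2025, 3, 1)))   # 2027-08-02
print(compliance_deadline(date(2026, 1, 15)))  # 2026-01-15
```

In practice, this is why the August 2, 2025 date matters most to compliance teams: it is the cut-off that determines whether a model gets the two-year transition window.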

Code of Practice Benefits and Requirements

AI model providers who voluntarily sign the code can demonstrate compliance with the AI Act by adhering to it, reducing their administrative burden and giving them greater legal certainty than demonstrating compliance through other means. Companies that refuse to sign, like Meta, may face heightened regulatory scrutiny.

Key requirements under the code include:

  • Publishing summaries of training data content
  • Implementing EU copyright law compliance policies
  • Addressing transparency, safety, and security issues
  • Meeting specific obligations for general-purpose AI models with systemic risks
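The four requirements above lend themselves to a simple internal checklist. The sketch below is hypothetical — the field names are our own, not terms from the code — and only mirrors the bullet list for tracking purposes.

```python
from dataclasses import dataclass

@dataclass
class CodeOfPracticeChecklist:
    """Minimal, hypothetical signatory checklist mirroring the code's
    key requirements; field names are illustrative, not official."""
    training_data_summary_published: bool = False
    copyright_policy_in_place: bool = False
    transparency_docs_complete: bool = False
    systemic_risk_measures: bool = False  # only for systemic-risk GPAI

    def ready_to_attest(self, systemic_risk: bool) -> bool:
        # All baseline items must be done; the systemic-risk measures
        # are required only for models classified as posing systemic risk.
        base = (self.training_data_summary_published
                and self.copyright_policy_in_place
                and self.transparency_docs_complete)
        return base and (self.systemic_risk_measures or not systemic_risk)

checklist = CodeOfPracticeChecklist(True, True, True, False)
print(checklist.ready_to_attest(systemic_risk=False))  # True
print(checklist.ready_to_attest(systemic_risk=True))   # False
```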

Meta's Position and Industry Implications

Rationale for Non-Compliance

Meta's global affairs chief Joel Kaplan stated that "Europe is heading down the wrong path on AI," characterizing the code as an overreach that will stunt growth. This position reflects broader industry concerns about regulatory burden and innovation constraints.

Compliance Risk Assessment

Organizations following Meta's approach face significant compliance risks:

  • Financial Penalties: Violations of the AI Act can draw fines of up to 7% of a company's global annual revenue; providers of general-purpose AI models face a lower cap of 3%
  • Increased Regulatory Scrutiny: Non-signatories may face more intensive oversight and compliance verification requirements
  • Market Access Challenges: Potential restrictions on AI service deployment within EU markets
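The revenue-percentage caps cited above translate into substantial exposure at big-tech scale. The sketch below applies just those percentage caps; it is illustrative only, since the Act also sets absolute minimum amounts and actual fines depend on the specific infringement.

```python
def max_fine_eur(annual_revenue_eur: float, gpai_provider: bool) -> float:
    """Upper bound on an AI Act fine under the percentage caps cited
    above: 3% for general-purpose AI model providers, 7% otherwise.
    Illustrative sketch only, not legal guidance."""
    rate = 0.03 if gpai_provider else 0.07
    return annual_revenue_eur * rate

# A company with EUR 100B in annual revenue faces up to EUR 7B in fines.
print(f"{max_fine_eur(100_000_000_000, gpai_provider=False):,.0f}")
```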

Comparative Analysis of Global AI Frameworks

NIST AI Risk Management Framework (AI RMF 1.0)

The United States National Institute of Standards and Technology has developed a comprehensive approach to AI risk management. Released in January 2023, the AI RMF follows the template of NIST's earlier information risk management and governance frameworks: the Cybersecurity Framework, released in 2014, and the Privacy Framework, released in 2020.

Key characteristics of NIST AI RMF:

  • Voluntary Framework: Similar to other NIST frameworks, providing guidance rather than mandatory requirements
  • Risk-Based Approach: Focuses on identifying, assessing, and managing AI risks throughout the lifecycle
  • Flexible Implementation: Allows organizations to tailor approaches based on their specific context and risk tolerance
  • Cross-Sector Applicability: Designed for use across various industries and organizational types
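The risk-based, lifecycle-oriented approach described above can be illustrated with a small risk register. The class and field names below are our own, hypothetical constructs in the spirit of identifying, assessing, and prioritizing AI risks; they are not part of the NIST framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    """One identified AI risk; scoring scheme is illustrative."""
    description: str
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    impact: int          # 1 (negligible) .. 5 (severe)
    mitigation: str = "" # filled in during the management step

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    risks: list[AIRisk] = field(default_factory=list)

    def identify(self, risk: AIRisk) -> None:
        self.risks.append(risk)

    def prioritized(self) -> list[AIRisk]:
        # Assess: surface the highest-scoring risks first, so mitigation
        # effort can be tailored to the organization's risk tolerance.
        return sorted(self.risks, key=lambda r: r.score, reverse=True)

reg = RiskRegister()
reg.identify(AIRisk("training-data leakage", likelihood=3, impact=5))
reg.identify(AIRisk("model output bias", likelihood=4, impact=3))
print([r.description for r in reg.prioritized()])
# ['training-data leakage', 'model output bias']
```

Because the framework is voluntary, an organization is free to swap in its own scoring scale or risk taxonomy, which is exactly the flexibility the bullet list above describes.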

ISO/IEC AI Standards Suite

The International Organization for Standardization has developed a comprehensive suite of AI-related standards:

ISO/IEC 42001:2023 - AI Management Systems

ISO 42001 establishes an AI Management System focusing on ethical AI, transparency, and trust, providing a structured approach to AI governance within organizations.

ISO/IEC 23053 - AI Framework for ML Technology

ISO/IEC 23053 establishes an AI and machine learning (ML) framework for describing a generic AI system using ML technology, offering standardized terminology and concepts.

ISO/IEC 23894:2023 - AI Risk Management Guidance

ISO/IEC 23894 provides guidance on AI-related risk management, complementing the management system approach with specific risk management practices.


Framework Comparison Analysis

Key points of comparison include:

  • Scope and Coverage: ISO/IEC standards provide guidelines across a broad range of AI-related topics, while the NIST AI RMF focuses more narrowly on risk management
  • Global Applicability: ISO/IEC standards are internationally recognized and widely adopted

| Framework | Jurisdiction | Mandatory/Voluntary | Focus Area | Key Advantages |
|---|---|---|---|---|
| EU AI Act | European Union | Mandatory | Risk-based regulation | Legal certainty, market access |
| NIST AI RMF | United States | Voluntary | Risk management | Flexibility, industry acceptance |
| ISO/IEC 42001 | International | Voluntary | Management systems | Global recognition, certification |
| ISO/IEC 23894 | International | Voluntary | Risk management | Standardized approach |

Strategic Compliance Recommendations

For Organizations Operating in Multiple Jurisdictions

  1. Adopt a Harmonized Approach: Implement frameworks that can satisfy multiple regulatory requirements simultaneously
  2. Prioritize Risk-Based Compliance: Focus on high-risk AI applications that face the most stringent requirements
  3. Establish Cross-Border Governance: Create governance structures that can adapt to varying regulatory requirements

For EU Market Participants

  1. Evaluate Code of Practice Participation: Consider the benefits of voluntary compliance versus independent compliance demonstration
  2. Implement Comprehensive Documentation: Ensure training data summaries and copyright compliance policies are robust
  3. Prepare for Enforcement: Develop systems to demonstrate compliance ahead of the August 2026 enforcement deadline

For Global AI Developers

  1. Monitor Regulatory Evolution: Stay informed about emerging regulations in key markets
  2. Invest in Compliance Infrastructure: Build systems that can scale across multiple regulatory frameworks
  3. Engage with Standard-Setting Bodies: Participate in the development of international standards

Future Outlook and Emerging Considerations

Regulatory Fragmentation Risks

Meta's decision highlights the risk of regulatory fragmentation, where different jurisdictions impose incompatible requirements. This could lead to:

  • Increased compliance costs for global AI developers
  • Potential market segmentation based on regulatory approaches
  • Innovation constraints due to conflicting requirements

Convergence Opportunities

Despite current divergences, opportunities for regulatory convergence exist:

  • International standards development through ISO/IEC
  • Bilateral cooperation agreements between regulators
  • Industry-led initiatives for common compliance approaches

Technology Evolution Impact

Rapid AI technology advancement continues to outpace regulatory development:

  • Emergence of new AI capabilities requiring updated frameworks
  • Need for adaptive regulatory approaches
  • Importance of future-proofing compliance strategies

Conclusion

Meta's rejection of the EU AI Code of Practice represents a critical juncture in AI governance, highlighting the tension between regulatory compliance and innovation concerns. Organizations must navigate an increasingly complex landscape of mandatory and voluntary frameworks while maintaining operational efficiency and competitive advantage.

The divergent approaches of major tech companies—with Microsoft likely to sign while Meta refuses—underscore the need for careful strategic consideration of compliance approaches. Success in this environment requires:

  • Comprehensive understanding of applicable frameworks
  • Strategic evaluation of compliance options
  • Robust risk management processes
  • Adaptive governance structures

As AI regulation continues to evolve globally, organizations that proactively address compliance requirements while maintaining innovation capacity will be best positioned for long-term success. The choice between voluntary code participation and independent compliance demonstration will become increasingly strategic, with implications extending beyond regulatory compliance to competitive positioning and market access.

The ongoing development of international standards through ISO/IEC and frameworks like NIST AI RMF provides valuable tools for organizations seeking to harmonize their compliance approaches across jurisdictions. However, the ultimate success of these efforts will depend on industry adoption and regulatory acceptance of common approaches to AI governance.


This analysis reflects the current state of AI compliance frameworks as of July 2025. Organizations should consult with legal and compliance professionals for specific guidance on their regulatory obligations.
