Meta has officially declined to endorse the European Union’s newly introduced Code of Practice for general-purpose AI (GPAI) models, a voluntary framework designed to guide industry compliance ahead of the legally binding AI Act, whose GPAI provisions take effect on August 2, 2025.
The decision adds further tension to the evolving relationship between major tech firms and European regulators over the governance of advanced AI systems.
Meta Cites Legal Ambiguity and Overreach
In a public statement, Meta’s Chief Global Affairs Officer, Joel Kaplan, expressed concern over the framework’s scope, describing it as introducing “legal uncertainties” that extend beyond the AI Act itself.
According to Kaplan, the non-binding code includes provisions that could “throttle the development and deployment of frontier AI models” and disadvantage European startups in a competitive global market.
He emphasized that while Meta is committed to meeting its obligations under the AI Act, it views the Code of Practice as imposing additional expectations that could stifle innovation.
“This code introduces a number of legal uncertainties for model developers,” Kaplan wrote, comparing the EU’s approach unfavorably to more flexible regulatory frameworks elsewhere, such as the United Kingdom’s AI policy strategy.
Understanding the EU’s Code of Practice
Introduced in July 2025, the EU’s Code of Practice for GPAI aims to foster early collaboration between AI developers and regulators. Though voluntary, the code outlines measures such as:
- Documentation of AI models
- Transparent disclosure of training data
- Respect for takedown requests from rights holders
- Avoidance of training on pirated or non-consensual content
It is intended as a stepping stone toward full compliance with the AI Act, which introduces a risk-based classification system for AI use cases ranging from “unacceptable risk” to “minimal risk.”
High-risk applications—such as biometric identification or hiring tools—will face strict requirements related to transparency, accountability, and safety.
Regulatory Guidance Ahead of Enforcement
To support companies preparing for compliance, the European Commission has issued updated guidelines for providers of GPAI models considered to carry “systemic risk,” a category that includes models developed by Meta, OpenAI, Google, Mistral, and Anthropic. These guidelines emphasize:
- Regular risk assessments
- Adversarial testing
- Transparent documentation of datasets and model behaviors
- Incident reporting and cybersecurity measures
- Adherence to copyright and data protection laws
Although participation in the Code of Practice remains optional, EU officials have indicated that signatories could benefit from regulatory leniency and reduced compliance burdens. Conversely, companies opting out—such as Meta—may be subject to increased regulatory scrutiny.
Industry-Wide Pushback
Meta’s decision is not isolated. Over 40 companies—including Bosch, SAP, and Airbus—have called for delays in implementing the AI Act, citing concerns around operational complexity and the need for clearer regulatory guidance.
These organizations argue that overly rigid regulations could hamper the EU’s global competitiveness in AI.
Comparing International Approaches
Meta and other tech leaders have drawn comparisons between the EU’s framework and alternative regulatory strategies.
For instance, the U.K. has adopted a more innovation-focused stance, investing £2 billion in AI development while emphasizing flexibility over early enforcement. Kaplan suggested such approaches may provide a more sustainable model for fostering responsible innovation.