Google has announced its intention to sign the European Union’s voluntary AI Code of Practice—an early framework aimed at guiding general-purpose AI developers toward compliance with the EU’s Artificial Intelligence Act as its obligations take effect.
Highlights
- Early Compliance Move: Google commits to the EU’s voluntary AI Code of Practice ahead of the August 2025 deadline for systemic-risk AI systems.
- Non-Binding But Strategic: The Code promotes transparency, safety, and accountability—serving as a preparatory step for the upcoming AI Act.
- Code Principles: Commitments include using ethically sourced training data, respecting content creators’ opt-outs, and maintaining robust documentation of AI systems.
- Split Among Tech Giants: Google, OpenAI, and Mistral support the Code; Meta declines, calling the approach too restrictive.
- Google’s Concerns: The company warns that regulatory missteps could delay model rollouts and expose trade secrets.
- EU’s Risk-Based AI Act: Unacceptable-risk AI applications are banned, while high-risk ones face strict compliance protocols.
The move signals Google’s support for responsible AI development, even as the company continues to voice reservations about the potential regulatory implications.
Early Compliance Ahead of Regulatory Deadline
The decision comes just ahead of the August 2, 2025, deadline when new obligations will begin for general-purpose AI systems deemed to carry “systemic risk.”
These requirements, part of the EU’s broader AI Act, will initially apply to major AI players including Google, OpenAI, Anthropic, and Meta.
While the AI Act gives companies two years to fully comply, early adopters of the Code may gain legal clarity and operational stability during this transitional period.
Voluntary Code
By signing the Code, Google agrees to adopt a set of non-binding principles that promote transparency, safety, and accountability in AI development, including:
- Maintaining thorough documentation of AI models
- Avoiding use of pirated or unauthorized content in training data
- Respecting content creators’ decisions to opt out of data collection
Although the Code itself is not legally enforceable, it is intended as a preparatory step toward full regulatory compliance under the AI Act.
Google Aligns, Meta Holds Back
In a recent blog post, Kent Walker, Google’s President of Global Affairs, stated that the final version of the Code had improved over earlier drafts.
He also expressed concern that some aspects of the AI Act could inhibit innovation, citing possible misalignment with existing EU copyright law, the risk of delayed model approvals, and potential exposure of trade secrets as challenges that need to be addressed.
Meta has taken a different position. The company has declined to sign the Code, characterizing the EU’s approach as excessive and warning it could restrict development of cutting-edge AI technologies in the region.
Meta’s stance underscores an emerging divide among major AI developers regarding the best path forward under tightening regulations.
Growing Participation
Google is not alone: OpenAI and French AI startup Mistral have also endorsed the Code. Microsoft is reportedly preparing to follow suit, leaving Meta among the few prominent firms resisting alignment at this stage.
This divergence highlights differing corporate strategies—some companies are opting for early cooperation to help shape final rules, while others view the voluntary Code as overly restrictive.
EU’s Risk-Based AI Regulation
The AI Act itself represents one of the world’s most comprehensive efforts to regulate artificial intelligence. It adopts a risk-based classification system:
- Applications posing “unacceptable risk,” such as social scoring or behavioral manipulation, are banned outright
- “High-risk” systems—like those used in biometric surveillance, education, or hiring—must meet strict safety and accountability requirements
Developers will be required to register their models, undergo independent assessments, and implement quality control protocols.
Regulatory Timing
The upcoming deadline and Google’s early alignment carry broader strategic weight. Companies that adopt the Code now may benefit from clearer implementation guidance and the opportunity to influence how EU regulators interpret and enforce AI rules.
In contrast, firms delaying adoption risk facing more complex or rigid compliance pathways later.
This regulatory assertiveness has drawn mixed reactions internationally. Some U.S. officials and tech leaders have expressed concern that stringent EU rules may act as trade barriers or limit innovation.