Anthropic has suspended OpenAI’s access to its Claude family of AI models, citing violations of its terms of service. The move comes amid increased scrutiny and competition within the generative AI space, particularly as OpenAI prepares to launch its next flagship model, GPT-5.
Highlights
- Access Suspended: Anthropic has revoked OpenAI’s access to Claude models over alleged breaches of its terms of service.
- Benchmarking Dispute: OpenAI reportedly used Claude for internal model comparisons, including coding tasks—raising red flags at Anthropic.
- Policy Violation: Anthropic’s terms prohibit use of its models for training or aiding competing AI products, including reverse-engineering or using outputs to build rivals.
- OpenAI Responds: OpenAI defended its actions as “industry standard” benchmarking and pointed out that its APIs remain available to Anthropic.
- Partial Access Maintained: OpenAI retains limited rights to use Claude for safety evaluations and neutral benchmarks under restricted conditions.
- Windsurf Parallel: A similar access ban was imposed on Windsurf, a startup rumored to have OpenAI ties, suggesting a pattern of defensive enforcement by Anthropic.
- Industry Shift: The move reflects a broader decline in AI “coopetition” as companies now prioritize exclusivity and protection over open collaboration.
- Startup Risks: Smaller AI startups relying on third-party APIs may face increasing instability if platform access becomes a competitive weapon.
- GPT-5 Timing: The restriction coincides with GPT-5’s final development phase, potentially preventing Claude’s influence on OpenAI’s next major release.
Concerns Over Claude Usage Prompted the Action
According to Wired, OpenAI was reportedly using Anthropic’s Claude models for internal benchmarking, comparing capabilities such as code generation, writing quality, and safety.
While benchmarking is common practice in AI development, Anthropic viewed these actions as a potential breach of its commercial terms, which prohibit customers from using Claude to build or train competing products.
In a statement to TechCrunch, Anthropic claimed that OpenAI staff had used Claude’s coding tools during this benchmarking effort, raising concerns about the use of Claude outputs to enhance a rival model.
Despite the revocation, Anthropic stated that OpenAI will retain limited access for specific uses, namely safety evaluations and baseline benchmarking, reflecting continued alignment between the two companies on AI safety goals.
OpenAI Responds, Citing “Industry Standard” Practices
OpenAI expressed disappointment with the decision, stating that its use of Claude was consistent with common benchmarking norms across the industry.
A company spokesperson also noted that OpenAI’s own APIs remain fully accessible to Anthropic, hinting at a perceived imbalance in openness and cooperation between the two firms.
Anthropic’s Legal Basis
Anthropic’s Commercial Terms of Service explicitly prohibit using Claude for the development of competing AI products.
This includes restrictions on reverse-engineering, resale, and the use of Claude-generated content to train other models. These clauses are designed to ensure that Claude is used for assistance and integration—not competitive research or replication.
Benchmarking vs. Breach
- Anthropic’s View – The company claims that OpenAI accessed Claude Code through internal tools—not the standard chat interface—and used it to evaluate or enhance GPT-5, crossing the line from neutral benchmarking into strategic advantage-seeking.
- OpenAI’s View – OpenAI maintains that this type of usage aligns with industry norms and is part of routine model evaluation. The company disputes any suggestion that the actions violated platform terms.
Windsurf Precedent
Anthropic’s action mirrors a similar move made weeks earlier, when it revoked access for Windsurf—a startup rumored to be a future OpenAI acquisition.
That decision was based on similar concerns: that third-party access might indirectly support a competing product. Anthropic’s Chief Science Officer Jared Kaplan remarked, “It would be odd for us to be selling Claude to OpenAI,” reinforcing the company’s protective posture.
This pattern suggests a broader shift in Anthropic’s strategy—restricting access to its models from organizations with real or perceived ties to competitors.
Coopetition Under Strain
The suspension highlights a broader trend across the AI industry: a decline in “coopetition,” where companies collaborate even as competitors, and a rise in more protective, closed strategies.
By pulling back API access from perceived rivals, companies are increasingly drawing hard lines around proprietary technology.
This shift raises concerns for smaller startups that rely on third-party AI platforms. As larger firms become more protective of their models and data, startups may face increased risk of service restrictions or abrupt API changes, particularly if their products gain traction in similar domains.
GPT-5 Launch and Timing Dynamics
The timing of Anthropic’s move is notable—it coincides with the final development stages of GPT-5. By restricting Claude access now, Anthropic minimizes the possibility that its outputs could influence benchmarking or model design at a crucial time for OpenAI.
While this may be framed as policy enforcement, it also carries clear competitive implications.