Anthropic recently issued a DMCA takedown notice against a developer who attempted to reverse-engineer its AI-powered coding assistant, Claude Code.
The move has sparked discussion within the developer community, drawing comparisons to OpenAI’s approach with its own coding tool, Codex CLI.
Both Claude Code and Codex CLI belong to the emerging class of “agentic” coding assistants—tools that help developers write, modify, and understand code using conversational or command-line interfaces.
While both rely on large-scale AI models, the development philosophies behind them diverge significantly.
The core of the dispute lies in licensing and transparency. Codex CLI is distributed under the permissive Apache 2.0 open-source license, which allows for modification, distribution, and integration with other models—including those developed by competitors.
In contrast, Claude Code is distributed under a commercial license that restricts redistribution and modification without prior approval.
Anthropic has also obfuscated Claude Code’s source code, limiting visibility into its inner workings. When a developer de-obfuscated and uploaded a version to GitHub, Anthropic responded with a DMCA complaint requesting its removal.
This led to criticism from some developers and open-source advocates, who expressed concerns over what they saw as a closed approach to AI tool development.
The incident was further amplified by its timing. In the days following the release of Codex CLI, OpenAI merged numerous community-submitted pull requests into the codebase—an unusual step for a company typically associated with closed-source projects.
This led to a perception that OpenAI was adopting a more collaborative stance on this specific product. CEO Sam Altman previously acknowledged that the company had been “on the wrong side of history” regarding open-source AI development.
Anthropic has not publicly commented on the takedown request. Some observers have suggested that tighter control may reflect the tool’s beta status, and that obfuscation could be intended to prevent security risks or protect intellectual property during early development.
Technical Issues and Community Impact
Claude Code’s early release has also encountered technical setbacks. A bug in its auto-update functionality shipped commands that, in certain cases, caused system instability or even rendered devices unbootable—particularly when the tool had been installed with superuser (root) privileges, which allowed the faulty commands to modify permissions on critical system files.
Some users had to rely on “rescue instances” to repair file permission issues triggered by the update.
Anthropic addressed the issue by removing the affected commands and providing a troubleshooting guide. However, the original guide link contained a typo, which added to user frustration.
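The failure mode here is a classic risk of running updaters with elevated privileges: a wrong path handed to a permission-changing command damages system-owned files rather than just the tool's own install directory. A minimal defensive sketch—hypothetical, not Anthropic's actual fix—is for the updater to refuse permission changes when running as root:

```python
import os

def safe_to_adjust_permissions() -> bool:
    """Hypothetical guard for a POSIX auto-updater: only rewrite file
    modes or ownership when running unprivileged, so a bad path argument
    can at worst touch the current user's files, not the system's.

    os.geteuid() is POSIX-only; a Windows updater would need a
    different privilege check."""
    return os.geteuid() != 0  # effective UID 0 means root

if safe_to_adjust_permissions():
    print("unprivileged: safe to fix files in the user's own install")
else:
    print("running as root: skipping permission fixes")
```

A guard like this degrades gracefully: the update still installs, but the risky permission-repair step is skipped whenever a mistake would be system-wide rather than user-local.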
Accessibility and Cost Concerns
The operational cost of using Claude Code has been a point of discussion. The Claude 3.7 Sonnet model is priced at $3 per million input tokens and $15 per million output tokens.
Users have reported daily usage costs ranging from roughly $28 to over $100, which for some tasks approaches the cost of contracting a human developer.
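The arithmetic behind those figures is straightforward. A minimal sketch at the published per-token rates—the token counts below are illustrative assumptions, not reported usage:

```python
# Back-of-the-envelope cost estimate at Claude 3.7 Sonnet's published
# API rates: $3 per million input tokens, $15 per million output tokens.
INPUT_RATE = 3.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 15.00 / 1_000_000  # dollars per output token

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Total API cost in dollars for one session's token usage."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a heavy day of agentic coding might stream several million
# tokens of repository context in and generate a million tokens out
# (hypothetical figures chosen for illustration).
print(round(session_cost(6_000_000, 1_000_000), 2))  # 33.0
```

Because agentic tools repeatedly re-send large chunks of repository context as input, input tokens tend to dominate the bill even at the lower per-token rate, which is how daily totals climb into the reported range.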
Internal Use and Developer Feedback
Despite these challenges, Anthropic’s internal teams have used Claude Code extensively and have reported productivity gains.
According to Chief Product Officer Mike Krieger, internal testing led to the decision to make the tool publicly available. Some developers have praised its capabilities, noting that Claude Code has been responsible for generating a significant portion of their code in practice.
Regulation and Ethics
The technical issues associated with Claude Code have prompted broader conversations around the need for regulatory oversight of AI development tools.
Some experts argue that failures such as these highlight the importance of establishing safety standards and accountability frameworks.
Additionally, ethical and legal concerns are emerging as AI-generated code becomes more prevalent.
Questions around intellectual property rights, liability for software defects, and the preservation of core developer skills are becoming more prominent as these tools evolve.
A Tale of Two Approaches
The contrast between Claude Code and Codex CLI illustrates how developer trust and community engagement can be influenced by transparency, licensing choices, and responsiveness to feedback.
OpenAI’s decision to release Codex CLI as open source has been seen by some as a public-relations win, especially when set against Anthropic’s more tightly controlled approach.