Anthropic has released a research preview of Claude for Chrome, its new AI-powered browser agent. The feature is currently available to 1,000 Max-plan subscribers — who pay between $100 and $200 per month — with a waitlist open for additional users.
Highlights
- Limited Launch: Research preview available to 1,000 Max-plan subscribers ($100–$200/month), with a waitlist for wider access.
- In-Browser Integration: Claude opens in a Chrome side panel, tracking activity and performing actions with user permission.
- Browser Wars Heat Up: Anthropic’s move follows Perplexity’s Comet browser, OpenAI’s rumored browser project, and Google’s Gemini integrations in Chrome.
- Security First: Anthropic cut prompt-injection success rates from 23.6% to 11.2% during testing, though risks remain.
- User Safeguards: Claude can be restricted to certain sites, is blocked from risky categories, and requires explicit approval for sensitive actions.
- Learning from the Past: Builds on Anthropic’s earlier, less successful PC-controlling assistant (2024), showing progress toward more reliable agents.
- Privacy Concerns: Browser agents can access webpage content, forms, and behavior data — raising risks of exposure via trackers and adversarial attacks.
- Cautious Rollout: Unlike rivals, Anthropic is testing with a small group first to address trust, safety, and ethical issues before scaling.
Once installed as a Chrome extension, Claude opens in a side panel that tracks user activity within the browser. With permission, the agent can also take certain actions directly inside Chrome, assisting with navigation and task execution.
The Browser as AI’s Next Battleground
Browsers are rapidly becoming the next major frontier for AI integration:
- Perplexity has already launched its AI-first browser, Comet.
- OpenAI is reportedly developing its own AI-powered browser.
- Google has rolled out Gemini integrations within Chrome.
This competitive push arrives as Google faces a looming antitrust case that could even force a divestiture of Chrome. Meanwhile, Perplexity has made an unsolicited $34.5 billion offer for the browser, and OpenAI CEO Sam Altman has hinted at similar interest.
Security and Safety in Focus
The shift toward AI-driven browsers raises significant safety concerns.
- Last week, Brave’s security team reported a flaw in Perplexity’s Comet that allowed hidden website code to trigger prompt-injection attacks.
- Anthropic acknowledges similar risks, framing this preview as an opportunity to evaluate and improve defenses.
- The company says its interventions have already reduced the success rate of such attacks from 23.6% to 11.2%.
Safeguards and User Controls
Anthropic has built Claude for Chrome with several protective measures:
- Users can restrict Claude’s access to specific websites.
- The agent is blocked by default from accessing financial services, adult content, or piracy-related sites.
- For higher-risk actions — such as making purchases, publishing online, or sharing personal details — Claude requires explicit user approval.
From Desktop Agents to Browser Agents
This is not Anthropic’s first attempt at an agent with system-level control. In October 2024, the company launched a PC-controlling assistant that was widely seen as slow and unreliable. Since then, AI agents have become more capable.
Today’s browser-based tools — from Comet to the ChatGPT Agent — already handle simple tasks effectively, though they still face challenges with more complex workflows.
A Cautious Rollout
Unlike rivals that have opted for broader public launches, Anthropic is deliberately limiting access by starting with a small test group of Max subscribers. The approach reflects growing concerns about security, ethics, and user trust as AI agents gain deeper autonomy.
New Risks Emerging
Academic researchers and security experts are flagging broader risks with AI browser agents.
- Privacy: Assistants embedded in browsers can access webpage content, form inputs, and user behavior. This data could reveal sensitive demographic or personal details, and in some cases, interact with third-party trackers such as Google Analytics.
- Adversarial Attacks: Techniques like SUDO Detox2Tox have shown the ability to bypass AI safety filters, achieving a 41% success rate in testing. Such methods highlight the difficulty of ensuring reliable safeguards as these tools become more powerful.