
Anthropic blocks OpenClaw founder from using Claude


Tensions between Anthropic and the open-source community reached a boiling point this weekend. Anthropic temporarily banned Peter Steinberger, the founder of the popular open-source agent framework OpenClaw, from accessing Claude on April 10, 2026.

While his account was reinstated within hours, the move has ignited a fierce debate over “platform lock-in” and the sustainability of third-party AI agents.


1. The Ban and Reinstatement

Steinberger, who was recently hired by OpenAI to lead personal AI strategy but continues to run the OpenClaw Foundation, received an automated notice citing “suspicious signals” and a violation of usage policies.

  • The Trigger: Steinberger said he had been attempting to work around a bug in a specific Claude feature (the -p fallback) to keep OpenClaw compatible with Anthropic’s latest updates.
  • The “Vibe” Shift: The suspension occurred just days after Anthropic officially blocked Claude Pro/Max subscriptions from being used with third-party tools like OpenClaw. Users must now pay for extra “usage bundles” or use the more expensive API to power external agents.
  • The Reversal: Following a public outcry on X (formerly Twitter), Anthropic engineer Boris Cherny assisted in restoring the account, though the technical blocks on OpenClaw’s subscription-based access remain in place.

2. Why Anthropic is “Locking Out” OpenClaw

Anthropic’s shift from “open partner” to “controlled ecosystem” is driven by two major factors:

  • Infrastructure Strain: Agents like OpenClaw generate 30x to 50x more compute load than a standard chat user. Anthropic argued that $20/month subscriptions were never designed to handle the “industrial-scale” usage patterns of autonomous agents.
  • The “Mythos” Leak: A recent internal source code leak of Claude Code and the upcoming Mythos model has made Anthropic hyper-vigilant. They are moving toward “Project Glasswing,” a more restricted deployment model where advanced reasoning is gated behind enterprise audits.
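To put the infrastructure-strain argument in perspective, here is a back-of-the-envelope sketch. The 30x–50x multiplier comes from the article; every other figure (tokens per chat user, serving cost per million tokens) is a hypothetical assumption, not an Anthropic number.

```python
# Back-of-the-envelope sketch of the "infrastructure strain" claim.
# Only the 30x-50x multiplier is from the article; all other figures
# are hypothetical illustrations, not Anthropic's actual economics.

CHAT_TOKENS_PER_MONTH = 2_000_000   # assumed volume for a typical chat user
AGENT_MULTIPLIERS = (30, 50)        # compute-load range cited in the article
COST_PER_MILLION_TOKENS = 3.00      # assumed blended serving cost, USD
SUBSCRIPTION_PRICE = 20.00          # the $20/month plan mentioned above

def monthly_serving_cost(tokens: int) -> float:
    """Provider-side cost of serving a given monthly token volume."""
    return tokens / 1_000_000 * COST_PER_MILLION_TOKENS

for mult in AGENT_MULTIPLIERS:
    agent_cost = monthly_serving_cost(CHAT_TOKENS_PER_MONTH * mult)
    print(f"{mult}x agent: ~${agent_cost:.0f}/month serving cost "
          f"vs ${SUBSCRIPTION_PRICE:.0f} subscription revenue")
```

Even under these rough assumptions, an agent at the low end of the cited range costs an order of magnitude more to serve than the subscription brings in, which is the economic case Anthropic is making.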

3. The AMD Fallout: “Claude Cannot Be Trusted”

The OpenClaw ban coincides with a damaging report from Stella Laurenzo, AMD’s Director of AI, who documented a massive regression in Claude’s engineering capabilities.

| Metric | Before March 8 | After March 8 (Beta/Redacted) |
| --- | --- | --- |
| Code Reads per Edit | 6.6 Reads | 2.0 Reads |
| Stop-Hook Violations | 0 per day | ~10 per day |
| Action Pattern | Research-First | Edit-First (Cheapest Action) |
| AMD Team Status | Fully Integrated | Switched to Competitor |

Laurenzo argued that Anthropic’s decision to “redact” thinking (hiding the reasoning block) has led to a “lobotomized” model that takes shortcuts, avoids complex tasks, and “phones it in” during peak hours.


4. Market Reaction: The “Great Rotation”

The combination of the OpenClaw ban and AMD’s critique has triggered a migration of power users:

  • Alternative Providers: Many developers are moving to MiniMax M2.7 or OpenAI’s new GPT-5.4 Pro, which offers a $100/month “vibe coding” tier specifically for these high-usage agentic workflows.
  • Open Source Surge: AMD has reportedly pivoted toward OpenClaw paired with locally-run models, citing a need to “own the reasoning” rather than relying on a black-box API that can be throttled without notice.

5. What This Means for You

For readers tracking Indian market regulations and TCS’s AI revenue, this is a clear signal that the era of free or cheap AI for professional engineering is ending.

  • Tiered Intelligence: Expect “Deep Reasoning” to become a premium commodity. If you want Claude to think like a senior engineer, you will likely have to pay for a “Max Thinking” tier or set your /effort to high.
  • Agent Compliance: If you use third-party tools for financial modeling or research, be prepared for higher costs as providers move away from flat-rate subscriptions toward token-based billing.
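The shift from flat-rate subscriptions to token-based billing described above can be sketched numerically. All prices and token volumes below are hypothetical assumptions for illustration, not any provider's actual rates.

```python
# Sketch: flat-rate subscription vs token-based billing for an agentic
# workload. All prices and volumes are hypothetical assumptions.

FLAT_RATE = 20.00             # $/month, legacy all-you-can-use subscription
PER_MILLION_INPUT = 3.00      # assumed API input price, $/M tokens
PER_MILLION_OUTPUT = 15.00    # assumed API output price, $/M tokens

def token_billing(input_m: float, output_m: float) -> float:
    """Monthly cost under token-based billing (token counts in millions)."""
    return input_m * PER_MILLION_INPUT + output_m * PER_MILLION_OUTPUT

# A hypothetical research/modeling agent: 40M input, 5M output tokens/month.
api_cost = token_billing(40, 5)
print(f"Token-based: ${api_cost:.2f}/month vs flat-rate ${FLAT_RATE:.2f}")
```

Under these assumed rates, the same workload that cost $20/month on a flat plan would run roughly ten times that on token-based billing, which is why budgeting for agent usage now matters.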
