President Trump officially ordered all U.S. federal agencies to “immediately cease” the use of Anthropic’s technology, an unprecedented escalation in the clash between the administration and the AI industry over national security and safety guardrails.
The directive, announced via Truth Social, follows a high-stakes standoff between the Pentagon and Anthropic CEO Dario Amodei over the company’s refusal to remove ethical restrictions on its AI model, Claude.
The Breaking Point: The 5:01 PM Ultimatum
The ban was triggered by the expiration of a deadline set by Defense Secretary Pete Hegseth. The Pentagon had demanded that Anthropic provide “unrestricted access” for all lawful military purposes, specifically requiring the removal of two “red line” safeguards:
- Mass Domestic Surveillance: Anthropic refused to allow Claude to be used for large-scale, automated monitoring of American citizens.
- Lethal Autonomous Weapons: The company maintained that current AI is not reliable enough to select and engage targets without human intervention, and refused to enable its use in fully autonomous “kill chains.”
In a final response on Thursday, Amodei stated the company “cannot in good conscience accede” to the request, arguing that these specific uses would “endanger America’s warfighters and civilians.”
The “Supply Chain Risk” Designation
Shortly after the President’s order, Secretary Hegseth took the extraordinary step of designating Anthropic a “Supply Chain Risk to National Security” under 10 U.S.C. § 3252.
- Impact on Contractors: This designation effectively blacklists Anthropic from the entire defense industrial base. Any contractor, supplier, or partner doing business with the U.S. military is now prohibited from conducting any “commercial activity” with Anthropic.
- Precedent: This label is historically reserved for foreign adversaries (like Huawei or ZTE) and has never before been applied to a major American tech company.
- Phase-Out Period: While the ban is “immediate” for new business, a six-month transition window has been granted to agencies (including the Pentagon and FEMA) that already have Claude deeply embedded in their classified and unclassified networks.
The Fallout: OpenAI and Industry Reactions
The move has sent shockwaves through Silicon Valley and triggered a rare moment of industry-wide solidarity:
| Entity | Reaction / Status |
| --- | --- |
| OpenAI | Just hours after the ban, CEO Sam Altman announced a new deal to deploy OpenAI models in the Pentagon’s classified networks. Crucially, Altman claimed the deal includes the same “red lines” Anthropic fought for, suggesting the government may be more willing to negotiate with OpenAI. |
| Google & Amazon | Hundreds of employees at both firms signed an open letter titled “We Will Not Be Divided,” urging their companies to stand with Anthropic and refuse similar military demands. |
| xAI (Elon Musk) | Reportedly moved to fill the gap left by Anthropic, signing a classified contract with the Pentagon that includes no safety-related usage restrictions. |
| Anthropic | Vowed to challenge the “legally unsound” supply chain designation in court, calling it an act of “intimidation” and “politicization.” |
Why It Matters for You
If you are a commercial or individual user of Claude, your access remains unaffected. The ban currently only applies to U.S. government agencies and their direct military contractors. However, the designation of an American firm as a “supply chain risk” raises significant questions about the future of AI regulation and the “sovereign control” of technology.