In a landmark victory for the AI industry, a federal judge in San Francisco has issued a preliminary injunction in favor of Anthropic, temporarily blocking the U.S. government’s attempt to blacklist the company.
U.S. District Judge Rita Lin ruled on Thursday that the Trump administration’s decision to label Anthropic a “supply chain risk” appeared to be an “arbitrary and capricious” act of retaliation. The ruling effectively halts a presidential directive that had ordered all federal agencies and defense contractors to immediately cease using Anthropic’s Claude models.
1. The Core of the Conflict: “Red Lines”
The legal battle stems from a breakdown in contract negotiations between Anthropic and the Pentagon (Department of Defense) over a $200 million AI initiative.
- Anthropic’s Stance: The company refused to remove “guardrails” that prevent Claude from being used for fully autonomous lethal weapons and mass domestic surveillance of U.S. citizens.
- The Government’s Reaction: Following the impasse, Secretary of Defense Pete Hegseth designated Anthropic as a “national security supply chain risk”—a label typically reserved for foreign adversaries like Huawei or ZTE.
- The Trump Directive: President Trump subsequently posted on Truth Social ordering an “IMMEDIATE CEASE” of all Anthropic technology across the federal government, later formalized as a six-month phase-out in favor of OpenAI.
2. The Judge’s Ruling: “Orwellian” Retaliation
In her scathing 43-page opinion, Judge Lin did not mince words regarding the government’s tactics.
“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur… for expressing disagreement with the government,” Lin wrote.
- First Amendment Victory: The court found that the government likely violated Anthropic’s First Amendment rights by weaponizing procurement power to punish the company for its public stance on AI safety.
- Due Process: The judge noted that Anthropic was denied its Fifth Amendment right to due process, as it was given no opportunity to contest the “risk” label before it was applied.
- Attempt to Cripple: During the hearing, Lin remarked that the government’s actions looked less like a standard vendor change and more like “attempted corporate murder” designed to destroy the firm’s commercial reputation.
3. What Happens Now?
The injunction provides Anthropic with a critical lifeline, though the legal war is far from over.
- The 7-Day Stay: The ruling does not take effect for seven days to allow the Department of Justice (DOJ) time to file an emergency appeal.
- Business as Usual: For now, Anthropic can continue to compete for government and corporate contracts. Private defense contractors (like Palantir or Lockheed Martin) are no longer legally barred from using Claude in their internal workflows.
- Ongoing D.C. Case: Anthropic has a second, parallel lawsuit pending in the D.C. Circuit Court of Appeals challenging a separate statutory provision used by the General Services Administration (GSA) to cancel its “OneGov” contract.
4. Industry Impact: A Precedent for AI Safety
The tech sector has rallied behind Anthropic, with Microsoft, the ACLU, and even researchers from OpenAI filing amicus briefs in support of the company.
| Party | Perspective |
| --- | --- |
| Anthropic | “Grateful to the court… our focus remains on working productively with the government for safe AI.” |
| The Pentagon | Argues that a private vendor cannot “insert itself into the chain of command” by limiting lawful use. |
| Tech Industry | Views the ruling as a vital shield against “contracting by tweet” and political retaliation. |
