
Microsoft urges court to pause Pentagon ban on Anthropic AI


Microsoft filed an amicus brief in a San Francisco federal court, urging a judge to issue a temporary restraining order (TRO) to halt the Pentagon’s “supply chain risk” designation against the AI startup Anthropic.

The legal intervention marks a major escalation in the clash between the tech industry and the Trump administration over the ethical boundaries of military AI.


The “Ethical Red Lines” Conflict

The dispute began when Anthropic refused to remove specific safety guardrails from its Claude models for military use.

  • Anthropic’s Stance: The company maintains “red lines” prohibiting its AI from being used for fully autonomous lethal weapons or mass domestic surveillance of American citizens.
  • The Pentagon’s Demand: Defense Secretary Pete Hegseth insisted the military must have “unfettered access” to use the technology for any lawful purpose, arguing that private companies should not “insert themselves into the chain of command.”
  • The Retaliation: After negotiations collapsed, the Pentagon labeled Anthropic a “supply chain risk”—a designation typically reserved for foreign adversaries like Huawei—effectively blacklisting it from federal work and requiring contractors to purge Anthropic’s tech from their systems.

Why Microsoft is Intervening

Microsoft, which has committed up to $5 billion in investment to Anthropic, argued that the “sudden and unprecedented” ban creates chaos for the entire defense ecosystem.

  • Operational Risk: Microsoft warned the court that the ban could “hamper U.S. warfighters” at a critical time by disrupting systems where Claude is already embedded.
  • Contractor Chaos: Unlike the Pentagon, which has a six-month phase-out period, contractors were given no transition window, forcing them to “immediately alter” product configurations.
  • Ethical Alignment: Microsoft explicitly sided with Anthropic’s red lines, stating that AI should not be used to “start a war without human control.”
  • Economic Fallout: The company argued that using security designations to resolve contract disputes sets a “dangerous precedent” for American innovators.

Broader Industry Support

Microsoft is not alone in its defense of the Claude-maker.

  • AI Researchers: A group of 37 top researchers from OpenAI and Google (including Google’s Jeff Dean) filed a separate brief supporting Anthropic, calling the government’s move “arbitrary and capricious.”
  • Tech Coalitions: Organizations like the Electronic Frontier Foundation (EFF) and the Cato Institute have also filed support, citing First Amendment and rule-of-law concerns.
  • The OpenAI Pivot: Within hours of the Anthropic ban, OpenAI signed a new deal to deploy its models on the Pentagon’s classified networks, though CEO Sam Altman has publicly stated he also opposes the “supply chain risk” label being applied to a U.S. firm.

Current Status of the Case

  • The Judge: U.S. District Judge Rita Lin is currently presiding over the case in San Francisco.
  • The Hearing: During a hearing on March 11, Anthropic’s attorneys revealed that over 100 enterprise customers have already expressed concerns about their contracts due to the government’s “blacklisting.”
  • Next Steps: The court is expected to rule on the request for a Temporary Restraining Order within the coming days. If granted, it would freeze the ban and allow Anthropic to continue its work while the full lawsuit proceeds.

