US Govt threatens to cut Anthropic off from agency’s supply chain

The U.S. Department of Defense (DoD) has issued a direct ultimatum to Anthropic, threatening to designate the AI company a “supply chain risk” and cut it off from all federal contracts if it does not remove specific ethical guardrails from its AI models.

The standoff reached a breaking point during a “high-stakes” meeting at the Pentagon between Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei.


The Ultimatum: “Get On Board or Get Out”

Secretary Hegseth reportedly gave Anthropic a deadline of Friday, February 27, 2026, at 5:00 PM EST to comply with a demand for “all lawful use” of its AI technology.

  • The Threat: If Anthropic refuses, the Pentagon may label it a “supply chain risk.” This is a “scarlet letter” typically reserved for foreign adversaries (like Huawei).
  • The Fallout: Such a designation would not only cancel Anthropic’s current $200 million DoD contract but would legally bar any other government contractor (including giants like Microsoft, Google, or Palantir) from using Anthropic’s technology in their own federal workflows.
  • The “Nuclear Option”: Officials hinted at invoking the Defense Production Act (DPA) to legally compel Anthropic to grant the military unrestricted access to its models, regardless of the company’s internal policies.

The “Two Red Lines” Dispute

The rift centers on Anthropic’s insistence on two non-negotiable ethical boundaries that Secretary Hegseth has publicly characterized as “woke AI.”

  1. Autonomous Weapons: Anthropic refuses to allow its AI (Claude) to be used for “autonomous kinetic operations” where the AI makes final, lethal targeting decisions without a human “in the loop.”
  2. Domestic Surveillance: The company objects to its tools being used for the mass surveillance of U.S. citizens, citing privacy laws and the potential for abuse.

The Pentagon’s Rebuttal: Defense officials argue that the military only issues “lawful orders” and that it is the DoD’s responsibility—not a software vendor’s—to determine what constitutes a legal military operation.


Context: The “Maduro Raid” Leak

The tension escalated following reports that Claude was used by the U.S. military to help plan the January 2026 operation that resulted in the capture of former Venezuelan leader Nicolas Maduro.

  • Anthropic reportedly reached out to its partner, Palantir, to raise concerns about whether the mission breached its usage policies.
  • This inquiry allegedly “alarmed” the Pentagon, leading officials to question if Anthropic could be trusted to support the military during a future hot-war crisis.

The Competitive Shift: xAI and Grok

As the relationship with Anthropic sours, the Pentagon is moving quickly to replace it.

  • xAI (Grok): On Feb 24, the DoD announced an agreement with Elon Musk’s xAI to deploy its “Grok” model on classified military networks.
  • The Standard: Unlike Anthropic, xAI, Google, and OpenAI have reportedly agreed to the Pentagon’s “all lawful purposes” standard, removing their default safety guardrails for military-specific deployments.

“Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.” — Sean Parnell, Chief Pentagon Spokesman.
