Friday, February 27, 2026

Anthropic rejects Pentagon ultimatum over AI use for defence

Anthropic CEO Dario Amodei officially rejected a final ultimatum from the Pentagon, stating that the company “cannot in good conscience” remove safety guardrails that prevent its AI, Claude, from being used for lethal autonomous weapons or mass domestic surveillance.

The rejection comes just hours before a Friday, February 27, 5:01 PM ET deadline set by Defense Secretary Pete Hegseth. The standoff marks one of the most significant public confrontations between a major AI lab and the U.S. government over the ethical boundaries of military technology.


The Two “Red Lines”

In a formal statement, Anthropic outlined two specific use cases it refuses to authorize, even under the threat of federal compulsion:

  1. Fully Autonomous Weapons: Anthropic argues that current frontier AI systems (like Claude) are not reliable enough to make life-and-death targeting decisions without human intervention. Amodei warned that using them in this way puts both American warfighters and civilians at risk.
  2. Mass Domestic Surveillance: Anthropic maintains that using AI to conduct automated, large-scale surveillance of Americans is incompatible with democratic values. While the Pentagon claims such actions would be illegal anyway, it has refused to explicitly prohibit them in the contract.

The Pentagon’s Escalation & Threats

Defense Secretary Pete Hegseth and the “Department of War” (a term recently revived in 2026 communications) have signaled they will not let a private firm dictate military operational policy. They have threatened three specific actions if Anthropic does not relent:

  • The Defense Production Act (DPA): The government has threatened to invoke this Cold War-era law to legally compel Anthropic to provide unrestricted access, treating the AI as a critical national resource.
  • Supply Chain Risk Label: The Pentagon may designate Anthropic a “supply chain risk”, a label typically reserved for adversaries like China. This would effectively blackball Anthropic from working with any other U.S. defense contractors (like Boeing or Lockheed Martin).
  • Contract Termination: The military is prepared to cancel Anthropic’s $200 million contract and shift its classified workloads to competitors like xAI (Elon Musk) or OpenAI, who have reportedly already agreed to “all lawful use” terms.

A “Contradictory” Standoff

Amodei pointed out the irony of the government’s position, noting that the Pentagon is simultaneously calling Anthropic a “security risk” while trying to use the DPA to declare its technology “essential to national security.”

“We believe deeply in the existential importance of using AI to defend the United States… But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.” – Dario Amodei, Anthropic CEO


Impact on Military Operations

Claude is currently the only AI model integrated into the Pentagon’s most sensitive, classified networks. If the contract is terminated on Friday evening:

  • Transition Period: Anthropic has promised a “smooth transition” to another provider to avoid disrupting active missions.
  • The “Maduro” Catalyst: The rift reportedly deepened after Anthropic questioned how its technology was used in the January 2026 raid to capture Venezuelan leader Nicolás Maduro, leading to accusations that the company was trying to “micromanage” active combat operations.
