On Sunday, March 1, 2026, explosive reports from The Wall Street Journal and other major outlets confirmed that the U.S. military used Anthropic's Claude AI to coordinate large-scale airstrikes in Iran. The strikes came just hours after President Trump had publicly ordered all federal agencies to "immediately cease" using the technology.
The result is a bizarre paradox: the Commander-in-Chief is branding the company "woke" and "radical left" on social media while his own battlefield commanders actively rely on its algorithms to execute high-stakes combat missions.
The Iran Strikes: Claude’s Role
According to leaked reports from U.S. Central Command (CENTCOM), Claude was not just a “chatbot” but a core component of the mission’s intelligence layer:
- Target Identification: Claude was used to filter massive amounts of satellite and signals intelligence to identify high-value targets in Tehran.
- Battle Simulations: The AI ran hundreds of “combat simulations” to predict Iranian air defense responses before the first missiles were launched.
- Intelligence Assessment: It provided real-time evaluations of the strike’s effectiveness, helping commanders decide whether to proceed with secondary waves.
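None of the leaked material describes the actual CENTCOM tooling, and a classified system would bear little resemblance to a public API call. But for readers wondering what "filtering intelligence with Claude" looks like at the simplest level, here is a purely illustrative sketch using Anthropic's public Python SDK. The model name, the prompt, and the triage logic are our own assumptions for demonstration, not anything drawn from the reporting:

```python
# Purely illustrative sketch of LLM-based report triage -- NOT the CENTCOM system.
# Uses Anthropic's public Python SDK; the model name and prompt are assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def triage_report(report_text: str) -> str:
    """Ask the model to rate a single intelligence summary's priority."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # placeholder public model name
        max_tokens=200,
        system=(
            "You are an analyst assistant. Rate the report's priority as "
            "HIGH, MEDIUM, or LOW and give a one-sentence rationale."
        ),
        messages=[{"role": "user", "content": report_text}],
    )
    return response.content[0].text

# Example: triage a single (fictional) open-source report.
print(triage_report("Unusual vehicle activity observed near a known depot."))
```

In practice, a pipeline of the scale described above would batch thousands of such calls and feed the graded outputs into analyst queues; the core pattern, structured prompt in, assessment out, is what the leaks appear to describe.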
Why the Military is Ignoring the “Ban”
The primary reason for the continued use is the “Six-Month Phase-Out” clause buried in the President’s directive.
- Deep Integration: Claude is currently the only frontier AI model fully embedded in the Pentagon’s “Maven Smart System” and classified networks.
- No Immediate Alternative: While a deal with OpenAI was signed on Friday, it will take months to migrate the military’s specific datasets and workflows from Anthropic’s architecture to OpenAI’s.
- Mission Criticality: Military leaders reportedly told the White House that “turning off” Claude mid-operation would blind commanders and put American pilots at extreme risk.
The “Maduro” Precedent
This isn’t the first time Claude has been the military’s “secret weapon.” It was recently revealed that Anthropic’s AI was the primary engine used to plan and execute the January 2026 operation to capture Venezuelan President Nicolás Maduro.
The current rift began with that mission: Anthropic reportedly raised concerns after learning how its AI was used in the raid, leading to the "red line" standoff over autonomous weapons.
A War of Words
The friction between the White House and Silicon Valley has reached an all-time high:
| Source | Statement / Position |
| --- | --- |
| President Trump | Called Anthropic "Leftwing nut jobs" and "woke" on Truth Social; accused the company of "betrayal" for refusing unrestricted military access. |
| Sec. Pete Hegseth | Officially designated Anthropic a "Supply Chain Risk," the first time the label (usually reserved for Chinese firms such as Huawei) has been applied to a U.S. company. |
| Anthropic (Dario Amodei) | Vowed to challenge the designation in court; insists that "frontier AI is not yet reliable enough" for fully autonomous killing. |
| OpenAI (Sam Altman) | Secured a deal by agreeing to "all lawful purposes," while reportedly negotiating his own, less restrictive, set of safety red lines. |
The Current Standoff
As of today, March 1, the U.S. military continues to use Claude under the six-month transition window, even as the administration prepares a legal "war" against the company. Anthropic has stated it will sue the government to overturn the "supply chain risk" designation, arguing it is a politically motivated act of intimidation.