In a decisive pivot that marks a new era of military-tech tension, the U.S. Department of Defense (DoD)—recently renamed the Department of War by the Trump administration—has begun developing its own proprietary large language models (LLMs). The move, confirmed on March 18, 2026, follows a total breakdown in negotiations with Anthropic over the ethical “red lines” of AI in combat.
## The “Supply Chain Risk” Fallout
The conflict reached a breaking point when the Pentagon officially designated Anthropic as a “supply-chain risk to national security.” This rare label, typically reserved for foreign adversaries like Huawei, effectively blacklists the company from federal defense contracts.
- The Impasse: Anthropic refused to waive its “Constitutional AI” safeguards, which prohibit its models from being used for mass domestic surveillance and fully autonomous lethal targeting.
- The Pentagon’s Stance: Secretary of War Pete Hegseth argued that private companies cannot dictate national defense policy. The military demands “unconditional use” of AI to maintain a “speed of fight” that outpaces human decision-making.
- The $200M Void: The cancellation of Anthropic’s contract has opened a massive vacuum in the Pentagon’s $200 million “Agentic AI” budget.
## The Shift to Proprietary and “Friendly” AI
Rather than relying on a single “safety-first” vendor, the Pentagon is pursuing a three-pronged strategy to build its AI arsenal:
- In-House Development: Under Chief Digital and AI Officer Cameron Stanley, the Pentagon has begun engineering multiple LLMs designed to run in government-owned, air-gapped environments. These models will have no built-in “ethical inhibitors” that could interfere with tactical operations.
- GenAI.mil Platform: The military has consolidated its AI efforts onto GenAI.mil, a secure enterprise platform. While it currently hosts models from OpenAI and xAI (which have signed “any lawful use” agreements), the goal is to integrate the Pentagon’s own models by late 2026.
- Open Source Customization: The Department is reportedly investing heavily in Llama-based (Meta) open-source architectures, which can be stripped of commercial guardrails and retrained on classified “combat-proven” data.
## 7 “Pace-Setting Projects” (PSPs)
The development of these alternatives is being fast-tracked through seven priority projects designed to achieve “Military AI Dominance.” Those disclosed so far include:
| Project Name | Objective |
| --- | --- |
| Swarm Forge | Scaling AI-enabled drone swarms and autonomous tactics. |
| Agent Network | AI-driven battle management and campaign planning. |
| Ender’s Foundry | High-fidelity AI simulations for “sim-to-field” training. |
| Open Arsenal | Accelerating weapons development via automated intelligence. |
## Silicon Valley Divided
The dispute has fractured the tech industry into two camps:
- The “Responsible” Camp: Led by Anthropic, focusing on AI safety and democratic values. Despite the loss of Pentagon business, Anthropic’s revenue has climbed to $20 billion as commercial enterprises flock to its “safe” brand.
- The “Patriotic” Camp: Led by OpenAI, Palantir, and Anduril, who have positioned themselves as “friendly partners” to the administration. OpenAI’s share of government spending has surged as it fills the gap left by Anthropic.
## Legal Battle Looms
Anthropic has filed suit against the Trump administration to challenge the “supply-chain risk” designation, calling it punitive and politically motivated. Separately, Anthropic employees, joined by some OpenAI and Google staff, have sought a temporary restraining order to block the immediate “ripping out” of the company’s technology from active operations in the Middle East.