The US military was actively using Anthropic's Claude AI model during Friday night's strikes against Iran, deploying the technology for intelligence assessments, target identification, and combat scenario simulations just hours after President Trump ordered all federal agencies to begin removing the San Francisco startup's technology.

US Central Command, which oversees US forces in the Middle East, relied on Claude for critical operational tasks as US and Israeli forces launched coordinated attacks against Iranian cities late February 28, according to reporting from The Wall Street Journal citing defense sources. The timing creates an extraordinary contradiction: Claude powered military intelligence operations on the same day Trump publicly banned it across the government.

The model had been cleared for classified military and intelligence work through partnerships with Palantir Technologies and Amazon Web Services following a $200 million Pentagon contract awarded in July 2025. Claude was the only AI model approved on Pentagon classified networks at the time of the Iran strikes.

Multiple Sensitive Operations

The Iran strikes weren't Claude's only recent deployment in high-stakes military operations. The Wall Street Journal separately reported that the Pentagon used Anthropic's model during the January 2026 operation that led to the capture of Venezuelan President Nicolás Maduro, raising questions about how extensively the technology has been embedded in sensitive missions despite ongoing policy disputes.

The Pentagon's operational dependence on Claude came into sharp relief Friday afternoon when Defense Secretary Pete Hegseth gave Anthropic a 5:01 PM deadline to remove all contractual safeguards preventing the model from being used for mass domestic surveillance of Americans or from powering fully autonomous weapons systems. CEO Dario Amodei refused.

Hours later, Trump designated Anthropic a "supply chain risk," a classification normally reserved for Chinese and Russian companies, and ordered a six-month phase-out of its technology across the federal government. By that evening, CENTCOM was using Claude to analyze intelligence for strikes against Iran.

The Replacement Challenge

The contradiction highlights the Pentagon's operational dilemma. While Trump and Hegseth demand immediate termination of Anthropic contracts, military officials privately acknowledge that fully replacing Claude across all classified systems could take months given how deeply integrated the model has become into intelligence workflows.

The Pentagon has moved quickly to secure alternative AI tools. By Saturday, OpenAI announced it had signed a classified contract with the Defense Department to deploy ChatGPT in sensitive environments. The company emphasized its agreement includes "more guardrails than any previous deal for classified AI deployments, including Anthropic's."

OpenAI's contract explicitly prohibits using its technology for mass domestic surveillance, controlling autonomous weapons where law requires human oversight, or making high-stakes automated decisions like social credit scoring. CEO Sam Altman stated the company applies "technical guardrails, cloud-based deployment, oversight by cleared personnel and strict contractual protections" beyond just usage policies.

The Pentagon has also secured access to xAI's models for classified settings, though defense officials acknowledge neither replacement fully replicates Claude's current operational integration.

Operational Continuity vs Political Directive

Military experts say the use of Claude during the Iran strikes demonstrates the gap between political directives and operational reality. "You can't just swap out AI models in the middle of active combat operations," explained one former Pentagon technology official speaking on condition of anonymity. "These systems take months to integrate, test, and certify for classified environments."

The supply chain risk designation legally bars defense contractors from any commercial activity with Anthropic, a potentially catastrophic blow to the startup given that its major investors, Amazon and Alphabet, both hold extensive military contracts. Amodei called the designation "unprecedented" for an American firm and vowed to challenge it in court.

Trump's Truth Social post Friday attacked Anthropic as "leftwing nut jobs" whose "selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY."

The fact that those same troops were simultaneously relying on Claude for strike planning against Iran underscores the complexity of untangling AI systems from modern military operations—regardless of political pressure to do so immediately.
