Anthropic CEO Dario Amodei was summoned to the Pentagon on Tuesday for what Defense Department sources described as a confrontational meeting with Defense Secretary Pete Hegseth, who was expected to deliver an ultimatum over Claude's usage restrictions in classified military systems.

The meeting comes as the Pentagon has finalized an agreement with xAI to deploy Grok as an alternative AI model in classified environments, breaking Anthropic's three-year monopoly on the military's most sensitive artificial intelligence applications.

The Friday Deadline

According to sources familiar with the matter, the Pentagon has set a deadline of Friday, February 27 at 5:01 PM ET for Anthropic to remove all safeguards from Claude and agree to "all lawful purposes" usage terms. The narrow window effectively forces Anthropic to choose between its founding AI safety mission and its commercial viability as a defense contractor.

A Defense official told Axios that Hegseth would present the ultimatum bluntly: comply with unrestricted military use requirements or face designation as a "supply chain risk"—a label typically reserved for foreign adversaries such as Huawei, and one that would legally bar federal contractors from doing business with the company.

The Safeguards in Question

Anthropic has refused the Pentagon's demand over two specific safeguards: its prohibition on mass surveillance of American citizens and its ban on fully autonomous weapons capable of lethal action without human decision-making. These restrictions align with the company's founding principles, established when former OpenAI executives left to create Anthropic specifically over AI safety concerns.

The standoff intensified following the January 3 special operations raid in Venezuela that resulted in the capture of President Nicolás Maduro. Claude was reportedly deployed during the operation through Anthropic's partnership with Palantir, marking the first known use of the model in classified military action.

When Anthropic executives subsequently questioned whether their safety policies had been violated during the lethal raid, Pentagon officials viewed the inquiry as an unacceptable attempt by a private corporation to audit classified operations. Defense Secretary Hegseth has since championed a policy of "non-woke" AI, arguing that private ethical frameworks cannot supersede commander-in-chief authority in wartime.

Commercial Catastrophe Scenario

If the "supply chain risk" label is applied, the consequences for Anthropic would extend far beyond losing Pentagon contracts. Most major US corporations—including Anthropic's lead investors Amazon and Alphabet—hold significant defense contracts and would be legally prohibited from continuing business relationships with a blacklisted entity.

The designation could sever Anthropic from the Western commercial ecosystem, potentially forcing the company to choose between accepting Pentagon terms, selling to non-US entities, or shutting down operations entirely.

Technical Replacement Challenges

Defense officials have privately acknowledged that replacing Claude would be extraordinarily difficult from a technical standpoint. The model is deeply integrated into classified systems built over three years of partnership. Workflows optimized around Claude's specific reasoning style and capabilities would require substantial re-engineering and validation for high-stakes military applications.

Claude's integration runs through Palantir's AI Platform, which serves as the primary interface layer for classified AI deployment. Transitioning to Grok or other alternatives would require not just API changes but potentially fundamental architectural modifications to maintain operational continuity.

Investor and Market Implications

Anthropic raised $30 billion in its latest funding round at a $380 billion valuation, with Amazon and Alphabet as lead investors. Both companies face potential conflicts of interest: supporting Anthropic's safety stance could jeopardize their own defense contracts, while pressuring Anthropic to comply undermines their public AI ethics commitments.

The outcome of Tuesday's meeting will signal whether Anthropic maintains its safety-first positioning or capitulates to Pentagon pressure. Industry observers expect the decision to set a precedent for how AI companies balance commercial incentives against ethical guardrails when national security demands conflict with corporate principles.