Washington/New Delhi: The US Department of Defense has designated AI company Anthropic as a “supply chain risk,” escalating an unprecedented dispute between the Pentagon and a major American AI developer over the military use of advanced AI systems.

The decision, confirmed by the Pentagon in early March, could restrict the use of Anthropic’s AI models, including its flagship system Claude, in defense-related contracts.

Under the designation, companies working with the Pentagon may be required to certify that they do not rely on Anthropic technology in projects tied to defense operations.

The move marks a rare instance in which the US government has applied a supply-chain risk label to a domestic technology company. Such designations have historically been used against foreign firms viewed as potential national security threats.

Background: A Contract and Two Red Lines

The roots of the conflict trace to July 2025, when Anthropic and the Pentagon signed a contract, brokered by defense contractor Palantir, that made Claude the first frontier AI model approved for use on classified military networks.

That contract included guardrails prohibiting two specific uses: powering fully autonomous lethal weapons (where no human is involved in targeting or firing decisions) and conducting mass domestic surveillance of American citizens.

In early 2026, the Pentagon sought to renegotiate, demanding that Anthropic allow the military to use Claude for “all lawful purposes” without limitation.

Negotiations broke down after weeks of talks.

The Pentagon set a deadline of 5:01 p.m. on Friday, February 27, for Anthropic to agree to its terms. When Anthropic’s CEO Dario Amodei refused to budge, President Donald Trump directed all federal agencies to immediately cease use of the company’s technology, with a six-month phaseout period for certain agencies.

Defense Secretary Pete Hegseth simultaneously announced the supply chain risk designation.

Even as the dispute escalated, Claude remained embedded in U.S. military operations. According to Bloomberg and CNBC, Claude is one of the main tools in Palantir’s Maven Smart System, which military operators in the Middle East rely on, including in support of U.S. operations related to Iran.

Scope of the Designation: Narrower Than Announced

Hegseth’s initial announcement suggested sweeping consequences, declaring that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

However, Anthropic CEO Dario Amodei pushed back, stating that the Pentagon’s own written notification limits the designation more narrowly, applying only to Claude’s use “as a direct part of” specific Department of War contracts.

Major technology partners confirmed the narrower interpretation. A Microsoft spokesperson stated that its lawyers studied the designation and concluded that “Anthropic products, including Claude, can remain available to our customers other than the Department of War through platforms such as M365, GitHub, and Microsoft’s AI Foundry.”

Google and Amazon issued similar statements affirming that Anthropic products remain available for all non-Defense Department work.

Not all contractors drew the same conclusion. Lockheed Martin said it would “follow the President’s and the Department of War’s direction” and look to other AI providers, though it added that it does not depend on any single AI vendor.

Amodei: ‘No Choice But to Challenge in Court’

Dario Amodei, CEO of Anthropic, issued a public statement on March 5 declaring, “We do not believe this action is legally sound, and we see no choice but to challenge it in court.”

He described Anthropic’s guardrails as narrow usage restrictions that “relate to high-level usage areas, and not operational decision-making,” and said there had been “productive conversations” with the Pentagon in recent days about whether it could continue using Claude or establish a smooth transition.

In a separate memo to staff that was later reported by The Information, Amodei said the administration’s actions were politically motivated, suggesting the company was being targeted partly because it had not donated to Trump or offered what he called “dictator-like praise.” 

Amodei later publicly apologized for the internal memo, saying he regretted the tone and that it was “a difficult day for the company.”

OpenAI Steps In — and Draws Criticism

Hours after Hegseth’s announcement on February 27, OpenAI said it had reached a deal with the Pentagon to provide access to its models for classified military environments. 

OpenAI initially said it sought similar protections against domestic surveillance and autonomous weapons, but later had to amend its agreements. 

CEO Sam Altman publicly acknowledged that the company had “rushed a deal that looked opportunistic and sloppy.” Some OpenAI employees have since resigned in protest, and public campaigns have called on users to abandon ChatGPT.

Broad Criticism and Industry Fallout

The designation drew broad condemnation from lawmakers, former officials, and legal experts. Democratic Sen. Kirsten Gillibrand, a member of the Senate Armed Services and Intelligence Committees, called it “shortsighted, self-destructive, and a gift to our adversaries” and described it as “a dangerous misuse of a tool meant to address adversary-controlled technology.” 

Republican Sen. Thom Tillis told Axios the public fight was “sophomoric.”

Thirty former military and intelligence officials, along with tech policy leaders, wrote jointly to Congress urging an investigation into the “dangerous precedent” set by the Pentagon. 

Major tech trade groups in Washington also warned the administration about the consequences of designating a U.S. company a supply chain risk. 

Dean Ball, a former Trump White House AI adviser, called the designation a “death rattle” of the American republic, warning that the government was treating domestic innovators worse than foreign adversaries.

Federal law defines supply chain risk as a “risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert” a system, language that critics say was clearly not designed to apply to a cooperating U.S. company.

Palantir, which relies on government contracts for about 60% of its US revenue and had integrated Claude into its defense platforms, saw its shares dip briefly on the news before closing roughly flat.

Consumer Surge for Anthropic

Despite the Pentagon’s action, Anthropic experienced a sharp surge in consumer support. The company said more than a million people signed up for Claude each day during the week of the designation, pushing it past OpenAI’s ChatGPT and Google’s Gemini as the top AI app in more than 20 countries on Apple’s App Store. 

The company said the Pentagon contracts represent only a small portion of its overall business.


Reporting based on accounts from Bloomberg, CNBC, NPR, CNN, TechCrunch, Fast Company, The Hill, and Inc.
