Federal Court Sides with Pentagon Against Anthropic in Supply Chain Risk Dispute

A federal appeals court has denied Anthropic's bid to block the Defense Department's designation of the company as a national security supply chain risk while the case continues.

A three-judge panel at the US Court of Appeals for the District of Columbia Circuit has turned down Anthropic's request to temporarily halt the Defense Department's classification of the company as presenting a supply chain risk to national security.

On Wednesday, the panel rejected the emergency stay motion, finding that the government's interest in managing how it acquires AI technology during ongoing military operations outweighed any financial or reputational harm Anthropic might suffer from the classification.

The ruling leaves in force the Defense Department's formal designation of Anthropic's products as a "supply-chain risk to national security."

No American company has previously received the designation, which also bars Defense Department contractors from using Anthropic's AI systems. The move could set a troubling precedent for other technology firms that refuse to meet government demands.

The three-judge panel wrote: "In our view, the equitable balance here cuts in favor of the government."

"On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict."

Order from the US Court of Appeals, case No. 26-01049. Source: CourtListener

Challenging the label in two courts

The dispute stems from a July 2025 agreement between the AI company and the Defense Department on a contract that would make Anthropic's Claude the first large language model approved for deployment on classified government networks.

Negotiations broke down in February, however, after the government sought to renegotiate terms and demanded that Anthropic allow unrestricted military use of Claude. Anthropic held firm that the technology must not be used in lethal autonomous weapons or in mass domestic surveillance of American citizens.

In late February, US President Donald Trump issued an order directing all federal agencies to cease utilizing Anthropic products, declaring that the company had committed a "disastrous mistake trying to strong-arm the Department of War."

In March, Anthropic initiated legal action against the Trump administration, characterizing it as an "unlawful campaign of retaliation."

In late March, the US District Court for the Northern District of California issued a preliminary injunction against the Pentagon over the designation and temporarily blocked Trump's directive, characterizing it as "Orwellian."

Because of how federal procurement law is structured, however, Anthropic has had to contest the designation through two separate legal avenues: a constitutional challenge in the California district court and a direct appeal to the D.C. Circuit under the statute that authorized the designation.

The court's decision recognized that Anthropic will "likely suffer some degree of irreparable harm absent a stay," and noted that "substantial expedition is warranted."

On X, Acting US Attorney General Todd Blanche characterized the decision as a "resounding victory for military readiness."

"Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company."