Federal Court Issues Temporary Halt to Pentagon's Ban on Anthropic

According to Judge Rita Lin, the US government's decision to "cripple Anthropic" came only after the company voiced its objections regarding potential applications of its technology.

In San Francisco, a federal judge has approved Anthropic's petition for temporary relief following the Pentagon's classification of the artificial intelligence company as a threat to supply chain security.

Judge Rita Lin of the US District Court for the Northern District of California issued a preliminary injunction against the Pentagon on Thursday over the designation. The order also temporarily halts a directive from US President Donald Trump requiring federal agencies to discontinue their use of Claude, Anthropic's chatbot platform.

"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government,"

— Judge Lin

According to data from Menlo Ventures, Anthropic led the enterprise AI market in 2025 with a 32% share, ahead of OpenAI's 25%. A government-wide prohibition on Anthropic would severely undercut that position.

Judge Lin characterized these "broad punitive measures" implemented against Anthropic by the Trump administration and Defense Secretary Pete Hegseth as appearing "arbitrary, capricious, [and] an abuse of discretion."

The order followed Anthropic's March 9 lawsuit, filed in federal court in the District of Columbia, contending that Hegseth exceeded his legal authority in classifying the company as a national security supply-chain risk.

Image from the court's ruling. Source: CourtListener

Anthropic opposed autonomous weapons and mass surveillance

The conflict originated in a July 2025 agreement between the AI company and the Pentagon on a contract that would make Claude the first frontier AI model authorized for deployment on classified networks.

Negotiations broke down in February when the Pentagon sought to renegotiate terms, demanding that Anthropic permit military use of Claude "for all lawful purposes," without limitation.

Anthropic held firm that its technology must not be used for lethal autonomous weapons systems or mass domestic surveillance of Americans.

Trump issued an order on Feb. 27 directing all federal agencies to stop using Anthropic products. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War," he wrote on Truth Social.

At a 90-minute hearing in San Francisco on March 24, Judge Lin pressed government attorneys on whether Anthropic was being punished for its public criticism of the Pentagon.

Classic illegal First Amendment retaliation

"Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," the March 26 ruling stated.

In a statement, Anthropic said it was "grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits."
