A federal judge in San Francisco has granted Anthropic a temporary victory, issuing an injunction that blocks the Defense Department’s designation of the company as a national security supply chain risk and pauses a presidential directive that told federal agencies to stop using Anthropic’s chatbot, Claude.
In an order issued Thursday, U.S. District Judge Rita Lin for the Northern District of California found that the government’s actions raised serious legal questions. She wrote that the governing statute does not permit labeling an American company as a potential adversary or saboteur for expressing disagreement with the government, and described the administration’s punitive measures as appearing arbitrary, capricious and an abuse of discretion.
Anthropic — which Menlo Ventures reported held about 32% of the enterprise AI market in 2025, ahead of OpenAI’s 25% — argued that a government-wide ban would jeopardize its market position. The company sued on March 9 in federal court in Washington, D.C., alleging that Defense Secretary Pete Hegseth exceeded his authority when he designated Anthropic a supply chain risk.
The dispute traces back to a July 2025 draft agreement under which Claude was to become the first frontier AI model approved for use on classified networks. Negotiations collapsed in February after the Pentagon sought to renegotiate terms and pressured Anthropic to allow military use of Claude “for all lawful purposes,” without the restrictions Anthropic insisted upon. The company has said its technology should not be used to build lethal autonomous weapons or to enable wide-scale domestic surveillance of Americans.
After the talks broke down, President Trump on Feb. 27 ordered all federal agencies to cease using Anthropic products and criticized the company on Truth Social. At a 90-minute hearing in San Francisco on March 24, Judge Lin questioned government lawyers about whether the designation and related actions were retaliatory — in other words, punishment for Anthropic’s public criticism of the Pentagon.
In a March 26 ruling, the court concluded that penalizing Anthropic for publicly scrutinizing the government’s contracting position would amount to illegal First Amendment retaliation. Anthropic said it was grateful for the court’s prompt action and pleased that the judge found the company likely to succeed on the merits of its claims.
Screenshot of the court ruling. Source: CourtListener