A federal court in San Francisco on Thursday delivered a major legal victory to generative AI company Anthropic, blocking key actions by the Trump administration and the Department of Defense that had escalated a month‑long dispute over the military use of artificial intelligence.
U.S. District Judge Rita Lin issued a preliminary injunction that prevents the government from enforcing a designation labelling Anthropic a “supply‑chain risk” and barring federal agencies from using the company’s AI tools, including its flagship Claude model.
Judge Lin wrote in her decision, “The defendants’ designation of Anthropic as a ‘supply chain risk’ is likely both contrary to law and arbitrary and capricious. There is no legitimate basis to infer that the company’s protective usage restrictions somehow make it a saboteur.” The injunction restores the status quo, blocking both the Pentagon’s punitive measures and the administration’s broader ban on federal use of Anthropic’s technology while the underlying lawsuit proceeds.
The legal clash began earlier this year when Anthropic resisted Defense Department efforts to require unrestricted military use of its AI tools, specifically opposing deployment for autonomous lethal weapons and domestic mass surveillance.
In response, the Pentagon threatened to cut Anthropic out of defence contracts and designated the company a supply‑chain risk, a label typically reserved for firms tied to foreign adversaries. Anthropic’s lawsuit contends that the government’s actions amount to unlawful retaliation that threatens the company’s business relationships and has chilled its protected speech. The company has maintained that it had no choice but to challenge the designation.
Government lawyers defended the DoD’s authority, arguing that the military must be able to use AI tools for all lawful purposes, including defence and national security missions, without being constrained by corporate usage policies. Legal analysts have described the ruling as a check on executive overreach, noting that labelling a leading U.S. tech company a security risk over contractual disagreements could set a troubling precedent. During hearings, Judge Lin reportedly expressed concern that the DoD’s actions appeared “like an attempt to cripple Anthropic” for asserting usage restrictions on its product.
The injunction is preliminary, meaning the ultimate outcome will depend on further proceedings as both sides prepare for a more extended legal battle. The government is expected to file a report in early April outlining how it will comply with the court order. Anthropic welcomed the ruling, saying it protects innovation and responsible governance of artificial intelligence. The Department of Defense has not yet indicated whether it will appeal.
The dispute highlights the ongoing tension between private AI companies seeking to enforce safety safeguards and government agencies demanding operational flexibility, raising broader questions about how emerging AI technologies should be governed in defence and national security contexts.
Related Read:
- Trump Orders US Federal Agencies to Halt Use of Anthropic AI
- Anthropic Threatens Lawsuit After Pentagon Labels AI Firm
- Anthropic Files Lawsuit Against Pentagon
Senior Reporter/Editor
Bio: Ugochukwu is a freelance journalist and Editor at AIbase.ng, with a strong professional focus on investigative reporting. He holds a degree in Mass Communication and brings extensive experience in news gathering, reporting, and editorial writing. With over a decade of active engagement across diverse news outlets, he contributes in-depth analytical, practical, and expository articles exploring artificial intelligence and its real-world impact. His seasoned newsroom experience and well-established information networks provide AIbase.ng with credible, timely, and high-quality coverage of emerging AI developments.