A federal appeals court in Washington, D.C., denied Anthropic's request for a stay in its lawsuit against the Department of Defense (DOD), declining to temporarily halt the Pentagon's blacklisting of the artificial intelligence startup while the case proceeds [1]. The DOD officially designated Anthropic a supply chain risk in early March, citing concerns that the company's Claude AI models threaten U.S. national security. As a result, defense contractors must now certify that they do not use Anthropic's Claude models in their military-related work [1].
Anthropic argued in its filing that the Pentagon's determination was unconstitutional, arbitrary and capricious, and not made in accordance with required legal procedures, and it sought the court's intervention to prevent further monetary and reputational harm [1]. The appeals court, however, sided with the government, stating that "the equitable balance here cuts in favor of the government," and emphasized the importance of judicial management in securing vital AI technology during an active military conflict [1]. The court acknowledged that Anthropic "will likely suffer some degree of irreparable harm absent a stay," but characterized the company's interests as primarily financial in nature. The court also noted that Anthropic had not demonstrated that its right to free speech was chilled during the litigation [1].
The DOD's supply chain risk action rested on two distinct statutory designations, under 10 U.S.C. § 3252 and 41 U.S.C. § 4713, each of which must be challenged in a different court. The designation under 41 U.S.C. § 4713 falls within the jurisdiction of the appeals court in Washington, D.C. [1]. In a related but separate case, a San Francisco federal court granted Anthropic a preliminary injunction late last month, barring the Trump administration from enforcing a ban on the use of Claude [1].
CONCLUSION
The appeals court's decision to deny Anthropic's request for a stay reinforces the Pentagon's supply chain risk designation, intensifying financial and reputational challenges for the AI startup. The ruling signals heightened scrutiny of AI technology in defense applications and underscores the government's prioritization of national security concerns over corporate interests.