Microsoft has refused to stay silent in the face of what it considers an existential threat to Anthropic and the broader AI industry, filing a brief in San Francisco federal court that seeks a temporary restraining order against the Pentagon’s supply-chain risk designation. The filing argued that the designation, which has never before been applied to a US company, threatens the technology networks that underpin both commercial AI and national defense. Amazon, Google, Apple, and OpenAI have also backed Anthropic through a joint court filing, forming a united industry front against the government’s action.
The conflict traces back to a $200 million contract negotiation in which Anthropic refused to allow its Claude AI to be used for mass surveillance of American citizens or to power autonomous lethal weapons. Defense Secretary Pete Hegseth labeled the company a supply-chain risk following the collapse of those talks, triggering the cancellation of Anthropic’s existing government contracts. Anthropic filed two simultaneous lawsuits challenging the designation in California and Washington DC, arguing it was unconstitutional and ideologically motivated.
Microsoft’s intervention is grounded in its direct integration of Anthropic’s technology into military systems it provides to the federal government. As a partner in the Pentagon’s $9 billion Joint Warfighting Cloud Capability contract and the holder of additional federal agreements worth billions more, Microsoft has both a commercial and a strategic interest in ensuring Anthropic survives this legal challenge. The company has publicly called for a collaborative path forward in which government and industry jointly define responsible standards for AI use in national security.
Anthropic’s court filings argued that the supply-chain risk designation, traditionally reserved for companies with ties to adversarial foreign nations, was being misused as a political weapon against a domestic company for its publicly stated AI safety positions. The company stated that it does not currently believe Claude is safe or reliable enough for lethal autonomous decision-making, and that this judgment, not politics, was the genuine basis for the contract restrictions it sought. The Pentagon’s technology chief has publicly foreclosed any possibility of renegotiation.
Congressional Democrats are separately pressing the Pentagon for answers about whether AI was involved in a strike in Iran that reportedly killed over 175 civilians at an elementary school, asking specifically about AI targeting tools and human oversight processes. These legislative inquiries are adding political urgency to an already extraordinary legal confrontation. Together, Microsoft’s intervention, the industry coalition, and congressional pressure are transforming Anthropic’s legal battle into a defining national debate about the governance of artificial intelligence in warfare.
Microsoft Refuses to Stay Silent as Anthropic Faces Existential Threat From Pentagon’s AI Blacklist