A historic alignment of technology industry giants, led by Microsoft, has formed in defense of Anthropic as the AI company battles a Pentagon designation that threatens its ability to do government work. Microsoft submitted an amicus brief in a San Francisco federal court supporting Anthropic's request for a temporary restraining order, while Google, Amazon, Apple, and OpenAI joined a separate supporting brief. The unified front from these fierce competitors reflects just how seriously the industry views the precedent this case could set.
The dispute erupted after Anthropic refused to allow its AI to be used for mass surveillance or autonomous weapons as part of negotiations over a $200 million Pentagon contract. When talks collapsed, Defense Secretary Pete Hegseth branded the company a supply-chain risk, a designation that has never before been applied to a US technology firm. The Pentagon’s chief technology officer subsequently declared that there was no chance of the department renegotiating with Anthropic.
Microsoft’s filing is especially notable given that the company directly integrates Anthropic’s AI into systems it builds for the US military. As a partner in the $9 billion Joint Warfighting Cloud Capability contract and holder of additional multibillion-dollar agreements with federal agencies, Microsoft has both business and strategic interests in ensuring Anthropic remains a viable supplier. The company’s public statement framed responsible AI governance and national security as complementary goals rather than competing priorities.
Anthropic’s twin lawsuits, filed the same day in California and Washington, DC, claim the Pentagon violated the company’s First Amendment rights by punishing it for publicly advocating for AI safety. The company’s filings acknowledged that Claude is not currently considered reliable or safe enough for lethal autonomous decision-making, which Anthropic said justified the restrictions it sought to impose. The company argued the supply-chain risk label, traditionally reserved for firms linked to China or other adversaries, was being grossly misapplied.
The backdrop to this legal battle includes growing congressional concern over the use of AI in military targeting, particularly following a strike in Iran that reportedly killed more than 175 civilians at an elementary school. Lawmakers have written to the Pentagon asking whether AI tools were used to select the target and whether human oversight was in place. The questions now before the courts and Congress are shaping up into one of the most consequential debates over AI governance in American history.
