Microsoft on Tuesday filed a legal brief supporting Anthropic’s lawsuit, which seeks to temporarily block the US Department of Defense from designating the artificial intelligence startup a supply-chain risk, reports Reuters.
In an amicus brief submitted to a federal court in San Francisco, Microsoft backed Anthropic’s request for a temporary restraining order (TRO) against the Pentagon’s decision, arguing that the designation should be paused while the court reviews the case.
Microsoft, which integrates Anthropic’s technology into services it provides to the US military, said the designation directly affects its operations.
Anthropic, the maker of the Claude AI model, filed a lawsuit on Monday to stop the Pentagon from placing it on a national security blacklist, escalating a dispute with the US military over restrictions on the use of its technology.
Microsoft argued that granting a temporary restraining order would help prevent costly disruptions for suppliers that rely on Anthropic’s products. Without a pause, contractors could be forced to quickly redesign or replace systems built around the company’s AI tools.
The judge overseeing the case must approve Microsoft’s request to submit the brief before it becomes part of the official record, although courts frequently allow third parties to weigh in on significant cases.
According to Microsoft, the Pentagon allowed itself a six-month transition period to phase out Anthropic’s technology but did not provide a similar timeline for contractors that depend on the company’s products or services to fulfil defense contracts.
“Should this action proceed without the entry of a temporary restraining order, Microsoft and other government contractors with expertise in developing solutions to support US government missions will be forced to account for a new risk in their business planning,” the company said in the filing.
Microsoft added that a temporary restraining order would allow time to negotiate a solution that maintains military access to advanced technology while ensuring AI systems are not used for domestic mass surveillance or for autonomous warfare without human control.
On Monday, a group of 37 researchers and engineers from OpenAI and Google also filed an amicus brief supporting Anthropic in the case.
Bd-pratidin English/ Jisan