Microsoft has formally backed artificial intelligence startup Anthropic in a growing legal dispute with the U.S. Department of Defense, urging a federal judge to temporarily halt the Pentagon’s decision to classify the company as a supply chain risk.
In a filing submitted to the U.S. District Court in San Francisco, Microsoft argued that a temporary restraining order is necessary to prevent immediate disruptions to defense technology systems currently using Anthropic’s AI models. The company warned that without such an order, technology providers working with the U.S. military would be forced to quickly reconfigure products and contracts that rely on Anthropic’s tools.
Microsoft said a sudden cutoff could have serious operational consequences.
According to the filing, immediate compliance with the Pentagon’s directive would require significant technical changes across multiple defense platforms, potentially impacting the military’s access to advanced AI capabilities.
The company emphasized that a temporary pause would allow all parties time to reach a negotiated solution without compromising critical defense infrastructure.
The conflict escalated last week when the Department of Defense officially designated Anthropic as a supply chain risk, effectively banning the company’s technology from use in defense-related contracts.
This classification requires contractors and technology vendors working with the Pentagon to certify that none of their systems rely on Anthropic’s AI models.
Such designations are typically reserved for entities linked to foreign adversaries or national security threats, making the move particularly controversial within the technology sector.
The Pentagon’s decision took effect immediately, leaving contractors with little time to adjust existing systems that may rely on Anthropic’s models.
Technology firms warned that the designation could force rapid changes to infrastructure supporting everything from data analysis tools to AI-assisted decision systems used by defense agencies.
Microsoft’s filing argued that abrupt changes could slow down key technological capabilities used by the military.
The company stated that such disruption could “hamper U.S. warfighters at a critical moment” by limiting access to cutting-edge artificial intelligence technologies.
Anthropic responded to the Pentagon’s decision by filing a lawsuit against the federal government, arguing that the designation is both unprecedented and legally unjustified.
The company claims the action could cause irreparable financial harm, placing hundreds of millions of dollars in defense-related contracts at risk.
Anthropic maintains that it has consistently prioritized responsible AI development and that the government’s designation unfairly threatens its reputation and business operations.
The legal complaint seeks to overturn the Pentagon’s classification and prevent enforcement of the restrictions.
Industry observers say the case could set an important precedent for how governments regulate artificial intelligence providers that supply advanced technologies to defense agencies.
Microsoft’s support for Anthropic extends beyond the courtroom.
The tech giant announced plans in November to invest up to $5 billion in Anthropic, positioning the startup as a key player in the rapidly expanding artificial intelligence market.
The investment comes alongside Microsoft’s long-standing relationship with OpenAI, another leading AI company in which Microsoft has invested tens of billions of dollars since 2019.
By supporting Anthropic’s legal challenge, Microsoft is defending not only a strategic partner but also, it argues, the stability of the broader industry.
Microsoft’s court filing was submitted as an amicus brief, a legal document filed by parties who are not directly involved in the case but whose interests may be affected by its outcome.
Such filings are common in high-profile technology disputes where court rulings could impact entire sectors.
Before the ban was imposed, Anthropic had been engaged in negotiations with the Department of Defense regarding the use of its AI models.
Those discussions ultimately collapsed after the two sides failed to reach an agreement on how the technology could be deployed.
Anthropic sought guarantees that its models would not be used for fully autonomous weapons systems or mass domestic surveillance.
The company has publicly emphasized that its AI systems should remain under meaningful human oversight, particularly when used in sensitive government or defense applications.
The Pentagon, however, reportedly insisted that it maintain broad access to the technology for all lawful military uses, a position that Anthropic declined to accept.
The disagreement led to a breakdown in negotiations and eventually triggered the supply chain risk designation.
Following the Pentagon’s announcement, major cloud providers quickly reassured customers that Anthropic’s technology would remain available for commercial use.
Microsoft, Amazon and Google all confirmed that Anthropic models would continue to operate on their cloud platforms for non-defense applications.
Anthropic’s flagship AI models, including its Claude family of systems, are widely used by businesses for tasks such as software development, research analysis, customer support automation and enterprise productivity tools.
The cloud giants made it clear that the Pentagon’s restriction applies specifically to defense-related contracts, not to the broader commercial market.
Founded in 2021 by former OpenAI executives, Anthropic has quickly become one of the fastest-growing artificial intelligence companies in the world.
The startup has attracted billions of dollars in funding from major technology firms and investors, positioning itself as a major competitor in the AI race.
Recent investment rounds have reportedly pushed the company’s valuation to around $380 billion, making it one of the most valuable private technology startups globally.
Anthropic’s rapid rise reflects surging demand for advanced AI models capable of powering everything from chatbots and research assistants to enterprise automation tools.
The legal battle between Anthropic and the U.S. government highlights the growing tension between national security concerns and the rapid expansion of artificial intelligence technologies.
Governments increasingly rely on AI for defense, intelligence analysis and cybersecurity operations. At the same time, developers and technology companies are pushing for safeguards that limit how their systems can be used.
Microsoft’s request for a temporary restraining order aims to buy time for negotiations between Anthropic and the Pentagon.
The company argues that a measured approach would allow both sides to reach a compromise that protects national security while ensuring that the U.S. military continues to benefit from cutting-edge AI innovation.
With billions of dollars in contracts and the future of defense AI development at stake, the court’s decision could shape how technology companies and governments collaborate on artificial intelligence for years to come.









