A Growing Rivalry in the AI Arena
Anthropic is scaling rapidly in its quest to compete with OpenAI, a company now valued at nearly $500 billion and backed by industry giants like Microsoft and Nvidia. OpenAI’s influence has expanded across sectors — from cloud computing to enterprise solutions — while Anthropic, with its $183 billion valuation, has been steadily carving out its place through its Claude AI models, which have gained traction among enterprise users seeking safety-focused AI systems.
But while Anthropic races to match OpenAI’s pace in innovation, it’s also waging a separate battle — this time, with the U.S. government and prominent figures in the new Trump administration.
Tensions with Washington
David Sacks, venture capitalist and President Donald Trump’s AI and crypto czar, recently took aim at Anthropic, accusing the company of engaging in what he called “a regulatory capture strategy based on fear-mongering.” His remarks came after Anthropic co-founder and head of policy Jack Clark published an essay titled “Technological Optimism and Appropriate Fear,” discussing the existential risks of advanced AI.
Sacks’ criticism underscores a deeper ideological divide between Anthropic and the Trump administration’s AI priorities. While the White House has embraced OpenAI through initiatives like Project Stargate — a multibillion-dollar AI infrastructure partnership with Oracle and SoftBank — Anthropic has positioned itself as a cautious voice advocating for stricter oversight.
The Origins of the Conflict
Founded in 2021 by siblings Dario and Daniela Amodei, both former OpenAI researchers, Anthropic was built around a mission to develop AI systems that prioritize safety and alignment. That founding philosophy was a direct response to what the Amodeis saw as OpenAI’s rapid commercialization and close ties to corporate power.
Today, that founding ethos continues to shape Anthropic’s policy stance. The company has opposed federal efforts to limit state-level AI regulation, most notably pushing back against a Trump-backed bill that sought to ban state AI laws for 10 years. Instead, Anthropic threw its support behind California’s SB 53, which mandates greater transparency and accountability from AI developers.
“Without transparency, labs might face incentives to scale back safety programs in order to compete,” Anthropic stated in a recent blog post.
Sacks’ Counterargument: Innovation Over Regulation
David Sacks argues that the U.S. cannot afford to slow down innovation due to excessive regulation, especially as it competes with China’s $60 billion AI sector. “The U.S. is in an AI race, and our chief competition is China,” Sacks said at Salesforce’s Dreamforce conference in San Francisco.
Sacks insists that his comments aren’t part of a broader campaign against Anthropic but rather a push to keep America at the forefront of AI development. He pointed out that just months ago, the White House approved Anthropic’s Claude app for use across government agencies through the GSA App Store, highlighting the company’s continued access to federal partnerships.
Politics, Policy, and Perception
Sacks and other conservative investors claim Anthropic has cast itself as a political underdog — a defender of ethical AI under attack from partisan interests. As evidence of political bias, Sacks cited instances such as Dario Amodei likening Trump-era power dynamics to rule by a “feudal warlord” and the company’s hiring of former Biden-era officials.
Keith Rabois, another influential tech investor, echoed Sacks’ sentiments, saying, “If Anthropic truly believed their rhetoric about AI safety, they could always shut down the company — and lobby then.”
These attacks come amid heightened scrutiny of Anthropic’s leadership, which has maintained that its focus on AI safety is not political but essential for long-term global stability.
Anthropic’s Ongoing Federal Footprint
Despite the growing tension, Anthropic continues to secure major U.S. government contracts. It recently inked a $200 million deal with the Department of Defense and maintains strong partnerships with federal agencies through the General Services Administration (GSA). The company also launched a National Security Advisory Council to align its AI development with U.S. strategic interests.
In an unprecedented move, Anthropic began offering a specialized version of its Claude AI model to government institutions for $1 per year, signaling a strong commitment to collaboration — even amid political disagreements.
The Bigger Picture: AI’s Future at a Crossroads
The Anthropic–OpenAI rivalry isn’t just about technology — it reflects a philosophical divide shaping the future of artificial intelligence. OpenAI’s approach centers on scale, speed, and integration with powerful corporate and government partners. Anthropic, in contrast, champions a slower, more cautious path focused on safety, transparency, and long-term stability.
As billions pour into AI development and regulatory frameworks tighten worldwide — from the EU’s AI Act to new U.S. state laws — Anthropic’s balancing act between innovation and responsibility could define the next decade of the industry.
For now, one thing is clear: as the AI race intensifies, Anthropic isn’t just competing with OpenAI — it’s redefining what responsible innovation looks like in an era dominated by power, politics, and profit.