As Anthropic rapidly expands to keep pace with AI powerhouse OpenAI, it is simultaneously confronting scrutiny from the U.S. government, highlighting the growing intersection of innovation and regulation in artificial intelligence.
OpenAI, valued at $500 billion and backed by major players like Microsoft and Nvidia, dominates the consumer AI space with apps such as ChatGPT and Sora. Anthropic, founded in late 2020 by former OpenAI executives Dario and Daniela Amodei and currently valued at $183 billion, has carved a niche in enterprise AI with its Claude models. The company’s mission centers on building safer AI, contrasting with OpenAI’s push toward broad commercial adoption.
David Sacks, venture capitalist and President Trump’s AI and crypto czar, has publicly criticized Anthropic for what he describes as a politically motivated campaign to influence AI regulation. On X, Sacks accused the startup of running a “sophisticated regulatory capture strategy based on fear-mongering,” highlighting tension between innovation and policy.
Anthropic co-founder Jack Clark recently penned an essay titled “Technological Optimism and Appropriate Fear,” cautioning about AI systems’ growing complexity and the potential risks if their objectives aren’t carefully aligned with human priorities. Sacks sees these warnings as slowing innovation and creating regulatory uncertainty.
The regulatory philosophies of Anthropic and OpenAI diverge sharply. OpenAI has lobbied for fewer restrictions, aligning closely with federal policymakers, while Anthropic has actively supported state-level regulation like California’s SB 53, which mandates transparency and safety disclosures for AI companies. Anthropic has opposed provisions in the Trump-era “Big Beautiful Bill” that would have blocked state-level rules for a decade, emphasizing the need for safety and disclosure.
Anthropic continues to work with the federal government, offering its Claude model to agencies through the General Services Administration and securing contracts like a $200 million deal with the Department of Defense. The company also established a national security advisory council to ensure alignment with U.S. strategic interests.
Despite regulatory pushback, investors have continued to pour capital into Anthropic. Its Claude enterprise models are gaining traction across Fortune 500 companies, while OpenAI maintains dominance in consumer-facing applications. Analysts note that the differing regulatory strategies may impact speed of deployment, adoption, and market positioning over the next 12–24 months.
Keith Rabois, another influential Republican tech investor, criticized Anthropic’s approach, suggesting that if the company truly prioritized public safety over political posturing, it would act decisively on safety rather than lean so heavily on lobbying efforts.
Sacks has emphasized that the overarching goal for U.S. AI policy is to maintain a competitive edge over China, which he identifies as the primary global rival in AI innovation. He insists his critiques of Anthropic are not politically motivated but aimed at ensuring rapid progress in AI development.
Still, Anthropic’s strategy positions the company as a cautious, safety-first alternative, navigating both the competitive pressures from OpenAI and the evolving political landscape in Washington. The outcome of this balancing act may influence not only Anthropic’s trajectory but also the broader direction of AI regulation and adoption in the United States.
As the AI race intensifies, Anthropic faces the dual challenge of accelerating growth while addressing the scrutiny of policymakers, a test of both its technological vision and strategic diplomacy.