
OpenAI CEO Sam Altman has acknowledged that the company moved too quickly in finalizing its recent agreement with the United States Department of Defense, admitting the rollout “looked opportunistic and sloppy” amid mounting criticism from the public and industry observers.
In a statement shared publicly, Altman said the ChatGPT maker is now revising the contract to clarify how its artificial intelligence systems can and cannot be used. The updated language will explicitly prohibit the intentional use of OpenAI models for domestic surveillance of U.S. citizens and nationals. According to Altman, the Defense Department also affirmed that OpenAI’s tools would not be deployed by intelligence agencies such as the National Security Agency.
The rare public concession from one of Silicon Valley’s most influential AI executives comes at a delicate moment for the generative AI industry, which is navigating intensifying regulatory scrutiny, geopolitical tensions, and rising public concern about the military applications of advanced machine learning systems.
OpenAI announced its agreement with the Pentagon on Friday, only hours after U.S. President Donald Trump reportedly directed federal agencies to halt the use of tools developed by rival AI firm Anthropic. The timing raised eyebrows across the tech sector, particularly as the announcement coincided with escalating geopolitical tensions and military operations abroad.
The federal decision to suspend Anthropic’s systems followed a breakdown in negotiations between the company and defense officials over safety guardrails. Defense Secretary Pete Hegseth indicated that Anthropic would be designated a supply chain threat, a move that effectively shut it out of certain federal deployments.
Anthropic, founded in 2021 by former OpenAI researchers including CEO Dario Amodei, has positioned itself as a safety-first AI lab. Its flagship Claude models were previously deployed across the Defense Department’s classified network after a landmark agreement last year, marking one of the first large-scale integrations of frontier AI into sensitive government infrastructure.
However, Anthropic later sought formal assurances that its systems would not be used for domestic surveillance or to develop or operate autonomous weapons without meaningful human oversight. Talks reportedly stalled over those conditions.
Altman has now stated that OpenAI shares similar “red lines” with its competitor. In his public comments, he emphasized that there are still significant technical and ethical limitations to today’s AI systems.
“There are many things the technology just isn’t ready for,” Altman noted, pointing to unresolved trade-offs between capability and safety. He added that OpenAI will work with the Pentagon on additional technical safeguards to prevent misuse.
The revised contract language is designed to address specific concerns that surfaced online over the weekend. Among the changes is an explicit prohibition against intentional domestic surveillance. Altman also said he communicated to defense officials that Anthropic should not be labeled a supply chain risk and expressed hope that the government would offer it similar contractual terms.
Despite these assurances, questions remain about why the Defense Department was willing to accept OpenAI’s proposed safeguards but not Anthropic’s. Some government officials have privately criticized Anthropic for what they describe as excessive caution, arguing that overly restrictive conditions could limit operational effectiveness.
The controversy intensified after reports that Anthropic’s Claude system had been used by the U.S. military during a January operation targeting Venezuelan President Nicolás Maduro. Although Anthropic did not publicly object to that specific use case, the revelation sparked broader debate about how AI models are integrated into real-world military missions.
For OpenAI, the backlash was swift. Following news of the Pentagon agreement, some users reportedly removed ChatGPT from their devices and switched to Claude, causing short-term shifts in app store rankings. While the overall impact on OpenAI's user base, which numbers in the hundreds of millions globally, remains unclear, the episode underscores how sensitive customers have become to a company's ethical positioning.
The generative AI market is projected to surpass $1 trillion in economic impact over the next decade, according to multiple industry forecasts. As federal agencies expand AI procurement budgets—spending billions annually on digital modernization and cybersecurity—major AI labs are competing for lucrative government contracts. That financial incentive increases pressure to move quickly, but it also heightens reputational risks.
Altman’s admission reflects a broader tension within the AI sector. Companies are balancing rapid commercialization and strategic government partnerships against calls for transparency, accountability, and strict ethical guardrails.
OpenAI’s relationship with Washington has evolved rapidly over the past two years, as generative AI tools such as ChatGPT have become embedded in education, enterprise software, cybersecurity workflows, and increasingly, defense-related applications. Meanwhile, Anthropic has secured multibillion-dollar backing from major technology firms and positioned itself as a counterweight focused on alignment research and risk mitigation.
The dispute between the two firms illustrates how AI governance is becoming a competitive differentiator. For policymakers, the challenge lies in ensuring national security readiness without sidelining safety concerns. For AI developers, the stakes are both financial and reputational.
By acknowledging the rushed nature of the deal and committing to clearer contractual boundaries, Altman appears to be attempting damage control while preserving OpenAI’s foothold in the federal market. Whether those revisions will restore public trust—or intensify scrutiny of military AI partnerships—remains to be seen.
What is certain is that as artificial intelligence systems become more deeply woven into defense infrastructure, transparency and enforceable safeguards will move from optional commitments to central pillars of industry credibility.