Photo: Digital Watch Observatory
Washington, D.C. — A controversial provision buried within President Donald Trump’s sweeping domestic policy bill has triggered widespread concern across the legal, tech, and policy communities. The measure, backed by Senate Republicans, would prevent individual states from enforcing any artificial intelligence-related regulations for the next decade, effectively stripping local governments of their ability to respond to the fast-moving risks posed by AI.
As the use of artificial intelligence rapidly expands into health care, hiring, education, law enforcement, and even personal relationships, critics argue the proposal would leave the public vulnerable to digital manipulation, deepfakes, algorithmic discrimination, and privacy invasions — all without any meaningful legal recourse.
The moratorium, as written, would nullify state-level AI laws by tying enforcement bans to eligibility for federal infrastructure funds — including broadband expansion. Legal experts say this approach not only undermines local autonomy but also incentivizes states to abandon regulatory oversight in exchange for much-needed federal support.
“This provision is extremely broad and dangerous. It would wipe out existing protections and tie our hands for the next decade,” said Jeff Jackson, North Carolina’s Attorney General and a former member of Congress.
Over the past few years, several states — including California, Illinois, and New York — have enacted narrowly tailored legislation addressing deepfakes in political campaigns, AI bias in hiring, and unauthorized facial cloning. These regulations were designed to prevent digital impersonation, election misinformation, and unethical corporate practices.
If the federal moratorium is enacted, these laws would likely become unenforceable, despite widespread bipartisan support at the state level.
“The laws on the books today are basic consumer protection. It’s things like making it illegal to digitally clone your face or voice without consent,” Jackson said. “This isn’t about stifling innovation — it’s about protecting people.”
In an unusual show of unity, 40 attorneys general from both parties signed a letter to Congress warning against the provision. They argue that Congress has consistently failed to act on meaningful tech legislation — from online privacy to social media reform — and is now asking Americans to trust it again on AI with no track record of delivering.
“Congress has failed to regulate the internet, social media, or privacy,” said Jackson. “To suggest they’ll suddenly get AI right, while simultaneously banning state-level action, is disingenuous at best.”
Although President Trump recently signed the Take It Down Act, which criminalizes non-consensual explicit imagery, including AI-generated images, critics say it's a narrow exception that doesn't reflect a willingness to pass comprehensive, forward-looking AI regulation.
Jackson, who served in the House from 2023 to 2024, recalled multiple narrowly focused AI bills that enjoyed wide support but were blocked by leadership before ever reaching the floor.
“I saw dozens of AI-related bills introduced, and not one of them passed,” he said. “Republican leaders made it clear they weren’t going to allow any AI regulation through. That’s why this moratorium is especially troubling — it effectively locks in that inaction.”
Proponents of the moratorium, including some Silicon Valley leaders, argue that a unified national approach is needed to avoid a patchwork of conflicting state laws. They also warn that over-regulation could stifle innovation and allow China to leap ahead in AI development.
Jackson acknowledged the concern but added that the proposed 10-year freeze goes too far.
“There’s a valid argument about regulatory overreach,” he said. “But this isn’t that. This is a hard stop — no regulation, no guardrails, no protection — for ten years. That’s unacceptable.”
Experts broadly agree that AI's societal impact over the next decade could be transformative, and potentially harmful if the technology is misused. Should the moratorium pass, bad actors could exploit AI, from autonomous weapons to misinformation campaigns, without facing state-level legal consequences.
“This isn’t just about deepfakes or hiring tools,” said Jackson. “We’re talking about an era where misinformation becomes hyper-personalized and indistinguishable from reality — and we’d have no legal tools to fight it.”
As AI seeps deeper into every aspect of modern life, from workplaces to elections, the debate over who should regulate it — and how — is becoming more urgent. With no federal AI legislation on the horizon, critics argue that eliminating state-level safeguards amounts to regulatory negligence.
Unless Congress acts quickly to revise or remove the moratorium provision, states could be left powerless to protect their citizens — while tech companies and lobbyists operate with unprecedented freedom for the next ten years.