The Trump administration’s tax bill, also called its “big, beautiful bill” and facing a vote today, includes a rule that would prevent states from enforcing their own AI legislation for five years and would withhold up to $500 million in AI infrastructure funding from states that don’t comply.
Over the weekend, senators also added exemptions for state laws targeting unfair or deceptive practices and child sexual abuse material (CSAM). The initial version of the rule, which banned states from enforcing AI regulation for 10 years and tied broadband internet funding to states’ compliance, did not include those exemptions.
How the moratorium works
If passed, the rule would bar states from enforcing AI legislation for five years and simultaneously put AI funding for states in limbo. It wouldn’t affect only in-progress legislation; laws that states have already passed would stay on the books but would be effectively unenforceable, unless states were willing to put their AI funding on the line.
Also: What ‘OpenAI for Government’ means for US AI policy
In practice, this would create a patchwork imbalance across the country: Some states would have thorough legislation but no funding to advance AI safely, while others would have no regulation but plenty of funding to keep up in the race.
“State and local governments should have the right to protect their residents against harmful technology and hold the companies responsible to account,” said Jonathan Walter, a senior policy adviser at The Leadership Conference’s Center for Civil Rights and Technology.
Federal AI policy remains unclear
The administration is due to release its AI policy on July 22. In the meantime, the country is flying blind, which has prompted several states to introduce their own AI bills. Under the Biden administration, which took some steps to regulate AI, states were already introducing AI legislation as the technology rapidly evolved.
Walter added that the vagueness of the ban’s language could block states’ oversight of non-AI-powered automation as well, including “insurance algorithms, autonomous vehicle systems, and models that determine how much residents pay for their utilities.”
“The main issue here is that there are already real, concrete harms from AI, and this legislation is going to take the brakes away from states without replacing it with anything at all,” said Chas Ballew, CEO of AI agent provider Conveyor and a former Pentagon regulatory attorney.
By preventing states from enforcing their own AI policies while federal regulation remains a big question mark, the Trump administration opens the door for AI companies to accelerate without any checks or balances, creating what Ballew called a “dangerous regulatory vacuum” that would give companies “a decade-long free pass to deploy potentially harmful AI systems without oversight.”
Given how rapidly generative AI has evolved just since ChatGPT’s launch in 2022, a decade is eons in technological terms.
President Trump’s second term thus far doesn’t suggest AI safety is a priority for federal regulation. Since January, the Trump administration has overridden safety initiatives and testing partnerships put in place by the Biden administration, shrunk the US AI Safety Institute and renamed it the “pro-innovation, pro-science” US Center for AI Standards and Innovation, and cut funding for AI research.
Also: AI leaders must take a tight grip on regulatory, geopolitical, and interpersonal concerns
“Even if President Trump met his own deadline for a comprehensive AI policy, it’s unlikely that it will seriously address harms from faulty and discriminatory AI systems,” Walter said. AI systems used in HR tech, hiring, and financial applications like determining mortgage rates have been shown to exhibit bias against marginalized groups, including racial bias.
Understandably, AI companies have expressed a preference for federal regulation over individual state laws, since a single national standard would make maintaining compliant models and products easier than abiding by patchwork legislation. But in some cases, states may need to set their own regulations for AI, even with a federal foundation in place.
“The differences between states with respect to AI regulation reflect the different approaches states have to the underlying issues, like employment law, consumer protection laws, privacy laws, and civil rights,” Ballew pointed out. “AI regulation needs to be incorporated into these existing legal schemes.”
He added that it’s wise for states to have “a diversity of regulatory schemes,” as it “promotes accountability, because state and local officials are closest to the people affected by these laws.”
Also: Anthropic’s new AI models for classified info are already in use by US gov
The bill passed the House of Representatives with the moratorium included, to the displeasure of some Republican representatives who would prefer their states have a say in how they protect their rights, jobs, and privacy in the face of rapidly expanding AI. It’s now awaiting a vote in the Senate; as of Thursday, the Senate parliamentarian had asked Republicans to rewrite the moratorium to clarify that it won’t impact the existing $42.45 billion in broadband funding.
Previous proposals withheld internet funds
Broadband Equity, Access, and Deployment (BEAD) is a $42-billion program run by the National Telecommunications and Information Administration (NTIA) that helps states build infrastructure to expand high-speed internet access. Before it was revised, the Senate rule would have made all of that money, plus $500 million in new funding, contingent on states backing off their own AI laws.