How the Senate’s ban on state AI regulation imperils internet access


The Trump administration’s tax bill, also called its “big, beautiful bill,” rounds up key pieces of the president’s agenda and includes a rule that, if passed, would prevent states from enforcing their own AI legislation for 10 years. After an initial budget hiccup, Republican senators amended the rule to comply with budgetary requirements by making federal broadband funding contingent on states not enforcing AI regulations.

Here’s why that matters. 

How the moratorium works

Broadband Equity, Access, and Deployment (BEAD) is a $42.45 billion program run by the National Telecommunications and Information Administration (NTIA) that helps states build infrastructure to expand high-speed internet access. The Senate rule makes all of that money, plus $500 million in new funding, contingent on states backing off their own AI laws.

The issue is twofold: if passed, the rule would both effectively prohibit states from enforcing AI legislation and put often critical funding for internet access at risk.

And it wouldn’t only impact in-progress legislation. Laws that states have already passed would stay on the books, but would effectively be rendered useless, unless states were willing to put their broadband funding on the line.

Also: What ‘OpenAI for Government’ means for US AI policy

“States like New York, Texas, and Utah would all have to choose between protecting their residents against faulty AI and billions in funding to help expand broadband access across their state,” Jonathan Walter, a senior policy adviser at The Leadership Conference’s Center for Civil Rights and Technology, told ZDNET. 

Earlier this month, the New York State Senate passed the RAISE Act, a first-of-its-kind bill that would require larger AI companies to publish safety, security, and risk evaluations, disclose breaches and other incidents, and allow the state’s attorney general to bring civil penalties against companies when they don’t comply. 

Walter added that the vagueness of the ban’s language could block states’ oversight of non-AI-powered automation as well, including “insurance algorithms, autonomous vehicle systems, and models that determine how much residents pay for their utilities.” 

Federal AI policy remains unclear

The administration is due to release its AI policy on July 22. In the meantime, the country is effectively flying blind, which has prompted several states to introduce their own AI bills. Even under the Biden administration, which took some steps to regulate AI, states were already introducing AI legislation as the technology evolved rapidly into the unknown. 

“The main issue here is that there are already real, concrete harms from AI, and this legislation is going to take the brakes away from states without replacing it with anything at all,” said Chas Ballew, CEO of AI agent provider Conveyor and a former Pentagon regulatory attorney.

By preventing states from enforcing their own AI policies while federal regulation remains a big question mark, the Trump administration opens the door for AI companies to accelerate without any checks or balances, creating what Ballew called a “dangerous regulatory vacuum” that would give companies “a decade-long free pass to deploy potentially harmful AI systems without oversight.”

Given how rapidly generative AI has evolved just since ChatGPT’s launch in 2022, a decade is eons in technological terms. 

What’s more, President Donald Trump’s second term thus far doesn’t suggest AI safety is a priority for federal regulation. Since January, the Trump administration has overridden safety initiatives and testing partnerships put in place by the Biden administration, shrunk the US AI Safety Institute and renamed it the “pro-innovation, pro-science” US Center for AI Standards and Innovation, and cut funding for AI research.

“Even if President Trump met his own deadline for a comprehensive AI policy, it’s unlikely that it will seriously address harms from faulty and discriminatory AI systems,” Walter said. AI systems used for HR tech, hiring, and financial applications like determining mortgage rates have been shown to act with bias towards marginalized groups and can display racism. 

Also: AI leaders must take a tight grip on regulatory, geopolitical, and interpersonal concerns

Understandably, AI companies have expressed a preference for federal regulation over individual state laws, which would make maintaining compliant models and products easier than trying to abide by patchwork legislation. But in some cases, states may need to set their own regulations for AI, even with a federal foundation in place.

“The differences between states with respect to AI regulation reflect the different approaches states have to the underlying issues, like employment law, consumer protection laws, privacy laws, and civil rights,” Ballew pointed out. “AI regulation needs to be incorporated into these existing legal schemes.”

He added that it’s wise for states to have “a diversity of regulatory schemes,” as it “promotes accountability, because state and local officials are closest to the people affected by these laws.”

Also: Anthropic’s new AI models for classified info are already in use by US gov

The principles of federalism, like the Tenth Amendment, which reserves to states “the powers not delegated to the United States by the Constitution, nor prohibited by it to the States,” and the idea of states as “laboratories of democracy,” are based on the idea that self-governance is good and that too much top-down governance is counterproductive.

The bill passed the House of Representatives with the moratorium included, to the displeasure of some Republican representatives who would prefer their states have a say in how they protect residents’ rights, jobs, and privacy in the face of rapidly expanding AI. It’s now awaiting a vote in the Senate; as of Thursday, the Senate parliamentarian had asked Republicans to rewrite the moratorium to clarify that it won’t impact the existing $42.45 billion in broadband funding.

Internet access for states on the line 

How would losing BEAD funding impact states if the moratorium passes as written? 

“This ban on state and local AI laws would allow NTIA to deobligate the $42.45 billion already obligated BEAD funding to states,” Walter explained. “When NTIA reobligates the funding, the new AI Moratorium and Master Service Agreement conditions would apply. This creates a backdoor to apply new AI requirements to the entire $42.45 billion program, not just the new $500 million.”

“This will likely mean fewer people will end up getting access to high-quality, affordable broadband,” he concluded. 

ZDNET will update this story as the Senate debate over the moratorium language continues. 




Original Source: zdnet
