Artificial intelligence is so mainstream, it’s inescapable. Using computer systems to perform tasks previously done by humans is both exciting and concerning for many, as we grapple with the promises of innovation and the challenges that innovation can create.
States have waded into the fray with legislation meant to control AI’s use and development.
What has emerged is a growing patchwork of laws that tech companies, which operate across state lines, need to navigate.
Congress quietly stepped in with a little-noticed provision in the “Big, Beautiful Bill Act” last week. The House-passed tax, energy, and border security bill imposes a moratorium on all state-level AI legislation: for the next decade, states would be prohibited from enforcing laws regulating artificial intelligence models and “automated decision systems.”
If signed into law, this bill would affect AI laws in nearly every state.
AI State Laws
According to the National Conference of State Legislatures, 48 states and Puerto Rico have introduced AI legislation this year, and 26 states have adopted or enacted more than 75 new measures. (Some estimates place the number of introduced bills above 1,000.) Examples of bills meant to restrict the use of AI include:
- Criminalizing the fraudulent use of deepfakes
- Prohibiting AI-generated deception that might influence elections
- Requiring developers and deployers of high-risk AI systems to avoid algorithmic discrimination
- Criminalizing AI-generated child pornography
- Prohibiting AI voice and image cloning of citizens
- Expanding existing harassment and stalking laws to prohibit the use of AI-powered robots to stalk or harass individuals
- Prohibiting medical insurance denials by AI
- Setting standards for the operation of autonomous vehicles
Not all laws are restrictive. Seeing the immense opportunity for public safety, education, and even government efficiency, some states have passed legislation encouraging AI use in their jurisdictions:
- Florida provided grants to school districts to implement AI in support of students and teachers.
- Hawaii required the University of Hawaii to develop a statewide wildfire forecast system using AI.
- Indiana created an AI task force.
- Tennessee required public higher education institutions, local education boards, and public charter schools to adopt AI policies for students, teachers, faculty, and staff.
- The U.S. Virgin Islands established a real-time, centralized crime data system within the territory’s police department.
- Washington appropriated funds for the city of Seattle to lease space where nonprofit and academic institutions can incubate technology startups, especially those focused on AI, and develop and teach curricula that prepare workers to use AI as a business resource.
The Pros and Cons of Congress’s Efforts
Conservatives support the moratorium as a way to protect innovation against the surge of restrictive state AI laws. House Energy and Commerce Committee Chair Brett Guthrie, who introduced the moratorium language adopted into the bill, explained that the United States must “make sure that we win the battle against China” and the key to that is to ensure America does not “regulate like Europe or California regulates,” because “that puts us in a position where we’re not competitive.”
In a lengthy analysis, Kevin Frazier and Adam Thierer advocated for a time-limited federal moratorium, like the one Congress adopted, coupled with national preemption.
One thousand artificial intelligence (AI) bills have already been introduced just over four months into 2025. This means almost eight new AI-related bills have been introduced every day this year… Though state legislators may have the best of intentions in pushing their respective bills, they risk disrupting the development of a transformative technology.
If this growing patchwork of parochial regulatory policies takes root, it could undermine the nation’s efforts to stay at the cutting edge of AI innovation at a critical moment when competition with China for global AI supremacy is intensifying.
Liberals oppose this moratorium, as The Verge reports:
Democrats have slammed the provision’s inclusion in the reconciliation bill, with Rep. Jan Schakowsky (D-IL) saying the 10-year ban will ‘allow AI companies to ignore consumer privacy protections, let deepfakes spread, and allow companies to profile and deceive consumers using AI.’ In a statement published to X, Sen. Ed Markey (D-MA) said the proposal ‘will lead to a Dark Age for the environment, our children, and marginalized communities.’
In the Deseret News, Texas A&M University professor Valerie M. Hudson pulled no punches.
As co-editor and contributor to ‘The Oxford Handbook on AI Governance,’ I condemn this mischief in the strongest possible terms. This is not simply irresponsible but actually prevents state governments from acting responsibly to safeguard its citizens from the increasingly visible downsides of unfettered AI deployment. In that sense, the insertion of this one sentence is patently malicious.
“Malicious” is a strong word. Members seeking to slow down states intend to give Congress time to craft a thoughtful framework for the federal and state governments to adopt. There is precedent for this approach. The “light touch” adopted in the 1990s, just as the internet began to grow, provides a great example. As Frazier and Thierer noted,
This is in line with the national framework Congress and the Clinton administration crafted for the internet and digital marketplace and speech in the 1990s. Notably, that framework specified that the internet ‘should be governed by consistent principles across state, national, and international borders that lead to predictable results regardless of the jurisdiction in which a particular buyer or seller resides.’ Likewise, it called for governments ‘to establish a predictable and simple legal environment.’
The internet blossomed as a result. What would the information superhighway have become had each state set its own rules of the road?
What’s Next
This moratorium was tucked into the must-pass budget reconciliation bill that now sits in the U.S. Senate awaiting tweaks, nips, and tucks. It could be stripped out because it is too sweeping. Or senators could join their House colleagues in pausing state AI laws while the federal government crafts its own regulatory framework.
States have not waited for federal lawmakers to tackle AI regulation. This bill would wrest that power out of the hands of state legislators. As one proponent opined:
Crucially, the measure is constitutional. The Interstate Commerce Clause — a provision in the U.S. Constitution — authorizes federal regulation of economic activity that crosses state lines. Congress can invoke it to argue that only the federal government can set rules for industries operating across state lines, which includes nearly all major tech companies and the AI startup world.
We can only wait to see whether the Senate adopts this moratorium, revises it, or removes it from the “Big, Beautiful Bill.” However, if there were ever a good opportunity for Congress to step up and ensure that well-intentioned state laws don’t hamper innovation, it is now.