Over the summer, Congress considered and ultimately dropped a moratorium on state-level AI laws. The proposal exposed a constitutional struggle: Should AI be governed by a patchwork of state rules or by a centralized federal framework?
This tension took center stage in a recent hearing by the House Judiciary Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet. Lawmakers and experts debated where the authority to regulate emerging AI technologies should lie, how far federalism extends, and how to build a system that supports innovation while protecting citizens.
THE CASE FOR A FEDERAL FRAMEWORK
Several lawmakers believed from the outset that the federal government needed to be involved. Chairman Darrell Issa (R-CA) warned that state-by-state regulation creates a “patchwork of indecision” that makes it harder for developers to innovate and compete. Rep. Kevin Kiley (R-CA) echoed the point, calling the idea of state-led development rules “a fairly fantastical notion.”
- Adam Thierer, representing the R Street Institute, recalled the 1990s, when policies let “technology be born free as opposed to…in a regulatory cage.” He argued that a clear national framework today would keep small companies from being drowned in compliance costs.
- Kevin Frazier, visiting from the University of Texas School of Law, grounded his position in the Constitution, pointing to the Commerce Clause and the Intellectual Property Clause as evidence that Congress holds this responsibility. He cautioned that if too many “legal gears” are thrown into the process, “we won’t get the AI we deserve.”
- David Bray of the Stimson Center called for a flexible, light-touch framework that can adapt as AI evolves while still protecting consumers. He stressed the need to adjust “the applications without limiting innovations on the technology.”
Together, they argued that only Congress can provide the infrastructure and clarity to prevent chaos and to stop any one state from dictating national policy by default.
THE CASE FOR A STATE-LED PATCHWORK
Others argued that the states should remain the driving force of the framework. Ranking Member Hank Johnson (D-GA) emphasized the role of common law as the “…glue that holds the law together,” with lawsuits already shaping the rules of generative AI. Disrupting the system now emerging, he argued, could chill ongoing court cases.
Prof. Neil Richards from Washington University Law warned that sweeping federal preemption would be a “grievous error.” He argued that because AI is a “cluster of related and changing technologies,” no single federal law could cover it with care. States, he noted, have historically pioneered consumer protections and built public trust.
The patchwork approach, in this view, reflects federalism at work, with the states as laboratories of democracy testing approaches in real time.
A MEASURED PATH FORWARD
A middle ground emerged from the conversation: let the federal government oversee development, while states regulate applications of AI.
- Federal role: national training, safety, and data privacy standards so developers don’t have to build 50 versions of the same model.
- State role: regulation of AI use in healthcare, education, and the judiciary, where context and local accountability matter.
This hybrid model gives innovators clarity while keeping protections close to the people.
“It is one of the happy incidents of the federal system that a single courageous State may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.”
JUSTICE LOUIS BRANDEIS
TAKEAWAYS FROM THE PROCESS
As Rep. Lou Correa (D-CA) observed, regulation often “is second to the fact that we just can’t move on this stuff at the federal level.” With AI advancing rapidly, delay risks confusion at home and ceding leadership abroad.
The debate over AI regulation is also a debate over federalism’s limits. Who decides is a constitutional question still very much alive. Hearings like this showcase the importance of civic education: understanding not only the technology, but the framework of government that shapes how it will be managed.
Encouragingly, this hearing stood out for its tone. Lawmakers and experts brought opposing views into the same room and engaged with respect and seriousness. In an era too often defined by division, it was a reminder that democracy works best when tough questions are met with genuine debate.
ENGAGE IN THE AI REGULATION CONVERSATION
- Read up on the debate and broaden your understanding with The Policy Circle’s new Why Tech Policy Matters Now Insight. Dive into the U.S. Constitution Brief to review the relationship between federal, state, and local governments.
- Get ahead on the process AI legislation will have to go through with the U.S. House of Representatives and U.S. Senate Briefs.
- Look up your state’s AI regulations.
  - What has passed and what is in progress?
  - Would a federal framework for AI development impact your state’s actions?
- Talk about AI regulation within your community.
  - Host a Circle conversation on the above materials. Let us know what your group discovers together through respectful dialogue.
FOLLOW UP QUESTIONS
Use these questions to start a conversation in your areas of influence!
- Should AI regulation be led at the federal level, or do the states provide better oversight?
- Do state-level rules better reflect local needs, or do they create unnecessary complexity compared to federal rules?
- Could a shared federal–state framework strike the right balance, or would it create more confusion?