Why States Can't Wait: The Case for AI Regulation in 2026
As the Trump administration moves to block state action, California and New York show us why local leadership matters more than ever.
On December 11, 2025, President Trump signed an executive order designed to stop states from regulating artificial intelligence. The order threatens to withhold federal broadband funding from states with “onerous” AI laws, directs the Department of Justice to sue states over their regulations, and aims to establish what the White House calls a “minimally burdensome national standard.”
The message is clear: This administration wants AI companies free from state oversight.
But here’s what the administration won’t tell you: there is no federal AI regulation. None. While the executive order promises a future national framework, it offers no protections in the present, only the dismantling of the protections that states are building.
This matters because AI harms aren’t theoretical. They’re happening now, to us, in ways that demand immediate response.
The Harms Are Already Here
Consider what happened in Louisiana just this month. A 13-year-old girl spent an entire school day being mocked as AI-generated nude images of her circulated on social media. She begged school officials for help. They doubted the images existed. When she finally confronted a boy sharing the images on the school bus, she was expelled for 89 days. He faced no immediate school discipline.
This isn’t a hypothetical risk that requires years of study. This is a child whose education, mental health, and sense of security were destroyed by technology that, just five years ago, could not produce images anywhere near this realistic.
Or consider the employment algorithms already making hiring decisions across the country. Studies show these systems consistently discriminate against women and people of color, replicating historical biases at scale while giving employers plausible deniability. “The AI made the decision,” they can say, even as entire communities are systematically excluded from opportunities.
The federal government has offered no response to either problem. Meanwhile, states are stepping in.
California and New York Lead Where Washington Won’t
In late 2025, both California and New York passed landmark AI safety legislation despite intense industry pressure and explicit threats from the Trump administration.
California’s Transparency in Frontier Artificial Intelligence Act (TFAIA), signed into law by Governor Gavin Newsom on September 29, and New York’s Responsible AI Safety and Education (RAISE) Act, enacted by Governor Kathy Hochul on December 19, represent the strongest AI developer transparency requirements in the country. Both laws require companies developing the most advanced AI systems to:
Publish detailed safety protocols explaining how they assess and mitigate catastrophic risks
Report critical safety incidents to state authorities within 72 hours (in New York) or 15 days (in California)
Implement cybersecurity protections against unauthorized access
Protect whistleblowers who raise safety concerns
These aren’t “fear-based” regulations, as Trump administration officials have claimed. They’re baseline transparency requirements—the minimum we should expect from companies building systems that could reshape society.
The laws specifically target “frontier models,” the most advanced AI systems being developed by companies like OpenAI, Anthropic, and Google DeepMind. As the Brookings Institution explains, California defines these as foundation models trained using more than 10^26 floating-point operations. In New York, covered entities must also have over $500 million in revenue. These thresholds ensure the laws focus on the systems with the greatest potential for both benefit and harm.
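To make those coverage tests concrete, here is a minimal sketch in Python of how a compliance team might check the two thresholds described above. The function names and the reduction of coverage to two numeric comparisons are illustrative assumptions, not anything specified in the statutes, which turn on detailed definitions rather than these numbers alone.

    # Illustrative sketch only: reduces each law's coverage test to the
    # two numeric thresholds described above. Actual statutory coverage
    # depends on detailed definitions of "frontier model" and "developer".

    CA_COMPUTE_THRESHOLD_FLOPS = 1e26       # California TFAIA: training compute
    NY_REVENUE_THRESHOLD_USD = 500_000_000  # New York RAISE Act: annual revenue

    def covered_by_california(training_flops: float) -> bool:
        # Hypothetical check: model trained with more than 10^26 FLOPs.
        return training_flops > CA_COMPUTE_THRESHOLD_FLOPS

    def covered_by_new_york(annual_revenue_usd: float) -> bool:
        # Hypothetical check: developer with over $500 million in revenue.
        return annual_revenue_usd > NY_REVENUE_THRESHOLD_USD

    # Example: a developer training a 3e26-FLOP model with $750M in revenue
    # would, on this simplified reading, trigger both laws' obligations.
    print(covered_by_california(3e26))       # True
    print(covered_by_new_york(750_000_000))  # True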
Significantly, both OpenAI and Anthropic have expressed support for these state laws, with OpenAI stating that “the combination of the Empire State with the Golden State is a big step in the right direction.” To a degree, even the major AI companies recognize that having two large economies establish similar baseline standards is preferable to an unregulated free-for-all.
The states have also acted where it matters most urgently: protecting children. Both California and New York now regulate AI companions, chatbots designed to simulate human relationships. As Reuters reported, these laws require AI companions to warn users they’re not talking to a real person and to implement protocols for detecting expressions of self-harm or suicidal ideation. These protections take effect in early 2026.
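As a rough illustration of what those two requirements could look like in code, here is a minimal Python sketch: a wrapper that discloses up front that the user is talking to an AI, and screens messages for possible expressions of self-harm before producing a normal reply. Every name here is hypothetical, and the keyword screen is a placeholder; real systems would rely on trained classifiers and clinically reviewed escalation protocols.

    # Minimal, illustrative sketch of the two companion-bot requirements
    # described above. All names are hypothetical; a production system
    # would use trained classifiers, not keyword matching.

    AI_DISCLOSURE = "Reminder: you are talking to an AI, not a real person."

    # Placeholder signals; a real detector would be far more sophisticated.
    SELF_HARM_SIGNALS = ("hurt myself", "end my life", "suicide")

    def crisis_response() -> str:
        # Hypothetical escalation hook: surface crisis resources
        # instead of a normal companion reply.
        return ("It sounds like you may be going through something serious. "
                "Please consider reaching out to a crisis line or someone you trust.")

    def companion_reply(message: str) -> str:
        # Stand-in for the model's normal response generation.
        return "..."

    def respond(message: str, session_start: bool) -> str:
        parts = [AI_DISCLOSURE] if session_start else []
        if any(signal in message.lower() for signal in SELF_HARM_SIGNALS):
            parts.append(crisis_response())  # detection protocol triggered
        else:
            parts.append(companion_reply(message))
        return "\n".join(parts)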
Both laws include enforcement mechanisms, though they differ in severity. California allows penalties of up to $1 million per violation. New York’s approach evolved significantly during the legislative process: the original RAISE Act proposed fines of up to $10 million for first offenses and $30 million for subsequent violations, but after negotiations with industry, the final version reduced these to $1 million and $3 million respectively. This compromise reflects the delicate balance states are striking between establishing accountability and remaining attractive to AI companies.
Why Federal Preemption Is a False Solution
The Trump executive order frames state AI laws as creating an inefficient “patchwork” that will cause American companies to lose the “AI race” to China. This argument is both legally questionable and factually misleading.
First, the legal reality: executive orders cannot override state law. Only Congress or the courts can do that. The order directs federal agencies to study preemption, threaten funding, and prepare litigation, but none of these actions eliminates existing state laws. Companies remain legally obligated to comply with state regulations, and states retain the authority to enforce them.
Second, the patchwork argument misunderstands how regulatory federalism works. When California, the world’s fifth-largest economy, establishes a standard, it often becomes a de facto national standard because companies choose to comply rather than maintain separate systems for different markets. This is exactly what’s happening with TFAIA and the RAISE Act: the laws are so similar that companies can create one set of protocols that satisfies both.
Far from creating inefficiency, aligned state standards can actually drive better outcomes than waiting for federal action. The auto industry offers a clear parallel: California’s vehicle emission standards, repeatedly challenged by industry and various presidential administrations, ultimately pushed the entire country toward cleaner cars because manufacturers found it easier to meet one high standard than maintain separate production lines.
Lastly, the “China will win” framing is a red herring. China’s AI development is shaped by very different priorities like mass surveillance, social control, and state-directed industrial policy. The suggestion that America will lose technological leadership because we require basic transparency from AI developers is not serious policy analysis. It’s industry lobbying dressed up as national security.
What’s really at stake is whether the public has any visibility into how the most consequential technology of our generation is being developed. The Trump administration’s answer is no: trust the companies, don’t ask questions, and don’t slow them down with basic requirements to document their safety practices.
The Vacuum Congress Has Created
The hardest truth about this moment is that Congress has had ample opportunity to act and has chosen not to.
Bipartisan legislation has been proposed. Expert testimony has been heard. The AI Safety Institute has been established (though the Trump administration is already moving to reshape it). But comprehensive federal AI legislation remains stalled, victim to the same dynamics that have paralyzed tech regulation for decades: intensive industry lobbying, the complexity of the technology, and fundamental disagreements about whether AI should be regulated at all.
Into this vacuum, states have stepped. Not because state regulation is ideal (a cohesive federal framework would be preferable), but because someone has to act. When young girls are having their images weaponized, when hiring algorithms are encoding discrimination, when AI systems are making consequential decisions about credit, housing, and healthcare, “wait and see” is not a policy. It’s an abdication.
The Trump executive order doesn’t fill this vacuum. It simply aims to ensure the vacuum persists.
What 2026 Must Bring
Other states should follow California and New York’s example in 2026, for three reasons:
First, legitimacy. If only two states have acted, the Trump administration can frame this as coastal elites imposing their values. But if a dozen states, across different regions and political leanings, establish similar baseline standards, that narrative collapses. Broad state action demonstrates genuine public demand for AI oversight.
Second, legal resilience. The administration’s preemption strategy relies on arguing that state AI laws create an unconstitutional burden on interstate commerce. This argument becomes much harder to sustain when multiple states have adopted similar frameworks. Courts have consistently upheld states’ authority to regulate within their borders, particularly on matters of consumer protection and public safety.
Third, practical necessity. AI harms are not evenly distributed. They hit hardest in communities that are already vulnerable: children, job seekers without college degrees, people with limited English proficiency, and communities of color. These communities cannot afford to wait for a federal framework that may never come, or that arrives only after industry has written its preferred terms.
States should prioritize three areas for 2026 legislation:
Frontier AI transparency: Following the California and New York model, require the developers of the most advanced AI systems to publish safety protocols, report incidents, and protect whistleblowers. These requirements don’t prevent innovation—they ensure that when innovation goes wrong, we know about it quickly enough to respond.
AI in high-stakes decisions: Require disclosure when AI is used in consequential decisions about employment, housing, credit, education, or criminal justice. People have a right to know when an algorithm is shaping their life chances, and to contest decisions that are unfair or discriminatory.
Child safety protections: Regulate AI companions, deepfakes, and other AI applications that pose particular risks to minors. The Louisiana case shows why we can’t wait on this: the technology to weaponize children’s images already exists and is being used.
The Real Innovation Question
There’s a broader question beneath this debate: what does “AI leadership” actually mean?
If it means developing the most powerful systems fastest, with the fewest constraints, then state regulation might indeed slow us down. We might be second to deploy certain capabilities.
But if AI leadership means developing systems that actually work for people, that don’t encode discrimination or enable harassment, that respect basic privacy and safety, then transparency and accountability are features, not “bugs.” They’re what separates systems we should want from systems we should fear.
The developers who succeed in that world won’t be the ones who fought transparency. They’ll be the ones who embraced it as essential to building technology people can trust.
A Closing Note on Trust
The Trump administration’s executive order rests on a premise: that AI companies, left largely unregulated, will develop these systems responsibly because it’s in their business interest to do so.
This premise should make us all uncomfortable. Not because every AI company is reckless (quite a few are trying hard to do “the right thing”), but because systems this consequential shouldn’t depend on corporate goodwill.
We don’t trust pharmaceutical companies to self-regulate without FDA (or what’s currently left of it) oversight. We don’t trust banks to self-regulate without financial regulators. Not because these companies are always evil (that’s a conversation for another day), but because when mistakes happen in complex systems, the stakes are too high to rely on voluntary compliance.
AI systems are as consequential as drugs, financial products, or the food supply. They deserve at least as much oversight.
Organizations like ControlAI, a nonprofit working to reduce risks from artificial intelligence through policy development and legislative advocacy, have been pushing for exactly this kind of baseline accountability. Their work authoring draft bills and influencing policymakers reflects a growing recognition that AI safety cannot be treated as optional.
California and New York have recognized this. Other states should follow in 2026, not in spite of federal opposition, but because of it. When Washington refuses to act, state leadership isn’t optional. It’s essential.
The question isn’t whether states will regulate AI. Given the urgency of the harms and the vacuum of federal action, they must. The question is whether they’ll do so in the coming months, while there’s still time to shape how these systems develop, or whether they’ll wait until the harms are so widespread that regulation becomes primarily reactive rather than preventative.
California and New York have shown the path forward. The rest of the country should take it.