A bipartisan group of attorneys general pushes back against Washington and Big Tech, insisting states must keep power to write their own AI rules.
Today’s biggest global tech-politics story isn’t about a new gadget or viral app – it’s about who gets to control artificial intelligence in the real world.
In the United States, 35 state attorneys general plus Washington, D.C., have issued a rare, loud, bipartisan warning to Congress: do not strip states of the right to regulate AI.
Their message lands right as Washington debates a defence bill that could quietly override state-level AI laws, and just days after the Trump administration rolled out its huge “Genesis Mission” plan to supercharge federal AI research.
---
What exactly are the states angry about?
At the centre of the fight is a proposal in the National Defense Authorization Act (NDAA) – a must-pass yearly defence bill. Buried inside is language that would block individual states from making or enforcing many of their own AI regulations.
Supporters of the proposal – including major tech firms – say AI is too important and too global to be governed by a patchwork of 50 different rules. They want one national approach, ideally lighter-touch, that would make it easier to build and ship AI products everywhere in the US at once.
The attorneys general see it the opposite way. Led by New York’s Letitia James, and joined by Republicans and Democrats from states like Utah, North Carolina and New Hampshire, they argue that states have always stepped in first when new technologies start harming people – from data privacy to addictive apps.
Their letter to Congress makes one simple point:
if Washington still hasn’t passed a serious AI law, it shouldn’t stop states from trying to protect their own residents.
---
Why are states so focused on AI right now?
In the last two years, AI tools have moved from labs into everyday life – chatbots, deepfakes, automated hiring tools, facial recognition, and more. With that has come a wave of real-world problems:
- scams and fraud using cloned voices and faces,
- AI-generated sexual content and deepfakes,
- political misinformation ahead of elections,
- biased algorithms affecting jobs, housing and credit.
Because Congress has struggled to agree on a national law, individual states have started building their own rules.
California has drafted some of the toughest AI safety and transparency rules in the world, due to take effect from 2026.
New York is debating its RAISE Act, designed to clamp down on harmful AI systems and protect consumers from discriminatory algorithms.
Tech companies, however, are nervous. They warn that 50 different AI laws could make it harder to innovate, launch products, or even know what’s legal where. Industry heavyweights like OpenAI, Google and Meta have been pushing hard for a single national standard that would pre-empt state rules.
---
The bigger backdrop: a global race to regulate – or not
This US fight is happening as the European Union moves in the opposite direction – trying to simplify its AI rules without abandoning them, through its “Digital Omnibus” reforms and delays to parts of its AI Act.
In Europe, regulators are softening some timelines after heavy lobbying from Big Tech, but they’re not abandoning the idea of strict oversight. Meanwhile, in Washington, powerful voices are arguing for less restriction, especially if it helps the US keep an edge in the AI arms race.
The result is a split-screen world:
- Europe: “We’ll regulate AI – but we’ll try to make it easier to comply.”
- US federal government: “We want rapid AI growth and fewer roadblocks.”
- US states: “We don’t want to be gagged while AI harms our residents.”
---
What happens next?
Politically, this is a real test of who holds power over AI in America:
- If Congress keeps the pre-emption language in the defence bill, many state AI laws could be frozen or killed before they properly take effect.
- If lawmakers back down under pressure from the attorneys general, states will remain free to experiment – meaning stricter rules in places like California and New York, and looser ones elsewhere.
Behind the scenes, you can expect intense lobbying over the next few days from both sides:
- Tech companies pushing to avoid what they see as regulatory chaos.
- Consumer advocates and state officials arguing that real-world harms need real-world guardrails, now.
One sign of how sensitive this all is: the US Senate already voted 99–1 earlier this year against a previous attempt to shut down state AI laws – a huge signal that many senators still want states to have a say.
---
Why this story matters beyond the US
Even if you don’t live in America, the outcome of this battle will influence how AI is built and used worldwide. The US is home to most of the biggest AI companies. If states win, those companies may have to design systems that meet tougher safety and transparency standards in key markets – standards that could spread globally over time.
If, instead, federal lawmakers succeed in blocking state rules and keeping regulation minimal, AI development may move even faster – but with fewer safety brakes, and more responsibility pushed onto users, workers and local communities to cope with the consequences.
Either way, today’s showdown makes one thing clear:
the fight over AI is no longer just about cool tech – it’s about power, rights, and who gets to decide the rules for our digital future.