The AI Regulation Race: Will States Outpace the Feds?
In 2024, California pushed hard on AI regulation, passing 18 new laws that touch everything from data privacy to adult content. Yet it also famously vetoed a headline-grabbing bill that would have required extensive compliance from AI companies. This raised a pressing question: Now that California’s made it clear it’s serious about regulating AI—but willing to veto bills that seem too strict—might we see states racing ahead of federal lawmakers to shape the AI rulebook?
Welcome to today's edition of DK Review, where we'll dig into the interplay between federal and state-level AI policymaking. Mark Weatherford, a former state CISO for California and Colorado and Deputy Under Secretary for Cybersecurity under President Barack Obama, recently made a compelling case for the states' role in leading AI legislation. Let's break down the key points from his recent TechCrunch interview and think about how these evolving regulations might reshape the way AI developers and tech leaders operate.
California’s Latest AI Moves
California has long been a testing ground for new tech-centered legislation—remember the early privacy laws and pioneering emissions regulations. In AI, the state is once again ahead of the curve. According to Weatherford:
“California does kind of push the envelope … People don’t like to hear this, but they do a lot of the heavy lifting.”
He's referring to how California lawmakers do the research, consult experts, and propose detailed frameworks that often become models for other states. That's what we saw in 2024: Governor Gavin Newsom signed numerous narrowly targeted AI laws (TechCrunch counts 18 in total) covering specific areas like generative media and data usage. But in a surprise twist, Newsom vetoed an all-encompassing bill (SB 1047) that would have mandated extensive pre-deployment safety testing, citing potential overreach that might stifle innovation.
The big takeaway: politicians are prepared to regulate certain aspects of AI strongly, but remain cautious about unilaterally slowing progress. This balancing act—protecting consumers while fostering innovation—may well define the future of state-level AI laws.
Why States May Take the Lead
AI regulation on a national scale seems far off: Congress has plenty of competing legislative agendas and a notoriously slow process. But bills proliferate far more freely in the states, as Weatherford points out; over 400 AI-related proposals surfaced in state legislatures last year alone.
While tech companies see value in a single (federal) set of rules, many lawmakers in states like California, New York, and Colorado believe they can't wait around for Congress to pass sweeping policies. Instead, they're rolling out narrower legislation, focusing on hot-button issues like:

- Data Privacy and Security: Think consumer consent, encryption standards, and stiff penalties for data misuse.
- Generative Content and Deepfakes: From political ads to revenge porn, new laws aim to criminalize malicious content creation.
- Transparency: Calls for labeling AI-generated output, so consumers know if a video or image is synthetic (a toy labeling sketch follows this list).
- Workforce Impact: Proposed guidelines on how AI tools can (or cannot) be used in hiring and firing decisions.
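To make that transparency item concrete, here's a minimal sketch of what a machine-readable disclosure label might look like. The field names and the model name are my own illustration, not language from any statute or standard; real provenance efforts like C2PA go much further, with cryptographic signing and edit history.

```python
import json
from datetime import datetime, timezone

def make_disclosure_label(model_name: str, content_type: str) -> dict:
    """Build a minimal, machine-readable 'AI-generated' disclosure record."""
    return {
        "ai_generated": True,
        "generator": model_name,          # hypothetical model identifier
        "content_type": content_type,
        "created_at": datetime.now(timezone.utc).isoformat(),
        # A real provenance standard (e.g., C2PA) would also carry a
        # cryptographic signature; this sketch omits that entirely.
    }

label = make_disclosure_label("example-image-model-v1", "image")
print(json.dumps(label, indent=2))
```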
It’s an opportunity for states to set a legislative pace, especially if the federal government is slow to act. As Weatherford says, “When the safety and security of society is at stake, as it is with AI, there’s definitely a place for more regulation.”
Harmonization: The Big Challenge
Having multiple state-level AI regulations can spell confusion for companies. If you operate in ten different states, each one may impose its own, subtly different compliance requirements. Weatherford uses the term "harmonization" to describe the challenge:
“How do we harmonize all these rules so that we don’t have … everybody doing their own thing, which drives companies crazy?”
But political realities don’t favor a national blueprint. Each state has different backgrounds, priorities, and political leanings. In states with smaller tech industries, legislators might not see AI as an immediate concern. Meanwhile, states like California can’t ignore it—local companies are building the large language models, recommendation algorithms, and synthetic data platforms driving the new AI boom.
In practice, states may “borrow” language from each other, gradually converging on de facto standards. Yet a perfect regulatory symphony across 50 states seems unlikely.
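For engineering teams, the practical consequence of a patchwork is usually a "comply with the strictest rule" pattern. Here's a minimal sketch, using entirely made-up state requirements, of how you might encode per-state rules and derive the one policy that satisfies all of them:

```python
# Hypothetical per-state requirements; real obligations vary and change often.
STATE_RULES = {
    "CA": {"label_ai_content": True, "max_retention_days": 365, "consent_required": True},
    "NY": {"label_ai_content": True, "max_retention_days": 180, "consent_required": False},
    "CO": {"label_ai_content": False, "max_retention_days": 730, "consent_required": True},
}

def strictest_policy(states: list[str]) -> dict:
    """Combine rules so one policy satisfies every listed state:
    booleans are OR-ed (if any state requires it, do it) and
    retention windows take the shortest limit."""
    rules = [STATE_RULES[s] for s in states]
    return {
        "label_ai_content": any(r["label_ai_content"] for r in rules),
        "max_retention_days": min(r["max_retention_days"] for r in rules),
        "consent_required": any(r["consent_required"] for r in rules),
    }

print(strictest_policy(["CA", "NY", "CO"]))
# -> {'label_ai_content': True, 'max_retention_days': 180, 'consent_required': True}
```

The point is less the code than the pattern: without federal harmonization, the strictest state effectively sets the floor for any company that operates everywhere.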
The Federal Government Isn’t Hands-Off—Just Slow
State-led AI regulation doesn't mean federal agencies are on the sidelines. Last December, after a year of deliberation, a House task force released a lengthy report on AI. Meanwhile, the White House can still act via executive orders, frameworks from NIST (National Institute of Standards and Technology), and interagency coordination. The question: will these moves come soon enough to rein in urgent concerns like data leaks, deepfakes, disinformation, or algorithmic bias?
At the same time, the new administration is leaning toward “less regulation.” Some officials (and key tech donors) want to protect innovation from a “heavy-handed” approach. Tech companies warn that overly prescriptive rules will impede the iterative, rapid approach to AI development. On the other hand, state and federal authorities share a bipartisan interest in cybersecurity and consumer protection—two angles that cross the aisle. That might accelerate at least some national standards.
Spotlight on Synthetic Data and Privacy
Weatherford, now VP of policy and standards at Gretel, sees synthetic data as a key enabler of safe AI development. Synthetic data is artificially generated data that stands in for real personally identifiable information (PII) when training AI models. The promise is that you can develop robust AI systems without exposing or collecting raw personal data. This approach, if validated, could ease compliance with privacy laws:
- Privacy by Design: Synthetic data can mask sensitive info, reducing risk of personal data exposure.
- Bias Mitigation: Properly curated synthetic data might reduce harmful biases, though it can still reflect original data flaws if not carefully audited.
- Regulatory Check: If regulators demand strict controls on data usage, synthetic data sets can fulfill those mandates more easily than reliance on raw data sources.
Yet it’s no silver bullet—critics argue synthetically generated data can preserve or even amplify biases. Gretel and other synthetic-data proponents, however, say they’ve implemented checks to ensure these biases don’t accumulate. Done right, they see it as a kind of “data refill” that solves the volume, privacy, and compliance trifecta.
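To make the concept tangible (and to be clear, this is a toy illustration, not Gretel's actual method), here's a minimal sketch that fits simple per-column statistics on a "real" table and then samples brand-new rows from those distributions, so no original record is ever copied:

```python
import random
import statistics

# Toy "real" dataset; in practice this would contain sensitive PII.
real_rows = [
    {"age": 34, "state": "CA", "income": 72000},
    {"age": 29, "state": "NY", "income": 65000},
    {"age": 41, "state": "CA", "income": 88000},
    {"age": 52, "state": "CO", "income": 91000},
]

def fit_column_models(rows):
    """Learn a trivial generative model: mean/stdev for numeric columns,
    empirical value lists for categorical ones."""
    models = {}
    for col in rows[0]:
        values = [r[col] for r in rows]
        if isinstance(values[0], (int, float)):
            models[col] = ("numeric", statistics.mean(values), statistics.stdev(values))
        else:
            models[col] = ("categorical", values)
    return models

def sample_synthetic(models, n):
    """Sample fresh rows from the fitted per-column distributions."""
    rows = []
    for _ in range(n):
        row = {}
        for col, model in models.items():
            if model[0] == "numeric":
                _, mu, sigma = model
                row[col] = round(random.gauss(mu, sigma))
            else:
                row[col] = random.choice(model[1])
        rows.append(row)
    return rows

models = fit_column_models(real_rows)
print(sample_synthetic(models, 3))
```

The toy version also shows why critics worry: sampling columns independently destroys correlations, and skewed source data yields skewed synthetic data. Production systems model joint distributions and audit their output for exactly this reason.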
AI Censorship: A New Battlefront?
Meanwhile, generative AI’s content policies—often labeled as “censorship” by critics—are attracting attention. Some policy experts believe that to protect society from AI-driven misinformation or harassment, certain guardrails are a must. Opponents, sometimes aligned with free-speech arguments, say heavy moderation threatens creativity and the open exchange of ideas.
Behind the political swirl, companies moderate their AI models partly to avoid legal liability—for instance, if an AI is used to generate defamatory or dangerous content. Expect more debates on exactly where the line should be drawn and whether states or federal agencies will have the final say. As Weatherford notes,
“When there is a perceived risk to society, it’s almost certain the government will take action.”
Finding the Right Balance
If you’re an AI builder or startup founder worried about a legislative patchwork, you’re not alone. With states forging ahead at different paces, here’s what you can do:
- Stay Informed: Track the top regulations in California, New York, Colorado, and emerging AI hubs, and follow any big changes in your home state.
- Invest in Compliance: As soon as you scale across state lines, you'll be juggling multiple policies. An early investment in privacy-by-design and compliance frameworks reduces headaches later.
- Contribute to Policy Discussions: Consider offering your expertise to local or state policymakers. The more constructive input they get from people actively developing these technologies, the more balanced the bills might be.
- Adopt Data Protection Strategies: Whether that's synthetic data, anonymization, or advanced encryption, building robust privacy strategies can help you maintain trust and avoid legal pitfalls (a small anonymization sketch follows this list).
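On that last point, here's a minimal sketch of one common technique, field-level pseudonymization, where PII columns are replaced with salted hashes before records reach an analytics or training pipeline. The field names and salt handling are illustrative assumptions, not a compliance-grade design:

```python
import hashlib
import os

# In production the salt would live in a secrets manager, never in code.
SALT = os.environ.get("PSEUDONYM_SALT", "dev-only-salt")

PII_FIELDS = {"name", "email", "phone"}  # assumed PII columns for this sketch

def pseudonymize(record: dict) -> dict:
    """Replace PII values with stable salted hashes so records can still be
    joined on the pseudonym, but the raw value is never stored downstream."""
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256((SALT + str(value)).encode()).hexdigest()
            out[key] = digest[:16]  # truncated for readability
        else:
            out[key] = value
    return out

print(pseudonymize({"name": "Jane Doe", "email": "jane@example.com", "state": "CA"}))
```

Pseudonymization is weaker than true anonymization (if the salt leaks, values can be re-identified by brute force), which is why layered strategies, synthetic data among them, keep coming up in these policy conversations.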
The ultimate goal is to harness the best aspects of AI innovation without sacrificing people’s safety, privacy, or well-being. Striking that balance is easier said than done. But if AI creators and legislators can collaborate thoughtfully—even if it’s messy at first—they might arrive at a set of rules that safeguard the public while fueling future breakthroughs.
What’s Next?
Looking ahead, 2025 could be another year of big changes. Weatherford anticipates that California might refine and pass a new marquee AI regulation—this time striking a more realistic balance between oversight and innovation. Other states will watch closely. By the time any robust federal policy is enacted, half the country may already be aligned, at least in spirit, with California’s approach.
As a Tech Lead and someone who’s been in the industry for 20 years, I’m used to seeing these waves of tech policy swirl around new paradigms—cloud computing, social media, mobility, you name it. AI just happens to be bigger, more transformative, and more politically charged than most. The good news? We can put lessons learned from past tech evolutions into practice. The next few years are about bridging the knowledge gap between lawmakers, the public, and the innovators racing to deploy new AI solutions. With enough collaboration, we can find a sweet spot that fosters responsible AI.
Final Thoughts
AI is moving faster than regulators—and maybe that’s unavoidable. Yet states are stepping in to fill the vacuum, for better or worse, ensuring some guardrails. Whether you’re building the next big AI platform or just a curious observer, it’s vital to keep a finger on the legislative pulse. California’s push is only the beginning. The real question: can states harmonize enough to avoid stifling genuine innovation or fracturing the market with inconsistent rules?
I’ll be discussing this topic further on my DK Review podcast, so be sure to tune in if you want a deeper dive. As someone who’s juggled architecture decisions, stakeholder expectations, and engineering best practices—yep, like having a thousand arms—my advice is to stay nimble. Embrace data protection, transparency, and open communication with regulators. That’s how we ensure AI’s future is both bright and responsibly guided.
Until next time, thanks for reading. Stay informed, stay curious, and keep building responsibly.
Written by Denys (DK), Tech Lead & Blogger at DK Review. Proudly discussing AI, tech leadership, and the future of innovation.