On Sunday, California Governor Gavin Newsom vetoed Senate Bill 1047, a bill that aimed to implement the most extensive AI regulations in the United States. The bill, seen as a model for national AI legislation, sought to establish sweeping oversight over the booming artificial intelligence industry in California.
The veto sparked mixed reactions. AI advocates and tech companies welcomed the move, citing concerns that strict regulations could stifle innovation and competitiveness in California's tech sector. Supporters of the bill, including privacy and ethics groups, expressed disappointment, arguing that strong regulation is necessary to prevent misuse of AI and protect consumer rights.
Gov. Newsom cited the need for further study before enacting such broad legislation, emphasizing the importance of balancing innovation with public safety and privacy concerns. He expressed interest in developing more comprehensive AI regulations but argued that the proposed bill might have unintended consequences, potentially hindering the growth of AI in California.
In his veto message, Gov. Newsom said: "By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047—at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good."
"Let me be clear—I agree with the author—we cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable. I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself."
California Senator Scott Wiener, a co-author of the bill, criticized Newsom's veto decision. "This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress's continuing paralysis around regulating the tech industry in any meaningful way," Wiener wrote in a post on X/Twitter.
"One size does not fit all in AI regulation—even where that size is large. The crux of Newsom's justification for his veto is that the AI bill was focused on size of the AI system, and not on the potential risk of its use," said Myriah Jaworski, Member, Data Privacy & Cybersecurity, at Clark Hill Law. "There is truth in this assessment. Globally, including most recently with the EU AI Act and Colorado's AI Act, we see legislation that meaningfully grapples with the risk of the AI system before assigning a regulatory requirement to it—be it around disclosure, testing/assessments, post-deployment monitoring, or other safeguards. Newsom's veto appears to be a clear indication that he wants to see a risk-based regime in future California AI proposals."
"While I like the idea of a mandatory 'kill switch' and related oversight, my big question is about enforcement," said Kip Boyle, vCISO at Cyber Risk Opportunities LLC. "The threshold for AI models covered by the law would have been those requiring over $100 million to develop. It's likely we'll see lots of AI model development at well under that amount in the years ahead, so why start with such a high bar?"
Big business in Silicon Valley lobbied hard against the bill, saying it would limit innovation and further advances in AI and related products and services.
"SB 1047 would threaten that growth, slow the pace of innovation, and lead California's world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere," OpenAI's Chief Strategy Officer Jason Kwon wrote in a letter sent last month to Wiener.
"Big tech breathed a sigh of relief this weekend, and the veto decision underscores a critical gap in AI governance," said Violet Sullivan, AVP, Cyber Solutions, at Crum & Forster. "The absence of clear boundaries leaves consumers vulnerable to unchecked AI advancements. Status quo is without well-defined regulations governing how artificial intelligence (AI) is developed, deployed, and used."
"Yes, several states in the U.S. have taken steps to regulate AI in specific areas," Sullivan continued, "but there isn't yet a comprehensive, uniform framework like what California's SB 1047 proposed. The existing state-by-state AI laws tend to focus on narrower aspects of AI, such as biometric data privacy or automated decision-making systems, rather than comprehensive oversight of the technology as a whole."
The veto leaves a significant gap in the state's regulatory framework for AI, delaying what many saw as a potential blueprint for national standards. The conversation around AI governance will likely continue, with advocates pushing for safeguards and industry players advocating for less restrictive measures. As AI continues to grow, both California and national lawmakers will face ongoing pressure to develop regulations that protect citizens without hampering innovation.
"AI is, in a lot of ways, still in its infancy. And, California is a great example of how regulators are trying to balance protecting against the potential negative aspects of the technology while also allowing for innovation at such a crucial time in the technology's development," said Jordan L. Fischer, Founding Partner at Fischer Law, LLC.
"This veto brings a mixed bag of critical thinking. While I understand Governor Newsom's position to avoid 'repeating the mistake of passing poor legislation' out of fear, Senator Wiener makes an excellent point," said Kimberly Haywood, Principal CEO at Nomad Cyber Concepts and Adjunct Cybersecurity Professor at Collin College.
"States can't afford to wait for the White House or Congress to intervene. The stakes are too high. This new evolution of AI and ML is outpacing our ability to fully comprehend its implications. It's well known by security leaders that our adversaries are working 24/7 to design more enhanced AI tools for malicious intent.
"I believe it's essential for states to implement regulatory guidelines, but they must be careful and avoid impulsive or biased decisions that focus too heavily on 'large-scale' ML model developments, potentially overlooking smaller, possibly more dangerous developments. It's like constructing a high-rise building with attention to height while ignoring the foundation. The strength of both is necessary for the structure to stand successfully."
In August of this year, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) took a pioneering step in the realm of artificial intelligence and cybersecurity by appointing its first Chief Artificial Intelligence Officer, Lisa Einstein.
Colorado and Utah have enacted laws tailored to address how AI could perpetuate bias in employment and health-care decisions, as well as other AI-related consumer protection concerns.
Newsom did recently sign more than a dozen other AI bills into law, including one to crack down on the spread of deepfakes during elections. Another protects actors against their likenesses being replicated by AI without their consent.
Read the full text of SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or check out this summary.
[RELATED: The Emerging Role of the Chief AI Officer]