California Gov. Gavin Newsom vetoed a bill on Sept. 29 that aimed to increase safety regulations on the AI industry. The bill targeted the most advanced “covered AI models” and sought to mitigate potential harm.
First introduced in February 2024 by State Sen. Scott Wiener, Senate Bill 1047 proposed to hold AI companies and developers legally responsible for any harm caused by their models. It required developers to report safety incidents to the Frontier Model Division, an oversight committee proposed by the act within the state Department of Technology.
The bill also mandated that developers conduct self- and third-party safety tests of their models prior to full development and submit annual audits that would have been exempt from disclosure under the Public Records Act. Compliance with the regulatory provisions would be reported to the attorney general.
The bill specifically targeted “covered AI models,” which, according to DLA Piper, are models “trained using a quantity of computing power greater than 10^26 integer or floating-point operations” or models that perform similarly to a “state-of-the-art” foundation model.
Newsom voiced his concern that lawmakers were targeting only large-scale, advanced AI models, questioning whether a model’s size and development cost mattered more than its actual uses. He held that the bill would have given the public “a false sense of security about controlling this fast-moving technology,” further claiming that smaller AI models pose just as much of a threat to user security as the larger models.
“While well-intentioned, [Senate Bill] 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement after vetoing the bill. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
Newsom clarified that his opposition to SB 1047 did not equate to opposition to AI regulation. Just a week before his recent veto, Newsom signed three different bills — two of which crack down on AI-generated sexual content, while the third focuses on online tools that help the public identify AI-generated content.
“To those who say there’s no problem here to solve, or that California does not have a role in regulating potential national security implications of this technology, I disagree,” Newsom said in his statement. “A California-only approach may well be warranted — especially absent federal action by Congress — but it must be based on empirical evidence and science.”
UCI is one of many schools in the U.S. that has adopted generative AI with its creation of ZotGPT, a play on OpenAI’s ChatGPT, built in connection with Google Gemini and Microsoft Copilot. The resource allows students to experiment with AI while benefiting from information security protections.
UC Institutional Information and IT Resources are labeled by protection levels with security controls that are meant to protect the integrity of information and documents. P1 has the minimum amount of controls, and P4 has the maximum amount of controls. ZotGPT supports up to P3, while OpenAI ChatGPT currently supports up to P1.
The bill gained considerable support within the film industry, with multiple actors, directors and producers signing an “Artists 4 Safe AI” letter urging Newsom to sign the bill in response to growing fears of “deepfakes” and AI projects encroaching on actors’ work.
“This bill is not about protecting artists — it’s about protecting everyone. Grave threats from AI used to be the stuff of science fiction, but not anymore,” the letter read.
Prominent supporters also included Tesla CEO Elon Musk and AI company Anthropic. A large group of employees from various AI companies, such as OpenAI and Meta, also urged Newsom to sign the bill. They explained that without strict regulations, AI models will continue to pose serious risks, such as “expanded access to biological weapons and cyber attacks on critical infrastructure,” according to Axios.
However, prominent tech companies and Democratic politicians, including San Francisco Mayor London Breed and former House Speaker Nancy Pelosi, opposed the bill, arguing that its broadness could stifle future AI innovation.
“California has the intellectual resources that understand the technology, respect the intellectual property and prioritize academia and entrepreneurship,” Pelosi said in an August statement. “The view of many of us in Congress is that SB 1047 is well-intentioned but ill informed. We must have legislation that is a model for the nation and the world.”
Newsom announced that, following the veto, his administration will continue working with experts and officials to develop new protections and limits on generative AI.
“I am committed to working with the Legislature, federal partners, technology experts, ethicists, and academia, to find the appropriate path forward, including legislation and regulation,” Newsom said in his statement. “Given the stakes — protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good — we must get this right.”
Grace Hefner is a News Intern for the fall 2024 quarter. She can be reached at ghefner@uci.edu.
Edited by Karen Wang and Jaheem Conley