California Governor Vetoes AI Safety Legislation Following Industry Opposition
California Governor Gavin Newsom vetoed Senate Bill 1047 (SB 1047), a proposed AI safety bill, on Sunday, eliciting a range of reactions from lawmakers, tech leaders, and advocacy groups.
The bill, introduced by Democratic State Senator Scott Wiener, sought to mandate safety testing for advanced AI models to avert "catastrophic harm" before their public release and to hold developers accountable for any damage their systems might cause. It specifically targeted models costing more than $100 million to develop or requiring substantial computing power, and proposed a state body to oversee the development of "frontier models" that would surpass current AI capabilities.
In his veto, Governor Newsom argued that the bill imposed uniform standards on all AI systems regardless of the environments in which they operate or their varying risk levels. In a letter to the state Senate, Newsom stressed the importance of an empirical, science-based approach to AI regulation, noting that he has enlisted leading experts on generative AI to help the state formulate effective safety measures.
The tech industry generally favored the veto. Chamber of Progress, a tech coalition, commended the decision, asserting that California's tech economy thrives on competition and openness. Major AI firms such as Google, Meta, and OpenAI also opposed the bill, cautioning that it could stifle innovation and diminish both the state's and the nation’s global competitiveness in AI development.
Supporters of the bill, including Senator Wiener, voiced their disappointment over the governor's decision, asserting that it leaves powerful AI developers without oversight and "makes California less safe." Wiener criticized the AI industry's voluntary safety commitments as often lacking enforceability and effectiveness.
Advocates for AI safety, including Tesla CEO Elon Musk, have underscored the importance of regulation for responsible AI development. Conversely, some AI experts have aligned with Newsom's perspective, calling for a balanced, evidence-based approach to regulation. Fei-Fei Li, co-director of Stanford's Institute for Human-Centered Artificial Intelligence, echoed the governor’s call for careful regulation that mitigates risks while fostering innovation.
Camille Lefevre contributed to this report for TROIB News