US's First AI Safety Bill Vetoed
California Governor Gavin Newsom has vetoed an AI bill that was anticipated to set the stage for nationwide regulations in the industry.
The proposed regulations would have mandated that technology companies with advanced AI models conduct safety testing before making these models available to the public. Additionally, companies would have been required to disclose their safety protocols to prevent manipulation that could lead to harmful outcomes, such as compromising critical infrastructure.
In his veto message on Sunday, Governor Newsom acknowledged that while the proposal was “well-intentioned,” it focused too heavily on the “most expensive and large-scale” AI models, overlooking the risks posed by “smaller, specialized models.” Newsom further suggested that the legislation failed to consider the context in which an AI system is deployed, particularly regarding critical decision-making and the use of sensitive information.
“Instead, the bill applies stringent standards to even the most basic functions... I do not believe this is the best approach to protecting the public from real threats posed by the technology,” the governor stated. He emphasized that, while he agrees the industry requires regulation, there is a need for more “informed” initiatives grounded in “empirical trajectory analysis of AI systems and capabilities.”
“Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself… Given the stakes – protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good – we must get this right,” he concluded.
As the governor of California, Newsom plays a key role in the evolving landscape of AI regulation. His office reports that the state hosts 32 of the world’s “50 leading AI companies.”
The bill's author, state Senator Scott Wiener, described the veto as “a setback” for those who “believe in oversight of massive corporations that are making critical decisions” impacting public safety. He expressed his commitment to continue working on the legislation.
Responses to the bill were mixed among tech companies, researchers, and lawmakers. Some saw it as a potential step towards national regulations for the industry, while others warned it could hinder AI development. Former US House Speaker Nancy Pelosi characterized the proposal as “well-intentioned but ill-informed.”
Meanwhile, many employees from leading AI firms, including OpenAI, Anthropic, and Google’s DeepMind, supported the bill because it included whistleblower protections for individuals who raise concerns about the risks associated with the AI models their companies are creating.
Max Fischer for TROIB News