California Governor Gavin Newsom has vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models.
Senate Bill 1047 would have established safeguards and policies creating a regulatory framework for the burgeoning AI industry.
Why he vetoed: Newsom said the bill could have a chilling effect on the industry, warning that its rigid requirements risked harming California's homegrown AI companies.
- Newsom also said in his veto message that the regulatory framework SB 1047 would have established could have given the public a false sense of security about controlling AI.
What they’re saying: “While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions – so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
- Sen. Scott Wiener (D–San Francisco), who authored SB 1047, said the veto is a setback for people who believe in oversight of massive corporations.
- “The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing,” Wiener said in a statement. “While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public.”