WHY THIS MATTERS IN BRIEF
When AI goes rogue or harms people, we need to be able to “kill it” – but that is neither easy nor, so far, even doable.
A controversial bill aimed at enforcing safety standards for large Artificial Intelligence (AI) models has now passed the California State Assembly by a 45–11 vote. Following a 32–1 state Senate vote in May, SB-1047 now faces just one more procedural state Senate vote before heading to Governor Gavin Newsom’s desk.
As I’ve previously explained, SB-1047 requires AI model creators to implement an AI kill switch – something only a handful of companies have so far experimented with, and which they have found very difficult to design, let alone implement – that can be activated if a model starts introducing “novel threats to public safety and security,” especially if it’s acting “with limited human oversight, intervention, or supervision.”
Some have criticized the bill for focusing on outlandish risks from an imagined future AI rather than real, present-day harms of AI use cases like deepfakes or misinformation.
In announcing the legislative passage Wednesday, bill sponsor and state senator Scott Wiener cited support from AI industry luminaries such as Geoffrey Hinton and Yoshua Bengio – who both last year also signed a statement warning of a “risk of extinction” from fast-developing AI tech.
In a recently published editorial in Fortune magazine, Bengio said the bill “outlines a bare minimum for effective regulation of frontier AI models” and that its focus on large models – which cost over $100 million to train – will avoid any impacts on smaller startups.
“We cannot let corporations grade their own homework and simply put out nice-sounding assurances,” Bengio wrote. “We don’t accept this in other technologies such as pharmaceuticals, aerospace, and food safety. Why should AI be treated differently?”
But in a separate Fortune editorial from earlier this month, Stanford computer science professor and AI expert Fei-Fei Li argued that the “well-meaning” legislation will “have significant unintended consequences, not just for California but for the entire country.”
The bill’s imposition of liability on the original developer of any modified model will “force developers to pull back and act defensively,” Li argued. This will limit the open source sharing of AI weights and models, which will have a significant impact on academic research, she wrote.
A group of California business leaders sent an open letter Wednesday urging Newsom to veto the “fundamentally flawed” bill that improperly “regulates model development instead of misuse.” The bill would “introduce burdensome compliance costs” and “chill investment and innovation through regulatory ambiguity,” the group said.
If the Senate confirms the Assembly version as expected, Newsom will have until September 30 to decide whether to sign the bill into law. If he vetoes it, the legislature could override with a two-thirds vote in each chamber (a strong possibility given the overwhelming votes in favor of the bill).
At a UC Berkeley Symposium in May, Newsom said he worried that “if we over-regulate, if we overindulge, if we chase a shiny object, we could put ourselves in a perilous position.”
At the same time, Newsom said those over-regulation worries were balanced against concerns he was hearing from leaders in the AI industry.
“When you have the inventors of this technology, the godmothers and fathers, saying, ‘Help, you need to regulate us,’ that’s a very different environment,” he said at the symposium. “When they’re rushing to educate people, and they’re basically saying, ‘We don’t know, really, what we’ve done, but you’ve got to do something about it,’ that’s an interesting environment.”