The AI safety bill Big Tech hates has passed the California legislature


Yann LeCun, Meta’s chief AI scientist, has warned that liability for mass casualties caused by AI will destroy the industry. | Chesnot/Getty Images

If I build a car that is far more dangerous than other cars, don’t do any safety testing, release it, and it ultimately leads to people getting killed, I will probably be held liable and have to pay damages, if not criminal penalties.

If I build a search engine that (unlike Google) has as the first result for “how can I commit a mass murder” detailed instructions on how best to carry out a spree killing, and someone uses my search engine and follows the instructions, I likely won’t be held liable, thanks largely to Section 230 of the Communications Decency Act of 1996.

So here’s a question: Is an AI assistant more like a car, where we can expect manufacturers to do safety testing or be liable if they get people killed? Or is it more like a search engine?

This is one of the questions animating the current raging discourse in tech over California’s SB 1047, newly passed legislation that mandates safety testing for companies that spend more than $100 million on training a “frontier model” in AI — like the in-progress GPT-5. Otherwise, they would be liable if their AI system leads to a “mass casualty event” or more than $500 million in damages in a single incident or set of closely linked incidents.

The general concept that AI developers should be liable for the harms of the technology they are creating is overwhelmingly popular with the American public. It also earned endorsements from Geoffrey Hinton and Yoshua Bengio, two of the most-cited AI researchers in the world. Even Elon Musk chimed in with support Monday evening, saying that even though “this is a tough call and will make some people upset,” the state should pass the bill, regulating AI just as “we regulate any product/technology that is a potential risk to the public.”

The amended version of the bill, which was less stringent than its previous iteration, passed the state assembly Wednesday 48-16. Amendments included removing criminal penalties for perjury, establishing a new threshold to protect startups’ ability to adjust open-sourced AI models, and narrowing (but not eliminating) pre-harm enforcement. For it to become state law, it will next need a signature from Gov. Gavin Newsom.

“SB 1047 — our AI safety bill — just passed off the Assembly floor,” wrote State Senator Scott Wiener on X. “I’m proud of the diverse coalition behind this bill — a coalition that deeply believes in both innovation & safety. AI has so much promise to make the world a better place.”

Would it destroy the AI industry to hold it liable?

Criticism of the bill from much of the tech world, though, has been fierce.

“Regulating basic technology will put an end to innovation,” Meta’s chief AI scientist, Yann LeCun, wrote in an X post denouncing 1047. He shared other posts declaring that “it’s likely to destroy California’s fantastic history of technological innovation” and wondered aloud, “Does SB-1047, up for a vote by the California Assembly, spell the end of the Californian technology industry?” The CEO of HuggingFace, a leader in the AI open source community, called the bill a “huge blow to both CA and US innovation.”

These kinds of apocalyptic comments leave me wondering … did we read the same bill?

To be clear, to the extent 1047 imposes unnecessary burdens on tech companies, I do consider that an extremely bad outcome, even though the burdens will fall only on companies doing $100 million training runs, which only the biggest firms can afford. It’s entirely possible — and we’ve seen it in other industries — for regulatory compliance to eat up a disproportionate share of people’s time and energy, discourage doing anything different or complicated, and focus energy on demonstrating compliance rather than where it’s needed most.

I don’t think the safety requirements in 1047 are unnecessarily onerous, but that’s because I agree with the half of machine learning researchers who believe that powerful AI systems have a high chance of being catastrophically dangerous. If I agreed with the half of machine learning researchers who dismiss such risks, I’d find 1047 to be a pointless burden, and I’d be pretty firmly opposed.


And to be clear, while the outlandish claims about 1047 don’t make sense, there are some reasonable worries. If you build an extremely powerful AI, fine-tune it to not help with mass murders, but then release the model open source so people can undo the fine-tuning and then use it for mass murders, under 1047’s formulation of responsibility you would still be liable for the damage done.

This would certainly discourage companies from publicly releasing models once they’re powerful enough to cause mass casualty events, or even once their creators think they might be powerful enough to cause mass casualty events.

The open source community is understandably worried that big companies will just decide the legally safest option is to never release anything. While I think any model that’s actually powerful enough to cause mass casualty events probably shouldn’t be released, it would certainly be a loss to the world (and to the cause of making AI systems safe) if models that had no such capacities were bogged down out of excess legalistic caution.

The claims that 1047 will be the end of the tech industry in California are guaranteed to age poorly, and they don’t even make very much sense on their face. Many of the posts decrying the bill seem to assume that under existing US law, you’re not liable if you build a dangerous AI that causes a mass casualty event. But you probably are already.

“If you don’t take reasonable precautions against enabling other people to cause mass harm, by eg failing to install reasonable safeguards in your dangerous products, you do have a ton of liability exposure!” Yale law professor Ketan Ramakrishnan responded to one such post by AI researcher Andrew Ng.

1047 lays out more clearly what would constitute reasonable precautions, but it’s not inventing some new concept of liability law. Even if it doesn’t pass, companies should certainly expect to be sued if their AI assistants cause mass casualty events or hundreds of millions of dollars in damages.

Do you really believe your AI models are safe?

The other baffling thing about LeCun and Ng’s advocacy here is that both have said that AI systems are actually completely safe and there are absolutely no grounds for worry about mass casualty scenarios in the first place.

“The reason I say that I don’t worry about AI turning evil is the same reason I don’t worry about overpopulation on Mars,” Ng famously said. LeCun has said that one of his major objections to 1047 is that it’s meant to address sci-fi risks.

I certainly don’t want the California state government to spend its time addressing sci-fi risks, not when the state has very real problems. But if critics are right that AI safety worries are nonsense, then the mass casualty scenarios won’t happen, and in 10 years we’ll all feel silly for worrying AI could cause mass casualty events at all. It might be very embarrassing for the authors of the bill, but it won’t result in the death of all innovation in the state of California.

So what’s driving the intense opposition? I think it’s that the bill has become a litmus test for precisely this question: whether AI might be dangerous and deserves to be regulated accordingly.

SB 1047 does not actually require that much, but it is fundamentally premised on the notion that AI systems will potentially pose catastrophic dangers.

AI researchers are almost comically divided over whether that fundamental premise is correct. Many serious, well-regarded people with major contributions in the field say there’s no chance of catastrophe. Many other serious, well-regarded people with major contributions in the field say the chance is quite high.

Bengio, Hinton, and LeCun have been called the three godfathers of AI, and they are now emblematic of the industry’s profound split over whether to take catastrophic AI risks seriously. SB 1047 takes them seriously. That’s either its greatest strength or its greatest mistake. It’s not shocking that LeCun, firmly on the skeptic side, takes the “mistake” perspective, while Bengio and Hinton welcome the bill.

I’ve covered plenty of scientific controversies, and I’ve never encountered one with so little consensus on its core question: whether to expect truly powerful AI systems to be possible soon — and, if they are possible, to be dangerous.

Surveys repeatedly find the field divided nearly in half. With each new AI advance, senior leaders in the industry seem to double down on their existing positions rather than change their minds.

But whether or not you think powerful AI systems might be dangerous, there is a great deal at stake. Getting our policy response right requires getting better at measuring what AIs can do, and better understanding which scenarios for harm are most worth a policy response. I have a great deal of respect for the researchers trying to answer those questions — and a great deal of frustration with the ones who treat them as already closed.

Update, August 28, 7:45 pm ET: This story, originally published June 19, has been updated to reflect the passing of SB 1047 in the California state legislature.
