California State Senator Scott Wiener of San Francisco has introduced the bill (https://twitter.com/Scott_Wiener/status/1755650108287578585, https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047).
This bill would regulate AI systems in various ways.
Will it become law by the time the legislative session ends?
This resolves YES if SB 1047 becomes law in substantively similar form. I wouldn't count it if it is gutted so thoroughly that it essentially does nothing, but I will mostly let lesser changes stand.
This resolves NO if SB 1047 does not become law by the deadline and the session is over. If the session runs late, the deadline will be extended.
Note that this applies only to SB 1047; if another similar bill is introduced and passes, that would not count (it seemed too messy to worry about edge cases).
@Dynd I don't think I can beat you in our league now. Say hello to your old friends in Masters for me.
@Joshua "AI springs" sounds like clever wordplay from someone well-versed in AI history. (Context: Famously, there have been "AI winters," so some peopleβmostly just academicsβrefer to the period since ~1996 or since ~2012 as an "AI spring.")
Press release from Newsom here: https://www.gov.ca.gov/2024/09/29/governor-newsom-announces-new-initiatives-to-advance-safe-and-responsible-ai-protect-californians/
Based on the way he frames the press release and veto message, it seems like he's trying to maximize his standing with both the pro- and anti-SB 1047 camps. The opponents ofc get the veto they wanted; the proponents get nice rhetoric that risks are serious and need to be addressed by an even stronger bill. Accordingly, he builds the press release around an image of himself as an AI safety champion, rather than leading with the most newsworthy announcement, which is the veto itself. (Presumably, he also authorized the member of his orbit who leaked the veto decision to the WSJ and was deliberate about the framing they used.)
His choice of three advisors announced in the press release seems like a similar move to placate as many groups as possible:
- Fei-Fei Li = for the VCs and industry types who opposed the bill
- Tino Cuéllar = for the mainstream/DC policy crowd, also perhaps AI safety people (who I think regard him well)
- Jennifer Tour Chayes = for the local academic establishment
This strategy has been around for more than a century; stroking the egos of interest groups is mundane for legislative bodies, even if it looks new to unfamiliar observers.
When the next bill comes around, the people with real power will again get what they want, some other trick will be deployed, and it will also feel new. People who have read books on the topic know this.
Yeah, now that I'm looking for it, here are various threads from supporters arguing that Newsom's stated reasoning is clearly disingenuous:
https://x.com/KelseyTuoc/status/1840494503142273056
https://x.com/GarrisonLovely/status/1840491831760675298
https://x.com/thegartsy/status/1840499944261558529
@JBar This doesn't seem accurate:
> the proponents get nice rhetoric that risks are serious and need to be addressed by an even stronger bill.
Newsom writes:
> While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions – so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.
In this quote he's not saying that a stronger bill is needed; he is saying a weaker bill is needed that doesn't regulate AI systems deployed in non-high-risk environments, doesn't regulate AI systems doing merely "basic functions" (as opposed to "critical decision-making"), and doesn't regulate AI systems that don't use sensitive data. He's explicitly saying the bill covers too much rather than too little.
@WilliamKiely Ah, I think we're talking about two different axes here. I agree he claims to want weaker/more targeted legislation on the dimension of model functions, as evidenced by your quote and others. I was referring to the "model size" axis, along which he claims to want to move in the stronger/more expansive direction.
To elaborate: rhetorically, it's notable that he offers the following as the first reason in his veto message (right above the sections you quoted):
> SB 1047 magnified the conversation about threats that could emerge from the deployment of AI. Key to the debate is whether the threshold for regulation should be based on the cost and number of computations needed to develop an AI model, or whether we should evaluate the system's actual risks regardless of these factors. This global discussion is occurring as the capabilities of AI continue to scale at an impressive pace. At the same time, the strategies and solutions for addressing the risk of catastrophic harm are rapidly evolving.
> By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047.
And I think this was a load-bearing element of his comms strategy around the veto. In addition to putting it first in his message, it appeared prominently in what I believe was the first story to break the veto news (and therefore the story most likely to bear the fingerprints of off-the-record conversations with Newsom's staff and allies), published by the WSJ with the following subtitle: "Governor seeks more encompassing rules than the bill opposed by OpenAI, Meta and supported by research scientists"
@JBar You're right: on one axis he calls for more expansive regulation, and on the other he calls for less.
Newsom has published his veto message here: https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf
@JBar He appeals to science to argue that he needs empirical evidence that AI isn't safe. Clearly he hasn't read "No Safe Defense, Not Even Science," but this goes beyond the usual pundit energy.
https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science
People spent months trying to explain to him how future uncertainty works, and it's now abundantly clear that he decided to go out of his way to ignore them.
@DylanMatthews Vetoing the bill for not going far enough? I'm highly skeptical that's the real reason.