As a fellow AI researcher, I have enormous respect for Dr. Fei-Fei Li's scientific contributions to our field. Nonetheless, I disagree with her recently published stance on California's SB 1047. I believe this bill represents a significant, light-touch, and measured first step in ensuring the safe development of frontier AI systems to protect the public.
Many experts in the field, including myself, agree that SB 1047 outlines a bare minimum for effective regulation of frontier AI models against foreseeable risks, and that its compliance requirements are light and intentionally not prescriptive. Instead, it relies on model developers to make self-assessments of risk and implement basic safety testing requirements. It also applies only to the largest AI models (those costing over $100 million to train), which ensures it will not hamper innovation among startups or smaller companies. Its requirements align closely with voluntary commitments many leading AI companies have already made (notably with the White House and at the Seoul AI Summit).
We cannot let companies grade their own homework and simply put out nice-sounding assurances. We do not accept this in other technologies such as pharmaceuticals, aerospace, and food safety. Why should AI be treated differently? It is important to move from voluntary to legally binding commitments in order to level the competitive playing field among companies. I expect this bill to bolster public confidence in AI development at a time when many are questioning whether companies are acting responsibly.
Critics of SB 1047 have asserted that the bill will punish developers in a manner that stifles innovation. This claim does not hold up to scrutiny. It is common sense for any sector building potentially dangerous products to be subject to regulation that ensures safety. That is what we do in everyday sectors and products, from cars to electrical appliances to home building codes. While listening to perspectives from industry is important, the answer cannot be to completely abandon a bill as targeted and measured as SB 1047. Instead, I am hopeful that, with additional key amendments, some of the main concerns from industry can be addressed while staying true to the spirit of the bill: protecting innovation and citizens.
Another particular concern for critics has been the potential impact of SB 1047 on the open-source development of cutting-edge AI. I have been a lifelong supporter of open source, but I do not view it as an end in itself that is always good regardless of the circumstances. Consider, for instance, the recent case of an open-source model that is being used at a massive scale to generate child pornography. This illegal activity is outside the developer's terms of use, but the model has now been released and we can never go back. With far more capable models being developed, we cannot wait for their open release before acting. For open-source models far more advanced than those that exist today, compliance with SB 1047 will not be a trivial box-checking exercise, like placing "illegal activity" outside the terms of service.
I also welcome the fact that the bill requires developers to retain the ability to quickly shut down their AI models, but only if those models are under their control. This exception was explicitly designed to make compliance feasible for open-source developers. Overall, finding policy solutions for highly capable open-source AI is a complex issue, but the threshold of risks versus benefits should be decided through a democratic process, not based on the whims of whichever AI company is most reckless or overconfident.
Dr. Li calls for a "moonshot mentality" in AI development. I agree deeply with this point. I also believe this AI moonshot requires rigorous safety protocols. We simply cannot hope that companies will prioritize safety when the incentives to prioritize profits are so immense. Like Dr. Li, I would also prefer to see strong AI safety regulations at the federal level. But Congress is gridlocked and federal agencies are constrained, which makes state action indispensable. In the past, California has led the way on green energy and consumer privacy, and it has a tremendous opportunity to lead again on AI. The choices we make on this issue now will have profound consequences for current and future generations.
SB 1047 is a positive and reasonable step toward advancing both safety and long-term innovation in the AI ecosystem, especially by incentivizing research and development in AI safety. This technically sound legislation, developed with leading AI and legal experts, is direly needed, and I hope California Governor Gavin Newsom and the legislature will support it.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.