The tension around California's most-watched artificial intelligence bill was not enough to stall its passage through the state legislature last week, sending what could be a watershed piece of legislation to the governor's desk at the tail end of the session.
Much attention has been paid to Senate Bill 1047, which would put guardrails around companies that spend more than USD100 million to train their AI models or USD10 million to modify them. The bill is one of a handful of AI governance bills the California State Legislature passed that now await action from Gov. Gavin Newsom, D-Calif., before 30 Sept.
"Innovation and safety can go hand in hand — and California is leading the way," state Sen. Scott Wiener, D-Calif., said in a statement when the SB 1047 passed the California State Assembly 28 Aug. He touted his bill as a "light touch, commonsense measure that codifies commitments that the largest AI companies have already voluntarily made."
SB 1047 has been decried by some technology companies as a stake in the heart of innovation, while some supporters say its approach may be the only way to rein those companies in before their products cause societal harm. The bill counts members of California's congressional delegation among its detractors and former OpenAI and Google AI researchers among its champions.
Debates over SB 1047 reflect the greater struggle over AI regulation in the U.S., which already has a patchwork look: the U.S. Congress has yet to pass any significant laws around the technology, leaving states to craft their own rules and requirements. That the California Legislature would support putting safety requirements on tech companies — many of which call the Golden State home — despite many objections from the business community may foreshadow the state's future as a bellwether on AI, regardless of whether SB 1047 is enacted.
"People are certainly paying attention, including at the federal level," Mozilla Senior Public Policy and Government Relations Analyst Joel Burke told the IAPP. "I don't know if it will serve a blueprint, but people are certainly looking at it and will draw lessons from both the bill and the response to it."
Provisions and debate
SB 1047 requires companies to screen their models for potential cybersecurity and infrastructure risks, as well as the capability to help develop chemical, biological, radiological or nuclear weaponry. AI labs are required to submit public statements outlining their safety testing practices. An AI board within the Government Operations Agency would be created to focus on writing auditing standards. An amendment allows the California attorney general to seek injunctive relief and sue in the event a company's AI causes a catastrophic event.
Other amendments included relaxing the language around how AI developers ensure their models are safe, requiring them to provide "reasonable care" instead of "reasonable assurance," and adding protections for open-source models. TechCrunch reports many of the changes came after Anthropic offered suggestions on how to improve the bill; not all were adopted, but enough were to garner the AI startup's cautious favor.
"We believe SB 1047, particularly after recent amendments, likely presents a feasible compliance burden for companies like ours, in light of the importance of averting catastrophic misuse," the company wrote to Newsom 21 Aug.
Concerns about catastrophic misuse are a driver for many of the bill's more outspoken supporters.
Yoshua Bengio, the scientific director of the Mila-Quebec AI Institute and an A.M. Turing Award recipient for his work on deep learning, compared SB 1047's safety requirements to those found in the White House's voluntary commitments with several high-profile AI companies. He said requiring companies to conduct "basic" safety testing is a floor many of those affected by the bill should be able to meet.
"We cannot let corporations grade their own homework and simply put out nice-sounding assurances. We don't accept this in other technologies such as pharmaceuticals, aerospace, and food safety," he wrote in a Fortune op-ed. "Why should AI be treated differently?"
But the amendments did not dissuade many of the bill's detractors, who argue its core issues remain. Mozilla's Burke said he believes harms related to AI should be addressed as they arise. But he said California should use its platform to create a positive ripple effect for AI legislation, rather than focusing purely on the harms.
Part of the challenge, he said, is that the open-source industry is still trying to understand itself. Requiring those developers to assure the state their products will not be put to specific harmful uses would create an undue burden. He also argued the bill does not consider the positive facets of open-source AI.
"Our presupposition coming into the debate is that we agree, we want AI to be safe," Burke said. "It's just that we have a different pathway for getting there. And we think history basically shows open-source AI has been a way to make AI safer and tackle issues like bias and real harms that are happening today."
Mozilla, along with Hugging Face and EleutherAI, told Wiener in a letter the bill still has vague definitions and lacks clarity while imposing computing thresholds on covered models that will become obsolete.
Other major detractors have included Meta and OpenAI, the latter of which said in a letter to Wiener that the federal government, not states, should be in charge of AI safety regulation, according to Bloomberg.
Wiener has been unmoved by those arguments.
"Instead of criticizing what the bill actually does, OpenAI argues this issue should be left to Congress. As I’ve stated repeatedly, I agree that ideally Congress would handle this," he said in response to the letter. "However, Congress has not done so, and we are skeptical Congress will do so. Under OpenAI's argument about Congress, California never would have passed its data privacy law, and given Congress's lack of action, Californians would have no protection whatsoever for their data."
Regardless of where the bill lands, Daniel Zhang, the senior manager for policy initiatives with Stanford's Institute for Human-Centered Artificial Intelligence, told the IAPP a Brussels or Sacramento effect was less likely with AI than with issues like privacy. While the EU AI Act has gotten attention for being a major binding piece of regulation that puts safety requirements on high-risk systems, Zhang indicated Singapore, Japan and China are taking more model-based approaches or offering guidelines as opposed to hard rules.
"There are so many parts to AI policy or AI governance in general,” he said. "There's so much we don't know about the models, and there's this information gap between what the developers know and what the policymakers know.”
Other AI bills on Gov. Newsom's desk
SB 1047 is at the head of the line of legislature-approved AI bills, but it is not the only one that stands to impact how AI is regulated in California and across the U.S.
Also awaiting Newsom's signature is a bill limiting the use of deepfakes within a certain timeframe around Election Day. Another, SB 1120, would require physicians to review decisions made or assisted by an AI tool or algorithm. AB 1831 would criminalize the creation, distribution and possession of child sexual abuse material created by AI.
Perhaps the second-most contentious AI bill, SB 942, would require large generative AI providers to label AI-created content and create an AI-detection tool. According to its prime sponsor, state Sen. Josh Becker, D-Calif., former opponents in the technology industry withdrew their objections to the bill after negotiations.
"Those bills are all critical and will be directly impacting all Californians lives every day," Zhang said.
Editor's note: The IAPP regularly updates the "US State AI Governance Legislation Tracker" in the Resource Center.
Caitlin Andrews is a staff writer covering AI governance for the IAPP.