
California lawmaker behind SB 1047 reignites push for mandated AI safety reports

California State Senator Scott Wiener on Wednesday introduced new amendments to his latest bill, SB 53, that would require the world's largest AI companies to publish safety and security protocols and issue reports when safety incidents occur.

If signed into law, California would become the first state to impose meaningful transparency requirements on major AI developers, likely including OpenAI, Google, Anthropic, and xAI.

Senator Wiener's previous AI bill, SB 1047, included similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought ferociously against that bill, and it was ultimately vetoed by Governor Gavin Newsom. California's governor then called for a group of AI leaders, including Fei-Fei Li, the leading Stanford researcher and co-founder of World Labs, to form a policy group and set goals for the state's AI safety efforts.

California's AI policy group recently published its final recommendations, citing a need for "requirements on industry to publish information about their systems" in order to establish a "robust and transparent evidence environment." Senator Wiener's office said in a press release that SB 53's amendments were heavily influenced by this report.

"The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be," Senator Wiener said in the release.

SB 53 aims to strike the balance that Governor Newsom claimed SB 1047 failed to achieve: creating meaningful transparency requirements for the largest AI developers without thwarting the rapid growth of California's AI industry.

"These are concerns that my organization and others have been talking about for a while," said Nathan Calvin, VP of State Affairs for the nonprofit AI safety group Encode, in an interview with TechCrunch. "Having companies explain to the public and government what measures they're taking to address these risks feels like a bare minimum, reasonable step to take."

The bill also creates whistleblower protections for employees of AI labs who believe their company's technology poses a "critical risk" to society, defined in the bill as contributing to the death or injury of more than 100 people, or more than $1 billion in damage.

Additionally, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.

Unlike SB 1047, Senator Wiener's new bill does not make AI model developers liable for the harms caused by their AI models. SB 53 was also designed not to burden startups and researchers that fine-tune AI models from leading AI developers or that use open source models.

With the new amendments, SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection for approval. Should it pass there, the bill will also need to clear several other legislative bodies before reaching Governor Newsom's desk.

On the other side of the U.S., New York Governor Kathy Hochul is now considering a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.

The fate of state AI laws like the RAISE Act and SB 53 was briefly in jeopardy as federal lawmakers considered a 10-year moratorium on state AI regulation, an attempt to limit the "patchwork" of AI laws that companies would have to navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.

"Ensuring AI is developed safely should not be controversial. It should be foundational," said Geoff Ralston, the former president of Y Combinator, in a statement to TechCrunch. "Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California's SB 53 is a thoughtful, well-structured example of state leadership."

So far, lawmakers have failed to get AI companies on board with state-mandated transparency requirements. Anthropic has broadly endorsed the need for increased transparency into AI companies, and has even expressed modest optimism about the recommendations from California's AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.

Leading AI model developers typically publish safety reports for their AI models, but they have been less consistent in recent months. Google, for example, decided not to publish a safety report for its most advanced AI model ever released, Gemini 2.5 Pro, until months after it was made available. OpenAI also decided not to publish a safety report for its GPT-4.1 model; a third-party study later suggested it may be less aligned than previous AI models.

SB 53 represents a toned-down version of earlier AI safety bills, but it could still force AI companies to publish more information than they do today. For now, they will be watching closely as Senator Wiener once again tests those boundaries.
