Wednesday, October 8, 2025

Anthropic endorses California’s AI safety bill, SB 53

State Senator Scott Wiener, a Democrat from California, right, during the Bloomberg BNEF Summit in San Francisco, California, US, on Wednesday, Jan. 31, 2024. The summit provides the ideas, insights and connections to formulate successful strategies, capitalize on technological change and shape a cleaner, more competitive future. Photographer: David Paul Morris/Bloomberg via Getty Images
Image Credits: David Paul Morris/Bloomberg via Getty Images

On Monday, Anthropic announced an official endorsement of SB 53, a California bill from state Senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world’s largest AI model developers. Anthropic’s endorsement marks a rare and major win for SB 53, at a time when major tech groups like the CTA and Chamber of Progress are lobbying against the bill.

“While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” said Anthropic in a blog post. “The question isn’t whether we need AI governance—it’s whether we’ll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former.”

If passed, SB 53 would require frontier AI model developers like OpenAI, Anthropic, Google, and xAI to develop safety frameworks, as well as release public safety and security reports before deploying powerful AI models. The bill would also establish whistleblower protections for employees who come forward with safety concerns.

Senator Wiener’s bill specifically focuses on limiting AI models from contributing to “catastrophic risks,” which the bill defines as the death of at least 50 people or more than a billion dollars in damages. SB 53 focuses on the extreme side of AI risk — limiting AI models from being used to provide expert-level assistance in the creation of biological weapons, or being used in cyberattacks — rather than more near-term concerns like AI deepfakes or sycophancy.

California’s Senate approved a prior version of SB 53, but still needs to hold a final vote on the bill before it can advance to the governor’s desk. Governor Gavin Newsom has stayed silent on the bill so far, although he vetoed Senator Wiener’s last AI safety bill, SB 1047.

Bills regulating frontier AI model developers have faced significant pushback from both Silicon Valley and the Trump administration, both of which argue that such efforts could limit America’s innovation in the race against China. Investors like Andreessen Horowitz and Y Combinator led some of the pushback against SB 1047, and in recent months, the Trump administration has repeatedly threatened to block states from passing AI regulation altogether.

One of the most common arguments against AI safety bills is that states should leave the matter to the federal government. Andreessen Horowitz’s head of AI policy, Matt Perault, and chief legal officer, Jai Ramaswamy, published a blog post last week arguing that many of today’s state AI bills risk violating the Constitution’s Commerce Clause — which limits state governments from passing laws that go beyond their borders and impair interstate commerce.
