A big win for the EU? How California's new AI bill compares to the EU AI Act

FILE - The California State Capitol in Sacramento, Calif., is seen on Monday, Aug. 5, 2024. Juliana Yamada/Copyright 2024 The AP.
By Pascale Davies

California could soon have a new AI law, but as Europe's AI Act showed, not everyone is happy about regulation.

More than 100 current and former employees of OpenAI, Google’s DeepMind, Meta and Anthropic sent a statement in support of California’s new artificial intelligence (AI) regulation bill, which is awaiting the signature or veto of California Governor Gavin Newsom by the end of the month.

Governments worldwide are trying to regulate AI, with California becoming the latest to attempt to do so after the State Assembly and Senate passed an AI safety bill in August.

"We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure," read the statement supporting the bill, which was published on Monday.

It includes signatures from Turing Award winner Geoffrey Hinton as well as current employees of Google DeepMind and OpenAI, who wished to remain anonymous. 

The new legislation would also add whistleblower protections, among other measures, for employees who speak up about the risks in the AI models their companies are developing.

California’s move follows Europe’s blueprint AI Act, which was finalised earlier this year. This is how they compare and why new regulation could have implications for AI companies around the world. 

How do the acts regulate AI?

The EU AI Act has a risk-based system that categorises AI models into unacceptable, high, limited, and minimal risk.

Stricter regulations apply to AI systems that pose the greatest risks, such as those used in healthcare and critical infrastructure. The high-risk systems undergo rigorous testing, risk management and must comply with strict EU standards.

The California legislation, officially called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (Senate Bill 1047), aims to establish critical safety protocols for large-scale AI systems – particularly those costing more than $100 million (€90 million) to train, a threshold no AI company currently reaches.

AI developers must test their models thoroughly and publicly disclose their safety measures.

The California State Attorney General would also have the power to sue AI companies for any serious harm, which it defines as mass casualties or material damages exceeding $500 million (€450 million).

The California legislation would also make it possible to quickly shut down an AI model if it is shown to be unsafe, and would require developers to maintain tests to evaluate whether a model could cause or "enable a critical harm".

Which act goes further?

"You could say that the California bill in some way looks more serious because it defines very clearly the thresholds, shows 500 million mass casualties and these kinds of very concrete thresholds," said Risto Uuk, EU research lead at the Future of Life Institute. 

"But in reality, the EU AI Act systemic risk is actually broader. It captures more stuff. It captures those mass casualties, but it captures structural discrimination within it and more," he told Euronews Next. 

"Arguably that's why some people would say that the California bill is a very light touch," he added. 

The California legislation is also more forward-looking than the EU's AI Act, as it focuses specifically on AI models that it defines as "covered" models, which do not yet exist.

They are defined as models that are trained using a certain quantity of computing power or that have similar performance to that of a state-of-the-art foundation model.

The EU covers a wide range of different AI systems instead, such as AI for education or recruitment. 

However, others argue that the California bill goes further in other aspects as it is more specific.

"SB 1047 introduces much more detailed rules for foundation models, for instance on transparency," Kai Zenner, who is head of office and digital policy adviser for MEP Axel Voss and who worked on the EU’s AI Act, told Euronews Next. 

"The Californian law seems to be a good update of the EU approach from 2023, taking into account most new observations and technical developments of the past months," he added. 

Tech backlash: What they have in common

Critics of both the EU AI Act and the California bill argue that the AI rules would slow innovation. 

It is not just AI companies saying this, but politicians too. Nancy Pelosi, former US House Speaker and current representative from California, as well as San Francisco Mayor London Breed, opposed the bill, saying it would add unnecessary bureaucracy that could stifle innovation. 

During the EU AI Act negotiations, there was pushback from AI companies but not as much from politicians, with the exception of French President Emmanuel Macron who at the very last minute argued against "punitive" AI regulation.

"In the United States, you can get away with having a checkmark next to your name and having a million followers and saying [on social media], this bill is going to lead to start-up founders being jailed and have a lot of people respond, like it, and reshare it," said Hamza Chaudhry, US policy specialist at the non-profit Future of Life Institute, who is based in Washington DC. 

AI lobbying

"I think that sort of thing was happening a lot less in Brussels than it was happening in Sacramento," he told Euronews Next, adding that lobbying efforts are "significantly larger in DC and Sacramento than in Brussels," due to "broadly historical reasons".

"What's so interesting about the opposition to this [California] bill is that the tech lobby was able to nationalise this discussion in a way it didn't have to be," he said. 

"The former speaker of the House, a great policymaker in her own right, weighed in on a state bill for the first time in many, many years. You had the House science committee in DC weigh in on a state bill in California, I think the first time in at least 20 years".

Chaudhry said this framing turned the debate into a question of whether the bill is merely controversial or "one of the worst bills you've ever seen".

Global implications

The California law would apply to the state, which is home to 35 of the world’s top 50 AI companies, such as OpenAI. But it would also apply to AI companies that do business in the state, not just companies based in California. 

With AI regulation already in place in Europe and China, another set of rules could actually make things easier for businesses.

"If the California bill passes or has a regulation, the EU has a regulation, China has AI regulation, and some other major innovation hub in the world has a regulation, then it is much easier for everybody to do international business, especially if those regulations are somewhat aligned," said Uuk.  

The California bill would be "a big win for the EU and would also help to align regulatory approaches worldwide," added Zenner.

"In California as well as in Brussels, laws in such a dynamic and new field will always be imperfect at first. They need to be adjusted and specified regularly - the EU is doing that with codes and standards," he said. 
