
What should we expect as the EU’s AI Act comes into force?

[Image: A representation of artificial intelligence. Copyright Canva]
By Anna Desmarais
Euronews Next looks at what is to come in the months and years ahead as companies prepare to meet the legislation’s requirements.


The EU AI Act enters into force on Thursday and will apply to any artificial intelligence (AI) systems already in place or in development.  

The act is widely considered to be the first legislation in the world that attempts to regulate AI based on the risks it poses. 

Lawmakers passed the legislation in March, but its publication in the European Commission’s official journal in July set the wheels in motion for its coming into force. 

The August 1 date kicks off a series of deadlines over the months and years ahead, giving companies that use AI in any capacity time to familiarise themselves with the new legislation and comply. 

AI Act evaluates companies based on risk

The EU AI Act sorts every company using AI systems into one of four levels of risk, which in turn determines the rules and timelines that apply to them.

The four types of risk are: no risk, minimal risk, high risk, and prohibited AI systems. 

The EU will ban certain practices entirely as of February 2025. That includes those that manipulate a user’s decision-making or expand facial recognition databases through internet scraping. 

Other AI systems that are determined high-risk, like AIs that collect biometrics and AI used for critical infrastructure or employment decisions, will have the strictest regulations to follow. 

These companies will have to show their AI training datasets and provide proof of human oversight among other requirements. 

About 85 per cent of AI companies fall under the second category of "minimal risk" with very little regulation required, according to Thomas Regnier, a spokesperson for the European Commission. 

Three to six months for companies to comply

Heather Dawe, head of responsible AI at the consulting firm UST, is already working with international clients to bring their AI use in line with the new act. 

Dawe’s international clients are "okay" with the law’s new requirements, she said, because there is a recognition that AI regulation is necessary. 

She said it could take three to six months to bring a company into compliance with the new law, depending on its size and how large a role AI plays in its workflow. 


"There’s a clear set of guidelines about what you have to do," Dawe said. "Any complications are due to not starting the process fast enough."

Companies could consider putting in place internal AI governance boards, Dawe continued, with legal, tech and security experts to do a full audit of what technologies are being used and how they need to adhere to the new law.  


If a company is found not to be in compliance with the AI Act by the various deadlines, it could face a fine of up to seven per cent of its global annual turnover, Regnier said. 

How the Commission is preparing

The Commission’s AI Office will supervise how the rules for general-purpose AI models are being followed. 

Sixty internal Commission staff will be redirected to this office and 80 more external candidates will be hired in the next year, Regnier said. 

An AI Board made up of high-level delegates from all 27 EU member states "set the groundwork" for the Act’s implementation at its first meeting in June, according to a statement. 


The board will work with the AI office to make sure that the application of the act is harmonised across the EU, Regnier added. 

Over 700 companies say they will sign on to an AI Pact: a commitment to comply early with the law.

EU states have until next August to put in place national competent authorities that will oversee the application of the rules in their country.

The Commission is also preparing to rev up its AI investments, with a €1 billion injection in 2024 and up to €20 billion by 2030. 


"What you hear everywhere is that what the EU does is purely regulation… and that this will block innovation. This is not correct," Regnier said. "The legislation is not there to push companies back from launching their systems - it's the opposite. 

"We want them to operate in the EU but want to protect our citizens and protect our businesses".

For the Commission, one of the main challenges is regulating future AI technologies, Regnier said, but he believes the risk-based system means they can quickly regulate any new systems. 

More revisions needed

Risto Uuk, the EU Research Lead at the Future of Life Institute, believes there is still some clarification that the European Commission needs to give about how risky specific technologies are. 


For example, Uuk said using a drone to take photos around a water supply that needs repairs "doesn’t seem super risky," despite falling into the legislation’s high-risk category. 

"When you read it right now, it’s quite general," Uuk said. "We have this guidance on a more general level and that is helpful, because companies can then ask the question about whether a specific system is high risk".

Uuk believes that, as the implementation gets underway, the Commission will be able to give more specific feedback.

Where the act could go further, according to Uuk, is in imposing more restrictions and bigger fines on the Big Tech companies operating generative AI (GenAI) in the EU.


Models from major AI companies like OpenAI and DeepMind are considered "general-purpose AI" and fall into the minimal risk category. 

Companies developing general-purpose AI must show how they are complying with copyright law, publish a summary of their training data and demonstrate how they are protecting cybersecurity. 

Another area to improve is human rights, according to European Digital Rights, a collective of NGOs. 

"We regret that the final Act contains several big loopholes for biometrics, policing and national security, and we call on lawmakers to close these gaps," a spokesperson said in a statement provided to Euronews Next. 
