EU AI Act reaction: Tech experts say the world's first AI law is 'historic' but 'bittersweet'

The OpenAI logo is seen displayed on a cell phone with an image on a computer screen generated by ChatGPT's Dall-E text-to-image model. Copyright Michael Dwyer/AP Photo
By Pascale Davies

In a world first, the EU has passed legislation to regulate artificial intelligence. While some argue it does not go far enough, others say it could hurt companies with “additional constraints”.


Europe’s policymakers have rushed to draw up rules and warnings for tech companies since the launch of ChatGPT, and this week was a landmark one for the EU's artificial intelligence (AI) rules.

On Wednesday, the European Parliament approved the Artificial Intelligence Act, which takes a risk-based approach, requiring companies to ensure their products comply with the law before they are made available to the public.

A day later, the European Commission asked Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X, under separate legislation, to detail how they are curbing the risks of generative AI.

The EU is mostly concerned about AI hallucinations (when models make errors and fabricate information), the viral spread of deepfakes, and the automated manipulation of AI that could mislead voters in elections. But the tech community has its own gripes with the legislation, and some researchers say it does not go far enough.

Tech monopolies

While Brussels deserves "real credit" for being the first jurisdiction globally to pass regulation mitigating AI’s many risks, there are several problems with the final agreement, said Max von Thun, Europe director of the Open Markets Institute.

There are "significant loopholes for public authorities" and "relatively weak regulation of the largest foundation models that pose the greatest harm," he told Euronews Next.

Foundation models are machine learning models trained on vast amounts of data that can be used to perform a range of tasks, such as writing a poem. ChatGPT, for instance, is built on a foundation model.

However, von Thun’s biggest concern is tech monopolies.

"The AI Act is incapable of addressing the number one threat AI currently poses: its role in increasing and entrenching the extreme power a few dominant tech firms already have in our personal lives, our economies, and our democracies," he said.

Likewise, he said the European Commission should be wary of monopolistic abuse in the AI ecosystem.

Arthur Mensch, co-founder and CEO of Mistral AI. Toby Melville/Pool Photo via AP

"The EU should understand that the scale of the risks posed by AI is inextricably linked to the scale and power of the dominant companies developing and rolling out these technologies. You can't successfully deal with the former until you address the latter," von Thun said.

The threat of AI monopolies came under the spotlight last month after it emerged that French start-up Mistral AI was partnering with Microsoft.

To some in the EU, it came as a shock since France had pushed for concessions to the AI Act for open source companies like Mistral.

'Historic moment'

But several start-ups welcomed the clarity that the new regulation brings.

"The EU Parliament's final adoption of the EU AI Act is both a historic moment and a relief," said Alex Combessie, co-founder and CEO of French open source AI company Giskard.

“While the Act imposes additional constraints and rules on developers of high-risk AI systems and foundation models, deemed as ‘systemic risks,’ we're confident that these checks and balances can be effectively implemented,” he told Euronews Next.

“This historic moment paves the way for a future where AI is harnessed responsibly, fostering trust and ensuring the safety of all," he said.

The legislation distinguishes between foundation models based on the computing power used to train them: AI models that exceed a computing power threshold are regulated more stringently.


The classification is seen as a starting point and, like other definitions, can be reviewed by the Commission.
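To illustrate how such a compute-based tier works, here is a minimal Python sketch. The 10^25 floating-point-operations (FLOP) figure reflects the training-compute threshold in the Act's final text for presuming "systemic risk"; the function and constant names are our own, and in practice the Commission can also designate models on other grounds.

```python
# Illustrative sketch of the AI Act's compute-based tiering for
# general-purpose AI models. The 1e25 FLOP threshold comes from the
# Act's text; all names here are hypothetical.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute


def classify_gpai_model(training_flops: float) -> str:
    """Return the regulatory tier implied by training compute alone."""
    if training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD:
        return "general-purpose AI model with systemic risk"
    return "general-purpose AI model"


# Example: a model trained with 2e25 FLOPs falls in the stricter tier.
print(classify_gpai_model(2e25))
```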

'Public good'

But not everyone is on board with the categorisation.

"From my point of view, AI systems used in the information space should be classified as high-risk, requiring them to adhere to stricter rules, which is not explicitly the case in the adopted EU AI Act," said Katharina Zügel, policy manager at the Forum on Information and Democracy.


"The Commission, which has the ability to modify the use cases of high-risk systems, could explicitly mention AI systems employed in the information space as high-risk taking into account their impact on fundamental rights," she told Euronews Next.

"Private companies cannot be the only ones driving our common future. AI must be a public good," she added.


But others argue that businesses also need to have their say and be able to work with the EU.

"It is vital that the EU harnesses the dynamism of the private sector, which will be the driving force behind the future of AI. Getting this right will be important for making Europe more competitive and attractive to investors," said Julie Linn Teigland, EY’s Europe, Middle East, India and Africa (EMEIA) Managing Partner.

However, she said that businesses in the EU and beyond must be proactive and prepare for the law coming into force, which means "taking steps to ensure that they have an up-to-date inventory of the AI systems they are developing or deploying, and determining their position in the AI value chain to understand their legal responsibilities".

'Bittersweet taste'

For start-ups and small and medium-sized enterprises, that could mean a lot more work.

"This decision has a bittersweet taste," said Marianne Tordeux Bitker, public affairs chief at France Digitale.


"While the AI Act responds to a major challenge in terms of transparency and ethics, it nonetheless creates substantial obligations for all companies using or developing artificial intelligence, despite a few adjustments planned for startups and SMEs, notably through regulatory sandboxes.

"We fear that the text will simply create additional regulatory barriers that will benefit American and Chinese competition, and reduce our opportunities for European AI champions to emerge," she added.


'Effective implementation'

But even though the AI Act is a done deal, implementing it is the next challenge.

"Now the focus shifts to its effective implementation and enforcement. This also requires renewed attention to complementary legislation," Risto Uuk, EU research lead at the non-profit Future of Life Institute, told Euronews Next.

Such complementary measures include the AI Liability Directive, which is intended to support liability claims for damage caused by AI-enabled products and services, and the EU AI Office, which aims to streamline enforcement of the rules.


"The key things to ensure that the law is worth the paper it's written on are that the AI Office has resources to perform the tasks it has been set and that the codes of practices for general-purpose AI are well drafted with the inclusion of civil society," he said.

