The rise of new AI tools such as ChatGPT has sharpened awareness of regulation across the EU and may reshape its Artificial Intelligence Act

The Artificial Intelligence Act is a legislative proposal to regulate AI based on its potential to cause harm.

Following the rapid success of ChatGPT and other generative AI models, the EU has reportedly reopened its discussions, prompting lawmakers to re-examine how best to regulate such systems.

Proposed by the EU, the law sorts applications of AI into three risk categories:

  1. Applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned.
  2. High-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements.
  3. Applications not explicitly banned or listed as high-risk are largely left unregulated.

MEPs also agreed that providers of generative AI models must publish a “sufficiently detailed” summary of any training data covered by copyright law.

Generative foundation models must also meet transparency requirements, disclosing when content is AI-generated rather than human-made.

Fines for foundation model providers that breach the rules have been set at up to €10 million.

Brussels cannot afford to leave AI in the hands of foreign firms such as Google, experts warn

The EU has also been warned in an open letter coordinated by the German research group Laion that it risks handing control of AI to US tech firms if it does not act to protect grassroots research in its forthcoming Artificial Intelligence Act.

In this letter, the European parliament was told that “one-size-fits-all” rules risked eliminating open research and development.

“Rules that require a researcher or developer to monitor or control downstream use could make it impossible to release open-source AI in Europe,” which would “entrench large firms” and “hamper efforts to improve transparency, reduce competition, limit academic freedom, and drive investment in AI overseas”, the letter says.

It adds: “Europe cannot afford to lose AI sovereignty. Eliminating open-source R&D will leave the European scientific community and economy critically dependent on a handful of foreign and proprietary firms for essential AI infrastructure.”

How is open-source AI different?

Open-source AI efforts involve creating an AI model and then releasing it for anyone to use, improve or adapt as they want.

The largest AI efforts, by companies such as OpenAI and Google, are tightly controlled by their creators: it is impossible, for instance, to download the model behind ChatGPT.

The paid-for access that OpenAI provides to customers comes with a number of restrictions, legal and technical, on how it can be used.

Christoph Schuhmann, the lead of Laion, said: “We are working on open-source AI because we think that sort of AI will be more safe, more accessible and more democratic.

“I’m a tenured high-school teacher in computer science, and I’m doing everything for free as a hobby, because I’m convinced that we will have near-human-level AI within the next five to 10 years.

“This technology is a digital superpower that will change the world completely, and I want to see my kids growing up in a world where this power is democratised.”

The EU should actively back open-source research with its own public facilities

Criticism of privatised AI systems has led the EU to consider holding companies responsible for what their AI systems do. However, heavy regulation would make it impossible to release systems to the public at large, which Schuhmann says would destroy the continent’s ability to compete.

Schuhmann notes that the EU should actively back open-source research with its own public facilities, to “accelerate the safe development of next-generation models under controlled conditions with public oversight and following European values”.

Sandra Wachter, a professor at the Oxford Internet Institute at the University of Oxford, said: “The hype around large language models, the noise is deafening.

“Let’s focus on who is screaming, who is promising that this technology will be so disruptive: the people who have a vested financial interest that this thing is going to be successful. So don’t separate the message from the speaker.”

