Earlier discussions on the AI Act. Back right: Thierry Breton, the initiator of the law. Copyright: Terence Zakka on X.

They’ve been talking about it in Brussels since 2018. A revolutionary piece of legislation: the AI Act is considered the world’s first attempt to regulate artificial intelligence in a comprehensive, ethics-based, and environmentally sustainable way. Everything pointed to the Act being adopted, but at the last minute the tech lobby has successfully thrown a spanner in the works. The EU’s three largest economies, France, Germany, and Italy, are suddenly pushing for exemptions for advanced AI models.

Final talks on the AI Act were scheduled for Wednesday. After a 22-hour negotiation marathon, the European Parliament failed to reach an agreement. Negotiations will resume tomorrow.

  • France, Germany, and Italy are calling for exemptions for advanced AI models in the AI Act;
  • In doing so, they are departing from the European Commission’s original proposal to regulate a broad spectrum of AI applications;
  • The EU is torn between staying competitive with China and preventing unregulated AI development that threatens privacy and democratic processes.

Sudden change of direction

These countries argue that the most powerful AI models, such as so-called foundation models (of which ChatGPT is the best-known example), should not be subject to the strict rules of the AI Act. They want the companies behind these models to regulate themselves through a code of conduct. This position deviates from the European Commission’s original proposals, which were intended precisely to cover a wide range of AI applications.

All fingers point toward the tech lobby as the “culprit”. MEP Van Sparrentak highlighted the impact of the tech lobby on the public debate and the negotiations. Recent statements by political leaders such as German Economy Minister Robert Habeck, who advocates for “innovation-friendly regulation,” show that national interests and economic visions play a prominent role. European Commissioner Thierry Breton has also indicated that there is heavy lobbying around the AI Act.

The importance of strict regulation

The tension between economic interests and European values is becoming increasingly sharp. On the one hand, there is the desire to keep up with the US and China, which are investing heavily in AI. On the other, there is the fear that, without regulation, AI systems can be used for purposes that undermine citizens’ privacy and rights. The AI Act would also allow the EU to set an example for the rest of the world. Misuses of AI, such as deepfakes and voice clones, are already a reality. They have the potential to harm democratic processes and individual freedoms.

Experts and MEPs are therefore calling for mandatory security testing and independent oversight. Such strict regulation should safeguard the integrity of AI technologies and prevent misuse. The European Parliament has previously voted for proposals imposing strict requirements on the makers of so-called foundation models. Still, the change of course by major member states is putting this progress under pressure.

The power of AI outside Europe

The discussion of the AI Act is not an isolated one. It touches on a broader concern about the influence of non-European companies on AI technologies. As the debate surrounding Sam Altman, the CEO of OpenAI, illustrates, only a handful of people worldwide determine the direction of AI technology. Europe is currently wrestling with the question: how can its values and standards be maintained in a playing field dominated by U.S. and Chinese giants?

The big question is: does the EU opt for strict regulation that protects citizens’ rights but could hamper innovation? Or will it give in to pressure from the tech lobby and let AI companies regulate themselves? The decisions made in Brussels this week will determine not only the future of AI in Europe but also that of European citizens and their place in the digital world.