AI-generated image of European unity, EU

The European Parliament is voting on new legislation, the “AI Act,” to regulate the application of artificial intelligence in Europe. The entire international tech world is watching. Sam Altman, CEO of OpenAI, threatened last month to pull the company’s services, including ChatGPT, out of Europe if too many restrictions are imposed.

Unacceptable risks

The law restricts artificial intelligence in several ways, most notably through a risk-based approach that classifies AI systems into four levels of risk: unacceptable, high, limited, and minimal. Applications posing unacceptable risks include facial recognition and social scoring, as practiced in China. Whether Brussels will also agree today to ban real-time facial recognition remains to be seen.

High risk

High-risk AI applications are also clearly defined in the law and include biometric identification, critical infrastructure management, education and vocational training, employment, access to essential services, law enforcement, migration and asylum management, and justice. The law also targets large language models such as OpenAI’s GPT and Google’s Bard and requires security controls, data management measures, and risk mitigation. In addition, AI companies will be regulated in several areas, including transparency, energy use, and the use of personal data.

Last year, the European Parliament adopted an own-initiative resolution on civil liability for AI and asked the Commission to develop legislation. In response, the Commission presented a proposal for an Artificial Intelligence Liability Directive (AILD), which aims to improve the functioning of the internal market and establish uniform non-contractual civil liability rules for damage caused by AI.

Leading the way

The European Union is taking the lead in drafting rules for AI. If negotiations with member states – due later this year – succeed, it will be the world’s first comprehensive AI law. That is positive, but time is pressing. Earlier this year, a large group of scientists and other key figures in the tech world called for a pause in AI development. The goal is to reach an agreement by the end of this year.