The European Union moved a step closer today to implementing new rules on artificial intelligence. Members of the European Parliament agreed on the AI Act in a vote, putting it on track to become the first comprehensive legislation regulating the technology, with new rules on facial recognition, biometric surveillance, and other AI applications.
After two years of negotiations, the law now moves to the next phase, in which lawmakers work out the details with the European Commission and individual member states. In Thursday morning’s vote, MEPs agreed to ban the use of facial recognition in public places and predictive policing tools, and to impose new transparency measures on generative AI applications such as OpenAI’s ChatGPT.
Classification of AI tools based on risk
Under the proposals, AI tools will be classified by their perceived level of risk, ranging from low to unacceptable. Governments and companies using these tools will face different obligations depending on that level. Green Left MEP Kim van Sparrentak told the Reuters news agency: “This vote is a milestone in regulating AI and a clear signal from Parliament that fundamental rights must be a cornerstone. AI should serve people, society, and the environment, not the other way around.”
AI Act: goals and concerns
Proposed by the EU two years ago, the AI Act aims to set standards for AI before China and the U.S. do, amid concerns about job losses, disinformation, and copyright infringement resulting from the use of AI. The law sorts AI systems into risk categories, bans the most dangerous practices outright, and subjects “high-risk” AI to an approval regime. Flexibility is built in so the rules can be adjusted to rapid developments in the technology.
Prohibited and ‘high-risk’ AI systems
Banned AI systems include those that score human behavior and real-time biometric identification (facial recognition) by law enforcement agencies. Parliament wants to go further by also prohibiting emotion recognition in policing, border control, employment, and education, as well as crime and fraud prediction based on profiling. “High-risk” AI systems are those that could affect human rights, health, or safety: examples include police systems for tracking suspects, résumé-screening tools for job applicants, and safety systems for nuclear reactors and water supplies. These systems face stringent requirements, including non-discrimination, transparency, explainability to regulators, and mandatory risk analysis.
The vote is significant because it marks the first time legislation has addressed generative AI models. Next month, the European Parliament will hold a plenary vote, followed by negotiations with the Council of the European Union. The law is expected to take effect early next year, with a two-year implementation period.