
The European Union has reached a milestone with the agreement on the AI Act, which has important implications for the privacy and security of citizens. Artificial intelligence systems that pose risks must now be accountable to an independent regulator, and facial recognition technology in particular is tightly restricted: its use is prohibited except for the detection of serious crimes.

The AI Act, officially known as the Artificial Intelligence Act, is a regulatory framework created by the European Union to manage the impact of artificial intelligence (AI) on citizens. Or, as Thierry Breton, the European Commissioner responsible, puts it: “The #AIAct is much more than a rulebook – it’s a launchpad for EU startups and researchers to lead the global AI race.” The agreement recently reached by the member states serves as a foundation for how AI will be handled in the EU going forward.

Impact on privacy and data protection

The AI Act aims to protect the fundamental rights of citizens from the potentially invasive nature of AI technologies. Facial recognition and emotion recognition software are two areas where the Act places strict restrictions: facial recognition is banned unless it is used to detect serious crimes, and emotion recognition is restricted to prevent misuse in workplaces and educational settings.

Transparency and accountability

Another critical aspect of the AI Act is that it holds creators of high-risk AI systems accountable: developers and providers must be transparent about how their products work and where their content comes from. To ensure authenticity, this includes watermarking AI-generated digital content such as photos, videos, and text. However, experts point out the challenges this can pose, especially for complex AI generators such as ChatGPT.

Benefits and challenges for businesses

The AI Act not only protects citizens but also provides opportunities for developers and entrepreneurs. It establishes a clear framework within which innovative AI applications can be developed, while creating a level playing field for European and non-European companies operating in the EU. About 15 percent of all AI systems will fall under the strict new rules, requiring the companies behind them to adapt.

EU’s international position

With the introduction of the AI Act, the EU is positioning itself as a frontrunner in regulating AI on the world stage. Outgoing Dutch Minister of Economic Affairs and Climate Policy Micky Adriaansens emphasizes the importance of this step to ensure that Europe is not left behind in the global AI race. The legislation promises to balance stimulating economic growth with safeguarding public values.

The AI Act sets clear limits on what is and is not allowed. For example, social scoring and manipulative AI techniques are prohibited. Yet existing scoring systems already in use within the EU remain unaffected. This distinguishes the European approach from those in countries such as China, where social scoring systems are integral to society. The legislation ensures that such systems do not gain a foothold in the EU.

Enforcement and fines

Violating the rules outlined in the AI Act can lead to significant fines. Depending on the severity of the violation, these can reach up to 35 million euros or 7 percent of a company’s global annual turnover, whichever is higher. These severe penalties are intended to encourage companies to take the regulations seriously and put the necessary safeguards in place.

Before the AI Act takes effect, all EU member states and the European Parliament must still approve the agreement. Once approved, the law will take full effect within two years, giving companies and governments time to prepare for the new regulations and adapt their systems accordingly.