
Self-driving cars, but also deepfakes during elections: the increasing prevalence of AI technology is evident in both constructive and concerning applications. European companies face the challenge of balancing rapid adoption with responsible use of AI. The AI Act provides a clear framework, and practical tools, such as synthetically generated data, can help as well.

Why you need to know this:

Europe wants to regulate AI through the AI Act, which is designed to safeguard citizens.

Europe aims to be at the forefront of AI. From established companies like SAS and Hugging Face to startups like Germany’s Aleph Alpha and France’s Mistral, we can expect a lot from AI on European soil in the coming years.

The AI Act

At the same time, the fundamental rights and safety of citizens need to be safeguarded. That is what the AI Act, recently approved by the European Parliament, is for. The act classifies AI systems according to risk and imposes strict requirements on high-risk applications to prevent damage to health or violations of fundamental rights. Companies that break the law can face fines of up to €35 million. The AI Act is expected to come into force in June.

SAS, the market leader in AI and analytics software, has been involved in the legislative process of the act from the beginning. Recently, SAS gave a briefing on ethical AI and the new law. Kalliopi Spyridaki, chief privacy strategist at the company: “The law covers AI applied in toys, aircraft, or government systems, among others. It additionally includes measures to prevent AI from disrupting elections. Consider, for example, the use of deepfakes. It must become clear to consumers whether certain content is real, or AI-generated.” Applications with ‘unacceptable’ risk will be banned altogether; think, for example, of the mass collection of facial images in databases, as happens in China.

Guidelines and tools

SAS experts explored the fundamental components organizations need to establish a reliable AI system, including guidelines for maintaining high data quality. Josefin Rosén, trustworthy AI specialist at SAS: “Data models are like milk. They become ‘less fresh’ over time. The key is to adapt them to an ever-changing reality. People need to continuously monitor the performance of models and sound the alarm in time.”
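
To make that monitoring concrete, here is a minimal sketch of the kind of check Rosén describes. It is illustrative only, not SAS tooling; the baseline score, alert threshold, and use of AUC as the metric are all assumptions for the example.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical monitoring routine: compare live performance
# against the score measured when the model was deployed.
BASELINE_AUC = 0.85   # assumed validation score at release
ALERT_DROP = 0.05     # assumed tolerance before raising an alarm

def check_model_health(y_true, y_scores):
    """Return (current_auc, degraded) for one monitoring window."""
    current_auc = roc_auc_score(y_true, y_scores)
    degraded = current_auc < BASELINE_AUC - ALERT_DROP
    return current_auc, degraded

# Example with made-up monitoring data: a drifted model whose
# predictions have become close to random.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_scores = rng.random(1000)
auc, degraded = check_model_health(y_true, y_scores)
if degraded:
    print(f"AUC {auc:.2f} is below baseline: retrain or recalibrate.")
```

In practice, a check like this would run on every batch of fresh production data, so the “milk” never goes off unnoticed.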

Besides guidelines, Rosén says, certain tools can help companies apply AI ethically, such as synthetic data. This generated data comes in handy when a company wants to train an AI model but has insufficient data, or when privacy-sensitive information is involved. Research predicts that 60% of the data used to develop AI applications will be synthetically generated this year. For healthcare, where patient data may not always be freely used, it is a useful alternative; Erasmus MC, for example, uses it to train AI models. SAS also offers tools for generating synthetic data.
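
As an illustration of the idea (a simplified sketch, not SAS’s or Erasmus MC’s actual method), one basic approach is to fit a distribution to the real data and sample new rows from it, so the synthetic records mimic the statistics without copying any individual:

```python
import numpy as np
import pandas as pd

# Toy "real" dataset standing in for sensitive patient records.
rng = np.random.default_rng(42)
real = pd.DataFrame({
    "age": rng.normal(55, 12, 500),
    "blood_pressure": rng.normal(130, 15, 500),
})

# Fit a multivariate normal to the numeric columns, then sample
# synthetic rows that follow the same joint distribution.
mean = real.mean().to_numpy()
cov = real.cov().to_numpy()
synthetic = pd.DataFrame(
    rng.multivariate_normal(mean, cov, size=500),
    columns=real.columns,
)
print(synthetic.describe())
```

Production-grade synthetic data generators model far richer dependencies (categorical columns, nonlinear correlations) and can add formal privacy guarantees, but the principle is the same: train on data that behaves like the original without exposing real patients.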

There are also explainable AI models. These are specifically designed to make the decision-making process of AI systems understandable to humans. They reveal the internal logic and reasoning behind AI decisions, allowing companies to intervene when, for example, an AI system relies on discriminatory patterns. Utrecht-based startup Deeploy focuses on the correct application of explainable AI models within companies. Deeploy’s clients include healthcare pension fund PGGM and comparison site Independer.
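
For a flavor of what such explainability looks like in code (a generic sketch using scikit-learn’s permutation importance, unrelated to Deeploy’s product), the idea is to measure which inputs actually drive a model’s decisions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Train a model on a toy dataset with five anonymous features.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the model's
# score drops: large drops mark the features the model relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

If one of those features were a protected attribute, say ethnicity, and it dominated the ranking, that would be exactly the kind of signal that lets a company intervene before discriminatory decisions reach customers.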

Overregulation is a pitfall

It remains to be seen how the new legislation will play out in Europe. The AI Act enjoys broad support, yet many companies and governments are reluctant. Approval in the member states was difficult; Germany and France resisted, concerned that overly strict rules would penalize European developers. Compared to the AI Bill of Rights in the US, the European legislation is more comprehensive and detailed. AI experts, such as Jens Bontinck of ML6, have also warned of the risk of overregulation.