Manufacturers of products that make use of artificial intelligence will be liable at all times for any damage those products cause. In an effort to better protect users’ rights, the European Commission is tightening the AI Liability Directive.

This summer, Meta’s new chatbot became a target of scorn. Just days after Facebook’s parent company launched BlenderBot 3 online in the United States, the self-learning program had degenerated into a racist spreader of fake news.

The same thing happened in 2016 with Tay, a chatbot developed by Microsoft and designed to hold conversations with real people on Twitter. Tay, too, took a wrong turn and was soon pulled offline by Microsoft.

Real damage to real people

The scandals surrounding programs like Tay and BlenderBot may seem laughable and relatively harmless. At most, they offer a painful lesson in how prone a robot is to right-wing extremism when instructed to interact with real people online.

Yet self-learning computer systems are definitely capable of doing actual damage to real people. And this doesn’t just concern self-driving cars that misjudge situations and cause collisions.

It is also a matter of concern when serious software programs that use AI techniques exhibit unexpectedly racist behavior. These are programs used, for example, in surveillance cameras or in the analysis of job application letters.

The general public should be able to trust robots

Whether it concerns autonomous transport, the automation of complex processes, or the more efficient use of agricultural land, the European Union expects a great deal from the technological innovations made possible by artificial intelligence. But AI applications can only truly succeed if the general public does not lose confidence in the technology. That is why the European Commission presented the Artificial Intelligence Act last year. The new Liability Directive is a follow-up to that.

The law governs the conditions under which artificial intelligence may be used. For example, it prohibits the marketing of ‘smart’ products that threaten the safety, livelihood, or rights of human beings. Examples include toys that encourage children to engage in dangerous behavior or AI systems that enable governments to closely monitor citizens.

AI applications in transport, education, hospitals, and human resources are permissible only under strict conditions. The latter includes, for example, software used in recruitment and selection procedures. Conditions are less strict for, say, chatbots, although the law requires that users always know they are interacting with a machine and not a human being. AI techniques in computer games or spam filters are deemed minimal risk by EU politicians.

Outdated laws

What remains is the question of who is liable for damage caused by the use of products that embed artificial intelligence. According to the European Union, the liability directive as it stands has become outdated after 40 years. Under existing law, a manufacturer is liable only for damage caused by a defective product.

But in an analysis issued by the European Commission, officials conclude that this definition falls short in the digital age of today and tomorrow. “However, in the case of artificial intelligence-based systems, such as autonomous vehicles, it may be difficult to prove that a product is defective.” Proving a causal link between a design defect and an injury is especially problematic when it comes to self-learning systems.

The ‘behavior’ of an artificially intelligent system changes over time. That “learning” is often such a complex process that it can be impossible to trace why a system made a certain decision. This may be down to the design of the software, but also to the quality of the data the computer learns from. According to the European Commission, there is a danger that, in the event of damage, it will be practically impossible for users to prove that a defect in the ‘smart’ computer was the cause. At the same time, this situation creates legal uncertainty for manufacturers that could impede investment in new technologies.

Predictable rules

With the new liability directive, the European Commission hopes to establish ‘predictable rules’. The standard will be that any damage caused by products that use AI techniques must be compensated.

The law will also make it easier for users to seek justice in court. Under the European Commission’s proposal, victims will no longer have to prove a causal link; a “presumption of a causal link” will be enough to claim compensation. Moreover, victims will have the right to access evidence from companies to support their cases. To safeguard legal certainty for manufacturers, the Commission also wants to give them the right to legally contest a compensation claim that is based on the presumption of a causal link.

Update against racism

The European Commission views the legislative conditions for the deployment of artificial intelligence and the new liability rules as two sides of the same coin. The law prohibits the marketing of self-learning systems that engage in discriminatory behavior. The new directive stipulates that a manufacturer remains liable if any such algorithm unexpectedly engages in prohibited behavior. This will compel developers to keep tabs on their products and address any transgressions with an update.