
The European Commission wants to draw up a list of high-risk applications of AI (artificial intelligence), such as those used in medical care or in self-driving cars, which must pass a safety inspection before they are allowed onto the market. This is set out in the White Paper on artificial intelligence, ‘A European approach to excellence and trust‘, published by the European Commission last week.

The reason behind this is that it is currently unclear who is liable when AI makes an incorrect decision independently and casualties result. That is why the European Commission wants to clarify in advance how an AI makes decisions. It also wants to be able to trace how the AI is trained, how it collects data, what it does with that data, and to what extent and for which parties the data is accessible.

Too much work for SMEs

The White Paper states that it is beyond SMEs’ capabilities to put every AI they develop through a security inspection. And in some areas it isn’t even necessary, because there are few foreseeable risks. Imagine, for example, a small-scale company that makes a scheduling robot to allocate teachers to a secondary school timetable.

If the robot makes a wrong assessment, for instance by arranging too many demanding subjects one after the other for a particular class while also placing lighter classes such as gym, music and drawing all consecutively, this is not a disaster. The school can easily change that without harming students or teachers. That kind of AI shouldn’t need to be checked for security risks before the manufacturer sells it.
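Purely as an illustration of why such a system is low-risk, a "wrong assessment" like the one described above could be caught by a trivial sanity check after the fact. The sketch below is hypothetical (the subject lists, function names and threshold are invented for this example, not taken from the White Paper):

```python
# Hypothetical sanity check a school might run on a robot-generated
# timetable: flag any day where too many demanding subjects are
# scheduled back to back. All names and thresholds are illustrative.

HARD_SUBJECTS = {"maths", "physics", "chemistry"}

def longest_hard_run(day_schedule):
    """Length of the longest run of consecutive demanding subjects."""
    longest = current = 0
    for subject in day_schedule:
        current = current + 1 if subject in HARD_SUBJECTS else 0
        longest = max(longest, current)
    return longest

def is_balanced(day_schedule, max_hard_run=2):
    """A day counts as balanced if demanding subjects never pile up too long."""
    return longest_hard_run(day_schedule) <= max_hard_run

day = ["maths", "physics", "chemistry", "gym", "music"]
print(is_balanced(day))  # three demanding subjects in a row -> False
```

The point of the illustration: a bad output here is detectable and cheaply reversible, which is exactly why the White Paper would exempt this kind of AI from pre-market inspection.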

Read also: EU-commissioner Vestager to present new AI legislation in 2020

But if the AI of a self-driving car makes the wrong decision during a journey – or the AI of medical equipment fails to make the right decision during surgery – and somebody ends up dying, then according to the authors of the White Paper, it isn’t clear who to claim compensation from. After all, it is the AI that makes the decision, not the AI’s inventor.

Healthcare and transport as risk areas

Therefore, for the development and application of AI in areas where there are potential risks, such as the transport industry and the healthcare sector, a list of categories that need to be monitored will be drawn up. This list will be subject to regular reviews. AI systems placed on the market within these categories will themselves also have to be reviewed regularly, as they frequently receive updates.

The White Paper addresses many aspects of AI that are not regulated at present. For one thing, there is a lack of transparency. Someone who is inconvenienced or adversely affected by how an AI functions, and who wishes to seek legal redress from the courts, will only be able to obtain evidence if the owner of the AI provides it. There are currently no clear regulations that citizens can use to legally secure this information. This ought to change in the future: all information used for the operation of the AI – which sometimes involves several suppliers – will have to be preserved.

Furthermore, it is unclear who exactly is responsible for any potentially harmful effects of AI on people. The developer makes the basic product, but the user also subsequently trains the robot. It must also become clear how this is to be handled.

Debate concerning facial recognition

One issue that is problematic in the European Union, and the subject of a major debate, is the application of AI that uses biometric features remotely, such as facial recognition. The document states that this can only be allowed if it serves a specific, crucial purpose, e.g. security. According to the European Commission, AI should not be used to detect citizens crossing a zebra crossing during a red light with a camera that identifies them by their face. This happens in China, where a pedestrian who ignores a traffic light is automatically sent a fine in the post once the AI camera identifies them and records the violation.

Read also: Europe must invest in a hub for collaborative robots in SMEs

The proposed legal restrictions on the application of AI have already been criticized in various media outlets for delaying the development of AI in the EU, especially in comparison to the US and China. Moreover, the technology would still reach the EU, only through Chinese rather than European companies.

New European legislation for AI in the making

This White Paper on the development and implementation of AI is in fact a starting document from the European Commission. It serves as the basis for either a new law or an amendment to existing law, which will then be adopted by national governments and parliaments and by the European Parliament. (It has not yet been decided which of the two it will be.)

Do you want to have a say?

Stakeholders such as consumer organizations, universities and businesses can submit proposals to the European Commission until 19 May if they want to influence the proposed AI legislation.