Last week, the Dutch Scientific Council for Government Policy (WRR) concluded that the Netherlands is not well prepared for the consequences of artificial intelligence (AI). In ‘Challenge AI, The New Systems Technology‘ (in Dutch), the council calls for regulation of the technology itself, of data and its use, and of the social implications. And rightly so. Machines will have more computing power than humans within a few decades. If devices with artificial intelligence then start to think and decide for themselves, we can only hope they will observe a number of commandments.
AI is also entering mobility, and the problems the WRR refers to are at play there too. The application of AI in mobility that most captures the imagination is the autonomous car. It is potentially much safer and more comfortable, but it raises tricky liability questions when an accident occurs. Should you as a human always be able to override the system? And what would it take for a self-driving car to interpret the law flexibly when necessary? This is something we, as humans, do every minute in daily traffic, precisely in the service of safety.
One day, when I was driving along with traffic at 120 km/h on the E25 through the Ardennes, my adaptive cruise control suddenly slowed me down to 70 km/h because road workers had forgotten to remove a temporary speed sign. Fortunately, I was able to override the system and ignore that officially still valid speed limit. Despite this example, however, we should not in future allow extremely smart machines to be flexible with the rules, just as we are, without any ethical or moral framework. That could lead to dystopian situations in which machines, perhaps unintentionally, start endangering humanity.
But the AI issues in mobility go far beyond the self-driving car. What if Google or TomTom takes over traffic management from the road authority? What if the big tech giants take over the entire planning of public transport once people plan their journeys solely through their services? What if those platforms, after a friendly free introductory period, start abusing the monopolies they have built? Who will then guarantee availability and safety? Ride-hailing services like Uber are more popular than the classic taxi, but who can oblige them, as with regulated taxi transport, to also accept guide dogs and wheelchairs, for example, so that a significant part of society is not left behind?
Artificial intelligence will make mobility better, safer, and more comfortable. But these systems need ethical and moral frameworks within which to achieve this. In the Netherlands, companies and knowledge institutions have already united in the Dutch AI Coalition. Earlier this year, they received €276 million from the growth fund to strengthen the Dutch position internationally. Wisely, the first part of that goes to so-called ELSA labs (Ethical, Legal & Societal aspects of AI), in which consortia focus precisely on these questions. Just as in mobility, AI will help steer other areas as well, but we still want to be able to take the wheel ourselves.
Maarten Steinbuch and Carlo van de Weijer are alternately writing this weekly column, originally published (in Dutch) in FD. Did you like it? There’s more to enjoy: a book with a selection of these columns has just been published by 24U and distributed by Lecturis.