Katleen Gabriëls. Photo: Karel Duerinckx

Katleen Gabriëls, philosopher and lecturer in computer ethics at Maastricht University, has written the book ‘Regels voor Robots’ (Rules for Robots, published last week) about how we should approach the design of artificial intelligence now that our society is becoming increasingly robotized and AI is influencing our lives to an ever greater extent.

Why have you written this book?

“Initially for a very pragmatic reason. It was part and parcel of the Willy Calewaert chair that I held last year at the Vrije Universiteit Brussel. A chair like that is perceived differently in Belgium than in the Netherlands. Over the course of one academic year, I gave several lectures on philosophy to future engineers. I had experience in this field, as I had worked as a lecturer on the ethics of technology at TU/e and currently do the same in Maastricht. I’ve been working in computer ethics for ten years now.

The chair also entailed a scientific publication. It wasn’t specified what form that publication should take, but I decided very soon to write a book, partly because an English translation was guaranteed, which meant I could reach a wider audience. I began by writing out the lectures that I had given to the engineering students in Brussels. After that, I broadened and deepened that material.”

But ultimately, you haven’t written the book just for engineers, have you?

“No, that’s right. I didn’t want to write the book solely for those engineers, because the debate around computer ethics is so important. You can’t open a newspaper without it mentioning artificial intelligence somewhere. That’s why it’s essential to support the debate by clarifying the issues at stake. The book may have stemmed from that chair, but it’s actually intended for everyone. This is something I explain in the book too. It’s not just about technology in itself, but also about its creator and its user, and about the influence that technology has on our society. When you see how enormously the smartphone – launched on the market in 2007 – has transformed society over the past twelve years, you inevitably ask yourself questions about that.

Naturally, it is also about the engineer who makes AI and their responsibility for the repercussions that this AI might have. But above all, the book is a plea for ethics in technology. What matters to me is that consideration is given to the ethical aspects of technology: not just once technology reaches the market, but beforehand as well.”

You’ve titled the book ‘Rules for Robots’. But it doesn’t actually contain any rules.

“You mean that there aren’t ten golden rules in there that robots have to conform to?”

Yes, and you also take some of those rules to task. In the book, for example, you say that it’s naive to rely on Asimov’s three famous laws – which have been around for a long time now.

“The purpose of the book is much more complex than just laying down a few rules. It is a plea for rules for robots, except that there isn’t a ready-made checklist. Moral rules, for example, are context dependent. So when it comes to the idea that we can program a system in a binary way with moral rules that would make the right decision in any context, all I can say is: that’s not going to work. If only because we as human beings are morally fallible. So how could we ever make an infallible system? That’s a utopia.

Ultimately, there really are plenty of rules in the book, but they are far less concrete than you might have expected. It covers a number of rules that engineers have to follow while designing AI. In that sense, it is a plea for what is referred to as ethics by design.”

You write that engineers should take an oath, just as doctors take the Hippocratic oath, promising to act in the interest of the welfare of humankind when they design AI.

“Yes, but there are also other rules that can serve as an example of how something like this should be done. Like the German code for programming self-driving cars. Those are also rules for robots. But this isn’t a checklist of twenty rules either. The German code is much more extensive and complex than that.”

You describe a theoretical dilemma in your book concerning the AI of an autonomous car. In an emergency situation on the road, the AI has to choose between saving the driver of the self-driving car and his wife sitting next to him, or a bus carrying twenty passengers. Those passengers can only be saved if the self-driving car drives into a ravine; if it doesn’t, the bus ends up in the ravine and twenty people die. If the AI of the self-driving car is programmed to save as many people as possible, it will drive the motorist and his wife into the ravine, and they will die.

“This is, of course, an example of how the debate on self-driving transportation can be narrowed down. The debate is much more complex than what we call the ‘trolley dilemma’. Other solutions are also conceivable. You could also try to bring the car to a standstill.”

But surely a situation is also imaginable in which that isn’t an option.

“Yes, that’s something we have to think about universally so that one country doesn’t make a different choice than another country. You have to establish that on an international level.”

Suppose the bus is full of Mexican drug traffickers from the Sinaloa cartel who are on their way to the US to face the death penalty there. Then the AI will save this group based on their numbers, even though two weeks later they will all end up in the electric chair.

“That’s a thought experiment. But in practice, the chance of that happening is very slight. You can’t altogether rule out that kind of situation statistically. But it is not a realistic scenario. Just think about it. How many times have you, as a motorist, had to choose between driving into a young pedestrian on the left side of your car or an elderly cyclist on the right side of the road when a crash can’t be avoided?”

I mean that anyone’s fate is unpredictable. The designer of that self-driving car’s AI thinks that, on balance, they have saved 18 lives and have thereby done something good. But at the same time, you know nothing about the future of those 18 people.

“But that’s also the reason why I consider this dilemma problematic. I write about that too. People say, purely in theory then: as many people as possible must survive. But if the one person who is going to die is your partner, then you will make a different decision. Consequently, this decision is context dependent as well.”

Suppose the bus is not full of drug traffickers and it is the car driver who is sacrificed instead. Except that this driver has just invented a drug to combat cancer that might have saved a billion lives.

“Yes. Or a cure for AIDS. But those kinds of thought experiments are not part of the book’s objective.”

What I mean is that there is no such thing as perfect judgement, therefore you cannot program AI in a clear-cut way.

“Yes, that’s definitely the message of the book. You are dealing with matters that are dependent on context. People who are not so well informed may think that the system will make clear choices for them that will be better than their own decisions. That is precisely what I am calling into question. On the other hand, self-driving cars will reduce a number of problems that we are currently facing, such as accidents caused by human error behind the wheel or by drunk drivers.”

What is the trolley dilemma precisely?

“The trolley dilemma is a thought experiment in which a runaway train approaches a fork in the tracks. Five people are tied up on the track the train is heading toward, and one person is tied up on the other. To save those five, someone has to flip the switch and divert the train, but as a consequence that one person will die. There is another variant of the trolley dilemma in which you are standing on a bridge overlooking the track together with a fat person. Here, too, five people are tied to the track. If you throw the fat person off the bridge, that person will die because they fall in front of the train, which then comes to a halt. Yet this saves the five people who are tied to the rails. Fewer people opt to do this, as you would then have to choose that scenario deliberately.”

Because you basically kill someone then?

“Yes.”

In that case, what does the trolley dilemma teach us as far as AI is concerned?

“The trolley dilemma originated in the 1960s in a totally different context. The aim was to see what considerations people weigh up when faced with a moral dilemma. These deliberations are usually utilitarian, meaning that your starting point is to save as many people as possible. The reason the trolley dilemma is so often linked to the design of AI is that these thought experiments are becoming ever more concrete as AI develops further.

There is also an experiment dubbed the ‘Moral Machine’ at the American university MIT. You’re presented with thirteen scenarios of a self-driving car that is about to crash. For example, you have to choose between saving the lives of older or younger people.” [One of the potential outcomes was that elderly people would have to die in order for young people to survive. Since young people had longer left to live than older people, this would have been the logical choice, ed.]

“In fact, the German code for self-driving transportation states that this must never happen. It states that AI should never be allowed to make a choice [between people, ed.] on the basis of personal characteristics. This is a clear rule that I endorse. The Germans did adopt one of the results from the MIT Moral Machine into their code: that the life of a human takes precedence over that of an animal.”

So, some decisions should never be allowed to be made by AI?

“Yes, a programmer must never have the sole power to program on the basis of personal characteristics.”

The second part of this interview will be published next Saturday.