Katleen Gabriëls. Photo: Karel Duerinckx

Philosopher Katleen Gabriëls recently published her book ‘Regels voor Robots‘ (Rules for Robots) as part of her chair at the Vrije Universiteit Brussel. Last week Innovation Origins published the first part of this interview. Today it’s time for the second part, in which Gabriëls addresses the accountability of the AI developers responsible for controlling robots.

What’s your main conclusion about where we stand now regarding rules for robots?

“When you write a book, you tend to think: what do I want to convey? I wanted to describe the current state of the AI debate. You can’t open a newspaper without coming across something linked to an AI system such as Facebook or Google. I wanted to explore the whole debate in depth. Moreover, I thought it was important to illustrate that a number of things in the current debate are not new. To reduce the confusion, I also draw a distinction between what AI systems can already do, what they can’t do yet, and what they may never be able to do. This makes the debate more nuanced. And, of course, it’s about how these systems are not neutral by nature.”

I read in your book that things aren’t up to scratch and that there is sometimes a bias in design. Take the case of a soap dispenser sensor installed in a public toilet that didn’t recognize darker skin: no soap came out when dark-skinned people held their hands under the dispenser.

“A solution for this is to make subjects such as the ethics of technology compulsory in engineering education, so that future engineers have guidelines that help them think about these things.”

But is this a matter of ethics or just plain stupidity? The designers probably didn’t design the soap sensor that way on purpose.

“Is it about ethics? Yes, it is. Because a designer doesn’t just make functional choices, but moral choices as well. By making a design only for a particular type of end user and not testing it in advance on a diverse group of end users, you do end up in the domain of ethics.”

But if it wasn’t done on purpose …

“It wasn’t done on purpose.”

Or do you think that the designer bears responsibility regardless, and that it doesn’t matter whether the failure to test the product on a diverse group of end users was deliberate?

“I think that in this case ‘he or she didn’t do it on purpose’ is a bit of a weak argument. The design still has an ethical impact. By saying ‘it wasn’t on purpose’, you are absolving yourself of your moral responsibility to take different types of users into account. After all, everyone uses a public toilet.”

Let me put it this way: it is worse if the designer purposely did not research the specific characteristics of the group of users.

“The fact that you only have one type of end user in mind and regard your own body as the norm for the rest of the world is a choice with ethical consequences. Take car crash test dummies as an analogy: the very first crash test dummy modeled on the ‘average’ European woman dates back only to 2014. There, too, you could say: that was not done on purpose. But women make up half of the world’s population. Women have less muscle in their neck and upper body, for instance, which makes them more susceptible to whiplash. That’s why it’s so important to model a crash test dummy on women as well.”

Yet this is the case in medicine as well, and it has been going on for a very long time, long before AI. The male body often serves as the starting point for the treatment of diseases, whereas in many cases the female body functions differently.

“Yes. I often refer to Caroline Criado Perez’s book Invisible Women, which describes this in detail.”

During the Research & Innovation Days in Brussels last autumn, officials from the European Commission announced that they want to change this. It turned out that certain AI applications failed to account for specific characteristics of women, which unjustly excluded them from those systems’ scope.

“Saying that this was unintentional would once again be too weak an argument.”


I keep pressing the culpability question because there is a heated debate going on about racism, including antisemitism. I have noticed that there is definitely a group that expresses racial hatred on purpose. That aggressive behaviour is very different from the behaviour of people who unconsciously fail to take the characteristics of diverse ethnic populations into account.

“I acknowledge that distinction. But that doesn’t absolve AI’s designers of their responsibility to take this on board. That’s my point. With regard to a design [of the sensor inside that soap dispenser, as an example, ed.] there is also a lack of diversity and a lack of interdisciplinary thinking within the design team. Although at the same time I would like to stress that most designers work with a great sense of passion and commitment. I certainly don’t want to point the finger at all of them. Ultimately, it’s all about cooperation and dialogue, in which philosophers and designers reflect on a design, each drawing on their own expertise, and think in advance about how particular problems can be addressed, such as those relating to privacy.”

So, what is your most important conclusion after writing ‘Rules for Robots’?

“Maybe it’s just this: a design is not neutral. And as an engineer or client, you do have a responsibility in that regard. Nobody denies that the user also bears responsibility. But the user does not have the same power as the designer. The designer can guide people. A lot of technology that relies on advertising revenue, such as Facebook, is geared towards ‘distraction by design‘. Of course this influences how people behave, because their mental weaknesses are targeted. They get to use the platform for free, but they pay for that with their data, time and attention.”