A question that preoccupies me as a moral philosopher is to what extent artificial intelligence (AI) is capable of making moral judgments. To answer that question, of course, we first need to know how humans arrive at moral judgments. Unfortunately, no consensus on that exists. Moral psychologist Jonathan Haidt argues that our moral reasoning is guided first and foremost by intuition. ‘Reason is a slave of the passions,’ as philosopher David Hume put it in the 18th century.

Haidt presented test subjects with a taboo scenario about a brother and sister who have sex with each other just once. The usual objections are addressed in the scenario: the siblings use contraception (the birth control pill and a condom), and everything happens with mutual consent. Most respondents intuitively disapprove of the scenario and only then look for arguments to support that intuition. Yet if respondents are given more time to think it over and are also presented with well-substantiated arguments, they are more likely to accept it. A calm conversation and the provision of arguments can change people's gut instincts and their judgments; when the conversation is open, with mutual understanding and affection, people are more willing to change their minds.

‘Play’ as a form of intuition

Machine learning and deep learning open up opportunities for AI to develop a kind of moral ‘intuition’: feed algorithms data and let them search for patterns in it. The word intuition is not really the right one, because AI always comes down to calculation. As in the case of AlphaGo, you could confront an algorithm with millions of scenarios, in this instance about morality, have it ‘play’ through them (a form of self-play), and let it learn from its mistakes. The AI will find a pattern, for example about right and wrong, and can consequently develop a kind of intuition. It remains extremely important to look critically at how AI discovers patterns; after all, not every pattern is desirable, as AI could also develop preferences based on, for example, popularity.
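To make that last concern concrete, here is a minimal, purely illustrative sketch. The scenarios, feature names, and labels are invented for this example and do not come from any real system; the learner is a simple perceptron, not AlphaGo's actual method. Because ‘popularity’ happens to correlate with the labels in the training data, the learner ends up rewarding popularity just as strongly as consent, even though popularity is morally irrelevant.

```python
def train_perceptron(examples, features, epochs=20, lr=0.1):
    """Learn a weight per feature from labeled examples (1 = acceptable)."""
    weights = {f: 0.0 for f in features}
    bias = 0.0
    for _ in range(epochs):
        for x, label in examples:
            score = bias + sum(weights[f] * x.get(f, 0) for f in features)
            pred = 1 if score > 0 else 0
            error = label - pred
            if error:
                bias += lr * error
                for f in features:
                    weights[f] += lr * error * x.get(f, 0)
    return weights, bias

# Hypothetical features; "is_popular" is morally irrelevant by design.
features = ["causes_harm", "has_consent", "is_popular"]

# Toy training data: in this sample, popularity happens to track the labels.
train = [
    ({"causes_harm": 1, "has_consent": 0, "is_popular": 0}, 0),
    ({"causes_harm": 0, "has_consent": 1, "is_popular": 1}, 1),
    ({"causes_harm": 1, "has_consent": 0, "is_popular": 0}, 0),
    ({"causes_harm": 0, "has_consent": 1, "is_popular": 1}, 1),
]

weights, bias = train_perceptron(train, features)
# The learner assigns "is_popular" a positive weight, as large as the
# weight for "has_consent": it found *a* pattern, not the right one.
```

The point of the sketch is not the algorithm but the data: whatever regularities sit in the training examples, desirable or not, are what the system will treat as its ‘intuition’.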

But a “good” and convincing moral judgement goes beyond intuition: it is supported by high-quality arguments. If someone judges that a specific act is wrong, that same person must be able to substantiate why. This avoids complete arbitrariness, and it also makes it possible to gauge, for instance, how susceptible the judgement is to prejudice. So teaching AI to use intuition is not enough; AI will also have to learn to argue. In the legal domain, research has been going on for some time into how AI can assist lawyers in evaluating legal argumentation, mainly by modeling that argumentation. In the Netherlands, philosophers are researching to what extent an ‘argumentation machine’ is able to recognize fallacies. That research, however, is still in its infancy.

No consensus

The morally right thing to do, in any circumstances, is whatever there are the best reasons for doing, giving equal weight to the interests of each individual who will be affected. Quite apart from the question of whether AI will ever be able to do this, no consensus exists on those “best reasons.” That certainly complicates the choice of which data we should use to train AI. The theory, and more specifically the definition of morality, that you adhere to and subsequently train AI with will determine the outcome: in this case, the moral judgment. Whoever connects ethics and AI inevitably ends up making choices that steer the direction of that moral judgment. In short: for now, this question remains highly speculative.

About this column:

In a weekly column, alternately written by Eveline van Zeeland, Eugene Franken, Helen Kardan, Katleen Gabriels, Bert Overlack, Carina Weijma, Bernd Maier-Leppla and Colinda de Beer, Innovation Origins tries to find out what the future will look like. These columnists, occasionally supplemented by guest bloggers, are all working in their own way on solutions to the problems of our time, so that tomorrow will be good. Here are all the previous articles.

About the author

Katleen Gabriels is a moral philosopher specializing in computer ethics at Maastricht University. She conducts research into the relationships between morality and computer technologies.