
What would machines capable of moral behaviour actually mean in practice? The Swedish fiction series Real Humans presents an interesting spin on this question. ‘Hubots’ – a contraction of ‘human’ and ‘robot’ – live alongside human beings. They eventually revolt and demand more rights: people should not only respect them, but also take their concerns seriously. These robots can also be held morally and legally responsible. And if it ever turns out that robots are morally better (less fallible) than we are, then in principle they may become our ‘moral mentors.’

Real Humans is, of course, fiction. It makes for an interesting thought experiment, but no more than that. A program that works ‘well’ merely does what it is supposed to do. What about morality in machines? Can we ‘program’ that at all? The technical, functional aspects alone pose quite a challenge. Beyond that, there is no ‘perfect moral blueprint’ on hand containing all the data we would need to easily train a machine. After all, people are morally fallible by nature.

No consensus

Nor is there any consensus as to which ethical theory should form the basis for ethical standards. Philosophers have been debating this for centuries. Should the AI system adhere to moral rules? Or should it act so as to increase the happiness of the greatest number of people?


In the first case (adhering to moral rules), the machine must be programmed with explicit rules that it has to follow in order to make a moral decision. Let’s keep that rule simple: the age-old golden rule, ‘Don’t do to someone else what you would not want done to yourself.’ The rule may seem simple, yet it is extremely complex in its application. The computer needs to be able to determine for itself, in various hypothetical contexts, what it does and does not want, and to assess on its own what the consequences of its actions are for other people.

Even if the computer doesn’t feel any real empathy, it must at least have a capacity for ‘empathy’ in order to calculate the consequences of its own actions on others and to estimate to what extent it would want to be treated in the same way itself. In doing so, the system must also take differing individual views and preferences into account.
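To make the abstraction tangible, here is a minimal, purely hypothetical sketch of what such a rule-based check could look like. Everything in it (the Action structure, the would_accept_for_self judgement, the preference table) is an invented illustration rather than an existing system; the real difficulty is hidden inside those judgements, which would have to be made for every context and every person.

```python
# Hypothetical illustration of a golden-rule check, not an existing system.
# The difficulty described above is hidden inside would_accept_for_self():
# the machine would have to judge, for every context and every affected
# person, whether it would want to be treated that way itself.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    affected_people: list[str]      # who is affected by the action
    effect_on: dict[str, str]       # a crude description of the effect per person

def would_accept_for_self(effect: str, own_preferences: dict[str, bool]) -> bool:
    """Would the machine accept this effect if it happened to itself?

    Here it is a trivial table lookup; in reality it would require the system
    to model its 'own' preferences across countless hypothetical contexts.
    """
    return own_preferences.get(effect, False)

def golden_rule_permits(action: Action, own_preferences: dict[str, bool]) -> bool:
    # The action is only permitted if the machine would accept every one of its
    # effects being done to itself ('don't do to someone else what you would
    # not want done to yourself').
    return all(
        would_accept_for_self(action.effect_on[person], own_preferences)
        for person in action.affected_people
    )

# Toy usage: the 'preferences' are made up, which is exactly the problem.
prefs = {"is interrupted during work": False, "receives help carrying bags": True}
action = Action(
    description="interrupt a colleague to ask a question",
    affected_people=["colleague"],
    effect_on={"colleague": "is interrupted during work"},
)
print(golden_rule_permits(action, prefs))  # False
```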

Utilitarianism or consequentialism

Can that even be expressed in mathematical terms? Perhaps it is ‘easier’ to feed the system an ethical theory that focuses on boosting the happiness of the greatest number of people. In order to incorporate that ethical theory (utilitarianism or consequentialism) into a machine, the effect of any action on each member of the moral community would have to be given a numerical value.
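Purely as an illustration of what such a calculation would demand, here is a hypothetical sketch. The utility numbers and the list of people affected are invented; the point is precisely that no one can reliably supply such values for every person and every action.

```python
# Hypothetical sketch of a utilitarian choice between actions, assuming that
# the effect of each action on each affected person can be given a numerical
# value -- which is exactly the assumption questioned above.

def total_utility(effects_per_person: dict[str, float]) -> float:
    """Sum the (assumed) numerical effect of one action over everyone affected."""
    return sum(effects_per_person.values())

def choose_action(options: dict[str, dict[str, float]]) -> str:
    """Pick the action whose summed utility is greatest."""
    return max(options, key=lambda name: total_utility(options[name]))

# Invented numbers for two candidate actions:
options = {
    "tell an uncomfortable truth": {"Anna": -2.0, "Ben": +3.0, "Chris": +1.0},
    "stay silent":                 {"Anna": +1.0, "Ben": -1.0, "Chris":  0.0},
}
print(choose_action(options))  # 'tell an uncomfortable truth' (total +2 vs. 0)
```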

Yet it is impossible to do this in real time for every single action in the world, especially as the effects of every action lead to fresh consequences of their own. You could mitigate the computational problem by setting a threshold beyond which further estimation of consequences is no longer deemed necessary. But even that is unbelievably complex. Moreover, an enormous amount of suffering and pain may be caused just past that boundary, and we would inevitably regard that as morally reprehensible.
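The cut-off idea can be sketched in the same hypothetical style. The sketch below simply stops exploring consequences beyond a fixed depth, so whatever harm lies just past that horizon never enters the sum; that blindness is exactly the moral problem described above. The consequence tree and its numbers are, once again, made up.

```python
# Hypothetical sketch of a depth-bounded consequence estimate. Consequences
# beyond max_depth are simply never counted, so any harm 'just past that
# boundary' is invisible to the calculation.

from dataclasses import dataclass, field

@dataclass
class Consequence:
    utility: float                                    # assumed numerical effect
    follow_on: list["Consequence"] = field(default_factory=list)

def bounded_utility(consequence: Consequence, max_depth: int) -> float:
    if max_depth == 0:
        return 0.0                                    # beyond the threshold: ignored
    return consequence.utility + sum(
        bounded_utility(c, max_depth - 1) for c in consequence.follow_on
    )

# An action that looks mildly positive up close, but causes great harm two
# steps further on (numbers invented):
action = Consequence(utility=+1.0, follow_on=[
    Consequence(utility=+0.5, follow_on=[
        Consequence(utility=-100.0),                  # the harm past the cut-off
    ]),
])
print(bounded_utility(action, max_depth=2))   # +1.5: the -100 is never seen
print(bounded_utility(action, max_depth=3))   # -98.5
```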

Just as with Real Humans, ideas about ‘moral machines’ make for interesting thought experiments. But for the time being, not much more than that.

     

About this column:

In a weekly column, written alternately by Tessie Hartjes, Floris Beemster, Bert Overlack, Mary Fiers, Peter de Kock, Eveline van Zeeland, Lucien Engelen, Jan Wouters, Katleen Gabriels and Auke Hoekstra, Innovation Origins tries to figure out what the future will look like. These columnists, occasionally joined by guest bloggers, are all working in their own way on solutions to the problems of our time. So that tomorrow is good. Here are all the previous articles.

     

About the author

Katleen Gabriels is a moral philosopher specializing in computer ethics at Maastricht University. She conducts research into the relationships between morality and computer technologies.