It is thanks to science and technology that we are living longer and healthier lives. Technology has greatly improved our quality of life. We even have bionic limbs and exoskeletons for patients with complete spinal cord injuries. Yet in spite of all this, we are still contending with serious shortcomings. The transhumanist Max More, born Max T. O’Connor, wrote a letter to Mother Nature:

“Mother Nature, truly we are grateful for what you have made us. No doubt you did the best you could. However, with all due respect, we must say that you have in many ways done a poor job with the human constitution. You have made us vulnerable to disease and damage. You compel us to age and die—just as we’re beginning to attain wisdom. […] You gave us limited memory, poor impulse control, and tribalistic, xenophobic urges. And, you forgot to give us the operating manual for ourselves!”

Increasing empathy with medication

Transhumanists hope to use genetic modification, synthetic organs, biotechnology, and AI techniques to enhance the human condition. What intrigues me here is the aspect of moral enhancement, which deliberately sets out to improve people’s character or behavior. Scientists are seeking ways to raise levels of morality through medication and technology, such as moral neuroenhancement. These so-called ‘neurotechnologies’ directly change specific cerebral states or neural functions in order to induce beneficial moral improvements. Academics, for example, are exploring the feasibility of increasing empathy with medication. This leads to interesting questions.

We all harbor prejudices, as well as a tendency to feel more empathy for people we know or can identify with. But what is the ‘right’ amount of empathy if you want to increase it? If you felt responsible for every other person on the planet, life would most likely become unbearable, as you would be overwhelmed by all that misery. Moral enhancement by means of biotechnology is ethically controversial. It has drawn a great deal of criticism, concerning, among other things, the limits it places on autonomy.

AI and moral systems

Studies in the medical world indicate that AI systems are at least as good as, and sometimes even better than, doctors at diagnosing cancer. For example, researchers have trained deep neural networks on a dataset of around 130,000 clinical images of skin cancer. The results demonstrate that the algorithms perform on the same level as experienced dermatologists when it comes to detecting skin cancer, and DeepMind’s AI has even beaten doctors at screening for breast cancer.

Consequently, AI systems are capable of supporting doctors in making diagnoses. But what about AI that would help us make moral decisions? An AI system is conceivably more consistent, impartial, and objective, although this depends heavily on the quality of the data used to train it. Unlike people, AI can process massive datasets, which may result in better-informed decisions.

We often fail to adequately consider all the information that is needed to make a moral decision, in part due to stress, lack of time, limited scope for information processing, and so on. So, might we eventually turn to a moral AI adviser? A kind of virtual Socrates that asks pertinent questions, points out flaws in our thinking, and ultimately issues moral advice based on input from various databases.
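To make the idea a little more concrete: such an adviser need not issue verdicts at all. At its most modest, it could simply confront a decision with a checklist of ethical perspectives and surface the questions a hurried human has skipped. The sketch below is a toy illustration of that idea only; the frames, questions, and function names are all invented for this example, not an actual moral AI.

```python
# Toy sketch of a "virtual Socrates": it issues no moral judgments,
# it only surfaces questions the user has not yet considered.
# All frames and questions here are invented for illustration.

ETHICAL_FRAMES = {
    "consequences": "Who is affected by this decision, and how?",
    "duties": "Does this action break a rule you would want everyone to follow?",
    "virtues": "Would you still endorse this choice as the person you want to be?",
    "context": "Which circumstances could change your evaluation of this action?",
}

def socratic_questions(considered: set) -> list:
    """Return the questions for every ethical frame not yet considered."""
    return [q for frame, q in ETHICAL_FRAMES.items() if frame not in considered]

# Usage: a user who has only weighed consequences is prompted about the rest.
remaining = socratic_questions({"consequences"})
```

Even this trivial version points at the design question the column raises: the system is only as good as the frames and data someone put into it.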

The complexity of morality

The technical challenges involved in engaging in an in-depth, meaningful philosophical dialogue with an AI system are undoubtedly enormous, not least in the field of Natural Language Processing. The complexity of morality presents immense challenges of its own, given that it is so context-sensitive and anything but binary. Numerous ethical rules – even the ban on homicide – depend on context: killing in self-defense is evaluated differently in both moral and legal terms. Ethics is the grey area, the weighing-up process.

And that brings me back to one of the most wonderful quotes on morality. From Aleksandr Solzhenitsyn in The Gulag Archipelago: “If only it were all so simple! If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?”

Computer says no.

About this column:

In a weekly column, written alternately by Tessie Hartjes, Floris Beemster, Bert Overlack, Mary Fiers, Peter de Kock, Eveline van Zeeland, Lucien Engelen, Jan Wouters, Katleen Gabriels, and Auke Hoekstra, Innovation Origins tries to figure out what the future will look like. These columnists, occasionally joined by guest bloggers, are all working in their own way on solutions to the problems of our time. So that tomorrow is good. Here are all the previous articles in this series.


About the author

Katleen Gabriels is a moral philosopher specializing in computer ethics at Maastricht University. She conducts research into the relationships between morality and computer technologies.