
Self-learning machines that drive cars, compose symphonies, create paintings or beat people at chess. Artificial intelligence (AI) certainly offers some exciting possibilities.

Who knows what will be feasible in the future. Will machines be able to cure cancer or dementia sooner than people can? And should we really be so pleased about that? After all, in science fiction films things often go badly wrong and the machines turn out to be a force for evil. Take, for example, Skynet in Terminator, HAL in 2001: A Space Odyssey or Mother in Alien.


Or is this fear exaggerated? Scientists have been grappling with that question for years. Among them are researchers at Germany's Max Planck Institute for Human Development who, together with an international team, have been investigating whether superintelligent machines can be kept under control with the help of algorithms. Their conclusion is negative.

“A superintelligent machine that controls the world sounds a bit like science fiction. But there are already machines that perform certain key tasks all by themselves, without the people who originally programmed them fully understanding how they manage to do that. That is why it was an important question for us whether this could at some point become uncontrollable and dangerous for humankind,” says Manuel Cebrian, one of the authors of the study published in the Journal of Artificial Intelligence Research.

Two control options

Researchers around the world have come up with two ideas on how to maintain control over AI:

One option is to limit the intelligent machine by cutting it off from specific resources, such as the Internet and other technical devices, so that contact with the outside world becomes impossible. The main drawback is that the AI would then be unable to solve many problems due to a lack of data.

A second option is to instruct the AI from the outset to pursue only objectives that are in the interests of humankind, for example by programming ethical rules into it, such as the Three Laws of Robotics formulated by the Russian-born science fiction writer Isaac Asimov (I, Robot, among others); a sketch of what such a rule filter could look like follows the list below.

What were these again?
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
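
To make "programming ethical rules into it" a little more concrete, here is a minimal sketch of such a rule filter in Python, assuming an AI system proposes candidate actions that can be described by a few labels. Everything in it (the `Action` fields, the `permitted` check) is hypothetical; judging whether an action actually harms a human is precisely the hard part.

```python
# Toy sketch of a rule-based action filter inspired by Asimov's Three Laws.
# All names here (Action, harms_human, ...) are invented for illustration;
# deciding whether a real-world action "harms a human" is exactly the part
# that nobody knows how to program reliably.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool       # would this action injure a person or let one come to harm?
    ordered_by_human: bool  # is this action a direct human order?
    endangers_robot: bool   # would this action destroy the robot?

def permitted(action: Action) -> bool:
    """Apply the Three Laws as a strict priority ordering."""
    # First Law: protecting humans overrides everything else.
    if action.harms_human:
        return False
    # Second Law: human orders are obeyed (they already passed the First Law check).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation counts only when no higher rule applies.
    return not action.endangers_robot

# Example: a harmful order is vetoed even though a human gave it.
print(permitted(Action("push bystander", harms_human=True,
                       ordered_by_human=True, endangers_robot=False)))  # False
```

The priority ordering mirrors Asimov's laws: the First Law always wins, the Second only applies when the First is satisfied, and the Third only when neither higher law is at stake.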

‘It calculates the fastest way to “minimise misery,” which is the extinction of humankind.’
Programming of super-intelligent computers and the consequences. Image: Max Planck Institute

Laws do not work

But according to the researchers, these kinds of laws do not offer sufficient safeguards for controlling AI. In their study, they devised a theoretical containment algorithm intended to ensure that a superintelligent AI cannot harm humankind under any circumstances: it first simulates the behaviour of the AI and switches the machine off as soon as that behaviour becomes harmful.
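
Very roughly, the idea can be put in code. The sketch below is a conceptual illustration, not the algorithm from the paper: `simulate` and `causes_harm` are hypothetical placeholders, and the study's result is precisely that a general, always-correct harm check of this kind cannot exist.

```python
# Conceptual sketch of the containment idea: simulate the AI first and only
# let it act if the simulated outcome looks safe. Both simulate() and
# causes_harm() are hypothetical placeholders standing in for the part that,
# according to the study, cannot be implemented in general.

def simulate(ai_program: str, world_state: dict) -> dict:
    """Placeholder: pretend to predict the state of the world after the AI has acted."""
    return dict(world_state, last_program=ai_program)

def causes_harm(predicted_state: dict) -> bool:
    """Placeholder: pretend to decide whether the predicted outcome harms humans."""
    return bool(predicted_state.get("humans_harmed", False))

def contain(ai_program: str, world_state: dict) -> str:
    """Run the AI only if the simulated outcome is safe; otherwise switch it off."""
    predicted = simulate(ai_program, world_state)
    if causes_harm(predicted):
        return "machine switched off"
    return "machine allowed to run"

print(contain("optimise_power_grid()", {"humans_harmed": False}))
# -> "machine allowed to run"
```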

“Based on our calculations, it is not possible to program an algorithm that could determine whether or not an AI might harm the world. What’s more, we may not even be able to tell whether a machine is superintelligent, because according to our current knowledge it is also impossible to calculate whether a machine has an intelligence that is superior to that of human beings,” the researchers explain.

Further analysis confirmed that such a containment algorithm cannot be programmed: the containment problem reduces to the halting problem, which Alan Turing proved undecidable, so no program can reliably decide in advance whether an arbitrary other program will cause harm. The researchers concluded:

“In other words, further research is needed, and in any event, never blindly trust machines.”
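
The self-reference behind this impossibility result can be sketched in a few lines, in the spirit of the halting problem argument on which the study builds. The names below (`would_harm`, `cause_harm`, `do_nothing`) are hypothetical: the point is only that if a perfect harm-checker existed, one could write a program that harms exactly when the checker says it will not, so no such checker can always be right.

```python
# Diagonalization sketch: why a perfect harm-checker is self-defeating.
# Suppose would_harm(source) always answered correctly whether running the
# given program would cause harm. The "contrary" program below consults the
# checker about its own source code and then does the opposite.

def would_harm(program_source: str) -> bool:
    """Hypothetical perfect predictor; this is exactly what cannot be built."""
    raise NotImplementedError

CONTRARY_PROGRAM = """
if would_harm(CONTRARY_PROGRAM):
    do_nothing()    # checker predicted harm, so the program stays harmless
else:
    cause_harm()    # checker predicted no harm, so the program causes harm
"""

# Whatever would_harm answers about CONTRARY_PROGRAM, the answer is wrong.
# The same self-reference makes the halting problem undecidable, which is why
# the limitation does not go away with faster computers or better programmers.
```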

You can read other articles about artificial intelligence in our dossier.