AI-generated picture of science fiction in the age of artificial intelligence

I grew up in a time full of contradictions. On the one hand, the years between 1960 and 1980 were marked by the Cold War and the fear of nuclear war; on the other hand, scientific progress was immense and largely seen in a positive light. The first moon landing in 1969 is one of the earliest highlights I remember from my childhood.

Science Fiction of the ’60s

The science fiction of the ’60s was shaped by Star Trek (TOS), with Captain Kirk and Mr. Spock, whose fascination went far beyond their mere presence. As the voyages of the Enterprise showed, this Camelot of space painted a predominantly positive picture of the future.

Kubrick’s “2001: A Space Odyssey”, on the other hand, was of a different caliber. I can still remember the first time I saw this film in the cinema in the ’70s. The “Artificial Intelligence” HAL (fun fact: each letter of HAL comes exactly one place before the corresponding letter of IBM in the alphabet) killed all but one of the space travelers because it effectively went berserk. Its faulty programming didn’t allow for any other ending.

Isaac Asimov’s robot laws

As early as 1942, the Russian-born writer Isaac Asimov foresaw the problems that artificial intelligence could cause. At that time, however, these entities were not yet referred to as AI, but rather as robots that could develop a consciousness of their own.

Asimov was one of the first to explore the positive side of robots in fiction. Before him, most SF stories about robots followed the Frankenstein pattern, which Asimov called incredibly boring.

As a result, he postulated his famous Three Laws of Robotics in the story “Runaround”:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The Warning of the Singularity

With the advent of Artificial Intelligence these days, such warnings are on the rise again. Insiders like Elon Musk, who have been involved with AI themselves and apply it in the form of Machine Learning in their products (Full Self-Driving), have warned of the emergence of a “self-awareness” in Artificial Intelligence that could ultimately lead to the destruction, or at least the enslavement, of humanity. This entity is called the “Singularity”. It is best described, by the way, in William Hertling’s AI thriller Avogadro Corp, whose setting is not coincidentally reminiscent of Google …

HAL, Terminator, Colossus & Co.

All cinematic portrayals of AI in the past were characterized by an evil entity that concludes that humans are too flawed to go on populating this planet. In Terminator, Skynet takes over and tries to destroy humanity; HAL kills its crew; and Colossus, in Colossus: The Forbin Project (1970), takes its defensive programming too seriously and unites with its Russian counterpart against humanity. And then there’s WarGames (1983), in which an AI supercomputer brings the world to the brink of nuclear war.

What all these dystopian incarnations of Artificial Intelligence have in common is that they did NOT have the Three Laws of Robotics as their basis.

Seen in this light, Asimov’s considerations can only be called absolutely ingenious. He himself later recognized a weak point in his three laws and added a so-called “zeroth law”:

  • 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

World Peace

So, given the great dangers involved, should AI research be locked away in a high-security wing, as Musk and other scientists demand? Or wouldn’t it be better to formulate basic “AI laws” that all AIs must follow?

That would probably be about as effective as “world peace,” which contestants at beauty pageants invariably name as their greatest wish. The chance that it will ever come true is vanishingly small.

I know the robot laws would be desirable, but they would be put out of service at the latest with the arrival of combat robots; otherwise, those robots could not act against other humans. And then an AI turns psychotic like HAL, destroys humanity like Skynet, or fraternizes with the enemy like Colossus. The latter could indeed lead to “world peace” — though with a probability bordering on certainty, that would not be a good solution for mankind …

About this column:

In a weekly column, alternately written by Eveline van Zeeland, Derek Jan Fikkers, Eugène Franken, JP Kroeger, Katleen Gabriels, Bernd Maier-Leppla, Willemijn Brouwer, and Colinda de Beer, Innovation Origins tries to figure out what the future will look like. These columnists, sometimes joined by guest bloggers, all work in their own way to find solutions to the problems of our time. Here are all the previous installments.