Research on artificial intelligence (AI) started in the years after the Second World War. John McCarthy, an American mathematician at Dartmouth College, coined the term in 1955 while drafting a funding proposal for a summer workshop. A group of AI pioneers met at that workshop in 1956 – the Dartmouth Summer Research Project on Artificial Intelligence. The term AI may have been new, but academics such as British mathematician Alan Turing had already been thinking for some time about ‘machine intelligence’ and a ‘thinking machine.’ The objective of the Dartmouth project was along the same lines: simulate intelligence in machines and have computers work out problems that until then had been the preserve of human beings. The summer project did not quite live up to expectations. The participants were not all present at the same time and were primarily focused on their own projects. Moreover, there was no consensus on theories or methods. The only vision they shared was that computers might be able to perform intelligent tasks.

AI in 2056

The surviving pioneers of the Dartmouth summer project met again at a conference in the summer of 2006. During this three-day conference, they asked what AI would look like in 2056. According to John McCarthy, powerful AI was ‘likely’, but ‘not certain’, by 2056. Oliver Selfridge thought that computers would have emotions by then, but not at a level comparable to that of humans. Marvin Minsky emphasized that the future of AI depended first and foremost on a number of brilliant researchers carrying out their own ideas rather than those of others. He lamented the fact that too few students came up with new ideas because they were too attracted to the idea of entrepreneurship. Trenchard More hoped that machines would always remain under human control and stated that it was highly unlikely that they would ever match the capabilities of the human imagination. Ray Solomonoff predicted that truly intelligent machines were not as far from reality as imagined. According to him, the greatest threat lay in political decision-making.

Who is right?

A wide range of opinions, so it seems. Who among them will be right? Predicting technological breakthroughs is difficult. In 1968, the year Stanley Kubrick’s 2001: A Space Odyssey was released, Marvin Minsky stated that it would take only a generation before there would be intelligent computers like HAL. To date, they don’t exist. In 1950, Alan Turing thought that a computer could pass the Turing test by the year 2000, which turned out to be a miscalculation. Vernor Vinge predicted in 1993 that the technological means to create ‘superhuman intelligence’ would be in place within thirty years and that shortly thereafter the human age would come to an end. There are still a few years to go before 2023, but even this prediction looks excessively utopian.

Flip a coin

Making predictions about the future is problematic, as by definition the future is not determined. The role of chance is often greatly underestimated as well. Even experts are scarcely able to “predict the future any better than if you were to flip a coin.” Therefore, we should all be a bit wary. Not least when it comes to visionaries and tech gurus with their exaggerated dystopian or utopian worldviews. So, don’t just believe anyone who claims that AI will definitely outstrip human intelligence within ten years.

Rules for Robots

The new book by Katleen Gabriels, Regels voor robots. Ethiek in tijden van AI (Rules for Robots: Ethics in Times of AI), will be published next week. The English translation will follow in early 2020.