Frank van Harmelen

Frank van Harmelen is a professor of Knowledge Representation & Reasoning in the Department of Artificial Intelligence at the Vrije Universiteit Amsterdam and director of the Hybrid Intelligence Centre. We talked to him about the latest developments in collaboration between humans and machines, that is, between human and artificial intelligence. Why is it so crucial that humans are at the center of such collaboration? And what will this collaboration look like in the future? “We have to let go of the idea that machine intelligence will eventually become like human intelligence. The two are very different, so we have to find out how they can best strengthen each other.”

The Hybrid Intelligence Centre is unique in the Netherlands “and perhaps even unique in the world,” says Frank van Harmelen firmly. It started in January, thanks to the largest research grant ever awarded to AI in the Netherlands, and involves collaboration between the universities of Delft, Leiden, Amsterdam (Vrije Universiteit and Universiteit van Amsterdam), Utrecht, and Groningen. “Within this project, we are thinking about how we can build AI systems that are not meant to replace people but rather to work with people. That is very different from what you see in most AI research today.”


The interview with Frank van Harmelen was broadcast on IO television (in Dutch).

According to Van Harmelen, a lot of AI research is still aimed at devising applications to replace humans. “A self-driving car? Then Uber’s drivers can all be dismissed. A translation tool instead of an interpreter, fully automated medical image analysis to replace the radiologist. Sometimes it is said out loud, sometimes it remains unspoken, but the intention is there. We are convinced that this is a fallacy, because people reason very differently than computers; human intelligence is very different from artificial intelligence. You should not try to substitute one for the other; it’s like trying to fit a square peg into a round hole. We’d do better to make use of the differences by building teams. That’s why in the Hybrid Intelligence Centre, we focus on hybrid teams of people and AI systems that together are better than each separately.”

Doesn’t that collaboration between six institutions take a lot of energy and organizational power?

“Yes, such collaboration takes a lot of energy, but it also yields a lot, precisely because each university has its own specialties. At the UvA, for example, they are extremely good at machine learning, which is very important for hybrid teams. In Delft, they are very good at systems that understand how to negotiate. My colleague there, Professor Catholijn Jonker, has built a pocket negotiator: a computer that helps you negotiate and that understands enough of how people interact with each other to be of real help. And so each university has its specialty. My colleague Piek Vossen, here at the Vrije Universiteit, is an expert in natural language. We talk to each other in Dutch or English, so if we are going to build hybrid teams of people and computers, those computers will also have to become better at understanding language and at expressing themselves in it.

“That turn towards hybrid intelligence, away from AI as automation and towards AI as cooperation, is still relatively recent. There are new research centers at Stanford and MIT, and then there’s ours in the Netherlands, which plays a global role. But all of that is very recent. Another colleague of mine, Professor Koen Hindriks, has been conducting experiments for some time now that let children in hospitals, for example, play games with robots so they experience less stress. These robots can answer questions or give advice about eating or going to the toilet, all very important in such a situation. That’s a very concrete and successful example of hybrid intelligence.”

Read the interview we had with Prof. Koen Hindriks: How science is going to make the robot more social.

“It is certainly not the case that such a robot replaces a nurse, because there are so many things that people are much better at, especially in healthcare. We also use these robots in primary schools. And there we see that teachers and educators, just like nurses, are so good at social interaction, at reading someone’s social context and sensitivities, that you cannot automate that at all. But there are also many tasks at which the robot is better. A good example is in the classroom: a child is struggling with the multiplication tables, which have to be repeated endlessly. Even the best teacher eventually runs out of patience for that. A robot, on the other hand, has endless time and patience. So that is typically something where the robot can do what the teacher cannot, or is not good at. It is more a matter of complementing each other than replacing each other.”

You mention education and healthcare as examples. Are there more sectors where hybrid intelligence comes in handy?

“Classic AI, which is meant to automate, is already found in many sectors. In steel mills, the quality of the steel is checked automatically; for parcel deliverers, routes are calculated automatically; and in hospitals, images are indeed analyzed automatically on the computer, with great success. But we have yet to learn in which sectors hybrid intelligence will come into use. We have mentioned healthcare and education as two obvious applications, but there is already speculation, for example, that AI will play a role in democratic debate. If we all discuss the energy transition, for instance, we should perhaps also allow AI to play a role in it: not as a replacement for the human debate, but as a different kind of intelligence that can make an additional contribution.

“People suffer from all kinds of cognitive biases, limitations in our thinking. As humans, we are always looking for confirmation: confirmation bias. We read the newspapers in which we recognize ourselves, but not the publications that say things we disagree with. AI might be able to read these different sources in a much more balanced way and compare them with each other. And there are more cognitive limitations we suffer from as human beings, simply because we are who we are. So we could build machines that suffer less from them. We should not leave the final decision-making to those machines, but they can help us overcome our limitations.”

In which areas do humans remain indispensable?

“An important reason people are so successful at working together is that we are constantly aware of what the other wants; by responding to each other’s goals, we can cooperate well. An AI system has no idea about that. Such a system can analyze an X-ray very well, for example, but understands nothing of the context. Is it an old or a young patient? Has this person been treated before? Is the patient sick at all, or is this just part of a larger examination? Awareness of each other’s goals and of the context are important elements that until recently were ignored within AI research. And it is precisely these elements that become important if you want people and machines to work together in a meaningful way.”

Which direction will this development take in the coming years?

“Real predictions are hard to make. It has only been a little over a decade since the first iPhone came on the market, and look at the world today. What I do know for sure is that we will increasingly recognize that human intelligence is different from machine intelligence and that the two can complement each other very well. You see it even with something as simple as digging a hole to pull a new cable through the ground: the excavator may be excellent, but it still needs a human to work with it. We have to let go of the idea that machine intelligence will eventually become like human intelligence. The two are very different, so we have to find out how they can best strengthen each other. As a result, we will become more and more used to intelligent computers as partners, working together in teams of people and machines.

“In five years, we want our group to be the first to publish a scientific article with a computer as co-author. That means the computer has to help with every step of the scientific research. I want to come into my office on Monday morning and be told by the system: ‘Hi Frank, I’ve just read those 50,000 articles, and that conversation we had on Friday now looks very different.’ But that computer also has to help me come up with new research questions, carry out further experiments, interpret the data from those experiments, and write the article. Only then will we be able to say that the computer is a co-author.”
