
A group of more than 150 European scientists is sounding the alarm: a brain drain in artificial intelligence (AI) threatens Europe. Talent increasingly chooses to go abroad, and investment remains low compared with North America and China. To prevent worse, the scientists are calling for a European AI research institute: the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE). Is it still possible to close the gap?

Holger Hoos is convinced that this brain drain will only increase if the EU does not intervene. Hoos is a professor of machine learning at the University of Leiden and one of the initiators of CLAIRE. “Artificial intelligence already has a significant influence on society; this influence will only increase in the coming years. AI is going to solve major problems and make medical breakthroughs. Even the climate problem could be solved. If we do not intervene, we will become dependent on other countries or companies. That would be disastrous for the European economy.”

Hoos does not rule out Europe losing control over the development of artificial intelligence: “Without research of our own, you cannot keep up. Science will lag behind. Moreover, you have no idea how the AI works, because you have little insight into the data it contains. Are you sure there has been no manipulation? If we fall behind, this question will become increasingly relevant.”


Competition

In 2016, China made it clear that it wants to be the world leader in AI by 2030. The country is opening its wallet: massive investments will follow, though exactly how much is unclear. We do know that the city council of Tianjin, a northern port city of almost 16 million inhabitants, is pouring some EUR 4.3 billion into AI and is constructing an ‘intelligent industrial zone’ of more than 20 square kilometres in the city. Other regions are investing hundreds of millions in artificial intelligence. Major technology companies in the US, such as Google, Apple and Amazon, have spent some 16 to 24 billion euros on developing the technology.
Europe, by contrast, has kept quiet for a long time. In 2016, European investments amounted to EUR 3.2 billion, compared with EUR 9.7 billion in Asia and EUR 18.6 billion in North America. The European Commission does not want to fall further behind and now recognises the value of AI. Over the next three years, more than EUR 4 billion will be spent on AI, of which EUR 1.5 billion comes from the EU and EUR 2.5 billion from public-private partnerships. The EU hopes to increase the amount to more than EUR 20 billion with contributions from the Member States and industry. The EU is also mobilising EUR 500 million to support startups in this sector. And this startup fund is not the only initiative intended to stimulate business.

Two AI initiatives and a startup fund

Shortly after this EU budget was announced, a group of institutes presented a proposal to keep AI talent within the EU. ELLIS, as the initiative is called, focuses mainly on machine learning and on setting up research hubs where spin-offs are given the opportunity to develop. This will require a one-off EUR 600 million, plus EUR 90 million a year to maintain these hubs. CLAIRE also wants to stimulate entrepreneurship in AI; exactly how, and how much money will go towards it, is not yet clear. Won’t two AI initiatives and a startup fund with largely overlapping goals get in each other’s way? Hoos is not afraid of this: “We are working together to improve the research structure. While ELLIS focuses on machine learning, at CLAIRE we focus on all parts of AI. Artificial intelligence is more than machine learning.” Hoos explains that this covers a range of AI applications and issues: the development of AI that learns to recognise skin diseases, for example, or game-playing AI that plays thousands of games against itself to eventually outsmart a human being. That is the practical side. But with CLAIRE, Hoos also aims to address ethical issues.
Hoos: “We only develop technology that is people-centred. AI is going to change society in many areas. Take labour. People will have to work less in the future, but if it goes wrong, many jobs will simply disappear. This creates inequality. Our aim is precisely to prevent this from happening. It is a great responsibility to deal with this appropriately. That is why CLAIRE looks at the world of AI through European values such as equality, privacy, transparency and democracy. ELLIS also stands for these values, and by working together we strengthen each other. We exchange projects to spread knowledge, so that each is aware of the other’s activities. This reduces the chance that we end up getting in each other’s way. ELLIS also has a significant responsibility to society: machine learning has a big impact on it.”
That impact is not always positive. Think of the Twitter chatbot Tay, which Microsoft launched in 2016: within 24 hours, Tay went from an ‘innocent’ chatbot to one spouting racist abuse. This was just an experiment, and all Microsoft had to do was take the chatbot offline. But what if an algorithm decides on your job? Or on whether or not you can buy a house? According to the scientists, we should indeed be concerned about the mistakes such algorithms may make.


Dangers of AI

AI does not exist without human bias. Joaquin Vanschoren is an assistant professor in machine learning at Eindhoven University of Technology and another signatory of CLAIRE. Vanschoren: “A machine has no will of its own. People, often unconsciously, have a preference for a certain outcome, and people design the algorithms, which makes this bias difficult to eliminate.” Hoos adds: “People see patterns that are not there, or they assume that because something has happened before, it will happen again. I could give many more examples. It would be a significant step forward if these reasoning errors were no longer found in AI. That is why we will also include other scientific disciplines, such as cognitive psychology, in future research.”
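A minimal sketch of this mechanism (not the researchers’ own example; the data, groups and outcomes below are entirely invented): a naive model trained on historically skewed decisions simply reproduces the skew it was given, so the bias comes from the people and the data, not from the machine.

```python
# Toy illustration with invented data: a naive "model" trained on historically
# skewed hiring decisions reproduces the skew it was shown.
from collections import Counter

# Hypothetical history: otherwise identical candidates, but group "A" was
# hired far more often than group "B".
history = [("A", "hired")] * 90 + [("A", "rejected")] * 10 \
        + [("B", "hired")] * 30 + [("B", "rejected")] * 70

# "Training": predict the majority outcome observed for each group.
model = {
    group: Counter(label for g, label in history if g == group).most_common(1)[0][0]
    for group in ("A", "B")
}

print(model)  # {'A': 'hired', 'B': 'rejected'} -- the bias is learned from the data
```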
Another danger lurking in artificial intelligence, according to the scientists, is that some neural networks are so complicated that even their designers often do not know how a decision is reached: the so-called black box. Self-driving cars are a good example. Hoos: “Such a system contains so many parameters that it is difficult for researchers to predict how a decision will be made. Science does not yet fully understand how the brain works. The same applies to some deep neural networks. We do not understand them yet.”
Vanschoren nods in agreement: “This is a tricky point, although a surgeon cannot always explain precisely why he makes a particular choice in a split second either. That is a combination of experience and intuition. AI is a mathematical model, so decisions are calculated. But a visualisation or model explanation that provides insight into how such a determination is made is often lacking. We are investigating forms of AI that can reason, or that can be linked to logical thinking, so that they can give insight into the decision-making process.”
But ethical dilemmas also play a role: a neural network that uses profile photos to determine someone’s sexual orientation, for instance. Is that the direction in which the technology should go? If it is up to the CLAIRE scientists, it is not.

Companies

The scientists believe that companies should also be more open. The researchers worry that knowledge will stay locked up inside companies, for example to make more money. Hoos: “Large companies must keep shareholders satisfied, which means a focus on the short term. The developments that result from this seldom focus on the interests of people. Think of algorithms that influence the buying behaviour of shoppers without those people realising it.”
Vanschoren also expresses his concerns about this: “Some banks use algorithms to determine whether or not someone gets a loan. Users must be able to see exactly which values the algorithm uses; this must be transparent. The same applies to a trading algorithm that is programmed to make as much profit as possible, regardless of the consequences. Is that humane technology?”
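To make the transparency point concrete, here is a minimal sketch with entirely invented features, weights and threshold: in a simple linear scoring model, the contribution of every input value to a loan decision can be shown to the applicant, which is exactly the kind of insight a large black-box network does not offer out of the box.

```python
# Hypothetical linear loan-scoring model: each feature's contribution to the
# decision can be read off directly and disclosed to the applicant.
weights = {"income": 0.4, "outstanding_debt": -0.6, "payment_history": 0.8}
threshold = 0.5  # invented cut-off for granting the loan

def score_with_explanation(applicant):
    contributions = {f: weights[f] * applicant[f] for f in weights}
    total = sum(contributions.values())
    return total >= threshold, contributions

approved, why = score_with_explanation(
    {"income": 1.2, "outstanding_debt": 0.5, "payment_history": 0.9}
)
print(approved)  # whether the (toy) loan is granted
print(why)       # per-feature breakdown the bank could show the user
```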
CLAIRE wants to develop AI that can be trusted. It intends to do this by drawing up guidelines that algorithms must comply with, and by pursuing a vision focused on the long term instead of the short-term objectives many companies chase. Hoos is afraid that people are not sufficiently concerned with the consequences of AI that has human-level intelligence: “In that scenario, AI has the same capacities as people, which could turn out to be problematic. We devote too little energy to investigating the consequences. AI may be programmed to do the right thing and carry out tasks with the right intentions, but the outcome can turn out to be different from what was intended. We still know too little about a world in which AI has human intelligence at its disposal. Personally, I think we should stay away from this form of AI.”
Vanschoren is milder, but emphasises that we should not be naive: “Awareness is difficult to define, so how can we know whether an AI is aware? But there are indeed dangers; artificial intelligence in military technology is one of them. CLAIRE can play a role here by devising guidelines and advising companies. In this way, we ensure that AI remains safe. Take autonomous driving. That is in the pipeline, but an increase in computing power is needed to develop it further in a responsible manner. This computing power is still lacking in the EU. By combining all forces in one place, not only will more computing power become available, but good researchers and prestigious research projects will also establish themselves there. In this way, we hope to create a leading institute.”


When do you speak of AI?

Holger Hoos, professor of machine learning at the University of Leiden: “Things that are a big challenge for a human being (for some people, chess is their life) can be solved relatively simply by a computer, using an algorithm or a neural network. But is that intelligence? It depends on how you define it. In the Turing test, a person converses with another person and with a computer without knowing which is which. The system wins when the player is no longer able to distinguish between man and machine. But does the system understand your questions? What does it do with the context? Is there any sense of empathy? In short: is this intelligence? Some systems may fake it reasonably well, but we are not yet at the stage of systems understanding emotions. You can also say that intelligence lies in the ability to learn something. That does not even require colossal brain capacity: octopuses prove that they can learn tasks. At the moment, AI is at a point where it is good at one specific task. But real human intelligence, where machines can combine tasks and understand why they do something, is not on the horizon for the time being.”


What is the current level of intelligence of AI based on the Turing test?

Sander Wubben, CEO of flow.ai, which develops chatbots: “At the moment, chatbots are reasonably able to fool people. By putting enough data into the network, a chatbot can make connections when someone asks something about Messi and Ronaldo, and it can link America and politics to Hillary and Donald when people talk about them. Some networks have also been trained to imitate emotions, but these are all tricks. To apply this flawlessly to every conversation you need an unimaginable amount of data. And even when a person only asks questions, a chatbot hits a ceiling: there is too little memory to return to points from earlier in the conversation. People have those contextual skills and can read between the lines; chatbots cannot.”

Megan Bloemsma, AI specialist at Microsoft: “Robots are very good at translating spoken text one-to-one, or they tell silly jokes. But you never feel like you are talking to a human being. The Google Assistant that can make an appointment for you makes clever use of the underlying data: the assistant has access to your calendar, sees when an appointment can be made, and finds the hairdresser in your contact list. That may seem very smart, but behind the scenes nothing spectacular is happening; it is the combination of different data that makes it look smart. For robots to really understand those words, or to understand sarcasm, will take quite a while yet.”
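What Bloemsma describes is largely plain data plumbing. A minimal sketch, with invented calendar entries and contacts, of how an assistant might combine a calendar’s free slots with a contact list to propose an appointment:

```python
# Invented example data: the apparent "smartness" is just joining two sources.
from datetime import datetime, timedelta

contacts = {"hairdresser": "+31 6 12345678"}
busy = [  # existing calendar entries as (start, end)
    (datetime(2018, 9, 10, 9, 0), datetime(2018, 9, 10, 12, 0)),
    (datetime(2018, 9, 10, 14, 0), datetime(2018, 9, 10, 15, 0)),
]

def first_free_slot(day_start, day_end, duration):
    """Return the first gap of at least `duration` between busy blocks."""
    cursor = day_start
    for start, end in sorted(busy):
        if start - cursor >= duration:
            return cursor
        cursor = max(cursor, end)
    return cursor if day_end - cursor >= duration else None

slot = first_free_slot(datetime(2018, 9, 10, 9, 0),
                       datetime(2018, 9, 10, 17, 0),
                       timedelta(minutes=45))
print(f"Call {contacts['hairdresser']} and propose {slot}")
```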


What are the significant benefits of AI?

Joaquin Vanschoren, assistant professor of machine learning at Eindhoven University of Technology: “A great advantage of AI is that it prevents human error. It can help us do things we cannot yet do, or even imagine, because it solves problems that currently hold us back. With AI, we will be held back less and less by human limitations.”

Holger Hoos: “Here at the University of Leiden, scientists are working on ways to prevent diseases; that is what AI is all about. I think that in the not too distant future, nine out of ten medical breakthroughs will be due to AI. But AI is also able to help solve climate problems: discovering patterns and choosing an efficient solution is no problem for an algorithm, especially now that deep neural networks are becoming more powerful. In the climate debate, people often see the wrong patterns because emotions weigh in; AI has no emotions, so they are not taken into account.”


What are the dangers of AI?

Sander Wubben: “I do not really believe in bots with human intelligence, bots that can think like humans. There is so much more to it than that; the human brain is still a mystery. AI can replace a lot of jobs in the future, but on the other hand, new jobs will also appear. Software needs to be updated, data needs to be entered, and there are many other AI-related jobs. The trick is to make sure that you are needed.”

Bloemsma: “Killer robots that decide for themselves what the target will be, for example. Fortunately, international agreements have been made on this: there must always be a person behind the decision. But it is a scary idea that a system could autonomously decide on life and death.”


CERN

Hoos compares CLAIRE with CERN, another leader in European science: “CERN is a household name. Pioneering discoveries come from there; the world wide web as we know it was conceived there. CLAIRE must become just as inspiring a place, a place where different disciplines come together to solve challenging issues.” Hoos wants an environment where researchers, students, PhD candidates and visitors are motivated. That requires a great deal of computing power, he thinks, and the research facilities must be state of the art. Hoos: “The AI community can share and develop ideas here. We are not a closed stronghold; we also need input from industry. I am critical of companies, but I am not saying that they are wrong or that we do not need them. The aim is to improve science so that developments can flow through to industry. It often works both ways.” Hoos is sure that improving science has a knock-on effect on talent. But more is needed.
Vanschoren also sees more and more talented people continuing their careers abroad. Vanschoren: “I have had offers before, but here I have the opportunity to do the research I like to do. There are many more opportunities outside the EU; the computing power of the tech giants is something we in Europe can only dream of at the moment. Retaining talent is not only a matter of science. Students who do not aspire to a career in science also need opportunities. We are looking for a culture where talent can take AI techniques off the shelf to start a business. In London, I met several startups that need AI technology to grow. We need to move towards a culture where this technology is available off the shelf for European startups.” The researchers hope that a kind of Silicon Valley will emerge in Europe, with startups that can compete with the US, Canada or China.

Business

In Canada, Toronto is one such place. Sander Wubben runs flow.ai, which in recent months has been operating from Toronto, where it took part in an accelerator programme run by Techstars. The company is now back in Tilburg. Flow.ai develops software for chatbots and offers it on an online platform where users can create their own chatbot. Wubben: “Canada is ahead in the field of artificial intelligence. There is plenty of investment in AI, the universities are doing well, and many large companies have an office or research lab there with excellent facilities. That, plus the high salaries, makes for an attractive cocktail that is difficult for talent to say no to.” In Tilburg, too, companies are queuing up for people who understand artificial intelligence. That is why Wubben thinks it is a good idea for the EU to invest in the development of the technology, although he doubts whether Europe can compete with the enormous sums involved in the US and Canada: “The culture is entirely different. There is more venture capital and investment, and the companies are much bigger. Here it is more difficult to get money, and sometimes you are dependent on subsidies. I welcome the fact that the EU is opting for a clear focus. AI is too broad to be at the top in all categories. It is good that the EU is focusing on the ethical dilemmas surrounding the technology; the Cambridge Analytica affair is an excellent example of why. I think that here we are more concerned with, for example, privacy when it comes to data. We are ahead of the rest in this respect. But can you compete against giants like Facebook or Google? For that, we lack a European version of Google.”

European head office

Hoos and Vanschoren would be happy to welcome a sizeable AI company with headquarters in Europe. “That is good for the competitive position,” says Hoos. “It gives talent more opportunities to settle in Europe. Amsterdam has Google Brain, where a lot of AI-related research takes place, and Microsoft, where AI is found in all kinds of applications.”
Megan Bloemsma is an AI consultant at Microsoft, and she does not agree with the scientists: “Why does it matter where the head office is located? Especially in this day and age, everything goes via the internet and the cloud. Microsoft is located all over the world, and it happens to be headquartered in America. So what? We are a global company with operations in Europe, Asia and America. For us, it does not matter where the head office is located. Nor are my colleagues only Europeans just because I happen to work in Amsterdam.” According to Bloemsma, it would make little difference to European competitiveness if Microsoft’s head office were located in Amsterdam: “Much of Microsoft’s research is shared online. It is available to everyone all over the world, so the location does not matter much.”
Bloemsma advises companies on how to apply AI in their business. Bloemsma: “I am trying to take their knowledge to the next level. And that is often necessary. Because AI is a very vague term, I always start by asking what they understand by it. I meet companies that say they do a lot with AI, but when I ask, it turns out that what they want is insight into their data. A dashboard or a visualisation, but that is not AI. Companies are quick to use the term because it sounds interesting.” In this respect, Bloemsma agrees with CLAIRE’s objective of increasing knowledge about artificial intelligence by improving education. According to her, this starts at primary school: “Children must learn to program and code. They do not have to become whizz-kids, but it is vital that they understand at a fundamental level how an algorithm works. AI is developing very fast; if we do not teach this to children now, we will soon have a problem.”
According to the CLAIRE scientists, it will not come to that, provided the EU invests in the research centre. In that way, they believe, the gap with China and North America can be kept to a minimum. Exactly how this will be done, and how much money they will need for it, is not yet known. On 7 September, they will organise an AI conference in Brussels, where they will discuss their plans further with the European Commission.
This story is an adapted version of the story that was online for a short time last week. This version emphasises that CLAIRE and ELLIS work closely together. The example the reporter gives about an algorithm that determines someone’s sexual orientation was not discussed with the scientists; however, they did indicate afterwards by e-mail that this is an example that goes against European values.

Picture: © Barrett Lyon / The Opte Project
