Increasingly, Facebook shapes our view of the world. We see the opinions of our friends in their responses to news that may or may not be fake. But which messages we see first and which remain hidden are controlled by algorithms – algorithms based on artificial intelligence (AI). And these algorithms are the work of a large research team led by Professor Yann LeCun, making LeCun one of the world’s leading and most influential technologists.

Mark Zuckerberg asked LeCun in 2013 to build the AI capability within Facebook, at a time when the link between social media and AI was still unexplored. LeCun had by then built a formidable reputation in academia, and his algorithms were being used in industry – for years, a significant share of handwritten cheques were processed using his image-recognition models.

Philips, Signify and the TU/e awarded Yann LeCun the prestigious Holst Memorial Medal 2018. Radio4Brainport’s Jean-Paul Linnartz spoke to LeCun on his visit to Eindhoven. Listen to the full interview here.

(Also: Listen to the Radio4Brainport podcast: The 2018 Holst Memorial Lecture with Yann LeCun).

On the importance of providing an environment for creative research, such as that which Dr Gilles Holst created at NatLab

“I started my career at Bell Labs, which was very much modelled on this idea that you do research in an open way and which is scientist-driven research – so, bottom up, with a lot of freedom to work on whatever topic seems relevant or interesting. And this is one of the things that I have tried to reproduce to some extent at Facebook AI Research (FAIR), to maximise the creativity and the way to go forward. Not just to advance technology, but to advance science, which I think is necessary for the domain of AI.”

(See also: Facebook’s head of AI delivers Holst Memorial lecture, says open innovation is a route to faster scientific progress).

Facebook wouldn’t work without deep learning

“It actually is almost exactly five years ago, on 9 December 2013, that it was announced that I would be joining Facebook. What had happened was that, over the preceding months, Mark Zuckerberg and the leadership at Facebook had identified that AI was going to be the key technology for the next decade, and so they decided to invest in that. And that turned out to be true. Facebook is entirely constructed around deep learning nowadays. If you take deep learning out of Facebook, it doesn’t work anymore.”

AI has significant implications for healthcare – and will save lives

“Probably one of the most exciting applications and developments these days is computer vision – the application of deep learning, and convolutional networks in particular, to medical imaging. It is one of the hottest topics in radiology these days. One idea, for example, is that by using deep-learning-based reconstruction, we could accelerate the collection of data from an MRI machine, which means the test would be cheaper, simpler and faster – which means people can have more of it, essentially. The analysis can be done automatically, so one can have a fast turnaround for diagnosis. Medical imaging, I think, is one of the biggest applications, and it is going to save lives.”
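For readers unfamiliar with the term, a convolutional network is built from layers that slide small filters over an image and, at each position, compute a weighted sum of the pixels underneath. A purely illustrative sketch of that core operation – not from the interview, and using a fixed hand-written edge filter rather than the learned filters a real medical-imaging model would use:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2-D cross-correlation: the basic op in a convolutional layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Weighted sum of the image patch under the filter
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny synthetic "scan": dark on the left, bright on the right
image = np.zeros((5, 5))
image[:, 2:] = 1.0

# A classic vertical-edge filter (Sobel); a trained network learns such weights
sobel_x = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]], dtype=float)

edges = conv2d(image, sobel_x)
print(edges)  # nonzero where the window straddles the dark/bright boundary,
              # zero where the image is uniform
```

In a deep network, many such layers are stacked and the filter weights are learned from data rather than fixed in advance.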

On the view that machines learn from humans, but that humans don’t learn from computers

Yann LeCun and Jean-Paul Linnartz, 2018


“It is not entirely true that we don’t learn from machines. For example, people have gotten better at playing chess and Go, because they have played against machines, and with machines. If the machine is better than you at a particular task, you get better at it, because you use it to educate yourself. Generally, what is most powerful is the combination of a machine and a person – an expert in the field.

So, machines are there to complement and to empower us, but not to replace us. I am not one of those people who believe that radiologists are going to be replaced by an AI system. It is not the case. There are going to be just as many radiologists, except that their jobs are going to change. Instead of having to spend eight hours a day in a dark room looking at slices of MRIs, they might be able to actually talk to patients or spend more time on complicated cases.”

Preparing for a career in AI – math, math and more math!

“In AI, in fact, you have to study more math than you otherwise would if you worked on regular computer science. Regular computer science – at least in North America, though it is partly true in Europe as well – does not have a huge requirement for mathematics, and most of what it does require is discrete mathematics. But working on machine learning, AI, neural nets, deep learning, computer vision and robotics actually requires a lot more continuous math – the kind of math that we used to study forty years ago in the engineering programme. Interestingly, many of the methods that are useful for analysing what happens in a deep-learning system come from statistical physics, for example. What I tell young students who want to get into AI is: if you are ambitious, take as many math courses as you can. Take multivariate calculus, partial differential equations, and things like that. And also study physics: quantum mechanics, statistical physics.”
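The continuous math LeCun refers to appears as soon as you train even the simplest model. A purely illustrative sketch – not from the interview – fitting a one-parameter model y = w·x by gradient descent, where the update rule is nothing more than the derivative of the squared error:

```python
# Training data generated by the "true" relationship y = 2 * x
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

w = 0.0    # initial guess for the parameter
lr = 0.05  # learning rate

for _ in range(100):
    # Loss L(w) = sum (w*x - y)^2, so calculus gives dL/dw = sum 2*x*(w*x - y)
    grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys))
    # Gradient descent: step against the slope of the loss
    w -= lr * grad

print(round(w, 4))  # converges to w = 2.0
```

Deep learning scales this same idea to millions of parameters, which is why multivariate calculus, rather than discrete math, does the heavy lifting.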

AI combined with domain knowledge, physical devices and hardware is a great opportunity for Brainport

“There are lots of opportunities in new kinds of hardware. Of course, NXP is right in that business. I think over the next five to ten years we are going to see neural-net accelerator chips popping up in just about everything we buy. Everything that has electronics in it will have a neural-net accelerator chip. Within a couple of years, it will be the case for mobile phones, cameras, vacuum cleaners, every toy. Every widget with electronics in it, if you want, will have some sort of neural-net chip in it. So, there are a lot of opportunities for that kind of industry. Signify can place AI at the edge rather than in the cloud. We are going to see a move from the cloud to the periphery – to mobile devices and eventually to Internet of Things devices.”

China has a vast interest in AI

“China is interesting because it is investing massively in AI. The interesting thing in China is that the public itself is very interested in AI. China is one of the two countries where I am recognised on the streets. Not in the US [where I live]. Only in China and in France. In France because I am French, but in China because there is so much interest in AI, that it is everywhere, absolutely everywhere. The thing is, the Chinese have an advantage, in that they have a very large home market. And a disadvantage, in that they are a completely isolated ecosystem in terms of online services. That is going to make it difficult for them to export their services.”

Facial recognition technology: Both benign and nefarious uses

Yann LeCun in Eindhoven. Source: Radio4Brainport

“Facial recognition is one of the things that made Facebook interested in deep learning in the first place. In the spring of 2013, a small group of engineers at Facebook started experimenting with convolutional networks for image recognition and for face recognition, and they were getting really, really good results. Within a few months, they beat all the records and published a really nice paper, called DeepFace, at the Conference on Computer Vision and Pattern Recognition in 2014. That was deployed very quickly: you post a picture, your friends are in the picture and they get tagged automatically, and they can choose to tag themselves or not. At first, it was not turned on in Europe, but now it is turned on in Europe on a voluntary basis. Unfortunately, a very similar technology, using convolutional nets – which are kind of my invention – has been deployed very widely in China, on a grand scale, and it is used to spy on people, essentially. So, there are nefarious uses of technology that, thankfully, the democratic institutions in many countries protect us against, but that is not the case everywhere. There is a very big difference between China, Europe and the US. The US and Europe are getting closer together: Facebook is now applying GDPR-like rules in the US as well. Those are good rules.”

No, Europe does not need its own Facebook in order to ensure it keeps up with AI technology

“Actually, no, it is not necessary for Europe to develop its own Facebook. The reason is that there are several parts to developing AI. One part is developing new methods, new algorithms – new science – making the field go forward. For this, you don’t need a Facebook or a Google. You need funding for research, a good infrastructure for universities, a large computational infrastructure that is accessible to researchers, and industry support. And all of that could exist in Europe.”

Myth: You need big data for AI

“There is this myth that somehow you cannot develop new AI techniques if you don’t have access to enormous amounts of data, as Facebook, Google and Microsoft do. It is not the case. At FAIR, for example, we almost exclusively use public data, because we want to be able to compare our algorithms to other people’s. So, we don’t use internal data. Once we have something that works, of course, we work with engineering groups, and they try it on internal data. But to actually make research go forward, you don’t need the data that companies like Facebook have access to. You do need the drive from the applications, of course, to be able to motivate enough people to work on this. What makes FAIR possible is that Facebook is a large company, is well established in this market, and has enough profits or cash to finance long-term research. It used to be the case for Philips: Holst’s creation was a forward-looking, fundamental lab, and I had friends working there 20 years ago. This is not the case anymore. Bell Labs is the same: it used to be a leading light, and now it is a shadow of its former self. That is true for a lot of industry research labs across the world, particularly in Europe. Today in Europe, if you want to find an advanced research lab in information technology in industry, there just aren’t many that practice open research on a grand scale.”

(See also: Why Europe should have its own AI centre).

My advice to Brainport-based companies on AI technology? Get ambitious and go big

“It is up to companies like Philips or NXP or others, that are sufficiently forward-looking and have enough resources to really get into this, to create ambitious research labs. If you are not ambitious enough about the goals of a research lab, it is going to be second-rate. And if you want to be ambitious about it, it has to be open. That means the culture is very different. If you are a company that builds widgets, you tend to be very secretive about your research and development.”

(See also: Tomorrow is good: The ten commandments of Holst).

“It is the case for Apple, for instance. Apple is nowhere to be seen on the research circuit for AI. They develop the technology around AI, but they don’t really push the science of AI forward, because they build widgets and they have a secretive culture. The companies that move the field forward are the ones that are not secretive and are not too possessive about intellectual property. And that puts them in a good position to hire, to innovate, to propose tools that other people use, so it makes it easier to make progress. Practice open research. That is my recommendation.”

Open source is essential for faster innovation: Facebook basically doesn’t believe in patents

Yann LeCun (c) TU/e

“There is no need for protection. What makes the value of a technology is how fast you can bring it to market. For a company, you have a choice: you can work with universities, which is relatively cheap, and try to get new innovations from them by hiring students, taking on interns, or having research contracts with universities. But that is a relatively slow process, with a lot of friction in technology transfer. The main issue with technology transfer is not whether you have the best technology; it’s whether you believe this good technology is something that you can do something with.

The situation we sometimes find ourselves in is that we think we have the best system for, say, classifying text, translating languages or recognising speech. We open source it and, of course, talk to the engineering groups at the same time. And the engineering groups, you know, they are doing their thing; they don’t have a lot of bandwidth, and they have to reallocate their resources in order to pick up on new technology and make progress. So, they have to believe that what you bring to them is really very useful. And what we do is, we put it in open source and we can point to it and say, “Look, it has 5 000 stars on GitHub and it is used by 200 companies other than us. Isn’t that embarrassing?!” Convincing product groups and engineering groups that your technology is good is the main obstacle to technology transfer.

If you have an in-house research group, even if you practice open research, even if you open source everything, you will get there first. And that is the only thing that matters. You don’t need to protect it. Facebook basically doesn’t believe in patents.”

Independent

Innovation Origins is an independent news platform, which has an unconventional revenue model. We are sponsored by companies that support our mission: spreading the story of innovation. Read more here.
