Self-driving cars © Ramin Hasani

Technologies such as search engines and self-driving cars are based on artificial intelligence, and behind them are complex neural networks. Until now, the underlying mathematical models could only be implemented with enormous computing power and remained difficult for humans to comprehend. Professor Radu Grosu, head of the Cyber-Physical Systems research group at the Vienna University of Technology (TU Wien) in Austria, has been thinking for years about how models from biological nervous systems could make artificial networks better and more comprehensible.

Verifiability

Neural networks – like the brains of living creatures – consist of many individual cells. When a certain task needs to be solved, an active cell sends a signal to other cells. The sum of the signals received by a cell determines whether that cell becomes active as well. How strongly the cells influence one another's activity is not fixed in advance: these connection strengths are the parameters that are adjusted in an automatic learning process until the neural network can solve the task.


The learning process is hardly visible from the outside and its result is often incomprehensible to humans, which is why such systems are referred to as black boxes. To strengthen trust in artificial intelligence, systems are needed that can be verified by humans.

Most recently, Professor Grosu tried to solve this problem in an international research project, together with researchers from the Massachusetts Institute of Technology (MIT) in Cambridge, MA, U.S., TU Wien, and the Institute of Science and Technology Austria (IST Austria). The biological model was provided by Caenorhabditis elegans, a nematode worm and one of the most important model organisms in biology. It is small, transparent, undemanding, and needs only three days for its development. Although it has only around 300 nerve cells, it shows remarkably interesting behavioral patterns. This is due to the efficient and harmonious way in which its nervous system processes information, explains Professor Grosu.

The basis for neural networks

Nevertheless, “compared to the structure of deep neural networks (DNNs), the nervous system of C. elegans seems chaotic at first sight,” says Mathias Lechner, a PhD student in the Henzinger Group at IST Austria. The researchers found the basis for a novel neural network in the functions of the neurons. Lechner explains: “Neuroscientists divide the neurons of the system into sensory, inter-, and motor neurons. Among the interneurons there is also the subgroup of command neurons, in which important signaling pathways are concentrated. For example, in the C. elegans nervous system there are command neurons for forward and backward crawling. The forward and backward movements themselves, however, are much more complex and require the cooperation of multiple neurons and muscles.”

To capture the efficiency and harmony of the C. elegans nervous system, the research team developed new mathematical models for neurons and synapses. In these models, the processing of signals within individual cells follows different mathematical rules than in existing deep learning models. A further simplification is that not every cell is connected to every other cell.
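
As a rough illustration of what such a continuous-time, sparsely wired neuron model can look like, the sketch below advances a small network of leaky neurons with saturating synapses by one forward-Euler step. The parameter names (`tau`, `w`, `E_rev`), the exact equations, and the integration scheme are assumptions made for illustration only; the paper's precise neuron and synapse models differ in detail.

```python
# Illustrative sketch only: a continuous-time neuron update in the spirit of
# the biologically inspired models described above, NOT the paper's equations.
import numpy as np

def euler_step(v, pre, tau, w, E_rev, dt=0.01):
    """Advance neuron potentials v by one forward-Euler step.

    v     : (n,)   current neuron potentials
    pre   : (m,)   presynaptic activations driving the neurons
    tau   : (n,)   per-neuron leak time constants
    w     : (n, m) synaptic weights; zeros where no synapse exists (sparse wiring)
    E_rev : (n, m) synaptic reversal potentials
    """
    sigma = 1.0 / (1.0 + np.exp(-pre))            # saturating synaptic nonlinearity
    drive = (w * (E_rev - v[:, None])) @ sigma    # synapses pull v toward E_rev
    dv = -v / tau + drive                         # leak term plus synaptic current
    return v + dt * dv

# Sparse wiring: most weights are zero, mirroring the fact that not every
# cell is connected to every other cell.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 6)) * (rng.random((4, 6)) < 0.3)
v = np.zeros(4)
v = euler_step(v, pre=rng.random(6), tau=np.ones(4), w=w, E_rev=np.ones((4, 6)))
```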

Test scenario

The test scenario was lane keeping in autonomous driving: based on a camera image of the road, the neural network automatically decides whether to steer to the right or to the left. In existing deep learning models, this task requires millions of parameters. The research group’s new model, however, manages with only 75,000 trainable parameters.

In preparation for the test, large amounts of video of human-driven cars in the Boston area were collected. These were fed into the network, together with information on how the car was steered in each situation. Training continues until the system has learned the correct link between image and steering direction and can also handle new situations on its own.
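
In essence, this is supervised imitation learning: the network is trained to reproduce the human driver's steering command for each camera frame. The sketch below shows a minimal version of such a training loop; the toy data, the placeholder network, and the hyperparameters are assumptions for illustration, not the setup used in the paper.

```python
# Hedged sketch of an imitation-learning loop: regress the recorded human
# steering command from the corresponding camera frame.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for (camera frame, recorded steering angle) pairs.
frames = torch.randn(256, 3, 66, 200)      # N x C x H x W camera images
steering = torch.randn(256, 1)             # human steering commands

loader = DataLoader(TensorDataset(frames, steering), batch_size=32, shuffle=True)

model = nn.Sequential(                     # placeholder policy network
    nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
    nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), # -> (N, 16) feature vector
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 1),                      # predicted steering angle
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):                     # repeat until image -> steering link is learned
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)        # penalize deviation from the human command
        loss.backward()
        optimizer.step()
```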

The artificial intelligence model consists of two parts: a convolutional network and a control system. Both subsystems are first trained together. The convolutional network's task is to recognize structural features in the visual data from the camera. It passes the relevant parts of the camera image on, in the form of signals, to the control system, which steers the vehicle.

The neural network control system (called “neural circuit policy,” or NCP), which translates the data from the visual network into a steering command, consists of only 19 cells. This makes it three orders of magnitude smaller than existing state-of-the-art models.
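
The sketch below illustrates this two-part structure: a convolutional perception head that compresses the camera frame into a handful of signals, feeding a very small recurrent control network that emits the steering command. Mapping the 19-cell NCP onto a generic GRU cell, as well as the layer sizes and class names, are illustrative assumptions and not the paper's exact design.

```python
# Hedged sketch of the perception-head-plus-tiny-controller architecture.
import torch
from torch import nn

class ConvHead(nn.Module):
    """Extracts structural image features from the camera frame."""
    def __init__(self, out_features=19):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.project = nn.Linear(64, out_features)

    def forward(self, x):
        return self.project(self.features(x))

class TinyControl(nn.Module):
    """Small recurrent control network standing in for the 19-cell NCP."""
    def __init__(self, in_features=19, hidden=19):
        super().__init__()
        self.cell = nn.GRUCell(in_features, hidden)
        self.steer = nn.Linear(hidden, 1)

    def forward(self, feats, h):
        h = self.cell(feats, h)            # update the controller's internal state
        return self.steer(h), h            # emit steering command and new state

head, control = ConvHead(), TinyControl()
frame = torch.randn(1, 3, 66, 200)         # one camera frame
h = torch.zeros(1, 19)                     # controller state
steering, h = control(head(frame), h)      # one steering command per frame
```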

Interpretable neural networks

This model makes it possible to examine exactly where the neural network focuses its attention during driving. It concentrates on very specific areas of the camera image: the roadside and the horizon. This behavior is highly desirable and unique among artificial intelligence systems, he said.
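
One common way to visualize where a vision model "looks" is a gradient-based saliency map, sketched below: pixels whose changes would most affect the steering output are the ones the network attends to. This is only an illustrative stand-in with a placeholder model; the paper uses its own attention analysis.

```python
# Hedged sketch: gradient-based saliency over the input frame.
import torch
from torch import nn

model = nn.Sequential(                     # placeholder steering model
    nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1),
)

frame = torch.randn(1, 3, 66, 200, requires_grad=True)
steering = model(frame)
steering.sum().backward()                  # d(steering) / d(pixel)

# Pixels with large gradient magnitude are those the steering output is most
# sensitive to, e.g. road edges and the horizon.
saliency = frame.grad.abs().max(dim=1).values   # (1, H, W) saliency map
```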

“Classic deep neural networks (DNNs) can also learn the correct foci, but where the focus lies can only be determined after the training process; which elements will be attended to cannot be predicted before training. Our C. elegans-inspired model implicitly influenced the training process to focus on the horizon. Why and how the model influenced the training to deliver this result is not answered in the research, so it remains an open research question.” Mathias Lechner, PhD student, IST Austria.

In addition, he said, the role of each individual cell in each individual decision can be identified. The function of the cells can be understood and their behavior explained. This level of interpretability has so far been impossible in larger deep learning models, the researchers said.

Robustness

Another shortcoming of existing deep learning models is that they cope poorly with degraded image quality, which experts refer to as image noise. The novel system improves on this as well, as shown by an analysis in which the neural network was confronted with artificially corrupted images. According to the researchers, this robustness is a direct consequence of the new model's concept and architecture.
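
A simple robustness check in the spirit of that analysis is sketched below: corrupt the camera frames with artificial noise and compare the model's steering outputs against its predictions on the clean frames. The placeholder model and the noise levels are assumptions for illustration, not the evaluation protocol used in the paper.

```python
# Hedged sketch: measure how much the steering output drifts under image noise.
import torch
from torch import nn

model = nn.Sequential(                     # placeholder steering model
    nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1),
)
model.eval()

frames = torch.randn(64, 3, 66, 200)       # clean evaluation frames

with torch.no_grad():
    clean_out = model(frames)
    for sigma in (0.05, 0.1, 0.2):          # increasing amounts of image noise
        noisy = frames + sigma * torch.randn_like(frames)
        drift = (model(noisy) - clean_out).abs().mean().item()
        print(f"noise sigma={sigma}: mean steering drift {drift:.4f}")
```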

The methods used also reduce training time, which makes it possible to implement artificial intelligence even in relatively simple systems. The deep learning model developed by the international research team makes imitation learning possible in a wide range of applications, from automated work in warehouses to robot motion control.

The research results were published in Nature Machine Intelligence:
M. Lechner, R. Hasani, A. Amini, T. Henzinger, D. Rus, R. Grosu (2020). Neural circuit policies enabling auditable autonomy. Nature Machine Intelligence. https://www.nature.com/articles/s42256-020-00237-3
