Computational neuroscientist Wolfgang Maass has been conducting research at the Institute for the Foundations of Information Processing at Graz University of Technology in Austria for 20 years, where he is pursuing a conceptual framework and algorithmic methods for a brain model of the mouse. He is particularly interested in energy-efficient concepts as observed in the biological brain. He has previously succeeded in data-based modeling of specific functions in the mouse brain. Now, for the first time, he and postdoctoral researchers Chen Guozhang and Franz Scherr have trained a detailed large-scale model of the mouse brain and enabled it to simulate the function of vision.
The vision function in AI
Vision is of particular interest in computer science because it is one of the central functions of artificial intelligence, used in areas such as autonomous driving and image processing. Data from the environment is collected via sensors and forwarded to appropriately trained algorithms, which interpret this data and learn from it.
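As a rough illustration of this pipeline, the sketch below trains a small classifier on images of handwritten digits in TensorFlow. The dataset, architecture, and parameters are illustrative assumptions, not the brain model described in this article.

```python
# Minimal sketch of the standard AI vision pipeline described above:
# sensor data (here: small grayscale images) is fed to a trained algorithm
# that learns to interpret it. Purely illustrative; not the brain model.
import tensorflow as tf

# A stand-in for "sensor data": handwritten digits (MNIST).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # normalize pixel values

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),  # raw pixels in ...
    tf.keras.layers.Dense(128, activation="relu"),  # ... abstraction ...
    tf.keras.layers.Dense(10),                      # ... class scores out
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=1)
model.evaluate(x_test, y_test)
```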
Decoding the visual cortex
The fact that researchers are now able to reproduce specific visual functions in their data-based brain model is due to the achievements of the renowned Allen Institute for Brain Science in Seattle, Washington, U.S. Among other things, the institute is dedicated to decoding the visual cortex of mice and recently published decisive data. “This was the first time we got the data we needed to breathe life into our general know-how,” Maass says.
Training biological neurons
The Allen Institute data provided a valuable framework for the biological visual network, but it had an enormous number of gaps. That’s because a neural network’s knowledge resides in the weights of the synaptic connections between neurons, which are hard to measure. The researchers used machine learning to fill in the missing biological data through mathematical optimization. However, neural networks that closely resemble biology are not as easy to train. “Biological neurons have an unsteady mode of operation. Unlike artificial neurons, they send out action potentials, not a slowly changing value. This means that the gradient method, used in numerics to solve general optimization problems, cannot be applied,” explains Professor Maass. In the gradient method, one moves from a starting point along a descent direction until no further numerical improvement is obtained.
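A common workaround in the field is the so-called surrogate gradient: the all-or-nothing spike is kept in the forward pass, but its undefined derivative is replaced by a smooth stand-in during backpropagation. The following is a minimal sketch of that idea in Python; the function names and constants are illustrative, and the paper’s actual training method may differ in detail.

```python
# Sketch of the "surrogate gradient" idea often used to train spiking
# networks. The spike is a hard threshold whose derivative is zero almost
# everywhere, so plain gradient descent stalls; a smooth pseudo-derivative
# is substituted during the backward pass.
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: all-or-nothing action potential (Heaviside step)."""
    return np.heaviside(v - threshold, 1.0)

def surrogate_grad(v, threshold=1.0, steepness=10.0):
    """Backward pass: smooth stand-in for the step's derivative
    (here: derivative of a steep sigmoid centered at the threshold)."""
    s = 1.0 / (1.0 + np.exp(-steepness * (v - threshold)))
    return steepness * s * (1.0 - s)

# One-weight toy problem: push the neuron to spike, loss = (spike - 1)^2.
w, x, target, lr = 0.8, 1.0, 1.0, 0.5
for _ in range(20):
    v = w * x
    err = spike(v) - target
    # chain rule, with the surrogate in place of d spike / d v
    grad_w = 2 * err * surrogate_grad(v) * x
    w -= lr * grad_w
print(w, spike(w * x))  # w has been pushed above threshold -> neuron spikes
```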
Brain-like visual performance
Using data from the Allen Institute and their own software, the researchers developed a detailed, large-scale biological model of the primary visual cortex of the mouse. Tests conducted at the Jülich Supercomputing Center showed that the novel brain model can solve multiple visual processing tasks. For example, it can classify images of handwritten digits or detect visual changes in a long sequence of images. The virtual brain model performed with high accuracy, comparable to that of the mouse brain, even when exposed to noise in the images or in the network that it had not encountered during training. Because of this robustness to noise, the biological model outperforms current AI models for visual processing. The researchers attribute this to the fact that their model replicates several characteristic coding properties of the brain.
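A robustness check of the kind described could be sketched as follows, reusing the model and test data from the earlier classifier sketch. The Gaussian noise model and the noise levels are illustrative assumptions.

```python
# Sketch of a noise-robustness test as described above: evaluate a trained
# classifier on inputs perturbed with noise it never saw during training.
# (Noise type and levels are illustrative assumptions.)
import numpy as np

def accuracy_under_noise(model, x_test, y_test, sigma):
    """Accuracy on test images with added Gaussian pixel noise."""
    noisy = np.clip(x_test + np.random.normal(0.0, sigma, x_test.shape), 0.0, 1.0)
    _, acc = model.evaluate(noisy, y_test, verbose=0)
    return acc

for sigma in (0.0, 0.1, 0.3, 0.5):  # noise levels unseen during training
    acc = accuracy_under_noise(model, x_test, y_test, sigma)
    print(f"sigma={sigma}: accuracy={acc:.3f}")
```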
The researchers used standard tests for artificial neural networks, but four of the tasks on which the model was trained can also be learned by mice; for example, a mouse can detect visual changes in a long sequence of images. This allows the team to compare the mouse’s performance directly with that of its model.
Neural networks in AI
However, vision might still function differently in mice, since biology and artificial intelligence take fundamentally different approaches to achieving visual function in neural networks. When they analyzed their novel brain model, the researchers found both features consistent with findings from biological experiments and features that differ from them. “This is why AI-based models are very limited in inferring visual function in the brain,” Maass explains. He continues: “Our paper is one of the first to try to illustrate this bifurcation in biology and AI clearly.”
Low energy consumption
“Some of the coding properties of the biological brain would be nice to copy in AI,” the researcher says, such as the so-called sparse activity of neurons, which are mostly inactive and thus do not consume energy. This is made possible by a so-called mixture of experts: local networks in the brain, each specialized in a particular competence, whose experts respond only when they have something to contribute to the task at hand.
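The mixture-of-experts principle can be sketched generically: a gating function routes each input to only the few experts relevant to it, so most experts remain silent. The sketch below is an illustration of the concept in Python, not the brain’s circuitry; all sizes and weights are arbitrary.

```python
# Generic sketch of a sparse mixture of experts: a gate activates only the
# top-k experts per input, leaving the rest inactive (cf. sparse activity).
import numpy as np

rng = np.random.default_rng(0)
n_experts, dim, k = 8, 16, 2

gate_w = rng.normal(size=(dim, n_experts))                # gating network
experts = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]

def mixture_of_experts(x):
    scores = x @ gate_w                      # how relevant is each expert?
    top = np.argsort(scores)[-k:]            # only k experts react
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()
    # the other n_experts - k experts stay silent and cost no computation
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = mixture_of_experts(rng.normal(size=dim))
print(y.shape)  # (16,)
```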
In contrast, the neurons in artificial neural networks in machine learning are active almost continuously, consuming an enormous amount of power. Many areas of industry are therefore interested in new approaches, and this biological prototype could inspire new designs, Maass said.
A benchmark
The Austrian researchers’ brain model provides a way to implement a hypothesis: it shows how computation is organized in the model, how it proceeds, and what the model can do under certain circumstances. As such, it forms a benchmark against which many other models from biology can be tested and compared. To compare the energy consumption of neurons in artificial and biological neural networks, the researchers will implement the brain model on a chip that Intel will provide.
Technical achievements of AI
The fact that the researchers were able to model the mouse brain in such detail and reproduce the function of vision is due in no small part to the enormous technical achievements of artificial intelligence in recent years. Until recently, the two approaches to neural networks were also reflected in the software. There were two types: machine-learning software such as TensorFlow and PyTorch that ran on graphics processing units (GPUs), and software capable of simulating biological models, which could not yet perform any function.
Fast GPUs with high memory capacity
When the researchers tested the visual function they had modeled in the mouse’s brain on the supercomputer in Jülich, they had just received a new generation of graphics processors from Nvidia, characterized by high speed and large memory capacity. This was necessary because training the new model requires very fast simulations that must be carried out around 100,000 times, each time with different values of the weights.
This more powerful technology allowed the Austrian researchers to combine the different software concepts and show that biological models can also be simulated in TensorFlow and trained very efficiently. It is an innovation with an unexpected spinoff, Maass said: brain research can now use tools, software, and hardware from the AI-oriented industry as well.
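To illustrate how a biological neuron model can be expressed in machine-learning software, the sketch below wraps a leaky integrate-and-fire neuron as a custom Keras RNN cell so that it runs on standard GPU tooling. This is a generic illustration of the approach, not the researchers’ published code; training such a layer would additionally require a surrogate gradient as sketched earlier.

```python
# Sketch: a leaky integrate-and-fire (LIF) neuron layer expressed as a
# custom Keras RNN cell, so biological dynamics run on standard GPU tooling.
import tensorflow as tf

class LIFCell(tf.keras.layers.Layer):
    def __init__(self, units, decay=0.9, threshold=1.0, **kwargs):
        super().__init__(**kwargs)
        self.units, self.decay, self.threshold = units, decay, threshold
        self.state_size = units  # one membrane potential per neuron

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer="glorot_uniform", name="w")

    def call(self, inputs, states):
        v = states[0]
        # leaky integration of the input current
        v = self.decay * v + tf.matmul(inputs, self.w)
        spikes = tf.cast(v >= self.threshold, tf.float32)  # action potentials
        v = v - spikes * self.threshold                    # reset after spike
        return spikes, [v]

# Unroll 100 time steps of activity for a batch of input spike trains.
layer = tf.keras.layers.RNN(LIFCell(32), return_sequences=True)
out = layer(tf.random.uniform((8, 100, 64)))  # (batch, time, inputs)
print(out.shape)  # (8, 100, 32)
```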
Vision function in autonomous driving
When technology companies like Intel and IBM work on alternative methods of vision on chips, the focus is on energy efficiency and on integrating intelligence into vision, says Maass. The vision function in AI systems is based on bottom-up information: the camera provides pixel information that becomes more abstract as it is processed. The brain acts differently. It very quickly combines the information from the eye with top-down information, that is, experience-based information from other brain regions that enables an appropriate response to the visual event.
Missing contextual information
“What we don’t really understand yet is how bottom-up and top-down information is brought together effectively in the brain. If you happen to capture just the top-down information, then you’re hallucinating and seeing something that you might be imagining, but that’s not really there. So the top-down information must be there to support information coming from below, without dominating it,” the researcher explains.
For example, if a large plastic bag or cardboard box is blown across the road in front of an autonomously driving car, the car is faced with the decision to brake or not. In this case, humans are superior to AI: a human has experience-based knowledge of how much cardboard boxes weigh and of what happens to their car or to other road users if they simply drive over one.
How brain areas work together
Maass says: “Through the pixels of the camera, you don’t really see the consequences. If you want to avoid the few but often devastating accidents with autonomous cars, then you also need some kind of top-down information. After all, you have to react quickly and can’t look first and think afterwards. That is where AI takes its cue from the visual function of the brain and looks to it for help.”
That is the next question that Maass, a basic researcher, will address: in a joint research project with the Allen Institute, he wants to investigate and model how different brain areas work together.
The results were published in the journal Science Advances.
Link to original publication: Anatomical and neurophysiological data on primary visual cortex suffice for reproducing brain-like robust multiplexing of visual function