A hacker attack on autonomous vehicles could be catastrophic. A research team from the Max Planck Institute for Intelligent Systems (MPI-IS) and the University of Tübingen has now demonstrated that a simple patch of color can completely disrupt autopilots. And this splotch of color could be discreetly printed on a T-shirt, put on a rear-windshield sticker, or even worked into a logo on a shopping bag.
“It took us three, maybe four hours to construct the pattern – it went pretty fast,” notes Anurag Ranjan, PhD student in the Perceiving Systems Department at the MPI-IS in Tübingen, Germany.
Fortunately, there is no cause for alarm at the moment: the danger to the production models currently on the market is minimal. Nevertheless, as a precaution, the researchers informed a number of car manufacturers who are currently developing autonomous models, enabling them to react promptly to any potential risk.
Optical flow is disrupted
In their research, Anurag Ranjan and his colleagues Joel Janai, Andreas Geiger and Michael J. Black tested the resilience of a number of different algorithms for determining optical flow. Such systems are used in autonomous vehicles, robotics, medicine, video games and navigation. Optical flow refers to the apparent motion in a scene as captured by onboard cameras.
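To make the idea of optical flow concrete, here is a minimal, self-contained sketch of a classical block-matching estimator in NumPy. This is an illustrative toy only — the systems the researchers tested are deep neural networks, not block matchers — and all function names here are our own:

```python
import numpy as np

def block_matching_flow(prev, curr, block=8, search=4):
    """Toy optical flow: for each block in `prev`, exhaustively search
    a small window in `curr` for the best-matching block and report
    the displacement (dx, dy). Real systems use learned networks."""
    h, w = prev.shape
    flow = np.zeros((h // block, w // block, 2))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = prev[y:y + block, x:x + block]
            best_cost, best_dxy = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue  # candidate block falls outside the frame
                    cand = curr[yy:yy + block, xx:xx + block]
                    cost = np.abs(ref - cand).sum()
                    if cost < best_cost:
                        best_cost, best_dxy = cost, (dx, dy)
            flow[by, bx] = best_dxy
    return flow

# Shift a random frame right by 2 pixels; interior blocks should
# report a displacement of roughly (dx=2, dy=0).
rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, 2, axis=1)
flow = block_matching_flow(frame1, frame2)
```

An attack like the one in the article does not need to change this motion physically: it only has to add a static pattern that makes a learned estimator *compute* the wrong displacements.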
Recent developments in the field of machine learning have led to faster and more accurate methods for calculating motion. However, the research carried out by the Tübingen scientists shows that such methods are susceptible to errors: a simple, colorful pattern added to a scene as an adversarial signal is enough to disrupt them, even if the pattern itself does not move. It can cause the deep neural networks that are now commonly used for flow computation to make faulty calculations, so that the network suddenly computes that a large portion of the elements in a scene are moving in the wrong direction.
Scientists have already shown that even tiny color patterns can disrupt neural networks used for object classification; this is why objects such as stop signs have been misclassified in the past. The Tübingen researchers found that the algorithms used to determine the movements of objects are susceptible to the same kind of attack. In safety-critical applications such as autonomous vehicles this must not happen: these systems have to remain reliable in the face of such attacks.
Small patch with huge effect
The researchers have been working on the Attacking Optical Flow project since March last year. Over the course of their research, they were surprised by how much chaos even a small patch of color can cause: a patch covering less than 1% of the total image is enough to disrupt the system. This slight interference caused the system to make serious errors in its computations, affecting half of the image area. The larger the patch of color, the more devastating the impact.
“This is worrying because in many cases the flow control system blotted out the motion of objects across the entire scene,” Ranjan explains.
It is easy to imagine the damage a disabled autopilot could cause, especially when an autonomous car is driving at speed or through city traffic.
The inner workings of self-driving cars remain a secret
How some of these self-driving cars actually work in practice is still a secret known only to their respective manufacturers. That is why computer vision researchers can only speculate.
“Our work aims to shake the manufacturers of self-driving technology awake and warn them of the potential threat. If they know about it, they can train their systems to withstand such attacks,” says Michael J. Black, Director of the Perceiving Systems Department at the Max Planck Institute for Intelligent Systems.
One aim of the R&D team is to show the automobile industry how improved optical flow algorithms can be developed using what is known as “zero flow” testing.
“If we show the system two identical images and there is no movement between them, the algorithm should compute zero flow everywhere — in the usual color-coded visualization, the output should show no color at all. Yet this often isn’t the case, even without an attack. This is where the problems start. And this is where we have to start to fix what the net is doing wrong,” Ranjan explains.
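The zero-flow test Ranjan describes is straightforward to express in code. The sketch below is a hypothetical harness (the names are ours, not the team's tooling): it feeds the same frame twice to any flow estimator and checks that the reported motion is zero everywhere. The two toy estimators stand in for real networks — the second one has a deliberate constant bias, illustrating the kind of systematic error the test is designed to catch:

```python
import numpy as np

def zero_flow_check(flow_fn, frame, tol=1e-6):
    """Feed the same frame twice; a sound estimator must return ~0 flow."""
    flow = flow_fn(frame, frame)
    return float(np.abs(flow).max()) <= tol

def toy_flow(prev, curr):
    # Stand-in "estimator": uses the image difference as both flow
    # components. Identical inputs therefore yield exactly zero flow.
    d = curr - prev
    return np.stack([d, d], axis=-1)

def biased_flow(prev, curr):
    # Hypothetical faulty estimator with a constant bias, mimicking a
    # network that reports spurious motion even for identical frames.
    return toy_flow(prev, curr) + 0.5

frame = np.random.default_rng(1).random((16, 16))
zero_flow_check(toy_flow, frame)     # passes: no spurious motion
zero_flow_check(biased_flow, frame)  # fails: phantom motion everywhere
```

A deep network that fails this check is already computing phantom motion before any attacker gets involved, which is exactly the weakness the researchers propose to fix first.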
He and his team hope that their research will help raise awareness about this problem. Their goal is to get car manufacturers to take these types of attacks seriously. Consequently, they can adapt their systems accordingly to make them less susceptible to malfunctions.
The paper will be presented at the International Conference on Computer Vision (ICCV), the leading international conference on computer vision.