
Researchers led by Prof. Gordon Cheng at the Technical University of Munich (TUM) recently gave the robot H-1 a biologically inspired artificial skin, modeled on what is the largest organ in humans. With this skin, the humanoid can feel its body and its environment for the first time. While real human skin has around 5 million receptors, H-1 makes do with just over 13,000 sensors, distributed across its upper body, arms, legs and even the soles of its feet. Their purpose is to give the humanoid its own sense of a physical body. Thanks to the sensors on the soles of its feet, for example, H-1 can adapt to uneven ground and even balance on one leg.

Of far greater importance, however, is the robot’s ability to safely embrace a human being, and that is not as trivial as it sounds: robots can exert forces that would seriously injure a person. During an embrace in particular, a robot touches a human at several different points at once. From this complex data, it must quickly calculate the correct movements and the appropriate amount of force to apply.

“This may be less important for industrial applications, but in areas such as healthcare, robots have to be designed for very close contact with people,” Cheng explains.
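To make that requirement concrete, here is a minimal Python sketch of how per-contact force readings from the skin could feed a simple safety check. The threshold, names and logic are illustrative assumptions for this article, not the controller actually used on H-1.

```python
# Illustrative sketch only: cap the commanded force so that no contact
# point of an embrace exceeds an assumed safe limit. The limit and all
# names are hypothetical, not values from the TUM system.
from dataclasses import dataclass

MAX_SAFE_FORCE_N = 20.0  # assumed safety limit in newtons


@dataclass
class Contact:
    cell_id: int
    measured_force_n: float  # force currently sensed at this skin cell


def safe_command(contacts: list[Contact], desired_force_n: float) -> float:
    """Scale the desired force down if any contact point nears the limit."""
    peak = max((c.measured_force_n for c in contacts), default=0.0)
    headroom = max(MAX_SAFE_FORCE_N - peak, 0.0)
    return min(desired_force_n, headroom)


# Two contact points during an embrace: the commanded 15 N is cut to 8 N
# because one cell already senses 12 N of the 20 N budget.
print(safe_command([Contact(1, 12.0), Contact(2, 5.0)], desired_force_n=15.0))
```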

Biological models as a basis

The artificial skin combines biological models with algorithmic control. H-1’s skin is made up of hexagonal cells, each about the size of a €2 coin; the autonomous robot carries a total of 1,260 of them. Every cell is equipped with a microprocessor and sensors that measure proximity, pressure, temperature and acceleration. Thanks to its artificial skin, H-1 perceives its environment in a much more detailed and responsive way. This not only helps it move around safely, it also makes its interaction with people safer, because the robot can actively avoid accidents.
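To illustrate the data each cell produces, here is a minimal Python sketch. The sensor list comes from the article; the field names and units are assumptions made for this example.

```python
# Illustrative data structure for one skin cell's readings. The article
# states that each hexagonal cell carries a microprocessor and sensors
# for proximity, pressure, temperature and acceleration; the field names
# and units here are assumptions.
from dataclasses import dataclass


@dataclass
class CellReading:
    cell_id: int                              # one of the 1,260 cells on H-1
    proximity_m: float                        # distance to the nearest object
    pressure_pa: float                        # contact pressure on the cell
    temperature_c: float                      # surface temperature
    acceleration: tuple[float, float, float]  # 3-axis acceleration


reading = CellReading(cell_id=7, proximity_m=0.12, pressure_pa=0.0,
                      temperature_c=22.5, acceleration=(0.0, 0.0, 9.81))
print(reading)
```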

Event-driven programming frees up computing power

So far, the main obstacle in the development of robot skin has been computing power: previous systems were already running at full capacity while evaluating data from several hundred sensors. Considering the roughly five million receptors in human skin, the limitations quickly become clear.

To solve this problem, Gordon Cheng and his team chose a neuroengineering approach: instead of monitoring the skin cells continuously, they use event-driven programming, which reduces the computational workload by up to 90 percent. The key is that individual cells only pass on their sensor data when the measured values change. Our nervous system works in a similar way: we feel a hat the moment we put it on, but quickly get used to it and stop paying it any attention. We only become aware of it again once we take it off or it gets blown away. The nervous system is thus free to concentrate on other, new impressions that the body has to react to.
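A minimal Python sketch of that event-driven idea, under an assumed change threshold and message format: a cell stays silent while its reading is steady and only emits an event when the value moves noticeably.

```python
# Illustrative event-driven reporting for one skin cell. Instead of
# streaming every sample, the cell reports only when a reading differs
# from the last reported value by more than a threshold. The threshold
# and message format are assumptions, not parameters of the TUM system.
THRESHOLD = 0.05  # assumed minimum change worth reporting


class EventDrivenCell:
    def __init__(self, cell_id: int):
        self.cell_id = cell_id
        self.last_reported: float | None = None

    def sample(self, value: float) -> dict | None:
        """Return an event only if the value moved past the threshold."""
        if self.last_reported is None or abs(value - self.last_reported) >= THRESHOLD:
            self.last_reported = value
            return {"cell": self.cell_id, "value": value}
        return None  # unchanged reading: stay silent and save computing power


cell = EventDrivenCell(42)
for v in [0.0, 0.01, 0.2, 0.21, 0.5]:
    event = cell.sample(v)
    if event:
        print(event)  # fires only for 0.0, 0.2 and 0.5
```

This mirrors the hat analogy above: the nearly unchanged readings 0.01 and 0.21 are ignored, just as we stop noticing a hat once it sits still.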

Prof. Gordon Cheng © Astrid Eckert / TUM

Gordon Cheng, Professor of Cognitive Systems at TUM, designed the skin cells himself about ten years ago. However, the invention only reveals its full potential as part of a sophisticated system, which was recently featured in the specialist journal ‘Proceedings of the IEEE’.

More IO articles on this topic can be found here:

Top 10 Emerging Technologies (2): social robots

Could you love a robot?