Rescue operations, for example after earthquakes, can be life-threatening for the rescuers. Again and again, courageous men and women put their lives at risk when they search for survivors buried somewhere in the rubble. Firefighting operations are just as dangerous: firefighters are repeatedly injured or even killed in the line of duty. And think of the cave diver who died in Thailand last summer while a group of young people was trapped in a flooded cave.
In such dangerous situations, people can increasingly count on help from robots. In rescue operations, firefighting missions or deep-sea inspections, mobile, self-learning robots can relieve humans of dangerous or unhealthy tasks. Depending on the circumstances, some operations, such as deep-sea research, would not only become more economical; in certain cases robots may be the only way to carry them out at all.
Active support
“Learning systems work autonomously, on their own, in hybrid teams with other learning systems, or together with humans. As assistants, they assess risks to people, and they are also able to act appropriately and independently in a given situation,” states a report by the Learning Systems Platform that was presented at the Karlsruher Institut für Technologie (KIT).
In the report, the researchers describe two possible application scenarios for such robots. “The use of artificial intelligence opens up enormous opportunities for our society. Especially in disaster management, in the decommissioning of nuclear power plants and in the maritime domain, there is great potential to effectively support specialists with artificial intelligence,” says Professor Holger Hanselka, president of the Karlsruher Institut für Technologie and member of the steering committee of the Learning Systems Platform.
The platform has set up an interdisciplinary working group to discuss how learning systems for life-threatening environments can be developed and used for the benefit of people. “IT security will be extremely important, especially for autonomous systems used in crisis situations. KIT’s research therefore focuses not only on protecting the external boundaries of a complex IT system, but also on protecting each individual component.” KIT contributes its IT security expertise to the Learning Systems Platform.
The working group on Life-threatening Environments expects that in about five years’ time, artificial intelligence will be able to support people in disaster response as well as in reconnaissance and maintenance missions. In the “Rapid rescue assistance” application scenario, the scientists show how AI-supported robot systems could assist firefighters on the ground and from the air in the event of a fire at a chemical factory.
Using multi-sensor technology, the systems can “quickly create a detailed picture of the situation, establish a communication and logistics infrastructure for rescue work, search for injured people and identify sources of danger,” say KIT scientists. In the application scenario “autonomous underwater operation”, robotic underwater systems maintain the foundations of an offshore wind turbine. They can navigate the deep sea on their own and, if necessary, request support from divers or remote-controlled systems.
Technical obstacles
The researchers admit that there are still a number of obstacles to be overcome before such systems can actually be used. One of these obstacles is autonomous learning in unfamiliar environments; another is the collaboration between autonomous robots and humans.
“The demands placed on learning systems in hostile environments are particularly high: they must be intelligent and at the same time robust against extreme conditions, and they must be able to function reliably under unpredictable circumstances,” says Jürgen Beyerer, head of the Learning Systems Platform’s working group on Life-threatening Environments. “Until such systems reach that level of autonomy, they can be remotely operated by emergency personnel, and the data collected can be used to develop intelligent functions. Step by step, the systems will achieve a higher degree of autonomy and can improve themselves further through machine learning.”
Bureaucratic hurdles
In addition to the technical challenges, some difficult legal questions must also be clarified. Who is responsible? What about liability and insurance? What happens if these systems cause damage? How can they be protected against theft? These are just some of the questions that arise before such systems can actually be used. In international areas of application, the question of property rights also arises. “For example, under current international maritime law, unmanned systems in international waters may be appropriated by the finder.”
In addition, the processing of personal data can raise privacy and data protection issues. “Such cases may occur when learning systems are used in disaster relief or firefighting operations and data about affected persons is collected and passed on.” A formal framework “with technical, legal and ethical levels” must also be found for situations in which several people need help but the robot can only take care of one of them.
Even if solutions are found to all these questions, one thing is certain: despite all artificial intelligence, human intelligence will remain indispensable. The KIT researchers are well aware of this. “There is no doubt that humans, as operators and decision-makers, will remain irreplaceable in the future, especially in operations to save human lives.”