A new deep learning system can detect natural disasters from images posted on social media. An international group of researchers applied computer vision tools that, once trained on 1.7 million photographs, proved capable of analyzing, filtering, and detecting real disasters, according to a press release from the Universitat Oberta de Catalunya (UOC).

  • A computer vision system based on deep learning detects natural disasters by drawing on a database of social media images.
  • The model can filter out false positives.
  • The technology can speed up humanitarian aid during emergencies.

As global warming progresses, natural disasters such as floods, tornadoes, and forest fires are becoming more frequent and more devastating. Because there are still no tools to predict where or when such incidents will occur, it is vital that emergency services and international cooperation agencies can respond quickly and effectively to save lives. “Fortunately, technology can play a key role in these situations. Social media posts can be used as a low-latency data source to understand the progression and aftermath of a disaster,” Àgata Lapedriza, professor of computer science at the UOC, explained.

Models for incidents

Previous research focused on analyzing text posts, but this work, published in IEEE Transactions on Pattern Analysis and Machine Intelligence, went further. During a stay at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Lapedriza helped develop the taxonomy of incidents and the database used to train the deep learning models, and ran experiments to validate the technology.

The researchers drew up a list of 43 incident categories, covering natural disasters (avalanches, sandstorms, earthquakes, volcanic eruptions, droughts, etc.) and accidents involving some element of human intervention (plane crashes, construction accidents, etc.). Together with 49 place categories, this list enabled the researchers to label the images used to train the system.

The authors built a database named Incidents1M, containing 1,787,154 images, which were then used to train the incident detection model. Of these, 977,088 images had at least one positive label linking them to one of the incident categories, while 810,066 had class-negative labels. For the place categories, 764,124 images had class-positive labels and 1,023,030 had class-negative labels.
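To make the labeling scheme concrete, here is a minimal Python sketch of what a single record in a dataset organized this way might look like, with explicit positive and negative labels for both incident and place categories. The field names and example values are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical record structure for an Incidents1M-style dataset: each image carries
# explicit positive AND negative labels for incident and place categories.
# All names and values below are illustrative, not the real schema.
from dataclasses import dataclass, field


@dataclass
class LabeledImage:
    url: str
    incidents_positive: set[str] = field(default_factory=set)  # e.g. {"flood"}
    incidents_negative: set[str] = field(default_factory=set)  # explicitly "not this incident"
    places_positive: set[str] = field(default_factory=set)     # e.g. {"coast"}
    places_negative: set[str] = field(default_factory=set)


example = LabeledImage(
    url="https://example.com/photo.jpg",
    incidents_positive={"flood"},
    incidents_negative={"fire"},   # a negative label: useful for suppressing false positives
    places_positive={"coast"},
)
```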

Avoiding false positives

These negative labels meant the system could be trained to eliminate false positives: a photograph of a fireplace, for example, does not mean the house is on fire, even though the two share some visual features. Once the database was built, the team trained a model to detect incidents “based on a multi-task learning paradigm and employing a convolutional neural network (CNN)”.
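As a rough illustration of that setup, the sketch below shows a multi-task CNN in PyTorch with separate heads for incidents and places, trained with a per-class binary cross-entropy that is applied only where a positive or negative label exists (unlabeled classes are masked out). The backbone, layer sizes, and dummy training step are assumptions for illustration, not the authors' exact model.

```python
# Minimal multi-task CNN sketch (assumed setup, not the paper's exact architecture):
# one shared backbone, one head per task, and a masked per-class binary loss so that
# only explicitly labeled (positive or negative) classes contribute to training.
import torch
import torch.nn as nn
from torchvision import models

NUM_INCIDENTS, NUM_PLACES = 43, 49  # category counts from the article


class IncidentModel(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any CNN backbone would do here
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep the features, drop the classifier
        self.backbone = backbone
        self.incident_head = nn.Linear(feat_dim, NUM_INCIDENTS)
        self.place_head = nn.Linear(feat_dim, NUM_PLACES)

    def forward(self, x):
        feats = self.backbone(x)
        return self.incident_head(feats), self.place_head(feats)


def masked_bce(logits, targets, mask):
    """Binary cross-entropy counted only where mask == 1 (i.e., labeled classes)."""
    loss = nn.functional.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1)


# One illustrative training step on a dummy batch; the place loss would be
# computed the same way and added to the incident loss.
model = IncidentModel()
images = torch.randn(4, 3, 224, 224)
inc_targets = torch.zeros(4, NUM_INCIDENTS)   # 1 = positive label, 0 = negative label
inc_mask = torch.zeros(4, NUM_INCIDENTS)      # 1 = this class was labeled at all
inc_targets[:, 0], inc_mask[:, 0] = 1.0, 1.0  # pretend class 0 is labeled positive
inc_logits, _ = model(images)
loss = masked_bce(inc_logits, inc_targets, inc_mask)
loss.backward()
```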

When the deep learning model had been trained to detect incidents in images, the team ran a range of experiments to test it, using a huge volume of images downloaded from social media, including Flickr and Twitter. “Our model was able to use these images to detect incidents, and we checked that they correspond to specific, recorded incidents, such as the 2015 earthquakes in Nepal and Chile,” Lapedriza said.
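For completeness, here is a small hedged sketch of how detections might be read off such a model at inference time: apply a sigmoid to the incident logits and keep the classes whose confidence clears a threshold. The category names and the 0.5 threshold are illustrative assumptions.

```python
# Reading detections off the model (illustrative): threshold per-class sigmoid scores.
import torch

INCIDENT_NAMES = ["earthquake", "flood", "wildfire"]  # illustrative subset of the 43 classes


def detect_incidents(logits: torch.Tensor, threshold: float = 0.5) -> list[str]:
    probs = torch.sigmoid(logits)
    return [INCIDENT_NAMES[i] for i, p in enumerate(probs.tolist()) if p >= threshold]


print(detect_incidents(torch.tensor([2.3, -1.1, 0.4])))  # -> ['earthquake', 'wildfire']
```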

Helping humanitarian aid

Using real data, the authors demonstrated the potential of a tool based on deep learning for obtaining information from social media about natural disasters and incidents requiring humanitarian aid. “This will help humanitarian aid organizations to find out what’s happening during disasters more effectively and improve the way humanitarian aid is managed when needed,” she said.

Building on this achievement, the next challenge could be, for example, to use the same images of floods, fires, or other incidents to automatically determine how serious they are, or even to monitor them more effectively over time. The authors also suggested that the scientific community could follow up the research by combining the analysis of images with that of the accompanying text, to enable more accurate classification.