Cloud Computing (Literally)

EmergencyNet delivers aerial image classification, tuned for drones, that approaches state-of-the-art accuracy.

Nick Bild

Unmanned aerial vehicles (UAVs) have found a niche in giving emergency responders extra sets of eyes in the sky. Compared to traditional aerial search methods, which typically involve an aircraft and a human crew, UAVs are inexpensive and can be deployed in far greater numbers. They can also fly into situations that would be too dangerous to send human responders into.

When a large number of UAVs is sent out, onboard image recognition lets dangerous situations that need an urgent response be identified quickly, without waiting for someone to review the captured images. However, due to tight weight and power constraints, carrying the requisite hardware may not be possible in many cases. Transferring image data to the cloud for processing is not always an option either, as connectivity may be severely limited in remote or disaster-stricken areas.

To address this issue, a research team from the University of Cyprus has developed a new technique, called EmergencyNet, for efficiently classifying aerial images.

The first step in training an image classification neural network is data collection. According to the researchers, there is no widely used, publicly available dataset for emergency response applications, so they took it upon themselves to create one. AIDER (Aerial Image Dataset for Emergency Response applications) was built by manually classifying aerial images into four disaster categories: Fire/Smoke, Flood, Collapsed Building/Rubble, and Traffic Accidents. The images were sourced from publicly available channels such as Google Images, Bing Images, YouTube, and news websites.
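If the collected images are kept in one folder per class, a few lines of PyTorch are enough to load such a dataset for training. This is only a sketch of that common setup; the directory name "aider", the folder names, and the 224x224 input size are illustrative assumptions, not details taken from the paper.

# Minimal sketch of loading a class-per-folder aerial image dataset.
# The "aider/" directory layout and the input size are assumptions for illustration.
import torch
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects a layout such as aider/fire_smoke/..., aider/flood/..., etc.
dataset = datasets.ImageFolder("aider", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

print(dataset.classes)  # e.g. ['collapsed_building', 'fire_smoke', 'flood', 'traffic_accident']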

Next, the authors tested several neural network architectures to find the best fit for the application. They found that an atrous convolutional network offered the best mix of accuracy and efficiency. Atrous convolutions allow features to be recognized at different scales, which is critically important for aerial imagery, where the same object can appear at widely varying sizes depending on altitude.
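As a rough illustration of the idea, and not the authors' exact EmergencyNet architecture, an atrous (dilated) convolution block can run several 3x3 convolutions in parallel with different dilation rates, so each branch sees a different effective receptive field before the results are fused. The branch widths and dilation rates below are arbitrary choices for the sketch.

# Sketch of a multi-dilation ("atrous") convolution block in PyTorch.
# Channel widths and dilation rates are illustrative, not the EmergencyNet configuration.
import torch
import torch.nn as nn

class AtrousBlock(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding=dilation keeps the spatial size fixed.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        # Fuse the concatenated multi-scale features back down to out_ch channels.
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [self.act(branch(x)) for branch in self.branches]
        return self.act(self.fuse(torch.cat(feats, dim=1)))

x = torch.randn(1, 3, 240, 240)      # dummy aerial image tensor
print(AtrousBlock(3, 16)(x).shape)   # torch.Size([1, 16, 240, 240])

Because every branch shares the same input, the block captures fine detail and wider context at once without the cost of larger kernels, which is the efficiency argument for atrous convolutions on constrained hardware.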

EmergencyNet was found to run efficiently on low-power embedded platforms while achieving up to 20 times higher performance compared to existing models. Only minimal memory is required, and less than a 1% accuracy drop was found when compared to state-of-the-art models. Not too shabby for computing up in the clouds.
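For context on how a trained classifier typically ends up on a small board (the paper's own deployment pipeline is not detailed here), one common route is to freeze the model into a portable graph that an embedded runtime can load. The tiny stand-in network below is purely hypothetical and serves only to show the export step.

# Illustrative sketch only: exporting a small classifier for an embedded target.
import torch
import torch.nn as nn

# Hypothetical tiny classifier standing in for EmergencyNet (4 disaster classes).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 4),
).eval()

example = torch.randn(1, 3, 240, 240)
scripted = torch.jit.trace(model, example)   # freeze the graph for on-device use
scripted.save("emergencynet_sketch.pt")      # later: torch.jit.load("emergencynet_sketch.pt")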

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.