A team from the Daegu Gyeongbuk Institute of Technology (DGIST) has developed a neural network designed to turn satellite and aerial imagery into maps — complete with automatically picked-out buildings.
Led by Jae Youn Hwang, professor at the DGIST Department of Information and Communication Engineering, the team set about applying deep-learning technology to object segmentation in aerial photography. The key: Being able to pick out and map buildings, even in low-resolution or otherwise low-quality shots.
The result is a neural network which, the team claims, can precisely pick out the boundaries of buildings in a given image — and convert them into high-accuracy maps, turning what would usually take a person more than a month into something a computer can handle in a matter of seconds.
"The neural network we developed in this study is a novel neural network that can extract objects from aerial and satellite images with high accuracy," Hwang claims of the team's work. "If this technology is further improved in the future, it is expected to be applied to various fields, such as medical imaging, and have a positive impact on the development of artificial intelligence technology."
The key to the network's performance: A new learning pipeline and an operator which analyzes the association between a building's boundary and entropy, resulting in a novel neural network which, the team says, easily outperforms its rivals at the task.
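The paper's actual operator isn't detailed here, but the general idea — that a segmentation model's uncertainty (entropy) tends to peak along object boundaries — can be sketched in a few lines. The following NumPy example is purely illustrative and not the team's method: the function names and the toy two-class probability map are invented for demonstration.

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    """Per-pixel Shannon entropy of a (H, W, C) softmax output.
    High entropy marks pixels the model is unsure about, which in
    segmentation tend to cluster along object boundaries."""
    p = np.clip(probs, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def boundary_mask(labels):
    """True where a pixel's class differs from a 4-neighbour —
    a crude boundary detector for a hard label map."""
    b = np.zeros_like(labels, dtype=bool)
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    b[1:, :]  |= labels[1:, :] != labels[:-1, :]
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    b[:, 1:]  |= labels[:, 1:] != labels[:, :-1]
    return b

# Toy 4x4 map: confident "background" on the left,
# confident "building" on the right.
probs = np.zeros((4, 4, 2))
probs[:, :2, 0], probs[:, :2, 1] = 0.95, 0.05
probs[:, 2:, 0], probs[:, 2:, 1] = 0.05, 0.95
labels = probs.argmax(axis=-1)

ent = prediction_entropy(probs)   # per-pixel uncertainty
mask = boundary_mask(labels)      # building/background edge pixels
```

In a real model the entropy map would be highest along the building outlines, which is presumably why relating boundary and entropy helps sharpen the extracted footprints.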
The team's work has been published under closed-access terms as an early-access paper in the journal IEEE Transactions on Geoscience and Remote Sensing. DGIST has confirmed it is looking into commercialisation of the technology.