U.S. Army Researchers Boost Distributed Deep Learning Efficiency by Up to 70 Percent

By having nodes communicate only when significant changes have been made to the model, the researchers overcome a key bottleneck in distributed deep learning.

A pair of researchers working for the U.S. Army has developed a means of boosting communication efficiency in distributed deep learning systems by up to 70 percent: getting the nodes to talk only when there have been significant changes to the model.

"There has been an exponential growth in the amount of data collected and stored locally on individual smart devices," says Dr. Jemin George, an Army scientist at the U.S. Army Combat Capabilities Development Command's Army Research Laboratory. "Numerous research efforts as well as businesses have focused on applying machine learning to extract value from such massive data to provide data-driven insights, decisions and predictions."

"This research tries to address some of the challenges of applying machine learning, or deep learning, in military environments," adds co-author Dr. Prudhvi Gurram. "Early indications and warnings of threats enhance situational awareness and contribute to how the Army evolves and adapts to defeat adversarial threats."

By triggering communication on major changes, network efficiency is improved. (📷: George et al)

A key aspect of adapting deep learning systems for battlefield use is moving away from a centralized design to a distributed layout, in which the data is no longer reliant on a single system or connection. This increases resiliency and allows performance to scale with the number of nodes involved, in theory at least.

"Distributed learning algorithms typically require numerous rounds of communication among the agents or devices involved in the learning process to share their current model with the rest of the network," George explains of the problem. "This presents several communication challenges."

Chief among these challenges is the overhead associated with communicating between nodes in the distributed network, an overhead the researchers claim to have reduced by up to 70 percent in ideal scenarios. They do so through a triggering system that tells nodes to communicate their model to neighboring nodes only if there have been significant changes since the last transmission.
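The triggering idea can be illustrated with a short sketch. The class below is hypothetical and not the paper's exact algorithm: a node takes local gradient steps as usual, but only transmits its parameters when they have drifted beyond a threshold from the last copy it sent, skipping the per-round broadcasts a conventional scheme would make.

```python
import math
import random

def norm(vec):
    """Euclidean norm of a parameter vector."""
    return math.sqrt(sum(x * x for x in vec))

class TriggeredNode:
    """Illustrative node that broadcasts only on significant model change."""

    def __init__(self, model, threshold):
        self.model = list(model)          # current local parameters
        self.last_sent = list(model)      # snapshot at the last broadcast
        self.threshold = threshold        # drift needed to trigger a send
        self.broadcasts = 0               # count of actual transmissions

    def local_step(self, gradient, lr=0.1):
        """Apply one local gradient-descent update."""
        self.model = [m - lr * g for m, g in zip(self.model, gradient)]

    def maybe_broadcast(self):
        """Transmit only if the model drifted past the threshold."""
        drift = norm([m - s for m, s in zip(self.model, self.last_sent)])
        if drift > self.threshold:
            self.last_sent = list(self.model)
            self.broadcasts += 1
            return list(self.model)       # would be sent to neighbors
        return None                       # stay silent, saving bandwidth

# Toy run: 20 local update rounds trigger far fewer than 20 broadcasts.
random.seed(0)
node = TriggeredNode([0.0, 0.0, 0.0], threshold=0.3)
for _ in range(20):
    node.local_step([random.gauss(0, 1) for _ in range(3)])
    node.maybe_broadcast()
print(node.broadcasts)
```

A fixed-schedule scheme would transmit on every one of the 20 rounds; here, quiet rounds cost nothing, which is the source of the bandwidth savings the researchers describe.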

The pair's paper has been published under open access terms on arXiv.org.

ghalfacree

Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.
