ML Hiding in Plain Sight

Researchers have shown that nearly any physical system can be turned into a neural network.

Nick Bild
Laser-based PNN (📷: L. Wright et al.)

In most cases, deep neural networks (DNNs) rely on traditional compute resources, such as CPUs, GPUs when heavy parallelization is needed, or microcontrollers for small edge devices. Fantastic advancements in machine learning have been made in recent years with the help of these types of hardware to slog through the computations. But as methods continue to grow more sophisticated, so too do the computational requirements, and with them comes a hardware energy efficiency problem that increasingly limits new DNN developments.

One way this growing problem has been addressed is through the development of special-purpose DNN accelerator hardware. These accelerators are quite rigid in the tasks they can perform, as the hardware is designed to mimic the chain of mathematical operations that a DNN performs. Further, these devices are rarely able to perform model training, and instead focus on inference. Recent work by researchers at Cornell University has produced what they call physical neural networks (PNNs), which allow neural network hardware to be constructed from nearly any controllable physical system. These networks built on physical systems are far more energy-efficient than traditional compute methods, and can also perform certain types of computations much more quickly. The team further demonstrated methods for training their PNN models with the help of the physical system itself, which reduces the error that would be introduced if training were done fully in simulation.

The training method is a hybrid approach that uses the PNN for the forward pass and an in silico algorithm for backpropagation. This setup allows the model's parameters to be adjusted accurately, even in the presence of modeling errors and physical noise. Experiments conducted by the researchers showed that this approach led to models that performed dramatically better than those trained entirely in silico.
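To make the idea concrete, here is a minimal sketch of that hybrid scheme in PyTorch. This is not the team's actual code: `physical_system` is a hypothetical stand-in for querying real hardware (faked here with a noisy nonlinear function), and `digital_model` is an assumed differentiable surrogate used only to compute gradients during the backward pass.

```python
import torch

def physical_system(x, params):
    # Hypothetical stand-in for the hardware's forward pass.
    # A real measurement would be noisy and imperfectly modeled.
    return torch.tanh(x @ params) + 0.01 * torch.randn(x.shape[0], params.shape[1])

def digital_model(x, params):
    # Assumed differentiable simulation of the same system,
    # used only to estimate gradients.
    return torch.tanh(x @ params)

class PhysicsAwareLayer(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, params):
        ctx.save_for_backward(x, params)
        # Forward pass: run (or measure) the physical system itself.
        return physical_system(x, params)

    @staticmethod
    def backward(ctx, grad_output):
        x, params = ctx.saved_tensors
        # Backward pass: backpropagate through the digital surrogate instead.
        with torch.enable_grad():
            x_ = x.detach().requires_grad_()
            p_ = params.detach().requires_grad_()
            y = digital_model(x_, p_)
            grad_x, grad_p = torch.autograd.grad(y, (x_, p_), grad_output)
        return grad_x, grad_p

# Train the tunable parameters with an ordinary optimizer.
params = torch.randn(16, 10, requires_grad=True)
optimizer = torch.optim.SGD([params], lr=0.1)
x = torch.randn(32, 16)
labels = torch.randint(0, 10, (32,))
for _ in range(100):
    logits = PhysicsAwareLayer.apply(x, params)
    loss = torch.nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the loss is computed on the hardware's actual (noisy) outputs, the parameters are pushed toward settings that work on the physical device, not just in a simulation of it.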

In one PNN constructed by the team, laser light was reflected off of a series of mirrors and passed through a pulse shaper whose settings served as the tunable model parameters. A nonlinear crystal then transformed the colors of the laser light into new colors by combining photon pairs. The resulting light was analyzed by a spectrometer, and the measured spectrum served as the output prediction of the model. This PNN was trained to recognize handwritten numbers, and achieved an impressive average classification accuracy of 97%.
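For intuition, that optical pipeline can be mimicked digitally. The toy sketch below is a loose numerical analogue, not the real physics: the function name `laser_pnn_forward` and the use of a squared Fourier spectrum as a stand-in for the crystal's frequency conversion are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def laser_pnn_forward(image, shaper_weights):
    pulse = image.flatten()                    # encode pixels onto the "pulse"
    shaped = pulse * shaper_weights            # pulse shaper = tunable parameters
    spectrum = np.abs(np.fft.rfft(shaped))**2  # quadratic nonlinearity, crystal-like
    bins = np.array_split(spectrum, 10)        # one spectral band per digit class
    return np.array([band.mean() for band in bins])

image = rng.random((28, 28))                   # stand-in for a handwritten digit
weights = rng.standard_normal(28 * 28)
scores = laser_pnn_forward(image, weights)
print("predicted class:", scores.argmax())
```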

The researchers also demonstrated a multimode mechanical oscillator PNN, constructed from a commercially available speaker and a titanium plate. In another case, they built a minimal PNN using only a resistor, a capacitor, an inductor, and a transistor. Showing the adaptability of these physical systems to varied applications, the team also trained the laser-based PNN to recognize the sound of spoken vowels.

A number of unexpected devices were able to perform quite well as machine learning classifiers, but the team noted that not every physical system is well suited to every machine learning task. They are currently exploring which physical systems perform best across a variety of tasks.

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.