An Analog Network of Resistors Promises "Machine Learning Without a Processor," Researchers Say

Prototyped on a series of breadboards, this analog machine learning network could one day deliver more energy-efficient AI.

Gareth Halfacree
1 month ago • Machine Learning & AI

Researchers from the University of Pennsylvania have come up with an interesting approach to machine learning that could help to address the field's ever-growing power demands: taking the processor out of the picture and working directly on an analog network of resistors.

"Standard deep learning algorithms require differentiating large non-linear networks, a process that is slow and power-hungry," the researchers explain. "Electronic learning metamaterials offer potentially fast, efficient, and fault-tolerant hardware for analog machine learning, but existing implementations are linear, severely limiting their capabilities. These systems differ significantly from artificial neural networks as well as the brain, so the feasibility and utility of incorporating non-linear elements have not been explored."

Until now, that is. The team's research introduces a non-linear learning metamaterial: an analog electronic network of resistive elements built from transistors. It's not a traditional digital processor, and it can't do the tasks a traditional processor can do, but it is tailored specifically to machine learning workloads. It proved able to perform computations that a linear system can't handle, with no processor involved beyond an Arduino Due used to take measurements and interface with MATLAB.

"Each resistor is simple and kind of meaningless on its own," physicist Sam Dillavou, first author on the work, explains in an interview with MIT Technology Review. "But when you put them in a network, you can train them to do a variety of things."

The team has already demonstrated the same core technology in an image classification network, and its latest work extends the concept to non-linear regression and exclusive-OR (XOR) operations. Better still, it shows the potential to outperform the traditional approach of throwing the problems at digital processors: "We find our non-linear learning metamaterial reduces modes of training error in order (mean, slope, curvature)," the team claims, "similar to spectral bias in artificial neural networks."
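The significance of XOR here is that it's a classic test of non-linearity: no purely linear network, electronic or otherwise, can reproduce it. A quick numerical sketch makes the point (illustrative only, and not the researchers' actual training scheme): a best-fit linear model is stuck at an error of 0.5 on the XOR truth table, while adding a single non-linear term fits it exactly.

```python
# Why XOR demands non-linearity -- the capability the transistor-based
# metamaterial adds over earlier, purely linear resistor networks.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # XOR truth table

# Best linear fit w.x + b: least squares with a bias column.
A = np.hstack([X, np.ones((4, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
linear_error = np.abs(A @ coef - y).max()

# Add one non-linear feature (the product x1*x2) and the fit is exact.
A_nl = np.hstack([A, (X[:, 0] * X[:, 1]).reshape(-1, 1)])
coef_nl, *_ = np.linalg.lstsq(A_nl, y, rcond=None)
nonlinear_error = np.abs(A_nl @ coef_nl - y).max()

print(linear_error)      # 0.5: the best any linear model can do
print(nonlinear_error)   # ~0: one non-linear element suffices
```

The best linear fit collapses to the constant 0.5 for every input, which is why earlier linear metamaterial implementations were, in the researchers' words, "severely limited" in their capabilities.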

"The circuitry is robust to damage," the researchers continue, "retrainable in seconds, and performs learned tasks in microseconds while dissipating only picojoules of energy across each transistor. This suggests enormous potential for fast, low-power computing in edge systems like sensors, robotic controllers, and medical devices, as well as manufacturability at scale for performing and studying emergent learning."

There is, of course, a catch: in its current form, existing as a prototype spread across a series of solderless breadboards, the metamaterial system draws around ten times the power of a state-of-the-art digital machine learning accelerator. As it scales, though, Dillavou says the technology should deliver on its promise of increased efficiency, along with the ability to remove external memory components from the bill of materials.

The team's work has been published as a preprint on Cornell's arXiv server.

Main article image courtesy of Felice Macera.
