"Self-Learning" Memristor Array Could Deliver High-Efficiency On-Device AI, ML

High-reliability memristors and self-learning error correction deliver a chip that gets better with practice, its creators claim.

Gareth Halfacree
2 months ago • Machine Learning & AI

Researchers from the Korea Advanced Institute of Science and Technology (KAIST), Sungkyunkwan University, the Electronics and Telecommunications Research Institute (ETRI), Yonsei University, and Seoul National University have developed a self-learning, error-correcting chip, based on memristor technology, which could deliver a big efficiency gain for on-device machine learning and artificial intelligence (ML and AI).

"This system is like a smart workspace where everything is within arm's reach instead of having to go back and forth between desks and file cabinets," say co-first authors Hakcheon Jeong and Seungjae Han of the team's work. "This is similar to the way our brain processes information, where everything is processed efficiently at once at one spot."

The chip itself is based on memristor technology, a portmanteau of "memory" and "resistor" first proposed as a fundamental electrical component in 1971 by Leon Chua, though delivering "ideal" memristors suitable for commercial use has proven a challenge. According to the researchers, the memristors used in their prototype exhibit high reliability, and the system as a whole uses a self-learning approach to correct errors, improving its efficiency.
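A memristor crossbar's appeal for machine learning comes from how it computes: with weights stored as cell conductances, a full matrix-vector product falls out of Ohm's and Kirchhoff's laws in a single analog step. The sketch below is a purely illustrative NumPy model of that ideal behavior, using the 32×32 array size mentioned by the researchers; the variable names, conductance ranges, and voltages are assumptions, not values from the paper.

```python
import numpy as np

# Illustrative model of an ideal 32x32 memristor crossbar.
# G[i, j] is the conductance (in siemens) of the cell joining row i
# and column j; these stand in for trained weights.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(32, 32))  # assumed conductance range
V = rng.uniform(0.0, 0.2, size=32)          # assumed read voltages on the rows

# Each cell passes current V[i] * G[i, j] (Ohm's law); each column wire
# sums its cells' currents (Kirchhoff's current law), so the column
# current vector is the matrix-vector product G^T V in one analog step.
I = G.T @ V

# The same result computed explicitly, cell by cell, for clarity:
I_check = np.array([sum(V[i] * G[i, j] for i in range(32)) for j in range(32)])
assert np.allclose(I, I_check)
```

In real hardware the conductances drift and the cells are nonlinear, which is why the team's self-calibration scheme matters; this sketch only captures the ideal case the researchers compare against.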

"Our platform, which consists of a selector-less (one-memristor) 1K (32×32) crossbar array, peripheral circuitry and digital controller, can run AI algorithms in the analog domain by self-calibration without compensation operations or pre-training," the researchers claim. "We illustrate the capabilities of the system with real-time video foreground and background separation, achieving an average peak signal-to-noise ratio of 30.49 dB and a structural similarity index measure of 0.81; these values are similar to those of simulations for the ideal case."
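The peak signal-to-noise ratio quoted above measures how closely a reconstructed frame matches a reference, in decibels. As a rough sketch of what a figure like 30.49 dB means, the following computes PSNR for a synthetic 8-bit image pair; the images, noise level, and helper name are illustrative assumptions, not the paper's data.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between two same-shaped 8-bit images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * np.log10(peak ** 2 / mse)

# Synthetic example: a random frame and a mildly perturbed copy of it.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-5, 6, size=frame.shape)
noisy = np.clip(frame.astype(int) + noise, 0, 255).astype(np.uint8)

print(f"PSNR: {psnr(frame, noisy):.2f} dB")
```

Higher is better: values around 30 dB, like the team's 30.49 dB average, generally indicate a reconstruction close to the reference. The structural similarity index (SSIM) they also report ranges from 0 to 1, with 1 meaning structurally identical images.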

The chip, connected to an FPGA for experimentation, is proposed as a way to deliver high-efficiency on-device machine learning and artificial intelligence. It is, the researchers claim, "both reliable and practical," and "becomes better at [a] task over time" thanks to its self-learning capabilities. "This technology will revolutionize the way artificial intelligence is used in everyday devices," a KAIST spokesperson claims, "allowing AI tasks to be processed locally without relying on remote cloud servers, making them faster, more privacy-protected, and more energy-efficient."

The team's work has been published in the journal Nature Electronics under closed-access terms.

Main article image shows Professor Young-Gyu Yoon (left), Seungjae Han, Hakcheon Jeong, and Professor Shinhyun Choi (inset, right) with the prototype system; image courtesy of KAIST.
