Neurons on Nitro

Brain-inspired chips ditch silicon and traditional computing architectures, paving the way for fast, energy-efficient, next-gen AI tools.

Nick Bild
1 month ago · Machine Learning & AI
Artificial neuron and synapse for efficient computing (📷: Y. Jo et al.)

Our tried and true modern computing systems, primarily based on the von Neumann architecture and silicon CMOS-based transistors, have served us well for many decades now. These computers have brought about remarkable advancements in technology, enabling unprecedented levels of computation, data storage, and information processing. The von Neumann architecture, with its distinct separation of memory and processing units, has been a cornerstone in the evolution of computing, providing a standardized framework that has stood the test of time.

However, the landscape of computing is undergoing a transformative shift as new applications that are extremely data-intensive, like artificial intelligence, are growing increasingly important. The traditional von Neumann architecture is not well-suited to the frequent transfers of data between memory and processing units demanded by these applications, causing a bottleneck. Moreover, the physical constraints of silicon-based transistors are approaching their theoretical limits in terms of size reduction and power efficiency. The limitations of the current paradigm are becoming increasingly apparent, prompting researchers and engineers to explore new frontiers in computing technology. This has led to a quest for alternative materials and architectures that can overcome these limitations and usher in a new era of computing.

Brain-inspired neuromorphic computing has been heralded as a possible solution to this problem. The fundamental operational characteristics of these systems are entirely different from those of traditional computers. They are designed from the ground up for massive parallelization and low power consumption, and they eliminate the von Neumann bottleneck by collocating processing and memory units.

These neuromorphic chips frequently take the form of artificial neuron and synaptic devices that work together to perform computations in a way that mimics the function of the brain. In order to build large-scale neural network hardware, these devices will need to be tightly integrated and optimized as a single unit. To date, researchers have given this issue little attention, concentrating instead on improving the properties of individual devices. But recently, a team from the Korea Institute of Science and Technology has taken on the challenge of integrating these devices and evaluating their performance.

In the course of their work, the team built both volatile and nonvolatile resistive random-access memory from two-dimensional hexagonal boron nitride film to serve as artificial neuron and synaptic devices, respectively. These two-dimensional sheets were stacked vertically to create two neurons and a synapse, which were then connected. This material enables ultra-low levels of power consumption, and since both devices are composed of the same material, integration is greatly simplified. This factor could, in theory, allow for the production of large-scale artificial neural network hardware.

While this was a small first step toward the goal of building a real-world neural network, the team was able to demonstrate spike signal-based information transmission with their hardware. It was also shown that the behavior of these signals could be altered by updating the system’s synaptic weights. Clearing this initial hurdle shows that this design has the potential to be utilized in future large-scale AI hardware systems.
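The paper's devices implement this behavior physically, but the principle can be illustrated in software. The sketch below is not the team's model; it is a minimal leaky integrate-and-fire simulation (all parameter values are assumptions for illustration) showing how changing a synaptic weight alters whether spike signals propagate from one neuron to the next:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: integrates its input with leakage
    and emits a spike (1) when the membrane potential crosses the
    threshold, then resets. Values here are illustrative, not from the paper."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of the input
        if v >= threshold:
            spikes.append(1)      # fire
            v = 0.0               # reset membrane potential
        else:
            spikes.append(0)
    return spikes

def transmit(pre_spikes, weight):
    """A synapse scales the presynaptic spike train by its weight;
    the weighted train then drives the postsynaptic neuron."""
    return lif_neuron([weight * s for s in pre_spikes])

# A presynaptic neuron driven by a constant current...
pre = lif_neuron([0.5] * 40)
# ...fires through a strong versus a weak synapse:
strong = transmit(pre, weight=1.2)  # spikes pass through
weak = transmit(pre, weight=0.3)    # most spikes are filtered out
```

With the stronger weight, every presynaptic spike pushes the downstream neuron over threshold; with the weaker one, many spikes are needed to accumulate a single downstream spike. Updating the weight thus changes the transmitted signal, which is the behavior the team demonstrated in hardware.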

This case was further bolstered by an experiment in which data collected from the physical hardware was used to build a software simulation of a hardware neural network. This made it easy for the researchers to scale up the network architecture to build a handwritten digit image classifier. This simple network had a single hidden layer with 100 neurons. After training it on the MNIST dataset, it achieved an average classification accuracy of 83.45%.
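The article does not specify the activation functions or training procedure the team used in their simulation. A minimal sketch of the forward pass of such a network, assuming ReLU hidden units and a softmax output (and substituting random data for the MNIST images, whose loading is omitted here), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions mirroring the described setup: 28x28-pixel inputs,
# one hidden layer of 100 neurons, 10 digit classes.
N_IN, N_HIDDEN, N_OUT = 784, 100, 10

# Randomly initialized weights (training is omitted in this sketch).
W1 = rng.normal(0, 0.1, (N_IN, N_HIDDEN))
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_OUT))

def forward(x):
    """Forward pass: ReLU hidden layer, then softmax over 10 classes."""
    h = np.maximum(0, x @ W1)                       # hidden activations
    logits = h @ W2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)        # class probabilities

batch = rng.random((32, N_IN))   # stand-in for a batch of MNIST images
probs = forward(batch)           # shape (32, 10)
```

In the team's experiment, the weights of such a network would be constrained by the measured conductance states of the hexagonal boron nitride synaptic devices, which is what ties the simulation back to the physical hardware.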

With further work, the team envisions their technology being leveraged in application areas as diverse as smart cities, healthcare, next-generation communications, weather forecasting, and autonomous vehicles.

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.