Researchers Demo "Almost Unlimited" Brain Simulations Running at Supercomputer Speed on a Gaming GPU

Building on work carried out in 2006, and with the benefit of 15 years of Moore's Law, it's now possible to simulate large-scale brain models on a consumer GPU.

A pair of researchers from the University of Sussex's Centre for Computational Neuroscience and Robotics have published a paper showcasing an approach to simulating brain models of near-unlimited size on an off-the-shelf graphics card, putting, they say, a supercomputer in people's bedrooms.

The use of highly parallel graphics processing units (GPUs) to accelerate scientific computation is nothing new, but a single card can typically only go so far: There's still considerable demand for true supercomputers, which put hundreds or thousands of CPU and/or GPU cores under the control of scientists, but come with a dramatic cost attached.

Focusing specifically on brain simulation, researchers Dr. James Knight and Professor Thomas Nowotny claim to have developed a method that allows a commercial, gaming-class graphics processor to simulate brain models of almost unlimited size, building on earlier work on the topic carried out by Eugene Izhikevich in 2006.

“I think the main benefit of our research is one of accessibility. Outside of these very large organisations, academics typically have to apply to get even limited time on a supercomputer for a particular scientific purpose,” Knight explains. “This is quite a high barrier for entry which is potentially holding back a lot of significant research.

“Our hope for our own research now is to apply these techniques to brain-inspired machine learning so that we can help solve problems that biological brains excel at but which are currently beyond simulations. As well as the advances we have demonstrated in procedural connectivity in the context of GPU hardware, we also believe that there is potential for developing new types of neuromorphic hardware built from the ground up for procedural connectivity. Key components could be implemented directly in hardware which could lead to even more truly significant compute time improvements.”
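The “procedural connectivity” Knight refers to is, broadly, the idea of generating a model's synaptic connections on the fly from its connectivity rules, rather than holding every synapse in the GPU's memory. As a rough illustrative sketch of that general idea, and not the researchers' actual GPU implementation, the fragment below regenerates a neuron's outgoing connections from a seeded random number generator each time it spikes; all sizes, probabilities, and names are placeholders.

```python
import numpy as np

# Illustrative sketch of procedural connectivity (not the authors' CUDA code):
# rather than storing a huge synapse table, regenerate each neuron's outgoing
# connections on demand from a deterministic per-neuron seed.

N = 100_000          # neurons in the target population (placeholder size)
P_CONNECT = 0.01     # connection probability (placeholder)
WEIGHT = 0.1         # fixed synaptic weight (placeholder)

def outgoing_synapses(pre_neuron: int, base_seed: int = 42):
    """Deterministically regenerate the targets of one presynaptic neuron.

    The same (seed, neuron id) pair always yields the same connections,
    so nothing needs to be stored between time steps.
    """
    rng = np.random.default_rng([base_seed, pre_neuron])
    return np.flatnonzero(rng.random(N) < P_CONNECT)

def deliver_spikes(spiking_neurons, input_current):
    """Accumulate synaptic input only for neurons that actually spiked."""
    for pre in spiking_neurons:
        input_current[outgoing_synapses(pre)] += WEIGHT
    return input_current

# Propagate a handful of spikes without ever materialising the full
# N x N connectivity matrix, which would run to tens of gigabytes.
currents = deliver_spikes(spiking_neurons=[3, 17, 256],
                          input_current=np.zeros(N))
print(currents.nonzero()[0].size, "neurons received input")
```

The trade-off is recomputing connections for every spike in exchange for a memory footprint that no longer grows with the number of synapses, which is what allows model size to scale well beyond what would fit if every connection were stored.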

Where Izhikevich's 2006 project hit a wall in the performance available from the computers of the time, Knight and Nowotny have Moore's Law on their side: the observation-turned-target from Intel co-founder Gordon Moore that the number of transistors in a leading-edge chip, and with it the computational power on offer, roughly doubles every 18 months. The result: The graphics cards available now are roughly 2,000 times more computationally powerful than those of Izhikevich's time.
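As a rough back-of-the-envelope check, compounding one doubling every 18 months over those 15 years works out to roughly a thousandfold, the same order of magnitude as the 2,000-times figure:

```python
# Rough sanity check on the growth figure: compound one doubling every
# 18 months (1.5 years) over the 15 years between 2006 and 2021.
years = 2021 - 2006
doublings = years / 1.5          # ten doublings in total
print(f"{2 ** doublings:.0f}x")  # prints "1024x", i.e. roughly a thousandfold
```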

To prove the concept, the pair simulated a model of the macaque visual cortex. Running on an IBM Blue Gene/Q supercomputer in 2018, the simulation took five minutes to initialise and 12 minutes to compute each second of simulated activity; running on an NVIDIA Titan RTX, an admittedly high-end graphics card, the same simulation needed just six minutes' initialisation and 8.4 minutes per simulated second, at a fraction of the supercomputer's cost and power draw.

“This research is a game-changer for computational neuroscience and AI researchers who can now simulate brain circuits on their local workstations,” claims Nowotny, “but it also allows people outside academia to turn their gaming PC into a supercomputer and run large neural networks.”

A pre-print of the pair's paper is available on bioRxiv under open-access terms, while it has also been published in the journal Nature Computational Science under closed-access terms.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.