Clever Neural Network Takes a Leap Towards Real-Time 3D Holography — on a Smartphone

New system can use depth data from cameras and LiDAR sensors, increasingly common on smartphones, to generate holograms in near-real-time.

Researchers at the Massachusetts Institute of Technology have published a paper detailing a system for efficiently creating computer-generated holograms — and it's lightweight enough to run on a smartphone.

"People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations," explains lead author Liang Shi of the technology, known as tensor holography, and its impact. "It's often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades."

Using a deep-learning network, a team of researchers has made a big leap towards real-time holography on smartphone hardware. (📹: Shi et al)

"We are amazed at how well it performs," adds co-author Wojciech Matusik of the system, which can generate holograms from depth data — generated by the computer itself, or captured using depth cameras, LiDAR, or other increasingly-common smartphone sensors — in milliseconds. Better still, the tensor network used requires only 1MB of memory. "It's negligible, considering the tens and hundreds of gigabytes available on the latest cell phone."

"It's a considerable leap that could completely change people's attitudes toward holography. We feel like neural networks were born for this task."

The secret to the system's performance: a convolutional neural network trained on 4,000 image pairs, each matching an image and its depth data to the corresponding finished hologram. The trained network could then create fresh holograms from novel images and depth data several orders of magnitude faster than the physics-based calculations traditionally required.
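To give a feel for the shape of such a network, here's a minimal sketch in TensorFlow, assuming a narrow stack of residual convolution blocks that maps a four-channel RGB-D input to a two-channel amplitude-and-phase hologram. The layer counts, channel widths, and loss are illustrative assumptions, not the published tensor holography architecture.

```python
# Minimal sketch of an RGB-D-to-hologram CNN. Widths and depths here
# are illustrative assumptions, not the published architecture.
import tensorflow as tf
from tensorflow.keras import layers

def build_hologram_cnn(width=24, num_blocks=8):
    inputs = tf.keras.Input(shape=(None, None, 4))   # RGB + depth channels
    x = layers.Conv2D(width, 3, padding="same", activation="relu")(inputs)
    for _ in range(num_blocks):
        skip = x
        x = layers.Conv2D(width, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(width, 3, padding="same")(x)
        x = layers.add([skip, x])                    # residual connection
    # Two output channels: per-pixel amplitude and phase of the hologram
    outputs = layers.Conv2D(2, 3, padding="same")(x)
    return tf.keras.Model(inputs, outputs)

model = build_hologram_cnn()
model.compile(optimizer="adam", loss="mse")
# model.fit(rgbd_images, target_holograms, ...)     # the 4,000 training pairs
```

Keeping the network fully convolutional and narrow is what makes a memory footprint on the order of a megabyte plausible, and it lets the same weights run on frames of any resolution.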

The team is looking to use the technique for everything from more immersive and less tiring virtual reality based on phase-modulation displays to improving volumetric 3D printing, microscopy, medical image visualisation, and materials design, and even displays which could adjust for the viewer's optical prescription.

The team's work has been published in the journal Nature under open-access terms, while the source code, written in Python and TensorFlow, can be found on GitHub under a license restricted to evaluation and testing; the repository omits the training code but includes a pre-trained neural network.
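As a rough illustration of how a pre-trained network like this would be used, the sketch below loads a saved model and runs it on a captured RGB-D frame. The file names, array shapes, and loading call are assumptions for illustration and do not reflect the released repository's actual interface.

```python
# Hypothetical inference sketch; paths and array shapes are assumed
# for illustration, not taken from the released repository.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("pretrained_tensor_holography")
rgbd = np.load("frame_rgbd.npy")              # H x W x 4 array: RGB + depth
hologram = model.predict(rgbd[None, ...])[0]  # per-pixel amplitude and phase
```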

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.