A team of scientists at the Stanford Computational Imaging Lab (SCI) has showcased a new model that, they say, will allow for a dramatic improvement in virtual and augmented reality display quality — by using holographic, rather than flat, imagery generated by an artificial intelligence.
"They are not perceptually realistic," Gordon Wetzstein, associate professor and co-author of a paper describing both the problem with conventional virtual and augmented reality displays and a proposed solution. The key issue: The wearer is presented with a two-dimensional image, one for each eye, which simply doesn't reflect how real-world stereo vision works.
The solution: Neural holography. "Artificial intelligence has revolutionized pretty much all aspects of engineering and beyond," says Wetzstein. "But in this specific area of holographic displays or computer-generated holography, people have only just started to explore AI techniques."
"Only recently," adds co-author Yifan Peng, "with the emerging machine intelligence innovations, have we had access to the powerful tools and capabilities to make use of the advances in computer technology."
The neural holographic display detailed in the paper trains a neural network to mimic the real-world physics of the display, paired with a camera-in-the-loop calibration strategy. The result is real-time control of the image and, the team found, a more realistic representation of scenes with visual depth, even when portions of the scene are far away or deliberately out of focus.
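To give a rough intuition for camera-in-the-loop calibration (this is a toy sketch, not the team's actual pipeline: the linear "display" and "camera" models and all parameters here are invented for illustration), the idea is to capture what the physical display actually produces and fit a correction so the simulated forward model matches reality:

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealized simulator: display pattern -> predicted image (toy linear model).
def simulate(pattern):
    return 0.8 * pattern

# "Real" display + camera: deviates from the simulator in ways the ideal
# model doesn't capture (hypothetical gain/offset standing in for optics).
def capture(pattern):
    return 0.65 * pattern + 0.05

# Camera-in-the-loop step: show test patterns, photograph the result, and
# fit a correction (gain, offset) so the corrected simulator matches the
# camera. The real work uses a neural network; least squares suffices here.
patterns = rng.random((100, 1))
A = np.hstack([simulate(patterns), np.ones_like(patterns)])
(gain, offset), *_ = np.linalg.lstsq(A, capture(patterns), rcond=None)

def calibrated_simulate(pattern):
    return gain * simulate(pattern) + offset

# The calibrated model predicts real captures far better than the ideal one.
test = rng.random((10, 1))
err_raw = np.abs(simulate(test) - capture(test)).mean()
err_cal = np.abs(calibrated_simulate(test) - capture(test)).mean()
```

Once the simulated model tracks the physical hardware, holograms optimized against it display correctly on the real device — which is what enables the perceptually realistic depth the paper reports.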
"I’m a big believer in the future of wearable computing systems and AR and VR in general," says Wetzstein, who points to the potential for augmented reality to transform the field of medicine for both training and active surgeries. "I think they’re going to have a transformative impact on people's lives."
More information on the work, which was presented at SIGGRAPH Asia 2021, is available on the project website.