"Ghost Imaging" Lets a Human Act as a Computer-Driven Camera Capable of Seeing Around Corners

The human subject sees only diffused flickers of light on a wall, yet this machine learning system can read their brain activity to reveal the object around the corner.

A team from the University of Glasgow has successfully demonstrated active "ghost imaging" in a human subject — combining human vision and machine intelligence to allow a person to see images not directly in their line of sight.

"We believe that this work provides ideas that one day might be used to bring together human and artificial intelligence," claims Daniele Faccio professor at the School of Physics and Astronomy of the University of Glasgow. "The next steps in this work range from extending the capability to provide 3D depth information to looking for ways to combine multiple information from multiple viewers at the same time."

A team of researchers has turned a human wearing an EEG headset into a living camera for non-line-of-sight imaging. (📷: Faccio et al)

The project builds on the recognised concept of "ghost imaging," whereby light that has interacted with an object is combined with a reference pattern to create an image. Previous approaches that involved a human at all did so only as a passive observer. Not so with Faccio and team: their human subject is an active participant, acting as a living camera that provides input to a machine learning system.
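For readers curious how that reconstruction works in practice, the core of computational ghost imaging is a simple correlation: project a series of known patterns, record a single total-intensity reading for each, and weight every pattern by how far its reading deviates from the mean. The Python sketch below is purely illustrative; the hidden object, pattern count, and resolution are invented for demonstration and are not the team's code.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 16   # reconstruction resolution
N = 4000     # number of projected patterns

# Hypothetical hidden object: a simple cross shape.
obj = np.zeros((H, W))
obj[6:10, :] = 1.0
obj[:, 6:10] = 1.0

# Random binary illumination patterns.
patterns = rng.integers(0, 2, size=(N, H, W)).astype(float)

# Single-pixel "bucket" measurement: total light returned per pattern.
bucket = np.einsum('nij,ij->n', patterns, obj)

# Differential ghost imaging: correlate signal fluctuations with patterns.
image = np.einsum('n,nij->ij', bucket - bucket.mean(), patterns) / N
```

With enough patterns, `image` converges on the cross shape even though each individual measurement is just one number, which is exactly the trick that lets a single unstructured patch of light on a wall carry an entire picture.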

"This is one of the first times that computational imaging has been performed by using the human visual system in a neurofeedback loop that adjusts the imaging process in real time," Faccio explains. "Although we could have used a standard detector in place of the human brain to detect the diffuse signals from the wall, we wanted to explore methods that might one day be used to augment human capabilities."

Although the human sees only flickers of diffused light, the computer can recreate the object's shape. (📷: Alex Litvin)

In the team's experiment, a projector cast patterns onto a cardboard cut-out placed in front of a white wall. Light diffusing from the cut-out was reflected by the white wall, while a solid gray partition between the observer and the projector blocked any direct view, leaving only a small patch of light visible. That patch was flickered 12 times over two seconds, and the resulting signals in the human observer's visual cortex were recorded using a single-electrode electroencephalograph (EEG) sensor.
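The article doesn't spell out how a two-second EEG trace becomes a single brightness estimate, but a common approach to flicker-evoked responses is to measure the Fourier amplitude at the flicker frequency (12 flashes over two seconds works out to 6Hz). The sketch below simulates that step; the sample rate, noise level, and evoked amplitude are all assumptions rather than published parameters.

```python
import numpy as np

FS = 250          # assumed EEG sample rate, in hertz
FLICKER_HZ = 6.0  # 12 flashes over two seconds

def intensity_estimate(eeg: np.ndarray) -> float:
    """Amplitude of the EEG response at the flicker frequency."""
    spectrum = np.fft.rfft(eeg - eeg.mean())
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    nearest_bin = np.argmin(np.abs(freqs - FLICKER_HZ))
    return np.abs(spectrum[nearest_bin]) / len(eeg)

# Simulated trace: a 6Hz evoked component buried in noise.
rng = np.random.default_rng(1)
t = np.arange(2 * FS) / FS
eeg = 0.5 * np.sin(2 * np.pi * FLICKER_HZ * t) + rng.normal(0, 1, t.size)
print(intensity_estimate(eeg))  # larger when the evoked response is stronger
```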

Data from the EEG were then used to estimate the intensity of the light transmitted by the object; these estimates were fed into a neurofeedback loop that allowed the machine learning system to reconstruct the shape of the object at a resolution of up to 16×16 pixels.
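Put together, the real-time loop the article describes might look something like the following: show a pattern, read a noisy intensity estimate out of the observer's EEG, and fold the result into a running reconstruction. The incremental update rule and the noise model here are illustrative assumptions, not the team's published algorithm, and the EEG step is replaced by a simulated stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)
H = W = 16
obj = np.zeros((H, W))
obj[4:12, 4:12] = 1.0  # hypothetical hidden square

recon = np.zeros((H, W))
running_mean = 0.0

for n in range(1, 5001):
    pattern = rng.integers(0, 2, size=(H, W)).astype(float)
    # Stand-in for "project pattern, read the EEG": the true total
    # intensity plus noise, mimicking a noisy biological detector.
    signal = (pattern * obj).sum() + rng.normal(0.0, 5.0)
    running_mean += (signal - running_mean) / n
    recon += (signal - running_mean) * pattern  # incremental correlation
```

The point of the simulation is that the observer only ever supplies one noisy number per flicker, yet the square still emerges in `recon` after enough trials.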

The team's work is to be presented at the Optica Imaging and Applied Optics Congress 2022 on July 15.
