A team of researchers from the University of Strathclyde and Aralia Systems have published a paper detailing a 3D imaging system which can operate with nothing more than an off-the-shelf smartphone and cheap LED lighting — with no synchronization required.
"Current video surveillance systems such as the ones used for public transport rely on cameras that provide only 2D information," explains Emma Le Francois, doctoral student in the research group behind the paper. "Our new approach could be used to illuminate different indoor areas to allow better surveillance with 3D images, create a smart work area in a factory, or to give robots a more complete sense of their environment."
"Deploying a smart-illumination system in an indoor area allows any camera in the room to use the light and retrieve the 3D information from the surrounding environment. LEDs are being explored for a variety of different applications, such as optical communication, visible light positioning and imaging. One day the LED smart-lighting system used for lighting an indoor area might be used for all of these applications at the same time."
The new system is based on photometric stereo imaging, which traditionally requires four light sources positioned symmetrically around a camera's viewing axis. The novel approach, by contrast, illuminates the scene from above while imaging it from the side, and uses clock signalling embedded in the LED modulation, picked up by the camera itself, as a self-synchronization mechanism.
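Photometric stereo recovers per-pixel surface orientation from how brightness varies under different, known light directions. A minimal sketch of the classical Lambertian formulation gives the flavour; the function name and synthetic data below are illustrative assumptions, not the team's actual pipeline:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Classical Lambertian photometric stereo (illustrative sketch).

    images:     (K, H, W) array of intensities under K lights.
    light_dirs: (K, 3) array of unit light-direction vectors.
    Returns per-pixel unit normals (3, H, W) and albedo (H, W).
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                           # (K, H*W)
    # Least-squares solve L @ G = I, where G = albedo * normal per pixel.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = np.divide(G, albedo, out=np.zeros_like(G), where=albedo > 0)
    return normals.reshape(3, H, W), albedo.reshape(H, W)

# Synthetic check: a flat surface tilted toward +x, seen under three lights.
true_n = np.array([0.6, 0.0, 0.8])
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
imgs = (L @ true_n).reshape(3, 1, 1) * np.ones((3, 4, 4))
n, a = photometric_stereo(imgs, L)
```

With three or more non-coplanar lights the per-pixel system is fully determined, which is why conventional rigs fix several lamps around the camera; the Strathclyde work instead modulates overhead LEDs so one camera can separate the contributions in time.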
"We wanted to make photometric stereo imaging more easily deployable by removing the link between the light sources and the camera," says Le Francois. "To our knowledge, we are the first to demonstrate a top-down illumination system with a side image acquisition where the modulation of the light is self-synchronized with the camera."
To prove the concept, the team built a simple prototype: a smart LED lighting system driven by an off-the-shelf Arduino microcontroller, plus an unmodified smartphone capable of high-speed video capture. The target was a 48 mm figurine, 3D printed with a matte finish, which the system captured from a distance of 42 cm with a reconstruction error of 2.6 mm. Further experiments showed that the system is unaffected by ambient light conditions and works for both still and moving objects.
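The self-synchronization step can be pictured as a correlation problem: the LEDs embed a known modulation pattern, and the camera aligns its frames by finding the cyclic shift at which the observed brightness best matches that pattern. A hedged sketch under that assumption; the pattern and function here are hypothetical, not the paper's actual modulation scheme:

```python
import numpy as np

def find_alignment(frame_brightness, pattern):
    """Return the cyclic offset at which `pattern` best matches the signal.

    Illustrative only: correlates a mean-removed known pattern against
    observed per-frame brightness at every cyclic shift.
    """
    f = np.asarray(frame_brightness, float) - np.mean(frame_brightness)
    p = np.asarray(pattern, float) - np.mean(pattern)
    scores = [np.dot(np.roll(p, k), f[:len(p)]) for k in range(len(p))]
    return int(np.argmax(scores))

# Simulate a camera that started recording 3 frames into the pattern,
# with a little sensor noise added.
pattern = np.array([1, 0, 0, 1, 1, 0, 1, 0])
observed = np.roll(pattern, 3) + 0.05 * np.random.default_rng(0).normal(size=8)
offset = find_alignment(observed, pattern)
```

Once the offset is known, each video frame can be attributed to the LED that was lit when it was exposed, which is what lets an arbitrary, unmodified camera in the room use the lighting without any wired synchronization.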
The only wrinkle: the approach takes a few minutes to turn the captured video into a 3D reconstruction, making it unsuitable for real-time or near-real-time use. The team is now developing a deep neural network to accelerate reconstruction from the raw image data.
The paper has been published under open-access terms in the journal Optics Express.