Google's Augmented Perception Team Unveils the Relightables for 3D Capture with Realistic Lighting

Using a geodesic sphere containing 331 programmable LEDs and high-resolution depth sensors, the Relightables makes for more realistic captures.

Gareth Halfacree

A team of researchers working on augmented and virtual reality at Google has published a paper describing a unique 3D capture system: a geodesic sphere designed for volumetric capture, with a focus on allowing for realistic relighting.

"While significant progress has been made on volumetric capture systems, focusing on 3D geometric reconstruction with high resolution textures, much less work has been done to recover photometric properties needed for relighting," Google's Augmented Perception team notes in the paper's abstract — referring to the process by which a performance's lighting can be changed post-capture. "Results from such systems lack high-frequency details and the subject's shading is pre-baked into the texture.

"In contrast, a large body of work has addressed relightable acquisition for image-based approaches, which photograph the subject under a set of basis lighting conditions and recombine the images to show the subject as they would appear in a target lighting environment. However, to date, these approaches have not been adapted for use in the context of a high-resolution volumetric capture system. Our method combines this ability to realistically relight humans for arbitrary environments, with the benefits of free-viewpoint volumetric capture and new levels of geometric accuracy for dynamic performances.

The answer: a capture system which looks every inch like it stepped from the set of a science fiction film. "Our subjects are recorded inside a custom geodesic sphere outfitted with 331 custom color LED lights, an array of high-resolution cameras, and a set of custom high-resolution depth sensors," the team explains. "Our system innovates in multiple areas. First, we designed a novel active depth sensor to capture 12.4MP depth maps, which we describe in detail. Second, we show how to design a hybrid geometric and machine learning reconstruction pipeline to process the high resolution input and output a volumetric video.

"Third, we generate temporally consistent reflectance maps for dynamic performers by leveraging the information contained in two alternating colour gradient illumination images acquired at 60Hz. Multiple experiments, comparisons, and applications show that the Relightables significantly improves upon the level of realism in placing volumetrically captured human performances into arbitrary CG scenes."

The full paper is available for download from the project's GitHub page.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.