Jumping Spiders Provide Surprising Inspiration for Novel Depth-Sensing Metalens Technology

Clever metalens-based sensor captures multiple focus-point images in a single snap, providing information for a depth map calculation.

Gareth Halfacree

A team from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the National University of Singapore has turned to an unusual source of inspiration for a new depth sensor: jumping spiders.

"Evolution has produced a wide variety of optical configurations and vision systems that are tailored to different purposes," explains Zhujun Shi, a Ph.D. candidate in the Department of Physics and co-first author of the team's paper. "Optical design and nanotechnology are finally allowing us to explore artificial depth sensors and other vision systems that are similarly diverse and effective."

The particular focus of the research: jumping spiders, which do naturally what it takes a human brain — or a computer — considerable effort to achieve. "[The] matching calculation, where you take two images and perform a search for the parts that correspond, is computationally burdensome," explains Todd Zickler, the William and Ami Kuan Danoff Professor of Electrical Engineering and Computer Science at SEAS and co-senior author of the study. "Humans have a nice, big brain for those computations but spiders don’t."

A jumping spider, by contrast, gauges depth using several layered semi-transparent retinae, which capture multiple images of the same scene at differing focus points. When a target, say a juicy fly, is sharp in one retina but blurred in another, the spider can infer its distance. Replicating the same trick with motorised focus-shifting camera systems, however, has proven bulky and impractical.

The solution: a metalens. "Instead of using layered retina to capture multiple simultaneous images, as jumping spiders do, the metalens splits the light and forms two differently-defocused images side-by-side on a photosensor," Shi explains. The defocused images then pass through an algorithm that produces a depth map, pinpointing exactly where in a three-dimensional scene an object can be found. Unlike previous implementations, there are no moving parts and the differing-focus images are captured simultaneously, avoiding issues where an object moves between captures.
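The core idea, that the relative blur between two simultaneously captured images encodes distance, can be illustrated with a toy depth-from-defocus sketch. The snippet below is not the team's published algorithm; it is a simplified, hypothetical stand-in that compares local sharpness in two differently-focused views of the same scene, with the function name and parameters chosen for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, uniform_filter

def defocus_depth_cue(img_near, img_far, patch=9, eps=1e-6):
    """Toy depth-from-defocus cue (illustrative, not the PNAS algorithm).

    Compares local sharpness of two differently-focused images of the
    same scene. Returns values in [0, 1]: near 1 where the scene is
    sharper in `img_near` (closer to that focal plane), near 0 where it
    is sharper in `img_far`.
    """
    # Local contrast (smoothed absolute Laplacian) as a per-pixel sharpness measure.
    sharp_near = uniform_filter(np.abs(laplace(img_near)), patch)
    sharp_far = uniform_filter(np.abs(laplace(img_far)), patch)
    return sharp_near / (sharp_near + sharp_far + eps)

# Synthetic demo: one scene blurred by different amounts, mimicking the
# two focal planes the split metalens forms side-by-side on the sensor.
rng = np.random.default_rng(0)
scene = rng.random((128, 128))
img_near = gaussian_filter(scene, sigma=1.0)  # nearly in focus
img_far = gaussian_filter(scene, sigma=3.0)   # strongly defocused
cue = defocus_depth_cue(img_near, img_far)
print(cue.mean())  # > 0.5: the scene sits closer to the "near" focal plane
```

A real sensor would calibrate this cue against known distances to recover metric depth; the point here is simply that two fixed-focus images taken in the same instant carry enough information to rank which parts of a scene are nearer or farther.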

The team's work has been published under open access in the journal Proceedings of the National Academy of Sciences (PNAS), but neither university has yet commented on how long it will take for the technology to be commercialised.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.