Picture This (But Not That)

Using diffractive layers defined by a machine learning algorithm, this camera will only image objects that it has been trained to recognize.

Nick Bild
2 years ago • Machine Learning & AI
Optical filtering of visual data (📷: B. Bai et al.)

Visual data provides a very rich source of information about the world around us. For this reason, many computer vision applications have been developed in the past few decades that leverage this data source for video surveillance, autonomous driving, medical imaging, and much, much more. But as these technological advancements continue to roll out, there is a growing chorus of voices raising concerns about the privacy implications of having so many cameras around. People begin to feel as if they are always being watched, and that leaves many deeply unsettled.

Developers of computer vision applications have taken notice of these concerns; gadgets that give consumers the heebie-jeebies might not fly off the shelves, after all. And no one wants a modern group of Luddites smashing their shiny new looms. Efforts to assuage these concerns usually involve blurring and/or encrypting images in an attempt to keep them safe from prying eyes. But these techniques have one thing in common: they happen after the image has been captured, which leaves any system implementing them vulnerable to exploitation.

A novel approach recently described by researchers at the University of California, Los Angeles may offer much stronger privacy protections than are presently available. They have designed a camera with the unique property that it can only image objects it has been trained to recognize. Anything else in the frame is filtered out and appears as an unrecognizable, pixelated blur. This filtering happens entirely optically, so the non-target portions of the image are removed before they ever reach the image sensor, and nothing is ever recorded that could potentially be exploited. The method also reduces energy expenditure, since no digital processing is required.

The camera works by stacking a number of transmissive layers, each patterned with tens of thousands of nanoscale diffractive features. These features are finely tuned such that only certain types of predefined objects pass through them unchanged to the image sensor. Anything else that passes through the filtering layers is distorted beyond recognition, in a way that resembles random noise. Of course, laying out tens or hundreds of thousands of features at the scale of the wavelength of light is no small task, so the team turned to a machine learning algorithm to design the structure of the filters.
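
To make the idea concrete, here is a minimal sketch of the sort of differentiable optics model such a design pipeline could be built on: light propagates between phase-only diffractive layers via the standard angular spectrum method, and the layer patterns are ordinary learnable parameters. The grid size, wavelength units, spacing, and layer count below are illustrative assumptions, not the parameters used in the paper.

```python
# A hypothetical sketch of a differentiable diffractive-camera forward model.
# The angular spectrum propagator is a standard technique; all dimensions,
# units, and layer counts here are illustrative assumptions.
import torch

N = 64            # simulation grid size (assumption)
WAVELENGTH = 1.0  # wavelength in normalized pixel units (assumption)
DZ = 40.0         # layer-to-layer spacing in pixels (assumption)
N_LAYERS = 3      # number of diffractive layers (assumption)

def angular_spectrum(field, dz):
    """Propagate a complex optical field a distance dz (angular spectrum method)."""
    fx = torch.fft.fftfreq(N)                      # spatial frequencies (cycles/pixel)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    kz2 = (1.0 / WAVELENGTH) ** 2 - FX ** 2 - FY ** 2
    kz = torch.sqrt(torch.clamp(kz2, min=0.0))     # drop evanescent components
    H = torch.exp(2j * torch.pi * kz * dz)         # free-space transfer function
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

# One learnable phase mask per layer; these stand in for the nanoscale
# surface features that the fabricated device would physically encode.
phases = [(0.1 * torch.randn(N, N)).requires_grad_() for _ in range(N_LAYERS)]

def forward(img):
    """Amplitude-encoded input image -> intensity recorded at the sensor."""
    field = img.to(torch.complex64)
    for p in phases:
        field = angular_spectrum(field, DZ)
        field = field * torch.exp(1j * p)          # phase-only modulation
    field = angular_spectrum(field, DZ)
    return field.abs() ** 2

sensor = forward(torch.rand(N, N))                 # placeholder input scene
print(sensor.shape)                                # torch.Size([64, 64])
```

Because every operation here is differentiable, the phase masks can be optimized end to end with gradient descent, much like the weights of a conventional neural network.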

To validate their methods, the researchers 3D printed a set of diffractive layers engineered to recognize only the handwritten digit “2,” trained with the help of the MNIST handwritten digit database; a successful test would see all other digits rejected and optically filtered out. They found that the filter was able to consistently and selectively allow the number “2” to be clearly seen, while all other digits were optically erased. They even tested this capability under a wide variety of lighting conditions and found the camera to be robust in these scenarios.
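
A toy training loop for that kind of objective might look like the following. It reuses the forward() model, phases, and N from the sketch above; the MNIST loader from torchvision is real, but the loss terms (reproduce the input when the label is 2, collapse everything else to a featureless field) are illustrative stand-ins, not the paper's actual training objective.

```python
# A hypothetical training loop for the "image only the digit 2" experiment.
# Reuses forward(), phases, and N from the previous sketch; the loss design
# is an illustrative assumption, not the paper's actual objective.
import torch
import torch.nn.functional as F
from torchvision import datasets, transforms

mnist = datasets.MNIST(
    "data", train=True, download=True,
    transform=transforms.Compose([
        transforms.Resize(N),     # match the simulation grid (N = 64 above)
        transforms.ToTensor(),
    ]),
)
loader = torch.utils.data.DataLoader(mnist, batch_size=32, shuffle=True)
opt = torch.optim.Adam(phases, lr=0.02)

for step, (imgs, labels) in enumerate(loader):
    opt.zero_grad()
    loss = torch.tensor(0.0)
    for img, label in zip(imgs, labels):
        out = forward(img[0])
        out = out / (out.mean() + 1e-8)            # normalize overall brightness
        if label.item() == 2:
            ref = img[0] / (img[0].mean() + 1e-8)
            loss = loss + F.mse_loss(out, ref)     # a "2" should pass through
        else:
            loss = loss + F.mse_loss(out, torch.ones_like(out))  # others erased
    (loss / len(imgs)).backward()
    opt.step()
    if step >= 100:                                # toy run, not full training
        break
```

In the fabricated device, the optimized phase patterns are not applied digitally; they become fixed, 3D-printed surface features, which is why no digital processing is needed at capture time.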

To date, the camera has been tested only under contrived laboratory conditions, so, as you might expect, additional work will need to be done before we see it in devices in the wild. With any luck, that will happen in the near future. It would be a great boon both for privacy advocates and for the sort of people who run off whenever someone wants a group photo.
