Camera-flage

Researchers have developed a novel camera design that obscures images, making them useless to humans, but not to computer vision algorithms.

Nick Bild
2 years ago · Robotics
Images are scrambled before leaving the camera (📷: University of Sydney and Queensland University of Technology)

Robotic vacuum cleaners have introduced millions of people to the idea of having a household service robot. But these vacuums are only the first baby step into a new era of personal service robots. It is only a matter of time before advances in robotics, machine learning, and computer vision enable all manner of practical robotic servants. So as we sit at the dawn of this new age, now is the time to carefully consider the implications of deploying millions, or even billions, of these robots into people's homes. After all, it is always better to prevent a problem than to fix it later.

Since many of these robots already do, or will in the future, make use of cameras for sensing and navigation, we need to take a serious look at the privacy-related concerns that this raises. Having an always-on camera in one’s private residence is already a red flag, but when you consider that these robots are likely to be connected to the Internet, the risk of that data being exploited skyrockets. And connected devices do not exactly have a great track record for being secure.

Trusting the best intentions of a corporation, or a security setting in a configuration app, just is not going to cut it for most people. Fortunately, a team of researchers at The University of Sydney and Queensland University of Technology has a better plan. They have created a new type of camera design that heavily obscures images before they ever leave the device. The images are sufficiently distorted that humans cannot make heads or tails of them; robots, however, can still use them for navigation and other crucial tasks. And since clear images never leave the camera, there is virtually no chance of defeating this protection, even with full control of the robot.

The technique involves modifying the camera's hardware such that images are obscured even before they are digitized. In this way, remote attacks cannot access clear images at all. Such distortion can certainly make the images unintelligible to humans, but robots must still be able to extract useful information from them. For this reason, a camera of this sort must be tuned to the task it is intended to complete.

Computer vision algorithms do not look at an image in the same way that we do. Much of the detail dissolves into patterns, blobs of color, and so on. Accordingly, the camera must be designed to preserve the patterns that are essential to the proper operation of the processing algorithm, even as it destroys everything a human would recognize.
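To get a feel for why a scrambled image can still carry machine-usable signal, consider a toy simulation in Python. Here a fixed pixel permutation stands in for the hardware scrambling (this is an illustrative stand-in, not the team's actual optical design): spatial structure that humans rely on is destroyed, yet permutation-invariant features such as color histograms and average brightness survive exactly.

```python
# Illustrative sketch only: a fixed pixel permutation is NOT the
# researchers' optics, just a simple way to see what "obscured but
# still informative" can mean.
import numpy as np

rng = np.random.default_rng(seed=0)

def scramble(image: np.ndarray, perm: np.ndarray) -> np.ndarray:
    """Apply a fixed permutation to the flattened pixel positions."""
    h, w, c = image.shape
    flat = image.reshape(-1, c)
    return flat[perm].reshape(h, w, c)

# A toy 64x64 RGB "scene".
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

# The permutation is fixed at "manufacture time" and never leaves the camera.
perm = rng.permutation(64 * 64)

scrambled = scramble(image, perm)

# Human-readable spatial structure is gone...
assert not np.array_equal(image, scrambled)

# ...but permutation-invariant statistics are preserved exactly:
# the per-channel multiset of pixel values is unchanged.
assert np.array_equal(
    np.sort(image.reshape(-1, 3), axis=0),
    np.sort(scrambled.reshape(-1, 3), axis=0),
)
print(image.mean(), scrambled.mean())  # identical average brightness
```

A real system would preserve richer, task-specific structure than raw histograms, but the principle is the same: the transform is chosen so that whatever the downstream algorithm consumes passes through intact, while everything else is lost in the camera.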

The astute reader is probably thinking that if the images can be scrambled, then they can be unscrambled as well. It is likely just a matter of training another machine learning model to learn the associations between clear and obscured images, yielding a sort of decoder that can unscramble new images. Perhaps that will prove to be the case in the future; however, the team did make an effort to reverse the scrambling of their own system and came up empty-handed.

Only time will tell if malicious hackers can ultimately circumvent this novel technique, but it certainly looks like a step in the right direction.
