A New Spin on Computer Vision

AMI-EV leverages a rotating prism to mimic the microsaccades of the human eye, enhancing the clarity and detail of the images captured by neuromorphic cameras.

Nick Bild
1 year ago · Sensors
This neuromorphic camera moves like a human eye (📷: B. He et al.)

Computer vision is essential to the operation of many crucial systems in robotics, such as those that are used in route planning, obstacle avoidance, and object manipulation. The highly dense data produced by a camera gives a great deal more information about the environment than nearly any other type of sensor. But this information comes at a very steep cost — in order to make sense of high-resolution images, a computer system must be capable of processing data points from many millions of pixels, and depending on the application, that may need to happen dozens of times per second.
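
For a sense of scale, here is a quick back-of-the-envelope estimate; the resolution, frame rate, and pixel depth are assumed, typical values rather than figures from the research:

```python
# Rough data-rate estimate for a conventional frame-based camera.
# The 1080p resolution, 30 fps, and 3 bytes per pixel are assumed values
# chosen only to illustrate the scale of the problem.
width, height, fps, bytes_per_pixel = 1920, 1080, 30, 3

pixels_per_second = width * height * fps                        # ~62 million pixel values every second
data_rate_mb_per_s = pixels_per_second * bytes_per_pixel / 1e6  # ~187 MB of raw image data per second

print(f"{pixels_per_second:,} pixel values/s, about {data_rate_mb_per_s:.0f} MB/s of raw data")
```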

The need for all of this computational horsepower drives up the cost, size, and energy consumption of the hardware, and that is preventing computer vision systems from being deployed in any number of devices that could otherwise benefit from them. One potential solution to this problem that has been explored recently is the neuromorphic camera. Rather than capturing full images many times per second, these cameras instead only respond to changes. These localized changes in light intensity are then reported by the camera, which gives downstream computing systems a lot less to chew on.
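
To make the difference concrete, here is a simplified, idealized model of how an event camera generates its data, with a pixel firing an event only when its log intensity changes by more than some contrast threshold. This is an illustration of the general principle, not the researchers' code, and the threshold and frame sizes are arbitrary assumptions:

```python
import numpy as np

# Idealized event-generation model (a simplified illustration, not any vendor's API).
# A pixel emits an event (x, y, timestamp, polarity) only when its log intensity
# changes by more than a contrast threshold; unchanged pixels report nothing.
CONTRAST_THRESHOLD = 0.2  # arbitrary assumed value


def events_between_frames(prev_frame, curr_frame, timestamp):
    """Approximate the events an ideal sensor would emit between two snapshots."""
    eps = 1e-6  # avoid log(0)
    delta = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    ys, xs = np.nonzero(np.abs(delta) >= CONTRAST_THRESHOLD)
    polarities = np.sign(delta[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return [(int(x), int(y), timestamp, int(p)) for x, y, p in zip(xs, ys, polarities)]


# A static scene produces no events at all, while a frame-based camera would
# keep streaming every pixel of every frame.
frame_a = np.full((480, 640), 0.5)
frame_b = frame_a.copy()
frame_b[200:220, 300:340] += 0.3  # a small bright object appears

print(len(events_between_frames(frame_a, frame_a, 0.000)))  # 0 events
print(len(events_between_frames(frame_a, frame_b, 0.033)))  # 800 events, one per changed pixel
```

A pixel that sees no change simply stays silent, which is why the data volume drops so dramatically, and it also hints at the failure mode described next.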

Unfortunately, neuromorphic cameras have some limitations of their own. In particular, when the camera is in motion, objects moving in the same direction produce little relative motion, and therefore little change in brightness at the sensor, so they may be effectively invisible to it. A team led by researchers at the University of Maryland took inspiration from the human visual system in developing a solution to this problem that they call the artificial microsaccade-enhanced event camera, or AMI-EV. As it turns out, the human eye is subject to a similar limitation. We are not aware of it in our everyday lives, however, because of special eye movements called microsaccades.

Microsaccades are tiny, involuntary motions that our eyes make, even as we feel that we have our eyes fixed on a specific point in space. Without these movements, details of the scene we are looking at would start to fade out after a short time. So the researchers wondered if introducing something like microsaccades into a neuromorphic camera might help it to avoid missing moving objects, and also increase the detail of the data it captures.

To test out this theory, the team positioned a wedge-shaped prism in front of the lens of a neuromorphic camera. This prism is then made to rotate, and in doing so, it causes the light entering the camera to “jiggle,” much like the tiny movements that our eyes are constantly making.
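
A very rough geometric model helps illustrate why this works. A thin wedge prism bends incoming light by an angle of roughly (n − 1)·α, where n is the refractive index and α is the wedge angle, and spinning the prism sweeps that deviation around in a circle. The short sketch below, with assumed values for the prism and lens (these are not parameters from the paper), estimates the resulting circular motion of the image on the sensor:

```python
import numpy as np

# Very rough geometric sketch of the rotating-wedge idea (my own illustration,
# with assumed values; this is not the AMI-EV team's code or their parameters).
# A thin wedge prism deviates light by roughly (n - 1) * alpha, and spinning the
# prism sweeps that deviation in a circle, so the image traces a small circular
# "jiggle" on the sensor, much like a microsaccade.
n = 1.5                    # assumed refractive index of the prism
alpha = np.deg2rad(0.5)    # assumed wedge angle of the prism
focal_length_px = 800.0    # assumed lens focal length, expressed in pixels

deviation = (n - 1.0) * alpha                     # thin-prism deviation angle (radians)
radius_px = focal_length_px * np.tan(deviation)   # radius of the circular image shift


def image_offset(t, rotation_hz=30.0):
    """Pixel offset of the image at time t as the prism spins (rotation rate assumed)."""
    theta = 2.0 * np.pi * rotation_hz * t
    return radius_px * np.cos(theta), radius_px * np.sin(theta)


# Even with a perfectly static scene, every edge now keeps sweeping across pixels,
# so it keeps generating events instead of fading out of the event stream.
for t in (0.0, 0.01, 0.02):
    dx, dy = image_offset(t)
    print(f"t = {t:.2f} s: image shifted by ({dx:+.2f}, {dy:+.2f}) px")
```

Because that circular shift affects the whole image, even an object moving in lockstep with the camera keeps producing events, which is exactly the blind spot the prism is meant to eliminate.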

With this rotating prism in place, the team demonstrated that AMI-EV can capture more consistent and detailed images, even when the camera or the objects in view are moving. The complete system combines the hardware with custom software to produce a more reliable, higher-quality visual output, making it especially useful for robotic tasks in dynamic and challenging environments.

As it is currently designed, the AMI-EV system does consume more power than a traditional neuromorphic camera. This is due to the mechanical components that are needed to rotate the prism. Looking ahead, the researchers plan to remedy this situation by using another strategy to rotate the incoming light — perhaps through the use of electro-optic materials. With refinements such as this, AMI-EV may one day make the use of computer vision practical for many more applications.
