Bird’s-Eye AI

Scientists have developed a bionic vision device that mimics birds’ eyesight, enabling energy-efficient and highly capable machine vision.

nickbild
Neuromorphic visual sensor arrays, inspired by birds, use very little energy

The most natural way to give autonomous agents the raw data they need to interact with the world around them is via computer vision. After all, vision is the primary way we experience the world, so if we want these agents to handle some of the tasks that we normally do, it only makes sense to give them similar capabilities. Furthermore, cameras provide the rich, high-resolution information that is the ideal input to powerful decision-making algorithms.

However, cameras are not without their limitations. They tend to consume a lot of energy, for instance, which can drain the batteries of a mobile platform very rapidly. Moreover, traditional cameras deteriorate in performance as light levels decrease, and become completely useless when light levels fall below a certain threshold.

The structure of the device (📷: P. Xie et al.)

To address these challenges, scientists at the City University of Hong Kong have developed a novel wearable bionic vision device that mimics birds’ exceptional eyesight while consuming almost no power. Their recently published findings introduce a different approach to machine vision that could significantly enhance the capabilities of autonomous systems, in areas ranging from robotics to smart surveillance.

Birds are known for their highly advanced visual systems, which enable them to spot prey from great distances and perceive a broad spectrum of light, including ultraviolet (UV). Inspired by these capabilities, the research team has created a neuromorphic visual sensor that enhances perception, reduces power consumption, and improves data processing efficiency.

The sensor design consists of a dual-junction structure that combines van der Waals heterojunctions with Schottky junctions. The materials used enable broadband light sensing, from UV to near-infrared, and allow the device to operate in an ultra-low power mode. The researchers achieved this by using specially oriented gallium arsenide nanowires combined with liquid-surface-assembled poly(3-hexylthiophene-2,5-diyl) organic films. This design not only mimics the functionality of avian photoreceptors, but also enables efficient memory storage and in-sensor computing.
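To give a feel for how a single element can both sense and remember, here is a toy numerical model. It is purely illustrative and is not the authors' device physics: the update rule, gain, and decay constant are invented. The idea it sketches is that each light pulse potentiates an internal state that relaxes only slowly in the dark, so recent optical history lingers in the sensor itself.

```python
# Toy model of a neuromorphic photodetector with short-term memory.
# Illustrative only -- the constants and update rule are assumptions,
# not the device physics reported in the paper.
import numpy as np

TAU_DECAY = 50.0   # hypothetical relaxation time constant (time steps)
GAIN = 0.2         # hypothetical photoresponse gain per unit intensity

def simulate_photoresponse(light, tau=TAU_DECAY, gain=GAIN):
    """Return the sensor state over time for a light-intensity sequence.

    Each pulse potentiates the state; in the dark the state decays
    slowly, so the recent optical history is retained -- a stand-in
    for the memory effect that enables in-sensor computing.
    """
    state = 0.0
    trace = []
    for intensity in light:
        state += gain * intensity      # potentiation by incident light
        state *= np.exp(-1.0 / tau)    # slow relaxation in the dark
        trace.append(state)
    return np.array(trace)

# Example: three pulses followed by darkness -- the response fades
# gradually instead of vanishing, encoding the temporal pattern.
stimulus = np.array([1, 1, 1] + [0] * 20, dtype=float)
response = simulate_photoresponse(stimulus)
print(response.round(3))
```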

The sensor is highly flexible (📷: P. Xie et al.)

The neuromorphic visual sensor integrates easily with artificial intelligence algorithms, particularly deep learning models, which help make sense of the data it produces. The researchers demonstrated these capabilities using a reservoir computing system that enabled multi-modal recognition of moving objects. It successfully identified shape, motion, color, and even UV grayscale information with an impressive 94% accuracy rate.
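Reservoir computing pairs a fixed, untrained dynamical system with a simple trained linear readout. The sketch below shows the general idea as a conventional echo state network in Python; the network sizes and the toy rising-vs-falling ramp task are invented for demonstration, and in physical reservoir computing a device's own dynamics can play the role that the random recurrent network plays here.

```python
# Minimal reservoir-computing sketch: a fixed random recurrent "reservoir"
# expands input sequences into rich states, and only a linear readout is
# trained. All sizes and data here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_RES, N_IN = 100, 1

# Fixed random weights; rescale the recurrent matrix for stable dynamics.
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def reservoir_state(sequence):
    """Drive the reservoir with a 1-D sequence; return the final state."""
    x = np.zeros(N_RES)
    for u in sequence:
        x = np.tanh(W @ x + W_in @ np.array([u]))
    return x

# Toy task: distinguish rising from falling ramps (two "motion" classes).
def make_sample(label):
    ramp = np.linspace(0, 1, 20)
    seq = ramp if label == 1 else ramp[::-1]
    return seq + rng.normal(0, 0.05, ramp.size)

labels = rng.integers(0, 2, 200)
states = np.stack([reservoir_state(make_sample(y)) for y in labels])

# Train only the linear readout, via ridge regression.
reg = 1e-2
targets = 2.0 * labels - 1.0
W_out = np.linalg.solve(states.T @ states + reg * np.eye(N_RES),
                        states.T @ targets)

test_labels = rng.integers(0, 2, 50)
test_states = np.stack([reservoir_state(make_sample(y)) for y in test_labels])
preds = (test_states @ W_out > 0).astype(int)
print("accuracy:", (preds == test_labels).mean())
```

Because only the readout weights are trained, the approach is cheap to fit, which is part of what makes it attractive for in-sensor and edge deployments.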

The device also has a multi-tasking ability that allows it to process and integrate different types of sensory inputs into a single coherent image, a technique known as "fusion imaging." This enhances machine vision under difficult conditions, such as nighttime navigation, fog, or areas with variable lighting.
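As a rough illustration of the fusion idea (not the paper's method), the sketch below blends a dim visible-light frame with a UV frame using a per-pixel weighting, so whichever channel carries the stronger signal dominates each pixel. The synthetic frames and the weighting rule are assumptions made up for demonstration.

```python
# Illustrative "fusion imaging" sketch: per-pixel weighted blend of a
# visible frame and a UV frame. The weighting rule and synthetic data
# are assumptions for demonstration, not the method from the paper.
import numpy as np

def fuse(visible, uv, eps=1e-6):
    """Per-pixel weighted average of two single-channel frames in [0, 1].

    Uses each channel's signal level as a crude proxy for information
    content, so the stronger channel dominates at each pixel.
    """
    w_vis, w_uv = visible, uv
    total = w_vis + w_uv + eps
    return (w_vis * visible + w_uv * uv) / total

# Synthetic example: a dim visible frame (as at night) plus a UV frame
# that reveals a target invisible in the visible band.
rng = np.random.default_rng(1)
visible = 0.1 * rng.random((64, 64))   # low-light frame, mostly noise
uv = np.zeros((64, 64))
uv[16:48, 16:48] = 0.8                 # UV-bright target

fused = fuse(visible, uv)
print("target region mean:", fused[16:48, 16:48].mean().round(2))
print("background mean:   ", fused[:16, :16].mean().round(2))
```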

By leveraging the strengths of both biological vision and advanced materials science, the researchers have taken a significant step toward creating machine vision systems that are not only more efficient but also more intelligent. This brings us closer to a future where autonomous systems can "see" the world as effectively as nature’s best hunters — without the limitations of traditional technologies.
