This project began as a student assignment in a university course on applying embedded machine learning to animal monitoring. The goal was to explore how low-power hardware could detect specific wildlife behaviors in real time.
What started as an academic exercise soon turned into something much bigger. The student leading the project became deeply involved and eventually joined a tech company through an R&D collaboration we established. Together, we began developing a system capable of classifying bird species in real time using audio signals.
We use Arduino Nano 33 BLE Sense boards and ESP32-based devices for audio capture and on-device inference. The system uses the boards' onboard microphones and optimized neural networks trained to recognize bird calls and classify species directly at the edge, with no need for cloud processing.
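To give a feel for what "on-device inference from audio" involves, here is a minimal sketch of the kind of pipeline such a system runs: frame the microphone signal, reduce each frame to a compact spectral feature vector, and feed it to a small classifier. This is an illustrative simplification, not our deployed model; the sample rate, band count, species labels, and the linear classifier standing in for the trained network are all hypothetical.

```python
import numpy as np

SAMPLE_RATE = 16_000  # assumed microphone sample rate (Hz)
FRAME_LEN = 512       # samples per analysis frame
NUM_BANDS = 16        # coarse spectral bands per frame
SPECIES = ["robin", "sparrow", "blackbird"]  # hypothetical label set

def spectral_features(audio: np.ndarray) -> np.ndarray:
    """Split a clip into frames and average FFT magnitudes into coarse bands."""
    n_frames = len(audio) // FRAME_LEN
    frames = audio[: n_frames * FRAME_LEN].reshape(n_frames, FRAME_LEN)
    # Windowed FFT magnitude per frame (257 bins for a 512-sample frame)
    mags = np.abs(np.fft.rfft(frames * np.hanning(FRAME_LEN), axis=1))
    # Pool FFT bins into NUM_BANDS coarse bands, then log-compress
    usable = (mags.shape[1] // NUM_BANDS) * NUM_BANDS
    bands = mags[:, :usable].reshape(n_frames, NUM_BANDS, -1).mean(axis=2)
    # Average over frames: one NUM_BANDS-dimensional vector per clip
    return np.log1p(bands).mean(axis=0)

def classify(features: np.ndarray, weights: np.ndarray, bias: np.ndarray):
    """Tiny linear classifier standing in for the trained neural network."""
    logits = features @ weights + bias
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()
    return SPECIES[int(np.argmax(probs))], probs
```

On the actual boards the same steps run in fixed-point C with a quantized network, but the structure (frame, extract features, classify) is the same.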
The model was developed following two parallel approaches: one using the Edge Impulse platform for streamlined signal processing, model design, and deployment; and another using Python and Zephyr RTOS, building the entire pipeline manually for greater flexibility and low-level control in resource-constrained environments. As part of the system architecture, selected information or classification results can be transmitted via LoRa to a central gateway for aggregation, monitoring, and further analysis.
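Because LoRa payloads are small (on the order of 51 bytes at the slowest data rates), only compact classification results are sent to the gateway, not raw audio. A minimal sketch of what such a payload could look like, with a hypothetical 8-byte layout (node id, species id, quantized confidence, timestamp) that is our illustration rather than a defined protocol:

```python
import struct

# Hypothetical compact detection record: node id (uint16), species id (uint8),
# confidence quantized to 0-255 (uint8), Unix timestamp (uint32) = 8 bytes total.
PAYLOAD_FMT = ">HBBI"  # big-endian, fixed-width fields

def pack_detection(node_id: int, species_id: int, confidence: float, ts: int) -> bytes:
    """Encode one classification result for transmission over LoRa."""
    conf_q = max(0, min(255, round(confidence * 255)))
    return struct.pack(PAYLOAD_FMT, node_id, species_id, conf_q, ts)

def unpack_detection(payload: bytes):
    """Decode a payload at the gateway, recovering confidence as a float."""
    node_id, species_id, conf_q, ts = struct.unpack(PAYLOAD_FMT, payload)
    return node_id, species_id, conf_q / 255.0, ts
```

Keeping each detection to a few bytes leaves room to batch several results per transmission and keeps airtime, and therefore power draw, low.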
This lightweight, low-power solution is ideal for remote deployments in natural habitats and aims to support both ecological research and conservation efforts.