This project is an Ultra-Low-Power (ULP) acoustic monitoring system built on Zephyr RTOS, with sound recognition models created in Edge Impulse.
We are building an intelligent, Ultra-Low-Power acoustic monitoring ecosystem designed to prevent human-wildlife conflict.
In recent years, Japan has faced a growing conflict between humans and wildlife. Animals like bears, boars, and deer are increasingly encroaching on populated areas, leading to severe crop damage and dangerous encounters with residents.
This issue has a tragic flip side: many of these animals are culled as "pests," resulting in the loss of precious wildlife that should be a protected part of our ecosystem.
The core problem we are solving is this escalating Human-Wildlife Conflict (HWC). Our mission is to move beyond a cycle of fear and extermination towards a future of sustainable coexistence.
Our Goal: To protect both people and wildlife from conflict.
This project's main topics are as follows:
- Edge AI system for Wildlife Sound Recognition.
- Ultra-Low-Power ecosystem using Zephyr (RTOS).
- Energy-efficient implementation on low-end embedded SoCs.
- Long-range, Low-power communication with Bluetooth Low Energy.
The video below summarizes the main results. It's all wrapped up in about four minutes, so please take a look!
2. Hardware Design
This section details the core components.
For details, please refer to the KiCad project files (.zip), schematics (.pdf), Gerber files (.zip), 3D data (.step), and BOM (.xlsx) in the 'Schematics' section.
The images below show the key features of this project.
PCB Overview
Here is the PCB overview. It looks like an owl.
The XIAO nRF54L15 (nRF52840) Sense connects to this PCB.
The XIAO nRF54L15 (nRF52840) Sense integrates:
- IMU(Inertial Measurement Unit)
- Microphone
- Low Power Wireless SoC
Here is the KiCad 3D-image.
Here is the KiCad 2-layer image.
This PCB is 2-layer, 85.3mm x 140.1mm, and 1.6mm thick.
For details, please refer to the KiCad project file.
BOM
This device consists of the items below. (For details, please refer to the 'Schematics' section.)
- XIAO nRF54L15 Sense (or XIAO nRF52840 Sense)
- Battery (5V source or 3.7V Li-ion battery)
- ULP Acoustic PCB (the PCB BOM image is below)
Power Design
This device features a USB Type-C port and can be powered by either a 5V source or a 3.7V Li-ion battery.
3.7V Li-ion battery charging is also supported.
The 5mm round and 10mm square holes make it easy to secure the battery and mount the device in the field.
Form Factor & Usability
Design: This device consists of the XIAO nRF54L15 (nRF52840) Sense plus an external buzzer, RGB LEDs, an RTC, and a battery.
- XIAO Header: 2.54mm pitch pin header is included for attaching the main device, the XIAO nRF54L15(nRF52840) Sense.
- Battery Connector: For extended battery life, it includes a 3-pin connector compatible with commercial Li-ion batteries.
- RGB_LED x2 and Buzzer: The LED indicates the status of Edge AI sound recognition, while the buzzer can be used to deter wild animals.
- On-board RTC: A Real-Time Clock is included to keep accurate time during extended battery-powered use.
All source code for this project is available on GitHub.
https://github.com/iotengineer22/edge-aI-earth
We chose Zephyr RTOS for this project. It offers scalability and an Ultra-Low-Power ecosystem for the XIAO nRF54L15 (nRF52840) Sense.
- Zephyr RTOS: An open-source RTOS that is well-suited for this kind of embedded Ultra-Low-Power ecosystem.
- Nordic Semiconductor nRF Connect SDK: This SDK is a very useful tool for developing on a custom board for the nRF54L15. We will also introduce practical debugging and usage methods. We use SDK version v3.0.1.
All source code for this project is also available on GitHub.
https://github.com/iotengineer22/edge-aI-earth/src
3-2. Board file (DeviceTree)
The original board (DeviceTree) files for this project are from the official Seeed Studio repository.
https://github.com/Seeed-Studio/platform-seeedboards/tree/main/zephyr/boards/arm/xiao_nrf54l15
For instructions on setting up the build environment for the XIAO nRF54L15 board, please refer to the getting started guide below.
https://wiki.seeedstudio.com/xiao_nrf54l15_sense_getting_started/#add-xiao-nrf54l15-board
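To give a feel for what a board overlay for this project's external parts might look like, here is a rough sketch. The node names and pin numbers below are assumptions for illustration, not the actual PCB netlist; refer to the Seeed board files above for the real definitions.

```dts
/* Hypothetical overlay sketch: node names and pin numbers are assumptions,
 * not taken from the real PCB (see the official Seeed board files). */
/ {
	leds {
		compatible = "gpio-leds";
		status_red: led_0 {
			gpios = <&gpio0 4 GPIO_ACTIVE_HIGH>;	/* assumed pin */
			label = "Status Red LED";
		};
	};

	buzzer {
		compatible = "gpio-leds";
		buzzer_out: buzzer_0 {
			gpios = <&gpio0 5 GPIO_ACTIVE_HIGH>;	/* assumed pin */
			label = "Buzzer";
		};
	};
};
```

In application code, such a node can then be resolved with Zephyr's `GPIO_DT_SPEC_GET(DT_NODELABEL(buzzer_out), gpios)` and driven through the standard GPIO API.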
Here's an introduction to the software development environment used and the simple setup process. For details, please refer to the official link provided.
- nRF Connect for VS Code Extension Pack(nRF Connect for VS Code)
https://www.nordicsemi.com/Products/Development-tools/nRF-Connect-for-VS-Code
For this project, we utilized the nRF Connect SDK via VS Code.
The nRF Connect SDK itself was also installed through the VS Code extension.
- nRF-Util
https://www.nordicsemi.com/Products/Development-tools/nRF-Util
Simply installing nRF Connect for VS Code was not sufficient to flash the board, resulting in an error.
Flashing build to 1057787458 C:\Windows\system32\cmd.exe /d /s /c "west flash -d c:\boards\test\blinky_3\build --skip-rebuild --dev-id 1057787458"
-- west flash: using runner nrfutil
-- runners.nrfutil: reset after flashing requested
FATAL ERROR: required program nrfutil not found; install it or add its location to PATH
Therefore, we referenced the installation guide:
https://docs.nordicsemi.com/bundle/nrfutil/page/guides/installing.html
We downloaded nrfutil and placed it in the following folder within the nRF-SDK:
\ncs\toolchains\***<your toolchain>\opt\bin\Scripts
To address this, we installed nRF Util via the command prompt. The nrfutil command becomes available after restarting.
C:\Users\***>curl https://files.nordicsemi.com/artifactory/swtools/external/nrfutil/executables/x86_64-pc-windows-msvc/nrfutil.exe -o nrfutil.exe
C:\Users\***>nrfutil
3-4. Edge Impulse
This project uses Edge Impulse to create a lightweight sound recognition model, primarily based on the Seeed Studio Wiki.
https://wiki.seeedstudio.com/XIAO-BLE-PDM-EI/
The audio dataset used for pre-testing is from the samples listed below.
However, this project has two key differences:
- It uses the nRF54L15 SoC. (* If using XIAO nRF54L15 Sense)
- The model is integrated into a Zephyr C++ environment.
The video below demonstrates the actual model creation process in Edge Impulse.
Here is an overview of the steps:
- Create a new project.
- Upload labeled data. (In this case, we uploaded 'bird', 'cat', 'dog', and 'noise' data.)
- Select the target device: We use "Nordic nRF54L15". (* If using XIAO nRF54L15 Sense)
- Choose the processing and learning blocks: We use "MFCC" and "Classification".
- Set MFCC parameters and generate features.
- Train the classifier model.
- Build and deploy the model as a C++ library, then check the model in Edge Impulse.
Detailed instructions for integrating and building the lightweight model in the Zephyr C++ environment are provided in each test section below.
3-5. Wildlife-Sound
The audio dataset of wild animal sounds, used for training the Edge Impulse model in this test, is sourced from the ESC-50 dataset and Freesound.org.
For the license of the audio data itself, please refer to the linked source. This document only describes the procedure for creating a sample dataset.
You can use these datasets to create a wildlife-sound recognition model on Edge Impulse.
- Dogs, cats, and birds: Audio data is from the ESC-50 dataset.
https://github.com/karolpiczak/ESC-50
A Python script (.ipynb) is provided below to fetch audio data from the ESC-50 dataset and prepare it for our sound recognition project.
https://github.com/iotengineer22/edge-aI-earth/blob/main/src/sound-data/collect_dataset_esc-50.ipynb
- Bears: Audio data is sourced from Freesound.org.
https://github.com/iotengineer22/edge-aI-earth/blob/main/src/sound-data/README.md
Additionally, another script is available for creating a dataset from your own audio files stored on Google Drive.
https://github.com/iotengineer22/edge-aI-earth/blob/main/src/sound-data/collect_dataset_mydata.ipynb
In both cases, all audio is processed into 1-second samples with a 16kHz sampling rate.
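The per-sample preparation described above can be sketched as follows. This is a simplified stand-in for the linked notebooks: the nearest-neighbor resampling here is an assumption for illustration, not necessarily the method the notebooks use.

```python
def resample_nearest(samples, src_rate, dst_rate=16000):
    """Crude nearest-neighbor resampling to the target rate (an assumption;
    the project's notebooks may use a higher-quality method)."""
    n_out = int(len(samples) * dst_rate / src_rate)
    return [samples[min(int(i * src_rate / dst_rate), len(samples) - 1)]
            for i in range(n_out)]

def chunk_1s(samples, rate=16000):
    """Split audio into non-overlapping 1-second windows, dropping any remainder."""
    return [samples[i:i + rate] for i in range(0, len(samples) - rate + 1, rate)]

# Example: 2.5 s of 8 kHz audio -> resample to 16 kHz -> two full 1 s chunks
audio_8k = [0] * 20000                      # 2.5 s at 8 kHz
audio_16k = resample_nearest(audio_8k, 8000)
chunks = chunk_1s(audio_16k)
print(len(audio_16k), len(chunks))          # 40000 2
```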
4. Debug and Interface Evaluation
The ULP Acoustic PCB is equipped with several interfaces.
We introduce some of the interfaces we debugged and tested, though not all of them.
4.1 RGB_LED
RGB_LED (GPIO) debugging was performed. The board features onboard switches and LEDs, allowing for operation verification.
As the program is long, you can find it on GitHub. The following program was used for testing:
URL:https://github.com/iotengineer22/edge-aI-earth/tree/main/src/zephyr/gpio
The demo video is below:
The board's RGB LED is controlled from the XIAO nRF54L15 Sense GPIO.
We can confirm the red, green, and blue LED colors.
The LED indicates the status of Edge AI sound recognition.
Buzzer (GPIO) debugging was performed. The board features an onboard buzzer.
By using a buzzer, we can produce sounds that wild animals dislike, helping to prevent Human-Wildlife Conflict.
As the program is long, you can find it on GitHub.
The following program was used for testing:
URL:https://github.com/iotengineer22/edge-aI-earth/tree/main/src/zephyr/buzzer
The demo video is below:
We can confirm the board buzzer sounds when driven from the XIAO nRF54L15 Sense GPIO.
If you need to adjust the sound volume, we recommend either adding a resistor in parallel with the buzzer or adjusting the value of the series resistor.
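The volume-adjustment tip above comes down to simple voltage-divider arithmetic. The sketch below models the buzzer as a plain resistor; all component values are hypothetical, not taken from the BOM.

```python
def buzzer_voltage(v_supply, r_series, r_buzzer, r_parallel=None):
    """Approximate DC voltage across the buzzer, modeled as a resistor.
    Adding a parallel resistor or raising the series resistor both lower the
    drive voltage, and hence the volume."""
    if r_parallel is None:
        r_load = r_buzzer
    else:
        r_load = (r_buzzer * r_parallel) / (r_buzzer + r_parallel)
    return v_supply * r_load / (r_series + r_load)

# Hypothetical values: 3.3 V GPIO, 100-ohm series resistor, 42-ohm buzzer coil
print(round(buzzer_voltage(3.3, 100, 42), 2))                  # 0.98 (baseline)
print(round(buzzer_voltage(3.3, 100, 42, r_parallel=47), 2))   # 0.6 (quieter)
```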
This device features a USB Type-C port and can be powered by either a 5V source or a 3.7V Li-ion battery. 3.7V Li-ion battery charging is also supported.
The demo video is below:
Here, we will guide you through the setup for both running on 3.7V Li-ion battery and charging it.
The custom board has a 3.7V Li-ion battery connector. The battery line is connected to the XIAO nRF54L15 Sense.
We confirmed the XIAO nRF54L15 Sense runs from the Li-ion battery.
We can use both USB Type-C and battery power.
When both USB Type-C and the battery are connected via the XIAO nRF54L15 Sense,
we can confirm the charge LED blinking.
Microphone and Edge AI debugging with Zephyr(RTOS) was performed.
The XIAO nRF54L15 Sense features an onboard microphone.
This section covers testing speech recognition via the microphone. The first test demonstrates the recognition of human speech.
We are developing an Edge AI voice recognition application to run on the Zephyr RTOS.
As the program is long, you can find it on GitHub. The following program was used for testing:
URL:https://github.com/iotengineer22/edge-aI-earth/tree/main/src/zephyr/dmic_inference_gpio
The demo video is below:
This demo video showcases voice recognition for the keywords "Down" and "Right."
It runs a lightweight model, built with Edge Impulse, to perform Edge AI detection directly on the board itself.
When "Down" is recognized, the Red LED turns on.
When "Right" is recognized, the green LED turns on.
This confirms that the on-device Edge AI voice recognition is working successfully.
The test procedure in this section is the same as in the previous chapter.
However, the key difference is that the audio recognition model has been changed from human speech to animal sounds.
This new model was created on Edge Impulse using the aforementioned datasets from ESC-50 and Freesound.org.
However, this is intended for prototyping. To create a more accurate dataset, noise reduction and more careful sampling will be required.
The demo video is below:
This is a demo of the board performing audio recognition of animal sounds played from a speaker. In this test, the board recognizes 'Bear' and 'Dog' sounds.
This confirms that the on-device wildlife sound recognition is working successfully.
- Prediction: 'Bear' (Grrrr!) → Red LED turns ON.
- Prediction: 'Dog' (Woof!) → Green LED turns ON.
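The label-to-LED mapping from the demo can be sketched as below. The firmware itself is the C++ program linked above; the confidence threshold and the 'noise' handling here are assumptions for illustration.

```python
LED_MAP = {"bear": "red", "dog": "green"}  # label -> LED color, mirroring the demo
THRESHOLD = 0.8                            # confidence threshold is an assumption

def led_for(scores):
    """Return the LED to light for the top classifier score, or None if the
    best score is below threshold or the winner is the 'noise' class."""
    label, score = max(scores.items(), key=lambda kv: kv[1])
    if score < THRESHOLD or label == "noise":
        return None
    return LED_MAP.get(label)

print(led_for({"bear": 0.91, "dog": 0.05, "noise": 0.04}))  # red
print(led_for({"bear": 0.30, "dog": 0.35, "noise": 0.35}))  # None
```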
We conducted a test of Bluetooth Low Energy (BLE) for its long-range and low-power communication capabilities.
The envisioned application is a system that sends a notification immediately upon recognizing the sound of a wild animal.
Our original plan was to test the complete system, including notifications over several kilometers using a gateway.
However, due to material and deadline constraints, we limited the scope of this test to verifying the BLE long-range protocol itself.
The Bluetooth LE Coded PHY (S=8) feature enables extended range communication at the cost of a lower data rate.
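To put numbers on that trade-off: the nominal BLE payload bit rates are 1 Mbit/s on the 1M PHY, 500 kbit/s on Coded PHY S=2, and 125 kbit/s on Coded PHY S=8. The calculation below is idealized and ignores packet overhead and connection timing.

```python
PHY_BITRATE = {"1M": 1_000_000, "coded_s2": 500_000, "coded_s8": 125_000}  # bits/s

def payload_airtime_ms(n_bytes, phy):
    """Idealized time to clock n_bytes of payload over the radio.
    Ignores preamble, header, CRC, and inter-frame spacing."""
    return n_bytes * 8 / PHY_BITRATE[phy] * 1000

# A 244-byte payload (typical maximum ATT data length) on each PHY:
print(payload_airtime_ms(244, "1M"))        # ~1.95 ms
print(payload_airtime_ms(244, "coded_s8"))  # ~15.6 ms: 8x slower, much longer range
```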
As the program is long, you can find it on GitHub. The following program was used for testing:
URL:https://github.com/iotengineer22/edge-aI-earth/tree/main/src/zephyr/throughput
The demo video is below:
For the test, we prepared two of our PCB boards and configured one as a Central and the other as a Peripheral.
We successfully established a connection and confirmed that data transfer between the two boards was successful.
We have verified the ultra-low-power performance of our board running the Zephyr RTOS.
The actual low-current measurements were taken using a modified USB Type-C cable and a multimeter.
The demo video is below:
For comparison, we also measured the idle current consumption of a Raspberry Pi 5, which was approximately 670mA.
On the other hand, our project's PCB had an idle current of 9.4mA.
Even with the Edge AI voice recognition system implemented, our board achieved an exceptionally low power consumption of just 0.5mA.
Furthermore, the Bluetooth LE communication drew a minimal 3.6mA at a debug level, though this can vary depending on the transmission distance.
This confirms that the Zephyr RTOS itself can operate with Ultra-Low-Power.
As the comparison table and graph clearly show, our board's power consumption is significantly lower than that of a Raspberry Pi 5, even when the latter is idle.
A device with this level of current consumption can operate for several weeks to a month, depending on the battery capacity.
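As a back-of-the-envelope check on that estimate: runtime is roughly capacity divided by average current. The battery capacities and the 80% derating factor below are assumptions; the currents are the figures measured above.

```python
def runtime_days(capacity_mah, avg_current_ma, derating=0.8):
    """Estimated runtime in days. The 80% derating for conversion losses and
    self-discharge is an assumption, not a measured figure."""
    return capacity_mah * derating / avg_current_ma / 24

# Currents measured in this section: 9.4 mA idle, 3.6 mA during BLE activity
print(round(runtime_days(10000, 9.4), 1))  # hypothetical 10,000 mAh power bank, idle
print(round(runtime_days(3000, 3.6), 1))   # hypothetical 3,000 mAh Li-ion, BLE active
```

Both cases land in the "several weeks to a month" range stated above.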
We also conducted a field installation test. The PCB is equipped with mounting holes for both a battery and for securing the unit itself.
We confirmed that the device can be easily installed using zip ties.
The demo video is below:
Furthermore, it is compatible with large-capacity 5V power banks for extended use.
We verified that the audio recognition system operates correctly in an outdoor environment without any issues.
While the previous sections featured tests on the latest XIAO nRF54L15 Sense, this solution is also fully functional on the currently available, low-end XIAO nRF52840 Sense.
This demonstrates that an energy-efficient implementation on low-end embedded SoCs with Zephyr (RTOS) is achievable.
For the XIAO nRF52840 Sense, please note that this project is built using the standalone Zephyr RTOS environment (west), not the nRF Connect SDK.
As the program and config-file are long, you can find it on GitHub. The following program was used for testing:
URL:https://github.com/iotengineer22/edge-aI-earth/tree/main/src/zephyr/nrf52840_dmic_inference_gpio
The demo video is below:
Here are the build results from Zephyr (RTOS).
The memory usage is very low, at only 193KB of ROM (FLASH) and 78KB of RAM. This allows us to run Edge AI voice recognition on low-end SoCs.
Memory region Used Size Region Size %age Used
FLASH: 193468 B 788 KB 23.98%
RAM: 77624 B 256 KB 29.61%
IDT_LIST: 0 GB 32 KB 0.00%
Generating files from C:/Users/***/zephyrproject/test/dmic-inf-nrf52/build/zephyr/zephyr.elf for board: xiao_ble
Converted to uf2, output size: 387072, start address: 0x27000
Wrote 387072 bytes to zephyr.uf2
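As a quick sanity check, the percentages in the build report follow directly from used size over region size (with 1 KB = 1024 B):

```python
# Reproduce the percentage columns of the Zephyr build report above
flash_used, flash_size = 193468, 788 * 1024   # bytes
ram_used, ram_size = 77624, 256 * 1024        # bytes

print(f"FLASH: {flash_used / flash_size:.2%}")  # 23.98%
print(f"RAM:   {ram_used / ram_size:.2%}")      # 29.61%
```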
Also, we successfully ran voice recognition on the XIAO nRF52840 Sense using Edge Impulse.
It is able to recognize the voice of the specific person it was trained on.
This section primarily serves as an introduction to PCB design.
This project involves a 2-layer PCB. We used KiCad Ver9.0 for creating the schematics and board outline (artwork frame).
The Bill of Materials (BOM) and Gerber data, including the artwork frame, are also public. Please refer to the released 'Schematics' section to check them.
You can use the KiCad project files (schematics, artwork, 3D image) freely.
This PCB is 2-layer, 99.5mm x 141.7mm, and 1.6mm thick.
For details, please refer to the PCB parameter settings below.
After placing the order, you just wait for the PCB to arrive. Based on personal experience, the total lead time was approximately 2 weeks.
Despite this, component preparation was conducted in parallel during the fabrication period, which allowed for a smooth process with NextPCB.
Breakdown:
- Board Fabrication: approx. 1-2 days
- Board Assembly: approx. 1 week
- Shipping: approx. 5 days
A significant number of features were successfully implemented and tested during this project. However, due to constraints imposed by the project deadline, the following items remain incomplete:
- Improve AI Model Accuracy for Wildlife Sounds
The current model's accuracy is limited by the open dataset, which lacks sufficient data for animals like bears.
Future work involves refining data pre-processing (noise reduction, sampling) and using a more comprehensive dataset to improve detection accuracy.
- Stabilize 3.7V Battery Operation
While the device operates on both 5V and 3.7V batteries, it becomes unstable on 3.7V under high processing loads.
We plan to investigate the XIAO's internal power management to ensure stable 3.7V performance.
- Implement a Full Notification System
We have successfully tested long-range Bluetooth LE (Coded PHY) data transfer on the device.
The next step is to build and test an end-to-end notification system that sends alerts from a remote field location (e.g., a forest) to a user in a town.
This project successfully achieved the following:
- Edge AI system for Wildlife Sound Recognition.
- Ultra-Low-Power ecosystem using Zephyr (RTOS).
- Energy-efficient implementation on low-end embedded SoCs.
- Long-range, Low-power communication with Bluetooth Low Energy.
The video below summarizes the main results.
This has been a great, fun challenge.
Thanks to Sponsors and Impact Partners for hosting this exciting competition. Thank you very much for all the support provided.
- EDGE AI FOUNDATION
- NextPCB
- Gorilla Technology
- End Wildlife Crime
- WILDLABS
- Interspecies Internet
- hackster.io
We have created a post about this project in the WILDLABS discussion group. We are delighted to be a part of such a wonderful community.
Here is the link to our post:
https://wildlabs.net/article/edge-ai-zephyr-ulp-acoustic-monitoring-wildlife