Tangerang is one of the metropolitan cities in Indonesia. It has many thematic parks that make the city more comfortable for its citizens. One of these thematic parks is Taman Burung Perak, or Perak Bird Park.
The park keeps several species of birds such as parrots, magpies, and doves. They are cared for attentively and given enough food every day, but the park employees sometimes have a hard time counting the birds and tracking their positions.
So this time I will create a TinyML-based bird watcher.
Preparation
The bird park already has an integrated internet/network connection, so I will use the W5300 Ethernet module combined with an STM32F4 Discovery microcontroller board as the main brain.
As for the detection sensor, I am going to use the Grove Vision AI from Seeed Studio with a custom data model.
Create a bird detection data model
For the Grove Vision AI module I have already created a custom data model that you can find here, if you don't want the hassle of creating one yourself. After you download the data model, follow these steps:
Step 1. Install the latest version of Google Chrome or Microsoft Edge browser and open it
Step 2. Connect the Grove - Vision AI Module to your PC via a USB Type-C cable
Step 3. Double-click the boot button on Grove - Vision AI Module to enter mass storage mode
After this you will see a new storage drive in your file explorer named GROVEAI
Step 4. Drag and drop the model-1.uf2 file to GROVEAI drive and wait for 1-2 minutes
Now the Grove Vision AI module can detect birds.
But if you still want to create a custom data model of your own, I have already written a full tutorial in a previous project that you can check here. The only difference is in the Roboflow part, where you need to use a different dataset. You can find various bird datasets on Roboflow.
Arduino Setup
For this tutorial I'll be using an older version of the Arduino IDE.
To use the STM32F4 Discovery in the Arduino IDE, you need to add the following URL under Additional Boards Manager URLs:
https://github.com/stm32duino/BoardManagerFiles/raw/main/package_stmicroelectronics_index.json
Then go to the Boards Manager and install the STM32 board definition.
After the board definition is installed, connect the Grove Vision AI module to the STM32 board as in the picture below.
Make sure to select the correct board definition: STM32F407 Disc1.
And choose the SWD option as the upload method.
Before testing the data model, we need to add the Grove Vision AI library to the Arduino IDE. You can grab the library here.
After the library is downloaded, you can test whether the data model works using the example code from the library. Upload the code, then open a bird picture in a browser and open the serial monitor as in the picture below.
If something shows up in the serial monitor then the data model is working correctly.
One of the main challenges of using the W5300 Ethernet module is that its documentation only supports the STM32 Nucleo board, and I found it difficult to locate and buy that board in my local online store. Fortunately I have an STM32F4 Discovery, so I worked on modifying the W5300 Arduino library to run on the STM32F4 Discovery board.
Add the library to your Arduino IDE from the link above, then locate GxIO_STM32F4_FMC.cpp and make some changes.
First, add two new variables.
Then uncomment this part, since I am using the STM32F407 microcontroller.
Now we need to test whether the modification compiles successfully by building one of the web server examples.
Adjust the hardware positions any way you like, then solder the connections beneath the PCB.
To make the detection data available on the local network, I will create a simple web server based on the web server example. Simply upload the code below, changing the IP address to match your network.
#include <Arduino.h>
#include "Ethernet.h"
#include "Seeed_Arduino_GroveAI.h"
#include <Wire.h>

#define SERVER_PORT 80

GroveAI ai(Wire);
uint8_t state = 0;

// Enter a MAC address and IP address for your controller below.
// The IP address will be dependent on your local network:
byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
IPAddress ip(192, 168, 1, 177);

// Initialize the Ethernet server library
// with the IP address and port you want to use
// (port 80 is default for HTTP):
EthernetServer server(SERVER_PORT);

void setup() {
  // Serial2 is remapped away from the default pins
  Serial2.setRx(PC11);
  Serial2.setTx(PC10);
  Serial2.begin(9600);
  Serial.println("Ethernet WebServer Example");

  // start the Ethernet connection and the server:
  Ethernet.begin(mac, ip);
  print_network_info();

  // Check for Ethernet hardware present
  if (Ethernet.hardwareStatus() == EthernetNoHardware) {
    Serial.println("Ethernet shield was not found. Sorry, can't run without hardware. :(");
    while (true) {
      delay(1); // do nothing, no point running without Ethernet hardware
    }
  }
  if (Ethernet.linkStatus() == LinkOFF) {
    Serial.println("Ethernet cable is not connected.");
  }

  // start the server
  server.begin();
  Serial2.print("server is at ");
  Serial2.println(Ethernet.localIP());

  Wire.begin();
  Serial.println("begin");
  if (ai.begin(ALGO_OBJECT_DETECTION, MODEL_EXT_INDEX_1)) // Object detection and pre-trained model 1
  {
    Serial.print("Version: ");
    Serial.println(ai.version());
    Serial.print("ID: ");
    Serial.println(ai.id());
    Serial.print("Algo: ");
    Serial.println(ai.algo());
    Serial.print("Model: ");
    Serial.println(ai.model());
    Serial.print("Confidence: ");
    Serial.println(ai.confidence());
    state = 1;
  }
  else
  {
    Serial.println("Algo begin failed.");
  }
}

void loop() {
  // listen for incoming clients
  EthernetClient client = server.available();
  if (client) {
    Serial.println("new client");
    // an http request ends with a blank line
    boolean currentLineIsBlank = true;
    while (client.connected()) {
      if (client.available()) {
        char c = client.read();
        Serial.write(c);
        // if you've gotten to the end of the line (received a newline
        // character) and the line is blank, the http request has ended,
        // so you can send a reply
        if (c == '\n' && currentLineIsBlank) {
          // send a standard http response header
          client.println("HTTP/1.1 200 OK");
          client.println("Content-Type: text/html");
          client.println("Connection: close"); // the connection will be closed after completion of the response
          client.println("Refresh: 5");        // refresh the page automatically every 5 sec
          client.println();
          client.println("<!DOCTYPE HTML>");
          client.println("<html>");
          client.println("<HEAD>");
          client.println("<meta name='apple-mobile-web-app-capable' content='yes' />");
          client.println("<meta name='apple-mobile-web-app-status-bar-style' content='black-translucent' />");
          client.println("<link rel='stylesheet' type='text/css' href='https://randomnerdtutorials.com/ethernetcss.css' />");
          client.println("<TITLE>TinyML Bird Watcher</TITLE>");
          client.println("</HEAD>");
          client.println("<BODY>");
          client.println("<H1>Reading Result</H1>");
          if (state == 1) {
            uint32_t tick = millis();
            if (ai.invoke()) // begin invoke
            {
              while (1) // wait for invoking to finish
              {
                CMD_STATE_T ret = ai.state();
                if (ret == CMD_STATE_IDLE) {
                  break;
                }
                delay(20);
              }
              uint8_t len = ai.get_result_len(); // how many objects were detected
              if (len) {
                int time1 = millis() - tick;
                Serial.print("Time consuming: ");
                Serial.println(time1);
                client.print("Number of birds: ");
                client.print(len);
                Serial.print("Number of birds: ");
                Serial.println(len);
                object_detection_t data;
                for (int i = 0; i < len; i++) // get data
                {
                  Serial.println("result:detected");
                  Serial.print("Detecting and calculating: ");
                  Serial.println(i + 1);
                  ai.get_result(i, (uint8_t*)&data, sizeof(object_detection_t)); // get result
                  Serial.print("confidence:");
                  Serial.print(data.confidence);
                  Serial.println();
                }
              }
              else {
                Serial.println("No identification");
                client.print("No identification");
              }
              client.println("<br />");
            }
            else {
              delay(1000);
              Serial.println("Invoke Failed.");
            }
          }
          else {
            state = 0;
          }
          client.println("</BODY>");
          client.println("</html>");
          break;
        }
        if (c == '\n') {
          // you're starting a new line
          currentLineIsBlank = true;
        }
        else if (c != '\r') {
          // you've gotten a character on the current line
          currentLineIsBlank = false;
        }
      }
    }
    // give the web browser time to receive the data
    delay(1);
    // close the connection:
    client.stop();
    Serial.println("client disconnected");
  }
}

void print_network_info(void) {
  byte print_mac[6] = {0};
  Serial.println("\r\n-------------------------------------------------");
  Serial.printf("MAC : ");
  Ethernet.MACAddress(print_mac);
  for (byte i = 0; i < 6; i++) {
    Serial.print(print_mac[i], HEX);
    if (i < 5) {
      Serial.print(":");
    }
  }
  Serial.println();
  Serial.printf("IP : ");
  Serial.print(Ethernet.localIP());
  Serial.printf(": %d\r\n", SERVER_PORT);
  Serial.println("-------------------------------------------------");
}
Then test it out with the camera.
Open the IP address that you set before and see the result on a web page.
I added a little bit of HTML code to make the web page a bit more centered.
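To consume the readings programmatically instead of through a browser, a small script can poll the page and extract the count. A minimal Python sketch (the IP address and the "Number of birds: N" text mirror the server code above; the live fetch is commented out so the parsing can be tried offline):

```python
import re
from urllib.request import urlopen  # stdlib; used only for the live fetch below

def parse_bird_count(html):
    """Extract the count from a 'Number of birds: N' line, or None if absent."""
    match = re.search(r"Number of birds:\s*(\d+)", html)
    return int(match.group(1)) if match else None

# Live poll (uncomment on the same network as the board):
# html = urlopen("http://192.168.1.177/").read().decode()
# print(parse_bird_count(html))

sample = "<H1>Reading Result</H1>Number of birds: 3<br />"
print(parse_bird_count(sample))  # 3
```

The same regex also returns None for the "No identification" page, so a caller can distinguish "no birds" from a malformed response.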
The TinyML-based Bird Watcher project establishes a powerful precedent for using edge machine learning to monitor local fauna, but its focus remains solely on observing the birds themselves. To create a truly comprehensive conservation tool, this monitoring must be correlated with the health of the habitat. The gas-based fire detection model provides this crucial link. By integrating the SenseCAP K1100 kit's environmental sensors (SGP30 and SHT40) into the bird-watching system, we can create a single, unified device that not only watches for birds but also actively protects their environment from existential threats like wildfires. This transforms the project from a passive observer into an active guardian: the same TinyML principles used to identify a bird can now be used to detect the high-temperature, low-humidity, high-VOC signature of a fire, ensuring the long-term safety of the very ecosystem the birds inhabit.
One of the threats to the forest where the sanctuary is located is wildfire, and most of the time the response comes too late because there is no immediate notification.
According to this blog post from Edge Impulse, a data model can be built to detect fire using the following criteria:
- Normal: the forest exhibits normal temperature, humidity, and air quality.
- Open fire: fully fledged wildfire with low humidity, high temperatures, and large amounts of volatile organic compounds (VOCs).
So, with the sensors provided in the SenseCAP K1100 kit, namely the Grove - VOC and eCO2 Gas Sensor (SGP30) and the Grove Temp&Humi Sensor (SHT40), which together can read VOC, temperature, and humidity, a data model will be built to detect whether a fire has occurred in the forest.
Shawn Hymel has already made a video on building a data model from sensor fusion for air/gas sensing.
Check the video above before continuing to the next step.
The difference here is that only two sensors will be used and the data will be collected via SD card.
Before collecting data, make sure the SD card already has an "aqi.csv" file in its root folder.
Use the SGP30 and SHT40 and connect them as in the picture below.
To collect the data, upload this code to the Wio Terminal and insert the SD card.
It will save the VOC, temperature, and humidity data in CSV format to the SD card. The three buttons on the Wio Terminal are used to capture the different classes of data:
Button A to collect background data
Button B to collect fire data
Button C to collect smoke data
When one of the buttons is pressed, it records 1 second of data to the "aqi.csv" file created earlier.
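Before training, it is worth sanity-checking the logged file off-device. A small Python sketch, assuming each "aqi.csv" line holds comma-separated temperature, humidity, and TVOC readings (a hypothetical layout; adjust the column order to match your logging code):

```python
import csv
import io

def validate_rows(csv_text):
    """Parse logged readings into (temp, humi, tvoc) tuples, raising on bad rows."""
    rows = []
    for line in csv.reader(io.StringIO(csv_text)):
        if not line:
            continue  # skip blank lines left by interrupted writes
        temp, humi, tvoc = (float(v) for v in line)
        rows.append((temp, humi, tvoc))
    return rows

sample = "27.1,45.3,12\n27.2,45.1,15\n"
print(validate_rows(sample))  # [(27.1, 45.3, 12.0), (27.2, 45.1, 15.0)]
```

Running this over the real file before uploading to Edge Impulse catches truncated rows early, when they are still easy to re-record.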
Start a small fire and collect all the necessary data.
After all the data has been collected, split each class into its own file, add a timestamp index, and normalize the data using the guide from the video above.
Then upload each class following the same guide. Here are some results from the preprocessing.
Check the preprocessed data results and save the values to a notepad. The "Mins" and "Ranges" values will be reused in the inferencing code:
Mins: [0.0, 26.53, 33.17, 0.0]
Ranges: [484.0, 22.67, 51.78, 60000.0]
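On the device, applying these saved values amounts to a per-feature min-max rescale, (x - min) / range. A minimal Python sketch of the same step, using the Mins and Ranges printed above (the feature order is assumed to match the preprocessing output):

```python
# Saved constants from the Edge Impulse preprocessing step
MINS   = [0.0, 26.53, 33.17, 0.0]
RANGES = [484.0, 22.67, 51.78, 60000.0]

def normalize(raw):
    """Min-max rescale a raw feature vector into the 0..1 range used in training."""
    return [(x - m) / r for x, m, r in zip(raw, MINS, RANGES)]

# A reading at the midpoint of each feature's range maps to roughly 0.5:
print(normalize([242.0, 37.865, 59.06, 30000.0]))
```

The inferencing code must apply exactly this transform before invoking the classifier, otherwise the inputs no longer match what the model saw during training.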
After running all the code in the Google Colab notebook, download the resulting zip file and extract it. The next step is to split each data value into its designated label/class. To do that, a small Python script using the pandas library will be used.
Copy the code below and save it in the same folder where the data is stored.
import pandas as pd

in_csv = 'smoke.sample0.csv'  # change this to the file you want to split
number_lines = sum(1 for row in open(in_csv))
rowsize = 1
colnames = ['timestamp', 'temp', 'humi', 'tvoc']

for i in range(1, number_lines, rowsize):
    df = pd.read_csv(in_csv,
                     names=colnames,
                     header=None,
                     nrows=rowsize,    # number of rows to read at each loop
                     skiprows=i)       # skip rows that have been read
    out_csv = 'smoke.sampling' + str(i) + '.csv'  # change this to the label name
    df.to_csv(out_csv,
              index=False,
              header=True,
              mode='a',           # append data to csv file
              chunksize=rowsize)  # size of data to append for each loop
Repeat the data splitting for each label. After splitting each label into its own files, it will look like this:
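Rather than editing in_csv and the output name by hand for every class, the same split can be wrapped in a loop. A slightly simplified sketch of that variant (it keeps every row, including the first, and the background/fire/smoke filenames are assumptions to adjust to your own):

```python
import pandas as pd
from pathlib import Path

colnames = ['timestamp', 'temp', 'humi', 'tvoc']

def split_label(in_csv, label, out_dir='.'):
    """Write each row of in_csv to its own <label>.samplingN.csv file."""
    df = pd.read_csv(in_csv, names=colnames, header=None)
    for i, row in df.iterrows():
        out = Path(out_dir) / f'{label}.sampling{i}.csv'
        row.to_frame().T.to_csv(out, index=False)  # one header + one data row

for label in ('background', 'fire', 'smoke'):
    src = f'{label}.sample0.csv'  # assumed filenames; adjust to yours
    if Path(src).exists():
        split_label(src, label)
```

One run then produces the per-sample files for all three classes, ready for upload.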
Upload all the data to Edge Impulse and split it automatically between training and test data.
Then follow along with the rest of the guide from Shawn's video above.
After downloading the model, add it to the Arduino IDE. Grab the data model here.
Test the model using this code. The result is as follows.
Conclusion
This project successfully demonstrates the power and accessibility of TinyML for creating intelligent, low-power wildlife monitoring systems. By integrating a custom-trained machine learning model on the Grove Vision AI module with the robust connectivity of the WIZnet Ethernet module, the system effectively serves as a dedicated, automated bird watcher. It successfully achieves its core goal of detecting the presence of birds within a specific environment (the geodesic dome) and relaying this information over the network. The project stands as an excellent proof-of-concept, showcasing how edge AI can bring real-time data processing to remote or self-contained locations, paving the way for more complex ecological monitoring solutions.
Further Improvements
While the project is a resounding success, it lays the groundwork for several exciting enhancements that could increase its scientific value and operational autonomy.
Species-Specific Identification:
Improvement: The current model is trained for general bird detection. A significant next step would be to collect a larger, more diverse dataset of local bird species and train the model to not only detect the presence of a bird but also identify its species with a reasonable confidence score.
Full Autonomy with Solar Power:
Improvement: To make the device truly autonomous for long-term field deployment, integrate a solar panel and a rechargeable battery system (like a LiPo or 18650 cell with a proper charge controller). This would eliminate the need for a constant power supply, making it ideal for remote locations.
Environmental Data Correlation:
Improvement: Add environmental sensors to the project, such as a temperature and humidity sensor (SHT40) and an air quality sensor. This would allow you to log environmental conditions alongside bird sightings, providing valuable data to correlate bird activity with changes in their habitat.
Advanced Data Logging and Cloud Integration:
Improvement: Instead of just triggering a notification, the device could be programmed to log every detection event with a timestamp and the classification result. This data could be sent to a cloud IoT platform (like Arduino IoT Cloud, Thingspeak, or AWS IoT) for long-term storage, visualization, and trend analysis.
Behavioral Analysis:
Improvement: With a more advanced model and continuous observation, the system could be trained to recognize specific behaviors, such as feeding, nesting, or signs of distress. This would elevate the project from a simple counter to a behavioral monitoring tool.
Weatherproof Enclosure:
Improvement: For deployment outside of the protected dome, design and 3D-print a durable, weatherproof enclosure that protects all electronic components from rain, dust, and temperature fluctuations, ensuring the project's longevity in the field.