Pollinators (primarily bees) play a critical role in the reproduction and survival of flowering plants. They are vital to the health of the plant ecosystem and to sustaining the agriculture that supplies the world's food. To determine whether pollinators in my local area are struggling or thriving, I need to be able to compare pollinator counts and diversity over time in a seasonal and environmental context.
I propose building multiple solar/battery operated time lapse camera modules to monitor different plant types to determine pollinator preferences and population change over time.
I would dedicate specific cameras to individual plant types and use on-device (Edge AI) FOMO (Faster Objects, More Objects) object detection to get accurate counts of different pollinator types. Each module would correlate time lapse images with environmental data (temperature, humidity, pressure, ambient light) so that the data has the correct context.
In a local area context, it would allow selection of appropriate plant types to encourage population growth, and an accurate population count provides early warning to investigate possible environmental issues like pesticides, disease, or predators.
Images from my backyard habitat
I have a diverse set of native plants in my backyard that I have been observing for bee activity. In general, bees are primarily active from late spring to late summer, but it also depends on which flowers are blooming and the type of bees. Solitary bees like Mason bees and Leafcutter bees have very short seasons (6-8 weeks), and even though they are efficient pollinators I rarely see them in large enough numbers to measure accurately. On the other hand, honeybees and bumblebees are usually present in large numbers during the summers of "normal" years, and I use them as qualitative indicators of the health of the pollinator ecosystem. With climate change I've noticed that the timing of bee presence has more year-to-year variation, probably due mainly to the timing of the flowering plants, but I thought I'd try to come up with a way to get an automated quantitative measure to verify that.
There are definitely certain plants in my backyard that generate a lot of bee traffic. From late spring to mid-summer it is the Fireweed, shown below with a couple of bumblebees, a honeybee, and a mason bee.
And all summer Catmint and Sedum plants attract bumblebees and honeybees.
An AI camera (Xiao ESP32S3 Sense) running FOMO object detection periodically captures an image and stores it on the SD card. The AI provides a count of the different pollinator types in the image. The processor simultaneously reads environmental and ambient light data and stores that on the SD card. All the data is then published as a JSON string using MQTT. A Node-Red server receives the MQTT data and stores it in InfluxDB, where it can be visualized using Grafana. I also use a Node-Red dashboard for continuous monitoring.
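For context, a single published record might look something like this (the field names are illustrative, not the final schema):

```json
{
  "timestamp": "2024-09-10T14:32:00Z",
  "temperature_c": 21.4,
  "humidity_pct": 48.2,
  "pressure_hpa": 1013.6,
  "lux": 15230,
  "detections": {
    "honeybee": 2,
    "bumblebee": 1
  }
}
```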
The camera module is powered by a battery charged from a solar panel, with battery charging handled by the Xiao ESP32S3.
For proof of concept I will only be using a single camera module.
Components
AI Camera
The Seeed Xiao ESP32S3 Sense module has a rich feature set that matches the requirements for my project.
The base Xiao ESP32S3 board includes:
- ESP32S3 32-bit, dual-core, Xtensa processor chip operating up to 240 MHz
- 384KB ROM
- 512KB SRAM
- 8MB PSRAM and 8MB FLASH
- 2.4GHz Wi-Fi and BLE with external U.FL antenna
- LiPo Battery charging interface
The detachable camera/SD board adds:
- OV2640 camera sensor with 1600x1200 resolution (newer models use the OV3660)
- Digital Microphone
- SD Card slot, supporting up to 32GB (FAT)
Environment Sensor
The Bosch BME688 has the following feature set:
- SPI or I2C communication (I am using I2C)
- Temperature sensor: -40°C to +85°C, Accuracy: ±1.0°C
- Relative Humidity sensor: 0% to 100% RH, Accuracy: ±3% RH
- Pressure sensor: 300 hPa to 1100 hPa, Accuracy: ±60 Pa (0°C to 65°C)
- Gas sensor: detects volatile organic compounds (VOCs), volatile sulfur compounds (VSCs), and other gases such as carbon monoxide and hydrogen in the parts-per-billion (ppb) range
Ambient Light Sensor
The TI OPT3001 Ambient Light Sensor has the following features:
- I2C communication interface
- Precision optical filtering to match the human eye: rejects > 99% (typ) of IR
- Measurement range: 0.01 lux to 83 klux
I haven't tried a turnkey PCB assembly process before, and I also have not done any SMD PCB designs because I don't have the proper assembly capability, so this competition was a great opportunity to do both. I'd like to thank NextPCB for selecting me to receive a manufacturing voucher. If the proof of concept works out, being able to acquire additional PCBAs will allow me to scale the project.
The PCB design is fairly simple as it basically just implements the connections in the block diagram. I am going to use a 3D printed case for the initial POC prototype, but I already have existing ABS cases that I can modify if I decide to deploy more units. These cases are 100mm x 60mm x 25mm, so I sized my design to fit this space.
The design was done using KiCad 8. Since this will be a turnkey design, I chose my components so that they could be sourced by NextPCB (parts available at HQOnline). The only part that I am providing is the Xiao ESP32S3 Sense. I used a hybrid THT/SMD footprint for the Xiao ESP32S3. For the prototype I am going to install female headers on the PCB to mount the Xiao ESP32S3, which will let me move the pretested Xiao ESP32S3 over from the breadboard for initial PCBA verification. For any additional boards I will mount the Xiao ESP32S3 as an SMD component using its castellated pads.
Here is the completed design in the Kicad 3D Viewer:
The Fabrication and Assembly process went fairly smoothly. NextPCB did find a DFM issue with the placement of the microUSB connector that I needed to fix.
Edge Impulse FOMO Model
I need an object detection model that can run on a Xiao ESP32S3 and detect multiple objects simultaneously to get an object count in an image. Edge Impulse FOMO (Faster Objects, More Objects) is a good match. It can detect multiple different objects in an image and provide the centroid location of each object in the frame. It does not provide bounding boxes, so it lacks object size information, but for my purposes I really just need a count of each object type.
Data Acquisition
The first step in building a model is acquiring the data required to train it. My original plan was to use images of bumblebees and honeybees captured on Catmint and Sedum plants to build a two-class FOMO model. The Fireweed plants are past their bloom since it is late in the summer. Unfortunately, when I went to capture images I discovered that bumblebees are also getting scarce, and I had a difficult time getting enough images (fewer than 20) for a reasonable dataset. I decided that I would just build a single-class honeybee model.
I collected images using my iPhone. It turns out that the iPhone had defaulted to saving in the HEIC format, so I used IrfanView to crop the images to a square aspect ratio and store them in JPEG format. I created a Pollinator Camera project in Edge Impulse Studio and uploaded 138 images with an 85%/15% (114/24) Train/Test Split.
The unlabeled images are put into a labeling queue that allows you to sequentially label and apply bounding boxes to all the individual honeybees in each image.
Impulse development
I then created a FOMO Object Detection Impulse with a 96x96 image input. The default learning block is FOMO MobileNetV2 0.35.
Model Training
I was concerned about the quality of the dataset because of the small size of the honeybees relative to the overall image size and the complexity of the background. I took about 300 total images with bees on Catmint, Douglas Aster, and Sedum plants but thought that I would start with a single background flower, the Sedum, to see how well that performed. That represents the 138 images that I uploaded and labeled.
Because of the small dataset, I set the learning rate to 0.001 to try to minimize overfitting. I tried 100 and 200 training cycles, but at 200 the model was clearly overfitting. I chose 100 cycles, since beyond that the test accuracy decreased even though the validation accuracy continued to improve.
The model performance was not very good with this impulse. The i8 validation accuracy was 57.1% and the i8 test accuracy was 20%.
I then tried different impulse variations with this dataset to see how well I could do if I varied color depth (grayscale vs RGB), input image size (96x96, 160x160, 320x320) and quantization (float32, int8). Of course, increasing impulse complexity causes increased memory usage and longer inference times.
Model Deployment
I chose to deploy the RGB 160x160 model because the 320x320 models will not fit in the SRAM. I am deploying early to verify my assumptions; once I get to a working configuration I will go back and work on the model accuracy. Since I am programming the Xiao with the Arduino IDE, the model is deployed as an Arduino library that can be installed as a zip library and included in the program.
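As a rough sketch of how the deployed library gets called from the application code (the header name below is assumed from my Edge Impulse project name, and filling the feature buffer from the camera frame is omitted):

```cpp
// Hypothetical header name - Edge Impulse generates it from the project name
#include <Pollinator_Camera_inferencing.h>

// Feature buffer filled from the camera frame, in the packed format the
// deployed library expects (same format as the static_buffer example)
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback the SDK uses to pull feature data out of the buffer
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

// Run inference and count detections for one label ("honeybee" in my single-class model)
int countDetections(const char *label) {
  ei::signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &get_feature_data;

  ei_impulse_result_t result = { 0 };
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
    return -1;
  }

  // FOMO reports one entry per detected object (centroid based)
  int count = 0;
  for (size_t i = 0; i < result.bounding_boxes_count; i++) {
    if (result.bounding_boxes[i].value > 0.5f &&
        strcmp(result.bounding_boxes[i].label, label) == 0) {
      count++;
    }
  }
  return count;
}
```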
The standard sanity check is to run the static buffer example from the deployed library as shown below:
I initially ran into memory-related problems with the RGB_160x160 model, probably due to buffer overflows corrupting the heap. This resulted in constant rebooting, and I had to enter bootloader mode to recover.
I then decided to try the smallest model, Grayscale_96x96, and that seemed to run okay, but the intermediate models did not. I also discovered that if I disabled the external PSRAM I would get an error allocating the tensor arena:
This occurs using Arduino IDE 2.3.3 on my Windows desktop. I tried the same library examples compiled on my Windows laptop, which runs Arduino IDE 2.3.6, and they ran without any issues. I'm not sure if I have a software versioning issue or some corrupted files. Because time is short until the contest deadline, I chose not to troubleshoot it and just continue working on the laptop.
There were a few surprises for me when I ran through a set of the models that I had generated. First, the inference times were shorter than I expected based on the i8_Latency times reported on Edge Impulse. Here's a quick comparison table:
I'm not sure what numbers Edge Impulse is using for pre and post processing.
Second, the 320x320 model ran without requiring any program changes; I just had to enable the PSRAM in the Arduino IDE. The model data is certainly larger than the 512KB SRAM, so the PSRAM must be used automatically for inferencing. I'm not sure how much that adds to the inference time. This suggests that I might be able to use much larger input images, since multi-second processing times should be okay in a time lapse image capture scenario, and the increased resolution will definitely help the accuracy.
And finally, I had always assumed that the static buffer program, which uses the raw features from live classification of test data, would always produce the same results as the live classification. It turns out that it mostly matches: with the low resolution models it sometimes produces false positives, and with the higher resolution models it sometimes misses a detection. Here's an example from the RGB_320x320 model:
There are 3 honeybees in the image. I was interested in this image because two of the bees are close together and I wanted to see if they would be detected separately. The live classification result detects them correctly:
The top image is the one that I labeled with bounding boxes and the bottom image is the centroids of the detected bees:
But the static buffer program running on the ESP32S3 only detects the two with the higher scores:
Actually, I was also somewhat surprised that the program provided bounding box information in addition to the centroid location, as the documentation indicated that only the centroid location would be provided by the model.
The deployment appears to be working but the real challenge will be to get it working with the camera and application program.
Prototype Breadboard
I am designing a custom PCB for this project, but in order to start firmware development while waiting for the PCB fabrication and assembly, I decided to build a functionally equivalent circuit using a Xiao ESP32S3 Sense and sensor modules mounted and wired on a prototyping board. I am using plug-in headers for all the modules to allow removing or replacing them during program debug. I also 3D printed a baseplate to allow the board to be mounted on a tripod. The battery fits between the protoboard and the baseplate.
The Xiao is in the upper center of the board. The BME688 is upper left and the OPT3001 is lower left. A microUSB connector and Schottky diode for solar panel connection are at the bottom.
The Xiao ESP32S3 board uses an SGM4067-4.2 as its battery charging IC. It normally runs off the +5V USB VBUS voltage with the charge current set at 110mA. The part is suitable for input voltages that are somewhat unstable, like solar panels or wireless charging. The operating input voltage range is 2.7V to 7.5V, but the battery will only charge when the input voltage is about 400mV above the battery voltage. I am connecting the solar panel to VBUS through a Schottky diode to prevent reverse current flow when the panel is not operating and to allow programming the Xiao without disconnecting the panel. To ensure that the battery can be fully charged (4.2V plus the 400mV headroom), I'll need a minimum of 4.6V from the panel. I've noticed that even though the panel is described as 6V, 5W, it only produces around 4.9V open circuit, so I'll need to test to make sure I can fully charge the battery. It's really a bigger panel than I need, but I chose it because of the IP65 rating and the long 3m microUSB power cord.
Sensor testing
I wrote a quick test program in the Arduino IDE to configure and read the BME688 and OPT3001 sensors and create a JSON string that I could publish to my Node-Red server via MQTT. Here is the Serial Monitor printout of the JSON string:
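The test program is roughly along these lines (a trimmed-down sketch of the approach, not the exact program: the Wi-Fi credentials, broker address, and topic name are placeholders; the BME688 is read here with the Adafruit BME680 library, which also supports the BME688, and the OPT3001 with raw I2C register reads):

```cpp
#include <Wire.h>
#include <WiFi.h>
#include <PubSubClient.h>
#include <ArduinoJson.h>
#include <Adafruit_BME680.h>

// Placeholders - the real values are configured for my network
const char* WIFI_SSID  = "my-ssid";
const char* WIFI_PASS  = "my-password";
const char* MQTT_HOST  = "192.168.1.100";   // RPi4 running the MQTT broker
const char* MQTT_TOPIC = "pollinator/sensors";

#define OPT3001_ADDR 0x44

Adafruit_BME680 bme;                        // BME688 is register-compatible for T/H/P/gas
WiFiClient wifiClient;
PubSubClient mqtt(wifiClient);

// Read the OPT3001 result register and convert it to lux
float readOpt3001Lux() {
  Wire.beginTransmission(OPT3001_ADDR);
  Wire.write(0x00);                         // result register
  Wire.endTransmission(false);
  Wire.requestFrom(OPT3001_ADDR, 2);
  uint16_t raw = (Wire.read() << 8) | Wire.read();
  uint16_t exponent = raw >> 12;
  uint16_t mantissa = raw & 0x0FFF;
  return 0.01f * (1 << exponent) * mantissa; // lux = 0.01 * 2^E * R (datasheet formula)
}

void setup() {
  Serial.begin(115200);
  Wire.begin();

  bme.begin(0x77);                          // breadboard module is at 0x77, the PCB part at 0x76

  // OPT3001: automatic full-scale range, 800 ms conversions, continuous mode
  Wire.beginTransmission(OPT3001_ADDR);
  Wire.write(0x01);                         // configuration register
  Wire.write(0xCE);
  Wire.write(0x10);
  Wire.endTransmission();

  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) delay(500);
  mqtt.setServer(MQTT_HOST, 1883);
  mqtt.connect("pollinator-cam");
}

void loop() {
  mqtt.loop();
  if (bme.performReading()) {
    StaticJsonDocument<256> doc;
    doc["temperature_c"] = bme.temperature;
    doc["humidity_pct"]  = bme.humidity;
    doc["pressure_hpa"]  = bme.pressure / 100.0;
    doc["gas_ohms"]      = bme.gas_resistance;
    doc["lux"]           = readOpt3001Lux();

    char payload[256];
    serializeJson(doc, payload);
    Serial.println(payload);                // the Serial Monitor printout of the JSON string
    mqtt.publish(MQTT_TOPIC, payload);
  }
  delay(60000);                             // one reading per minute for the test
}
```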
MQTT
I run an MQTT broker on an RPi4 on my local network. I also have a Node-Red server, InfluxDB, and Grafana on that RPi4. I developed a Node-Red flow to receive the sensor data and plot it on a dashboard. This will be expanded to show the FOMO data once that is working. Here is a snapshot of the sensor flow showing the received JSON string and the parsed JSON object in the Debug panel:
Node-Red Dashboard
The dashboard with the sensor data plots:
Time Lapse Camera
I wanted to test the time lapse functionality with a simple program, and that exposed a number of issues.
The interval image capture to the SD card worked fine.
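The capture itself is just the standard esp_camera flow; here is a minimal sketch of the save routine (camera and SD initialization are omitted, following the CameraWebServer example and SD.begin(21) for the Sense board's SD slot):

```cpp
#include "esp_camera.h"
#include "FS.h"
#include "SD.h"

static int imageCount = 0;

// Grab one JPEG frame and write it to the SD card
void captureToSD() {
  camera_fb_t *fb = esp_camera_fb_get();
  if (!fb) {
    Serial.println("Camera capture failed");
    return;
  }

  // File names are just a sequence number here - a timestamp would be better
  char path[32];
  snprintf(path, sizeof(path), "/image_%04d.jpg", imageCount++);

  File file = SD.open(path, FILE_WRITE);
  if (file) {
    file.write(fb->buf, fb->len);           // raw JPEG bytes straight to the card
    file.close();
    Serial.printf("Saved %s (%u bytes)\n", path, fb->len);
  }

  esp_camera_fb_return(fb);                 // hand the frame buffer back to the driver
}
```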
I realized I should give the image files more descriptive names, at least including a timestamp. That exposed the first issue: I had forgotten to sync the Xiao to NTP, so all of my file times were incorrect. I'll need to add that to the program so that I can correlate the images with the sensor and inference data. The information sent via MQTT has a correct timestamp that is provided by the server.
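Adding the sync should just be a matter of using the ESP32 Arduino core's SNTP support after Wi-Fi connects; a minimal sketch:

```cpp
#include <time.h>

// Sync the ESP32 clock via NTP once Wi-Fi is up (UTC, no DST offset here)
void syncClock() {
  configTime(0, 0, "pool.ntp.org");

  struct tm timeinfo;
  if (getLocalTime(&timeinfo, 10000)) {     // wait up to 10 s for a valid time
    char buf[32];
    strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", &timeinfo);
    Serial.printf("Time synced: %s\n", buf);
  } else {
    Serial.println("NTP sync failed");
  }
}
```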
The second issue is that the OV2640 sensor on the camera is giving me blurry images. The stock OV2640 has its lens element glued in place with a default focus range from infinity down to 17-18 cm. I was trying to use the camera at a distance between 20-30 cm. I tried using the CameraWebServer program to check the focus, but I could not get a clear image until the subject was about 40 cm away, even using the full camera resolution (1600x1200). I need to figure this out or I won't be able to get reliable inferencing. Here are some zoomed-in images from the time lapse captures.
The third issue is an environmental one that may prevent me from getting accurate data. We are approaching fall so we are getting more significant wind gusts in our area. I noticed that when it is sunny the bees are undeterred and will happily ride the swaying flowers, but I'm not sure how much that movement will affect my detection capability. Unfortunately, until I correct my camera blurriness I won't be able to check this out.
Program integration
Presently, the big hill to climb on the software side is integrating all of my various test programs into a fully working application program.
PCB Qualification
I was in the last group to receive NextPCB vouchers, so I didn't get my PCB submitted until August 27. The PCB and PCBA fabrication only took a week, but shipping took a week and a half, so I had less than two weeks to work with the completed assembly. Luckily, this is a simple board, so debugging is straightforward. I had made an error when placing the 3D model on the BME688 footprint, so the vent hole on the assembled PCB was not where I expected. For a moment I was concerned that the part might have been placed incorrectly, but it was only a mistake in the 3D view in KiCad.
I added the socket headers for the Xiao and ran the Pollinator_Camera_Sensors.ino program as the PCBA functional verification. Unfortunately, the program did not work initially because it could not find the BME688. A quick run of the I2C Scan program found the BME688 at 0x76 and the OPT3001 at 0x44. It turns out that I had configured the default address on the PCB differently from the BME688 module that I had used on the breadboard (which was at 0x77). A quick change to the program and it was working, and I was getting data on my Node-Red dashboard. I did provide pad jumpers to allow changing the address, but that would require cutting a trace on the PCB.
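The scan itself is just the classic Wire-library probe loop, along these lines:

```cpp
#include <Wire.h>

// Probe every 7-bit address and report the ones that ACK
void scanI2C() {
  for (uint8_t addr = 1; addr < 127; addr++) {
    Wire.beginTransmission(addr);
    if (Wire.endTransmission() == 0) {
      Serial.printf("Device found at 0x%02X\n", addr);
    }
  }
}
```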
Here is the board running the initial functional test. The green LED just indicates the presence of the 3.3V output from the Xiao. There is a blue LED on the other side that indicates the presence of solar voltage. I did not hardwire a GPIO to monitor the battery voltage, but I did provide headers that can be used for that (I should have added a voltage divider).
3D Printed Case
I made a custom case for the project, which is shown below. There are top and bottom sections. The bottom section holds the entire PCBA and has mounting standoffs at the four corners that allow mounting the PCB with M2 screws.
The top basically just has the window for the camera and a slot to expose the BME688 and OPT3001 sensors.
More detail is shown with the case closed. The hole on the left is for the SMA jack for the external antenna. The two holes on the right are for a 5mm LED and a momentary pushbutton switch. The slot under the camera port provides access to the USB-C connector and SD card on the Xiao ESP32S3 Sense. The slot for the solar microUSB connector is on the opposite side.
I had forgotten to put the slot for the sensors in the cover since I have been testing without a case, so I hope to get that updated soon.
Finished Assembly
The completed assembly
Camera module with case updated to add cover opening for sensors
Solar panel attached
Due to the contest time constraints I was not able to complete the project as planned. I am still working on getting all the software components integrated into a single application program. I won't be able to complete that and test and document it by the contest deadline, so I am submitting this project as a work in progress. I am pleased with the way that the PCBA turned out and the functional performance of the integrated hardware. I believe that I have the basis for a working prototype but there may be issues with getting reliable bee counts. Only real life testing will tell.
Here's a snapshot of where I am and what I still need to do:
Updated Dashboard
The updated dashboard with battery voltage and inference results added:
Here's a short 3 minute video that shows the Inference Table updating. The table will expand or contract based on the number of detections in an image. The update rate was set to 10 seconds just for demonstration purposes.
Future Work
- Fix the camera blurriness problem. The new version of the Xiao ESP32S3 Sense ships with an improved OV3660 camera (2048x1536), so that will be worth a try even though it is still fixed focus. There is an autofocus OV5640 available, but it is larger and runs much hotter (it requires a heatsink), so that would be a major update to the setup
- Add NTP time synchronization
- Add programming to use the external momentary pushbutton for immediate snapshot capability
- Add programming to use the external LED for status
- Add more images to the training dataset to improve model performance
- Try models with larger input image sizes
- Complete integration and test of the full application program
- Store data in InfluxDB
- Data analysis using Grafana
- Allow configuration and control via MQTT
- Complete the project documentation







