This project is part of the Edge AI Earth Guardians series—a collaborative effort initiated for the Hackster Edge AI Earth Guardians Challenge in partnership with the Edge AI Foundation and NextPCB. Each project in this series tackles a unique aspect of environmental stewardship using Edge AI and modular hardware, serving as a building block for future expansion and innovation.
This project details how to implement, train, and deploy a custom squirrel detection model using the Vision AI Starter Kit. Learn how to collect data, develop species-specific detection with real-time computer vision, and automate non-harmful deterrents at the edge—all while supporting biodiversity and privacy. Build an open-source, rugged solution that makes your gardens and feeders safe for birds—no cloud required, just smart technology at work!
Project Challenge

This project tackles the pressing challenge of effective, species-specific wildlife monitoring and deterrence. Many "smart" deterrent systems rely on cloud connectivity, simplistic motion detection, or generic models. Here, we design and implement a custom-trained, edge-deployed AI model for squirrel detection, enabling real-time, privacy-focused, offline animal identification at bird feeders—protecting both feed and avian biodiversity in backyards and gardens.
"Project 3" acts as the Technical design and implementation within the Edge AI Earth Guardians series.
Its core mission is to deepen the network's technical capacity by:
- Demonstrating real-world application of transfer learning and edge-compatible architectures (YOLO, MobileNet) for wildlife detection.
- Documenting a full cycle: from dataset acquisition/labeling (e.g., via Roboflow), through model training on Colab, to deployment using the Seeed Grove Vision AI V2.
- Producing reusable, transparent technical workflows that empower others to adapt, train, and deploy edge models for their own environmental monitoring needs.
This project directly builds upon the field-proven hardware and deployment context published in Project 1 and Project 2:
- It supplies a model and AI pipeline that can be directly flashed and configured by users following the hardware manufacturing workflow (NextPCB, Project 2).
- Troubleshooting and lessons learned here (dataset format, model export, annotation issues) inform best practices upstream in documentation and future hardware/software iterations.
- The model's success or challenges feed continuous improvement—enabling a feedback loop to adapt detection approaches and scale to new species or environments in future "Guardians."
I invite you—whether a citizen scientist, educator, student group, or maker—to use, improve, and adapt this project for your own biodiversity and conservation needs!
- The model development pipeline is fully open: fork our notebooks, suggest new datasets, help troubleshoot, or remix the solution for other "nuisance" animals or habitats.
Model Creation Process

Here is a breakdown of the model creation process for squirrel detection, following the documented workflow:
1. Dataset Acquisition and Labeling

**Roboflow Setup:**
• Created a free, public Roboflow workspace.
• Used the Roboflow "Universe" search to find public datasets.
• Search keyword: "squirrel"; filtered for datasets with ready-to-use models and image metadata.
• Example selected: "bird-feeder-detection" project with 6 images.
**Download Datasets:**
• Downloaded several projects, copying their RAW dataset URLs for direct machine access (for Colab).
• Noted two methods:
■ Use public/labeled Roboflow datasets.
■ (Alternative) Use your own captured images—requires manual annotation via Roboflow's labeling UI.
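The RAW-URL download step above can be sketched in a few lines of Python suitable for a Colab cell. This is a minimal sketch, not the exact notebook code: it assumes the RAW link resolves to a zip export, and the `fetch_dataset` name and default `dest` path are illustrative.

```python
import io
import pathlib
import urllib.request
import zipfile

def fetch_dataset(raw_url: str, dest: str = "datasets/test_dataset") -> pathlib.Path:
    """Download a Roboflow RAW export (a zip archive) and unpack it.

    raw_url is the RAW link copied from Roboflow; dest mirrors the
    data_root used later by the training command.
    """
    out = pathlib.Path(dest)
    out.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(raw_url) as resp:  # fetch the zip bytes
        payload = resp.read()
    with zipfile.ZipFile(io.BytesIO(payload)) as zf:  # unpack into dest
        zf.extractall(out)
    return out
```

The same fetch can of course be done with `wget` plus `unzip` in a shell cell; the Python version just makes the destination path explicit.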
2. Model Training (Attempted; Issues Encountered)

**Platform:**
• Used the Google Colab platform, as recommended in Seeed's tutorial and linked on the SenseCraft wiki.
**Colab Notebook Selection:**
• Chose the Gesture_Detection_Swift-YOLO_192 Colab notebook (as listed in Seeed SenseCraft Model Assistant's Model Zoo).
• Opened directly in Colab with Google account authenticated.
**Dataset Integration:**
• Edited the "download the dataset" section in Colab:
■ Pasted the Roboflow RAW dataset link into the code cell for a direct fetch onto the Colab VM (e.g., via wget or the Roboflow API).
■ Confirmed dataset content: image files + YOLOv8-compatible annotation structure.
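Since the later training failures could stem from annotation problems, it can help to verify the YOLO label files before launching training. The following is a hypothetical sanity check (the `check_yolo_labels` helper is not part of the notebook), assuming the standard YOLO text format of one `class cx cy w h` line per object with normalized coordinates.

```python
import pathlib

def check_yolo_labels(labels_dir) -> list:
    """Scan YOLO-format .txt label files and report malformed lines.

    Each line should be 'class cx cy w h' with all four coordinates
    normalized to [0, 1]; returns a list of problem descriptions.
    """
    problems = []
    for txt in sorted(pathlib.Path(labels_dir).glob("*.txt")):
        for n, line in enumerate(txt.read_text().splitlines(), start=1):
            if not line.strip():
                continue  # ignore blank lines
            parts = line.split()
            if len(parts) != 5:
                problems.append(f"{txt.name}:{n}: expected 5 fields")
                continue
            cls, *coords = parts
            if not cls.isdigit():
                problems.append(f"{txt.name}:{n}: class id not an integer")
            if any(not 0.0 <= float(c) <= 1.0 for c in coords):
                problems.append(f"{txt.name}:{n}: coordinate outside [0, 1]")
    return problems
```

An empty return list means every label file parsed cleanly; anything else points at the exact file and line to fix in Roboflow before re-exporting.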
**Command Line Parameters:**
• Training launch command `sscma.train config.py --cfg-options data_root=./datasets/test_dataset epochs=10` failed for both "Fairfield_Wildlife_Detector_Image_Dataset" and "Squirrel_Computer_Vision_Project".
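To illustrate how the command's overrides are interpreted, here is a toy parser for the `key=value` pairs passed to `--cfg-options`. It mirrors, in simplified form and as an assumption, the coercion that MMEngine-style config tooling applies; it is not SSCMA's actual parser, but it shows why `data_root` and `epochs` must be spelled exactly as the config expects.

```python
def parse_cfg_options(pairs):
    """Parse 'key=value' overrides (the --cfg-options syntax) into a dict,
    coercing ints and floats; a simplified sketch of MMEngine-style
    override handling, not SSCMA's actual code."""
    opts = {}
    for pair in pairs:
        key, _, raw = pair.partition("=")
        for cast in (int, float):
            try:
                opts[key] = cast(raw)
                break
            except ValueError:
                continue
        else:
            opts[key] = raw  # leave non-numeric values as strings
    return opts
```

A mistyped key is silently accepted by such parsers and simply never read by the config, which is one reason an override can appear to "work" while training still fails.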
**Observed Errors:**
• Training failed at the training launch (`sscma.train config.py ...`).
• Did not succeed in producing a .tflite or .onnx export for downstream deployment.
**Model Architecture:**
• Swift-YOLO_192 referenced in Colab—a compact YOLO-based model tuned for embedded vision and efficiency.
• Training process is designed to be user-modifiable (epochs, learning rate, anchor boxes, augmentation), but blocked by failed runs.
• Configuration (config.py) intended to control parameters for dataset parsing, model saving, and export workflow.
3. Intended Deployment Procedure

**Post-Training Model Format:**
• If training had succeeded, it would have generated a final artifact in the appropriate format (e.g., .onnx) compatible with the SenseCraft AI Model Assistant.
**Seeed SenseCraft AI Model Assistant (Web Toolkit):**
• Upload finished model directly to Vision AI Module V2 via USB.
• SenseCraft provides a no-code web interface for importing, validating, and deploying models.
**Supported Inference Frameworks:**
• Module supports TensorFlow Lite and ONNX models (as documented per hardware).
• The hardware itself: Grove Vision AI Module V2 with an ARM Cortex-M55 processor and Ethos-U55 NPU for accelerated inference.
4. Documentation and Troubleshooting Steps

**References Used:**
• Seeed Wiki: grove_vision_ai_v2
• Deploying Models from Datasets to Grove Vision AI V2
**Next Steps:**
• Try alternative dataset formats/different annotations.
• Adjust model config for compatibility.
• Consider direct export from Roboflow in different formats if training outside Colab.
• Use alternative hardware if the workflow is blocked by board incompatibility (e.g., XIAO ESP32S3, Wio Terminal for UART).
**Future Enhancements:**
• Adjust model hyperparameters in config.py (anchors, augmentations).
• Rerun with additional custom images to boost class balance and model robustness.
• When export is successful, benchmark using the module's built-in inference reporting for FPS and sensitivity, and fine-tune as required.
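While the module's built-in inference reporting is the authoritative source for on-device FPS, a small host-side harness can cross-check throughput while iterating on exports. A minimal sketch, where `benchmark_fps` and the `infer` callable are hypothetical names standing in for whatever runs the deployed model on a frame:

```python
import time

def benchmark_fps(infer, frames, warmup=3):
    """Average frames-per-second of an inference callable.

    `infer` is a stand-in for whatever runs the model on one frame
    (hypothetical); a few warmup calls are excluded from the timing
    so one-time setup cost does not skew the average.
    """
    for frame in frames[:warmup]:
        infer(frame)
    start = time.perf_counter()
    for frame in frames:
        infer(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed if elapsed > 0 else float("inf")
```

Comparing this number across exports (e.g., before and after changing input resolution or augmentation) gives a quick signal on whether a tuning change is worth flashing to the board.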
Next Step

On to the next project in this series, which implements an AI-powered squirrel deterrent system for bird feeders, using edge AI vision modules to detect and gently repel squirrels, resulting in improved bird access and reduced seed waste. Its high detection accuracy, robust hardware design, and humane approach showcase edge AI's practical benefits for environmental conservation and scalable wildlife management.
Project 4 Implementation Report