In the dense forests and mountainous regions of Kerala, especially areas like Nelliampathy, wild elephants frequently traverse routes that have been part of their migratory patterns for centuries. These pathways, known locally as “Aanathaara” (elephant corridors), are deeply embedded in the landscape and ecology. However, with rapid human expansion — including roads, farmlands, and settlements — these natural trails now intersect with human activity, often with devastating consequences.
While locals may instinctively understand where and when to be cautious, most visitors, tourists, and even daily commuters remain unaware of the risks. In early 2025, a tragic incident occurred in Nelliampathy, Kerala: a German tourist, unfamiliar with the terrain and unaware of a wild elephant blocking the road ahead, ventured forward despite the locals' warnings and lost his life in the encounter. Traditional static signage, such as painted boards warning of elephants, often fades into the background and fails to provide real-time, actionable alerts. The result? Dangerous — and sometimes fatal — encounters that could have been avoided with better awareness.
That's where EleTect 1.5 comes in — combining TinyML, LoRa, solar power, and interactive signage to proactively warn and deter.
🛠️ What It Does
EleTect 1.5 is an advanced extension of the award-winning EleTect 1.0 system. It introduces an interactive digital signage system that provides real-time warnings to riders and drivers when elephants are present ahead on forest roads.
🐘 EleTect Node (Detection Unit)
- Detects elephants using a TinyML-powered system that identifies them by both vision and sound.
- Uses LoRa to send elephant presence status to the signage node.
- Triggers a deterrent mechanism (e.g., honeybee sound) only when vehicles are present.
🚦 Signage Node (Warning System)
- Placed 500m before known elephant crossings.
- Displays a bright, red, flashing elephant warning.
- Integrated camera detects the presence of vehicles.
- Sends vehicle presence data to EleTect Node.
- All powered entirely by solar energy.
- Elephant detected ➡️ EleTect Node triggers LoRa alert to the Signboard.
- Signboard flashes elephant warning if vehicles are approaching.
- Signboard checks for vehicles using its camera (see the sketch after this list):
  - If vehicles are detected → message sent to EleTect Node.
  - EleTect Node waits 10 minutes → plays deterrent bee sound.
- After elephants leave, detection stops → Signboard resets.
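As a quick reference, the whole exchange boils down to three plain-text LoRa messages and one timing constant. Below is a minimal sketch of these shared definitions (the constant names are illustrative; the full node sketches appear later in this write-up).

// Shared LoRa message strings exchanged between the two nodes
// (names are illustrative; the actual sketches appear later in this write-up).
const char MSG_ELEPHANT_DETECTED[] = "ELEPHANT_DETECTED";  // EleTect Node -> Signboard
const char MSG_VEHICLE_PRESENT[]   = "VEHICLE_PRESENT";    // Signboard -> EleTect Node
const char MSG_ELEPHANT_LEFT[]     = "ELEPHANT_LEFT";      // EleTect Node -> Signboard

// Delay before the bee-sound deterrent plays once vehicles are confirmed
const unsigned long DETERRENT_DELAY_MS = 10UL * 60UL * 1000UL;  // 10 minutes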
EleTect is a technology-driven system designed to detect elephants early, deter them harmlessly, and alert nearby communities. Its goal is to protect lives, foster coexistence, and contribute to wildlife conservation.
Despite their size and power, elephants have a surprising vulnerability: they are instinctively afraid of bee buzzing sounds. By carefully and harmlessly using this natural deterrent, EleTect can safely redirect elephants away from human settlements without harming them. This peaceful strategy respects both humans and elephants.
⚙️ How it Works
At the forest boundaries, multiple TinyML-powered nodes are deployed. Each node can:
- Detect elephants using a vision-based TinyML model on the Seeed Studio Grove Vision AI V2 module.
- Analyze sound using a Seeed Studio XIAO ESP32S3 Sense, running a TinyML audio model to detect elephant vocalizations.
- Trigger deterrents by playing honeybee buzzing sounds through an onboard speaker.
- Communicate via LoRa/LoRaWAN with a central master node to ensure real-time updates even in remote areas.
The system is completely solar-powered, making it sustainable and ideal for deployment in remote forest regions.
System Architecture
The signage will be placed as shown below.
Components:
- XIAO ESP32S3
- Grove LoRa-E5 Module
- Solar charging modules
- Custom battery pack
- Custom LED panel
- Enclosure using acrylic sheet
🛠️ Step 1: Build the Custom Signage Enclosure
In this step, we’ll create a weatherproof and visually impactful enclosure that houses the electronics for the elephant warning signage system. The enclosure is made from 5mm clear acrylic sheets, designed in Fusion 360, and laser cut for precision.
🧰 Materials Needed
- 5mm thick clear acrylic sheet
- Access to a laser cutter
- Acrylic glue (e.g., Weld-On 3 or Fevikwik)
- Vinyl cutter and precision knife
- Reflective vinyl sticker sheet (yellow and red)
- Matte black vinyl sheet
- Clamps or tape for alignment
- Fusion 360 or similar CAD software
- Cooling film
Create a design in Fusion 360 for the enclosure.
Sketch the front panel dimensions based on your component layout (camera, LEDs, LoRa antenna, etc.). Ensure the box has enough depth to house the electronics.
Export each face of the enclosure as a DXF file for laser cutting.
- Upload the DXF files to your laser cutter’s software.
- Set your laser cutter to the appropriate power/speed settings for 5mm acrylic.
- Carefully cut each panel and label them as you go to avoid confusion during assembly. Peel off any protective film after cutting.
👉 Safety first! Wear proper eye protection and operate the cutter in a ventilated area.
Lay out all the cut pieces on a clean surface.
Begin with the base and edges. Apply acrylic glue along the joining edges and press the pieces together. Use clamps or masking tape to hold parts in place until dry.
Continue assembling all sides until the box is complete.
Let the entire assembly cure for several hours to ensure strong bonding.
👉 Tip: Double-check the alignment before applying glue — acrylic bonds instantly!
Grind and smooth the irregular edges using a grinding tool.
✨ Step 1.4: Apply the Reflective Graphics
Design an elephant silhouette and the text “ELEPHANTS AHEAD” using vector software (e.g., Adobe Illustrator or Inkscape).
Cut the design using a vinyl cutter from reflective vinyl sheet.
Clean the front acrylic panel with a microfiber cloth. Carefully transfer the reflective vinyl design onto the panel using transfer tape.
Cover the remaining back and side edges with matte black vinyl to block internal components and focus attention on the warning.
👉 Result: A bold, reflective front that is highly visible when headlights or onboard LEDs shine on it.
💡 Step 2: Building the Custom LED Panel
In this step, we’ll design and assemble a high-visibility LED panel in the shape of an elephant, mounted inside our previously built acrylic enclosure. This panel serves as a visual alert, visible from a distance even in low-light conditions.
🧰 Materials Required
- 4x generic dotted PCBs (perforated board)
- 400x 5mm Red Clear LEDs
- 200x 68Ω resistors
- 22 AWG hookup wire
- Soldering iron + solder wire
- Black matte spray paint (optional, for aesthetics)
- 1x N-channel MOSFET (e.g., IRLZ44N)
- 1x 220Ω resistor (for MOSFET gate)
- 1x Custom 3S3P LiPo battery pack (11.1V nominal)
- Heat shrink, glue, basic tools
Take your four dotted PCBs and paint them with matte black paint — this step is optional but gives a professional look and improves contrast against the red LEDs.
Let them dry completely.
👉 Tip: Tape a sheet of paper to the back of the PCB before painting so that paint doesn't get onto the backside.
✂️ Step 2.2: Join the PCBs to Form a Large Panel
Measure and cut the boards to your desired dimensions.
Carefully align and glue the four PCBs together to create a larger panel.
Make sure all the solder pads align properly and the board is flat.
🐘 Step 2.3: Trace and Plan the LED Layout
Place the vinyl elephant signage or sticker over the panel as a reference. Using a white marker or chalk, roughly trace the outline of the elephant and the text “ELEPHANTS AHEAD”.
Plan the LED positions inside this trace to match the shape as closely as possible.
👉 Tip: Leave a bit of spacing between each LED to avoid overcrowding.
🔗 Step 2.4: LED Chain Design
We’ll use a simple and efficient wiring scheme:
2 LEDs in series + 1 resistor (68Ω) = 1 chain
Multiple such chains are wired in parallel across the panel
Why this configuration? With a 3S (11.1V) LiPo battery, each chain of 2 red LEDs (approx. 2V each) plus a 68Ω resistor draws a safe current and provides balanced brightness.
🧪 Step 2.5: Prototype and Test the LED Circuit
First, build one LED chain on a breadboard.
Power it using a bench power supply.
Confirm brightness and measure current draw. Once satisfied, continue soldering the full design onto the board.
🔩 Step 2.6: Solder All LED Chains
Start from the top of the board, following your traced outline.
Insert 2 LEDs in series and connect the 68Ω resistor to complete the chain. Continue placing and soldering LED chains across the board, following your elephant outline and text.
Use thin wires to connect common positive and negative rails at the back.
After soldering, check for shorts and test small sections individually.
⚡ Step 2.7: Power and Drive Circuit
Connect all the negative lines from each LED chain to the MOSFET drain. Connect the MOSFET source to ground. Use a 220Ω resistor on the MOSFET gate and connect it to your microcontroller’s digital output (for PWM or ON/OFF control). The positive rail of all LED chains goes directly to the power supply.
Check heat levels and ensure no resistors or LEDs are overheating.
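Before wiring up the full signboard logic, it's worth sanity-checking the drive circuit on its own. Below is a minimal test sketch that flashes and then PWM-dims the panel through the MOSFET gate; the gate GPIO number is an assumption, so use whichever pin you wired the 220Ω gate resistor to.

#include <Arduino.h>

// Minimal MOSFET drive test for the LED panel.
// Assumption: the 220Ω gate resistor is wired to GPIO 5 of the XIAO ESP32S3.
const int GATE_PIN = 5;

void setup() {
  pinMode(GATE_PIN, OUTPUT);
}

void loop() {
  // Slow flash to verify switching and watch for heating.
  digitalWrite(GATE_PIN, HIGH);
  delay(500);
  digitalWrite(GATE_PIN, LOW);
  delay(500);

  // Ramp brightness with PWM to check dimming behaviour
  // (analogWrite is available on the ESP32 Arduino core 2.x and later).
  for (int duty = 0; duty <= 255; duty += 5) {
    analogWrite(GATE_PIN, duty);
    delay(20);
  }
  analogWrite(GATE_PIN, 0);
  delay(500);
}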
✅ Final Testing
Securely mount the battery pack inside the enclosure and wire it to the LED panel via a toggle switch or microcontroller control. Turn on the system and ensure all LEDs light up in the correct pattern.
But unfortunately, it didn’t quite meet my expectations. During the first test, I wasn’t fully satisfied, as it didn’t look as refined or close to the vision I had in mind.
But then I had a small idea — if I could diffuse the LED light, it might look much cleaner and more aesthetically pleasing. To quickly test this, I placed a simple A4 sheet between the front acrylic panel and the LED panel.
After diffusing it with an A4 sheet, the display looked even better—cleaner and clearer. There were some dead LEDs, so I replaced them and it worked perfectly.
Initially, my idea was to keep the elephant silhouette completely black when the sign was inactive and then illuminate it in red once activated. To match this concept, I painted the PCB black so that the LEDs would remain invisible until turned on. However, when I tried adding a simple A4 sheet as a diffuser, it disrupted the clean aesthetic I was aiming for, as the sheet made the panel look less appealing.
To tackle this, I thought of using smoked acrylic. This material would give the panel a sleek black look when off, while still allowing the red LEDs to shine through when active. Pairing it with a thin frosted acrylic sheet underneath would help diffuse the light evenly, achieving both functionality and the desired aesthetics.
The challenge, however, was that smoked acrylic was not easily available at the time, and even if sourced, it would add to the cost and overall weight of the system. To find a more practical alternative, I came up with the idea of applying car window cooling film to the back of the front panel to replicate the smoked effect. For light diffusion, I could then use a lightweight and low-cost option like a thin plastic sheet or even butter paper. This way, I could preserve both the aesthetics and the functionality without compromising on budget or weight.
First, apply the cooling film to the back of the front acrylic panel.
Then place a thin plastic sheet or butter paper on top of that to diffuse the light.
To test this, I quickly put together a setup and tried it out.
Finally, after several trials, I was able to achieve exactly the result I had envisioned: a signage system that remains sleek and minimal when inactive, but transforms into a striking, attention-grabbing warning when activated.
Grove Vision AI Module V2 Overview
The Grove Vision AI Module V2 is a game-changer in the world of microcontroller-based AI vision modules. Powered by the ARM Cortex-M55, it outperforms regular ESP32 CAM-based boards while consuming significantly less power. After extensive testing, we found it to be exceptionally powerful and precise.
Comparison with Xiao ESP32-S3 Sense Board
In our tests, we compared the Grove Vision AI Module V2 with the Xiao ESP32-S3 Sense board. The difference is clear in the comparison video. The Grove Vision AI Module V2 delivers a higher frame rate while maintaining low power consumption, outperforming the Xiao ESP32-S3 Sense board.
The product arrives in standard Seeed Studio packaging. Inside the box, you'll find:
- The Vision AI Module V2
- A connecting wire
- A sticker with a brief introduction to the module
Specifications
The module features the WiseEye2 HX6538 processor, which includes:
- Dual Core ARM Cortex M55:
- High Performance Core clocked at 400MHz
- High Efficiency Core clocked at 150MHz
- ARM Ethos-U55 microNPU (Neural Processing Unit) clocked at 400MHz
- PUF (Physical Unclonable Function) hardware security
These features enable rapid AI and ML processing, making it ideal for computer vision projects requiring high frame rates and low power consumption.
Memory and Connectivity
- 60MB of onboard flash memory
- PDM microphone
- SD card slot
- External camera connectivity
- CSI port
- Grove connector
- Dedicated pinout for connecting Xiao series microcontroller boards from Seeed Studio
Software Compatibility
The module supports a wide range of AI models and frameworks:
- SenseCraft AI models, including Mobilenet V1/V2, EfficientNet-Lite, YOLO v5/v8
- TensorFlow and PyTorch frameworks
It is compatible with popular development platforms like Arduino, Raspberry Pi, and ESP dev boards, making it versatile for further development.
Applications
Our tests confirmed that the Grove Vision AI Module V2 is suitable for a variety of applications, including:
- Industrial Automation: Quality inspection, predictive maintenance, voice control
- Smart Cities: Device monitoring, energy management
- Transportation: Status monitoring, location tracking
- Smart Agriculture: Environmental monitoring
- Mobile IoT Devices: Wearable and handheld devices
After rigorous testing, we can confidently say that the Grove Vision AI Module V2 delivers unmatched AI processing capabilities, flexible model support, a wealth of peripheral possibilities, high compatibility, and an entirely open-source environment. Its low power consumption and high performance make it a great option for a variety of AI and computer vision applications.
Hardware Overview
Refer to the article 2024 MCU AI Vision Boards: Performance Comparison to see how powerful the Grove Vision AI (V2) is compared to the Seeed Studio Grove - Vision AI Module, Espressif ESP-EYE, XIAO ESP32S3, and Arduino Nicla Vision. Do check it out.
Connecting to a CSI interface camera
Once you have the Grove Vision AI V2 and camera ready to go, then you can connect them via the CSI connection cable. When connecting, please pay attention to the direction of the row of pins and don't plug them in backwards.
Boot / Reset / Flashed Driver
Boot
If you have used some unusual method that has caused the Grove Vision AI to not work properly at all (at the software level), then you may need to put the device into BootLoader mode to revive the device. Here is how to enter BootLoader mode.
Method 1
Please disconnect the connection cable between the Grove Vision AI and your computer, then press and hold the Boot button on the device without releasing it. While holding the button, connect the Grove Vision AI to your computer with a Type-C data cable, and then release the button. At this point the device will enter BootLoader mode.
Method 2
With the Grove Vision AI connected to your computer, you can enter BootLoader mode by pressing the Boot button and then quickly pressing the Reset button.
Reset
If you're experiencing problems with device data suddenly not uploading or images getting stuck, you can try restarting your device using the Reset button.
Driver
If you find that the Grove Vision AI V2 is not recognised after connecting it to your computer, you may need to install the CH343 driver. Here are some links to download and install the CH343 driver.
Windows Vendor VCP Driver One-Click Installer: CH343SER.EXE
Windows Vendor VCP Driver: CH343SER.ZIP
Windows CDC driver one-click installer: CH343CDC.EXE
Windows CDC driver: CH343CDC.ZIP
macOS Vendor VCP Driver: CH34xSER_MAC.ZIP
Below is a block Diagram of the Grove Vision AI (V2) system, including a camera and a master controller.
SenseCraft AI empowers users to effortlessly deploy a vast library of publicly available AI models onto their edge devices, such as the reComputer (Jetson), XIAO ESP32S3, and more. It provides a seamless, user-friendly experience, allowing you to deploy public AI models directly onto your edge devices with just a few clicks. Say goodbye to complex configurations and coding – with SenseCraft AI, you can effortlessly unlock the power of AI on your devices. SenseCraft AI also allows you to upload and share your own trained AI models with the community; by publishing your models, you contribute to a growing library of shared knowledge, fostering collaboration and innovation among AI enthusiasts. Now we will quickly get started with SenseCraft AI; this requires only the module itself.
Step 1. Connect the Grove Vision AI V2 to the SenseCraft AI Model Assistant
First, we need to open the main SenseCraft AI Model Assistant page.
Create an account and login
Please use a Type-C type cable to connect Grove Vision AI V2 to your computer.
Here we are using a public model for testing the Grove Vision AI V2.
We selected the "Gesture Detection" model to deploy.
Click on "Deploy Model".
Then click on "Connect".
Click on "Confirm" and select the connected serial port.
Now the model will begin uploading to the Grove Vision AI V2.
Now you can see that we have successfully uploaded the model. We can test it by showing different gestures.
We can see how good the new Grove Vision AI V2 is compared to other MCUs and the previous version; we really got a massive upgrade in every respect. Really loved it.
We can see that in the Preview Settings on the right hand side, there are two setting options that can be changed to optimise the recognition accuracy of the model.
- Confidence: Confidence refers to the level of certainty or probability assigned by a model to its predictions.
- IoU: IoU (Intersection over Union) is used to assess the accuracy of predicted bounding boxes compared to ground-truth bounding boxes.
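To make these two settings concrete, here is a small illustrative C++ helper (not part of the SenseCraft firmware, just a sketch of the underlying math) that computes IoU for two axis-aligned boxes and applies a confidence threshold:

#include <algorithm>

// Illustrative only: how a confidence threshold and IoU are typically computed.
struct Box {
  float x1, y1, x2, y2;   // top-left and bottom-right corners
  float confidence;       // model's confidence for this detection
};

float iou(const Box& a, const Box& b) {
  // Intersection rectangle
  float ix1 = std::max(a.x1, b.x1);
  float iy1 = std::max(a.y1, b.y1);
  float ix2 = std::min(a.x2, b.x2);
  float iy2 = std::min(a.y2, b.y2);
  float inter = std::max(0.0f, ix2 - ix1) * std::max(0.0f, iy2 - iy1);

  float areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
  float areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
  float uni = areaA + areaB - inter;
  return uni > 0.0f ? inter / uni : 0.0f;
}

bool keepDetection(const Box& det, float confThreshold) {
  // A detection is only reported if its confidence exceeds the slider value.
  return det.confidence >= confThreshold;
}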
Installing the Arduino IDE
1. Visit the official Arduino website: https://www.arduino.cc/en/software
2. Click on the "Windows" or "Mac" button based on your operating system.
3. Download the Arduino IDE 1.8.19 installer.
4. Once the download is complete, run the installer.
5. Follow the installation wizard, accepting the license agreement and choosing the installation directory.
6. If prompted, allow the installer to install device drivers.
7. Once the installation is finished, click "Close" to exit the installer.
8. Open the Arduino IDE from the desktop shortcut or the start menu.
9. You're now ready to start using Arduino IDE 1.8.19!
Downloading the necessary libraries
1. Open your web browser and navigate to the GitHub repository:
https://github.com/Seeed-Studio/Seeed_Arduino_SSCMA
2. Click on the green "Code" button and select "Download ZIP" to download the library as a ZIP file.
3. Save the ZIP file to a location on your computer where you can easily find it.
4. Open the Arduino IDE.
5. Go to Sketch > Include Library > Add .ZIP Library.
6. In the file browser window that appears, navigate to the location where you saved the downloaded ZIP file.
7. Select the ZIP file and click "Open" to add the library to your Arduino IDE.
8. The Seeed_Arduino_SSCMA library should now be installed and ready to use.
9. To verify the installation, go to Sketch > Include Library and check if "Seeed_Arduino_SSCMA" appears in the list of installed libraries.
You also need to download one more library:
Go to the Sketch menu, then select Include Library > Manage Libraries.... This will open the Library Manager. In the search bar at the top of the Library Manager, type ArduinoJson. The search results will list the ArduinoJson library, with an Install button next to it. Click the Install button, and the Arduino IDE will automatically download and install the library into your Arduino development environment.
Installing the board to Arduino IDE
1. Open the Arduino IDE.
2. Go to File > Preferences.
3. In the "Additional Boards Manager URLs" field, enter the following URL:
https://raw.githubusercontent.com/espressif/arduino-esp32/gh-pages/package_esp32_index.json
4. Click "OK" to close the Preferences window.
5. Navigate to Tools > Board > Boards Manager.
6. In the Boards Manager window, search for "ESP32".
7. Locate the "ESP32 by Espressif Systems" entry and click on it.
8. Select the latest version from the drop down menu and click "Install".
9. Wait for the installation process to complete. This may take a few minutes.
10. Once the installation is finished, close the Boards Manager window.
Custom Model Using Google Colab and Roboflow
In this part, we'll kick off by labeling our dataset with the intuitive tools provided by Roboflow. From there, we'll advance to training our model within Google Colab's collaborative environment. Next up, we'll explore deploying our trained model using the SenseCraft Model Assistant, a process designed to smoothly bridge the gap between training and real-world applications. By the conclusion of this part, you'll have your very own custom model ready to detect vehicles, running on the Grove Vision AI V2.
From dataset to model deployment, our journey consists of the following key stages:
1. Dataset Labeling — This section details the process of acquiring datasets suitable for training models. There are two primary methods: utilizing labeled datasets from the Roboflow community or curating your own dataset with scenario-specific images, necessitating manual labeling.
2. Model Training with Google Colab — Here, we focus on training a model capable of deployment on Grove Vision AI V2, leveraging the dataset obtained in the previous step via the Google Colab platform.
3. Model Upload via SenseCraft Model Assistant — This segment explains how to employ the exported model file to upload our elephant detection model to Grove Vision AI V2 using the SenseCraft Model Assistant.
Step 1.Create a free Roboflow account
Roboflow provides everything you need to label, train, and deploy computer vision solutions. To get started, create a free Roboflow account.
Step 2. Creating a New Project and Uploading images
Once you've logged into Roboflow, click on Create Project.
Name your project (e.g., "EleTect 1.5"). Define your project type as Object Detection. Set the Output Labels as Categorical.
Now it's time to upload vehicle images.
Collect images of elephants. Ensure you have a variety of backgrounds and lighting conditions. On your project page, click "Add Images".
You can drag and drop your images or select them from your computer. Upload at least 100 images for a robust dataset.
Click on Save and Continue.
Step 3: Annotating Images
After uploading, you'll need to annotate the images by labeling each vehicle.
Roboflow offers three different ways of labelling images: Auto Label, Roboflow Labeling and Manual Labeling.
- Auto Label: Use a large generalized model to automatically label images.
- Roboflow Labeling: Work with a professional team of human labelers. No minimum volumes. No upfront commitments. Bounding Box annotations start at $0.04 and Polygon annotations start at $0.08.
- Manual Labeling: You and your team label your own images.
The following describes the most commonly used method of manual labelling.
Click on "Manual Labeling" button. Roboflow will load the annotation interface.
Select the "Start Annotating" button. Draw bounding boxes around the vehicle in each image.
Label each bounding box as vehicle.
Use the ">" button to move through your dataset, repeating the annotation process for each image.
Step 4: Review and Edit Annotations
It's essential to ensure annotations are accurate.
Review each image to make sure the bounding boxes are correctly drawn and labeled. If you find any mistakes, select the annotation to adjust the bounding box or change the label.
Step 5: Generating and Exporting the Dataset
Once all images are annotated, go to Annotate and click the Add x images to Dataset button in the top right corner.
Then click the Add Images button at the bottom of the new pop-up window.
Click Generate in the left toolbar and click Continue in the third Preprocessing step.
In the Augmentation step (step 4), select Mosaic, which improves generalisation.
In the final Create step, choose the number of images sensibly according to Roboflow's image boost: in general, the more images you have, the longer the model takes to train. More pictures will not necessarily make the model more accurate; it mainly depends on whether the dataset is good enough.
Click on Create to create a version of your dataset. Roboflow will process the images and annotations, creating a versioned dataset. After the dataset is generated, click Export Dataset. Choose the COCO format that matches the requirements of the model you'll be training.
Click on Continue and you'll then get the Raw URL for this model. Keep it, we'll use the link in the model training step a bit later.
Congratulations! You have successfully used Roboflow to upload, annotate, and export a dataset for elephant detection model. With your dataset ready, you can proceed to train a machine learning model using platforms like Google Colab.
Training the Model with the Exported Dataset
Step 1. Access the Colab Notebook
You can find different kinds of model Google Colab code files on the SenseCraft Model Assistant's Wiki. If you don't know which code you should choose, you can choose any one of them, depending on the class of your model (object detection or image classification).
If you are not already signed into your Google account, please sign in to access the full functionalities of Google Colab.
Click on "Connect" to allocate resources for your Colab session.
Select the panel showing RAM and Disk.
Select "Change runtime type".
Select "T4 GPU".
Now run the "Setup SSCMA" cell.
You will get a warning like this; click on "Run anyway".
Wait until the repository is fully cloned and all the dependencies are installed.
Now it's finished.
Next, run the cell that downloads the pre-trained model weights file.
Step 2. Add your Roboflow Dataset
Before officially running the code block step-by-step, we need to modify the code's content so that the code can use the dataset we prepared. We have to provide a URL to download the dataset directly into the Colab filesystem.
To customize this code for your own model link from Roboflow:
1) Replace Gesture_Detection_Swift-YOLO_192 with the desired directory name where you want to store your dataset.
2) Replace the Roboflow dataset URL (https://universe.roboflow.com/ds/xaMM3ZTeWy?key=5bznPZyI0t) with the link to your exported dataset (it's the Raw URL we got in the last step in Labelled Datasets). Make sure to include the key parameter if required for access.
3) Adjust the output filename in the wget command if necessary (-O your_directory/your_filename.zip).
4) Make sure the output directory in the unzip command matches the directory you created, and that the filename matches the one you set in the wget command.
Step 3. Adjustment of model parameters
The next step is to adjust the input parameters of the model. Please jump to the Train a model with SSCMA section and you will see the following code snippet.
This command is used to start the training process of a machine learning model, specifically a YOLO (You Only Look Once) model, using the SSCMA (Seeed Studio SenseCraft Model Assistant) framework.
To customize this command for your own training, you would:
1) Replace configs/swift_yolo/swift_yolo_tiny_1xb16_300e_coco.py with the path to your own configuration file if you have a custom one.
2) Change work_dir to the directory where you want your training outputs to be saved.
3) Update num_classes to match the number of classes in your own dataset. It depends on the number of tags you have; for example, rock, paper, scissors would be three classes.
4) Adjust epochs to the desired number of training epochs for your model. Recommended values are between 50 and 100.
5) Set height and width to match the dimensions of the input images for your model.
6) Change data_root to point to the root directory of your dataset.
7) If you have a different pre-trained model file, update the load_from path accordingly.
Step 5. Export the model
After training, you can export the model to a format suitable for deployment. SSCMA currently supports exporting to ONNX and TensorFlow Lite.
Step 6. Evaluate the model
When you get to the Evaluate the model section, you have the option of executing the Evaluate the TFLite INT8 model code block.
Step 7. Download the exported model file
After the Export the model section, you will get the model files in various formats, which will be stored in the Model Assistant folder by default. Our stored directory is EleTect 1.5.
select "ModelAssistatnt"
In the directory above, the .tflite model files are available for XIAO ESP32S3 and Grove Vision AI V2. For Grove Vision AI V2, we prefer to use the vela.tflite files, which are accelerated and have better operator support. And due to the limitation of the device memory size, we recommend you to choose INT8 model.
After locating the model files, it's essential to promptly download them to your local computer. Google Colab might clear your storage directory if there's prolonged inactivity. With these steps completed, we now have exported model files compatible with Grove Vision AI V2. Next, let's proceed to deploy the model onto the device.
Upload models to Grove Vision AI V2 via SenseCraft Model Assistant
Please connect the device after selecting Grove Vision AI V2 and then select Upload Custom AI Model at the bottom of the page.
You will then need to prepare the name of the model, the model file, and the labels. I want to highlight here how this element of the label ID is determined.
If you are using a custom dataset, you can view the different categories and their order on the Health Check page. Just follow that order when entering the labels here.
Then click Send Model in the bottom right corner. This may take about 3 to 5 minutes or so. If all goes well, then you can see the results of your model in the Model Name and Preview windows above.
Click on Deploy and connect your Grove Vision AI V2. Press Confirm and you are good to go.
Now that we are done training the vision-based model, we can connect it to the EleTect node.
Now that we have our physical enclosure and the custom LED warning panel ready, it's time to connect the system to the EleTect detection node using LoRa communication.
📡 Communication Architecture
1. EleTect Node (Forest-side)
Detects elephant presence using:
- Grove Vision AI V2 → vision-based elephant detection.
- XIAO ESP32S3 Sense → sound-based detection.
- On detection → sends ELEPHANT_DETECTED message via LoRa to the Signboard Node.
- If VEHICLE_PRESENT is received back → waits 10 minutes → activates bee sound deterrent via DFPlayer Mini + Speaker.
- Sends ELEPHANT_LEFT when elephants leave → resets the system.
2. Signboard Node (Roadside)
- Listens for elephant alerts from EleTect Node.
- On detection → flashes warning LED (with elephant symbol) continuously until elephant leaves.
- Uses Grove Vision AI V2 running a TinyML vehicle detection model to constantly check for vehicles.
- If vehicles are detected while elephants are present → sends VEHICLE_PRESENT message to EleTect Node.
Outcome
- 🚦 Elephant but no vehicles → Only flashing signboard (no sound, less disturbance).
- 🚦 Elephant + vehicles present → Signboard flashes + EleTect triggers bee sound after 10 minutes.
- ✅ Once elephant leaves → Signboard turns off, deterrent stops, system resets.
- XIAO ESP32S3 – For LoRa and LED control
- LoRa-E5 Grove Module – For receiving data from the EleTect node
- 3S3P Li-ion Pack – Custom power solution for high LED current draw
- MOSFET (e.g., IRF540N or similar) – To drive the LED panel
- 220Ω Resistor – Gate resistor for the MOSFET
- Jumper Wires
Here's a basic sketch to control the LED flashing when elephant presence data is received from the EleTect node.
🧾 Code for the Signboard Node (Receiver):
#include <Arduino.h>
// The LoRa-E5 is driven directly over UART below, so no dedicated LoRa library is needed.
#define LED_PIN 5 // LED/Signboard pin
#define LORA_RX 6
#define LORA_TX 7
HardwareSerial loraSerial(1);
// State flags
bool elephantPresent = false;
bool vehiclePresent = false;
unsigned long lastBlink = 0;
bool ledState = false;
void setup() {
pinMode(LED_PIN, OUTPUT);
digitalWrite(LED_PIN, LOW);
Serial.begin(115200); // Debug
loraSerial.begin(9600, SERIAL_8N1, LORA_RX, LORA_TX); // LoRa
Serial.println("Signboard Node Ready");
}
void loop() {
// 1. Listen for LoRa messages
if (loraSerial.available()) {
String msg = loraSerial.readStringUntil('\n');
msg.trim();
Serial.println("LoRa IN: " + msg);
if (msg == "ELEPHANT_DETECTED") {
elephantPresent = true;
} else if (msg == "ELEPHANT_LEFT") {
elephantPresent = false;
vehiclePresent = false;
digitalWrite(LED_PIN, LOW);
}
}
// 2. Read Vision AI V2 serial output (vehicle detection)
if (Serial.available()) {
String visionData = Serial.readStringUntil('\n');
visionData.trim();
if (visionData == "vehicle") {
vehiclePresent = true;
if (elephantPresent) {
loraSerial.println("VEHICLE_PRESENT");
Serial.println("Vehicle present → Sent alert to EleTect Node");
}
} else {
vehiclePresent = false;
}
}
// 3. Flash LED if elephant detected
if (elephantPresent) {
if (millis() - lastBlink > 500) { // Blink every 500ms
ledState = !ledState;
digitalWrite(LED_PIN, ledState ? HIGH : LOW);
lastBlink = millis();
}
}
}
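The matching transmitter side is not shown above, so for reference here is a minimal hedged sketch of what the EleTect node (sender) could look like. It assumes the elephant detection pipeline reports a simple serial string, a DFPlayer Mini on a second UART plays the bee sound, and the same plain-text LoRa messages are used; the pin numbers and detection interface are illustrative assumptions, not the project's exact wiring.

#include <Arduino.h>

// --- Assumed wiring (illustrative) ---
#define LORA_RX 6          // UART1 to Grove LoRa-E5
#define LORA_TX 7
#define DFP_RX  3          // UART2 to DFPlayer Mini (bee sound stored as track 001)
#define DFP_TX  4

HardwareSerial loraSerial(1);
HardwareSerial dfpSerial(2);

bool elephantPresent = false;
bool vehiclePresent = false;
unsigned long vehicleSince = 0;
const unsigned long DETERRENT_DELAY_MS = 10UL * 60UL * 1000UL; // 10 minutes

// Minimal DFPlayer Mini "play track" frame (command 0x03), checksum included.
void dfPlayTrack(uint16_t track) {
  uint8_t cmd[10] = {0x7E, 0xFF, 0x06, 0x03, 0x00,
                     (uint8_t)(track >> 8), (uint8_t)(track & 0xFF), 0, 0, 0xEF};
  uint16_t sum = 0;
  for (int i = 1; i < 7; i++) sum += cmd[i];
  sum = 0xFFFF - sum + 1;
  cmd[7] = sum >> 8;
  cmd[8] = sum & 0xFF;
  dfpSerial.write(cmd, 10);
}

void setup() {
  Serial.begin(115200);                                 // detection pipeline / debug
  loraSerial.begin(9600, SERIAL_8N1, LORA_RX, LORA_TX);
  dfpSerial.begin(9600, SERIAL_8N1, DFP_RX, DFP_TX);
  Serial.println("EleTect Node Ready");
}

void loop() {
  // 1. Detection result (assumed to arrive as "elephant" / "clear" strings)
  if (Serial.available()) {
    String det = Serial.readStringUntil('\n');
    det.trim();
    if (det == "elephant" && !elephantPresent) {
      elephantPresent = true;
      loraSerial.println("ELEPHANT_DETECTED");
    } else if (det == "clear" && elephantPresent) {
      elephantPresent = false;
      vehiclePresent = false;
      loraSerial.println("ELEPHANT_LEFT");
    }
  }

  // 2. Vehicle presence reported back by the signboard node
  if (loraSerial.available()) {
    String msg = loraSerial.readStringUntil('\n');
    msg.trim();
    if (msg == "VEHICLE_PRESENT" && elephantPresent) {
      if (!vehiclePresent) vehicleSince = millis();
      vehiclePresent = true;
    }
  }

  // 3. After 10 minutes of elephant + vehicle presence, play the bee deterrent
  if (elephantPresent && vehiclePresent &&
      millis() - vehicleSince >= DETERRENT_DELAY_MS) {
    dfPlayTrack(1);              // bee buzzing sound
    vehicleSince = millis();     // avoid retriggering every loop iteration
  }
}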
🔋 Powering the System
The entire signboard is powered by a custom 3S3P battery pack made from Li-ion cells (11.1V nominal, up to 12.6V), stepped down to 5V with a buck converter for the electronics.
The MOSFET allows the microcontroller to switch the high current LED panel without overloading the XIAO.
🧩 Step 5: 3D Printing and Mounting the LED Panel to a Custom Stand
Stand Construction from Scrap Metal
To reduce costs and promote sustainability, the stand for the EleTect warning signage and solar panel was made entirely from scrap metal. Despite being built from repurposed material, it is sturdy, weather-resistant, and highly visible.
Materials Used
- 1-inch square metal pipe (scrap, reused)
- Welding machine (for joints)
- Cutting tool (angle grinder)
- Anti-rust paint (to protect against corrosion in forest conditions)
- Mounting bolts & brackets (for signage + solar panel)
- Height: ~8 feet (tall enough for clear visibility to drivers).
- Pole: Single vertical square pipe serves as the main post.
- Base: Welded flat support with angled bracing, bolted into the ground for stability.
Top Section:
- Solar panel mounted with a small angled bracket for optimal sunlight.
- Warning signage (“Elephants Ahead”) firmly attached just below the panel.
- Cutting the Scrap Pipe: The scrap pipe was cut into one 8 ft piece (main vertical) and smaller pieces for the base support.
- Building the Base: A flat base with short stabilizers was welded to the bottom so it could be anchored securely into the ground with bolts.
Mounting the Sign & Solar Panel:
- A horizontal bracket at the top holds the solar panel at an angle.
- The triangular signage board is bolted slightly below the solar panel.
Painting & Finishing:
The entire stand was coated with anti-rust paint, ensuring long durability outdoors.
Ultra Low Cost: Made from waste scrap material.
Eco-Friendly: Reuses metal that would otherwise be discarded.
Durable: Strong enough to withstand wind, rain, and wildlife contact.
Scalable: Simple design can be replicated in bulk for multiple forest corridors.
Designing the Mount
To ensure stability and long-term deployment in outdoor environments, we designed a custom 3D printed mount that securely holds the LED signboard and aligns it on the custom metal pipe-based stand.
🛠️ Materials Used:
- PLA filament (Black)
- 3D printer
- M4 bolts and nuts
- Screws (for mounting the enclosure)
- Steel pipe (for pole mounting)
CAD Modeling:
Using Fusion 360, design a mounting bracket that matches the dimensions of the acrylic signboard enclosure.
Printing:
Assembly:
Once printed, the enclosure was carefully slotted into the mount and secured with M4 bolts on either side to lock it in place. The mount was then clamped or screwed onto a metal/PVC pole, completing the physical installation.
🌍 Sustainability and Edge Deployment
- No internet required: Uses LoRa for remote communication
- Fully solar-powered, ideal for deployment in forest areas
- Deterrent only activates when necessary, conserving energy and minimizing disturbance to wildlife
One of the most impactful upgrades we are planning for EleTect is the integration of Google Maps into the system. Currently, EleTect is capable of detecting elephant movement and triggering local deterrents or alerts. However, by incorporating Google Maps, we aim to create a real-time, centralized monitoring system that will drastically improve response times and ensure safer coexistence between humans and elephants.
🔹 How It Will Work:
- Live Location Mapping – Each EleTect unit deployed in the field will send detection data (timestamp, location coordinates, and event details) to a central cloud server.
- Google Maps Visualization – The data will be displayed on Google Maps, showing the exact position of elephant sightings or conflicts in real-time.
- Risk Zone Alerts – Areas with frequent detections will be marked as “High Risk” zones, allowing authorities and locals to take precautionary measures.
- Community Access – Farmers, drivers, and forest officials will be able to access a live web/app dashboard powered by Google Maps, ensuring they are instantly informed of nearby elephant movement.
- Historical Data & Prediction – By overlaying historical data on Google Maps, the system can predict elephant routes and hotspots, helping in long-term conflict mitigation planning.
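As a first step toward this, a detection event could be serialized on the node itself before being forwarded to whatever gateway or backend is eventually used. Below is a minimal sketch using the ArduinoJson library installed earlier; the field names, coordinates, and transport are illustrative assumptions, not a finished API.

#include <Arduino.h>
#include <ArduinoJson.h>

// Illustrative only: package one detection event as JSON for a future
// Google Maps / dashboard backend. All field names and values are placeholders.
String buildDetectionEvent(const char* deviceId, double lat, double lon,
                           const char* eventType) {
  JsonDocument doc;                      // ArduinoJson v7-style document
  doc["device_id"] = deviceId;
  doc["event"]     = eventType;          // e.g. "ELEPHANT_DETECTED"
  doc["lat"]       = lat;
  doc["lon"]       = lon;
  doc["uptime_ms"] = millis();           // stand-in until a real timestamp source exists

  String out;
  serializeJson(doc, out);
  return out;
}

void setup() {
  Serial.begin(115200);
  // Example payload for a node at an assumed location near Nelliampathy.
  String payload = buildDetectionEvent("eletect-node-01", 10.53, 76.68,
                                       "ELEPHANT_DETECTED");
  Serial.println(payload);               // later: forward via a LoRa gateway, GSM, etc.
}

void loop() {}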
🔹 Benefits:
- Prevent Road Accidents – Drivers will receive live alerts on elephant crossings ahead, reducing the risk of collisions (like the unfortunate German tourist incident).
- Safer Agriculture – Farmers can check real-time elephant movement before stepping into their fields at night.
- Better Resource Deployment – Forest officials can allocate patrols and deterrents more effectively.
- Community Awareness – Local communities gain accessible, visual information, fostering safer human-elephant coexistence.
While designed for mitigating human-elephant conflict, the system can also be adapted for other regions and animals, such as kangaroos, deer, or bison, with only minimal modifications required. The overall goal remains the same: to reduce accidents, save human lives, and protect wildlife.
This integration will make EleTect not just a detection and deterrent system, but a smart, location-aware safety network that can save countless lives – both human and elephant.
🌱 Let’s Save Lives — Humans and Elephants Alike
With EleTect 1.5, we bridge the communication gap between the wild and the road. Let’s make our forests safer — not by blocking wildlife, but by understanding and respecting their paths.
📎 Resources