This project demonstrates how to study elephant behavior with edge devices and address some of the leading causes of declining elephant populations and of farmers' losses from elephant crop raids. We developed this project after careful study of published research and of elephant behavior over the years on our own farm (we are farmers in Kerala, India, and have already suffered heavy losses from regular elephant raids on our farms). The project, Gajraj AI (Gajraj means elephant in India, and also symbolizes the Hindu deity Ganesha), uses Edge Impulse ML models to detect elephant vocalizations and to analyze daily behavior (both tasks were challenging, so we came up with a better solution). Due to practical limitations we were not able to test or prove our device's performance and efficacy, but we explain our conclusions throughout. We have also improved Gajraj AI so that it can not only alert people and park rangers but also prevent elephants from entering farms and local villages.
Do you have completely out-of-the-box ideas that have never been considered? Yes, team Gajraj AI has.

Understanding the elephants:
Elephants help maintain forest and savanna ecosystems for other species and are integrally tied to rich biodiversity. Elephants are important ecosystem engineers. They make pathways in dense forested habitat that allow passage for other animals. An elephant footprint can also enable a micro-ecosystem that, when filled with water, can provide a home for tadpoles and other organisms. The seeds of many plant species in central African and Asian forests are dependent on passing through an elephant's digestive tract before they can germinate. It is calculated that at least a third of tree species in central African forests rely on elephants in this way for distribution of seeds.
As keystone species, they help maintain biodiversity of the ecosystems they inhabit.
What are the main reasons behind the silent extinction of elephants?
- Human Impact - The main threats to elephant populations today are habitat loss/fragmentation and poaching. As human and elephant populations grow closer together in proximity, crop raiding by elephants, attacks on both sides, and similar incidents become more frequent.
- Poaching Repercussions- Survival is a great challenge for orphaned young elephants due to the sociological importance of maternal upbringing. Abnormal behaviors may develop in orphaned young bulls that have not benefited from proper maternal care. Documented behaviors include abnormal aggressiveness, including fatal attacks, and reproductive inexperience.
- Climate Change - Climate change may enhance conflict between humans and elephants as they must compete for increasingly limited land, water and other natural resources. High temperatures affect animals in different ways, but such changes are particularly severe for those that cannot dissipate heat easily, such as elephants.
Our device, Gajraj AI, has the potential to protect and track elephants in real time. With ongoing advances in Edge AI devices, collecting and analyzing data on the spot without the cloud has become easy. Gajraj AI has some features which, we believe, can change the future of elephant collar devices.
Our study and findings to make elephant activity tracking possible:
Elephants show complex emotions and behavioral patterns and are thus very difficult to track. For elephant activity and behavior monitoring we rely on IMU sensors, and on vibration-recording technology for detecting vocalizations. Microphone-based vocalization detection with ML was an option too, but the main caveat is that MEMS microphones cannot capture audio at infrasonic levels, and elephant communication is almost 90% infrasonic. Gajraj AI is therefore not just a smart collar device with its own network capabilities; it also comes with a small, ML-capable BLE node attached to the animal's trunk to detect infrasonic vibrations, behaviors, etc.
Why is the elephant's trunk a good spot for detecting vocalization?
Well, the elephant's trunk is its most complex, powerful, and active organ, and it can help us understand the emotions, sounds, and behaviors of elephants more accurately. To understand the anatomy of the trunk, consider the following points:
- The trunk acts like a large resonating chamber: air vibrates in the vocal tract/resonating chamber, and depending on how the elephant holds the different parts of the chamber (mouth, trunk, tongue, larynx, pharyngeal pouch), it can modify and amplify different parts of the sound
- Elephants can produce low-frequency sounds partly because they are so big and partly because they have an adaptation that lets them enlarge their resonating chamber and vocal cords to produce lower-frequency sounds
- A male elephant's trunk may add up to two meters to the resonating chamber
- The low-frequency calls also generate powerful vibrations that travel through the ground, which elephants can feel and interpret
A normal microphone has a detection range of 20 Hz - 20 kHz and thus cannot capture most of the critical infrasonic sound data. Luckily, there are good piezo-ceramic vibration sensors (we reveal the module in the next section) that work on the same principle as a microphone, are easy to analyze with DSPs, and can measure low vibrations (10 Hz - 15 kHz). Below are some elephant vocalizations:
- Rumble: A rumble is a long, low-frequency call that elephants often use for communication within and among herds. Rumbles lie in the frequency range of 10-170 Hz. With the human hearing range being 20-20,000 Hz, these calls have components that may not be audible to the human ear or to normal microphones.
- Roar: Roars are long, noisy and loud, high-frequency vocalisations. These can occur in a number of contexts such as aggression, distress or play.
- Trumpet: This is generally not a good sign and usually signals distress.
- Males make a distinctive low rumble when they are in musth (aggressive and ready to breed)
A call may either occur in an isolated manner, or in combination with other call types. In addition to these vocalisations, elephants may sometimes also produce non-vocal sounds such as blowing air through the trunk. See these links for more details on trunk anatomy and elephant calls:
1. http://www.cornell.edu/video/listening-to-elephants
2. https://www.elephantvoices.org/elephant-communication/acoustic-communication.html
An elephant's rumble is not easy to detect with ordinary microphones because of its low frequency, yet it is the rumble that alerts elephants to approaching danger, making it very useful for spotting poacher threats to elephants in the forest.
So how do you know if an elephant is rumbling if you can't hear it? When elephants are listening, they hold their ears out. When they're rumbling, their ears tend to flap.- https://elephantlisteningproject.org/all-about-infrasound/
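As the quote above suggests, a rumble is easier to "see" in sensor data than to hear. A minimal sketch of spotting one in vibration-sensor samples checks how much signal power falls in the infrasonic band; the 10-35 Hz band edges, the sampling rate, and the 0.9 threshold below are illustrative assumptions, not calibrated values.

```python
import numpy as np

def rumble_band_fraction(signal, fs, band=(10.0, 35.0)):
    """Fraction of total spectral power falling in the rumble band.

    signal: 1-D array of vibration-sensor samples
    fs:     sampling rate in Hz
    band:   (low, high) frequency band in Hz; 10-35 Hz is an
            assumed window for the infrasonic part of a rumble
    """
    sig = np.asarray(signal, dtype=float)
    sig -= sig.mean()                       # remove DC offset
    power = np.abs(np.fft.rfft(sig)) ** 2   # power spectrum
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    total = power.sum()
    if total == 0:
        return 0.0
    in_band = power[(freqs >= band[0]) & (freqs <= band[1])].sum()
    return in_band / total

# Example: a synthetic 20 Hz "rumble" sampled at 1 kHz for 2 s
fs = 1000
t = np.arange(0, 2, 1.0 / fs)
rumble = np.sin(2 * np.pi * 20 * t)
print(rumble_band_fraction(rumble, fs) > 0.9)  # → True
```

A window whose power concentrates in this band would be a candidate rumble event worth forwarding to the classifier.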
Why is the elephant's trunk a good spot for activity monitoring?
The elephant's trunk is a highly muscular organ without a single bone, comprising many different muscles, and can lift up to 770 pounds! The trunk is used all day long for various activities, from feeding and protection to making sounds. Moreover, trunk activity becomes especially prominent when detecting the musth period in elephants -
They have very specific behaviors that signal they are in musth. They dribble urine and have swollen temple glands which secrete a sticky fluid. They take their trunks and swing them across their face (smearing themselves with this smelly substance), charge their tusks and trunks on the ground to visibly discharge the fluid rapidly. - https://www.nationalgeographic.com/news/2015/04/150419-ngbooktalk-elephant-behavior-rituals-animals-africa/
Researchers have run successful tests monitoring sleep patterns in elephants by installing accelerometers on trunks. The findings? The elephants in the study slept an average total of only two (two!) hours a day, and on several occasions went without sleep for up to 46 hours.
Fitbit wearables have successfully been used for the sleep-monitoring tests above - https://www.govtech.com/question-of-the-day/How-are-researchers-tracking-wild-elephant-behavior.html
Philips ActiWatch and Wits University have also shown some promising results; see the provided link and video: wareable.com/wearable-tech/fitness-trackers-used-to-study-elephants-sleep-activity-4003
"Understanding how different animals sleep is important for two reasons, firstly, it helps us to understand the animals themselves and discover new information that may aid the development of better management and conservation strategies, and, second, knowing how different animals sleep and why they do so in their own particular way, helps us to understand how humans sleep."
Our Gajraj AI likewise uses IMU sensors on the trunk to detect sleep duration, aggressive behavior, and other daily activities.
Together, the IMU and vocalization ML models above help classify different activities and let us understand the complex behavior of elephants much better (the BLE node's alerts and data are sent to the dashboard too).
How is Gajraj AI going to save farms and human populations?

At night, some opportunistic elephants find their way onto farms and can flatten entire crops in a matter of hours. For the desperate farmer this is obviously heartbreaking and breeds intense animosity towards elephants. This makes the challenge of protecting wildlife even more difficult, as villagers are more likely to side with poachers than with any entity trying to protect something they view as a threat to their livelihoods. Human-elephant conflict is thus rising, forcing farmers to use electric fences that can be lethal to elephants. Trials of the beehive fences suggested by Dr. Lucy E. King are effective too, but let's see if IoT devices can help save farms.
We have lost almost 13% of our cardamom plantation and had 27% of our cassava destroyed by elephant raids at night. No matter how hard we tried, elephants are really smart creatures and saw through all our tricks in no time (flashing lights and recorded noises had no effect on them). Below are some of the techniques we devised to stop elephant raids on our farms.
We installed an audio player to play human voices and beep sounds to keep elephants away from the farms, but after a few weeks the elephants became accustomed to it and understood our trick, so we had to keep changing the audio files regularly. Below are the circuit diagrams for the noise generator we used to deter elephant raids at night.
Elephants are afraid of bees!
In recent years, researchers and advocates have persuaded farmers to use the elephant’s fear of bees as a potential fence line to protect crops. By stringing beehives every 20 meters – alternating with fake hives – a team of researchers in Africa has shown that they can keep 80 percent of elephants away from farmland.
Our device Gajraj AI uses the same technique: GPS coordinates are hardcoded, and if the elephant is found outside the forest range (geofence) and moving towards a human population, a bee-humming sound is played from the elephant's collar to steer it away from the farm.
Let's build the Gajraj AI device:

As already explained, our Gajraj AI system consists of two devices: a collar with network capabilities, and a BLE node on the elephant's trunk for activity monitoring.
We are using the Sparkfun Artemis Nano board to run the ML models and act as a BLE node that pushes data to the Particle Boron gateway device. The Artemis BLE node carries IMU and vibration sensors. The IMU sensor used is the GY-360 ADXL 362/346; the ADXL362 is a complete 3-axis MEMS acceleration measurement system with extremely low power consumption. It measures both dynamic acceleration, resulting from motion or shock, and static acceleration, such as tilt.
For vocalization via trunk vibration we are using a piezo vibration sensor. To avoid confusion: a vibration sensor is not the same as a microphone. Both are made of piezo-ceramic material and work on the same principle, but microphones measure/detect sound, i.e. pressure variations in gases. Frequencies below 20 Hz are rarely measured, though ranges up to 200 kHz are not uncommon; most microphones work between 20 Hz and 20 kHz. Typically you would express the sensitivity of a microphone in volts per pascal, or voltage per unit of pressure.
Vibration sensors are like dynamic accelerometers that measure/detect acceleration; their range can run from 0 Hz (like gravity) up to MHz, and alternating accelerations are often called vibration. Typically you would express the sensitivity of an accelerometer in V per m/s². Our choice, the KEMET VS-BV203-B sensor, can pick up many of the harmonic vibrations during elephant calls if placed on the trunk (the sensor is low-profile, metal-sealed, and tiny, so it is easy to position). A lower-cost alternative to the VS-BV203-B is the Seeed Grove Piezo Vibration Sensor (wide dynamic range: 0.001 Hz~1000 MHz, and good for body sensing).
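To make the sensitivity units concrete, here is a hedged sketch of converting a raw ADC reading from such a piezo vibration sensor into acceleration. Every parameter value below (ADC resolution, reference voltage, DC offset, sensitivity) is a placeholder assumption for illustration, not a VS-BV203-B datasheet value.

```python
def adc_to_acceleration(adc_counts, vref=3.3, adc_max=16383,
                        dc_offset_v=1.65, sensitivity_v_per_ms2=0.05):
    """Convert a raw ADC reading from a piezo vibration sensor
    into acceleration in m/s^2.

    Assumed (illustrative) values: a 14-bit ADC (0..16383) with a
    3.3 V reference, a mid-rail DC offset of 1.65 V, and a
    sensitivity of 0.05 V per m/s^2. Real values must come from
    the sensor's datasheet.
    """
    volts = adc_counts / adc_max * vref        # counts -> volts
    signal_v = volts - dc_offset_v             # remove the DC bias
    return signal_v / sensitivity_v_per_ms2    # volts -> m/s^2

# A mid-rail reading corresponds to zero acceleration
print(adc_to_acceleration(8191.5))  # → 0.0
```

The same counts-to-volts-to-physical-units chain applies whatever the actual sensor, only with datasheet numbers substituted in.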
We have also chosen the Artemis module as our MCU for trunk tracking and sensing because it has a good amount of memory and RAM.
How will the piezo vibration sensor capture raw vocalization data?
Our vibration sensor has a good built-in amplifier and DC offset, so it outputs a voltage to the MCU's analog pins that tracks the vibration level; this output can be sampled and fed to the ML models. Several of these vibration sensors can be arranged in a ring structure to analyze trunk vibrations while elephants make calls. Since elephants mostly use the trunk to produce low-frequency sounds, instrumenting the muscular organ with multiple sensors can help us model the different ways the trunk forces vibrations into the air.
(We used this part as a vibration sensor rather than capturing audio waveforms because capturing and sampling infrasonic audio cannot be done well with just FFT on low-power boards.)
Note: Since we could not collect ML datasets from the trunk for elephant vocalization and activities due to physical limitations, and our vibration sensor did not ship on time, we demonstrate the concept with synthetic data.
- Collecting the IMU data: For the IMU sensor we collected the x, y, z accelerations as comma-separated values; later we will convert this data to JSON format for Edge Impulse. Below is the code used for IMU data collection.
#include <SPI.h>
#include <ADXL362.h>

#define FREQUENCY_HZ 50
#define INTERVAL_MS (1000 / (FREQUENCY_HZ + 1))

ADXL362 xl;

int16_t temp;
int16_t XValue, YValue, ZValue, Temperature;

void setup() {
  Serial.begin(115200);
  xl.begin(7);       // Setup SPI protocol, issue device soft reset
  xl.beginMeasure(); // Switch ADXL362 to measure mode
  Serial.println("Start: Simple Read");
}

void loop() {
  static unsigned long last_interval_ms = 0;
  // Read all three axes in a burst so all measurements share one sample time
  if (millis() > last_interval_ms + INTERVAL_MS) {
    last_interval_ms = millis();
    xl.readXYZTData(XValue, YValue, ZValue, Temperature);
    Serial.print(XValue);
    Serial.print(",");
    Serial.print(YValue);
    Serial.print(",");
    Serial.print(ZValue);
    Serial.println();
  }
}
See the wiring diagram below for the ADXL362 IMU with the Sparkfun Things Plus.
The data collected for Edge Impulse training was classified under the categories shown in the picture below (all the actions mentioned are valid and have been observed in elephants, including in our own personal observations).
- Collecting the audio and vibration data: We are exploring whether a microphone alongside the vibration sensor can improve our detection accuracy. Since the vibration sensor only senses the faintest vibrations produced by the muscular walls of the trunk and cannot detect high-frequency vibrations, microphones would handle loud-call classification, minimizing the loss in accuracy.
We also found that audio datasets for non-human sounds are full of inaccuracies when microphone architectures differ widely (a fact that seems overlooked when using global sound datasets), so it is important to ensure that the data-capturing microphone and the inference-running microphone are of the same architecture.
We recorded elephant audible-call data with the Sparkfun Artemis microphone before training via Edge Impulse. The Sparkfun Artemis boards come with an example sketch called Record to Wav along with a Python script.
Select the board as Artemis ATP, then go to File -> Examples -> Sparkfun Redboard Artemis Example -> PDM -> Record_to_wav.
Here's the Python script that records the audio output to .wav format.
#!/usr/bin/python3
"""
Author: Justice Amoh
Date: 11/01/2019
Description: Python script to stream audio from Artemis Apollo3 PDM microphone
(updated for Python 3: print() calls and byte-string serial writes)
"""
import sys
import serial
import numpy as np
import matplotlib.pyplot as plt
from serial.tools import list_ports
from time import sleep
from scipy.io import wavfile
from datetime import datetime

# Controls
do_plot = True
do_save = True
wavname = 'recording_%s.wav' % (datetime.now().strftime("%m%d_%H%M"))
runtime = 50  # runtime in frames, sec/10

# Find Artemis Serial Port
ports = list_ports.comports()
try:
    sPort = [p[0] for p in ports if 'cu.wchusbserial' in p[0]][0]
except Exception:
    print('Cannot find serial port!')
    sys.exit(3)

# Serial Config
ser = serial.Serial(sPort, 115200)
ser.reset_input_buffer()
ser.reset_output_buffer()

# Audio Format & Datatype
dtype = np.int16                    # Data type to read data
typelen = np.dtype(dtype).itemsize  # Length of data type
maxval = 32768.                     # 2**15, for 16-bit signed

# Plot Parameters
delay = .00001              # Use 1 us pauses - as in MATLAB
fsamp = 16000               # Sampling rate
nframes = 10                # No. of frames to read at a time
buflen = fsamp // 10        # Buffer length
bufsize = buflen * typelen  # Resulting number of bytes to read
window = fsamp * 10         # Window of signal to plot at a time, in samples

# Variables
x = [0] * window
t = np.arange(window) / fsamp

# ---------------
# Plot & Figures
# ---------------
plt.ion()
plt.show()

# Configure Figure
with plt.style.context(('dark_background')):
    fig, axs = plt.subplots(1, 1, figsize=(7, 2.5))
    lw, = axs.plot(t, x, 'r')
    axs.set_xlim(0, window / fsamp)
    axs.grid(which='major', alpha=0.2)
    axs.set_ylim(-1, 1)
    axs.set_xlabel('Time (s)')
    axs.set_ylabel('Amplitude')
    axs.set_title('Streaming Audio')
    plt.tight_layout()
    plt.pause(0.001)

# Start Transmission
ser.write(b'START')  # Send Start command (bytes, for Python 3)
sleep(1)

for i in range(runtime):
    buf = ser.read(bufsize)                # Read audio data
    buf = np.frombuffer(buf, dtype=dtype)  # Convert to int16
    buf = buf / maxval                     # Convert to float
    x.extend(buf)                          # Append to waveform array
    # Update plot lines
    lw.set_ydata(x[-window:])
    plt.pause(0.001)
    sleep(delay)

# Stop Streaming
ser.write(b'STOP')
sleep(0.5)
ser.reset_input_buffer()
ser.reset_output_buffer()
ser.close()

# Remove initial zeros
x = x[window:]

# Helper Functions
def plotAll():
    t = np.arange(len(x)) / fsamp
    with plt.style.context(('dark_background')):
        fig, axs = plt.subplots(1, 1, figsize=(7, 2.5))
        lw, = axs.plot(t, x, 'r')
        axs.grid(which='major', alpha=0.2)
        axs.set_xlim(0, t[-1])
        plt.tight_layout()

# Plot All
if do_plot:
    plt.close(fig)
    plotAll()

# Save Recorded Audio
if do_save:
    wavfile.write(wavname, fsamp, np.array(x))
    print('Recording saved to file: %s' % wavname)
Once you have recorded all your audio files, you can use Audacity to check the spectrogram, remove noise, and create a separate background-noise directory.
Below is the installation for vibration sensor and microphone.
After the data acquisition (all data are synthetic or simulated), let's use Edge Impulse to build the classifier model.
Part 1: Accelerometer sensor data
- Violent behavior: The elephant shakes its head abruptly, often signaling a serious charge in its defence. Below is the uploaded data on Edge Impulse.
- Sleeping behavior: When an elephant is sleeping there is little or no change in the deflection of any axis, though it can be a standing sleep too.
- Sign of musth: Draping the trunk over the tusks and frequent head shaking is a sure sign of musth: https://africageographic.com/stories/elephant-body-language-101-a-guide-for-beginners/
- Random trunk movements: This dataset serves as a noise-rejection class. Since elephants use the trunk for almost all tasks, basic activities and trunk movements are ignored, and only the movements highlighted above are detected.
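The sleeping class above hinges on near-constant axis readings. As a rough illustration, a window of samples can be flagged as sleep-like stillness when the standard deviation of the acceleration magnitude stays tiny; the 0.02 g threshold below is an illustrative assumption, not a calibrated value.

```python
import numpy as np

def is_still(ax, ay, az, std_threshold=0.02):
    """Flag a window of accelerometer samples (in g) as 'still'
    (sleep-like) when the standard deviation of the acceleration
    magnitude stays below a small threshold. The 0.02 g threshold
    is an illustrative assumption, not a calibrated value.
    """
    mag = np.sqrt(np.asarray(ax, float) ** 2 +
                  np.asarray(ay, float) ** 2 +
                  np.asarray(az, float) ** 2)
    return float(mag.std()) < std_threshold

# Still: constant 1 g on z; shaking: alternating readings on x
print(is_still([0] * 50, [0] * 50, [1.0] * 50))          # → True
print(is_still([0, 0.5] * 25, [0] * 50, [1.0] * 50))     # → False
```

In practice the trained model replaces such hand-set thresholds, but the same magnitude-variance intuition explains why this class is learnable at all.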
Creating the impulse and generating features: We used a Time Series Data block, which operates on time-series sensor data such as vibration or audio and lets us slice the data into windows. For the processing block we selected Spectral Analysis with input axes X, Y, and Z; it is good for analyzing repetitive motion, such as data from accelerometers, and extracts the frequency and power characteristics of a signal over time. For the learning block we selected Neural Network (Keras), which learns patterns from data and can apply them to new data, great for categorizing movement or recognizing audio. In the DSP block, the frequency-domain plots are produced via FFT.
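For intuition, a hand-rolled analogue of what a spectral-analysis processing block extracts per axis might look like the sketch below: an RMS value plus log power in a few frequency bands. The band edges are illustrative, and this is not the actual Edge Impulse DSP code.

```python
import numpy as np

def spectral_features(axis, fs=50, bands=((0.5, 5), (5, 12), (12, 25))):
    """A rough, hand-rolled analogue of a spectral-analysis DSP
    block: RMS plus log power in a few frequency bands for one
    accelerometer axis. Band edges are illustrative assumptions;
    this is not the Edge Impulse implementation.
    """
    sig = np.asarray(axis, float)
    sig -= sig.mean()                           # remove DC component
    rms = float(np.sqrt(np.mean(sig ** 2)))     # overall energy
    power = np.abs(np.fft.rfft(sig)) ** 2       # power spectrum
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    feats = [rms]
    for lo, hi in bands:
        band_power = power[(freqs >= lo) & (freqs < hi)].sum()
        feats.append(float(np.log1p(band_power)))
    return feats

# 2 s of a 10 Hz shake sampled at 50 Hz: energy lands in the 5-12 Hz band
window = np.sin(2 * np.pi * 10 * np.arange(100) / 50)
feats = spectral_features(window)
print(len(feats))  # → 4 (RMS + three band powers)
```

Concatenating such feature vectors for the X, Y, and Z axes gives the kind of fixed-length input the neural network block then classifies.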
Training the model: The hyperparameters shown in the images below were used. After training, model accuracy was 93.06%, but that figure should not be taken at face value for now, because all the data were simulated and synthetic. After training, simply deploy the model as native C/C++ code to run on the MCU.
Part 2: Audio and Vibration dataset
This audio dataset is provided by https://www.elephantvoices.org/. It contains loud elephant calls, mainly musth-related calls from male and female elephants. We have also collected our own dataset at https://github.com/vilaksh01/Gajraj-AI but could not use it for lack of time; anyone willing to use the dataset may do so.
Note: We could not fully simulate the vibration sensing since we didn't have the actual sensors, but it would be similar to what we did for the IMU accelerometer model. The only difference is an additional noise dataset that helps the model ignore high-frequency and rapid vibrations and consider only infrasonic ones, so trunk movements would not cause incorrect readings.
A direct conclusion of our research is that the trunk is the part of the elephant's body that most precisely reveals its activities and actions.
Part 3: Building the smart network-capable collar

This is a very important feature of our project: the trunk-sensing BLE node sends all its data to the collar, which forwards it to the dashboard, and BLE broadcasting lets us check for failures or battery drain on the trunk node. The collar is powered by a Particle Boron 2G/3G cellular IoT board; we chose Particle boards for their code simplicity, and because LTE is becoming the new norm for IoT devices in the 5G era.
For geofencing we hardcoded the geolocations: if the collar is found outside the set geofence and near a human population, an alert is sent to park rangers and a bee audio clip is played to stop the elephant from raiding and let it return to the forest. To be continued... we will update our further work, as we could not get enough time to complete everything.
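A minimal sketch of such a geofence check follows; the fence centre, the 5 km radius, and the circular fence shape are all illustrative assumptions (a real deployment might use polygon fences). It uses the haversine great-circle distance.

```python
import math

def outside_geofence(lat, lon, fence_lat, fence_lon, radius_m):
    """True when the collar's GPS fix lies more than radius_m metres
    from the hardcoded fence centre. Circular fence for simplicity;
    distance via the haversine formula on a spherical Earth.
    """
    R = 6371000.0  # mean Earth radius, metres
    phi1, phi2 = math.radians(lat), math.radians(fence_lat)
    dphi = math.radians(fence_lat - lat)
    dlmb = math.radians(fence_lon - lon)
    a = (math.sin(dphi / 2) ** 2 +
         math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    dist = 2 * R * math.asin(math.sqrt(a))
    return dist > radius_m

# Hypothetical fence centre in Kerala with a 5 km radius
FENCE = (9.5916, 76.5222, 5000)
print(outside_geofence(9.5916, 76.5222, *FENCE))  # → False (at centre)
print(outside_geofence(9.70, 76.60, *FENCE))      # → True (~12 km away)
```

On the collar, a True result would trigger the ranger alert and the bee audio clip.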
Some critical information:
- 1.56 lakh hectares are affected by the wildlife menace, which annually causes a loss of Rs. 229 crores to farmers.
- A study conducted in HP by an NGO (Gyan Vigyan Samiti) found that wild animals cause losses of Rs. 400 crore to Rs. 500 crore every year.
- The extent of loss is up to 89% of crops in some cases.
- As per Kerala forest department figures, 38,994 farmers lost their crops in wild animal attacks between 2010-2018.
- 996 farmers died in Kerala due to wild animal attacks between 2010-2018.
- 60% of elephant deaths are due to electrocution by electric fencing.
- The highest crop damage (30%) was recorded in the forest ranges under the Northern Circle: pineapple (47%), sweet potato (47%), tapioca (42%), alocasia (39%), beans (25%), and plantains (23%) recorded the highest percentage of damage.