Wildlife, including plants and animals, plays an important part in slowing climate change and maintaining a healthy ecosystem. Protecting (and then restoring) natural wildlife has been shown to help mitigate climate change and global warming. However, wildlife faces multiple threats, from deforestation to poaching. Currently, conservation relies on camera traps that photograph animals to estimate their populations and understand their movement patterns, as well as on forest rangers who catch illegal poachers and loggers.
However, a camera trap can only capture an image if the illegal activity happens directly in front of it. To tackle this problem, we propose detecting poachers and deforestation by using the microphone on the QuickFeather development board together with machine learning via the SensiML AI framework, training a model that uses sound to classify whether something illegal is happening.
Project Setup
Hardware Setup
The first step is to set up the hardware so that it can be used to capture training data. A good starting place is here. In short, here are the steps that you need to follow:
1. Install Python and clone the TinyFPGA programmer application from here.
2. Install the tinyfpgab library by running `pip install tinyfpgab`.
3. Flash the hardware with the Data Collection Firmware. For this project, we are collecting audio data over a USB serial connection, so we will use the `quickfeather-audio-data-collection-usb-serial.bin` file.
4. The command to flash the board is `python tinyfpga-programmer-gui.py --port COMX --m4 <bin_file> --mode m4` (a sketch after this list shows one way to find the right port).
5. Once that is done, connect the USB-to-TTL serial adapter to the QuickFeather device.
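The `--port` value depends on your machine. If you are unsure which port the QuickFeather enumerated on, a minimal sketch like the one below can list the available serial ports. It assumes pyserial is installed (`pip install pyserial`), which is not part of the steps above.

```python
# List the available serial ports to find the COMX value for the flashing
# command. Assumes pyserial is installed (pip install pyserial); it is not
# part of the TinyFPGA programmer itself.
from serial.tools import list_ports

for port in list_ports.comports():
    # port.device is e.g. "COM5" on Windows or "/dev/ttyUSB0" on Linux
    print(port.device, "-", port.description)
```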
Data Collection
To work on this project, we need to collect data to train the model. Three types of data need to be collected (a clip-preparation sketch follows this list):
1. Normal Animal Background Noise: This is needed as a baseline class for normal behavior and sounds. This sound was taken from here.
2. Poaching Noise: This contains gunshots and other sounds associated with poaching. This sound was taken from here.
3. Deforestation Noise: To also catch people cutting down trees illegally, we need the sounds associated with it. This sound was taken from here.
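The downloaded clips can come in different formats and sample rates, so it helps to standardize them before playing them back for capture. Below is a minimal sketch of one way to do this, assuming the clips are stored in a `raw_audio` folder and that 16 kHz mono is a suitable target rate (check the rate your data collection firmware actually records at). `librosa` and `soundfile` are third-party packages, not part of SensiML.

```python
# Sketch for normalizing the downloaded clips before playback and labelling.
# Assumptions: the clips live in ./raw_audio, and 16 kHz mono is an
# appropriate target rate for the firmware being used.
from pathlib import Path

import librosa          # pip install librosa
import soundfile as sf  # pip install soundfile

TARGET_SR = 16000  # assumed target sample rate

for clip in Path("raw_audio").glob("*"):
    if clip.suffix.lower() not in (".wav", ".mp3", ".flac"):
        continue
    # librosa.load resamples to TARGET_SR and mixes down to mono
    audio, sr = librosa.load(clip, sr=TARGET_SR, mono=True)
    out_path = Path("prepared_audio") / (clip.stem + ".wav")
    out_path.parent.mkdir(exist_ok=True)
    sf.write(out_path, audio, TARGET_SR)
    print(f"{clip.name}: {len(audio) / TARGET_SR:.1f}s written to {out_path}")
```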
Software Setup
Before we can train a model, we need to set up the Data Capture Lab to capture training data from the QuickFeather device. You can set it up by first creating an account and then downloading it from here.
After that, go to capture mode and set up your device, as well as the sensor, to start collecting data.
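As an optional sanity check before pointing the Data Capture Lab at the device, you can confirm that the flashed firmware is actually streaming over the serial port. The sketch below assumes pyserial; the port name and baud rate are placeholders, so use the values from your firmware and DCL plugin documentation, and make sure no other program is holding the port.

```python
# Optional sanity check: open the serial port and confirm the data collection
# firmware is streaming bytes before configuring Data Capture Lab.
# The port name and baud rate below are assumptions - replace them with the
# values from your firmware / DCL plugin documentation.
import serial  # pip install pyserial

PORT = "COM5"       # hypothetical port, replace with yours
BAUD_RATE = 460800  # assumed baud rate, confirm against the firmware docs

with serial.Serial(PORT, BAUD_RATE, timeout=2) as ser:
    chunk = ser.read(1024)

if chunk:
    print(f"Read {len(chunk)} bytes - the firmware appears to be streaming.")
else:
    print("No data received - check the wiring and the flashed firmware.")
```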
Data Capture Lab (Collecting and Labelling data)
Now that we have set up our device and software, we can start to collect and label our data in the Data Capture Lab (DCL).
To collect data, we played all the sounds through a speaker in front of the microphone on the QuickFeather board. A separate recording was made for each input noise type. We set up three classes in DCL: background, poacher, and deforestation. Once we had captured all the data, we created an equal number of segments for each class.
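For reference, here is a rough sketch of how the prepared clips can be played back one after another while DCL records. It assumes the clips from the earlier preparation step are in `prepared_audio`, and uses the third-party `sounddevice` and `soundfile` packages rather than anything from SensiML.

```python
# Play each prepared clip through the default speaker so the QuickFeather
# microphone can pick it up while Data Capture Lab records.
from pathlib import Path

import sounddevice as sd  # pip install sounddevice
import soundfile as sf    # pip install soundfile

for clip in sorted(Path("prepared_audio").glob("*.wav")):
    data, sr = sf.read(clip)
    print(f"Playing {clip.name} ({len(data) / sr:.1f}s)")
    sd.play(data, sr)  # play through the default output device
    sd.wait()          # block until the clip finishes
```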
Analytics Studio
Once you have captured and labelled the data into segments in DCL, it is automatically synced online with the Analytics Studio, where you can train your models. There are four main steps to train your model:
1. Create Query: This is used to get the data that you want to use for training your models. You need to select the label, source and metadata in the query. On the right hand side, you can see a breakdown of the different labels in your data.
2. Create Pipeline: Once you have created your query, you need to create a training pipeline to build your model. You need to select the query you created in the previous step and the type of segmenter, optimization metric and classifier size.
3. Explore Model: You can explore your model to see how well it performed on the data and view the feature vector plot. This can be useful for explaining your model.
4. Download Model: The final step is to download your model so that it can be deployed to the device. In my case, I chose the model that has a good balance between accuracy and model size.
The final step is to deploy the downloaded model to the QuickFeather board.
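As a rough sketch, and assuming the model you downloaded is a `.bin` image for the M4, it can be flashed with the same TinyFPGA programmer command used during hardware setup. The port and file name below are placeholders, not values from this project.

```python
# Flash the downloaded model binary to the QuickFeather, reusing the same
# TinyFPGA programmer command from the hardware setup steps.
# The port and file name are placeholders - replace them with your own.
import subprocess

PORT = "COM5"                             # hypothetical port
KNOWLEDGE_PACK = "kp_wildlife_audio.bin"  # hypothetical downloaded file name

subprocess.run(
    [
        "python", "tinyfpga-programmer-gui.py",
        "--port", PORT,
        "--m4", KNOWLEDGE_PACK,
        "--mode", "m4",
    ],
    check=True,
)
```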




