The journey of data capture and model development is finally behind you, and now comes the final leap: deploying your Edge AI model where it matters most - at the edge! After investing countless hours in data acquisition and model training, the real litmus test - seeing your model in action - is just ahead.
In this second part of the series (you can find the first part here), we take the next step by deploying our trained model on the target device, leveraging the seamless DEEPCRAFT™ MicroPython integration. This lets us deploy our model directly from DEEPCRAFT™ Studio with a single click, streamlining the entire process. Once the model is generated, our custom utility converts it into a MicroPython-compatible version, ready to be effortlessly dropped onto your device and imported into your MicroPython application. The workflow is straightforward, as illustrated below:
Now that we have set the stage, let's dive into the nitty-gritty of deploying our Edge AI model. We will use an audio-based model deployment as our example application and walk you through the process step by step. If you are new to PDM microphones or need a quick refresher on acquiring data from PDM microphones to develop your model, be sure to check out our previous articles, linked above.
Hardware Setup
All you need is the CY8CKIT-062S2-AI kit. The kit has two built-in IM72D128 digital microphones. The pins connected to the PDM-PCM bus on this kit are:
- CLK: P10_4
- DATA: P10_5
Step 1. Prerequisites
1. By now you should already have DEEPCRAFT™ Studio installed, but if not, please follow the steps detailed here and start your project.
2. MicroPython firmware running on your PSOC™ device. Find all the details on setting this up here.
3. The Converter Utility GitHub repository cloned at the root of your DEEPCRAFT™ project, following the steps in its documentation.
4. Ensure that GnuWin32 Make is installed.
Step 2. Deploy Model
1. To get started, create a new project or open an existing one in DEEPCRAFT™ Studio and make sure your model has already been created. Here we take the Drill Material Detection Audio Starter Model as our example.
2. Next, generate the source code for your model. To do so, open the model file named conv1d-medium-accuracy-1 and select the Code Gen tab. Click Generate Code, and a folder named Gen will be created containing model.c and model.h.
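If code generation succeeded, your project tree should contain something like the following (file names as generated by DEEPCRAFT™ Studio):
Gen/
├── model.c
└── model.h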
3. Next, clone the GitHub repository at the root of your DEEPCRAFT™ project. Follow the instructions provided in the README to make the installed utility script executable. Once the repository is cloned successfully, you should see the following:
4. Right-click on the utility script and select Run Script to execute it. This will open a CMD prompt, which will guide you through the process of generating a MicroPython version of your model.
5. Simply follow the instructions provided in the CMD prompt, and your deepcraft_model.mpy - a MicroPython version of your model - will be ready to use.
6. Access the generated MicroPython version of your model from any MicroPython-supported IDE, such as Thonny. Select the Upload to option to transfer your model to the edge device. This ensures that your model is deployed and ready for use on the device.
Step 3. Run the Model
1. Once your model is uploaded to the edge device, you can import it just like any other module in your MicroPython code. Simply use the following lines of code to import and instantiate your model:
import deepcraft_model as m
model = m.DEEPCRAFT()
2. In the application code, first import the essential modules and instantiate the model.
# Standard MicroPython modules
import time
import math
import gc
import array
import random
import sys
import select

# Machine-specific modules for pins and the PDM-PCM audio interface
import machine
from machine import PDM_PCM, Pin

# Generated DEEPCRAFT model module
import deepcraft_model as m

# Instantiate the model wrapper
model = m.DEEPCRAFT()
3. Configure the parameters for the microphone and the model. For details on microphone configuration, please refer to the previous article.
# Constants (adjust these according to your hardware and requirements)
SAMPLE_RATE_HZ = 16000 # Desired sample rate in Hz
AUDIO_BUFFER_SIZE = 512 # Size of the audio buffer
AUDIO_BITS_PER_SAMPLE = 16 # Dynamic range in bits
MICROPHONE_GAIN = 12 # Microphone gain setting (best prediction observed at 12)
DIGITAL_BOOST_FACTOR = 50.0 # Digital boost factor for input signal
IMAI_DATA_OUT_SYMBOLS = ["unlabelled", "air", "plastic", "plastic_out", "wood", "wood_out"]
# Initialize label scores and labels
label_scores = [0.0] * len(IMAI_DATA_OUT_SYMBOLS)
label_text = IMAI_DATA_OUT_SYMBOLS
data_out = array.array('f', [0.0] * len(IMAI_DATA_OUT_SYMBOLS))
# PDM_PCM configuration
clk_pin = "P10_4"
data_pin = "P10_5"
rx_buf = array.array('h', [0] * AUDIO_BUFFER_SIZE)
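As a quick sanity check on these values (a rough calculation we add here, not part of the original listing): at 16 kHz, a 512-sample buffer corresponds to 32 ms of audio per read.
# Each readinto() call below fills AUDIO_BUFFER_SIZE 16-bit samples.
# 512 samples / 16000 Hz = 0.032 s, i.e. 32 ms of audio per buffer.
buffer_ms = AUDIO_BUFFER_SIZE / SAMPLE_RATE_HZ * 1000
print(f"Each buffer holds {buffer_ms:.0f} ms of audio")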
4. Implement a function to normalize samples. How you scale the input depends entirely on the application being developed.
# Function to normalize a sample into the range [-1, 1]
def sample_normalize(sample):
    return sample / float(1 << (AUDIO_BITS_PER_SAMPLE - 1))
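For example, with AUDIO_BITS_PER_SAMPLE = 16 the divisor is 1 << 15 = 32768, so full-scale 16-bit samples map to roughly ±1.0 (a quick illustration we add here, not from the original article):
print(sample_normalize(32767))   # ~0.99997, positive full scale
print(sample_normalize(-32768))  # -1.0, negative full scale
print(sample_normalize(0))       # 0.0, silence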
5. Now, simply initialize the microphone to start capturing real-time signals.
def main():
    # Set the system clock for the PDM audio configuration (24.576 MHz)
    machine.freq(machine.AUDIO_PDM_24_576_000_HZ)
    # Initialize the model
    result = model.init()
    # Initialize audio
    pdm_pcm = PDM_PCM(
        0,
        sck=clk_pin,
        data=data_pin,
        sample_rate=SAMPLE_RATE_HZ,
        decimation_rate=64,
        bits=PDM_PCM.BITS_16,
        format=PDM_PCM.MONO_LEFT,
        left_gain=MICROPHONE_GAIN,
        right_gain=MICROPHONE_GAIN,
    )
    pdm_pcm.init()
    print("PDM initialized successfully")
6. Finally, start reading the data, feed it into the DEEPCRAFT™ model using model.enqueue(), and receive the classified output using model.dequeue():
    while True:
        num = pdm_pcm.readinto(rx_buf)
        sample_max = 0.0
        audio_count = num // 2  # readinto() returns bytes; 2 bytes per 16-bit sample
        for i in range(audio_count):
            # Get the sample from rx_buf and apply the digital boost factor
            raw_sample = rx_buf[i] * DIGITAL_BOOST_FACTOR
            # Normalize the boosted sample into the range [-1, 1]
            boosted_sample = sample_normalize(raw_sample)
            # Pass the boosted sample to the model
            result = model.enqueue([boosted_sample])
            # Track the loudest sample in this buffer
            sample_abs = abs(boosted_sample)
            if sample_abs > sample_max:
                sample_max = sample_abs
        # Check if there is any model output to process
        output_status = model.dequeue(data_out)
        if output_status == 0:
            max_score = -math.inf
            best_label = 0
            for idx, score in enumerate(data_out):
                print(f"Label: {label_text[idx]:<10} Score(%): {score*100:.4f}")
                if score > max_score:
                    max_score = score
                    best_label = idx
            print("\r\n")
            print(f"Output: {label_text[best_label]:<30}\r\n")
Here, we first boost the raw samples and normalize them to make them ready to input to the model. For every window of input data, a corresponding classified output is expected. Play audio of drilling through different materials and verify your model on the edge.
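To start everything, call main() at the end of the script. Wrapping the call so that Ctrl-C stops the loop cleanly is an optional convenience we add here, not part of the original listing; if you save the script as main.py, MicroPython runs it automatically at boot.
try:
    main()
except KeyboardInterrupt:
    print("Stopped by user")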
The following is an output snapshot when playing audio of drilling through air.
And with that, you're now equipped to detect drilling activities in your surroundings with ease!
Keep in mind that you are working with devices that have very limited RAM. Models that are too large or computationally heavy may not function reliably.
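One quick way to gauge how much headroom your board has is to check the free heap before and after importing the model; gc.mem_free() is available on MicroPython ports:
import gc

gc.collect()  # reclaim unreferenced objects first
print(gc.mem_free(), "bytes free on the MicroPython heap")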
Please check the documentation here to see the full list of APIs supported for interacting with your model.
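For reference, the calls used in this article are summarized below; see the linked documentation for the full API and exact signatures:
model = m.DEEPCRAFT()    # instantiate the model wrapper
model.init()             # initialize the model before streaming data
model.enqueue([sample])  # feed one normalized sample into the model
model.dequeue(data_out)  # fetch scores; returns 0 when new output is ready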
And that's how easy it is to get your model onto edge devices. By following these simple steps, you can take your model from development to deployment in a matter of minutes. So why wait? Start building and deploying your Edge AI models today with the DEEPCRAFT™ - MicroPython Integration!
Got Problems with MicroPython for PSOC™ 6?
Leave us a note here and we will do all it takes to fix it! 🚀
Curious for More?
This article covers the model deployment process using the DEEPCRAFT™ - MicroPython Integration. If you are looking for instructions on how to get started with data acquisition and model training, please check out Part I of this Protip series. 👩💻
Find additional information related to the PSOC™ 6 MicroPython enablement in our documentation or open a topic in our discussion section on GitHub.