Edge AI is becoming increasingly important as more applications demand real-time intelligence directly on embedded devices. With this in mind, the UNIHIKER K10 stands out as a powerful new single-board computer designed for education, prototyping, and AI-based projects. It comes packed with onboard peripherals such as a camera, microphone, display, speaker, buttons, and multiple sensors, making it an excellent platform for learning and experimenting with Edge AI.
However, being a newly released board, the UNIHIKER K10 presents a unique challenge. DFRobot provides a pre-built UNIHIKER K10 library that simplifies access to all the onboard peripherals. While this makes the board very easy to use for beginners, it also limits low-level hardware control. This becomes a major obstacle when trying to build custom applications, especially advanced workflows like integrating Edge Impulse with custom sensor pipelines and camera handling.
In this tutorial, we go beyond the default library approach. We not only demonstrate how to interface Edge Impulse with the UNIHIKER K10 and use its sensors for Edge AI applications, but we also reverse-engineer the board to gain deeper control over the hardware. By studying the official datasheet and hardware architecture, we implement our own custom code, enabling advanced Edge Impulse use cases that are not possible using the standard libraries alone.
By the end of this project, you will understand:
- How to run Edge AI models on the UNIHIKER K10
- How to work with its camera and sensors for Edge Impulse
- How to bypass library limitations using datasheet-based custom coding
- How to unlock the board’s full potential for real-world Edge AI applications
This guide is ideal for makers, students, and developers who want to move beyond basic examples and truly explore low-level Edge AI development on the UNIHIKER K10.
For beginners: in this tutorial, you’ll learn how to build real Edge AI models on the Unihiker K10 using Edge Impulse.
We’ll walk through:
✔ IMU gesture classification
✔ Environmental sensor regression
✔ Audio classification
✔ Image object detection
By the end, your K10 will be running multiple AI models directly on the device — no cloud processing needed!
Supplies
- 1x SD Card
- 1x 3D Printed Case (Optional)
1. Download the Project GitHub Repository
- Go to the project’s GitHub page: Learn-Edge-AI-on-Unihiker-K10-Edge-Impulse-Beginner-Tutorial
- Click Code → Download ZIP.
- Extract the ZIP file to your computer.
2. Copy the Assets Folder to the SD Card
Inside the extracted project folder, locate the assets folder.
This folder contains the required images.
- Insert the SD card used with the Unihiker K10 into your computer.
- Copy the entire assets folder into the root directory of the SD card.
Download the Case.stl file and 3D print it, or open the case design in Fusion 360 if you want to make changes.
Step 1: Introduction to UNIHIKER K10
The UNIHIKER K10 is an all-in-one AI learning board designed for beginners, makers, and educators who want to explore Edge AI, TinyML, sensors, and IoT without any wiring or extra modules.
Key Built-In Hardware
- ESP32-S3 microcontroller (TinyML-ready)
- 2.8" Color Display (240×320)
- 2MP Camera (perfect for image classification & object detection)
- Dual MEMS Microphones + Speaker (sound classification, anomaly detection, voice features)
Sensors:
- Temperature & Humidity
- Ambient Light
- 6-Axis IMU (motion & gesture detection)
- RGB LED, Buttons, MicroSD Slot, GPIO Expansion
- Wi-Fi + Bluetooth connectivity
Everything you need for AI experiments is already on the board — no extra hardware required.
Built-In AI Features
The K10 ships with ready-to-use demos like:
- Face detection
- Object recognition
- QR code detection
- Motion detection
- Offline voice commands
These give you an instant taste of what the hardware can do.
Why It’s Perfect for Edge Impulse
- Onboard sensors make data collection effortless
- Runs lightweight ML models directly on the device
In this tutorial, we’ll use all these onboard features to build real AI projects step-by-step.
Step 2: Introduction to Edge Impulse
Edge Impulse is a platform that makes it easy to build and deploy machine learning models on edge devices like the UNIHIKER K10, without needing deep AI expertise.
It’s designed for beginners, makers, and engineers who want to turn sensor data → trained ML model → real-world application in just a few steps.
Why Edge Impulse?
- No AI background needed: Build models using a visual workflow.
- Works with any sensor: IMU, microphone, camera, environment sensors and more...
- Collect data directly from the device: Record motion, sound, images, or environmental readings in real time.
- Automated ML pipeline: Feature extraction, model training, tuning—handled for you.
- Easy deployment: Export models directly as firmware, libraries, or code you can run on the K10.
To build any ML project in Edge Impulse, you’ll work through a simple workflow:
Acquire Data → Build an Impulse → Train → Deploy
Here’s what each part means and how you’ll use it in this tutorial.
3.1. Data Acquisition (Collecting Data)
Machine learning always starts with good, clean data.
Edge Impulse supports multiple ways to bring data into your project.
In this tutorial, we’ll cover these three:
● Data Forwarder
- Streams live sensor values (IMU, temperature, humidity, light, etc.) from the K10 to Edge Impulse.
- Great for collecting gesture, environmental, or motion data.
● CSV Wizard
- Upload CSV files and automatically map columns to sensors.
- Useful for any data you already logged or generated separately.
● Data Upload
- Drag-and-drop images, audio files, or sensor logs directly.
- Perfect for camera datasets or pre-recorded sound samples.
These three methods give you flexibility no matter what sensor or project you’re working on.
3.2. Creating an Impulse
An Impulse is your complete ML pipeline — the recipe that explains how data flows from input → preprocessing → model → output.
An Impulse contains:
- Input block (sensor data, audio, images, etc.)
- Processing block (feature extraction)
- Learning block (the model you train)
- Output block (classification, prediction, score, detection)
3.3. Understanding the Blocks
● Time-Series Blocks
- For IMU, environmental sensors, or audio.
- Generates features like frequency, energy, motion patterns.
● Spectrogram / MFCC
- For audio: Converts sound into frequency images that the model can learn from.
● Image Processing
- For camera images: Resizes, normalizes, and prepares data for object detection or classification.
● Neural Network / Regression / Anomaly Blocks
- Models that learn patterns and make predictions.
3.4. Types of Models in Edge Impulse
Classification
Identifies which category something belongs to.
Good for:
- Gestures
- Sound types
- Simple image classes
Regression
Predicts a continuous value.
Great for:
- Temperature/light prediction
- Sensor trend estimation
- Context-aware adjustments
Anomaly Detection
Finds "something unusual" compared to normal behavior.
Useful for:
- Identifying abnormal movements
- Monitoring environmental changes
Object Detection
Locates and identifies objects in images (bounding boxes).
Good for:
- Counting objects
- Detecting tools or items
- Camera-triggered automation
Each model type pairs naturally with different K10 sensors, enabling a wide variety of applications.
3.5. Training the Model
Once your impulse is set, you train your model with your collected data:
- Choose training/test split
- Tune hyperparameters (automatically or manually)
- Check accuracy and performance metrics
- Visualize loss curves and confusion matrix
Good training = reliable real-world results.
3.6. Deploying the Model
After training, Edge Impulse lets you deploy in multiple ways:
- Firmware / Binary for supported boards
- C++ library, Arduino library, Python library, and more
- WebAssembly for browser demos
- Edge Impulse Studio live classification
For this tutorial, you’ll deploy models as an Arduino Library for the Unihiker K10, run inference directly on the device, and display the results on the screen.
Step 4: Motion Data Classification
From Step 4 to Step 10, we will train and deploy a Motion Classification model on the Unihiker K10 using Edge Impulse.
To begin, go to edgeimpulse.com.
If you are a new user, create an account and log in.
Once logged in, you will see your Project Dashboard.
Click Create new project, enter a project name, select Public or Private, and click Create.
After the project is created, you’ll be redirected to the main project page.
Now we start with the first important step: data collection.
Step 5: Prepare for Data Acquisition
In this step, we will collect 6-axis IMU data from the Unihiker K10 into the Edge Impulse project using the Data Forwarder method.
5.1 Flash the Data-Streaming Arduino Code
- Download the GitHub repo for this project: Learn-Edge-AI-on-Unihiker-K10-Edge-Impulse-Beginner-Tutorial
- Open the Motion_Data_Collect sketch in the Arduino IDE.
Before uploading the code, ensure that the Unihiker K10 board manager is installed.
5.2 Install the Unihiker K10 Board in Arduino IDE
- Open:
- File → Preferences
- In Additional Board Manager URLs, paste:
https://downloadcd.dfrobot.com.cn/UNIHIKER/package_unihiker_index.json
- Click OK.
- Now go to:
- Tools → Board → Board Manager
- Search for Unihiker and install it.
Once installed, you should see Unihiker K10 in the board list.
5.3 Connect and Upload the Code
- Connect the Unihiker K10 to your PC via USB.
- Go to Tools:
- Select Board: Unihiker K10
- Select the correct COM port
- Enable USB CDC on Boot
- Upload the sketch.
Once the upload is successful, the K10 is ready to stream IMU data over USB.
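If you are curious what the streaming sketch does under the hood, the idea is simply to print one line of comma-separated accelerometer values over USB serial at a fixed rate, which is the format the edge-impulse-data-forwarder auto-detects. Below is a minimal illustrative sketch of that pattern; the readAccel*() functions are placeholders, since the actual Motion_Data_Collect sketch reads the K10's onboard IMU instead.
// Minimal data-forwarder streaming pattern (illustrative only).
// The readAccel*() stubs are placeholders; the real Motion_Data_Collect
// sketch reads the K10's onboard 6-axis IMU instead.
#define SAMPLE_RATE_HZ 50
#define INTERVAL_MS (1000 / SAMPLE_RATE_HZ)

float readAccelX() { return 0.0f; }   // placeholder: replace with the IMU X read
float readAccelY() { return 0.0f; }   // placeholder
float readAccelZ() { return 0.0f; }   // placeholder

void setup() {
  Serial.begin(115200);               // the data forwarder listens on this serial port
}

void loop() {
  static unsigned long last = 0;
  if (millis() - last >= INTERVAL_MS) {
    last += INTERVAL_MS;
    // One sample per line, values separated by commas (tabs also work)
    Serial.print(readAccelX()); Serial.print(',');
    Serial.print(readAccelY()); Serial.print(',');
    Serial.println(readAccelZ());
  }
}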
5.4 Install Edge Impulse CLI
To send data from the device to Edge Impulse, install the CLI on your PC:
Edge Impulse CLI Installation Guide:
https://docs.edgeimpulse.com/tools/clis/edge-impulse-cli/installation
After installation, open a new Command Prompt window and run:
edge-impulse-data-forwarder
The CLI will ask you to log in using your Edge Impulse email and password.
5.5 Select Your Device and Project
Once logged in:
- You’ll see a list of available COM ports.
- Select the port used by the Unihiker (same as seen in Arduino IDE).
- The CLI will list all your Edge Impulse projects.
- Select the project you created for this tutorial.
5.6 Naming the Sensor Parameters
The CLI will now read the incoming IMU data string from the K10 and ask:
What do you want to call them? Separate the names with ', '
This defines the CSV column names.
For IMU data, name them:
x, y, z
(If you are using environmental sensors in the future, you can name the fields Temp, Humi, Press, etc.)
Next, it will ask for a device name. Enter any name you like and press Enter.
Your device is now successfully connected to Edge Impulse.
5.7 Start Collecting Data
Do not close the Command Prompt window.
Now return to your browser, open your Edge Impulse project, and begin collecting IMU motion samples.
Now that your Unihiker K10 is streaming IMU data, let’s start collecting samples inside Edge Impulse.
- Open your Edge Impulse project and go to Data Acquisition.
- If your CMD window is still running the forwarder, you should see your device listed under “Connected Devices.”
- Next, set the Label and Sample Length.
Label is simply the “name” of the motion you’re recording — for example:
- idle
- shake
- left_right
- up_down
- Click Start Sampling.
- Edge Impulse will record IMU data directly from your K10.
Gestures to Record
For this tutorial, we’ll collect 4 motion types:
- Idle (keep the device still)
- Shake
- Left-Right (LR) motion
- Up-Down (UD) motion
Record:
3 samples × 10 seconds each → 30 seconds of data per gesture.
Each motion type will show a unique waveform pattern in the raw accelerometer traces.
- After all samples are collected, click the three dots next to each 10-second sample and choose Split sample.
- Split them into 1-second windows for cleaner training.
- Finally, move a few of your samples from Training to Testing, keeping an 80:20 split.
Your IMU dataset is now ready for building the model!
Step 7: Create Impulse
Now that your data is ready, let’s build the Impulse — the pipeline that processes your sensor data and trains your AI model.
- Go to the Create Impulse tab in your Edge Impulse project.
- Add the blocks:
- Processing Block: Select Spectral Analysis
(For IMU data, spectral features help capture motion frequency patterns.
Note: Different data types require different processing blocks.)
- Learning Block: Select Classification
(Because we want the model to recognize which motion is happening.)
- Click Save Impulse to apply the configuration.
Your impulse is now set — next, we’ll generate features and train the model!
Step 8: Generate Features
Now that your impulse is configured, it’s time to convert the raw IMU signals into meaningful features that the classifier can learn from.
- Go to Spectral Features under Impulse Design.
- Scroll through the parameters (FFT length, filters, spectrum options).
- For this tutorial, we will keep the default values — simply click Save parameters.
- After saving, switch to the Generate Features tab.
- Click Generate features.
Edge Impulse will now:
- Resample your data
- Slice it into windows
- Run FFT on each window
- Create a feature vector for every sample
Once processing is complete, you’ll see:
- A Feature Explorer plot showing clusters for each gesture (Idle, LR, Shake, UD)
- On-device performance estimates (RAM usage + processing time)
These clusters visually confirm that each motion has a distinct pattern — perfect for classification.
Step 9: Train the Model
With your features ready, it’s time to train the motion classification model.
- Go to the Classifier panel under Impulse Design
- Keep all settings at their default values
- Click Save & Train.
Edge Impulse will now train a neural network using your IMU feature data.
After Training you will see:
- Accuracy (in our case, 100%)
- Loss value
- A Confusion Matrix showing how well each motion was recognized
- Performance Metrics (Precision, Recall, F1 Score)
- Data Explorer visualization showing correct vs incorrect classifications
- On-device performance
- Inference time (around 1 ms)
- Peak RAM usage
- Flash usage
If your dataset was clean and well-separated, you should see strong accuracy and tight clusters — just like in the screenshots.
Step 10: Deploying the Model on Unihiker K10
Now that your model is trained, it’s time to run it directly on the Unihiker K10 and visualize the results with RGB LED colors and emoji images, just like in the example code.
1. Export the Model as Arduino Library
- Go to the Deployment panel in Edge Impulse.
- Under Deployment options, choose Arduino Library.
- Click Build.
- After the build finishes, the .zip Arduino library will download automatically.
2. Install the Arduino Library
- Open Arduino IDE.
- Go to: Sketch → Include Library → Add .ZIP Library…
- Select the downloaded Edge Impulse library.
After installation, you'll see it under: File → Examples → Your Edge Impulse Library
3. Import the Example Code
- Navigate to your GitHub repo folder and open: Motion_Data_Classification.ino
- This is your main Arduino sketch.
4. Fix the Library Name
Every Edge Impulse deployment generates a library with a unique name.
To avoid compile errors:
- Open any example from the installed EI library
- Copy the exact library include name
- Replace the include in your .ino file:
#include <your_ei_library_name_here.h>
The gesture classifier object names may also differ—match them with the classes in your library.
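For example, Edge Impulse names the generated header after your project, so if your project were called "Motion Classification" the include would typically look like the line below (the exact name depends on your own project; copy it from the examples installed with the library):
#include <Motion_Classification_inferencing.h>   // hypothetical name - copy yours from the installed examples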
5. Upload to Unihiker K10
Now:
- Select the Unihiker K10 board
- Select the correct COM port
- Click Upload
⏳ Compilation takes a while—be patient.
Once upload completes, the K10 will automatically start running inference on incoming IMU data.
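For reference, the heart of such a sketch follows the standard Edge Impulse Arduino pattern: collect one window of IMU samples, wrap the buffer in a signal, call run_classifier(), and pick the label with the highest score. The condensed sketch below illustrates that pattern only; the header name and the readAccel*() stubs are placeholders, and the repository sketch adds the display and LED handling on top.
#include <Motion_Classification_inferencing.h>   // hypothetical name - use your generated header

float readAccelX() { return 0.0f; }   // placeholders: replace with the K10 IMU reads
float readAccelY() { return 0.0f; }
float readAccelZ() { return 0.0f; }

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
  for (size_t i = 0; i < length; i++) out_ptr[i] = features[offset + i];
  return 0;
}

void setup() { Serial.begin(115200); }

void loop() {
  // Fill one window with raw accelerometer samples at the training frequency
  for (size_t i = 0; i < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; i += 3) {
    features[i + 0] = readAccelX();
    features[i + 1] = readAccelY();
    features[i + 2] = readAccelZ();
    delay(1000 / EI_CLASSIFIER_FREQUENCY);
  }

  // Wrap the buffer in a signal and run inference
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &get_feature_data;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return;

  // Pick the label with the highest score; it drives the LED and emoji mapping below
  const char *gesture = "";
  float best = 0.0f;
  for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    if (result.classification[ix].value > best) {
      best = result.classification[ix].value;
      gesture = result.classification[ix].label;
    }
  }
  Serial.println(gesture);
}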
How the Code Works
The classification logic maps each detected gesture to an RGB color and an emoji image:
If Gesture = Idle
- RGB LEDs → Green
- Image → Normal face (EMJ1.png)
If Gesture = UD
- RGB LEDs → Pink
- Image → Happy emoji (EMJ2.png)
If Gesture = Shake
- RGB LEDs → Red
- Image → Spiral eyes (EMJ3.png)
If Gesture = Left/Right
- RGB LEDs → Blue
- Image → Sleepy emoji (EMJ4.png)
The gesture-to-output mapping in the sketch looks like this:
// =============================
// RGB COLOR + IMAGE DISPLAY
// =============================
uint32_t color = 0x000000;
const char *image_path = "S:/default.png";
if (strcmp(gesture, "Idle") == 0) {
color = 0x00FF00;
image_path = "S:/EMJ1.png";
}
else if (strcmp(gesture, "UD") == 0) {
color = 0xFF00FF;
image_path = "S:/EMJ2.png";
}
else if (strcmp(gesture, "Shake") == 0) {
color = 0xFF0000;
image_path = "S:/EMJ3.png";
}
else if (strcmp(gesture, "LR") == 0) {
color = 0x0000FF;
image_path = "S:/EMJ4.png";
}
This ensures the RGB LEDs and emoji images update instantly as the IMU detects each motion class.
✔ Final Output on Unihiker K10
Once the code runs:
- The IMU detects the gesture
- The model classifies it in ~1 ms
- The correct emoji appears on the screen
- The LED color changes according to the gesture
Step 11: Environmental Sensor Regression
From Step 11 to Step 14, we will train and deploy an Environmental Sensor Regression model on the Unihiker K10.
This model predicts a Comfort Score based on the on-board Temperature and Humidity sensor values.
11.1. Create a New Edge Impulse Project
- Start by creating a new project in Edge Impulse.
- Then go to the Data Acquisition panel.
11.2. Generating a Dataset (Using Web App)
Unlike motion data, it’s difficult to manually collect environmental samples for all temperature/humidity combinations.
So, to generate a realistic dataset, I used a web app (created with Claude) that simulates many environmental conditions and calculates a comfort score.
Open the dataset generator:
🔗 https://claude.ai/public/artifacts/c39715ad-48e6-4e8b-ad16-cbb41714074a
Configure the dataset:
- Set minimum & maximum Temperature
- Set minimum & maximum Humidity
- Choose the number of data points (I used 1000)
- Click Generate
Once generated, download the dataset as a CSV file.
11.3. Import Data Using CSV Wizard
Back in your Edge Impulse project:
- Go to Data Acquisition
- Click CSV Wizard
- Choose your downloaded CSV file
- Click Looks Good
- When asked:
- Select No time-series data
- Choose Comfort score as the label
- Select temperature and humidity as features / values
- Click Finish Wizard
11.4. Upload the Data
- Now click Upload the data and select the same CSV again.
- Keep the auto split between training and testing.
- Then click Upload.
Edge Impulse will now ingest all 1000 data points and organize them into your project dataset.
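For reference, the CSV the wizard ingests is just a flat table with one row per generated sample; with the settings above, the header row will look roughly like the line below (the exact column names depend on what the generator produces):
temperature,humidity,comfort_score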
Step 12: Create Impulse & Generate Features
Now that we have uploaded our environmental dataset, the next step is to set up the impulse—this defines how Edge Impulse will process the data and train the regression model.
12.1. Create the Impulse
Navigate to Create impulse in the left panel.
For this regression project, the default settings work perfectly, so we will not change the window size or frequency.
Now configure the blocks:
Processing Block
- Select Raw Data
- Since temperature & humidity are already clean numeric values, no extra processing is required.
Learning Block
- Select Regression
- We want to predict a continuous value: Comfort Score.
Click Save impulse.
12.2. Generate Features
Once the impulse is saved:
- Go to the Raw data panel
- Click Save parameters
- Then click Generate features
Leave all settings to default and let Edge Impulse process the dataset.
After generation, you will see:
- A Feature Explorer scatter plot
- Values mapped according to temperature, humidity, and comfort score
- On-device performance metrics (RAM, flash usage, etc.)
This completes the feature generation step.
Step 13: Train the Regression Model & Understand the Results
Now that all features are generated, the next step is to train the regression model that predicts the Comfort Score based on Temperature and Humidity.
1. Train the Model
- Go to the Regression panel in Edge Impulse.
- Leave all settings at their default values.
- Click Save & Train.
Edge Impulse will now train a lightweight neural network regression model.
Once training is complete, you will see the model performance results.
2. Understanding the Training Results
After training, Edge Impulse provides two important visual reports:
Model Metrics (Validation Set)
These metrics indicate how accurate the model is:
- Loss (MSE): 0.42 (low mean squared error → very good accuracy)
- Mean Absolute Error (MAE): 0.51 (predictions are off by only ±0.5 on average)
- Explained Variance Score: 0.93 (the model explains 93% of the variation → excellent)
Interpretation
- A 0.42 MSE and 0.93 variance score indicate that the model fits the data very well.
- The small MAE means comfort scores are predicted very close to the real values.
- Overall, this is a stable and highly accurate regression model ready to be deployed.
Data Explorer (Prediction Visualization)
The scatter plot shows how well the model predicts values across the entire dataset.
- Green dots – Predictions that are correct (within the acceptable error threshold).
- Red dots – Predictions that are incorrect (outside the error threshold).
From the visualization, we can observe:
- Most data points are green, meaning the model learned the relationship between Temperature, Humidity, and Comfort Score very well.
- Only a few points are red, usually in extreme environmental ranges.
- This indicates a high-quality regression model suitable for real-time applications.
Edge Impulse also lists the on-device performance:
- Inferencing time: ~2 ms
- Peak RAM usage: ~1.2 KB
- Flash usage: ~10.2 KB
This confirms the model is extremely lightweight and ideal for deployment on Unihiker K10.
3. Export the Model
Now go to the Deployment section:
- Select Arduino Library
- Click Build to generate the library
- Edge Impulse will automatically download the .zip file
This library will be used in the next step to run comfort score predictions directly on the Unihiker K10.
Step 14: Upload & Run the Regression Model on Unihiker K10
Now let’s deploy the Comfort Score Regression model on the Unihiker K10.
14.1. Open the Arduino Example Code
- From the downloaded GitHub repository, open the Environmental_Sensor_Regression sketch in Arduino IDE.
14.2. Install Required Libraries
a. Install the Edge Impulse Regression Model Library
- Go to Sketch → Include Library → Add .ZIP Library
- Select the Arduino library you downloaded from Edge Impulse.
- Once installed, the library will appear under File → Examples.
Important: Update the Edge Impulse library name in the code to match the exact name of your downloaded library folder.
b. Install the AHT20 Sensor Library
- Open Library Manager (Tools → Manage Libraries)
- Search for AHT20
- Install the official AHT20 temperature & humidity sensor library.
14.3. Upload the Code to Unihiker K10
- Select the correct Board (Unihiker K10)
- Select the correct COM port
- Click Upload
The first compilation may take some time—be patient.
14.4. View the Results on the Screen
Once uploaded successfully, the K10 display will show:
- Live Temperature readings
- Live Humidity readings
- Predicted Comfort Score (from the regression model)
The values update in real time, demonstrating on-device machine learning running directly on the Unihiker K10.
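To give a sense of what the Environmental_Sensor_Regression sketch is doing internally, here is a condensed sketch of the pattern: read temperature and humidity, place them in a two-value feature buffer matching the raw-data impulse, and run the regression model. The header name and the sensor-read stubs are placeholders; the repository sketch uses the AHT20 library and the K10 display instead of Serial.
#include <Comfort_Score_inferencing.h>   // hypothetical name - use your generated header

float readTemperatureC() { return 0.0f; }   // placeholder: replace with the AHT20 temperature read
float readHumidityRH()   { return 0.0f; }   // placeholder: replace with the AHT20 humidity read

// Two raw values per inference: temperature and humidity (matches the raw-data impulse)
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
  for (size_t i = 0; i < length; i++) out_ptr[i] = features[offset + i];
  return 0;
}

void setup() { Serial.begin(115200); }

void loop() {
  features[0] = readTemperatureC();
  features[1] = readHumidityRH();

  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &get_feature_data;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    // A regression impulse exposes its single output as the first classification value
    Serial.print("Comfort score: ");
    Serial.println(result.classification[0].value);
  }
  delay(1000);
}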
Step 15: Audio Classification – Data Collection
From Step 15 to Step 17, we will collect audio data using the Unihiker K10, train an audio classification model, and deploy it on the device.
1. Create a New Edge Impulse Project
- Go to edgeimpulse.com
- Create a new project for audio classification.
2. Upload Audio Data Collection Firmware
- Open the Audio_Data_Collect Arduino sketch from the GitHub repository.
- In the code:
- Set the number of labels
- Define the label names (classes)
// ---------------- Settings -----------------
String labels[] = {"Noise", "Gun", "Chainsaw"};
int labelCount = 3;
int currentLabelIndex = 0;
- Upload the code to the Unihiker K10.
3. Audio Recording Interface on K10
Once uploaded, the K10 screen will display:
- Current label name
- Sampling time (in seconds)
- Name of the last recorded audio file
Each audio file is saved as:
<LabelName>_<RandomNumber>.wav
4. Button Controls
The onboard buttons control recording and navigation:
- Button A
- Click → Start audio recording
- Long press → Go to previous label
- Button B
- Click → Increase recording time by 1 second
- Double click → Reset recording time to default
- Long press → Go to next label
5. Collect Audio Samples
For this demo, record the following classes:
- Noise (background sound)
- Gunshot
- Chainsaw
Data Collection Plan
- Each recording: 10 seconds
- Total data per class: ~1 minute
This provides enough variation for training a reliable audio classification model.
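As a side note, the file-naming scheme above is plain string handling around the labels[] table shown earlier; an illustrative fragment of that idea (with SD writing and I2S capture omitted, since the Audio_Data_Collect sketch handles those) might look like this:
// Illustrative fragment only: deriving file names and cycling labels.
// SD writing and I2S microphone capture are handled elsewhere in the sketch.
String labels[] = {"Noise", "Gun", "Chainsaw"};
int labelCount = 3;
int currentLabelIndex = 0;

String nextFileName() {
  // Produces names like "Gun_48213.wav", matching <LabelName>_<RandomNumber>.wav
  return labels[currentLabelIndex] + "_" + String(random(100000)) + ".wav";
}

void nextLabel()     { currentLabelIndex = (currentLabelIndex + 1) % labelCount; }
void previousLabel() { currentLabelIndex = (currentLabelIndex + labelCount - 1) % labelCount; }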
Step 16: Audio Data Processing
Once the audio samples are recorded and saved on the SD card, the next step is to import and prepare the data in Edge Impulse.
1. Upload Audio Data
- Open your Edge Impulse project
- Go to Data Acquisition → Upload data
- Select all audio files belonging to one label
- Set the Label name correctly
- Upload the files
Repeat this for each audio class.
2. Split Audio Samples
- After uploading, each audio file will be 10 seconds long
- For better training, split each sample into 1-second segments (Both in training and test)
- This increases the number of training samples and improves accuracy
3. Crop Important Audio Features
- While splitting, make sure to crop around meaningful audio spikes
- Remove silent or irrelevant portions of the audio
- Focus on sections where the sound pattern is clearly visible
Step 17: Train the Audio Classification Model
Once your audio dataset is clean and properly split, it’s time to train and deploy the model.
1. Create the Impulse
Go to Create Impulse
Set:
- Processing block:
- MFE → for non-voice sounds (noise, machines, events)
- MFCC → for voice or speech-based data
- Learning block → Classification
- Leave all other settings as default
- Click Save Impulse
2. Generate Features
- Open the MFE / MFCC panel
- Click Save parameters
- Click Generate features (use default settings)
This step extracts meaningful frequency features from the audio samples.
3. Train the Model
- Go to the Classification panel
- Keep all parameters at their default values
- Click Save & Train
4. Review Model Results
Once training is complete, Edge Impulse will display:
- Overall model accuracy
- Confusion matrix showing how well each class is recognized
- Per-class performance metrics
If the accuracy is low, consider collecting more data or improving audio cropping.
Model Results Explanation
From the results shown above, we can see that the audio classification model is performing extremely well:
Accuracy: 100%
- The model correctly classified all validation samples.
- Confusion Matrix
- Chainsaw, Gun, and Noise are all classified with 100% confidence
- No overlap or misclassification between classes
- This means each sound has a very distinct audio pattern.
Evaluation Metrics
- Precision, Recall, and F1 Score are all 1.0
- Area under ROC Curve is 1.0, indicating perfect separability
Data Explorer
- Each class forms a clearly separated cluster
- This shows the MFE/MFCC features successfully captured unique sound characteristics
On-device Performance
- Inference Time: 7 ms
- RAM Usage: 14.7 KB
- Flash Usage: 41 KB
This confirms the model is accurate, lightweight, and perfectly suited for real-time edge deployment on the Unihiker K10.
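When you later deploy this model (the same Arduino-library route used for the motion model), the on-device loop follows the standard Edge Impulse audio pattern: capture one window of 16-bit PCM samples, convert them to floats inside the signal callback, and run the classifier. The skeleton below is illustrative only; the header name and the captureAudio() stub are placeholders, and the repository's Audio_Data_Classification sketch handles the actual I2S microphone capture and on-screen display.
#include <Audio_Classification_inferencing.h>   // hypothetical name - use your generated header

// Placeholder: fill 'buf' with 'count' 16-bit PCM samples from the K10 microphone (I2S)
void captureAudio(int16_t *buf, size_t count) { /* mic capture goes here */ }

static int16_t audio_buf[EI_CLASSIFIER_RAW_SAMPLE_COUNT];

static int audio_get_data(size_t offset, size_t length, float *out_ptr) {
  for (size_t i = 0; i < length; i++) {
    out_ptr[i] = audio_buf[offset + i] / 32768.0f;   // convert int16 PCM to float
  }
  return 0;
}

void setup() { Serial.begin(115200); }

void loop() {
  captureAudio(audio_buf, EI_CLASSIFIER_RAW_SAMPLE_COUNT);   // one window of audio

  signal_t signal;
  signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
  signal.get_data = &audio_get_data;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return;

  for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    Serial.print(result.classification[ix].label);
    Serial.print(": ");
    Serial.println(result.classification[ix].value);
  }
}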
Step 18: Image Object Detection – Data Collection
From Step 18 to Step 20, we will collect image data, train an image object detection model, and deploy it on the Unihiker K10.
1. Create a New Project
- Go to Edge Impulse
- Create a new project for image object detection
2. Upload Image Data Collection Firmware
- Open the Image_Data_Collect Arduino sketch from the GitHub repository
- In the code:
- Set the number of labels
- Define the label names
/* =========================================================
Labels Configuration
========================================================= */
String labels[] = {"BACKGROUND","IN", "CN", "USA", "HK"};
int labelCount = 5;
int currentLabelIndex = 0;
- Upload the code to the Unihiker K10
3. Image Capture Controls
Once the code is uploaded, use the buttons to capture images:
- Button A → Capture image
- Button B → Next label
- Long press Button A → Previous label
Each captured image is saved as:
<LabelName>_<RandomNumber>.jpeg
4. Collect Image Dataset
For this demo, images were collected for 4 different coins:
- Indian Coin
- Chinese Coin
- Hong Kong Coin
- USA Coin
5. Background Images
- Background-only images (without objects) help the model learn what not to detect
- These improve detection accuracy and reduce false positives
Data Collection Tips
- Capture 15–16 images per class
- Use different orientations, angles, and lighting
- Ensure the object is clearly visible in each frame
6. Upload Images
- Copy images from the SD card
- Upload all images to Data Acquisition in Edge Impulse
- Upload without assigning labels (labels will be added during bounding box annotation)
Step 19: Annotate the Images
After uploading all images, the next step is to label the objects using bounding box annotation.
1. Annotate Images
- Open Data Acquisition
- Click on each uploaded image
- Draw a bounding box around the object (coin)
- Assign the correct label name
Make sure the box tightly covers only the object.
2. Annotate All Samples
- Repeat this process for all images
- Annotate both training and test datasets
Once all images are annotated, your dataset is ready for impulse creation and model training in the next step.
Step 20: Training the Image Object Detection Model
Now that all images are annotated, we can create and train the object detection model.
1. Create the Impulse
Go to Create Impulse
Set:
- Processing block → Image
- Learning block → Object Detection
- Leave all other settings as default
- Click Save Impulse
2. Configure Image Processing
- Open the Image block
- Select Grayscale (reduces memory usage and improves edge performance)
- Click Save parameters
- Click Generate features
3. Train the Object Detection Model
- Go to the Object Detection panel
- Change the following parameters:
- Training cycles: 60
- Learning rate: 0.01
- Leave other settings unchanged
- Click Save & Train
4. Model Results Explanation
From the results shown above:
F1 Score: 96%
- Indicates strong balance between precision and recall for object detection.
Confusion Matrix
- Coins from India, China, Hong Kong, and the USA are detected correctly
- Background samples help reduce false detections
Metrics
- Precision (non-background): 1.00 → Very few false positives
- Recall (non-background): 0.92 → Most objects are detected successfully
- F1 Score: 0.96 → Excellent real-world performance
On-device Performance
- Inference Time: ~1.1 seconds
- RAM Usage: 119 KB
- Flash Usage: 81 KB
This confirms the model is accurate and optimized for edge object detection on the Unihiker K10.
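When the model is deployed on the K10 (again via the Arduino library), detection results come back as a list of bounding boxes rather than a single label. The snippet below shows the standard Edge Impulse way of reading them after run_classifier() has been called on a camera frame; the header name is a placeholder, and the camera capture, resizing, and on-screen drawing are handled by the repository sketch.
#include <Coin_Detection_inferencing.h>   // hypothetical name - use your generated header

// Print every detected object from an ei_impulse_result_t filled by run_classifier()
void printDetections(const ei_impulse_result_t &result) {
  for (size_t ix = 0; ix < result.bounding_boxes_count; ix++) {
    const ei_impulse_result_bounding_box_t &bb = result.bounding_boxes[ix];
    if (bb.value == 0) continue;             // skip empty slots
    Serial.print(bb.label);                  // e.g. "IN", "CN", "USA", "HK"
    Serial.print(" (");
    Serial.print(bb.value);                  // confidence, 0..1
    Serial.print(") at x=");
    Serial.print(bb.x);
    Serial.print(" y=");
    Serial.print(bb.y);
    Serial.print(" w=");
    Serial.print(bb.width);
    Serial.print(" h=");
    Serial.println(bb.height);
  }
}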
UNIHIKER K10 – Detailed Pin Mapping, I²C Addresses & Control Lines
This section describes how each major peripheral on the UNIHIKER K10 is electrically connected to the ESP32-S3, including exact GPIO pins, I²C addresses, and signal roles, as observed from the schematic and validated by practical code mapping.
1. Core MCU – ESP32-S3-WROOM-1
The ESP32-S3 acts as the central controller and directly interfaces with the camera, audio (I2S), SPI peripherals, USB, and I²C devices.
2. Camera Sensor – GC2145
Interface Type: 8-bit parallel DVP + I²C control
(Camera data & sync pins and camera control lines: see the schematic pin tables.)
3. Display Controller – ILI9341
Interface: SPI (no MISO)
(SPI pins and display control lines: see the schematic pin tables.)
SPI alone is insufficient - display initialization requires XL9535 configuration first.
4. SD Card (SPI Mode)
Interface: SPI (shared bus)
(SD card pins: see the schematic pin table.)
5. Digital Microphones
Function: Digital microphone ADC
Interface: I2S + I²C
(I2S signal mapping: see the schematic pin table.)
6. I/O Expander
IC: XL9535QF24
Address: 0x20
(Functions controlled via the XL9535: see the schematic.)
Buttons are NOT connected directly to ESP32 GPIOs.
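A quick way to confirm that the XL9535 (and the other I²C peripherals) really answer where the schematic says is a plain Wire-library I²C scan. The sketch below is generic ESP32 code; if the default SDA/SCL pins of the installed board package do not match the K10 wiring, pass the correct pins to Wire.begin(SDA, SCL).
#include <Wire.h>

// Simple I2C bus scan: the XL9535 I/O expander should answer at 0x20.
void setup() {
  Serial.begin(115200);
  Wire.begin();            // if the board variant's defaults differ, use Wire.begin(SDA, SCL)
}

void loop() {
  for (uint8_t addr = 1; addr < 127; addr++) {
    Wire.beginTransmission(addr);
    if (Wire.endTransmission() == 0) {
      Serial.print("Device found at 0x");
      Serial.println(addr, HEX);
    }
  }
  Serial.println("Scan complete");
  delay(5000);
}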
In this tutorial, you learned how to build end-to-end Edge AI applications on the Unihiker K10 using Edge Impulse.
We covered:
- IMU-based gesture classification - Motion_Data_Classification
- Environmental sensor regression for comfort prediction
- Audio classification using the onboard microphone - Audio_Data_Classification
- Image object detection using the camera - Image_Data_Object_Detection
You explored the complete workflow — from data collection and preprocessing to model training, optimization, and deployment as an Arduino library running directly on the device with real-time inference and on-screen visualization.
This demonstrates how powerful and accessible Edge AI has become — enabling anyone to turn sensor data into intelligent, responsive applications without cloud dependency.
What’s next?
- Add more sensors or classes
- Improve models with more data
- Combine multiple models into a single smart system
- Build real-world applications like smart assistants, safety systems, or interactive devices
Thanks for following along, and happy building with Edge Impulse on Unihiker K10!