This tutorial will guide you through creating and running a custom object detection AI model on the Raspberry Pi AI Camera. It's designed for beginners to follow along step-by-step.
Running custom AI models on the Raspberry Pi AI Camera involves several technical steps, including preparing training code and optimizing the pre-trained model for the camera's hardware. This process can feel overwhelming at first.
To simplify AI model development for the Raspberry Pi AI Camera, we've created sample code specifically for developers. Using this sample, you'll be able to train and deploy your own object detection model directly on the AI Camera.
In this tutorial, we'll use a publicly available dataset of geometric shapes (circles, triangles, and squares) to demonstrate the complete workflow from training to deployment.
Looking for more? Check out Part 2 of this series, which covers keypoint detection models.
🎯 Overview of this article
- Build your own object detection AI model using the provided sample code
- Set up a training environment using either Docker or a traditional local setup
- Generate optimized model files and deploy them to your Raspberry Pi AI Camera
What you'll need:
- Raspberry Pi (any model compatible with the AI Camera)
- Raspberry Pi AI Camera
- Computer with NVIDIA GPU (highly recommended for efficient training)
- Ubuntu 22.04 (or compatible Linux distribution)
- Python 3.10
Note: While a GPU significantly speeds up training, you can train on CPU but it may take longer.
Building the environment locally
Note: This section covers setting up the training environment locally on your machine. If you prefer to use Docker, check out the Docker section in the repository's README.
1. Clone the repository
git clone https://github.com/SonySemiconductorSolutions/aitrios-rpi-training-samples.git
cd aitrios-rpi-training-samples
2. Setup
Install the necessary packages.
sudo apt update
sudo apt -y install --no-install-recommends apt-utils build-essential libgl1 libgl1-mesa-glx libglib2.0-0 python3 python3-pip python3-setuptools git gnupg
3. Create and activate a Python 3.10 virtual environment
This tutorial assumes Python 3.10, so first, let's confirm that 3.10 is installed.
Note: Due to version dependencies for the required libraries, make sure to use Python 3.10.
python3.10 --version
If a version number is printed (for example, Python 3.10.12), Python 3.10 is installed.
If you don't have it yet, you can install Python 3.10 with the following steps, for example:
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install -y python3.10 python3.10-venv python3.10-dev
Next, create a virtual environment.
python3.10 -m venv .venv
source .venv/bin/activate
4. Install the packages
pip install .
pip install -e third_party/nanodet/nanodet
5. Confirm that the installation was successful
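One quick way to confirm the installation is a short import check from inside the activated venv. This is a minimal sketch; the module name `nanodet` is an assumption based on the repository's third_party layout, so adjust it if your install differs:

```python
# Quick post-install sanity check.
# "nanodet" is assumed from the repo's third_party/nanodet install;
# adjust the module name if your environment differs.
import importlib.util

def is_installed(name: str) -> bool:
    """Return True if the module can be located without importing it."""
    return importlib.util.find_spec(name) is not None

print("nanodet installed:", is_installed("nanodet"))
```

If this prints `False`, re-run the `pip install` steps above inside the venv.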
The sample provides seven configuration files.
To keep things organized, all the important settings (training, quantization, and evaluation) are stored in .ini files.
✅ Points
You can flexibly adjust training conditions just by editing these settings, without rewriting the Python code: choose the dataset and task, and tune the parameters.
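As a rough sketch of what such a configuration looks like (the sections and keys shown here are the ones referenced later in this tutorial; the values are illustrative, and the sample's own .ini files are authoritative):

```ini
; Illustrative sketch only -- see the sample's bundled .ini files
; (e.g. nanodet_plus_card.ini) for the authoritative keys and values.
[DATASET]
NAME = ShapesDataset

[MODEL]
CLASS_NUM = 3

[TRAINER]
CONFIG = nanodet_plus_config.yaml
```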
Creating an object detection AI model
In this section, we'll build an object detection model using a dataset of geometric shapes: circles, triangles, and squares.
Download the dataset from:
https://github.com/SonySemiconductorSolutions/aitrios-rpi-dataset-sample
Step 1: Train the model
Move to the samples folder where the .ini files are stored, then run the command below. It reads the specified .ini file and performs training and quantization of the model.
cd samples
imx500_zoo nanodet_plus_card.ini
Training will start.
Once training and model quantization are complete, metrics will be displayed. The mAP is 0.84 and AP@50 is 0.99, indicating that the training was successful.
This generates the following model files:
./samples/model/nanodet_plus_card/
├── nanodet_plus_card.keras # float model
└── nanodet_plus_card_quantized.keras # quantized model (can be deployed to the IMX500)
Step 2: Quantize and convert the trained model
Convert and package the quantized model into a format compatible with the Raspberry Pi AI Camera.
Note: Continue working in your venv, and use the same TensorFlow version for quantization that you used for training; version mismatches cause errors.
This tutorial explains how to convert the .keras model into a network.rpk file that can be uploaded to the IMX500.
1. Install Edge-MDT (Model Development Toolkit) including tools required to quantize, compress, and convert AI models:
pip install edge-mdt[tf]
2. Execute the following command:
cd ./samples/model/nanodet_plus_card/
imxconv-tf -i nanodet_plus_card_quantized.keras -o convert_result
⭐️ From here, the operations will be on the Raspberry Pi ⭐️
Move the converted folder to the Raspberry Pi. In this tutorial, the folder name is convert_result.
1. Install the necessary tools with:
sudo apt install imx500-tools
Note: Before performing the next step, ensure that the file convert_result/packerOut.zip exists.
2. Package the model into an RPK file with:
imx500-package -i convert_result/packerOut.zip -o rpk_output_folder
Running the custom model on the AI Camera
1. Install the necessary libraries on the Raspberry Pi:
sudo apt install python3-opencv python3-munkres
2. Clone the picamera2 Python library:
git clone https://github.com/raspberrypi/picamera2.git
cd picamera2/examples/imx500/
3. Create a class file for object detection: create custom_label.txt with the following content:
pi@raspberrypi:~/Desktop/Tutorial/picamera2/examples/imx500 $ cat custom_label.txt
circle
triangle
rectangle
4. Now let's run our custom model on the AI Camera
Select the network.rpk model you created, and choose the custom_label.txt file you created above:
python3 imx500_object_detection_demo.py --model rpk_output_folder/network.rpk --labels custom_label.txt
Results
The object detection model works! The model correctly identifies each geometric shape in our dataset: circles, squares, and triangles are all detected with their respective labels.
Ready to use your own dataset? Here's what to change:
1. Update the .ini configuration file:
[DATASET]
NAME = YourDatasetName

[MODEL]
CLASS_NUM = <number of classes>

[TRAINER]
CONFIG = <your YAML configuration file>
2. Edit custom_label.txt:
Replace the shape names with your own class labels (one per line).
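Since the label file and CLASS_NUM must stay in sync, a small helper can catch mismatches before you deploy. This is a minimal sketch under the assumption that your .ini file has a [MODEL] section with a CLASS_NUM key (as shown above); the file paths you pass in are placeholders for your actual files:

```python
# Minimal sketch: check that the label file has exactly CLASS_NUM entries.
# Assumes the .ini file has a [MODEL] section with a CLASS_NUM key.
import configparser

def class_count_matches(ini_path: str, labels_path: str) -> bool:
    """Return True if the label file's line count equals CLASS_NUM."""
    cfg = configparser.ConfigParser()
    cfg.read(ini_path)
    expected = cfg.getint("MODEL", "CLASS_NUM")
    with open(labels_path) as f:
        labels = [line.strip() for line in f if line.strip()]
    return len(labels) == expected
```

For example, after editing both files, `class_count_matches("samples/your_dataset.ini", "custom_label.txt")` should return True; a False result usually means a label line is missing or CLASS_NUM was not updated.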
Common errors and solutions
Here are fixes for typical errors you might encounter. If you encounter any other errors, drop a comment with the error message below.
- An error occurs during model conversion
- Cause: sdsp.app.AppKt is compiled with Java 17 (class file version 61.0), but an older Java runtime is being used.
- Solution: Update your Java runtime to Java 17 or later.

Error: LinkageError occurred while loading main class sdsp.app.AppKt
java.lang.UnsupportedClassVersionError: sdsp/app/AppKt has been compiled by a more recent version of the Java Runtime (class file version 61.0), this version of the Java Runtime only recognizes class file versions up to 55.0

If you check your Java version when this error occurs (java -version), you will probably find that a version below 17 is being used. In that case, update it as follows:
sudo apt install openjdk-17-jdk

When in trouble
If you encounter difficulties while following this article, please feel free to leave a comment.
If you have questions related to Raspberry Pi, the official Raspberry Pi forum is also a good place to check.
Conclusion
Great work completing this tutorial! You now have the skills to create custom object detection models for the Raspberry Pi AI Camera.
Apply these techniques to your own projects:
- Custom object detection for your specific needs
- Edge AI applications in robotics or IoT
- Real-time vision systems
I'd love to hear about your projects! Share your implementations in the comments.
Enjoyed this tutorial? Give it a like and follow for more Raspberry Pi AI content!
Want to learn more?
Experiment further with the Raspberry Pi AI Camera by following the Get Started guide on the AITRIOS developer site.
What's next?
Check out Part 2 of this series next week.