This project implements a real-time safety compliance system that monitors whether workers are wearing proper Personal Protective Equipment (PPE) in industrial environments. Using a custom-trained NanoDet model combined with BYTETracker, the system detects people and safety vests, then matches them to verify compliance automatically.
The solution provides live annotated video feedback that highlights compliant and non-compliant people, making workplace safety monitoring more efficient and reliable.
The aim of the project is to show you how to build an AI vision system from end to end. You can also use it as a reference project for real-time safety compliance monitoring systems.
Architecture Overview
The application consists of five main components:
- Model Inference
- Object Tracking
- Object Matching
- Counting
- Visualization
1. Model Inference
A custom model packaged in RPK format is loaded via the Custom_Nanodet class, which extends Model from modlib. The model takes input frames from the AiCamera, runs inference, and outputs bounding boxes with class IDs and confidence scores.
detections = frame.detections[frame.detections.confidence > 0.4]
Only detections with confidence above 0.4 are retained to minimize false positives.
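The filtering step above can be sketched in plain NumPy. The parallel-array layout (boxes, class IDs, confidences) is an assumption for illustration, not modlib's exact internal representation:

```python
import numpy as np

def filter_by_confidence(confidences, boxes, class_ids, threshold=0.4):
    """Keep only detections whose confidence exceeds the threshold."""
    keep = confidences > threshold
    return boxes[keep], class_ids[keep], confidences[keep]

# Three detections: two persons (class 1) and one vest (class 7)
confidences = np.array([0.9, 0.3, 0.55])
boxes = np.array([[0, 0, 10, 10], [5, 5, 8, 8], [2, 2, 6, 6]])
class_ids = np.array([1, 7, 1])

boxes_f, ids_f, conf_f = filter_by_confidence(confidences, boxes, class_ids)
print(len(boxes_f))  # prints 2 — the 0.3-confidence vest is dropped
```

The same boolean-mask idiom is what the `frame.detections[...]` indexing above expresses in modlib's API.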
2. Tracking with BYTETracker
BYTETracker assigns a unique, persistent ID to each detected object across frames. This reduces flickering, handles occlusions, and ensures smooth tracking over time.
detections = tracker.update(frame, detections)
The tracker uses motion consistency and spatial alignment to maintain consistent identities across frames.
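BYTETracker itself is more sophisticated (it also associates low-confidence boxes using motion prediction), but the core idea of carrying IDs across frames via spatial overlap can be illustrated with a toy greedy IoU tracker. All names here are hypothetical, not part of the actual tracker:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

class GreedyIoUTracker:
    """Toy tracker: re-assign last frame's ID to the best-overlapping box."""
    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}   # track ID -> box from the previous frame
        self.next_id = 0

    def update(self, boxes):
        assigned = []
        unused = dict(self.tracks)  # tracks not yet claimed this frame
        for box in boxes:
            best_id, best_iou = None, self.iou_threshold
            for tid, prev in unused.items():
                score = iou(box, prev)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:
                best_id = self.next_id  # no overlap: start a new track
                self.next_id += 1
            else:
                del unused[best_id]
            self.tracks[best_id] = box
            assigned.append(best_id)
        return assigned
```

A box that moves slightly between frames keeps its ID, which is the property the compliance counters below rely on to avoid double-counting people.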
3. Matching People to Safety Gear
The system matches specific object classes, for example persons (class ID 1) with safety vests (class ID 7). This is done via the Matcher module, which uses spatial proximity heuristics to determine whether a person is wearing a vest.
matched_people = person_detections[matcher.match(person_detections, vest_detections)]
This enables compliance evaluation, such as identifying workers without safety equipment.
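One simple proximity heuristic is to check whether a vest's center falls inside a person's bounding box. This is an illustrative guess at the kind of rule Matcher applies, not its actual implementation:

```python
def box_center(box):
    """Center point of an [x1, y1, x2, y2] box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def contains(box, point):
    """True if the point lies inside the box."""
    x1, y1, x2, y2 = box
    px, py = point
    return x1 <= px <= x2 and y1 <= py <= y2

def match(person_boxes, vest_boxes):
    """Boolean mask over persons: True if any vest center lies inside that person's box."""
    return [any(contains(p, box_center(v)) for v in vest_boxes)
            for p in person_boxes]

persons = [[0, 0, 10, 20], [30, 0, 40, 20]]  # two people
vests = [[2, 5, 8, 12]]                      # one vest, worn by the first person
print(match(persons, vests))  # prints [True, False]
```

The returned mask can index the person detections directly, mirroring the `person_detections[matcher.match(...)]` call above.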
4. Counting and Compliance Statistics
Two ObjectCounter instances are used:
- total_counter: counts all persons detected.
- matched_counter: counts persons matched with safety equipment.
The difference between the two gives the number of non-compliant individuals.
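The counting logic can be approximated with a small class that counts unique tracker IDs per class. SimpleObjectCounter is a hypothetical stand-in for the real ObjectCounter, written to show why persistent tracker IDs matter (the same person seen in many frames is counted once):

```python
from collections import defaultdict

class SimpleObjectCounter:
    """Counts unique tracker IDs per class ID."""
    def __init__(self):
        self.seen = defaultdict(set)

    def update(self, class_id, track_id):
        self.seen[class_id].add(track_id)

    def get(self, class_id):
        return len(self.seen[class_id])

total = SimpleObjectCounter()
matched = SimpleObjectCounter()

# Three persons (class 1) with tracker IDs 101-103; only 101 wears a vest.
for tid in (101, 102, 103):
    total.update(1, tid)
matched.update(1, 101)

print(total.get(1) - matched.get(1))  # prints 2 non-compliant people
```

The difference of the two counters is exactly the "missing vest" statistic shown in the next snippet.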
"Total people detected " + str(total_counter.get(1))
"Total people missing vest: " + str(total_counter.get(1) - matched_counter.get(1))5. Annotated VisualizationCustom visualization is handled by custom_annotate_boxes, which overlays bounding boxes and class labels onto the frame.
- Non-compliant persons are highlighted in red.
- Compliant persons are highlighted in green.
- Frame-level statistics are displayed at the top using Annotator.
frame.image = custom_annotate_boxes(
frame=frame,
detections=person_detections,
annotator=annotator,
labels=p_labels,
colour=[0, 0, 255],
)
Application Summary
- Input: Live video from an AiCamera.
- Detection: NanoDet model infers bounding boxes and class IDs.
- Tracking: BYTETracker assigns consistent IDs.
- Matching: Safety gear matched to individuals.
- Counting: Compliance statistics computed.
- Output: Annotated video showing compliant and non-compliant people.
This application is ideal for real-time safety compliance monitoring in industrial environments, construction sites, or any scenario where verifying protective equipment is essential.
Try It Yourself
Want to build your own real-time object tracking and compliance detection system? Follow this hands-on guide to train your own model and deploy the application.
1. Train a Custom NanoDet Object Detection Model
This application uses a NanoDet model for detecting people and safety vests. You can train your own using the script at the following link: NanoDet model retraining instructions in Jupyter script.
The NanoDet retraining tutorial shows how to:
- Set up the PPE-Detection-Using-CV-3 dataset from Roboflow
- Set up the NanoDet model
- Training
- Quantization using the Model Compression Toolkit (MCT)
- COCO evaluation
- Visualization
- Conversion
After conversion the last step is to package the model for the Raspberry Pi AI Camera.
2. Package Your Model Files
Clone the Application Repository
git clone https://github.com/SonySemiconductorSolutions/aitrios-rpi-sample-apps
cd aitrios-rpi-sample-apps/examples/highvis
Create a folder named models/ inside the project directory and place your trained files there. Run the imx500-package application to convert the packerOut.zip to .rpk format:
$ imx500-package -i <path to packerOut.zip> -o <output folder>
$ aitrios-rpi-sample-apps/examples/highvis/
├── models/
│ ├── model.rpk
│ └── labels.txt
...
The packaging procedure is described in detail in the Raspberry Pi documentation.
3. Set up uv
uv is a modern, fast Python package manager and runner, and an excellent choice for running Python apps reproducibly.
Install it by running:
curl -Ls https://astral.sh/uv/install.sh | sh
4. Run the app
$ uv run app.py --model models/model.rpk
This will:
- Initialize the AI Camera device
- Load your trained model
- Start real-time video processing
- Display annotated bounding boxes with compliance labels
If you encounter any issues while reading the article, please feel free to comment on this article.
If you have questions related to Raspberry Pi, please use the forum below:
Want to learn more?
Experiment further with the Raspberry Pi AI Camera by following the Get Started guide on the AITRIOS developer site.