The project is built on a robust Finite State Machine (FSM) framework that handles the complexities of real-world inspection. It manages state transitions intelligently, accounting for the uncertainties and variations in lighting, positioning, and part condition that can trip up simpler solutions, and it gives the inspection process a structured way to move through its different states.
You can adapt this project to your parts by retraining the model with your own dataset.
Project overview
What it does:
- Uses a Finite State Machine (FSM) to control the inspection flow
- Instantly identifies nuts and bolts as "good" or "bad" using a custom-trained object classification AI model
- Provides live visual feedback with result overlays directly on the video feed
- Tracks inspection statistics to monitor quality trends
The FSM defines different states related to the scanning process. It transitions between states based on specific conditions and then processes frames of data accordingly.
Key components:
1. FiniteStateMachine Class:
- Initialization: Sets up the FSM with various states (for example, StateInit, StateWaitForBackground, StateScanning) and initializes variables to track the current and previous states, scan sequences, and results.
- State Transition: The change_state_to method instructs the FSM to transition between states, logging the change.
- Tick Method: The tick method processes incoming frames, updates the current state, and retrieves results from the current state’s run method.
2. State Classes:
- Each state (for example, StateInit, StateScanning) inherits from a base state class (BaseState) and implements specific behavior in the run method.
- The run method processes frames, assigns mapped classes, and may trigger state transitions based on conditions.
3. ScanSequence Class:
- Manages a sequence of frames, trimming unnecessary background frames and analyzing the results.
- Calculates elapsed time, takt time, and verdicts based on the analysis of the frames.
- Provides methods to calculate time differences and determine scan results.
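To make this structure concrete, here is a minimal sketch of how these pieces could fit together. The class and method names (FiniteStateMachine, BaseState, change_state_to, tick, run) come from the project, but the bodies are simplified assumptions, not the actual implementation.
```python
import logging

class BaseState:
    """Base class for all states; run() is assumed to return a result dict or None."""
    def __init__(self, fsm):
        self.fsm = fsm

    def run(self, frame):
        raise NotImplementedError

class StateInit(BaseState):
    def run(self, frame):
        # Illustrative: start scanning as soon as something other than background appears.
        if frame != "BACKGROUND":
            self.fsm.change_state_to("StateScanning")
        return None

class StateScanning(BaseState):
    def run(self, frame):
        # Illustrative: scan until the background reappears, then report a verdict.
        if frame == "BACKGROUND":
            self.fsm.change_state_to("StateInit")
            return {"scan_result": {"VERDICT": 1}}  # placeholder verdict
        return None

class FiniteStateMachine:
    def __init__(self, settings):
        self.settings = settings
        self.previous_state = None
        self.current_state = "StateInit"
        # The real project registers more states, e.g. StateWaitForBackground.
        self.states = {
            "StateInit": StateInit(self),
            "StateScanning": StateScanning(self),
        }

    def change_state_to(self, name):
        logging.info("FSM: %s -> %s", self.current_state, name)
        self.previous_state = self.current_state
        self.current_state = name

    def tick(self, frame):
        # Delegate the frame to the active state and return its result, if any.
        return self.states[self.current_state].run(frame)
```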
How it works:
1. Initialization: The FSM is initialized with settings and states.
2. Frame Processing: As frames are received, the tick method is called:
- The current state processes the frame.
- Results are stored and may trigger state changes.
3. State Behavior: Each state has specific logic to handle frames and determine when to transition to another state.
4. Result Analysis: The ScanSequence class analyzes the frames after scanning, determining the scan duration and verdict based on the processed data.
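As a rough illustration of this analysis step, a ScanSequence could compute its timing and verdict along these lines. The class name, the trimming of background frames, and the VERDICT codes (1 = good, 2 = bad) come from the project; the method names, fields, and majority-vote rule are assumptions for illustration.
```python
import time

class ScanSequence:
    """Sketch: collects the mapped class of each frame for one part passing the camera."""
    def __init__(self):
        self.start_time = time.monotonic()
        self.frames = []

    def add_frame(self, mapped_class):
        # Trim unnecessary background frames; keep only frames showing the part.
        if mapped_class != "BACKGROUND":
            self.frames.append(mapped_class)

    def analyze(self):
        elapsed = time.monotonic() - self.start_time  # scan duration in seconds
        good = self.frames.count("RESULT_OK")
        bad = self.frames.count("RESULT_BAD")
        # Majority vote over the sequence, using the app's VERDICT codes.
        verdict = 1 if good >= bad else 2
        return {"ELAPSED": elapsed, "VERDICT": verdict}
```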
Conclusion
The FSM efficiently manages the scanning process by transitioning through a number of states, processing frames, and analyzing results to determine the outcome of the scan. Each state has defined responsibilities, and this overall structure provides clear and organized flow control.
Setting up the project: What you'll need
Clone the project repository
Open a terminal on your Raspberry Pi and run:
```bash
git clone git@github.com:SonySemiconductorSolutions/aitrios-rpi-sample-apps.git
cd aitrios-rpi-sample-apps/examples/line-monitor
```
app.py code breakdown
The main parts of the app.py code for the line monitor project are:
1. Set up annotator
```python
annotator = Annotator(color=ColorPalette.default(), ...)
```
Annotator is used to draw status overlays on each frame with text and colored indicators.
2. Set up FSM and counters
```python
fsm = FiniteStateMachine(DEFAULT_SETTINGS)
good_ctr = 0
bad_ctr = 0
```
The FSM handles frame-by-frame decision-making (for example, starting a scan, calculating results for a stream of data, or delaying before the next part). The counters tally the results produced by the FSM-based algorithm.
3. Main inference loop
```python
for frame in stream:
    idx = frame.detections.class_id[0]
    ...
    result = fsm.tick(ai_result)
```
Each frame from the AI Camera is:
- Inferred using the deployed model.
- Passed through the FSM, where the detected class and the current state determine how to handle the frame and whether it triggers a state change or contributes to the score calculation. A hypothetical expansion of this loop is sketched below.
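For context, here is a hypothetical expansion of the loop above; reading the labels file into a list and indexing it with the class ID are assumptions for illustration, not the project's actual code.
```python
# Hypothetical sketch: map the top class index to a label string for the FSM.
with open("network/labels.txt") as f:
    labels = [line.strip() for line in f]

for frame in stream:
    idx = frame.detections.class_id[0]  # most confident class for this frame
    ai_result = labels[idx]             # e.g. "good", "bad", "bg", "object"
    result = fsm.tick(ai_result)
```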
4. Quality verdict handling
```python
if result and "scan_result" in result:
    ng_status = result["scan_result"]["VERDICT"]
    if ng_status == 2:  # bad part
        bad_ctr += 1
    elif ng_status == 1:  # good part
        good_ctr += 1
```
The state machine delivers its result as a dictionary; the VERDICT field marks the last part as good (1) or bad (2).
5. Draw result indicator
```python
if ng_status == 2:
    cv2.circle(frame.image, (560, 15), 15, (0, 0, 255), cv2.FILLED)  # Bad → Red
elif ng_status == 1:
    cv2.circle(frame.image, (560, 15), 15, (0, 255, 0), cv2.FILLED)  # Good → Green
```
Displays a colored circle at the top of the frame to indicate the last part's quality.
6. Draw inspection stats on frame
```python
all_text = [good_text, bad_text, last_result_text, time_stamp_text]
for i, text in enumerate(all_text):
    annotator.set_label(image=frame.image, x=50, y=30 + 35 * (i + 1), ...)
```
Displays the following in the top-left corner:
- Count of good parts
- Count of bad parts
- Last result
- Timestamp
7. Live display
```python
frame.display()
```
Shows the live display, perfect for production monitoring, prototyping, or debugging.
Training a custom AI model
1. Prepare a dataset
This project includes a pre-labeled dataset of bolts and nuts to help you get started. The images are organized into four distinct classes:
1. bad
2. bg
3. good
4. object
Get the dataset for this project from the zip file training_dataset_nuts_and_bolts.zip, or from examples/line-monitor/assets/training_dataset_nuts_and_bolts.zip in the project repository you cloned earlier.
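Image classifiers are commonly trained from one folder per class. If the archive follows that convention, unpacking it would yield a layout like the one below; the exact structure inside the zip may differ.
```
training_dataset_nuts_and_bolts/
├── bad/
├── bg/
├── good/
└── object/
```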
2. Train and convert the custom model
Follow our comprehensive Jupyter notebook tutorial that walks you through training a MobileNet classifier model from scratch and converting it for the IMX500 chip in your AI Camera: Train mobilenet with custom dataset.
The final training and conversion step outputs a quantized model together with several generated files. Use the generated packerOut.zip file for the next step.
Deploying the application
1. Package the model
Follow the instructions in AI Camera: Packaging to package the model in .rpk format for the AI Camera.
Important: The packaging step must be done on the Raspberry Pi.
2. Configure the application for the new model
When the model is ready and you have a labels.txt file, you need to edit the class mapping so that each AI class in the labels file maps to the correct internal class of the FSM.
Open line-monitor/fsm/__init__.py and update the DEFAULT_SETTINGS dictionary to match your labels.txt file:
```python
DEFAULT_SETTINGS = {
    "Map": {
        "BACKGROUND": "BACKGROUND",
        "BG": "BACKGROUND",
        "OBJECT": "OBJECT",
        "GOOD": "RESULT_OK",
        "BAD": "RESULT_BAD",
    }
}
```
The class names from your AI model's labels.txt go on the left side, and the corresponding internal state machine actions go on the right side.
This mapping is crucial. It’s how the application interprets each AI result and knows which action to take.
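For example, if your own labels.txt used different class names, only the left-hand keys would change. The label names below are hypothetical:
```python
DEFAULT_SETTINGS = {
    "Map": {
        "EMPTY_BELT": "BACKGROUND",  # hypothetical labels from your own labels.txt
        "PART": "OBJECT",
        "PASS": "RESULT_OK",
        "FAIL": "RESULT_BAD",
    }
}
```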
3. Physical setup for the AI Camera
Position your AI Camera for best detection:
- Mount the camera around 15° downwards from horizontal
- Aim the camera at the inspection area (for example, your desk)
- Try to match the viewing angle to your dataset images as much as possible.
- The nuts and bolts in the dataset images are M8x40mm, steel quality A4 (shiny finish).
- One way to move bolts and nuts under the camera is to place them on white paper and then slide the paper underneath the camera.
4. Run the code
Launch the application with:
```bash
# Create a directory and copy your model files
$ mkdir network
$ cp -v [MODEL_PATH]/model_name.rpk line-monitor/network
$ cp -v [LABELS_PATH]/labels.txt line-monitor/network

# Set up a Python environment and run the application
$ uv venv --system-site-packages
$ uv run app.py
```
Replace [MODEL_PATH] with the path to your packaged .rpk model file, and [LABELS_PATH] with the path to your labels.txt file.
View the results
Grab some nuts and bolts and move them under the camera, then watch the live results update in real time in the application overlay as your vision AI application:
- Detects objects entering the inspection zone
- Classifies each item as good or bad
- Updates the overlay instantly with detection results
Next steps
We've built a working AI-powered quality inspection system. Here are some ideas to take it further:
- Expand the dataset - Add more bolt/nut variations to improve accuracy
- Integrate alerts - Add email or SMS notifications for defect detection
- Track metrics - Log pass/fail rates to a database for quality analytics (see the sketch after this list)
- Custom objects - Adapt the system to inspect completely different products
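As a starting point for the metrics idea above, here is a minimal sketch (an assumption, not part of the project) that logs each verdict to a local SQLite database:
```python
import datetime
import sqlite3

# Open (or create) a local database with a results table for verdicts.
conn = sqlite3.connect("inspections.db")
conn.execute("CREATE TABLE IF NOT EXISTS results (ts TEXT, verdict INTEGER)")

def log_verdict(verdict: int) -> None:
    """Store 1 (good) or 2 (bad) with a timestamp, e.g. log_verdict(ng_status)."""
    conn.execute(
        "INSERT INTO results VALUES (?, ?)",
        (datetime.datetime.now().isoformat(), verdict),
    )
    conn.commit()
```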
If you have questions related to Raspberry Pi, please use the forum below: Raspberry Pi Forums
Want to learn more?
Experiment further with the Raspberry Pi AI Camera by following the Get Started guide on the AITRIOS developer site.