This project uses a custom-trained keypoint model to read analogue gauges in real time. The model detects two keypoints on the meter: the start (centre/origin of the needle) and the end (tip of the needle). From their positions you can compute the needle angle and, with simple calibration, convert it to the value the gauge is showing.
The pipeline is built around a Keras PersonLab-style model trained on a gauge dataset, then quantized and converted for the IMX500 sensor in the AI Camera. You run inference either on saved images (e.g. in a Jupyter notebook) or live on a Raspberry Pi with the AI Camera. All code needed to train, convert, and run the model is in the tutorial notebook.
What it does
- Detects two keypoints on an analogue gauge: start (needle centre) and end (needle tip).
- Uses a custom PersonLab-style model trained on a keypoint dataset (e.g. digital gauge dataset on Roboflow).
- Runs on the Raspberry Pi AI Camera via the Application Module Library (modlib); both the notebook and the Pi use the same PersonLab model class and modlib Annotator.
- Draws the detected keypoints and the connection between them on the frame so you can verify alignment; with calibration, the angle between start and end can then be mapped to a gauge value. Visualization differs by environment: in the notebook you run inference on images, save annotated frames to a folder, and then display those saved images in a separate cell. On the Pi, a dedicated script uses the AI Camera stream and shows a live window with the keypoints drawn as a yellow line on the gauge.
Main components
1. PersonLab model: A keypoint model with two keypoints (start, end). Input images are resized to a fixed size (e.g. 481×353), normalized, and passed through the network. The post-processor (pp_personlab) turns raw heatmaps and offsets into keypoint coordinates and scores.
2. Pre- and post-processing: Pre-processing resizes the image and scales pixel values (e.g. by 1/256). Post-processing uses configurable parameters such as peak_thresh, nms_thresh, and kp_radius to extract keypoints and optionally filter by score.
3. Visualization: In the notebook, the code uses a Keras interpreter and an image source: you run visualize() on a folder of images (e.g. the validation set), which saves annotated frames to disk; a separate cell then displays the saved images. There is no live OpenCV window in the notebook. On the Raspberry Pi, you use the script in the tutorial’s notebooks/personlab-gauge/aicamera folder (personlab_aicam.py), which loads the packaged .rpk model and uses the AI Camera as the source; it shows a live window with keypoints drawn as a yellow line on the gauge.
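The post-processing parameters are easiest to understand with a toy example. The following is not the pp_personlab implementation, just a minimal greedy peak picker showing what a score threshold (peak_thresh) and a suppression radius control when turning a heatmap into keypoints:

```python
import numpy as np

def extract_keypoints(heatmap, peak_thresh=0.3, nms_radius=3):
    """Toy illustration of heatmap-based keypoint extraction.

    Greedy peak picking: repeatedly take the strongest response, keep it
    if it clears peak_thresh, and zero out a window around it so nearby
    responses are suppressed.
    heatmap: (H, W) array of per-pixel keypoint confidences.
    Returns a list of (y, x, score) tuples, best first.
    """
    hm = heatmap.copy()
    peaks = []
    while True:
        idx = np.unravel_index(np.argmax(hm), hm.shape)
        score = hm[idx]
        if score < peak_thresh:
            break
        y, x = idx
        peaks.append((int(y), int(x), float(score)))
        # suppress everything within nms_radius of the accepted peak
        y0, y1 = max(0, y - nms_radius), min(hm.shape[0], y + nms_radius + 1)
        x0, x1 = max(0, x - nms_radius), min(hm.shape[1], x + nms_radius + 1)
        hm[y0:y1, x0:x1] = 0.0
    return peaks

hm = np.zeros((10, 10))
hm[2, 3] = 0.9   # strong peak -> kept
hm[7, 8] = 0.5   # second, distant peak -> kept
hm[2, 4] = 0.8   # neighbour of the first peak -> suppressed
print(extract_keypoints(hm))  # [(2, 3, 0.9), (7, 8, 0.5)]
```

The real post-processor also uses the predicted offsets to refine coordinates; this sketch only shows the thresholding and suppression idea.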
The two keypoints define a line from the centre of the needle to its tip. You can compute the angle of this line (e.g. with atan2) and map that angle to a gauge scale using the meter’s range and orientation. The tutorial focuses on training and running the model; the exact calibration formula depends on your meter’s scale and how it is mounted.
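As a minimal sketch of that calibration step (the function names and the example scale endpoints are hypothetical, not from the tutorial):

```python
import math

def needle_angle(start, end):
    """Angle of the needle in degrees, measured from the positive x-axis.

    `start` is the needle pivot (centre), `end` is the tip, both (x, y)
    in image coordinates (y grows downward, hence the sign flip).
    """
    dx = end[0] - start[0]
    dy = start[1] - end[1]  # flip y so angles follow the usual convention
    return math.degrees(math.atan2(dy, dx))

def angle_to_value(angle, angle_min, angle_max, value_min, value_max):
    """Linear mapping from needle angle to gauge reading.

    angle_min/angle_max are the needle angles at the scale's two
    endpoints; you find them once by reading a few known positions off
    your own gauge (the calibration the tutorial leaves to you).
    """
    t = (angle - angle_min) / (angle_max - angle_min)
    return value_min + t * (value_max - value_min)

# Needle pointing straight up from the pivot reads mid-scale on a
# hypothetical 0-100 gauge whose scale spans 225° (left) to -45° (right).
a = needle_angle((100, 100), (100, 40))  # 90 degrees
print(angle_to_value(a, 225, -45, 0, 100))  # 50.0
```

A linear mapping like this assumes an evenly spaced scale; for non-linear scales you would fit a curve through several calibration points instead.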
Hardware
- Raspberry Pi (with a compatible OS)
- Raspberry Pi AI Camera
Software on the Raspberry Pi (for running the model)
- Python environment with modlib installed (see the modlib repository for install steps).
- Your packaged model (.rpk) and the dataset config JSON (e.g. posenet_digitalgauge.json) used during training.
For training and conversion (typically on a PC or Colab)
- The tutorial is a Jupyter notebook that uses GPU in Colab. It covers: installing the training repo, dataset setup, training and quantization, inference and visualization, conversion to IMX500 format, and running on the Raspberry Pi AI Camera.
All steps (dataset setup, training, quantization, conversion) and the inference/visualization code are in this repository:
PersonLab gauge tutorial (custom_personlab.ipynb)
- Dataset: Roboflow “digital-gauge-klmhl” (version 2), with two keypoints: start and end.
- Training uses the “posenet” tools from aitrios-rpi-training-samples and the training docs for Posenet.
- The notebook includes the PersonLab model class, a Keras interpreter, and a visualize() function for running on validation images (saving annotated frames and displaying them in a follow-up cell). The same repo provides a separate script for the Raspberry Pi in notebooks/personlab-gauge/aicamera for live AI Camera visualization.
The notebook defines a small model wrapper and a visualization pipeline. Below are the main ideas; for the full code, use the notebook.
1. PersonLab model class: The model class loads the weights (Keras or packaged RPK) and the dataset config JSON. The config defines NUM_KP (2), EDGES (e.g. [(0, 1)]), PEAK_THRESH, NMS_THRESH, KP_RADIUS, and input size (e.g. 481×353).
- pre_process: Resize image to the model input size and normalize (e.g. divide by 256), then add batch dimension.
- post_process: Call pp_personlab() with the config parameters; it returns a Poses-like structure with keypoint coordinates and scores.
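A rough sketch of what pre_process does, assuming the example values above (481×353 input, 1/256 scaling). This is not the notebook's exact code: it uses a crude nearest-neighbour resize where the notebook would use a proper image-resize routine.

```python
import numpy as np

def pre_process(image, target_hw=(353, 481)):
    """Sketch of the pre-processing step (not the notebook's exact code).

    Resizes to the model input size with nearest-neighbour sampling,
    scales pixel values by 1/256, and adds a batch dimension.
    image: (H, W, 3) uint8 array.
    Returns a (1, target_h, target_w, 3) float32 array.
    """
    h, w = image.shape[:2]
    th, tw = target_hw
    # nearest-neighbour index maps from target grid to source grid
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    resized = image[rows[:, None], cols[None, :]]
    scaled = resized.astype(np.float32) / 256.0
    return scaled[np.newaxis, ...]  # add batch dimension

img = np.full((480, 640, 3), 128, dtype=np.uint8)
batch = pre_process(img)
print(batch.shape)         # (1, 353, 481, 3)
print(float(batch.max()))  # 0.5
```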
The notebook uses an in-process interpreter (an InterpreterDevice that loads the .keras model via custom_layers_scope and MCT custom objects). It runs the model’s pre_process and post_process so the same PersonLab pipeline is used for inference. The notebook’s visualize() function only accepts .keras models; for the AI Camera you use the separate aicamera script (see below).
In the notebook, the visualize() function:
- Takes a .keras model path, config path, and a folder of images (e.g. the validation set).
- Uses Images(images) as the source and deploys the model with InterpreterDevice.
- For each frame: filters detections by keypoint score, calls annotator.annotate_keypoints() with num_keypoints, skeleton (edges), and keypoint_score_threshold.
- Does not open a live display window: if save_image=True, it saves annotated frames to a folder (e.g. saved_images/); a separate cell then displays those saved images. If you run the same code outside the notebook (e.g. as a script) with save_image=False, it would call frame.display() for a live window, but in the notebook the typical workflow is save, then display in the next cell.
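The score-filtering step can be sketched as follows. The pose data layout here is hypothetical (modlib returns its own Poses structure), but the idea is the same: only poses whose keypoints all clear the threshold are passed to the annotator.

```python
def filter_poses(poses, keypoint_score_threshold=0.5):
    """Keep only poses whose keypoints all clear the score threshold.

    poses: list of dicts with 'keypoints' -> [(x, y, score), ...]
    (a simplified stand-in for modlib's Poses structure).
    """
    return [
        p for p in poses
        if all(s >= keypoint_score_threshold for (_, _, s) in p["keypoints"])
    ]

poses = [
    {"keypoints": [(120, 80, 0.9), (200, 150, 0.8)]},  # both confident -> kept
    {"keypoints": [(10, 10, 0.9), (15, 12, 0.2)]},     # weak tip -> dropped
]
print(len(filter_poses(poses)))  # 1
```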
On the Raspberry Pi, the tutorial provides a different script in notebooks/personlab-gauge/aicamera (personlab_aicam.py). It uses the same PersonLab model class and modlib Annotator, but deploys the packaged .rpk model to the AI Camera and uses the camera as the frame source. That script shows a live window with frame.display(), and the keypoints are drawn as a yellow line on the gauge.
Training uses two config files created in the notebook:
- posenet_digitalgauge.ini: Framework (Keras), model name (Posenet), input size, NUM_CLASSES=2, batch size, number of epochs (with early stopping), and path to the JSON config.
- posenet_digitalgauge.json: Dataset type, NUM_KP=2, keypoint names ["start", "end"], EDGES, path to COCO-style annotations and image dirs, and training/visualization options.
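As an illustration only, a JSON config with the fields described above might look like the snippet below. The field names and threshold values here are hypothetical; use the config the notebook actually generates.

```python
import json

# Hypothetical sketch of the dataset config fields; the real
# posenet_digitalgauge.json has more options and may use different keys.
config = {
    "NUM_KP": 2,                    # two keypoints: needle centre and tip
    "KP_NAMES": ["start", "end"],
    "EDGES": [[0, 1]],              # one edge connecting start -> end
    "PEAK_THRESH": 0.3,             # placeholder value
    "NMS_THRESH": 0.05,             # placeholder value
    "KP_RADIUS": 32,                # placeholder value
}
print(json.dumps(config, indent=2))
```

For a three-keypoint gauge you would extend KP_NAMES and EDGES and bump NUM_KP (and NUM_CLASSES in the .ini) accordingly.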
For a different gauge (e.g. three keypoints), you would change NUM_CLASSES/NUM_KP, keypoint names, edges, and kpt_oks_sigmas accordingly, as described in the notebook.
The tutorial uses the digital-gauge-klmhl dataset (COCO format) with two keypoints. To use Roboflow you need a Roboflow account and to accept their terms. The notebook uses roboflow.login() and then downloads the dataset into the expected directory layout.
In the notebook, training and quantization are run with:
`cd aitrios-rpi-training-samples/samples && imx500_zoo posenet_digitalgauge.ini`
Before that, the notebook sets the config (e.g. RETRAIN=True, VALIDATE=False, NUM_EPOCHS=75 or higher). Training uses early stopping; the notebook notes that around 75 epochs can give usable detections, and more epochs improve results. The output is a quantized Keras model (e.g. posenet_digitalgauge_quantized.keras).
Convert the quantized model to the format required by the IMX500:
`imxconv-tf -i <path-to-quantized.keras> -o converted --overwrite-output`
This produces packerOut.zip (and other artifacts). Then package the model for the AI Camera on the Raspberry Pi as described in the Raspberry Pi AI Camera documentation (Packaging). Use the generated packerOut.zip as input to the packager.
Conversion is also documented in the Sony IMX500 Converter documentation.
Running on the Raspberry Pi AI Camera
1. Get the aicamera script from the tutorial repo: go to the notebooks/personlab-gauge/aicamera folder. The script personlab_aicam.py there uses the same PersonLab model and modlib Annotator as the notebook but deploys the packaged model to the AI Camera and shows a live window.
2. Install modlib on the Pi (see the modlib repository).
3. Set paths in the script to your converted model (packerOut.zip) and the dataset config JSON (e.g. posenet_digitalgauge.json). The script (or its README) defines variables such as where to place the converted model and config.
4. Put the converted model and config in the locations those variables point to. The first time you run the app, it will package the converted model and then deploy it to the AI Camera.
5. Run the app (e.g. uv run personlab_aicam.py from the aicamera folder). You can point the camera at a gauge or at validation images on a screen to confirm keypoint detection.
6. Adjust the camera so the gauge is in frame and the overlay (yellow line between keypoints) aligns with the needle.
Viewing the results
In the notebook: After running visualize() with save_image=True, the annotated frames are in the output folder (e.g. saved_images/). The next cell displays them so you can confirm that start and end align with the centre and tip of the needle. Some images may show "No detections" if the model was trained with few epochs.
On the Raspberry Pi: The aicamera script shows a live window. Each frame is annotated with the two keypoints and a yellow line between them. Use this to check that “start” and “end” match the centre and tip of the needle. From there you can add your own logic to compute the angle and map it to the gauge scale.
Next steps
- Calibration: Implement angle-to-value mapping for your meter’s scale and mounting.
- More keypoints: If your meter has a non-zero arc (e.g. needle extends past centre), try three keypoints and update the config as in the notebook.
- Different gauges: Retrain with a new dataset (same two keypoints or more) and the same pipeline.
If you encounter any issues while reading the article, please feel free to comment on this article. Please note that it may take some time to respond to comments.
If you have questions related to Raspberry Pi, please check and utilize the forum below.