I’ve always been passionate about experimenting with edge AI vision projects. However, running YOLO models on traditional Raspberry Pi boards has always been a frustrating experience. The frame rate is far too low for real-time inference, and adding external AI accelerator modules only complicates things further.
These external accelerators are not only costly, but they also require messy wiring, tedious compatibility checks, and time-consuming assembly work. More often than not, I end up spending more time troubleshooting accelerator hardware issues than actually developing and tuning the detection model itself.
By coincidence, I got the chance to use the Seeed Studio reComputer R2000 series, and everything changed. It comes with an onboard Hailo-8L NPU, which not only saves the cost of an expensive external accelerator but also eliminates all the trouble of messy wiring, compatibility verification, and complicated assembly.
I built this YOLOv8 object detection project to put the all-in-one solution through its paces, and also to provide a simple, reusable guide for fellow developers facing the same pain points. It delivers smooth real-time inference with minimal configuration effort, solving the biggest headaches in edge AI deployment.
Set Up Your reComputer Device

The reComputer used in this tutorial is based on the Raspberry Pi 4. It is easy to get started with, operates almost the same as a regular Raspberry Pi, and brings very little migration cost.
You can follow the official Wiki to complete SD card flashing and device configuration. Here is a brief summary of the whole process.
Tool Download

- Install Raspberry Pi Imager
- Download the official pre-built system image
Open Raspberry Pi Imager, choose OS - Use custom, then select the system image you just downloaded.
Select your SD card in the Storage option, then keep clicking Next and wait for the flashing process to complete.
Access Your reComputer Device

The easiest way for initial access is to connect an external display. Set up your account and password on the display, then you can remotely access the device freely via SSH for subsequent development.
Edge AI Algorithm Deployment

My first experience deploying algorithms on the reComputer was incredibly pleasant, as the entire workflow has been fully pre-packaged.

Install Hailo Software
sudo apt install hailo-all
sudo reboot

Check Software and Hardware Installation Status

hailortcli fw-control identify

If the following result appears, the installation is successful.
Check the connection status of Hailo-8L
lspci | grep Hailo

The correct output is shown below:
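If you prefer to run this health check from a script (for example during provisioning), the same test can be done in a few lines of Python. This is a minimal sketch; the helper names are my own, not part of the Hailo tooling:

```python
import subprocess

def hailo_on_pci(lspci_output: str) -> bool:
    """True if any PCI device line mentions a Hailo device."""
    return any("Hailo" in line for line in lspci_output.splitlines())

def check_hailo() -> bool:
    """Equivalent of `lspci | grep Hailo`, run from Python."""
    out = subprocess.run(["lspci"], capture_output=True, text=True).stdout
    return hailo_on_pci(out)
```

`check_hailo()` returning False tells you the NPU is not visible on the PCI bus before you waste time debugging the model side.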
Next, clone the official demo repository:

git clone https://github.com/Seeed-Projects/Benchmarking-YOLOv8-on-Raspberry-PI-reComputer-r1000-and-AIkit-Hailo-8L.git
cd Benchmarking-YOLOv8-on-Raspberry-PI-reComputer-r1000-and-AIkit-Hailo-8L
bash ./run.sh object-detection-hailo

A demonstration of the result is shown below:
The official demo only provides inference scripts for fixed video files and does not support real-time inference from a live video stream. If you, like me, need real-time object detection for practical scenarios, you can modify the official demo by following the steps below.

Demo Script Modifications
run.sh:
Modify the code on line 29 to:
python3 "$1".py --input 0

You can also modify the pose-estimation-hailo.py script in the same way.
Modify the __init__ function starting from line 126 to:
class GStreamerDetectionApp(GStreamerApp):
def __init__(self, args, user_data):
super().__init__(args, user_data)
        # Force USB camera mode (webcam at /dev/video0)
self.source_type = "usb"
self.video_source = "/dev/video0"
self.batch_size = 2
self.network_width = 640
self.network_height = 640
self.network_format = "RGB"
self.default_postprocess_so = os.path.join(self.postprocess_dir, 'libyolo_hailortpp_post.so')
if args.network == "yolov6n":
self.hef_path = os.path.join(self.current_path, './hailomodel/yolov6n.hef')
elif args.network == "yolov8s":
self.hef_path = os.path.join(self.current_path, './hailomodel/yolov8s_h8l.hef')
elif args.network == "yolox_s_leaky":
self.hef_path = os.path.join(self.current_path, './hailomodel/yolox_s_leaky_h8l_mz.hef')
else:
assert False, "Invalid network type"
self.app_callback = app_callback
nms_score_threshold = 0.3
nms_iou_threshold = 0.45
self.thresholds_str = f"nms-score-threshold={nms_score_threshold} nms-iou-threshold={nms_iou_threshold} output-format-type=HAILO_FORMAT_TYPE_FLOAT32"
setproctitle.setproctitle("Hailo Detection App")
        self.create_pipeline()

Now you can access the local camera.
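If you would rather not hard-code the camera, the source-selection logic generalizes easily. The sketch below is an illustration of the idea only; the function name and return shape are my own, not the demo's API:

```python
def pick_video_source(user_input: str):
    """Map a --input value to a (source_type, source) pair.

    "0"           -> USB camera at /dev/video0
    "/dev/video1" -> that USB device node
    anything else -> treated as a video file path
    """
    if user_input.isdigit():
        return "usb", f"/dev/video{user_input}"
    if user_input.startswith("/dev/video"):
        return "usb", user_input
    return "file", user_input
```

In __init__ you could then assign self.source_type and self.video_source from this helper (assuming the argument parser exposes the --input value) instead of fixing them to /dev/video0.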
At this point, you have successfully deployed YOLOv8 object detection on the reComputer R2000. By modifying the script, you have also enabled real-time inference from a USB camera. You no longer need to deal with the hassle and cost of external accelerator modules, and you have moved past the official demo's limitation of running inference only on fixed video files. This allows edge AI to be truly applied to real-world scenarios.
You can further expand on this project to better fit your own needs:
- Push real-time detection results to IoT platforms such as Alibaba Cloud and Tencent Cloud via the MQTT protocol, build a complete edge AI monitoring system, and realize remote data viewing and abnormal alarm notifications.
- Optimize script parameters, adjust the model confidence threshold and inference frame rate to adapt to different detection scenarios, such as high-precision close-range detection and wide-range long-distance detection.
- Perform secondary processing on the real-time detection stream with OpenCV, add functions such as object counting and regional intrusion alerts to meet practical demands in smart retail, workshop monitoring and other scenarios.
- Refer to the Hailo model library, replace the YOLOv8 model in the script with other models like YOLOv6 and YOLOX, and compare the inference performance of different models on the reComputer.
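As an example of how little code the object-counting and region-intrusion idea above requires, here is a pure-Python sketch, independent of the Hailo pipeline. The (x1, y1, x2, y2) box tuples and the axis-aligned alert region are assumptions about how you expose your callback's detections:

```python
def box_center(box):
    """Center point of an (x1, y1, x2, y2) detection box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def in_region(point, region):
    """Axis-aligned region test; region is (rx1, ry1, rx2, ry2)."""
    x, y = point
    rx1, ry1, rx2, ry2 = region
    return rx1 <= x <= rx2 and ry1 <= y <= ry2

def count_intrusions(boxes, region):
    """Count detections whose center falls inside the alert region."""
    return sum(in_region(box_center(b), region) for b in boxes)
```

Feed it the boxes from each frame's callback and fire an alert (or an MQTT publish) whenever the count rises above zero.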
If the above expansion ideas still cannot meet your project requirements, you can retrain the model with a custom dataset, compile it using Hailo tools, and deploy it to the reComputer. This enables object detection for dedicated scenarios, such as industrial part inspection and pet recognition.
Feel free to share your script optimization ideas, newly expanded functions, or more application cases of the reComputer R2000 in edge AI projects in the comments section!