PROJECT NAME: Test Drive: reComputer AI R2130-12
I was thrilled to be selected as one of two winners of an impressive SeeedStudio reComputer AI R2130-12. The prize was awarded for my participation in the Hackster.io Edge AI Giveaway contest, which concluded in September 2025.
Outline
PART 1: Hardware & Setup
- Hardware Overview
- Unboxing Experience
- Setup Process
- ReComputer Setup (with 6 steps)
PART 2: Remote Access, AI, & Advanced Topics
- Raspberry Pi Connect (complete section with options)
- AI Development
- Getting Started: Basic Python Code
- From Zero to AI Hero project link
- Practical AI Model Demos (Frigate, YOLO, CLIP)
- Hailo (Hardware & Software)
- Where the Hailo Demos Live
- Core Hailo rpicam-apps Demos
- Using the GitHub Example Set
- Common Issues & Solutions
- Hailo Raspberry Pi 5 Examples
- CONCLUSIONS & NEXT STEPS
HARDWARE OVERVIEW
The reComputer AI R2130-12 is a compact yet powerful edge AI development kit. Key specifications include:
- CPU & GPU: Raspberry Pi 5, 2.4GHz quad-core 64-bit Arm Cortex-A76 and VideoCore VII GPU.
- Power Supply: 5V/5A DC via USB-C (power supply NOT INCLUDED). The official Raspberry Pi 27 W USB-C power supply is rated at 5.1 V, 5.0 A (approximately 25.5 W) and is explicitly designed for the Raspberry Pi 5, including running high-power USB peripherals.
- AI processor: Hailo-8 M.2 acceleration module, 26 Tera-Operations Per Second.
- Memory: 8 GB RAM.
- Operating system: Raspberry Pi OS (flashed to a microSD card, NOT INCLUDED) or Ubuntu (installed on NVMe).
- Ethernet & USB: 1 × 10/100/1000 Mbps Ethernet; 2 × USB 3.0 (USB-A); 2 × USB 2.0 (USB-A).
- Camera / Display: 2 × micro HDMI ports (4Kp60) (CABLE NOT INCLUDED); 2 × 4-lane MIPI camera/display transceivers.
- Storage: 1 × microSD card slot (supports high-speed SDR104 mode); 2 × M.2 slots (PCIe 3.0 NVMe SSD / Hailo M.2 acceleration module).
- Power button: On/Off button included.
- Wireless communication: Dual-band 802.11ac Wi-Fi; Bluetooth 5.0 / BLE.
- Video decoder: 4Kp60 HEVC decoder.
UNBOXING EXPERIENCE
The reComputer AI R2130-12 unit is the sole item in the box; no power supply, cables, or accessories are included, so they must be purchased separately. To operate the reComputer, you will need a Raspberry Pi 27 W USB-C power supply and either a micro HDMI to HDMI cable or a micro HDMI to HDMI adapter. You will also need a microSD card (32–128 GB, Class 10/UHS-I) for the OS. 128 GB is a good size if you plan to store lots of data (logs, media, or large projects) or multiple OS images on one card. I had a 128 GB Class 10 microSD card on hand, which I used.
These necessary accessories are not items everyone has on hand. Fortunately, I already had them available from my other Raspberry Pis, which allowed me to proceed.
The absence of a "getting started" guide or initial-setup instructions meant I had to work out the setup process for the unit myself. That led me to write this guide to help other members who purchase the reComputer get it operational.
SETUP PROCESS
The product is described as including both an operating system and a ready-to-use Hailo software stack, with some flexibility. Unfortunately, the unit does not come with a pre-installed OS or Hailo software, which is fairly typical of Raspberry Pi boards.
RECOMPUTER SETUP
Install Raspberry Pi OS on the reComputer AI R2130-12 as you would on a Raspberry Pi 5, then add the Hailo and Seeed-specific pieces. This section will lead you through the process step by step.
1. PREP ON YOUR HOST PC
- Download and install Raspberry Pi Imager for your host OS (I used Windows) from https://www.raspberrypi.org/software/
- Insert a good-quality microSD (32–128 GB, Class 10/UHS-I) into your host machine. Example: SanDisk 128GB microSDXC UHS-1
2. FLASH RASPBERRY PI OS
- Launch Raspberry Pi Imager.
- Click Choose device and select Raspberry Pi 5.
- Click Choose OS and pick Raspberry Pi OS (64-bit, Bookworm or later); the standard desktop is recommended for development.
- Click Choose storage and select your microSD card.
- Press Ctrl+Shift+X (Advanced options) and configure: hostname, enable SSH, username/password, Wi-Fi, locale/timezone. Note: Not available on Raspberry Pi Imager v2.0.3, so only configure hostname.
- Click Save, then Next, confirm erasing the card, and wait for flashing and verification to finish.
- Safely eject the microSD card.
3. FIRST BOOT OF THE RECOMPUTER
- Insert the flashed microSD into the reComputer's Pi 5 slot, connect HDMI, keyboard/mouse, and network (Ethernet or Wi-Fi antenna if applicable).
- Apply power and let it boot to the Raspberry Pi OS desktop or first-boot wizard.
- Complete any on-screen setup (region, password confirmation, updates prompt).
- Open a terminal and fully update the system and firmware:
sudo apt update && sudo apt upgrade -y
sudo rpi-eeprom-update
- Reboot when prompted: sudo reboot
4. OPTIONAL: ENABLE NVME BOOT AND UBUNTU ON NVME
If you plan to boot from the built-in NVMe drive:
- In Raspberry Pi OS, run:
sudo raspi-config
- Go to Advanced Options → Boot Order → NVMe/USB Boot, confirm, then exit and reboot.
- On the host PC, use Raspberry Pi Imager again to flash Ubuntu for Raspberry Pi directly to the NVMe (select the NVMe as storage via USB adapter/enclosure if using that workflow).
- Install the pcie-fix.dtbo overlay file into /boot/overlays/ and add dtoverlay=pcie-fix to config.txt, then save and reboot to ensure stable PCIe/NVMe operation.
5. INSTALL HAILO AI SOFTWARE STACK
- On the running reComputer (Raspberry Pi OS, Bookworm or later):
- Ensure the system is still up to date:
sudo apt update && sudo apt upgrade -y
- PCIe configuration: in a terminal, run raspi-config, go to Advanced Options → A9 PCIe Speed, enable Gen 3, and exit with Finish.
- Install the full Hailo stack in one step:
sudo apt install hailo-all
- This pulls in the Hailo driver/firmware, HailoRT, TAPPAS libraries, and the rpicam Hailo demo stages.
- Reboot: sudo reboot
- After reboot, verify the Hailo module:
hailortcli fw-control identify
- Confirm that it prints device information (serial, firmware version) with no errors.
- Check if the Hailo-8L is connected by confirming that the Hailo-8L card is on the PCIe bus with this command:
lspci | grep Hailo
6. BASIC VALIDATION AND NEXT STEPS
- Test camera and AI pipeline (if a camera is attached) using the provided Hailo demos under rpicam-apps Hailo stages.
- Example apps from the Seeed wiki for the reComputer describe three example workloads you can run: Frigate NVR, YOLO object detection, and CLIP zero-shot classification.
SIDEBAR
The R2130-12 can use almost any UVC-compliant USB webcam or USB camera module that supports Linux, from cheap 1080p webcams to industrial board-level cameras with global shutter and interchangeable lenses.
I used a UVC‑compliant USB webcam that I had on hand for my experiments. Consumer USB webcams: Logitech, Microsoft, and many generic 720p/1080p webcams enumerate as UVC and “just work” on Linux, suitable for basic vision and prototyping.
The R2130‑12 (R2000 industrial family) exposes 2× USB‑A 3.2 host ports for peripherals and 1× USB‑C 2.0 device port used only for flashing OS/debug, not for cameras.
Connect the webcam to a USB‑A port on the front panel; for high‑res/high‑FPS use, prefer a short, good‑quality USB 3.x cable.
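As a quick sanity check after plugging in the webcam, here is a small probe script written for this guide (it is not part of any Seeed or Hailo demo); it uses OpenCV to try the first few /dev/video indexes and report which ones actually deliver frames. The 0–3 index range is just an assumption, so widen it if your camera enumerates higher.

# probe_cameras.py - check which camera indexes deliver frames (helper written
# for this guide; the 0-3 index range is an assumption, widen it if needed)
import cv2

for index in range(4):
    cap = cv2.VideoCapture(index)
    if not cap.isOpened():
        print(f"index {index}: no camera")
        continue
    ret, frame = cap.read()
    if ret:
        h, w = frame.shape[:2]
        print(f"index {index}: OK, captured a {w}x{h} frame")
    else:
        print(f"index {index}: opened but returned no frame")
    cap.release()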
RASPBERRY PI CONNECT
Raspberry Pi Connect lets you securely reach your Pi's desktop or shell from a browser, over the internet, using your Raspberry Pi ID account.
1. WHAT RASPBERRY PI CONNECT IS
Purpose:
- Secure, browser-based remote access to your Pi's desktop and a terminal, via Raspberry Pi's relay service at connect.raspberrypi.com.
Modes:
- Full Connect: remote desktop plus browser shell.
- Connect Lite: remote shell only, no screen sharing, via rpi-connect-lite.
2. REQUIREMENTS AND PREPARATION
Hardware:
- Raspberry Pi 4, 5, or 400 recommended, running a 64-bit Raspberry Pi OS Bookworm that uses Wayland for full desktop sharing.
Software:
- Updated Raspberry Pi OS: sudo apt update && sudo apt upgrade
Network:
- Pi must have internet access (Ethernet or Wi-Fi).
Account:
- Raspberry Pi ID (different from forum login), created at id.raspberrypi.com
3. INSTALL RASPBERRY PI CONNECT
NOTE: I already had Raspberry Pi Connect installed, and the Connect tray icon was available on the desktop, so this system is using the full version. Because of that, I did not need to perform the following installation steps myself, but they are included here for readers who have not yet installed the full version.
- On a fresh or existing Bookworm system:
- Update and upgrade packages:
sudo apt update
sudo apt upgrade
- Install Connect:
Full: sudo apt install rpi-connect
Lite (shell only): sudo apt install rpi-connect-lite
- Reboot the Pi so the Connect tray icon is available on the desktop if you installed the full version.
4. TURN ON AND LINK YOUR PI
You must both start Connect and link the device to your Raspberry Pi ID.
OPTION A: DESKTOP / TRAY ICON
- After reboot, look for the new Connect icon in the top-right system tray and click it.
- Choose "Turn On Raspberry Pi Connect" (or similar wording). This starts the service for your current user.
- The first time, a browser window opens prompting you to sign in with Raspberry Pi ID or create one.
- Sign in, then give the device a unique, descriptive name and click "Create device and sign in".
- When linking succeeds, the tray icon turns blue to show the device is signed into the Connect service, and you receive an email saying a new device was linked to your account.
OPTION B: COMMAND LINE / SSH
- On the Pi (local or via SSH), enable Connect for your user:
rpi-connect on
- Generate a sign-in URL:
rpi-connect signin
- The command prints a URL like https://connect.raspberrypi.com/verify/XXXX-XXXX
- Open that URL on any device, sign in with your Raspberry Pi ID, name the device, and finish linking.
5. USE RASPBERRY PI CONNECT FROM ANYWHERE
Once linked, you control the Pi from a browser.
- On any computer, go to https://connect.raspberrypi.com/ and sign in with your Raspberry Pi ID.
- You see a list of devices registered to your account; pick the appropriate Pi and click "Connect".
- Choose session type (depending on what you installed and what the service offers): remote desktop session to the Pi's graphical environment, or remote shell session running in a browser tab.
- For IoT workflows, typical uses include checking service status and logs via shell (systemd services, MQTT brokers, sensor daemons) and accessing GUI tools such as IDEs, configuration panels, or dashboards running on the Pi desktop.
HANDY CLI ACTIONS
- Turn on (per-user): rpi-connect on
- Turn off for current user: rpi-connect off
- Re-run linking if needed: rpi-connect signin
6. PRACTICAL TIPS FOR IOT USE
- Security model: Access is gated by your Raspberry Pi ID and device linking; connections tunnel through Raspberry Pi's cloud, so you do not need to open ports on your router.
- Multiple devices: Give each Pi a clear name like "Barn-LoRa-Gateway" or "Lab-Edge-AI-Node" so you can quickly pick the right one in the web UI.
- Headless setups: You can install Raspberry Pi OS, enable SSH, and then use rpi-connect over SSH to get browser-based desktop access without leaving VNC or RDP running.
AI DEVELOPMENT
This section details my experience with the reComputer AI R2130-12, where I tested its capabilities by running various edge AI demonstrations and completing an associated online course.
GETTING STARTED: BASIC PYTHON CODE
- This is a very simple piece of code that uses cv2 to display your camera feed in a window.
# Example: Open camera feed (to be enhanced with AI model code)
import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("Live Feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

This Python code snippet uses the OpenCV library (cv2) to open a computer's default webcam feed and display it in a window until the user presses the 'q' key.
Here is a line by line breakdown of the code:
1. import cv2: Imports the OpenCV library, which is used for computer vision tasks, including handling camera input.
2. cap = cv2.VideoCapture(0): Creates a VideoCapture object. The argument 0 typically specifies the default or first connected webcam.
3. while True: Starts an infinite loop to continuously read frames from the camera.
4. ret, frame = cap.read(): Reads a single frame from the camera. ret is a boolean that is True if the frame was read successfully, and False otherwise. frame is the actual image frame (a NumPy array).
5. if not ret: break: Checks if the frame was read successfully. If not (e.g., the camera was disconnected), it breaks out of the loop.
6. cv2.imshow('Live Feed', frame): Displays the captured frame in a window titled 'Live Feed'.
7. if cv2.waitKey(1) & 0xFF == ord('q'): break: This is the logic for exiting the loop: cv2.waitKey(1) waits for 1 millisecond for a key press. & 0xFF == ord('q') checks if the pressed key was the 'q' key.
8. cap.release(): Releases the camera hardware, freeing it up for other applications. This is important for cleanup.
9. cv2.destroyAllWindows(): Closes all the OpenCV windows that were created (in this case, the 'Live Feed' window).
In summary, it's a boilerplate script for a basic live camera feed viewer. The comment suggests it's a placeholder for an AI-based application:
# Example: Open camera feed (to be enhanced with AI model code)
The photo below shows the results of the call to cv2.imshow("Live Feed", frame).
- If this does not work for you, you need to diagnose your USB camera connection. (A slightly enhanced version of this loop follows below.)
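To illustrate what "enhanced with AI model code" could look like, here is a minimal sketch that adds OpenCV's bundled Haar-cascade face detector to the same loop. This is a CPU-only illustration of where model code plugs in, not one of the Hailo-accelerated demos.

# Minimal sketch: the same camera loop, enhanced with OpenCV's bundled
# Haar-cascade face detector (CPU only; not a Hailo-accelerated demo).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Live Feed + Faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()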
FROM ZERO TO AI HERO: RASPBERRY PI AND COMPUTER VISION
- My Hackster.io project
- Discover how to leverage AI on the Raspberry Pi for computer vision tasks using an AI kit. This guide teaches you to integrate AI into practical IoT applications, covering everything from fundamental object detection and image classification to more sophisticated visual recognition. Follow this link to get started:
- https://www.hackster.io/skruglewicz/from-zero-to-ai-hero-raspberry-pi-and-computer-vision-63d506
The following sections provide step-by-step guides for three example workloads you can run on the reComputer: Frigate NVR, YOLO object detection, and CLIP zero-shot classification.
Frigate NVR – an IP‑camera network video recorder that uses hardware‑accelerated detection (via the Hailo‑8L) for home/edge surveillance, recording, and smart event detection.
YOLO object detection – a real‑time detector demo that runs a YOLO model on the Hailo accelerator and shows bounding boxes and labels on live video or test clips.
CLIP zero‑shot classification – a vision‑language example where CLIP embeddings are used to classify or search images using text prompts without task‑specific training.
Frigate turns the reComputer into an AI NVR with object detection on camera streams; a quick connectivity check is sketched after the steps below.
- Prepare the system: ensure Raspberry Pi OS Bookworm is installed (SD or NVMe) and updated, and confirm Hailo-8 is detected and that the Hailo runtime is installed and working.
- Install Docker and Frigate: install Docker Engine on Raspberry Pi OS (standard apt plus Docker instructions for arm64), then pull or run the Frigate container image built for Raspberry Pi 5 with Coral/Hailo acceleration.
- Connect cameras: connect IP cameras on the same LAN or USB cameras directly to the reComputer USB ports, and verify you can view the streams using ffmpeg or vlc on the Pi.
- Create Frigate configuration: create config.yml for Frigate with ffmpeg inputs for each camera RTSP/HTTP stream and a detector section configured to use the Hailo-accelerated model, then map this config file into the Docker container with a volume mount.
- Run and test Frigate: start the Frigate container with the correct device mapping for Hailo and the config volume, open the Frigate web UI in a browser to see live video, detections, and alerts, and adjust motion detection zones and notification rules as needed.
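Once the container is up, a quick way to confirm Frigate is answering before opening the web UI is to poll its HTTP API from the Pi. The sketch below assumes the default port 5000 and the /api/version endpoint from Frigate's documentation; adjust both to match your docker-compose port mapping.

# Hedged check that the Frigate container is reachable. Port 5000 and the
# /api/version endpoint are assumptions based on Frigate's defaults; adjust
# them to match your docker-compose port mapping.
import urllib.request

url = "http://localhost:5000/api/version"
try:
    with urllib.request.urlopen(url, timeout=5) as resp:
        print("Frigate responded:", resp.read().decode().strip())
except OSError as err:
    print("Frigate not reachable at", url, "-", err)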
This demonstrates real-time YOLOv8 object detection accelerated by Hailo on the reComputer.
- Verify Hailo toolchain by making sure the Hailo AI Software Suite and runtime are installed on the Pi and confirming that your OS is Bookworm if you want one-command hailo-all support.
- Prepare a YOLO model by training or obtaining a YOLOv8 model and exporting it to ONNX on a development machine (see the export sketch after this list), then use the Hailo Model Build Environment (PC or VM) to compile the ONNX model to a HEF.
- Deploy the HEF model to the reComputer by copying the generated .hef file over SSH or via storage and placing it in a known directory for your inference script or demo app.
- Set up the inference application using the HailoRT API or the example scripts from the YOLO demo project. Configure the script to open a video source (USB camera, CSI camera, or RTSP stream), run inference using the HEF model on the Hailo-8, and draw bounding boxes and class labels on the output frames.
- Run benchmarks and tune: run the demo and check FPS. The platform can reach over 200 FPS with YOLOv8s under optimal conditions. Adjust image resolution and model size to balance accuracy and speed.
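As an illustration of the model-preparation step above, here is a minimal export sketch using the ultralytics Python package on a development machine. The yolov8s.pt weights are just an example; the resulting ONNX file still has to be compiled to a HEF in the Hailo Model Build Environment before it can run on the accelerator.

# Minimal sketch of the ONNX export step (run on a development PC, not the Pi).
# Assumes the ultralytics package is installed; yolov8s.pt is only an example.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")              # downloads the pretrained weights if needed
model.export(format="onnx", imgsz=640)  # writes yolov8s.onnx next to the weights

# The .onnx file is then compiled to a .hef in the Hailo Model Build Environment
# and copied to the reComputer for inference with HailoRT.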
CLIP lets the reComputer match images to text prompts without task-specific retraining; a minimal CPU-only example follows the steps below.
- Set up the CLIP environment by installing Python and required libraries (PyTorch or similar) per the CLIP demo instructions and ensuring the Hailo runtime is integrated if the CLIP vision encoder is compiled to run on Hailo.
- Prepare text prompts and images by defining a list of text labels such as "a photo of a person" or "a photo of a car" and setting up a camera or image input pipeline on the reComputer.
- Run CLIP inference by using the CLIP model to encode the image and the text prompts into embeddings, computing similarity scores between the image embedding and each text embedding, and selecting the label with the highest score as the predicted description.
- Integrate into an application by wrapping the CLIP inference in a simple REST API, CLI tool, or GUI, and use it for tasks like tagging images from a camera, filtering frames by content, or building a simple visual search demo.
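For reference, here is a minimal CPU-only sketch of the zero-shot logic using the Hugging Face transformers CLIP model. It only illustrates the prompt, embedding, and similarity steps described above; the Hailo demo uses its own compiled vision encoder rather than running the model this way.

# CPU-only sketch of CLIP zero-shot classification using Hugging Face transformers.
# Illustrates the prompt/embedding/similarity logic only; the Hailo demo runs a
# compiled vision encoder on the accelerator instead.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a person", "a photo of a car", "a photo of a dog"]
image = Image.open("frame.jpg")   # e.g. a frame saved from the camera feed

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]

for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")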
- HARDWARE INTRODUCTION
Hailo offers cutting-edge AI processors tailored for high-performance deep learning applications on edge devices. The company's solutions focus on enabling the next era of generative AI on the edge, alongside perception and video enhancement, powered by advanced AI accelerators and vision processors. The reComputer, equipped with the Hailo-8 NPU accelerator providing 26 TOPS of AI performance, is capable of achieving over 200 FPS with YOLOv8s.
- SOFTWARE INTRODUCTION
The Hailo AI Software Suite provides powerful tools to run AI models efficiently on hardware accelerators. It is designed to integrate seamlessly with existing deep learning frameworks, offering smooth workflows for developers. The process involves generating a HEF (Hailo Executable Binary File) from an ONNX file in the Model Build Environment. Once created, the HEF file is transferred to the inference machine (Runtime Environment), where it is used to execute inference with the HailoRT API.
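To make the HEF-to-inference flow concrete, the skeleton below follows the pattern shown in Hailo's published HailoRT Python examples (the hailo_platform package). Treat it as a sketch: class names and call signatures can differ between HailoRT releases, and model.hef plus the dummy input are placeholders.

# Hedged skeleton of running a compiled HEF with the HailoRT Python API
# (hailo_platform). Names follow Hailo's published examples but may differ
# between HailoRT releases; model.hef and the dummy input are placeholders.
import numpy as np
from hailo_platform import (HEF, VDevice, ConfigureParams, HailoStreamInterface,
                            InputVStreamParams, OutputVStreamParams, InferVStreams)

hef = HEF("model.hef")
with VDevice() as device:
    params = ConfigureParams.create_from_hef(hef, interface=HailoStreamInterface.PCIe)
    network_group = device.configure(hef, params)[0]
    in_params = InputVStreamParams.make(network_group)
    out_params = OutputVStreamParams.make(network_group)

    input_info = hef.get_input_vstream_infos()[0]
    dummy = {input_info.name: np.zeros((1, 640, 640, 3), dtype=np.uint8)}

    with InferVStreams(network_group, in_params, out_params) as pipeline:
        with network_group.activate(network_group.create_params()):
            results = pipeline.infer(dummy)
            print({name: out.shape for name, out in results.items()})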
To learn more about examples of using the Hailo NPU, refer to the Seeed Studio repository:
https://wiki.seeedstudio.com/r2000_series_getting_start/#clip
- WHERE THE HAILO DEMOS LIVE
There are several Hailo-enabled demos bundled with rpicam-apps; they are selected via JSON post-processing files and run with the --post-process-file flag on rpicam-hello or rpicam-vid.
The Hailo JSON post-processing stages ship in the assets directory of the rpicam-apps repository and in the system assets package on Raspberry Pi OS, for example under /usr/share/rpi-camera-assets/. If you installed from packages, look in /usr/share/rpi-camera-assets/ for files whose names start with "hailo_", such as hailo_yolov6_inference.json and hailo_yolov8_pose.json.
- CORE HAILO RPICAM-APPS DEMOS
Typical examples (names may vary slightly by OS version) are listed below; a small launcher script for them follows:
Object detection (YOLOv6) using hailo_yolov6_inference.json with a command like:
rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov6_inference.json --lores-width 640 --lores-height 640
Pose estimation (YOLOv8 pose) using hailo_yolov8_pose.json with a command like:
rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov8_pose.json --lores-width 640 --lores-height 640
Classification and other tasks use JSON files with names like hailo_classifier.json, invoked similarly via --post-process-file.
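If you find yourself switching between these JSON stages often, a tiny launcher can save retyping the long command. The sketch below is a helper written for this guide, not part of rpicam-apps; it simply shells out to the same rpicam-hello command shown above with whichever asset file you name.

# Helper written for this guide (not part of rpicam-apps): launch rpicam-hello
# with one of the packaged Hailo post-processing JSON stages by name.
import subprocess
import sys

ASSETS = "/usr/share/rpi-camera-assets"
stage = sys.argv[1] if len(sys.argv) > 1 else "hailo_yolov6_inference.json"

subprocess.run([
    "rpicam-hello", "-t", "0",
    "--post-process-file", f"{ASSETS}/{stage}",
    "--lores-width", "640", "--lores-height", "640",
], check=True)

Save it as, say, run_hailo_demo.py and call it with a stage name, for example: python run_hailo_demo.py hailo_yolov8_pose.json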
- USING THE GITHUB EXAMPLE SET
Hailo maintains the hailo-rpi5-examples repository, which documents these as the "Raspberry Pi Official Examples" for rpicam-apps with the AI Kit. That repository's documentation references the same rpicam-apps Hailo post-processing stages and links back to Raspberry Pi's rpicam-apps documentation for usage details.
- COMMON "NO POST PROCESSING STAGE FOUND" ISSUE
If you see messages like "No post processing stage found for hailo_yolo_inference" or "object_detect_draw_cv", ensure rpicam-apps was built or installed with Hailo support and that the post-processing shared objects are installed where rpicam-apps expects them. On Raspberry Pi OS with the official AI Kit, using the packaged rpicam-apps plus the supported HailoRT and TAPPAS versions (for example HailoRT 4.17 and TAPPAS 3.28.x for the current Raspberry Pi demos) avoids the version mismatch that causes missing stages.
- HAILO RASPBERRY PI 5 EXAMPLES
The hailo-rpi5-examples repository is the solution code for running Hailo on Raspberry Pi 5. The core entry points you will actually run are the example scripts in the basic_pipelines directory, such as detection_simple.py, detection.py, pose_estimation.py, instance_segmentation.py, and depth.py.
Follow this link to the Repo:
https://github.com/hailo-ai/hailo-rpi5-examples/blob/main/README.md
- WHAT TO RUN
After cloning and installing, the main solution scripts are:
basic_pipelines/detection_simple.py
basic_pipelines/detection.py
basic_pipelines/pose_estimation.py
basic_pipelines/instance_segmentation.py
basic_pipelines/depth.py
These scripts already implement full working pipelines against the Hailo-8 or Hailo-8L via the Apps Infra dependency, so you do not need to write your own inference loop just to get started.
- MINIMAL END-TO-END STEPS
On your reComputer with the Hailo AI module installed:
Clone the repository:
git clone https://github.com/hailo-ai/hailo-rpi5-examples.git
Enter the directory:
cd hailo-rpi5-examples
Install dependencies:
./install.sh
Source the environment in each new shell:
source setup_env.sh
Run a solution example, for instance detection:
python basic_pipelines/detection_simple.py
or
python basic_pipelines/detection.py
Use "--input rpi" or "--input usb" (or /dev/videoX) to switch between Raspberry Pi camera and USB camera input.
- IF YOU WANT CUSTOM SOLUTION CODE
To build your own application instead of using these stock examples, the recommended pattern is to treat this repository plus Hailo Apps Infra as your reference implementation. Import the Apps Infra Python APIs in your own script and replicate the pipeline structure from the basic_pipelines scripts, replacing the model, pre- and post-processing, or sinks as needed.
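The skeleton below sketches that pattern. The import paths and class names mirror the basic_pipelines/detection.py example as documented in the repository, but they have moved between releases of hailo-rpi5-examples and Hailo Apps Infra, so copy the exact imports from the version you cloned rather than from here.

# Hedged sketch of a custom application built on the detection.py pattern.
# Import paths and class names mirror the repository's example but have moved
# between releases, so copy the exact imports from the version you cloned.
import hailo
from gi.repository import Gst
from hailo_apps_infra.hailo_rpi_common import app_callback_class   # path may differ
from hailo_apps_infra.detection_pipeline import GStreamerDetectionApp

class MyState(app_callback_class):
    """Per-frame state for your application (counters, an MQTT client, etc.)."""
    pass

def app_callback(pad, info, user_data):
    # Called for every buffer in the pipeline; read Hailo detections here and
    # replace the print with your own business logic.
    buffer = info.get_buffer()
    if buffer is None:
        return Gst.PadProbeReturn.OK
    roi = hailo.get_roi_from_buffer(buffer)
    for det in roi.get_objects_typed(hailo.HAILO_DETECTION):
        print(det.get_label(), round(det.get_confidence(), 2))
    return Gst.PadProbeReturn.OK

if __name__ == "__main__":
    app = GStreamerDetectionApp(app_callback, MyState())
    app.run()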
References
- GitHub - Seeed-Projects/Tutorial-of-AI-Kit-with-Raspberry-Pi-From-Zero-to-Hero
- Getting Started with reComputer AI R2000 Series | Seeed Studio Wiki
CONCLUSIONS AND NEXT STEPS
- Deploy my own custom-trained vision model with Edge Impulse.
- Integrate MQTT for IoT data streaming.
- Explore a real-world use case such as smart security or predictive maintenance.
- Share an open-source project repository and update Hackster.io with findings.
END OF PROJECT
Thanks for reviewing my project, and be sure to leave a comment. I greatly appreciate feedback on my experience, and I would like to hear about your experiences as well.





