This story is about what everyone is talking about: Machine Learning and Artificial Intelligence (ML/AI) on the UNO Q board. This is the feature drawing the most attention and is the primary point of comparison with the established Raspberry Pi platform.
Let's look at the raw computing power. The UNO Q's quad-core ARM Cortex-A53 processor running at 2.0 GHz stacks up extremely well against the popular Raspberry Pi 3 (quad-core ARM Cortex-A53 at 1.4 GHz). This level of power is more than sufficient for a huge range of applications, a fact proven by the countless projects built on the widely adopted RPi 3. And I have to say the UNO Q is a big step ahead in power consumption and temperature: it needs no heat sink and feels normal to the touch.
Starting with Sample Apps
One advantage of using the Arduino App Lab + Arduino UNO Q over a Raspberry Pi is the readily available sample apps that demonstrate the use of "bricks" (pre-built features created from middleware-level software on Linux). These allow you to begin testing complex code, such as UI, ML, and AI, without writing a single line.
Three sample apps demonstrate ML/AI without needing extra hardware:
- Glass breaking sensor (Audio)
- Classify images (Vision)
- Detect objects on images (Vision)
These apps use a web UI where you can simply drag and drop your audio or image files for immediate processing. Other sample apps require additional hardware, like a camera or sensor—for instance, Face detection on Camera. Note that for a USB webcam, you will need a USB hub with an external power input.
The first sample app I chose to test was Detect objects on images, a common task familiar to anyone in computer vision. The test was simple: click the Run button on the top right of the application's instruction screen.
The console displayed the App Lab launching a container named local-share-arduino-app-cli-examples-object-detection-ei-obj-detection-runner-1. This container runs an object detection model from Edge Impulse (the Edge AI company acquired by Qualcomm). The browser then opened a webpage at http://localhost:7000. On this page, you can drag and drop an image file, and upon clicking the Run detection button, the model processes the image and displays the detection results.
Because the Edge Impulse model is quantized to reduce its size, its detection performance on the original image was noticeably inferior when compared directly to YOLO. It remains an open question whether a full YOLO model could run effectively on this hardware without quantization—a topic for future investigation.
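For anyone who wants to reproduce that kind of side-by-side check on a desktop, a quick full-precision YOLO baseline with the ultralytics package might look like the sketch below; the checkpoint and image path are placeholders, not what the Edge Impulse container uses.

from ultralytics import YOLO

# small, non-quantized YOLO checkpoint as a desktop reference point
model = YOLO("yolov8n.pt")            # downloaded automatically on first use
results = model("test-image.jpg")     # run inference on the same test image

# print class id, confidence, and bounding box for every detection
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())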
The new thing: brick
The UNO Q's greatest advantage over the Raspberry Pi, stemming from its dual-processor architecture and Debian Linux OS, is the "brick" mechanism: a set of middleware-level software that drastically simplifies coding. Brick configuration is managed in the app.yaml file. In the case of the Detect objects on images app, it calls two bricks: web_ui and object_detection.
name: Detect objects on images
icon: 🏞️
description: Object detection in the browser
bricks:
- arduino:web_ui
- arduino:object_detection
The web_ui brick documentation explains its function: it creates a user interface by running a web server, using HTML/CSS/JavaScript as the frontend and communicating with the main application via a REST API or WebSocket. The underlying mechanism of the web_ui brick is not new (it combines FastAPI, Uvicorn, and Socket.IO), but building this stack from scratch would be a significant undertaking for a beginner.
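For a sense of how much the brick takes care of, here is a minimal hand-rolled sketch of a comparable FastAPI + Uvicorn + Socket.IO stack. This is not the brick's actual implementation; the event names and port simply mirror what the sample app uses.

import socketio
import uvicorn
from fastapi import FastAPI

# Socket.IO server wrapped around a FastAPI app, both served by Uvicorn
sio = socketio.AsyncServer(async_mode="asgi")
fastapi_app = FastAPI()
app = socketio.ASGIApp(sio, other_asgi_app=fastapi_app)

@fastapi_app.get("/health")
async def health():
    # a plain REST endpoint, alongside the WebSocket traffic
    return {"status": "ok"}

@sio.on("detect_objects")
async def on_detect_objects(sid, data):
    # run the model on data["image"] here, then push the result back
    result = {"detections": []}
    await sio.emit("detection_result", result, to=sid)

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=7000)

Wiring this up by hand also means serving the HTML/CSS/JavaScript yourself, which is exactly the boilerplate the brick removes.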
The implementation of the web_ui brick in the Detect objects on images code is split into two parts:
- The Python side, which binds the code responsible for responding to requests from the JavaScript frontend.
from arduino.app_bricks.web_ui import WebUI
def on_detect_objects(client_id, data):
    ...
    # send response back
    ui.send_message('detection_result', response)

ui = WebUI()
# bind code for requests
ui.on_message('detect_objects', on_detect_objects)
- The JavaScript side (found in the app.js file), which initiates the request. It uses Socket.IO to send a detect_objects request. Once the Python code finishes processing, it sends back a message encapsulating the data, triggering an asynchronous callback function on the JavaScript side.
// callback for button clicked events
detectButton.addEventListener('click', runDetection);
function runDetection() {
  ...
  sendDetectionRequest();
}

function sendDetectionRequest() {
  ...
  socket.emit('detect_objects', {
    image: currentImage,
    confidence: confidence
  });
}

// callback to process response
socket.on('detection_result', (data) => {
  ...
});
While the web_ui brick simplifies the tedious process of connecting the frontend and backend of the software system, UNO Q users still need to put in effort writing the JavaScript code for the frontend. That may be fine for developers, but it could be a big job for those who are still in school.
Exploring the ML/AI bricks
Now for the ML/AI core: the object_detection brick handles static image processing. Its API features a detect() method within the ObjectDetection class for analyzing image data (either as a byte stream or in PIL format). The output is a list of dictionaries containing the object's class name, bounding box, and confidence score. The documentation doesn't specify the exact model used by the object_detection brick, but the running containers strongly suggest a model from Edge Impulse.
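Based on that documented API, a usage sketch might look like the code below. The import path is my guess by analogy with the web_ui brick, and I simply print each result dictionary rather than assuming its exact keys; check the brick documentation for the real names.

from PIL import Image
from arduino.app_bricks.object_detection import ObjectDetection  # assumed path

detector = ObjectDetection()
image = Image.open("test-image.jpg")   # detect() also accepts a raw byte stream

# each entry holds the class name, bounding box, and confidence score
for detection in detector.detect(image):
    print(detection)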
How about a webcam for live images? My existing USB hub, despite having an external power input, failed to display output on an HDMI monitor. Initially, I thought this would prevent me from testing the webcam sample apps. However, I realized the documentation didn't explicitly require a monitor. So I plugged the USB hub into the UNO Q board and launched the Arduino App Lab. I discovered that the UNO Q and App Lab seem to have a service-discovery feature, allowing them to connect even without knowing the board's IP address.
The UNO Q documentation mentions using App Lab in Network Mode, but I didn't realize it would be this seamless. Clicking the app connected just as easily as if a USB cable were attached, with the main requirement being that the computer and the UNO Q board are on the same WiFi network.
When I opened the sample app Face detection on Camera, a browser popped up, displaying a webpage with the live camera feed and the detection results. The difference here is the use of the video_object_detection brick. The accuracy of the face detection was noticeably more impressive than that of the static object_detection brick. The only downside was a significant delay of several seconds before the image would update.
Glitches, and how to solve them
One minor, though annoying, issue I encountered is with the Arduino App Lab's update process. After connecting the board via USB, the App Lab always forces the UNO Q to check back with the Arduino server for updates. If you misconfigure the UNO Q's WiFi (for example, by connecting to a network that requires a separate login after the initial connection), the UNO Q's update process will freeze, even if your computer has a stable internet connection. The workaround is a bit indirect: you have to use the adb shell to log in and change the WiFi connection settings via the command line.
sudo nmcli dev wifi connect <WiFi-SSID> password <WiFi-password>
For those interested in more details on logging into the UNO Q board, the Edge Impulse website offers excellent resources, including the relevant commands and instructions for tasks like adding SSH and connecting a project to the Edge Impulse platform.
Final Thoughts
I like this UNO Q board. While its raw performance may not match the latest Raspberry Pi 5, its strategic combination of significantly lower power consumption (meaning low heat) and the familiar Arduino-compatible I/O pins carves out a powerful niche. The UNO Q is perfectly positioned to upgrade existing edge devices that require robust ML/AI capabilities. Crucially, its integrated software environment also makes the UNO Q an easier solution for students and beginners to jump straight into ML/AI projects.


