Summary of Key Learning Points (Chapters 1 & 2)
This project is a follow-up to my previous Hackster project, "From Zero to AI Hero: Raspberry Pi and Computer Vision." It details the online course I completed and provides the course syllabus. This course is structured to educate participants on leveraging the capabilities of AI using either the reComputer AI R2130-12 or a Raspberry Pi AI Kit.
The course prerequisites include:
- A Raspberry Pi 5-based hardware kit. For this project article I used the reComputer AI R2130-12 ("The World's Most Powerful Raspberry Pi AI Box", available from Seeed Studio).
- The AI environment setup necessary to successfully follow the course material.
Other kits can be used:
- Raspberry Pi 5: The core compute module, featuring enhanced PCIe connectivity.
- Raspberry Pi AI Kit: Includes the Hailo-8 NPU, delivering 13 TOPS of AI performance.
- AI HAT+: A high-performance option boosting AI performance up to 26 TOPS.
I took this course using the reComputer AI R2130-12. If you're interested in setting up the hardware and configuring the AI development environment for this device, you can refer to my Hackster project, "Test Drive: reComputer AI R2130-12," which provides a detailed guide.
I chose the reComputer AI R2130 for two main reasons:
High-Performance AI Capability: Achieving a high-performance AI workspace with a standard Raspberry Pi 5 requires specific steps: installing the 64-bit Raspberry Pi OS (Bookworm) and integrating a specialized hardware accelerator, like the Hailo-8 Neural Processing Unit (NPU). The 64-bit OS is necessary for modern AI libraries and effective memory management, while the NPU is crucial for shifting heavy AI inference workloads from the general-purpose CPU to a purpose-built dataflow architecture, maximizing performance.
Ease of Use/Pre-Configuration: The reComputer AI R2130 simplifies the process. It is a base Pi 5 that comes pre-configured with the essential advanced hardware accelerators (such as the Hailo-8) and the correct 64-bit software environment. This choice allows users to skip the manual steps of configuring the 64-bit OS and installing an accelerator, which are otherwise mandatory for turning a standard Raspberry Pi 5 into a powerful AI inference machine.
This course is pretty awesome: six chapters that walk you through everything from basic AI theory right into setting up practical environments for major frameworks like PyTorch, TensorFlow, and OpenCV. A major highlight is diving into how to use the Hailo-8 NPU to really speed up tasks like object detection, pose estimation, and running Large Language Models with Ollama. Every section includes handy code examples showing off real-time webcam classification and deploying pre-trained models. This document brings together the key points and notes from finishing up Chapters 1 and 2 to help solidify the essential knowledge I acquired.
To begin the course, use the following link, which will take you to the course navigation page: https://seeed-projects.github.io/Tutorial-of-AI-Kit-with-Raspberry-Pi-From-Zero-to-Hero/docs/Overview
------------------------------------------
Chapter 1 - Introduction to AI
Chapter 1 is a theory-focused introduction to core AI concepts: it explains what AI is, how deep and convolutional neural networks work, how computer vision fits in, and briefly introduces Generative AI and large language models.
What Chapter 1 covers
- Foundational AI concepts, including an overview of artificial intelligence, its applications, and impact across domains
- The structure and function of Deep Neural Networks (DNNs) as the basis of many modern AI systems.
- Convolutional Neural Networks (CNNs) for image processing and computer vision tasks.
- An introduction to computer vision and how machines interpret visual data.
- A first look at Generative AI and large language models that create content and interact with users.
Key topics covered:
- Introduction to Artificial Intelligence
- Introduction to Deep Neural Networks (DNN)
- Introduction to Convolutional Neural Networks (CNN)
- Introduction to Computer Vision
- Generative AI (GenAI) / Large Language Models
My Two Cents: Why the Tech Theory Behind Edge AI Matters
The Big Picture: Swapping General AI for Edge-Specific Computer Vision
The world of IoT is totally changing. We're pushing the brainpower—the actual intelligence—right out to the edge. Why? Because we're sick of slow networks and running out of bandwidth. As a Senior Systems Architect, my job is to make sure we can actually deploy complex Deep Neural Networks (DNNs) on small, limited hardware, like a Raspberry Pi 5. To do this right, you have to know how the model's design impacts its performance on that specific piece of hardware.
At its heart, a DNN is just doing tons of high-dimensional matrix math. That's why quantization, chopping the precision down from something like FP32 to INT8 or FP16, is such a massive win. It lets the limited hardware do more math operations (Multiply-Accumulate, or MACs) in parallel, maximizing efficiency without sacrificing too much accuracy.
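To make that FP32-to-INT8 idea concrete, here is a toy NumPy sketch (my own illustration, not course code) that quantizes a small weight matrix with a single symmetric scale and measures the round-trip error; real toolchains such as PyTorch, TensorFlow, or the Hailo Dataflow Compiler calibrate scales per tensor or per channel.

import numpy as np

# Toy symmetric INT8 quantization (illustration only, not course code).
weights_fp32 = np.random.randn(4, 4).astype(np.float32)

# Map the largest absolute value onto the INT8 range [-127, 127].
scale = np.abs(weights_fp32).max() / 127.0

# Quantize: round to the nearest integer step and clip to the INT8 range.
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

# Dequantize to see how much precision the round trip costs.
weights_restored = weights_int8.astype(np.float32) * scale
max_error = np.abs(weights_fp32 - weights_restored).max()
print(f"scale={scale:.6f}, max round-trip error={max_error:.6f}")

The INT8 copy needs a quarter of the memory of FP32, and accelerators like the Hailo-8 can execute far more of these 8-bit MACs in parallel per clock.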
Context Check: CNNs vs. LLMs
It's crucial to separate specialized visual processing from the bigger umbrella of Generative AI (GenAI), which is any model that creates content. While GenAI covers a lot, Large Language Models (LLMs), which are transformer-based, are often considered for edge deployment. The key question is how the old-school Convolutional Neural Networks (CNNs) stack up against these new edge LLMs.
The Secret Sauce: Why Theory Pays Off
So, what’s the real-world value ("So What?") of knowing this theoretical stuff? It's all about getting predictable performance. When architects truly grasp what’s happening layer by layer inside a CNN, they can apply hardware-specific tweaks that guarantee the model will hit the required inference speeds. That's a must-have for any autonomous system operating in the real world. Seriously, understanding these basic theoretical concepts is the absolute first step before you even think about setting up the necessary 64-bit software environment for high-performance edge execution.
------------------------------------------
Chapter 2 - Configuring the Raspberry Pi Environment
Chapter 2 is a hands-on setup chapter that walks through configuring the Raspberry Pi as an AI workspace and installing the main AI frameworks and the Hailo environment needed for later projects.
What Chapter 2 covers
- Focuses on preparing the Raspberry Pi for AI workloads: OS environment, libraries, and accelerators, so it can run vision and deep learning tasks efficiently.
- Emphasizes practical installation and basic usage rather than theory, bridging from Chapter 1 concepts to real code and tools.
Key topics in Chapter 2 (Pages 1-6)
- Introduction to PyTorch in Raspberry Pi Environment – configure PyTorch for training/inference on the edge.
- Introduction to TensorFlow in Raspberry Pi Environment – set up TensorFlow to deploy AI models on a resource‑constrained Pi.
- Introduction to OpenCV in Raspberry Pi Environment – install and use OpenCV for computer vision, from setup to basic functions.
- Introduction to Ultralytics in Raspberry Pi Environment – run Ultralytics YOLO models for object detection and tracking.
- Introduction to Hailo in Raspberry Pi Environment – install and enable the Hailo AI accelerator, including setup and performance benefits.
Introduction_to_Pytorch_in_Raspberry_Pi_Environment
This section, "Introduction to PyTorch in Raspberry Pi Environment," is part of Chapter 2. It covers the fundamentals of PyTorch—what it is, its use cases, and how it compares to TensorFlow—as well as an introduction to QNNPACK. Finally, it provides a walkthrough for setting up a PyTorch environment on the Raspberry Pi and implementing a real-time MobileNetV2 webcam classifier.
What the page covers
- Conceptual introduction to PyTorch: history, dynamic computation graphs, typical users (research labs and companies), and application domains like vision, NLP, and RL.
- PyTorch vs TensorFlow feature comparison in a table: computation graph style, ease of use, community focus, deployment tools, and performance considerations.
- Explanation of QNNPACK as the quantized kernel backend optimized for ARM devices such as Raspberry Pi.
Key topics on this page
- What is PyTorch, brief history (2016 release, ONNX work in 2019, PyTorch Foundation in 2022), and advantages such as dynamic graphs and Pythonic API.
- Dynamic computation graphs: built at runtime, flexible per‑forward‑pass structure, easier debugging.
- Who uses PyTorch: major research institutions (MIT, Stanford, OpenAI, FAIR) and companies (Meta, Tesla, Disney, Microsoft) and the domains where it is applied.
- PyTorch vs TensorFlow comparison table showing differences in graph type, ecosystem, deployment, and typical usage contexts.
- QNNPACK overview and why quantized models plus QNNPACK are well‑suited to low‑power ARM devices like Raspberry Pi.
- Environment setup for classification: creating a venv; installing torch, torchvision, torchaudio, opencv-python, and numpy; and creating a pytorch project folder containing pytorch_test.py and imagenet-classes.txt.
PyTorch Integration and Optimization
PyTorch remains a pillar of the AI industry due to its dynamic computation graphs, which offer flexibility during the research and development phase. However, for edge deployment on ARM-based architectures like the Pi 5, this flexibility must be augmented with strict optimization to bypass the Python Global Interpreter Lock (GIL) and maximize throughput.
A critical component of this stack is the QNNPACK (Quantized Neural Network PACKage) backend. By explicitly setting torch.backends.quantized.engine = 'qnnpack', we enable a kernel optimized for ARM's SIMD (Single Instruction, Multiple Data) capabilities. This is paired with TorchScript (torch.jit.script), which optimizes the model graph and allows for execution outside the Python runtime—a vital "So What?" for reducing overhead and increasing inference stability.
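As a minimal sketch of that backend-plus-TorchScript configuration (assuming torchvision's quantized MobileNetV2 weights; older torchvision takes pretrained=True, newer releases use a weights= enum):

import torch
from torchvision.models.quantization import mobilenet_v2

# Select the ARM-optimized quantized kernel backend before running the model.
torch.backends.quantized.engine = 'qnnpack'

# Load an ImageNet-pretrained, already-quantized MobileNetV2.
model = mobilenet_v2(pretrained=True, quantize=True)
model.eval()

# Compile the model graph with TorchScript so inference can run outside the
# regular Python interpreter loop, reducing per-frame overhead.
scripted_model = torch.jit.script(model)

# Quick sanity check with a dummy 224x224 RGB batch.
with torch.no_grad():
    logits = scripted_model(torch.rand(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000]) -> one score per ImageNet class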
Analysis of what this code does (pytorch_test.py); a condensed sketch follows the list
- Initializes PyTorch to use the QNNPACK backend and loads ImageNet class labels from imagenet-classes.txt, so each output index can be mapped to a human‑readable class name.
- Opens the default V4L2 webcam and configures it to capture 224×224 frames at 36 FPS, which matches the input size and desired rate for MobileNetV2 on a Raspberry Pi.
- Defines a preprocessing pipeline that converts frames to tensors and applies ImageNet mean/std normalization, producing the correct input format for the pretrained MobileNetV2 model.
- Loads a quantized MobileNetV2 pretrained on ImageNet (quantize=True), then compiles it with TorchScript (torch.jit.script) to optimize inference on the Pi’s CPU with QNNPACK.
- In a torch.no_grad() loop, continuously captures frames, converts BGR to RGB, preprocesses to a batch, runs inference, applies softmax to get class probabilities, and extracts the top‑3 predictions.
- Overlays those top‑3 predictions on the video stream with cv2.putText, including confidence percentage, and displays the annotated feed in a window titled “Real-time Object Recognition”.
- Tracks frames per second by counting frames and printing FPS once per second, then exits cleanly when the user presses q, releasing the camera and closing the OpenCV window.
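Pulling those steps together, here is a condensed sketch approximating the described pytorch_test.py (the label file name, camera index, and overlay layout follow the description above; treat it as a starting point rather than the course's exact script):

import cv2
import torch
from torchvision import transforms
from torchvision.models.quantization import mobilenet_v2

# Same backend and model setup as the previous sketch, condensed.
torch.backends.quantized.engine = 'qnnpack'
scripted_model = torch.jit.script(mobilenet_v2(pretrained=True, quantize=True).eval())

# One ImageNet class name per line, as provided in the course files.
with open('imagenet-classes.txt') as f:
    classes = [line.strip() for line in f]

# ImageNet mean/std normalization matching the pretrained weights.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture(0)  # default V4L2 webcam
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 224)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 224)

with torch.no_grad():
    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # OpenCV delivers BGR; the model expects 224x224 RGB input.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        rgb = cv2.resize(rgb, (224, 224))
        batch = preprocess(rgb).unsqueeze(0)  # shape (1, 3, 224, 224)

        probs = torch.nn.functional.softmax(scripted_model(batch)[0], dim=0)
        top3 = torch.topk(probs, 3)

        # Overlay the top-3 predictions with confidence percentages.
        for i, (p, idx) in enumerate(zip(top3.values, top3.indices)):
            label = f"{classes[idx.item()]}: {p.item() * 100:.1f}%"
            cv2.putText(frame, label, (10, 25 + i * 25),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

        cv2.imshow('Real-time Object Recognition', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()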
High-Level PyTorch Workflow
- Backend Initialization: Explicitly configure the quantized engine to QNNPACK.
- Model Loading: Import a quantized MobileNetV2; verify TorchVision versions to ensure compatibility.
- Graph Optimization: Utilize torch.jit.script to compile the model for high-performance inference.
- Sensor Pipeline: Initialize the webcam (224x224) via OpenCV to match the model’s input layer.
- Inference & Mapping: Transform BGR frames to RGB tensors and map indices to ImageNet labels.
- Environment Persistence: Virtual environment activation (source env/bin/activate) is session-specific. In automated deployment, ensure the environment is invoked via absolute paths in systemd services.
- File Integrity: The imagenet-classes.txt file is essential for human-readable output. The script will fail or return null values without this mapping; it must be sourced from the GitHub repository’s /models/Chapter2/ directory.
While PyTorch offers researchers significant flexibility, it is often contrasted with the highly structured, production-centric pipeline found in the TensorFlow/LiteRT ecosystem.
------------------------------------------
Page 2: Introduction_to_TensorFlow_in_Raspberry_Pi_Environment
This page introduces TensorFlow concepts (tensors, graphs, Keras, LiteRT/TensorFlow Lite), outlines an end‑to‑end ML pipeline, and then shows how to run a TFLite EfficientNet image classifier in real time on a Raspberry Pi using a webcam.
What the page covers
- Explains what TensorFlow is, how tensors work, how TensorFlow’s computation graph model operates, and how Keras integrates as a high‑level API for building models.
- Describes a typical ML pipeline (data collection, preprocessing, model development, training/evaluation, saving/export) and how to deploy models with LiteRT/TensorFlow Lite on Raspberry Pi, including quantization for edge devices.
Key topics on this page
- Core ideas: tensors, TensorFlow computation graphs, key highlights (versatility, Keras integration, flexible deployment, support for advanced research).
- Keras–TensorFlow relationship and how Keras simplifies TensorFlow for rapid prototyping.
- ML pipeline explanation and a CIFAR‑10 CNN example (via linked Colab notebook TensorFlow_CNN.ipynb).
- LiteRT/TensorFlow Lite overview and why quantization is important for Raspberry Pi.
- Practical Raspberry Pi setup:
Create my_tf_course and a venv, install opencv-contrib-python and tensorflow.
Download EfficientNet TFLite model and imagenet-classes.txt into ~/Desktop/tf_files.
Create tflesson1.py to run live webcam classification and display top-3 predictions.
Code and notebook examples
Notebook: a Colab tutorial TensorFlow_CNN.ipynb for CIFAR‑10 CNN training (linked via “Open In Colab”).
Python: a full script (intended as tflesson1.py) that loads the EfficientNet TFLite model and class labels, runs inference on webcam frames, and overlays the top‑3 predictions on the video feed.
TensorFlow Lite and LiteRT: Streamlining the ML Pipeline
TensorFlow and Keras provide a robust path for rapid prototyping, but the evolution to LiteRT (formerly TensorFlow Lite) is where edge efficiency is won. LiteRT is designed specifically to address the resource constraints of the Pi by providing a streamlined execution environment.
The End-to-End ML Pipeline
The EfficientNet TFLite demo highlights the significance of the LiteRT Interpreter. Its primary architectural advantage is static memory allocation. By calling interpreter.allocate_tensors() at startup, the system reserves all necessary memory upfront, preventing the runtime memory spikes and fragmentation that often crash resource-limited devices. This architectural choice ensures the Pi’s RAM is managed deterministically during sustained inference.
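A minimal sketch of that interpreter lifecycle is shown below; the model filename is a placeholder for the EfficientNet TFLite file downloaded into ~/Desktop/tf_files, and the uint8 input assumption matches the quantized model described in the analysis that follows.

import numpy as np
import tensorflow as tf

# Load the TFLite model and reserve all tensor memory up front.
interpreter = tf.lite.Interpreter(model_path='efficientnet.tflite')  # placeholder filename
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# The quantized EfficientNet expects a uint8 batch of 224x224 RGB pixels.
input_shape = input_details[0]['shape']  # e.g. [1, 224, 224, 3]
dummy = np.random.randint(0, 256, size=input_shape, dtype=np.uint8)

# One inference pass: write the input tensor, invoke, read the output tensor.
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]['index']).flatten()

# Indices of the three highest-scoring classes.
top3 = np.argsort(scores)[-3:][::-1]
print(top3, scores[top3])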
Python code example (tflesson1.py)
EfficientNet TFLite webcam demo
Analysis of what this code does
- Loads a TFLite EfficientNet model and ImageNet label file from ~/Desktop/tf_files, initializes a TFLite Interpreter, and queries input/output tensor details so it knows how to feed data and read predictions.
- Defines preprocess_image to resize frames to 224×224, add a batch dimension, and cast to uint8, matching the expected TFLite input shape and type for the quantized EfficientNet model.
- In get_top_3_predictions, writes the preprocessed image into the interpreter input tensor, calls interpreter.invoke() to run inference, then extracts the output, flattens it, sorts by score, and returns indices and scores of the top‑3 classes.
- Opens the default webcam, continuously captures frames, preprocesses each frame, gets top‑3 predictions, and overlays them on the frame as Top 1/2/3: <label> (<score>) using cv2.putText.
- Shows the annotated video stream in a window titled “Webcam Feed - Top 3 Predictions” and exits cleanly when q is pressed, releasing the camera and destroying all OpenCV windows.
Introduction_to_OpenCV_in_Raspberry_Pi_Environment
This chapter page is a short, hands-on introduction to using OpenCV on a Raspberry Pi 5, focusing on installation, basic image I/O, simple image processing, and drawing overlays on video.
Scope of the chapter
The page walks through setting up OpenCV in a Python virtual environment on Raspberry Pi OS (Bookworm) and then demonstrates reading images, capturing video, performing basic manipulations, and drawing bounding boxes and text.
Key topics:
- Installing opencv-contrib-python in a virtualenv and verifying with import cv2 and cv2.__version__.
- Reading and displaying an image from disk using OpenCV.
- Capturing live video from a USB camera using cv2.VideoCapture.
- Basic image manipulation operations: grayscale, Gaussian blur, Canny edges, dilation, and erosion.
- Drawing rectangles and overlaying text (including FPS) on a live video stream.
Presence of code / notebooks
The chapter contains multiple Python script examples (Lesson1–Lesson4) and shell commands but no Jupyter notebooks.
All examples are plain .py scripts intended to be run from a virtual environment on the Pi (Thonny or terminal).
The Visual Processing Backbone
OpenCV serves as the "connective tissue" between raw CMOS sensor data and AI model inputs. It handles the essential tasks of pixel manipulation, colorspace conversion, and visual feedback that allow an AI system to interact with the real world.
Functional Capabilities
- Grayscale Conversion: Reduces data dimensionality for models that prioritize shape over color.
- Gaussian Blur: Mitigates high-frequency noise that can trigger false positives in edge detection.
- Canny Edge Detection: Structural analysis using specific threshold parameters (100, 200) to isolate object outlines.
- Dilation & Erosion: Morphological operations used to refine mask structures and eliminate noise artifacts.
For the architect, OpenCV's utility extends to debugging and performance monitoring. By using resizing and stacking (np.hstack / np.vstack), developers can create a "tiled view" to monitor the original feed, grayscale transformation, and edge detection simultaneously. This real-time visualization, combined with FPS overlays, is essential for identifying bottlenecks in the processing pipeline before they reach the inference stage.
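A minimal sketch of such a tiled debugging view with an FPS overlay (camera index and tile size are my assumptions; the thresholds match the Canny values mentioned above):

import time
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # default USB camera
prev_time = time.time()

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Basic pipeline: grayscale -> Gaussian blur -> Canny edges.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur, 100, 200)

    # Convert single-channel results back to BGR so they can be tiled
    # next to the original color frame.
    gray_bgr = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    edges_bgr = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)

    small = lambda img: cv2.resize(img, (320, 240))
    tiled = np.hstack([small(frame), small(gray_bgr), small(edges_bgr)])

    # Instantaneous FPS overlay for spotting pipeline bottlenecks.
    now = time.time()
    fps = 1.0 / (now - prev_time)
    prev_time = now
    cv2.putText(tiled, f"FPS: {fps:.1f}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)

    cv2.imshow('Pipeline Debug View', tiled)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()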
Analysis of Python Example Code
1. Reading and displaying an image (Lesson1.py)
- Uses cv2.imread to load an image from a fixed path and checks for errors by testing image is None.
- Displays the loaded image in a window using cv2.imshow and waits indefinitely for a keypress before closing with cv2.destroyAllWindows.
2. Capturing video from USB camera (Lesson2.py)
- Opens the default camera with cv2.VideoCapture(0) and verifies it's ready via isOpened().
- In a loop, it reads frames, displays them in real time, and exits when q is pressed, then releases the capture and closes all windows.
3. Basic image manipulations in a tiled view (Lesson3.py)
- Demonstrates a full basic processing pipeline: color to grayscale, Gaussian blur to reduce noise, Canny edge detection, then dilation and erosion to modify edge structures.
- Uses resizing and stacking (np.hstack, np.vstack) plus grayscale-to-BGR conversion so all images can be visualized together in a 2×3 grid for easy comparison of each transformation.
4. Drawing rectangles and FPS on video (Lesson4.py)
- Continuously grabs frames from a camera and computes instantaneous FPS using the time difference between frames, then overlays that FPS as text on the frame.
- Draws a static green rectangle and label to mimic an object bounding box plus class label, illustrating how to annotate detections in real-time streams.
Introduction_to_Ultralytics_in_Raspberry_Pi_Environment
This page introduces Ultralytics YOLO11 and walks through exporting a YOLO11n model to TFLite and running it for real-time object detection on a Raspberry Pi using a webcam.
What the chapter covers
- Brief intro to Ultralytics, YOLO, and the tasks their models support (detection, segmentation, classification, pose, tracking).
- High-level workflow: train YOLO11n (via a provided Colab notebook) → export to TFLite → deploy and run on Raspberry Pi.
Key topics and steps
- Training/exporting YOLO11n to TFLite (32-bit and 16-bit) using a Colab notebook linked from the page.
- Setting up the Pi environment: project directory, Python virtualenv with --system-site-packages, installing ultralytics and tensorflow, and rebooting.
- Creating a YOLO project folder (Yolo_Files), saving the detection script plus best_float16.tflite and coco.txt, activating the venv, and running the script.
Code / notebook examples
- There is a linked Colab notebook for training and exporting YOLO11n to TFLite.
- On the chapter page itself there is one main Python example, test_yolo_coco.py, which performs live YOLO inference on webcam frames using the TFLite model.
YOLO11: Real-Time Object Detection at the Edge
The YOLO11 (You Only Look Once) architecture represents the state-of-the-art for high-speed object detection. By treating detection as a single regression problem, it bypasses the heavy overhead of multi-stage detectors, making it ideal for the Raspberry Pi 5.
Strategic Trade-offs: Accuracy vs. Latency
To achieve fluid performance on the Pi’s CPU, specific engineering levers must be pulled:
- Frame-Throttling: Implementing count % 3 != 0 logic skips inference on two out of three frames. This maintains a responsive video feed while ensuring the CPU isn't overwhelmed by redundant calculations.
- Resolution Lever (imgsz=224): Reducing the input resolution from the standard 640 to 224 provides a massive performance gain. While this reduces the model's ability to detect small or distant objects, it is a necessary trade-off for maintaining real-time frame rates on CPU-only deployments.
- Model Selection: Utilizing best_float16.tflite balances the need for reduced model size with the higher precision required for stable detection across varying lighting conditions.
Python script: test_yolo_coco.py
Analysis of what the code does (a condensed sketch follows this breakdown)
Model and labels setup
- Reads COCO class names from coco.txt into a list, so each prediction’s class ID can be mapped to a human-readable label.
- Instantiates a YOLO model from the quantized TFLite file best_float16.tflite, which is the YOLO11n model exported for efficient inference on the Pi.
Video capture and throttling
- Opens the default camera (VideoCapture(0)) and enters a loop grabbing frames until capture fails or the user quits.
- Uses count % 3 != 0 to skip two out of every three frames, effectively reducing inference frequency to improve perceived performance and FPS on CPU-only hardware.
Running detection and drawing results
- Calls model(frame, conf=0.7, imgsz=224) to run object detection with a confidence threshold of 0.7 on a 224×224 input, trading off accuracy for speed.
- For each detection, extracts bounding box coordinates (box.xyxy), confidence score, and class id, then:
- Draws a green rectangle around the object with cv2.rectangle.
- Writes a label like "person 0.85" just above the box using cv2.putText.
FPS calculation and display
- Computes FPS as 1 / (current_time − prev_time) each time inference is run, then overlays that FPS value in the top-left corner of the frame.
- Updates prev_time each iteration so the FPS reflects the time between consecutive processed frames.
Display and shutdown
- Shows the annotated frame in a window titled "Webcam" and checks for 'q' to allow clean exit.
- On exit, it releases the camera and destroys OpenCV windows to free resources.
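Putting those pieces together, here is a condensed sketch along the lines of the described test_yolo_coco.py (file names follow the course layout; the Ultralytics results API is used as documented, but treat this as an approximation rather than the course's exact script):

import time
import cv2
from ultralytics import YOLO

# COCO class names, one per line, as shipped with the course files.
with open('coco.txt') as f:
    class_list = [line.strip() for line in f]

# YOLO11n exported to a float16 TFLite file for CPU-friendly inference.
model = YOLO('best_float16.tflite')

cap = cv2.VideoCapture(0)
count = 0
prev_time = time.time()

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Frame throttling: skip two of every three frames to keep the CPU free.
    count += 1
    if count % 3 != 0:
        continue

    results = model(frame, conf=0.7, imgsz=224)
    for box in results[0].boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        conf = float(box.conf[0])
        cls_id = int(box.cls[0])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f"{class_list[cls_id]} {conf:.2f}", (x1, y1 - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

    # FPS measured between consecutive processed frames.
    now = time.time()
    fps = 1.0 / (now - prev_time)
    prev_time = now
    cv2.putText(frame, f"FPS: {fps:.1f}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2)

    cv2.imshow('Webcam', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()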
Hailo Hardware Acceleration: Maximizing Throughput
Introduction_to_Hailo_in_Raspberry_Pi_Environment
This page introduces Hailo’s AI accelerators, their software stack, compatible Raspberry Pi–based hardware, and gives step‑by‑step commands for installing and verifying the Hailo software on Raspberry Pi 5 and reComputer R1000 devices.
What the page covers
- Explains what Hailo is, highlighting its edge‑focused AI processors, Hailo‑8 NPU dataflow architecture, and the Hailo AI Software Suite (Model Build + Runtime environments).
- Describes the Hailo Dataflow Compiler (DFC), HailoRT runtime, and Hailo Model Zoo, including how ONNX/TFLite models are compiled into HEF binaries and deployed on Hailo‑enabled hardware.
Key topics on the page
- Conceptual sections: What is Hailo, key architectural features (dataflow vs Von Neumann), overview of the AI Software Suite, DFC, Runtime Suite, and Model Zoo with GitHub link.
- Hardware preparation: describes three Raspberry Pi 5-based AI kits, each with bullet lists for performance, connectivity, and form factor: the Raspberry Pi AI Kit (13 TOPS), the AI HAT+ (26 TOPS), and the reComputer AI R2130-12 (part of the R1000 series).
- Installation guides: Installing Hailo software on reComputer R1000: apt update/upgrade, raspi-config PCIe Gen3 setting, hailo-all install, and the same verification commands.
The ultimate performance tier involves shifting from traditional Von Neumann processing to Hailo’s Dataflow architecture. Unlike a CPU that fetches instructions sequentially, the Hailo NPU flows data through a silicon-realized map of the neural network. This allows for massive parallelizing of MAC operations, achieving high TOPS (Tera Operations Per Second) with minimal power draw.
Hailo Hardware Comparison
Technical Verification Guide
To unlock the full potential of the Hailo NPU, the PCIe bus must be optimized:
- PCIe Speed Configuration: Run sudo raspi-config, navigate to Advanced Options → A9 PCIe Speed, and enable Gen 3. This is mandatory; without it, the NPU will be bottlenecked by Gen 2 speeds.
- Software Installation: Deploy the full stack using sudo apt install hailo-all.
- Hardware Verification:
Confirm the PCIe link: lspci | grep Hailo
Verify firmware response: hailortcli fw-control identify
This excerpt is the reComputer R1000-specific part of the "Introduction to Hailo in Raspberry Pi Environment" page and gives a step-by-step procedure to update the system, enable PCIe Gen3, install the Hailo software, and verify that the Hailo-8L is visible and ready for use.
reComputer AI R2130-12 Section
What this section covers
- Guides you through updating the reComputer R1000's OS packages with sudo apt update and sudo apt full-upgrade, then configuring PCIe Gen3 via sudo raspi-config so the Hailo accelerator can use full bandwidth.
- Shows how to install the hailo-all package, reboot, and then verify both the Hailo firmware and PCIe device presence using hailortcli fw-control identify and lspci | grep Hailo.
- Closes Chapter 2 by stating that the Raspberry Pi environment is now ready for AI and that the next chapter will cover running pretrained and custom models.
Key topics in this section
System preparation: terminal access on reComputer R1000, apt update and full upgrade.
sudo apt update
sudo apt full-upgrade
PCIe configuration: using raspi-config → Advanced Options → A9 PCIe Speed → enabling Gen 3, and exiting with Finish.
Hailo software installation: install the Hailo software, then reboot to load the drivers and services:
sudo apt install hailo-all
sudo reboot
Verify the installation: check the firmware and software with this command:
hailortcli fw-control identify
Check that the Hailo-8L is connected by confirming that the Hailo-8L card is on the PCIe bus with this command:
lspci | grep Hailo
This section contains shell command "code blocks" only; there are no Python scripts or Jupyter notebooks here.
Any Python-side inference examples are deferred to later chapters (e.g., custom model deployment and runtime usage) rather than this environment-setup page; examples with HailoRT or HEF files appear in later chapters (notably Chapter 5), as explicitly mentioned in the architecture section.
------------------------------------------
Chapters 1 & 2 Conclusion
- By integrating these hardware-specific optimizations, the journey from Chapter 1's mathematical theory concludes in Chapter 2's reality: a high-speed, hardware-accelerated ecosystem capable of professional-grade edge intelligence.
- This project summarizes the key learning points from the first two chapters of the "Raspberry Pi From Zero to Hero" course, which focuses on deploying AI at the edge using a Raspberry Pi 5-based device, the reComputer AI R2130-12.
- Chapter 1 introduces the core theory of AI, including DNNs, CNNs, and quantization for efficient edge deployment, emphasizing that foundational knowledge is key to achieving predictable performance.
- Chapter 2 provides a hands-on guide to configuring the Raspberry Pi environment by setting up major AI frameworks like PyTorch, TensorFlow/LiteRT, and OpenCV for real-time vision tasks, with a crucial final step on installing and verifying the Hailo-8 NPU software stack (including enabling PCIe Gen3) to create a high-speed, hardware-accelerated ecosystem for professional-grade edge intelligence.
NEXT STEP
I plan to create another Hackster project that will complete the remaining four chapters.
THANKS for reviewing my projects, and please leave a comment with any questions you might have about the two chapters covered in this project.





