WIP (still working on the documents and .md files, plus a nice quick-start image and 3D files)
🦾 Building the AI Garage Lab: Local AI Control on a Raspberry Pi 5 with the AI HAT+ (13 TOPS), Arduino IDE, and Ollama for programming real hardware
Using OpenClaw is optional (please understand the dangers, and RTFM).
This article is meant to get you going. We plan to use this setup on our PMSG smart glasses project: a PMSGpt assistant to help with coding and to make our GitHub better through direct user feedback. We also want to add an agent to the PMSG glasses and feed data from its sensors back to the user. See the PMSG page at PSMG.ONLINE for details.

We're also experimenting with robotics and vision systems, and honestly, it's hard not to love how cool it is to see robot arms come to life. Whether it's Arduino-compatible arms or the ones from Seeed Studio, programming and controlling them creates an incredibly satisfying feedback loop between code and motion.
To build a strong vision layer without breaking the budget, we recommend picking up a cheap second-hand Microsoft Kinect (make sure you have the correct USB adapter). It gives you depth sensing and basic LiDAR-like capabilities right out of the box.
Adding an affordable LiDAR module can further improve the system. Structured depth data reduces the processing load on ML models and agents, helping them understand space more efficiently. That means you can run smarter vision loops, even on lower-spec hardware. As a bonus, taping off the camera lens gives you more privacy while keeping depth sensing, something we are also looking into for PMSG.
In short: robotics + depth sensing + local AI = a powerful, affordable experimental lab.
Why We Decided to Build This
In a small garage, a system powered by a Raspberry Pi + NPU HAT quietly begins to take shape. Not a chatbot in a browser window. Not another cloud dashboard.
But a system where artificial intelligence reaches out and touches the physical world. >.<
With this setup, AI doesn't stop at suggestions. It moves motors. It reads sensors. It adjusts its own instructions. It learns from what actually happens, not just from what was predicted.
Even development itself changes. GitHub workflows become conversational. The machine reviews code, suggests improvements, and reacts to user feedback, not weeks later, but instantly.
And slowly, almost inevitably, physical AI agents emerge.
Not science fiction robots.
Not billion-dollar infrastructure.
But small, practical systems, built on affordable hardware, that see, think, and act locally.
Most AI today lives in the cloud. By adding an RPi 5 + AI HAT + ML stack, you can run it locally, with OpenClaw optionally helping out.
For our PMSG Smart Glasses (open design wearable platform), we wanted something different:
A local AI assistant, PMSGpt, that can:
- Help us write and debug code
- Improve GitHub by analyzing user feedback
- Interact with real sensor data
- Eventually run lightweight agents inside the glasses
To do this properly, we needed a physical AI playground.
So we built one.
🧠 How It Works

The setup combines:
Raspberry Pi 5
→ Main processing unit
AI HAT+ (13 TOPS accelerator)
→ Handles local AI inference
Raspberry Pi 5 with 8 GB RAM (better), or an old gaming laptop / Mac mini. But the RPi rules thanks to https://connect.raspberrypi.com, which lets you use your Wi-Fi to connect remotely to the desktop or over SSH without having to share all your data. It works remotely, and you can stack a Grove Base Hat for Raspberry Pi on it for extra sensors and inputs.
Ollama & Qwen work great (local LLMs)
→ Generates and reviews code
→ Assists with debugging
→ Interprets sensor feedback
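As a minimal sketch of that code-review loop, here is how the Pi could ask a local Ollama model to review an Arduino sketch before flashing it. This assumes Ollama is running on its default port; the model name `qwen2.5-coder` is our assumption, so swap in whichever model you actually pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_review_request(code: str, model: str = "qwen2.5-coder") -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": "Review this Arduino sketch and point out bugs:\n\n" + code,
        "stream": False,  # ask for one complete answer instead of chunks
    }

def review_code(code: str) -> str:
    """Send the code to the local model and return its review text."""
    payload = json.dumps(build_review_request(code)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because everything goes through localhost, no code or sensor data ever leaves the Pi.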
Arduino IDE + Microcontrollers
→ Controls motors, sensors, robot arms
We have an Arduino Braccio + an Arduino Uno R4 WiFi (via USB-C and Wi-Fi API endpoints)
Using the IDE we also target the Xiao platform of PMSG (update soon) to build .bin flash files for production
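For the Wi-Fi side, here is a sketch of what commanding the arm from the Pi could look like. Everything in it is an assumption about our own firmware, not a published API: the arm's LAN address, the `/move` path, and the `joint`/`angle` parameter names are placeholders for whatever endpoints your Uno R4 WiFi sketch actually serves:

```python
import urllib.request

ARM_HOST = "http://192.168.1.50"  # assumed LAN address of the Uno R4 WiFi

def joint_url(joint: str, angle: int) -> str:
    """Build the request URL for one joint move (hypothetical /move endpoint)."""
    if not 0 <= angle <= 180:
        raise ValueError("hobby servo angle must be between 0 and 180 degrees")
    return f"{ARM_HOST}/move?joint={joint}&angle={angle}"

def move_joint(joint: str, angle: int) -> str:
    """Send the move command to the arm and return the firmware's reply text."""
    with urllib.request.urlopen(joint_url(joint, angle), timeout=5) as resp:
        return resp.read().decode()
```

Over USB-C the same commands could travel through a serial link instead; the HTTP route just makes it easy for an LLM-driven agent to call the arm as a tool.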
Optional Vision Layer
Microsoft Kinect (depth + camera) + USB adapter, a.k.a. a cheap LiDAR module
To get going with the Kinect, use libfreenect: https://github.com/OpenKinect/libfreenect
Source: https://github.com/OpenKinect
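Once libfreenect and its Python wrapper are installed, grabbing a depth frame takes only a few lines. The raw-to-meters formula below is the common approximation from the OpenKinect wiki, not a per-device calibration:

```python
import math

def kinect_raw_to_meters(raw: int) -> float:
    """Approximate conversion of an 11-bit Kinect raw depth value to meters
    (formula from the OpenKinect wiki; good enough for obstacle logic)."""
    return 0.1236 * math.tan(raw / 2842.5 + 1.1863)

def grab_depth_frame():
    """Grab one depth frame; needs a plugged-in Kinect and the freenect module."""
    import freenect  # Python wrapper shipped with libfreenect
    depth, _timestamp = freenect.sync_get_depth()
    return depth  # 480x640 array of raw depth values
```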
Pure image AI is heavy.
Depth sensors reduce processing load.
Using Kinect = low-cost LiDAR:
- Improves spatial understanding
- Reduces compute requirements
- Makes edge AI more practical
- Enables robotics experiments
Affordable. Hackable. Expandable.
Big plus: you can use only the LiDAR part and switch off / tape off the camera lens to get some more privacy.
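To make the "depth reduces compute" point concrete, here is a toy gate, our own illustration rather than anything from libfreenect: collapse the depth map into one nearest reading per column and only wake the heavier ML vision loop when something is actually close. Depth values are in millimeters, with 0 meaning "no reading", as the Kinect reports it:

```python
def nearest_obstacle_per_column(depth_mm):
    """Collapse a 2D depth map (rows of mm values) into one value per
    column: the closest reading, ignoring zeros (the sensor's 'no data')."""
    cols = len(depth_mm[0])
    nearest = []
    for c in range(cols):
        readings = [row[c] for row in depth_mm if row[c] > 0]
        nearest.append(min(readings) if readings else None)
    return nearest

def is_blocked(depth_mm, threshold_mm=1500):
    """True if anything is closer than the threshold: a cheap O(pixels)
    check that can run before any heavier ML vision step."""
    return any(v is not None and v < threshold_mm
               for v in nearest_obstacle_per_column(depth_mm))
```

An agent can poll `is_blocked` every frame and only hand the full image to a model when it returns True.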
📶 Connection to PMSG Smart Glasses

This AI Garage Lab is not just about robotics...
Itβs the foundation for:
- PMSGpt (AI coding assistant for our open hardware ecosystem)
- Sensor-driven wearable AI feedback
- Agent-based interaction inside smart glasses
- Local AI that respects privacy
Everything stays hackable and open.
🌍 Open Design Philosophy

PMSG is an open design project.
This AI Lab reflects that mindset:
- Local AI
- Transparent architecture
- Real hardware experimentation
- Practical, buildable systems
We're documenting everything, so others can replicate, improve, and extend it.
💡 What This Enables

- AI-assisted robotics
- Edge vision systems
- Smart wearable feedback
- Self-improving hardware loops
- Smarter GitHub workflows
- Physical AI agents