This project is a vision-only, sensor-free autonomous warehouse system where multiple small robots coordinate to transport parcels from induct stations to destination chutes without human intervention.
Three robots share the same arena, plan paths in real time, avoid collisions, deliver parcels, and return for the next task — all using external cameras and software logic. There are no sensors on the robots themselves. Every decision comes from vision, math, and a central controller.
At its core, this is a junk-drawer experiment in how much intelligence you can move off the robot and into the system.
The idea came from a simple question that sat unfinished for a long time:
Can you build a reliable multi-robot system without piling on hardware?
Instead of adding lidar, encoders, or IMUs, I wanted to see if a system could work using:
- reused cameras,
- printed markers,
- tape on the floor,
- and smarter coordination logic.
This project was revived under tight constraints — limited hardware, reused components, and a lot of trial and error. That made it a perfect fit for the Junk Drawer mindset: build something real with what you already have and figure out the rest along the way.
The entire system runs inside a 7 × 7 ft grid-based arena designed to mimic a simplified warehouse floor.
- Two induct stations act as parcel entry points
- Nine destination chutes receive parcels
- A tape-marked grid provides structured navigation
- Low walls keep robots and parcels contained
A single overhead camera watches the entire arena, acting as the system’s “eye in the sky.”
Each robot is intentionally simple:
- 6 × 6 inch footprint
- Differential drive
- A small tray to carry parcels
- A basic flipping mechanism to drop parcels into chutes
- No onboard sensors
All localization, planning, and decision-making happen offboard. This design choice forced the system to be robust to drift, timing errors, and imperfect vision — problems that can’t be solved by adding more hardware.
The system operates in three main stages:
1. Parcel Identification: Parcels are placed at induct stations and scanned using QR codes. This determines the destination chute before the robot moves.
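The QR payload format and chute numbering below are assumptions, not the project's actual encoding. A minimal sketch of turning a decoded payload into one of the nine destination chutes might look like:

```python
# Hypothetical payload format: each QR code encodes its destination
# as "CHUTE-<n>", with chute numbers 1-9 matching the nine chutes.

def chute_from_payload(payload: str) -> int:
    """Parse a decoded QR payload into a validated chute number."""
    prefix, _, number = payload.partition("-")
    if prefix != "CHUTE":
        raise ValueError(f"unrecognized payload: {payload!r}")
    chute = int(number)
    if not 1 <= chute <= 9:
        raise ValueError(f"chute {chute} out of range 1-9")
    return chute
```

With OpenCV, `cv2.QRCodeDetector().detectAndDecode(frame)` would supply the payload string from the induct-station camera frame.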
2. Tracking and Coordination: The overhead camera tracks ArUco markers mounted on each robot. Using this global view, the system:
- Estimates robot positions in real time
- Computes shortest paths on the grid
- Dynamically avoids collisions
- Prevents robots from blocking stations or chutes
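The pixel bounds and cell count below are assumptions (the arena is 7 × 7 ft, so a 7 × 7 grid of 1 ft cells is a plausible guess). A sketch of mapping a detected marker centre, in overhead-camera pixels, to a grid cell:

```python
# Assumed calibration: the arena occupies a known pixel rectangle in the
# overhead frame and is divided into a 7 x 7 grid of cells.
ARENA_PX = (40, 40, 680, 680)   # hypothetical (x0, y0, x1, y1) arena bounds in pixels
GRID = 7                        # cells per side

def pixel_to_cell(px: float, py: float) -> tuple[int, int]:
    """Map a marker centre in pixel coordinates to a (col, row) grid cell."""
    x0, y0, x1, y1 = ARENA_PX
    col = int((px - x0) / (x1 - x0) * GRID)
    row = int((py - y0) / (y1 - y0) * GRID)
    # Clamp so detections right on the arena edge stay inside the grid.
    return min(max(col, 0), GRID - 1), min(max(row, 0), GRID - 1)
```

In practice `cv2.aruco.detectMarkers` would supply the marker corners, from which the centre point is averaged before this conversion.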
All coordination happens centrally, like air traffic control for robots.
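The grid size and blocking rule here are assumptions. A minimal sketch of the central planner's shortest-path step, treating cells occupied by other robots as blocked:

```python
from collections import deque

def shortest_path(start, goal, blocked, size=7):
    """BFS over a size x size grid of (col, row) cells.

    Returns the cell sequence from start to goal, or None if every
    route is blocked (e.g. by another robot parked on the grid).
    """
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent chain back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < size and 0 <= ny < size \
                    and nxt not in blocked and nxt not in came_from:
                came_from[nxt] = cell
                queue.append(nxt)
    return None
```

Replanning whenever another robot's occupied cell changes gives the dynamic collision avoidance described above, at the cost of a cheap search per tick.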
3. Delivery and Return: When a robot reaches its destination:
- The tray flips to deposit the parcel
- The robot reorients itself
- It returns to an induct station, ready for the next task
This loop repeats continuously.
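The state names below are invented for illustration; the controller's real states may differ. The continuous loop can be sketched as a tiny state machine:

```python
# Hypothetical state names covering one full scan-deliver-return cycle.
STATES = ["AWAIT_PARCEL", "SCAN_QR", "DRIVE_TO_CHUTE", "FLIP_TRAY", "RETURN_TO_INDUCT"]

def next_state(state: str) -> str:
    """Advance one step in the delivery loop, wrapping back to the start."""
    i = STATES.index(state)
    return STATES[(i + 1) % len(STATES)]
```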
Please see the videos attached in the drive:
Image Processing Video: https://drive.google.com/file/d/1DHrphZ1PqZx880B23WV0zS63GIeEPee0/view
Overhead Cam: https://drive.google.com/file/d/14vDFSUkZIGEMv9P9A_TPrQmJq9Ggz9Bl/view
All cameras feed into a central PC, which acts as the warehouse brain:
- Vision processing via OpenCV
- Coordination logic in Python
- MQTT for lightweight robot communication
- A simple web dashboard for real-time monitoring
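The topic layout and message fields below are assumptions. A sketch of the kind of lightweight drive command the central controller might publish to a robot over MQTT:

```python
import json

def drive_command(robot_id: int, path: list) -> tuple[str, str]:
    """Build a per-robot MQTT topic and JSON payload for the next path segment."""
    topic = f"warehouse/robot/{robot_id}/cmd"   # hypothetical topic layout
    payload = json.dumps({"action": "drive", "path": path})
    return topic, payload
```

With paho-mqtt, sending it would be a single `client.publish(topic, payload)`; keeping messages this small is what makes MQTT a good fit for sensor-free robots that only receive commands.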
Watching three robots negotiate space without colliding is surprisingly satisfying.
This project didn’t work on the first try — or the fifth.
Some of the challenges included:
- Lighting changes breaking marker detection
- Calibration drift causing misaligned drops
- Tight turns creating temporary gridlock
- Communication delays requiring smarter sequencing
Each failure forced better logic, not more hardware. That constraint-driven iteration is what shaped the final system.
This project grew out of curiosity, constraints, and whatever parts were already on hand. Instead of adding more sensors or hardware, I chose to push software and coordination as far as possible. The Junk Drawer Competition celebrates exactly that spirit — turning limited resources, half-formed ideas, and persistence into something real. This build is proof that creativity often thrives not in perfect setups, but in the messy space where experimentation begins.