LePaniPuri was born at the Embodied AI Hackathon 2025, organized by Seeed Studio. Our goal was simple but ambitious: to teach robots the cultural art of making Pani Puri, one of India's most beloved street snacks.
We wanted to explore how embodied intelligence could be applied to a task that is instinctive for humans but rich in perception, coordination, and timing for robots. The result was LePaniPuri: a bimanual robot system powered by Jetson Thor and GR00T N1.5, capable of picking, filling, and serving puris with remarkable precision.
### 🤖 Key Features
We used two pairs of SO-101 robotic arms, configured in a leader–follower arrangement:
- The leader arms capture human demonstrations and teleoperation data.
- The follower arms replicate the learned behavior in real time.
This configuration allowed us to model coordinated bimanual manipulation: one arm holds and positions the puri while the other performs fine tasks like poking holes, stuffing fillings, or pouring flavored water.
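As a rough illustration of the idea (not the project's actual code), the core of such a teleoperation loop looks like the sketch below; `So101Arm` and its methods are hypothetical stand-ins for LeRobot's SO-101 drivers:

```python
# Conceptual sketch of the leader-follower teleoperation loop. `So101Arm`
# is a hypothetical stand-in for LeRobot's SO-101 drivers, whose API differs.
import time

class So101Arm:
    """Hypothetical minimal interface to one SO-101 arm over a serial port."""
    def __init__(self, port: str):
        self.port = port  # e.g. a stable symlink created by setup_usb_ports.sh

    def read_joint_positions(self) -> list[float]:
        raise NotImplementedError  # query the servo bus for current joint angles

    def write_joint_targets(self, q: list[float]) -> None:
        raise NotImplementedError  # command the servos toward the given angles

left_leader = So101Arm("/dev/tty_left_leader")      # port names are illustrative
left_follower = So101Arm("/dev/tty_left_follower")
right_leader = So101Arm("/dev/tty_right_leader")
right_follower = So101Arm("/dev/tty_right_follower")

while True:
    # Mirror each leader's pose onto its follower at ~50 Hz; logging these
    # (observation, action) pairs is what produces the training demonstrations.
    left_follower.write_joint_targets(left_leader.read_joint_positions())
    right_follower.write_joint_targets(right_leader.read_joint_positions())
    time.sleep(0.02)
```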
### ⚙️ Jetson Thor Setup
Flash Jetson Thor with JetPack 7.0 (L4T 38.2) using the instructions here.
After flashing JetPack 7.0, install the full set of JetPack components on the Thor:
```bash
sudo apt update
sudo apt install nvidia-jetpack
```
Next, set `nvidia` as the default runtime in the Docker daemon configuration file:
```bash
sudo apt install -y jq
sudo jq '. + {"default-runtime": "nvidia"}' /etc/docker/daemon.json | \
  sudo tee /etc/docker/daemon.json.tmp && \
  sudo mv /etc/docker/daemon.json.tmp /etc/docker/daemon.json
```
Make sure the */etc/docker/daemon.json* file looks like the following:
```json
{
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  },
  "default-runtime": "nvidia"
}
```
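Restart the Docker daemon so the new default runtime takes effect, then verify it:

```bash
sudo systemctl restart docker
docker info | grep "Default Runtime"   # should print: Default Runtime: nvidia
```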
Add your username to the docker group:
```bash
sudo usermod -aG docker $USER
newgrp docker
```
On Jetson Thor, install Docker Compose v2, which is distributed through Ubuntu 24.04's official apt repository:
```bash
sudo apt install docker-compose-v2
```
### Getting Started
Clone the lepanipuri repository:
```bash
git clone git@github.com:robo-trail/lepanipuri.git -b pre-release
cd lepanipuri
```
Fetch the lepanipuri/thor-isaac-groot:v0.1 Docker image from Docker Hub:
```bash
docker pull lepanipuri/thor-isaac-groot:v0.1
```
Since there are four SO-101 arms, get the ID_SERIAL_SHORT of each one:
```bash
udevadm info --query=property --name=/dev/tty{YOUR} | grep ID_SERIAL_SHORT
```
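The output should include a line like the following (here, the serial of the left follower arm used below):

```
ID_SERIAL_SHORT=5AAF270447
```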
Once you have the ID_SERIAL_SHORT of all four arms, update the ./lerobot/setup_usb_ports.sh file with your specific values:
```bash
# Define the ID_SERIAL_SHORT values for each device
DEVICE1_ID_SERIAL_SHORT="5AAF270447" # left follower
DEVICE2_ID_SERIAL_SHORT="5AB0179027" # right follower
DEVICE3_ID_SERIAL_SHORT="5A7A015778" # left leader
DEVICE4_ID_SERIAL_SHORT="5AB0181062" # right leader
```
Now create the symbolic links so that, even if the port assignments change on boot, you can rely on stable paths:
```bash
./lerobot/setup_usb_ports.sh
```
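For reference, scripts like this typically work by writing udev rules that match each adapter's ID_SERIAL_SHORT and attach a stable symlink; a minimal sketch of such a rule file, with an illustrative file name and symlink names (not necessarily what the script actually emits):

```bash
# /etc/udev/rules.d/99-so101-arms.rules (illustrative)
SUBSYSTEM=="tty", ENV{ID_SERIAL_SHORT}=="5AAF270447", SYMLINK+="tty_left_follower"
SUBSYSTEM=="tty", ENV{ID_SERIAL_SHORT}=="5AB0179027", SYMLINK+="tty_right_follower"
SUBSYSTEM=="tty", ENV{ID_SERIAL_SHORT}=="5A7A015778", SYMLINK+="tty_left_leader"
SUBSYSTEM=="tty", ENV{ID_SERIAL_SHORT}=="5AB0181062", SYMLINK+="tty_right_leader"
```

Rules like these can be reloaded without a reboot via `sudo udevadm control --reload-rules && sudo udevadm trigger`.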
Use the run.sh script to attach a bash terminal inside the thor_docker container:
```bash
./docker/run.sh -t thor
```
Run the policy server inside one terminal of the Docker container:
```bash
./Isaac-GROOT/policy_server.sh
```
In another terminal inside the Docker container, run the policy client:
```bash
./Isaac-GROOT/policy_client.sh
```
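The policy server hosts the trained GR00T N1.5 model, while the client streams observations from the cameras and arms and executes the actions it receives back. A minimal Python sketch of that loop, with hypothetical stubbed functions (the actual scripts wrap Isaac GR00T's inference service, whose API may differ):

```python
# Hypothetical sketch of the policy client loop; get_observation,
# query_policy_server, and apply_action are stubs, not the project's API.
import numpy as np

def get_observation() -> dict:
    """Collect current camera frames and follower joint states (stub)."""
    return {
        "images": np.zeros((2, 480, 640, 3), dtype=np.uint8),  # two camera views
        "state": np.zeros(12, dtype=np.float32),  # 6 joints x 2 follower arms
    }

def query_policy_server(obs: dict, instruction: str) -> np.ndarray:
    """Send the observation to the policy server, get an action chunk back (stub)."""
    return np.zeros((16, 12), dtype=np.float32)  # e.g. 16 future joint-target vectors

def apply_action(action: np.ndarray) -> None:
    """Command both follower arms with one joint-target vector (stub)."""

while True:
    chunk = query_policy_server(get_observation(), "stuff a potato ball into the puri")
    for action in chunk:  # execute the chunk, then query the server again
        apply_action(action)
```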
### 🍛 The Task Flow
The full robot routine unfolds as follows:
1. **Pick and place puris** — grasp individual puris and position them on a plate.
2. **Poke a hole** — gently pierce the puri using a gripper-mounted pin.
3. **Stuff fillings** — insert a small potato ball inside the puri cavity.
4. **Pour flavored water** — squeeze the bottle to add *spicy or sweet* water, completing the snack.
Each step required precise visual feedback, stable force control, and synchronized arm coordination — all handled through GR00T N1.5 and Thor's onboard GPU.
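To make the orchestration concrete, here is a purely illustrative sketch of how such a routine could be sequenced as language-conditioned goals; `run_policy_until_done` is a hypothetical helper, not part of the released code:

```python
# Illustrative only: each sub-task becomes a language instruction for the policy.
def run_policy_until_done(instruction: str) -> None:
    """Stream actions from the policy server until the sub-task completes (stub)."""

for instruction in [
    "pick a puri and place it on the plate",
    "poke a hole in the puri",
    "stuff a potato ball into the puri",
    "pour the flavored water",
]:
    run_policy_until_done(instruction)
```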
---
### 📊 Training and Datasets
We plan to release our training dataset and pipeline on **Hugging Face**, along with a **Brev-based training workflow** for retraining or fine-tuning models on new tasks.
This will allow the broader community to experiment with new recipes, manipulation techniques, and model architectures.
---
### 🧩 Challenges We Faced
- Achieving **low-latency teleoperation** across x86 and ARM platforms
- Managing **USB serial consistency** across multiple SO-101 arms
- Synchronizing **policy server–client communication** in real-time
- Ensuring **consistent torque calibration** across arms for soft manipulation tasks
Each challenge helped refine our deployment strategy, making LePaniPuri a reproducible benchmark for future embodied AI research.
---
### 🚀 What's Next
We're working on:
- Release Sphinx documentation
- Release the Thor Docker image for LePaniPuri
- Release the dataset on Hugging Face
- Release the Brev training workflow

---
This project was developed during the Embodied AI Hackathon 2025 by Seeed Studio, powered by NVIDIA Jetson Thor.
We’d like to thank:
- The SO-101 arm creators for their open-source design
- The Groot team for enabling embodied learning
- The Hugging Face LeRobot community for dataset standards
- And the broader open-source ecosystem for making robotics accessible