In most homes, dining tables quickly become cluttered after meals with dishes, utensils, crumbs, and napkins scattered around. Cleaning up is repetitive, time-consuming, and far from enjoyable.
We asked ourselves: Can we build a home robot that takes care of these chores, not just sweeping floors but also resetting your dining table for the next meal?
That's how AURA (Autonomous Utility Robot for home and Automated cleaning) was born: a project that blends robotics, imitation learning, simulation, and edge AI to automate practical household tasks.
What can AURA do
AURA autonomously:
- Detects and classifies items on a cluttered dining table (e.g., plates, napkins, utensils, food leftovers).
- Segregates used items into appropriate bins (e.g., trash, storage).
- Resets the table neatly for the next dining experience — placing plates, utensils, and napkins in their designated spots.
It’s a proof of concept for a future where robots take care of repetitive tasks, giving humans more time for meaningful work and family.
How it works
1. Data Collection
- Recorded real-world demonstrations using a SO-101 robotic arm in a lead–follower teleoperation setup.
- Collected multi-modal data (RGB images, action trajectories, language instructions), totaling 410 episodes across 7 tasks.
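To make the data-collection step concrete, here is a minimal sketch of what one recorded timestep and episode might look like. The field names and structure are hypothetical illustrations, not the actual LeRobot dataset schema; in a lead–follower setup, the leader arm's joint readings serve as the action labels for the follower.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TimeStep:
    """One multi-modal sample from a lead-follower demonstration (illustrative)."""
    rgb: Dict[str, bytes]        # camera name -> encoded frame (wrist, front, top)
    leader_joints: List[float]   # joint angles read from the lead arm (the "action")
    follower_joints: List[float] # joint angles of the follower arm (the "state")
    instruction: str             # natural-language task description

@dataclass
class Episode:
    task: str
    steps: List[TimeStep] = field(default_factory=list)

def record_step(episode, cameras, leader, follower, instruction):
    """Append one synchronized multi-modal sample to the episode."""
    episode.steps.append(TimeStep(rgb=cameras, leader_joints=leader,
                                  follower_joints=follower, instruction=instruction))

# Toy example: a 3-step episode for a "clear the plates" task
ep = Episode(task="clear_plates")
for t in range(3):
    record_step(ep,
                cameras={"wrist": b"", "front": b"", "top": b""},
                leader=[0.1 * t] * 6,
                follower=[0.1 * t] * 6,
                instruction="Put the used plate in the bin")
print(len(ep.steps))  # 3
```

Each of the 410 real episodes would contain many such timesteps, with the three camera streams and both arms' joint states sampled at a fixed control rate.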
2. Learning & Training
- Used real-world data for imitation learning.
- Trained policies using LeRobot and NVIDIA Isaac GR00T N1.5 for policy optimization.
- Also trained the diffusion encoder.
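At its core, imitation learning here is behavior cloning: fit a policy so that its predicted actions match the demonstrated (expert) actions. The sketch below shows that idea on a toy scalar problem with a linear policy and plain gradient descent on the mean-squared error; the real system instead trains a large diffusion-based policy on image, state, and language inputs.

```python
# Toy "demonstrations": state s, expert action a = 2*s + 0.5
demos = [(k / 10.0, 2.0 * (k / 10.0) + 0.5) for k in range(20)]

# Linear policy a_hat = w*s + b, trained to minimize MSE against expert actions
w, b, lr = 0.0, 0.0, 0.1
for epoch in range(500):
    gw = gb = 0.0
    for s, a in demos:
        err = (w * s + b) - a          # prediction error vs. the demonstration
        gw += 2 * err * s / len(demos) # gradient of MSE w.r.t. w
        gb += 2 * err / len(demos)     # gradient of MSE w.r.t. b
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # 2.0 0.5 -- recovers the expert mapping
```

The same objective, scaled up to high-dimensional observations and action chunks, is what the GR00T/LeRobot training stage optimizes.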
3. Deployment
- Deployed trained policies on NVIDIA Jetson AGX Thor, enabling real-time inference at the edge.
- Integrated control with the SO-101 robotic arm and multiple RGB cameras for 3D scene understanding and manipulation.
System pipeline:
- Perception: RGB cameras (wrist-mounted, front-mounted, and top-mounted) detect and localize objects; the wrist camera enables more precise grasps.
- Policy Inference: The GR00T-trained imitation-learning model grounds the human's language instruction and infers grasp and placement actions.
- Motion Execution: The SO-101 executes the high-level actions using trajectory planning.
- Feedback Loop: Fresh camera observations refine subsequent actions, closing the imitation-learning loop.
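The perceive-infer-execute cycle above can be sketched as a simple closed loop. Everything here is a mock stand-in (the function names and the toy "world" are illustrative, not the real AURA APIs): perception reports what remains on the table, the language-conditioned policy picks an action, the arm executes it, and the loop repeats on fresh observations.

```python
# Mock world: items still on the dining table
table = ["plate", "napkin", "fork", "crumbs"]

def perceive(table):
    """Stand-in for multi-camera detection: report what is still on the table."""
    return list(table)

def policy(observation, instruction):
    """Stand-in for the trained policy: pick one item to clear, or signal done."""
    return ("remove", observation[0]) if observation else ("done", None)

def execute(action, table):
    """Stand-in for SO-101 trajectory execution."""
    verb, item = action
    if verb == "remove":
        table.remove(item)

instruction = "Clear the table and reset it"
while True:
    obs = perceive(table)            # Perception
    act = policy(obs, instruction)   # Policy inference (language-conditioned)
    if act[0] == "done":
        break
    execute(act, table)              # Motion execution
    # Loop repeats: new observations close the feedback loop

print(table)  # []
```

In the real system, each iteration runs on the Jetson-class edge device at the policy's control rate, with the three RGB streams feeding every inference step.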
Example run
- Initial Scene: A cluttered dining table with plates, napkins, and utensils.
- Robot Action: AURA detects used items, removes them, and places fresh utensils.
- Result: A spotless, reset table ready for the next meal.
AURA demonstrates a scalable foundation for general-purpose home assistance, from dishwashing to laundry folding and beyond. We envision a future where robots like AURA become everyday companions, quietly handling repetitive chores so humans can focus on what truly matters.
Team
Sumanth Nirmal Gavarraju
Anup Patel
Acknowledgments
Special thanks to Seeed Studio, Hugging Face, and NVIDIA for providing the hardware and edge computing support that powered this project.