In the world of robotics and AI, the bridge between theoretical algorithms and the physical world—known as embodied intelligence—has often been complex and expensive to cross. The Hiwonder LeRobot SO-ARM101 changes that. It’s not just another robotic arm kit; it’s a fully open-source, dual-arm platform designed from the ground up for imitation learning, seamlessly integrated into the Hugging Face LeRobot ecosystem.
If you're holding this kit, you're holding a key to one of the most accessible gateways into real-world AI robotics. This guide will walk you through unboxing, setup, and running your first AI-powered task.
1. Before You Begin: Kit Overview & Prerequisites

First, understand what you have. The SO-ARM101 is a leader-follower system: you physically guide the Leader Arm through a task, and the Follower Arm learns to perform that task autonomously.
Step 1: Install Miniconda

Windows System Installation

① Download the Miniconda installer
From the official Miniconda installer page, locate Miniconda3-py311_25.7.0-2-Windows-x86_64.exe and download it to your computer.

② Run the installer, keeping the default options.

③ Change the package source
Open the Tsinghua Open Source Mirror help page (anaconda | Mirror Site Help | Tsinghua Open Source Mirror) to access the Miniconda software repository, and locate the third-party source highlighted in the figure below.
Press Win + R to open the Run dialog, type cmd, and press Enter to open a terminal.
Run the command from the mirror guide in the terminal to generate the .condarc file.
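For reference, a .condarc configured for the Tsinghua mirror typically looks something like the fragment below. The exact channel URLs are taken from the Tsinghua mirror help page at the time of writing; verify them against the current page before use.

```yaml
channels:
  - defaults
show_channel_urls: true
default_channels:
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2
custom_channels:
  conda-forge: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
  pytorch: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
```

After editing the file, run `conda clean -i` so the index cache is rebuilt from the new mirror.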
For the complete guide, you can check the Hiwonder LeRobot tutorials.

3. The Magic: Your First Imitation Learning Project
With setup complete, let’s make the robot learn a simple pick-and-place task.
Phase 1: Data Collection – Teaching by Doing
Concept: You will use the Leader Arm to perform the task 20-30 times. The system records everything.
Process:
1. Position the target object (e.g., a colored cube) in the workspace.
2. Run the data collection script:
```shell
python -m lerobot.record --robot.type=so101_follower --robot.port=COM24 --robot.id=my_awesome_follower_arm --robot.cameras="{ handeye: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}, front: {type: opencv, index_or_path: 1, width: 640, height: 480, fps: 30}}" --teleop.type=so101_leader --teleop.port=COM22 --teleop.id=my_awesome_leader_arm --display_data=true --dataset.repo_id=${HF_USER}/demo --dataset.num_episodes=20 --dataset.single_task="Grab the screwdriver"
```
3. Physically guide the Leader Arm through the complete task: approach, grasp, lift, move, release.
4. The system synchronously records:
All servo joint angles.
Video from the gripper camera (detailed view).
Video from the external camera (contextual view).
5. Repeat for multiple demonstrations. More data = a better model.
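To get a feel for how much data this produces, here is a rough back-of-the-envelope estimate. Only the 30 fps and two-camera setup come from the record command above; the episode length is an illustrative assumption, not a number from the kit docs.

```python
# Rough dataset-size estimate for the demonstration run above.
# episodes and fps match the record command; seconds_per_episode is
# an illustrative assumption, not a number from the kit docs.
episodes = 20
seconds_per_episode = 10
fps = 30
frames = episodes * seconds_per_episode * fps
cameras = 2  # handeye + front views
images = frames * cameras
print(f"{frames} frames, {images} camera images")  # 6000 frames, 12000 camera images
```

Even a modest demonstration session yields thousands of synchronized state-image pairs, which is what makes imitation learning on this scale practical.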
Phase 2: Model Training – Creating the "Brain"
Concept: The LeRobot framework uses your demonstration data to train an Action Chunking Transformer (ACT) model, a state-of-the-art imitation learning algorithm.
Process:
- The data is automatically formatted into a dataset.
- Launch training with a command like:
```shell
python src/lerobot/scripts/train.py --dataset.repo_id=${HF_USER}/demo --policy.type=act --output_dir=outputs/train/act_so101_test --job_name=act_so101_test --policy.device=cuda --wandb.enable=false --policy.push_to_hub=false
```
- Training runs on your GPU. Monitor the loss: it should decrease over time, meaning the model is learning to replicate your actions. Thanks to the Hugging Face integration, you can even start from a shared pre-trained model to speed this up.
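To make "Action Chunking" concrete: an ACT policy predicts a short chunk of future actions at every step, and the overlapping predictions for the same timestep are blended with exponential weights (temporal ensembling). The snippet below is a simplified one-dimensional sketch of that blending, not the LeRobot implementation; the weighting direction and the value of `m` are illustrative choices.

```python
import math

def ensemble_action(t, chunks, m=0.1):
    """Blend every prediction made for timestep t.

    chunks: list of (start, actions) pairs, where actions[i] is the
    action predicted for timestep start + i. Each matching prediction
    is weighted by exp(-m * i), so chunks predicted longer ago
    contribute less (a simplified variant of ACT's temporal ensembling).
    """
    weights, values = [], []
    for start, actions in chunks:
        i = t - start  # position of timestep t inside this chunk
        if 0 <= i < len(actions):
            weights.append(math.exp(-m * i))
            values.append(actions[i])
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

With a single chunk covering timestep t this just returns that chunk's action; with several overlapping chunks, the blend smooths out jitter between consecutive predictions, which is one reason ACT produces fluid motion.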
Phase 3: Deployment – Autonomous Execution

Concept: Deploy the trained model to the Follower Arm for live inference.
The Moment of Truth:
1. Run the deployment script:
```shell
python -m lerobot.record --robot.type=so101_follower --robot.port=COM24 --robot.id=my_awesome_follower_arm --robot.cameras="{ handeye: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}, front: {type: opencv, index_or_path: 1, width: 640, height: 480, fps: 30}}" --display_data=true --dataset.repo_id=${HF_USER}/eval_so101 --dataset.single_task="Grab the screwdriver" --policy.path=outputs/train/act_so101_test/checkpoints/100000/pretrained_model
```
2. The Follower Arm will spring to life. Using only its dual-camera vision to locate the object and its trained model to decide on actions, it will attempt to execute the pick-and-place task.
3. It won't be perfect on the first try, but this is embodied AI in action: a physical system perceiving, deciding, and acting.
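Conceptually, the deployment script runs a loop like the sketch below. The `policy` and `robot` interfaces here are hypothetical stand-ins for the trained ACT model and the follower-arm driver, not the actual LeRobot API.

```python
import time

def run_policy(policy, robot, hz=30, steps=300):
    """Minimal observe -> decide -> act loop at a fixed rate.

    `policy` and `robot` are hypothetical objects standing in for a
    trained ACT model and the follower-arm driver, respectively.
    """
    dt = 1.0 / hz
    for _ in range(steps):
        obs = robot.get_observation()       # joint angles + camera frames
        action = policy.select_action(obs)  # model picks the next action
        robot.send_action(action)           # command the follower arm
        time.sleep(dt)
```

The fixed rate matters: the policy was trained on 30 fps demonstrations, so running inference at roughly the same rate keeps the action timing consistent with the data.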
Use the BusLinker Software: The BusLinker V3.0 GUI is excellent for real-time servo monitoring, testing individual movements, and troubleshooting connection issues.
Common Pitfalls:
- "Servos not found": check USB permissions on Linux (`sudo chmod 666 /dev/ttyACM0`) and double-check power and data cable connections.
- Jittery motion: ensure the mechanical structure is fully tightened and that you're using the correct, stable power supply.
- Training errors: verify that your Conda environment (`lerobot`) is active and all dependencies are installed.
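When chasing the "servos not found" error on Linux, a quick way to see which serial devices are present is a stdlib snippet like this (the device name patterns are typical for USB serial adapters; yours may differ):

```python
import glob
import sys

def candidate_ports():
    # USB serial devices usually enumerate as /dev/ttyACM* or /dev/ttyUSB*
    # on Linux; on Windows, look for COM ports in Device Manager instead.
    if sys.platform.startswith("linux"):
        return sorted(glob.glob("/dev/ttyACM*") + glob.glob("/dev/ttyUSB*"))
    return []

print(candidate_ports())
```

Unplug one arm, rerun the snippet, and compare the output to identify which port belongs to the leader and which to the follower.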
Where to Go From Here:
- Experiment: Try different tasks (stacking, pushing, drawing).
- Explore Hugging Face Hub: Download community datasets and models to try immediately.
- Dive Deeper: Modify the ACT model architecture, or experiment with reinforcement learning after initial imitation learning.
- Contribute: Share your own datasets and trained models back to the Hugging Face community to help others.
The Hiwonder SO-ARM101 demystifies embodied AI. In one weekend, you can go from unboxing to a robot that learns from you. Its true power lies in its open-source philosophy and deep Hugging Face integration, connecting you directly to a global community of innovators.
This isn't just about building a robot; it's about building the future, one demonstration at a time. Now, go and teach it something amazing. What will you teach your SO-ARM101 first? Share your projects and questions in the comments below!