Control a robotic arm naturally using just your hand gestures! This project demonstrates how to implement intuitive teleoperation of the AgileX PIPER robotic arm using a depth camera and gesture recognition.
## Project Overview

This project implements a low-cost teleoperation system that allows you to control the position of a PIPER robotic arm's end effector using hand gestures captured by a depth camera. Simply move your hand in 3D space, and the robot arm follows!
Key Features:
- Real-time hand tracking and gesture recognition
- 3D position mapping from hand to robot workspace
- Visual feedback with RViz
- Easy calibration using hand gestures
Note: This implementation currently supports position control only (X, Y, Z coordinates). Orientation control is not included.
## Hardware Requirements

Essential Components:
- AgileX PIPER Robotic Arm
- Depth Camera (choose one):
  - Orbbec Petrel (640×400 @ 30fps with aligned depth/RGB)
  - Intel RealSense D435 (640×480 @ 30fps with aligned depth/RGB)
## Software Requirements

- Ubuntu with ROS Noetic
- ROS Noetic packages (as required by the project)
- MediaPipe library for gesture recognition
- Pinocchio inverse kinematics library
## Installation

### Step 1: Clone the Repository

```bash
git clone https://github.com/agilexrobotics/AgileX-College.git
cd AgileX-College
```

### Step 2: Install MediaPipe

```bash
pip install mediapipe
```

### Step 3: Set Up the PIPER Robotic Arm

- Driver Setup: Follow the instructions at PIPER SDK
- ROS Control Node: Follow the instructions at PIPER ROS

### Step 4: Set Up Inverse Kinematics

Set up the inverse kinematics environment following the Pinocchio README.

### Step 5: Build the Package

```bash
catkin_make
source devel/setup.bash
```

## How to Use

### Step 1: Start the PIPER Robotic Arm

- Connect the CAN module to your PC
- Launch the arm motion node:
```bash
roslaunch piper_ros piper_arm.launch
```

### Step 2: Start Inverse Kinematics

```bash
rosrun piper_pinocchio ik_node.py
```

### Step 3: Launch the Depth Camera

For RealSense D435:

```bash
roslaunch realsense2_camera rs_camera.launch align_depth:=true
```

### Step 4: Start Gesture Detection

```bash
rosrun handpose_det handpose_det.py
```

### Step 5: Move to the Custom Home Position

The system uses a custom "Home" position as the reference point. This is necessary because the default PIPER Home position has joint constraints that limit X- and Y-axis control.
Send the robot to the custom Home position via the /pos_cmd topic.
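As an illustration, a minimal one-shot publisher might look like the sketch below. The `piper_msgs` import path and the home coordinates are assumptions; check the repository for the actual message package and use your own calibrated values.

```python
#!/usr/bin/env python3
# Minimal sketch: send the arm to a custom Home position via /pos_cmd.
# Assumptions: PosCmd lives in piper_msgs, and the coordinates below are
# placeholders -- substitute the project's actual custom Home pose.
import rospy
from piper_msgs.msg import PosCmd

rospy.init_node("go_home")
pub = rospy.Publisher("/pos_cmd", PosCmd, queue_size=1)
rospy.sleep(1.0)  # give the publisher time to connect

home = PosCmd()
home.x, home.y, home.z = 0.2, 0.0, 0.3   # hypothetical Home coordinates (meters)
home.roll = home.pitch = home.yaw = 0.0  # position control only
home.gripper = 1
home.mode1 = 1
home.mode2 = 0
pub.publish(home)
```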
### Step 6: Visualize in RViz

```bash
rviz
```

In RViz, you'll see:
- Red dots: Finger joint positions
- Green dot: Palm center
- Red arrow: Teleoperation base coordinate frame (after calibration)
### Step 7: Calibrate the Base Frame

1. Hold your hand up with all five fingers extended (OPEN_HAND gesture)
2. Close your hand into a fist (FIST gesture)
3. Hold the fist for 3 seconds
4. The base coordinate frame is now set at your hand's current position
You'll see a /base_frame_origin pose published and visualized in RViz as a red arrow.
### Step 8: Teleoperate the Arm

After setting the base frame, simply move your hand in 3D space:
- The robot arm's end effector will mirror your hand movements
- Hand position is mapped relative to the base origin
- Robot position is mapped relative to the custom Home point
## How It Works

### 1. Hand Detection & Tracking
- Uses MediaPipe to detect hand landmarks in real-time
- Tracks 21 keypoints per hand
- Calculates palm center from detected landmarks
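A minimal sketch of this step using the MediaPipe Hands API is shown below. The palm-center heuristic (averaging the wrist and finger-base landmarks) is an assumption, not necessarily the project's exact formula.

```python
import cv2
import mediapipe as mp

# Sketch: detect hand landmarks and estimate the palm center in pixels.
mp_hands = mp.solutions.hands

with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    frame = cv2.imread("frame.png")               # stand-in for a live camera frame
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB input
    results = hands.process(rgb)
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark  # 21 normalized keypoints
        palm_ids = [0, 5, 9, 13, 17]              # wrist + finger MCP joints (assumed)
        u = sum(lm[i].x for i in palm_ids) / len(palm_ids)
        v = sum(lm[i].y for i in palm_ids) / len(palm_ids)
        h, w = frame.shape[:2]
        print("palm center (pixels):", int(u * w), int(v * h))
```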
### 2. Depth Mapping
- Converts 2D hand position to 3D world coordinates using depth data
- Camera intrinsics transform pixel coordinates to metric coordinates
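This is the standard pinhole back-projection. A sketch is below; the intrinsics `fx`, `fy`, `cx`, `cy` come from the camera's calibration (e.g. the camera_info topic of the aligned stream).

```python
import numpy as np

def pixel_to_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with its depth (meters) to a 3D point in the
    camera frame using the pinhole model. Intrinsics are read from the
    camera's calibration of the aligned depth/RGB stream."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])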
### 3. Gesture Recognition
- Recognizes hand states (OPEN_HAND, FIST)
- Uses gesture timing for calibration triggers
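One common heuristic, sketched below, compares each fingertip to the joint below it, plus a timer for the 3-second hold. This classifier is an assumption for illustration, not necessarily the project's own.

```python
import time

# Sketch of a FIST/OPEN_HAND classifier over MediaPipe's 21 landmarks.
# Assumed heuristic: a finger is "curled" when its tip is closer to the
# wrist than its PIP joint is.
TIP_IDS, PIP_IDS = [8, 12, 16, 20], [6, 10, 14, 18]

def classify(lm):
    def dist2(a, b):
        return (a.x - b.x) ** 2 + (a.y - b.y) ** 2
    wrist = lm[0]
    curled = sum(dist2(lm[t], wrist) < dist2(lm[p], wrist)
                 for t, p in zip(TIP_IDS, PIP_IDS))
    if curled == 4:
        return "FIST"
    if curled == 0:
        return "OPEN_HAND"
    return "UNKNOWN"

# Trigger calibration once a fist has been held for 3 seconds.
fist_since = None

def update(lm):
    global fist_since
    if classify(lm) == "FIST":
        fist_since = fist_since or time.time()
        if time.time() - fist_since >= 3.0:
            return True   # calibrate: capture the base frame origin here
    else:
        fist_since = None
    return False
```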
### 4. Coordinate Transformation
- Maps hand movement relative to base frame
- Transforms to robot workspace coordinates
- Publishes position commands to PIPER arm
The system publishes target poses to control the robot:
```python
def publish_target_pose(self, relativeX, relativeY, relativeZ):
    # Note the X/Y swap and Z sign flip: hand motion measured in the
    # camera frame is remapped onto the robot's base-frame axes.
    targetX = self.endPosX + relativeY
    targetY = self.endPosY + relativeX
    targetZ = self.endPosZ - relativeZ

    targetPose = PosCmd()
    targetPose.x = targetX
    targetPose.y = targetY
    targetPose.z = targetZ
    # Position control only: orientation is held at zero
    targetPose.pitch = 0
    targetPose.yaw = 0
    targetPose.roll = 0
    targetPose.gripper = 1
    targetPose.mode1 = 1
    targetPose.mode2 = 0
    self.pos_cmd_pub.publish(targetPose)
```

## Topics & Messages

Published Topics:
- `/pos_cmd` - Target position commands (PosCmd)
- `/base_frame_origin` - Calibrated base frame (Pose)
- `/marked_image` - Annotated camera feed (Image)
- `/hand_markers` - Hand landmark visualization (MarkerArray)
Subscribed Topics:
- `/camera/color/image_raw` - RGB camera feed
- `/camera/aligned_depth_to_color/image_raw` - Aligned depth image
- `/end_effector_pose` - Current end effector position
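As an illustration of consuming one of these topics, a minimal subscriber that caches the current end-effector position might look like this. The `geometry_msgs/Pose` message type is an assumption; check the project for the actual type on `/end_effector_pose`.

```python
import rospy
from geometry_msgs.msg import Pose

# Sketch: cache the latest end-effector position for later use.
# The Pose message type for /end_effector_pose is an assumption.
end_pos = {"x": 0.0, "y": 0.0, "z": 0.0}

def on_pose(msg):
    end_pos.update(x=msg.position.x, y=msg.position.y, z=msg.position.z)

rospy.init_node("ee_pose_listener")
rospy.Subscriber("/end_effector_pose", Pose, on_pose)
rospy.spin()
```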
## Tips & Troubleshooting

Best Practices:
- Ensure good lighting for reliable hand detection
- Keep your hand within the camera's depth range (typically 0.3m - 3m)
- Perform calibration in a stable position
- Avoid rapid movements initially
Common Issues:
- If a commanded position offset exceeds the limit (>100 mm), the system logs a warning (a sketch of such a guard follows this list)
- Make sure depth and RGB images are properly aligned
- Check CAN bus connection if arm doesn't respond
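A minimal version of such a guard is sketched below; the threshold, names, and placement are illustrative rather than the project's actual implementation.

```python
import rospy

MAX_OFFSET_M = 0.1  # the 100 mm limit mentioned above (illustrative)

def check_offset(relative_x, relative_y, relative_z):
    """Warn and reject commands whose relative displacement exceeds the limit.
    Sketch of the guard described above; names are hypothetical."""
    if any(abs(v) > MAX_OFFSET_M for v in (relative_x, relative_y, relative_z)):
        rospy.logwarn("Relative offset exceeds %.0f mm; command ignored",
                      MAX_OFFSET_M * 1000)
        return False
    return True
```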
## Future Improvements

Potential improvements to this project:
- Add orientation control (roll, pitch, yaw)
- Implement gripper control via pinch gesture
- Add safety boundaries and collision detection
- Support multi-hand control for dual-arm systems
- Create custom gesture library for different commands
## Resources

- GitHub Repository: AgileX College
- PIPER SDK: Documentation
- PIPER ROS: Documentation