In this project, we'll explore how to combine computer vision with robotic arm control. We'll build a system that uses a depth camera to detect colored blocks and curves, then processes this data to guide an AgileX PIPER robotic arm through precise manipulation tasks. This is a great starting point for anyone interested in vision-based robotics!
## What You'll Learn

- Color block and curve detection using OpenCV
- 3D coordinate extraction from depth camera data
- Point cloud processing with curve fitting and interpolation
- Integrating computer vision with ROS and robotic arm control
- Path planning and end-effector trajectory control
## Hardware

- Orbbec Petrel (aligned depth + RGB images: 640×400@30fps)
- Intel RealSense D435 (optional - aligned depth + RGB images: 640×480@30fps)
- AgileX PIPER Robotic Arm
## Software Setup

### 1. PCL with on_nurbs

Compile and install PCL with the `on_nurbs` module enabled. Refer to the official PCL documentation for Linux compilation:

```bash
# Enable the on_nurbs surface module during CMake configuration
cmake .. -DBUILD_surface_on_nurbs=ON
```
### 2. PIPER Manipulator Driver

Check out the driver setup guide: https://github.com/agilexrobotics/piper_sdk/blob/1_0_0_beta/README(ZH).MD
### 3. PIPER ROS Control

For ROS integration, see: https://github.com/agilexrobotics/piper_ros/blob/noetic/README.MD
## Getting Started

### Step 1: Launch the Depth Camera

Start the camera driver (using the Orbbec Petrel as an example):

```bash
roslaunch astra_camera dabai_dc1.launch
```
### Step 2: Color Block Detection

Start the color detection node:

```bash
rosrun cube_det cube_det
```
Two windows will appear:
- `hsv_image`: adjust the HSV sliders to fine-tune color detection
- `origin_image`: click on target colors to automatically extract them
Click on your target color in the `origin_image` window to automatically search for and extract it:
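The `cube_det` source isn't reproduced here, but the click-to-pick idea is easy to sketch with plain OpenCV. In this minimal standalone example, the file name, tolerance values, and window handling are illustrative stand-ins, not the node's actual implementation:

```python
import cv2
import numpy as np

# Illustrative tolerances around the clicked HSV value (tune per scene);
# note this simple band does not handle hue wrap-around for reds.
H_TOL, S_TOL, V_TOL = 10, 60, 60
lower = upper = None

def on_click(event, x, y, flags, img):
    """Sample the clicked pixel's HSV value and build a threshold band."""
    global lower, upper
    if event == cv2.EVENT_LBUTTONDOWN:
        h, s, v = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[y, x].astype(int)
        lower = np.array([max(h - H_TOL, 0), max(s - S_TOL, 0), max(v - V_TOL, 0)], np.uint8)
        upper = np.array([min(h + H_TOL, 179), min(s + S_TOL, 255), min(v + V_TOL, 255)], np.uint8)

img = cv2.imread("frame.png")  # stand-in for a live camera frame
cv2.namedWindow("origin_image")
cv2.setMouseCallback("origin_image", on_click, img)

while True:
    display = img.copy()
    if lower is not None:
        mask = cv2.inRange(cv2.cvtColor(img, cv2.COLOR_BGR2HSV), lower, upper)
        # Outline the largest matching blob as the detected block
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            cv2.rectangle(display, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("origin_image", display)
    if cv2.waitKey(30) == 27:  # Esc to quit
        break
```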
The 3D coordinates of detected blocks will be visualized in RViz:
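Going from a detected pixel to that 3D coordinate is a standard pinhole back-projection using the aligned depth value and the camera intrinsics (`fx`, `fy`, `cx`, `cy`, typically read from the camera driver's `CameraInfo` topic; the exact topic name depends on your setup). A minimal sketch:

```python
import numpy as np

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) plus depth in meters
    -> 3D point in the camera's optical frame."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# e.g. deproject(320, 200, 0.75, fx, fy, cx, cy)
```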
### Step 3: Curve Detection

Make sure the camera driver is running (same launch as Step 1):

```bash
roslaunch astra_camera dabai_dc1.launch
```
Start the curve detection node:

```bash
rosrun cube_det line_det
```
The system automatically handles reflection-induced gaps in the curve by using curve fitting and interpolation to create smooth, continuous trajectories:
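The fitting here relies on PCL's `on_nurbs` module (hence the build flag earlier). To illustrate the same gap-bridging idea in Python, a smoothing B-spline fit over the surviving points can be resampled uniformly across the gap; the data and smoothing factor below are illustrative:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Ordered 3D curve points with a reflection-induced gap (illustrative data)
pts = np.array([
    [0.00, 0.00, 0.10], [0.02, 0.01, 0.10], [0.04, 0.03, 0.11],
    # gap here, where reflections wiped out the depth readings
    [0.12, 0.10, 0.12], [0.14, 0.10, 0.12], [0.16, 0.09, 0.11],
])

# Fit a smoothing cubic B-spline through the surviving points; s > 0 tolerates noise
tck, _ = splprep(pts.T, s=1e-5, k=3)

# Resample densely and uniformly, interpolating across the gap
u = np.linspace(0.0, 1.0, 100)
x, y, z = splev(u, tck)
trajectory = np.column_stack([x, y, z])  # smooth, continuous waypoints
```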
### Step 4: Start the PIPER Manipulator

Find and activate the CAN bus connection:

```bash
# Locate the CAN port
./find_all_can_port.sh

# Activate the connection
./can_activate.sh
```
Launch the PIPER control node:

```bash
roslaunch piper start_single_piper.launch
```
### Step 5: Configure Inverse Kinematics

Set up Pinocchio for IK calculations (see the Pinocchio README for details):

```bash
python piper_pinocchio.py
```
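The actual IK setup lives in `piper_pinocchio.py`. As a rough sketch of what a damped least-squares (CLIK) solver looks like with Pinocchio's Python bindings, note that the URDF path and end-effector frame name below are hypothetical placeholders:

```python
import numpy as np
import pinocchio as pin

# Hypothetical URDF path and end-effector frame name for the PIPER arm
model = pin.buildModelFromUrdf("piper_description.urdf")
data = model.createData()
ee_frame = model.getFrameId("gripper_base")  # assumed frame name

def solve_ik(q, target, iters=200, eps=1e-4, damping=1e-6, dt=0.1):
    """Damped least-squares CLIK: iterate joint updates until the
    end-effector pose error falls below eps."""
    for _ in range(iters):
        pin.forwardKinematics(model, data, q)
        pin.updateFramePlacements(model, data)
        err = pin.log(data.oMf[ee_frame].actInv(target)).vector  # 6D pose error
        if np.linalg.norm(err) < eps:
            return q
        J = pin.computeFrameJacobian(model, data, q, ee_frame)  # local-frame Jacobian
        # Damped pseudo-inverse step toward the target pose
        dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(6), err)
        q = pin.integrate(model, q, dq * dt)
    return q

q0 = pin.neutral(model)
target = pin.SE3(np.eye(3), np.array([-0.344, 0.0, 0.110]))  # home pose from Step 6
q_sol = solve_ik(q0, target)
```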
### Step 6: Set the Home Position

Define the manipulator's home position:

```bash
rostopic pub /pos_cmd piper_msgs/PosCmd "{
  x: -0.344,
  y: 0.0,
  z: 0.110,
  roll: 0.0,
  pitch: 0.0,
  yaw: 0.0,
  gripper: 0.0,
  mode1: 1,
  mode2: 0
}"
### Step 7: Visualize the Path

Add the RViz plugin to visualize the `/line_path` topic:
You should now see the generated end-effector trajectory:
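If you want to publish your own trajectory for this display, a `nav_msgs/Path` is what RViz's Path plugin renders. This sketch assumes `/line_path` carries that message type; the waypoints and frame name are illustrative:

```python
import rospy
from geometry_msgs.msg import PoseStamped
from nav_msgs.msg import Path

rospy.init_node("line_path_demo")
pub = rospy.Publisher("/line_path", Path, queue_size=1, latch=True)

waypoints = [(-0.30, 0.00, 0.10), (-0.28, 0.02, 0.11), (-0.26, 0.03, 0.12)]  # illustrative

path = Path()
path.header.frame_id = "arm_base"  # frame assumed from the transform step below
path.header.stamp = rospy.Time.now()
for x, y, z in waypoints:
    pose = PoseStamped()
    pose.header = path.header
    pose.pose.position.x, pose.pose.position.y, pose.pose.position.z = x, y, z
    pose.pose.orientation.w = 1.0  # identity orientation
    path.poses.append(pose)
pub.publish(path)
```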
### Step 8: Execute the Path

Run the path execution script:

```bash
rosrun cube_det path_confer.py
```
The script supports three keyboard commands (a sketch of the `r` transform step follows the list):

- `r`: record the current frame's point cloud, transform it from `camera_color_frame_optical` to `arm_base` coordinates, and generate the control path
- `s`: send one point from the path to the manipulator per keypress
- `p`: continuously publish the entire path to the manipulator
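As a rough sketch of the `r` step, here is how a cloud can be re-expressed in `arm_base` coordinates with tf2. The real script does this once per keypress rather than on every message, and the input topic name here is an assumption:

```python
import rospy
import tf2_ros
from sensor_msgs.msg import PointCloud2
from tf2_sensor_msgs.tf2_sensor_msgs import do_transform_cloud

rospy.init_node("cloud_transform_demo")
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)
pub = rospy.Publisher("/transformed_cloud", PointCloud2, queue_size=1)

def on_cloud(cloud):
    # Look up camera optical frame -> arm_base and re-express the cloud
    tf = buf.lookup_transform("arm_base", cloud.header.frame_id,
                              rospy.Time(0), rospy.Duration(1.0))
    pub.publish(do_transform_cloud(cloud, tf))

rospy.Subscriber("/camera/depth/points", PointCloud2, on_cloud)  # topic name assumed
rospy.spin()
```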
Press `r` to confirm and generate the trajectory, published on `/transformed_cloud`:
Press `s` or `p` to start manipulator motion along the trajectory!
## Next Steps

This project demonstrates the fundamentals of vision-guided manipulation. You can extend it by:
- Adding multiple object tracking
- Implementing grasp planning algorithms
- Combining with force feedback control
- Scaling to different robotic platforms
Feel free to adapt this approach to your own projects and share your results!
Happy building! 🚀