OpenClawPi is a modular skill-set repository focused on rapid integration and reuse of core robot functions. It covers key scenarios such as robotic arm control, grasping, visual perception, and voice interaction, providing out-of-the-box skill components for secondary robot development and application deployment.
I. Quick Start

OpenClaw Deployment

Visit the OpenClaw official website: https://openclaw.ai/
Execute the one-click installation command:
curl -fsSL https://openclaw.ai/install.sh | bash

Next, configure OpenClaw:
1. Select "YES"
2. Select "QuickStart"
3. Select "Update values"
4. Select your provider (recommended: free options like Qwen, OpenRouter, or Ollama)
5. Select the provider's model you wish to use
6. Select a default model
7. Select the app you will connect to OpenClaw
8. Select a web search provider
9. Select skills (not required for now)
10. Check all Hook options
11. Select "Restart"
12. Select "Web UI"
1. Clone the Skill Repository

git clone https://github.com/vanstrong12138/OpenClawPi.git

2. Prompt the Agent to Learn Skills

Using the vision skill as an example:
User: Please learn vl_vision_skill

📦 Skill Modules Overview

This article demonstrates the identification, segmentation, pose generation, and grasping of arbitrary objects using SAM3 and pose-generation tools.
Repositories

- GraspGen: https://github.com/vanstrong12138/GraspGen
- Agilex-College: https://github.com/agilexrobotics/Agilex-College/tree/master

Hardware Requirements

- x86 Desktop Platform
- NVIDIA GPU with at least 16GB VRAM
- Intel RealSense Camera

Software Environment

- OS: Ubuntu 24.04
- Middleware: ROS Jazzy
- GPU: RTX 5090
- NVIDIA Driver: Version 570.195.03
- CUDA: Version 12.8
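Before starting, it can help to check the machine against the list above. A minimal sketch using only the Python standard library (it assumes `nvidia-smi` is on the PATH once the driver is installed, and degrades gracefully if it is not):

```python
import platform
import shutil
import subprocess

# Report the local environment against the requirements listed above.
print("OS:", platform.platform())
print("Arch:", platform.machine())

smi = shutil.which("nvidia-smi")
if smi:
    # --query-gpu and --format are standard nvidia-smi flags
    result = subprocess.run(
        [smi, "--query-gpu=name,driver_version,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    print("GPU:", result.stdout.strip() or "nvidia-smi returned no output")
else:
    print("GPU: nvidia-smi not found (driver not installed yet?)")
```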
1. Install NVIDIA Graphics Driver

sudo apt update
sudo apt upgrade
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-driver-570
# Restart
reboot

2. Install CUDA Toolkit 12.8
- Go to the NVIDIA Official Website to download the CUDA runfile.
- Execute the installation command:
wget https://developer.download.nvidia.com/compute/cuda/12.8.1/local_installers/cuda_12.8.1_570.124.06_linux.run
sudo sh cuda_12.8.1_570.124.06_linux.run

- During installation, uncheck the first option ("Driver"), since the driver was installed in the previous step.
3. Add Environment Variables
echo 'export PATH=/usr/local/cuda-12.8/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc

4. Verify Installation

Execute nvcc -V to check the CUDA version information:
nvcc -V

5. Install cuDNN
- Download the cuDNN tar file from the NVIDIA Official Website and extract it.
- Execute the following commands to copy the cuDNN files into the CUDA directory:
sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
sudo cp cuda/lib/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*

6. Install TensorRT

Download the TensorRT tar file from the NVIDIA Official Website.
- Extract and move TensorRT to the /usr/local directory:
# Extract
tar -xvf TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-12.9.tar.gz
# Move the extracted directory to /usr/local
sudo mv TensorRT-10.16.0.72/ /usr/local/

- Test the TensorRT installation:
# Enter MNIST sample directory
cd /usr/local/TensorRT-10.16.0.72/samples/sampleOnnxMNIST
# Compile
make
# Run the executable found in bin
cd /usr/local/TensorRT-10.16.0.72/bin
./sample_onnx_mnist

SAM3 Deployment

- Python: 3.12 or higher
- PyTorch: 2.7 or higher
- CUDA: Compatible GPU with CUDA 12.6 or higher
1. Create Conda Virtual Environment
conda create -n sam3 python=3.12
conda deactivate
conda activate sam3

2. Install PyTorch and Dependencies
# For 50-series GPUs, CUDA 12.8 and Torch 2.8 are recommended
# Downgrade numpy to <1.23 if necessary
pip install torch==2.8.0 torchvision==0.23.0 torchaudio==2.8.0 --index-url https://download.pytorch.org/whl/cu128
cd sam3
pip install -e .

3. Model Download
- Submit the form to gain download access on HuggingFace: https://huggingface.co/facebook/sam3
- Or search via local mirror sites.
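With the environment set up, a quick sanity check of the PyTorch install can save debugging time later. A sketch (it assumes the cu128 wheels installed above and degrades gracefully if torch is missing):

```python
# Sanity-check the PyTorch/CUDA install from the steps above.
try:
    import torch
    status = f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}"
except ImportError:
    status = "torch is not installed; re-run the pip install command above"
print(status)
```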
The project outputs target_pose (end-effector pose), which can be manually adapted for different robotic arms.
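How that adaptation might look in outline (all class, field, and function names here are hypothetical illustrations, not the project's real API; consult your arm's SDK for the actual command interface):

```python
from dataclasses import dataclass

@dataclass
class TargetPose:
    """End-effector pose: position in metres, orientation as a quaternion."""
    x: float
    y: float
    z: float
    qx: float
    qy: float
    qz: float
    qw: float

def to_arm_command(pose: TargetPose) -> dict:
    # Many arm SDKs expect millimetres; the field names below are illustrative --
    # map them onto your own arm's command structure.
    return {
        "position_mm": (pose.x * 1000.0, pose.y * 1000.0, pose.z * 1000.0),
        "orientation_quat": (pose.qx, pose.qy, pose.qz, pose.qw),
    }

cmd = to_arm_command(TargetPose(0.25, 0.0, 0.125, 0.0, 0.0, 0.0, 1.0))
print(cmd["position_mm"])  # (250.0, 0.0, 125.0)
```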
1. Example: PiPER Robotic Arm
pip install python-can
git clone https://github.com/agilexrobotics/pyAgxArm.git
cd pyAgxArm
pip install .

Cloning

Clone this project to your local machine:
cd YOUR_PATH
git clone -b ros2_jazzy_version https://github.com/AgilexRobotics/GraspGen.git

Running the Project

1. Grasping Node
python YOUR_PATH/sam3/realsense-sam.py --prompt "Target Object Name in English"

2. Grasping Task Execution Controls
A = Zero-force mode (Master arm) | D = Normal mode + Record pose | S = Return to home
X = Replay pose | Q = Open gripper | E = Close gripper | P = Pointcloud/Grasp
T = Change prompt | G = Issue grasp command | Esc = Exit

3. Automatic Grasping Task
python YOUR_PATH/sam3/realsense-sam.py --prompt "Target Object Name" --auto
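The manual controls listed above map naturally onto a key-to-handler dispatch table. A minimal sketch (the handler names are hypothetical, not the script's actual functions; a stand-in controller just logs calls):

```python
# Hypothetical key-to-action dispatch for the manual grasping controls.
def make_dispatch(controller):
    return {
        "a": controller.zero_force_mode,
        "d": controller.record_pose,
        "s": controller.return_home,
        "x": controller.replay_pose,
        "q": controller.open_gripper,
        "e": controller.close_gripper,
        "p": controller.show_pointcloud,
        "t": controller.change_prompt,
        "g": controller.grasp,
    }

class DemoController:
    """Stand-in controller that only logs calls (the real one drives the arm)."""
    def __init__(self):
        self.log = []
    def __getattr__(self, name):
        return lambda: self.log.append(name)

controller = DemoController()
dispatch = make_dispatch(controller)
for key in "qge":          # open gripper, grasp, close gripper
    dispatch[key]()
print(controller.log)      # ['open_gripper', 'grasp', 'close_gripper']
```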

