Part 1 — The Genius Taxi Driver
Part 2 — OLLAMA — Getting Started Guide for AMD GPUs
Part 3 — LANGCHAIN — Getting Started Guide for AMD GPUs
Part 4 — Implementing Agentic AI in ROS2 for Robotics Control
Part 5 — Evaluating Tool Awareness of LLMs for Robotic Control
Overview of our Agentic AI ROS2 Node
In the previous article, Part 3, we implemented a generic ROS2 node with agentic AI functionality.
We implemented all the functionality, except for the robot-specific tools.
In this article, we will implement those robot-specific tools for two use cases:
- turtlesim (the simplest of all robots)
- robotic arm (the most complex use case I am working with)
The LANGCHAIN implementation for the turtlesim agent will implement the following robot tools:
- move_forward(distance: float) => str
- rotate(angle: float) => str
- get_pose() => str
The LANGCHAIN implementation defines these robot tools, along with a system prompt specific to the turtlesim agent:
This tool-enabled LLM is implemented in the ros2_ai_agent_turtlesim ROS2 node:
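As a rough sketch of the pattern (hypothetical names and placeholder values; the real node in the repository wraps these functions with LangChain's @tool decorator and drives turtlesim through rclpy publishers and subscribers on /turtle1/cmd_vel and /turtle1/pose), the tool bodies look like this:

```python
import math

def move_forward(distance: float) -> str:
    """Move the turtle forward by `distance` units."""
    # The real node publishes a geometry_msgs/Twist with linear.x = distance.
    twist = {"linear_x": float(distance), "angular_z": 0.0}
    return f"Moving forward {distance} units"

def rotate(angle: float) -> str:
    """Rotate the turtle by `angle` degrees."""
    # Twist angular velocities are in radians, so degrees must be converted.
    twist = {"linear_x": 0.0, "angular_z": math.radians(angle)}
    return f"Rotating {angle} degrees"

def get_pose() -> str:
    """Report the turtle's pose."""
    # Placeholder: the real node caches the latest /turtle1/pose message.
    pose = {"x": 5.5, "y": 5.5, "theta": 0.0}
    return f"x: {pose['x']}, y: {pose['y']}, theta: {pose['theta']}"
```

Each tool returns a short string so the LLM can quote the result directly in its answer, matching the "Return only the necessary actions and their results" instruction in the system prompt.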
If you have not done so already, start by cloning the llm-robot-control repository:
git clone https://github.com/AlbertaBeef/llm-robot-control
cd llm-robot-control

We need to install the required Python packages (langchain 0.3, ollama, openai), which are defined in the requirements.txt file:

pip3 install -r requirements.txt

Then build and install the ros2_ai_agent ROS2 package on a ROS2 Jazzy enabled Linux machine:
cd ros2_ai_agent
rosdep update && rosdep install --ignore-src --from-paths . -y
colcon build
source install/setup.bash

Don't forget to launch the OLLAMA server (refer to Part 2 for details on installing OLLAMA and launching the server):

ollama serve

We can now launch the "turtlesim" version of the agent.
In one terminal, launch the agent:
ros2 launch ros2_ai_agent start_turtlesim_agent.launch.py

This will execute the agent, which will again report its parameters and the system prompt it used for the LLM:

ROS2 AI Agent has been started
llm_api : "ollama"
llm_model : "gpt-oss:20b"
use_basic_tools : "True"
use_generic_tools : "True"
use_robot_tools : "True"
system_prompt : "You are a turtle control assistant for ROS 2 turtlesim.
You can check ROS 2 system status using these commands:
- get_ros_distro(): Get the current ROS distribution name
- get_domain_id(): Get the current ROS_DOMAIN_ID
You can check ROS 2 system status using these commands:
- list_topics(): List all available ROS 2 topics
- list_nodes(): List all running ROS 2 nodes
- list_services(): List all available ROS 2 services
- list_actions(): List all available ROS 2 actions
You can control the turtle using these commands:
- move_forward(distance): Move turtle forward by specified distance
- rotate(angle): Rotate turtle by specified angle in degrees
- get_pose(): Get current position and orientation of turtle
Return only the necessary actions and their results. e.g.
Human: What ROS distribution am I using?
AI: Current ROS distribution: humble
Human: What is my ROS domain ID?
AI: Current ROS domain ID: 0
Human: Show me all running nodes
AI: Here are the running ROS 2 nodes: [node list]
Human: Move the turtle forward 2 units
AI: Moving forward 2 units"
In a second terminal, publish a prompt to the /llm_prompt topic:
ros2 topic pub -1 /llm_prompt std_msgs/msg/String "{data: 'Get the current position of the turtle.'}"
ros2 topic pub -1 /llm_prompt std_msgs/msg/String "{data: 'Advance the turtle 10.0 units.'}"
ros2 topic pub -1 /llm_prompt std_msgs/msg/String "{data: 'Turn the turtle 90 degrees to the right.'}"

Experiment with the parameters, toggling groups of tools on/off, or specifying other LLMs.
Experiment with more complex prompts.
Are you able to draw a 5-point star ?
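As a hint for the star challenge: a 5-point star is five straight strokes with a 144-degree turn between them (the exterior angle of the {5/2} star polygon). This hypothetical helper, not part of the repository, generates the prompt sequence you could publish to /llm_prompt one at a time:

```python
def star_prompts(side: float = 3.0, points: int = 5) -> list[str]:
    """Generate the move/turn prompts that trace an n-point star."""
    # Exterior turn angle for a star polygon: 180 - 180/points degrees.
    turn = 180.0 - 180.0 / points  # 144.0 for a 5-point star
    prompts = []
    for _ in range(points):
        prompts.append(f"Advance the turtle {side} units.")
        prompts.append(f"Turn the turtle {turn} degrees to the right.")
    return prompts

for prompt in star_prompts():
    print(prompt)
```

Whether the LLM can work out the 144-degree turn on its own from a single "draw a 5-point star" prompt is exactly the kind of tool-awareness question Part 5 will evaluate.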
The LANGCHAIN implementation for the robotic arm agent will implement the following robot tools:
- get_current_pose() => str
- move_to_pose(x: float, y: float, z: float) => str
- move_to_named_target(target: str) => str
The LANGCHAIN implementation defines these robot tools, along with a system prompt specific to the robotic arm agent:
This tool-enabled LLM is implemented in the robotic arm variant of the ros2_ai_agent ROS2 node:
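As with the turtlesim agent, a rough sketch of the tool bodies (hypothetical names and placeholder values; the real node sends goals to MoveIt 2 through the move_action action server and queries the current end-effector pose) might look like this:

```python
# Named targets exposed in the system prompt ("home", "up").
NAMED_TARGETS = {"home", "up"}

def get_current_pose() -> str:
    """Report the current end-effector position."""
    # Placeholder: the real node queries MoveIt 2 / TF for the live pose.
    pose = {"x": 0.5, "y": 0.0, "z": 0.5}
    return f"x: {pose['x']}, y: {pose['y']}, z: {pose['z']}"

def move_to_pose(x: float, y: float, z: float) -> str:
    """Move the end effector to the given Cartesian coordinates."""
    # The real node builds a geometry_msgs/Pose goal and waits for the result.
    return f"Moving end effector to position x: {x}, y: {y}, z: {z}"

def move_to_named_target(target: str) -> str:
    """Move to a predefined MoveIt 2 target."""
    if target not in NAMED_TARGETS:
        return f"Unknown target: {target}"
    return f"Moving robot to {target} position"
```

Guarding move_to_named_target against unknown targets keeps the LLM from silently inventing positions that MoveIt 2 does not know about.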
If you have not done so already, clone the llm-robot-control repository and build the ros2_ai_agent package, as already described for the turtlesim agent.
The robotic arm agent requires the ur_simulation_gz package, from Universal Robots, which can be installed as follows:
sudo apt install ros-jazzy-ur
sudo apt install ros-jazzy-ur-simulation-gz

We can now launch the "robotic arm" version of the agent.
In one terminal, launch the agent:
ros2 launch ros2_ai_agent start_robotic_arm_agent.launch.py

The agent will report its parameters and system prompt, as follows:

ROS2 AI Agent for UR MoveIt2 has been started
llm_api : "ollama"
llm_model : "gpt-oss:20b"
use_basic_tools : "True"
use_generic_tools : "True"
use_robot_tools : "True"
Waiting for move_action server…
Action server is available!
system_prompt : "You are a UR robot control assistant using MoveIt 2.
You can check ROS 2 system status using these commands:
- get_ros_distro(): Get the current ROS distribution name
- get_domain_id(): Get the current ROS_DOMAIN_ID
You can check ROS 2 system status using these commands:
- list_topics(): List all available ROS 2 topics
- list_nodes(): List all running ROS 2 nodes
- list_services(): List all available ROS 2 services
- list_actions(): List all available ROS 2 actions
You can control the robot using these commands:
- move_to_pose(x, y, z): Move end effector to specified x, y, z coordinates
- get_current_pose(): Get current position of the end effector
- move_to_named_target(target_name): Move to predefined position (home, up)
Return only the necessary actions and their results. e.g.
Human: What ROS distribution am I using?
AI: Current ROS distribution: humble
Human: What is my ROS domain ID?
AI: Current ROS domain ID: 0
Human: Show me all running nodes
AI: Here are the running ROS 2 nodes: [node list]
Human: Move the end effector to position x=0.5, y=0.0, z=0.5
AI: Moving end effector to position x: 0.5, y: 0.0, z: 0.5
Human: Move robot to home position
AI: Moving robot to home position"
In a second terminal, publish a prompt to the /llm_prompt topic:
ros2 topic pub -1 /llm_prompt std_msgs/msg/String "{data: 'List the ROS topics.'}"
ros2 topic pub -1 /llm_prompt std_msgs/msg/String "{data: 'Move the robot to home position'}"
ros2 topic pub -1 /llm_prompt std_msgs/msg/String "{data: 'Get the current gripper pose'}"
ros2 topic pub -1 /llm_prompt std_msgs/msg/String "{data: 'Get the current position of end effector and reduce z value by 0.2'}"

Experiment with the parameters, toggling groups of tools on/off, or specifying other LLMs.
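Note that the last prompt forces the agent to chain two tools: read the pose with get_current_pose(), then lower the z coordinate and call move_to_pose(). This hypothetical helper (not part of the repository) mimics that chaining outside the LLM, given a pose string in the "x: ..., y: ..., z: ..." format the tool returns:

```python
def lower_end_effector(pose_str: str, dz: float = 0.2) -> tuple[float, float, float]:
    """Parse an 'x: ..., y: ..., z: ...' pose string and lower z by dz."""
    # Split "x: 0.5, y: 0.0, z: 0.5" into {"x": "0.5", "y": "0.0", "z": "0.5"}.
    parts = dict(p.split(": ") for p in pose_str.split(", "))
    x, y, z = float(parts["x"]), float(parts["y"]), float(parts["z"])
    # These are the arguments the agent should pass to move_to_pose(x, y, z).
    return (x, y, round(z - dz, 6))

print(lower_end_effector("x: 0.5, y: 0.0, z: 0.5"))  # (0.5, 0.0, 0.3)
```

Whether the LLM performs this read-modify-write sequence correctly is a good quick test of its tool awareness.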
Experiment with more complex prompts.
What works? What doesn't work? Share your results in the comments.
Conclusion
In this article, we augmented our generic agentic AI ROS2 node with robot-specific tools.
In the next article, we will implement a strategy to evaluate these agentic AI ROS2 nodes automatically.
Version History
2025/11/24 - Initial version