This article is the third in a series exploring how to control robots with LLMs:
Part 1 — The Genius Taxi Driver
Part 2 — OLLAMA — Getting Started Guide for AMD GPUs
Part 3 — LANGCHAIN — Getting Started Guide for AMD GPUs
Part 4 — Implementing Agentic AI in ROS2 for Robotics Control
Part 5 — Evaluating Tool Awareness of LLMs for robotic control
LANGCHAIN Overview
LANGCHAIN provides a unified API for the wide variety of LLMs in the industry, including OLLAMA for locally run LLMs, as well as API access to cloud-based LLMs such as OpenAI, Anthropic, etc…
It goes well beyond this unified API, acting as a middleware between LLMs and other features such as:
- system and user prompt management
- context memory management
- tool integration
- etc…
LangChain offers several execution paradigms, including:
- chain
- agent
The “chain” paradigm allows the definition of linear workflows for simpler use cases. In this paradigm, processing steps, and their order, are pre-determined.
In the following diagram, the pre-processing could be as simple as formatting the prompt to a template, or as sophisticated as document retrieval for RAG applications. The post-processing could be output parsing for the final response.
The “agent” paradigm allows the definition of more complex use cases. In this case, the LLM decides if and when to call external “tools” that allow it to interact with the external world, and iterate until it has found a suitable output response.
This is the paradigm we will be using.
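The contrast between the two paradigms can be sketched in plain Python. This is a conceptual illustration only, not LangChain's actual API: the function names and the decision-dictionary keys are my own invention.

```python
# Conceptual sketch only: a "chain" applies pre-determined steps in order,
# while an "agent" loops, letting the model decide which tool to call next.
# Function names and dict keys are illustrative, not LangChain's API.

def run_chain(prompt, steps):
    """Chain paradigm: a fixed pipeline (pre-process -> LLM -> post-process)."""
    result = prompt
    for step in steps:
        result = step(result)
    return result

def run_agent(prompt, model, tools, max_iters=5):
    """Agent paradigm: the model iterates, calling tools until it has an answer."""
    observation = prompt
    for _ in range(max_iters):
        decision = model(observation)  # returns a tool request or a final answer
        if decision["type"] == "final":
            return decision["answer"]
        # Call the chosen tool and feed its result back to the model
        observation = tools[decision["tool"]]()
    return None
```

The key difference is the loop: in the chain, the author decides the steps; in the agent, the model decides at run time, which is exactly what we want to evaluate.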
Why LANGCHAIN ?
LANGCHAIN is not the only framework providing an abstraction layer for various LLMs, as well as tools for integrating data sources and external APIs. Alternatives include:
- AutoGen, from Microsoft
- AISuite, from Andrew Ng
- etc…
There comes a point when a choice has to be made, and I have chosen LANGCHAIN for the following reasons:
- longer history
- larger adoption
Also, LANGCHAIN has been used to implement additional functionality specific to robotics applications:
- ROSA (NASA JPL) : https://github.com/nasa-jpl/rosa
- RAI (RobotecAI) : https://github.com/RobotecAI/rai/
LANGCHAIN can be installed using pip install, as shown below:
pip3 install langchain
Additional optional packages can also be installed:
pip3 install langchain-core langchain-community
pip3 install langchain-openai langchain-ollama
Creating our Agentic AI node
The goal of this article is to understand which open-source LLMs have “tool awareness” for a ROS2-based robotics application. In order to evaluate this, we need to create agentic AI nodes to perform the tests. In this article, we will start with a generic agent.
Our generic agent, called ros2_ai_agent, will be built with LANGCHAIN as follows:
The LLM will be implemented with one of the following integrations:
- OpenAI — baseline, that we will be comparing against
- OLLAMA — our target open-source models being evaluated
The following parameters can be used to specify which API and LLM to use:
- llm_api:=openai|ollama (default == ollama)
- llm_model:={model}:{variant} (default == gpt-oss:20b)
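A minimal sketch of how these two parameters might map to LangChain's backend classes is shown below. The helper name and the lazy imports are my own; the repository's actual wiring may differ.

```python
def make_llm(llm_api: str = "ollama", llm_model: str = "gpt-oss:20b"):
    """Hypothetical helper: pick the LangChain chat-model backend from the
    llm_api / llm_model node parameters."""
    if llm_api == "ollama":
        # Local models served by `ollama serve`
        from langchain_ollama import ChatOllama
        return ChatOllama(model=llm_model)
    if llm_api == "openai":
        # Requires OPENAI_API_KEY in the environment
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(model=llm_model)
    raise ValueError(f"unsupported llm_api: {llm_api}")
```

Because both backends implement the same chat-model interface, the rest of the agent code does not need to know which one was selected.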
We will also provide a system prompt that instructs the LLM which agent it is asked to be, and which tools are at its disposal.
I have read somewhere that an LLM can get confused if we provide it with too many tools. For this reason, my implementation will be configurable with the following parameters:
- use_basic_tools:=True|False : whether (or not) to include the basic tools (get_ros_distro(), get_domain_id())
- use_generic_tools:=True|False : whether (or not) to include the generic tools (list_topics(), list_nodes(), list_services(), list_actions())
- use_robot_tools:=True|False : whether (or not) to include the robot tools
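The gating might look like the following sketch. The tool bodies are stand-ins (in the real node they are exposed to the LLM via LangChain's @tool decorator), and build_toolset is a name I made up for illustration.

```python
import os

# Stand-in tool bodies; in the real node these are wrapped with LangChain's
# @tool decorator so the LLM can call them. Only the two basic tools are
# fleshed out here.
def get_ros_distro() -> str:
    return os.environ.get("ROS_DISTRO", "unknown")

def get_domain_id() -> str:
    return os.environ.get("ROS_DOMAIN_ID", "0")

def build_toolset(use_basic_tools=True, use_generic_tools=True, use_robot_tools=False):
    """Hypothetical helper: assemble the tool list handed to the agent,
    gated by the three node parameters."""
    tools = []
    if use_basic_tools:
        tools += [get_ros_distro, get_domain_id]
    if use_generic_tools:
        pass  # += [list_topics, list_nodes, list_services, list_actions]
    if use_robot_tools:
        pass  # += robot-specific tools (covered in the next article)
    return tools
```

Keeping the gating in one place makes it easy to shrink the tool list when testing whether a smaller set reduces LLM confusion.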
This tool-enabled LLM is implemented in the ros2_ai_agent ROS2 node:
The ros2_ai_agent will subscribe to the following topic for its input:
- llm_prompt
It will also interact with a robot, which will be illustrated and described in the next article.
In order to have its “tool awareness” evaluated, it will also publish the following two topics:
- llm_tool_calls
- llm_output
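The node's topic interface can be sketched without rclpy as a pure function from the incoming prompt to the two outgoing messages. handle_prompt and the stubbed agent below are hypothetical; the real node wires this logic to std_msgs/msg/String subscribers and publishers.

```python
# Dependency-free sketch of the node's message flow. In the real node,
# `prompt` arrives as a std_msgs/msg/String on /llm_prompt, and the two
# returned strings are published on /llm_tool_calls and /llm_output.
def handle_prompt(prompt: str, agent) -> tuple[str, str]:
    """Run the agent on one prompt; return (tool_calls_msg, output_msg)."""
    result = agent(prompt)  # e.g. {"tool_calls": [...], "output": "..."}
    tool_calls_msg = ",".join(result["tool_calls"])
    output_msg = result["output"]
    return tool_calls_msg, output_msg

# Stub standing in for the LangChain AgentExecutor:
stub_agent = lambda p: {"tool_calls": ["list_nodes"],
                        "output": "Here are the running ROS 2 nodes: [...]"}
```

Publishing the tool calls separately from the final answer is what lets an external evaluator check whether the tools were actually invoked, rather than merely mentioned in the text.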
We can experiment with these concepts using the following source code that accompanies this article. Start by cloning the llm-robot-control repository:
git clone https://github.com/AlbertaBeef/llm-robot-control
cd llm-robot-control
We need to install the required python packages (langchain 0.3, ollama, openai), which are defined in the requirements.txt file.
pip3 install -r requirements.txt
Then build and install the ros2_ai_agent ROS2 package on a ROS2 Jazzy enabled Linux machine:
cd ros2_ai_agent
rosdep update && rosdep install --ignore-src --from-paths . -y
colcon build
source install/setup.bash
Don’t forget to launch the OLLAMA server (refer to Part 2 for details on installing OLLAMA and launching the server):
ollama serve
We can now launch the “generic” version of the agent (without robot-specific tools), in a separate terminal, using the following ROS2 command:
ros2 launch ros2_ai_agent start_generic_agent.launch.py
This will launch our ros2_ai_agent, which will report the values of its parameters, as well as the system prompt it used for the LLM:
ROS2 AI Agent has been started
llm_api : “ollama”
llm_model : “gpt-oss:20b”
use_basic_tools : “True”
use_generic_tools : “True”
use_robot_tools : “False”
system_prompt : “
You are a ROS 2 system information assistant.
You can check ROS 2 system status using these commands:
- get_ros_distro(): Get the current ROS distribution name
- get_domain_id(): Get the current ROS_DOMAIN_ID
You can check ROS 2 system status using these commands:
- list_topics(): List all available ROS 2 topics
- list_nodes(): List all running ROS 2 nodes
- list_services(): List all available ROS 2 services
- list_actions(): List all available ROS 2 actions
Return only the necessary actions and their results. e.g
Human: What ROS distribution am I using?
AI: Current ROS distribution: humble
Human: What is my ROS domain ID?
AI: Current ROS domain ID: 0
Human: Show me all running nodes
AI: Here are the running ROS 2 nodes: [node list]“
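The prompt above, the selected LLM, and the enabled tools are combined into a LangChain tool-calling agent. The following is a hypothetical sketch of that wiring, consistent with the AgentExecutor trace shown later; the exact construction in the repository may differ.

```python
def agent_prompt_messages(system_prompt: str):
    """Messages for the agent's prompt template; the agent_scratchpad
    placeholder is required by create_tool_calling_agent."""
    return [
        ("system", system_prompt),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]

def build_agent(llm, tools, system_prompt):
    """Hypothetical sketch: wire the LLM, tools, and system prompt into a
    LangChain tool-calling agent wrapped in an AgentExecutor."""
    from langchain.agents import AgentExecutor, create_tool_calling_agent
    from langchain_core.prompts import ChatPromptTemplate
    prompt = ChatPromptTemplate.from_messages(agent_prompt_messages(system_prompt))
    agent = create_tool_calling_agent(llm, tools, prompt)
    # verbose=True produces the "Entering new AgentExecutor chain" trace
    return AgentExecutor(agent=agent, tools=tools, verbose=True)
```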
In a second terminal, publish one of the following prompts to the /llm_prompt topic:
ros2 topic pub -1 /llm_prompt std_msgs/msg/String "{data: 'What ROS distribution am I using ?'}"
ros2 topic pub -1 /llm_prompt std_msgs/msg/String "{data: 'What is my ROS domain ID ?'}"
ros2 topic pub -1 /llm_prompt std_msgs/msg/String "{data: 'List the ROS topics.'}"
ros2 topic pub -1 /llm_prompt std_msgs/msg/String "{data: 'List the ROS nodes.'}"
ros2 topic pub -1 /llm_prompt std_msgs/msg/String "{data: 'List the ROS services.'}"
ros2 topic pub -1 /llm_prompt std_msgs/msg/String "{data: 'List the ROS actions.'}"
By default, the LLM used is llm_api:=ollama and llm_model:=gpt-oss:20b. We could have launched the ros2_ai_agent explicitly as follows:
ros2 launch ros2_ai_agent start_generic_agent.launch.py llm_api:=ollama llm_model:=gpt-oss:20b
When working, the “List the ROS nodes.” prompt will generate the following output from the agent:
> Entering new AgentExecutor chain…
Invoking: `list_nodes` with `{}`
Running ROS 2 nodes:
/ros2_ai_agent_turtlesim
Here are the running ROS 2 nodes: [/ros2_ai_agent_turtlesim]
> Finished chain.
Output: Here are the running ROS 2 nodes: [/ros2_ai_agent_turtlesim]
We can also launch the agent using the OpenAI API (this requires that you have your OPENAI_API_KEY defined as an environment variable):
ros2 launch ros2_ai_agent start_generic_agent.launch.py llm_api:=openai llm_model:=gpt-4o-mini
Incorrect Execution
We can also launch the agent with other open-source models available with OLLAMA:
ros2 launch ros2_ai_agent start_generic_agent.launch.py llm_api:=ollama llm_model:=qwen2.5-coder:7b
When not working, the “List the ROS nodes.” prompt will generate the following incorrect output from the agent:
> Entering new AgentExecutor chain…
{"name": "list_nodes", "arguments": {}}
> Finished chain.
Output: {"name": "list_nodes", "arguments": {}}
The LLM knows that, in order to get the response, a function called list_nodes needs to be called, without arguments. However, the function is NOT called. This, in fact, corresponds to how things were done before tools were integrated into LLMs: the model would output a JSON structure listing which actions to carry out. Although we could interpret this JSON structure, call the functions ourselves, and then feed the output back to the LLM, this is not the goal of our “tool awareness” exploration.
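For illustration only, this is what such a manual fallback would look like. The dispatch helper and the stubbed tool table are hypothetical, and the point of the exercise is precisely that we should not need them.

```python
import json

# Hypothetical manual fallback: parse the model's JSON tool request and call
# the function ourselves. This is NOT what we want for the tool-awareness
# evaluation; it only shows what the model is asking for.
TOOLS = {
    "list_nodes": lambda: ["/ros2_ai_agent_turtlesim"],  # stubbed result
}

def dispatch(raw: str):
    request = json.loads(raw)  # e.g. {"name": "list_nodes", "arguments": {}}
    return TOOLS[request["name"]](**request.get("arguments", {}))

print(dispatch('{"name": "list_nodes", "arguments": {}}'))
# → ['/ros2_ai_agent_turtlesim']
```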
Unsupported LLMs
Despite being identified by OLLAMA as supporting tools, the deepseek-r1 models return the following error:
- [ros2_ai_agent_turtlesim]: Error processing prompt: registry.ollama.ai/library/deepseek-r1:8b does not support tools (status code: 400)
I do not know what is causing this, so I will bench these LLMs for now.
Conclusion
In this article, we provided an overview of LANGCHAIN, showed how to install it on a Linux machine equipped with an AMD Radeon Pro W7900 GPU, and implemented Agentic AI functionality in a ROS2 node.
In the next article, we will expand our Agentic AI ROS2 node with additional robot-specific tools for the following two use cases:
- turtlesim (the simplest of all robots)
- robotic arm (the most complex use case I am working with)
I want to take the time to acknowledge the learning resources that were indispensable for this exploration journey.
A resource of particular value was Chapter 14 of the “Mastering ROS 2 for Robotics Programming” book, by Lentin Joseph and Jonathan Cacace:
- Mastering ROS 2 for Robotics Programming
https://www.packtpub.com/en-us/product/mastering-ros-2-for-robotics-programming-9781836209003
The source code for the book is available on GitHub.
Version History
2025/11/17 — Initial version
- implementation supports langchain 0.3
- does not support deepseek-r1




