AI Agentic Protocols: Part 1 — Model Context Protocol (MCP)

Dive into AI Agentic Protocols! Part 1 covers MCP, the de facto standard for AI models to find and use external capabilities.

Welcome to the "AI Agentic Protocols" learning series on Hackster! In this series, we'll explore the emerging protocols that enable AI agents' interactions with their environment and each other. Each part of this series will demystify a key protocol, starting with the MCP.

A companion GitHub repo will contain code samples in Python to jump-start practical testing of the described AI agentic protocols.

The Model Context Protocol

MCP is a communication protocol that was developed by Anthropic to allow AI models to discover and utilise external tools and resources. It provides a standardised way for an AI agent to understand what external capabilities are available to it and how to invoke them.

Client-server architecture

MCP operates on a client-server architecture:

  • MCP server: This component hosts and exposes a set of "tools" (functions) and "resources" (data) that an AI agent can access. The server registers these capabilities with the MCP framework, making them discoverable.
  • MCP client: This component acts as the interface for the AI agent. It connects to the MCP server, discovers the available tools and resources, and then translates the AI agent's requests into calls to these tools (see the minimal client-side sketch below).
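
As a minimal illustration of this architecture, here is a hedged sketch of the client side using the official mcp Python SDK. It is not taken from the companion repo; the server script name and the list_devices tool are assumptions used only to show the discover-then-invoke flow.

# Minimal sketch (not the repo's code): an MCP client connecting to a server
# over stdio, discovering its tools, and invoking one of them by name.
# The server script name and tool name below are hypothetical.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the MCP server as a subprocess and talk to it over stdio
    server_params = StdioServerParameters(command="python", args=["my_mcp_server.py"])

    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discovery: ask the server which tools it exposes
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Invocation: call one of the discovered tools
            result = await session.call_tool("list_devices", arguments={})
            print(result.content)

asyncio.run(main())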

High-level implementation in the home automation example:

In the companion GitHub repo, you can find Python files for the MCP server (MCPServer_HomeAutomation.py) and the MCP client (MCPClient_GradioUI.py), which together let an AI agent control a simulated smart home environment.

MCPServer_HomeAutomation.py: This file defines the MCP server for home automation. It instantiates the FastMCP class to set up the server instance:

mcp = FastMCP("Home Automation")
  • Functions decorated with @mcp.tool() are exposed as callable tools to the AI agent. These functions contain the logic to interact with and change the state of the simulated devices.
@mcp.tool()
def list_devices() -> str:
  • A function decorated with @mcp.resource(..) is exposed as a resource, allowing the AI agent to retrieve current device information in JSON format.
@mcp.resource("home://device_status")
def get_device_status() -> str:
  • A prompt template is defined using @mcp.prompt(), guiding the AI agent to generate a detailed home status report.
@mcp.prompt("home_status_report")
def home_status_prompt() -> str:
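
Putting these pieces together, a minimal runnable server might look like the sketch below. It is illustrative only: the device dictionary and the function bodies are assumptions, not the repo's actual implementation.

# Illustrative minimal server in the spirit of MCPServer_HomeAutomation.py.
# The DEVICES dictionary and function bodies are assumptions, not the repo's code.
import json

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Home Automation")

# Hypothetical in-memory state standing in for real smart-home devices
DEVICES = {"living_room_light": {"type": "light", "state": "off"}}

@mcp.tool()
def list_devices() -> str:
    """Return the names of all known devices."""
    return ", ".join(DEVICES.keys())

@mcp.tool()
def set_device_state(device: str, state: str) -> str:
    """Switch a device on or off."""
    if device not in DEVICES:
        return f"Unknown device: {device}"
    DEVICES[device]["state"] = state
    return f"{device} is now {state}"

@mcp.resource("home://device_status")
def get_device_status() -> str:
    """Expose the current device information as JSON."""
    return json.dumps(DEVICES, indent=2)

@mcp.prompt("home_status_report")
def home_status_prompt() -> str:
    """Prompt template guiding the agent to write a home status report."""
    return "Using the home://device_status resource, write a detailed report of the current home status."

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport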

MCPClient_GradioUI.py: This file acts as the MCP client, providing a Gradio Web user interface for interaction with your home automation solution.

  • The Web UI manages the lifecycle of the MCP server process, allowing it to be started and stopped.
  • The create_agent function configures the AI agent. When the MCP server is running, the agent is able to discover and use the home automation tools.
async def create_agent(mcp_servers=None):
    ...

    agent = Agent(
        name="Home Assistant",
        instructions=instructions,
        model=OpenAIChatCompletionsModel(
            model=AOAI_DEPLOYMENT,
            openai_client=aoai_client,
        ),
        mcp_servers=mcp_servers or [],
    )
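
The snippet above references an aoai_client and a list of MCP servers that are created elsewhere in the file. As a hedged sketch of how those pieces might be set up with the OpenAI Agents SDK and the Azure OpenAI client (the environment variable names and the API version are assumptions):

# Hedged sketch of the pieces not shown above: creating the Azure OpenAI client
# and connecting to the MCP server process before calling create_agent().
# Environment variable names, API version, and the server file path are assumptions.
import os

from openai import AsyncAzureOpenAI
from agents.mcp import MCPServerStdio

aoai_client = AsyncAzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",
)
AOAI_DEPLOYMENT = os.environ["AZURE_OPENAI_DEPLOYMENT"]  # e.g. a GPT-4.1 deployment

async def start_home_automation_server():
    # Launch the MCP server script as a subprocess and connect over stdio
    server = MCPServerStdio(
        params={"command": "python", "args": ["MCPServer_HomeAutomation.py"]}
    )
    await server.connect()
    return server

The connected server object can then be passed in as create_agent(mcp_servers=[server]), so the agent discovers the home automation tools at run time and the Web UI can stop the subprocess again when it is no longer needed.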
  • The process_user_input function is where the core interaction happens. It takes the user input and passes it to the AI agent, which, when the MCP server is active, can decide to call the appropriate tools (e.g., to turn on a light).
async def process_user_input(user_input, history):
    ...

    with trace(workflow_name="Conversation", group_id=current_thread_id):
        if previous_result:
            input_messages = previous_result.to_input_list() + [
                {"role": "user", "content": user_input}
            ]
        result = await Runner.run(starting_agent=agent, input=input_messages)
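
After Runner.run() returns, the reply typically needs to be surfaced in the UI and kept around for the next turn. A hedged sketch of what the elided part might look like (variable names mirror the snippet above; the chat history format is an assumption):

# Hedged sketch of consuming the run result; not the repo's exact code.
reply = result.final_output                 # the agent's answer as plain text
previous_result = result                    # kept so the next turn can call to_input_list()
history = history + [(user_input, reply)]   # appended to the Gradio chat history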

The provided sample code utilises the GPT-4.1 model from Azure OpenAI. You can swap in an AI model of your choice, as long as its SDK supports MCP. The Web UI is built with Gradio.
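
For example, and purely as a hedged sketch, pointing the same agent at the standard OpenAI endpoint instead of Azure OpenAI might look like this (the model name is an assumption):

# Hedged sketch: swapping the Azure OpenAI client for the standard OpenAI API.
# The model name below is an assumption; any model whose SDK supports MCP will do.
from openai import AsyncOpenAI
from agents import Agent, OpenAIChatCompletionsModel

openai_client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

agent = Agent(
    name="Home Assistant",
    instructions="You are a helpful home automation assistant.",
    model=OpenAIChatCompletionsModel(model="gpt-4.1", openai_client=openai_client),
    mcp_servers=[],  # pass the connected MCP server(s) here, as shown earlier
)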

A demo of the above Home Automation UI can be found in this YouTube video.

Summary

In this first part of our series, we've introduced the Model Context Protocol as a mechanism for AI agents to extend their capabilities by interacting with external tools and resources.

We've explored its client-server architecture, where the MCP server hosts callable functions and accessible data, and the MCP client enables the AI agent to discover and utilise them.

Through the home automation example, we saw how the server exposes device control functions as tools and device status as a resource, while the client facilitates the AI agent's use of these capabilities to respond to user requests and manage the smart home.

This foundation in MCP sets the stage for understanding how AI agents can move beyond simple conversational abilities to perform complex, real-world tasks.

Laziz Turakulov