Chapter 10: Model Context Protocol

To enable LLMs to function effectively as agents, their capabilities must extend beyond multimodal generation. Interaction with the external environment is necessary, including access to current data, utilization of external software, and execution of specific operational tasks. The Model Context Protocol (MCP) addresses this need by providing a standardized interface through which LLMs connect to external resources, serving as a key mechanism for consistent and predictable integration.

MCP Pattern Overview

Imagine a universal adapter that allows any LLM to plug into any external system, database, or tool without a custom integration for each one. That’s essentially what the Model Context Protocol (MCP) is. It’s an open standard designed to standardize how LLMs like Gemini, OpenAI’s GPT models, Mixtral, and Claude communicate with external applications, data sources, and tools. Think of it as a universal connection mechanism that simplifies how LLMs obtain context, execute actions, and interact with various systems.

MCP operates on a client-server architecture. It defines how different elements—data (referred to as resources), interactive templates (which are essentially prompts), and actionable functions (known as tools)—are exposed by an MCP server. These are then consumed by an MCP client, which could be an LLM host application or an AI agent itself. This standardized approach dramatically reduces the complexity of integrating LLMs into diverse operational environments.

However, MCP is a contract for an “agentic interface,” and its effectiveness depends heavily on the design of the underlying APIs it exposes. There is a risk that developers simply wrap pre-existing, legacy APIs without modification, which can be suboptimal for an agent. For example, if a ticketing system’s API only allows retrieving full ticket details one by one, an agent asked to summarize high-priority tickets will be slow and inaccurate at high volumes. To be truly effective, the underlying API should be improved with deterministic features like filtering and sorting to help the non-deterministic agent work efficiently. This highlights that agents do not magically replace deterministic workflows; they often require stronger deterministic support to succeed.

Furthermore, MCP can wrap an API whose input or output is still not inherently understandable by the agent. An API is only useful if its data format is agent-friendly, a guarantee that MCP itself does not enforce. For instance, creating an MCP server for a document store that returns files as PDFs is mostly useless if the consuming agent cannot parse PDF content. The better approach would be to first create an API that returns a textual version of the document, such as Markdown, which the agent can actually read and process. This demonstrates that developers must consider not just the connection, but the nature of the data being exchanged, to ensure true compatibility.
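The ticketing example can be made concrete. Below is a minimal sketch of an agent-friendly endpoint (all names here are hypothetical), where deterministic filtering, sorting, and truncation happen on the server side so the agent receives only the small, relevant slice it asked for:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Ticket:
    id: int
    priority: str  # "low" | "medium" | "high"
    summary: str

# Hypothetical in-memory store standing in for the legacy ticketing system.
_TICKETS = [
    Ticket(1, "low", "Update docs"),
    Ticket(2, "high", "Production outage"),
    Ticket(3, "high", "Payment failures"),
    Ticket(4, "medium", "Slow dashboard"),
]

def list_tickets(priority: Optional[str] = None,
                 sort_by: str = "id",
                 limit: int = 10) -> list:
    """Deterministic filtering, sorting, and truncation done server-side,
    so the agent never has to page through raw records one by one."""
    rows = [t for t in _TICKETS if priority is None or t.priority == priority]
    rows.sort(key=lambda t: getattr(t, sort_by))
    return [t.__dict__ for t in rows[:limit]]

# The agent asks one precise question instead of fetching everything.
high = list_tickets(priority="high")
```

An agent summarizing high-priority tickets now receives two small records instead of the whole backlog, which keeps its context window small and its answer grounded.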

MCP vs. Tool Function Calling

The Model Context Protocol (MCP) and tool function calling are distinct mechanisms that enable LLMs to interact with external capabilities (including tools) and execute actions. While both serve to extend LLM capabilities beyond text generation, they differ in their approach and level of abstraction.

Tool function calling can be thought of as a direct request from an LLM to a specific, pre-defined tool or function. Note that in this context we use the words “tool” and “function” interchangeably. This interaction is characterized by a one-to-one communication model, where the LLM formats a request based on its understanding of a user’s intent requiring external action. The application code then executes this request and returns the result to the LLM. This process is often proprietary and varies across different LLM providers.

In contrast, the Model Context Protocol (MCP) operates as a standardized interface for LLMs to discover, communicate with, and utilize external capabilities. It functions as an open protocol that facilitates interaction with a wide range of tools and systems, aiming to establish an ecosystem where any compliant tool can be accessed by any compliant LLM. This fosters interoperability, composability, and reusability across different systems and implementations. By adopting a federated model, we significantly improve interoperability and unlock the value of existing assets. This strategy allows us to bring disparate and legacy services into a modern ecosystem simply by wrapping them in an MCP-compliant interface. These services continue to operate independently, but can now be composed into new applications and workflows, with their collaboration orchestrated by LLMs. This fosters agility and reusability without requiring costly rewrites of foundational systems.

Here’s a breakdown of the fundamental distinctions between MCP and tool function calling:

| Feature | Tool Function Calling | Model Context Protocol (MCP) |
| ----- | ----- | ----- |
| Standardization | Proprietary and vendor-specific. The format and implementation differ across LLM providers. | An open, standardized protocol, promoting interoperability between different LLMs and tools. |
| Scope | A direct mechanism for an LLM to request the execution of a specific, predefined function. | A broader framework for how LLMs and external tools discover and communicate with each other. |
| Architecture | A one-to-one interaction between the LLM and the application’s tool-handling logic. | A client-server architecture where LLM-powered applications (clients) can connect to and utilize various MCP servers (tools). |
| Discovery | The LLM is explicitly told which tools are available within the context of a specific conversation. | Enables dynamic discovery of available tools. An MCP client can query a server to see what capabilities it offers. |
| Reusability | Tool integrations are often tightly coupled with the specific application and LLM being used. | Promotes the development of reusable, standalone “MCP servers” that can be accessed by any compliant application. |

Think of tool function calling as giving an AI a specific set of custom-built tools, like a particular wrench and screwdriver. This is efficient for a workshop with a fixed set of tasks. MCP (Model Context Protocol), on the other hand, is like creating a universal, standardized power outlet system. It doesn’t provide the tools itself, but it allows any compliant tool from any manufacturer to plug in and work, enabling a dynamic and ever-expanding workshop.

In short, function calling provides direct access to a few specific functions, while MCP is the standardized communication framework that lets LLMs discover and use a vast range of external resources. For simple applications, specific tools are enough; for complex, interconnected AI systems that need to adapt, a universal standard like MCP is essential.
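To make the one-to-one nature of tool function calling concrete, here is a minimal, vendor-neutral sketch of the pattern (the schema shape is illustrative, not any provider’s actual format): the application advertises a function schema, the model replies with a name-plus-arguments request, and application code dispatches it directly:

```python
import json

# A function schema as the application would advertise it to the model.
# The exact JSON shape is vendor-specific; this one is illustrative.
WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    # Stub standing in for a real weather API call.
    return f"Sunny in {city}"

# The application's tool-handling logic: a fixed, tightly coupled mapping.
DISPATCH = {"get_weather": get_weather}

# Pretend the model answered the user's query with this tool-call request:
model_request = json.dumps({"name": "get_weather", "arguments": {"city": "Paris"}})

call = json.loads(model_request)
result = DISPATCH[call["name"]](**call["arguments"])
```

Everything here is wired by hand: the schema, the dispatch table, and the result handling all live in one application. MCP replaces exactly this coupling with a protocol the client can use to discover and call tools on any compliant server.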

Additional considerations for MCP

While MCP presents a powerful framework, a thorough evaluation requires considering several crucial aspects that influence its suitability for a given use case. Let’s examine some of these aspects in more detail:

The following example configures an ADK agent that uses an MCPToolset to connect to the community MCP filesystem server (run via npx), giving the agent tools to list, read, and write files in a managed directory:

```python
import os
from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters

# Create a reliable absolute path to a folder named 'mcp_managed_files'
# within the same directory as this agent script.
# This ensures the agent works out-of-the-box for demonstration.
# For production, you would point this to a more persistent and secure location.
TARGET_FOLDER_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "mcp_managed_files")

# Ensure the target directory exists before the agent needs it.
os.makedirs(TARGET_FOLDER_PATH, exist_ok=True)

root_agent = LlmAgent(
    model='gemini-2.0-flash',
    name='filesystem_assistant_agent',
    instruction=(
        'Help the user manage their files. You can list files, read files, and write files. '
        f'You are operating in the following directory: {TARGET_FOLDER_PATH}'
    ),
    tools=[
        MCPToolset(
            connection_params=StdioServerParameters(
                command='npx',
                args=[
                    "-y",  # Argument for npx to auto-confirm install
                    "@modelcontextprotocol/server-filesystem",
                    # This MUST be an absolute path to a folder.
                    TARGET_FOLDER_PATH,
                ],
            ),
            # Optional: You can filter which tools from the MCP server are exposed.
            # For example, to only allow reading:
            # tool_filter=['list_directory', 'read_file']
        )
    ],
)
```

`npx` (Node Package Execute), bundled with npm (Node Package Manager) versions 5.2.0 and later, is a utility that enables direct execution of Node.js packages from the npm registry. This eliminates the need for global installation. In essence, `npx` serves as an npm package runner, and it is commonly used to run many community MCP servers, which are distributed as Node.js packages.

Creating an `__init__.py` file is necessary to ensure the agent.py file is recognized as part of a discoverable Python package for the Agent Development Kit (ADK). This file should reside in the same directory as agent.py.

```python
# ./adk_agent_samples/mcp_agent/__init__.py
from . import agent
```

Other supported commands are also available. For example, connecting to a server run with python3 can be achieved as follows:

```python
connection_params = StdioConnectionParams(
    server_params={
        "command": "python3",
        "args": ["./agent/mcp_server.py"],
        "env": {
            "SERVICE_ACCOUNT_PATH": SERVICE_ACCOUNT_PATH,
            "DRIVE_FOLDER_ID": DRIVE_FOLDER_ID
        }
    }
)
```

`uvx`, in the context of Python, refers to a command-line tool that utilizes uv to execute commands in a temporary, isolated Python environment. Essentially, it allows you to run Python tools and packages without needing to install them globally or within your project’s environment. It can be used to launch an MCP server as follows:

```python
connection_params = StdioConnectionParams(
    server_params={
        "command": "uvx",
        "args": ["mcp-google-sheets@latest"],
        "env": {
            "SERVICE_ACCOUNT_PATH": SERVICE_ACCOUNT_PATH,
            "DRIVE_FOLDER_ID": DRIVE_FOLDER_ID
        }
    }
)
```

Once the MCP Server is created, the next step is to connect to it.

Connecting the MCP Server with ADK Web

To begin, navigate to the parent directory of mcp_agent (e.g., adk_agent_samples) in your terminal and execute `adk web`:

```shell
cd ./adk_agent_samples  # Or your equivalent parent directory
adk web
```

Once the ADK Web UI has loaded in your browser, select the `filesystem_assistant_agent` from the agent menu. Next, experiment with prompts that exercise its filesystem tools, such as asking it to list, read, or write files in its managed directory.

Creating an MCP Server with FastMCP

The following script demonstrates how to create a simple MCP server using FastMCP, a Python library that registers ordinary functions as MCP tools. It exposes a single tool that generates a greeting:

```python
# fastmcp_server.py
# This script demonstrates how to create a simple MCP server using FastMCP.
# It exposes a single tool that generates a greeting.

# 1. Make sure you have FastMCP installed:
#    pip install fastmcp

from fastmcp import FastMCP

# Initialize the FastMCP server.
mcp_server = FastMCP()

# Define a simple tool function.
# The @mcp_server.tool decorator registers this Python function as an MCP tool.
# The docstring becomes the tool's description for the LLM.
@mcp_server.tool
def greet(name: str) -> str:
    """
    Generates a personalized greeting.

    Args:
        name: The name of the person to greet.

    Returns:
        A greeting string.
    """
    return f"Hello, {name}! Nice to meet you."

# Run the server directly from the script:
if __name__ == "__main__":
    mcp_server.run(
        transport="http",
        host="127.0.0.1",
        port=8000
    )
```

This Python script defines a single function called greet, which takes a person’s name and returns a personalized greeting. The `@mcp_server.tool` decorator above this function automatically registers it as a tool that an AI or another program can use. The function’s documentation string and type hints are used by FastMCP to tell the agent how the tool works, what inputs it needs, and what it will return.
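The mechanism FastMCP relies on can be illustrated with the standard library alone: Python’s `inspect` module exposes exactly the metadata (name, docstring, parameter annotations) that a framework can turn into a tool description for the LLM. This is a simplified sketch of the idea, not FastMCP’s actual implementation:

```python
import inspect

def greet(name: str) -> str:
    """Generates a personalized greeting."""
    return f"Hello, {name}! Nice to meet you."

def describe_tool(fn) -> dict:
    """Build a tool description from a function's signature and docstring,
    the same metadata a framework like FastMCP reads when registering a tool."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {p.name: p.annotation.__name__
                       for p in sig.parameters.values()},
        "returns": sig.return_annotation.__name__,
    }

schema = describe_tool(greet)
```

Because the description comes straight from the docstring and type hints, writing those carefully is how you “document” a tool for the LLM that will call it.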

When the script is executed, it starts the FastMCP server, which listens for requests on localhost:8000. This makes the greet function available as a network service. An agent could then be configured to connect to this server and use the greet tool to generate greetings as part of a larger task. The server runs continuously until it is manually stopped.
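Under the hood, MCP messages are JSON-RPC 2.0. A call to the greet tool travels as a `tools/call` request, and the result wraps the tool’s output in a list of content items. The sketch below builds and handles one such message locally to show the shape of the exchange; the dispatch logic is a stand-in for a real MCP server:

```python
import json

def greet(name: str) -> str:
    return f"Hello, {name}! Nice to meet you."

# Stand-in for the server's registry of exposed tools.
TOOLS = {"greet": greet}

# A JSON-RPC 2.0 request as an MCP client would send it.
request = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "greet", "arguments": {"name": "John Doe"}},
})

def handle(raw: str) -> str:
    """Minimal stand-in for a server's request handler: look up the tool,
    invoke it, and wrap the output in an MCP-style result."""
    msg = json.loads(raw)
    tool = TOOLS[msg["params"]["name"]]
    text = tool(**msg["params"]["arguments"])
    return json.dumps({
        "jsonrpc": "2.0",
        "id": msg["id"],
        "result": {"content": [{"type": "text", "text": text}]},
    })

response = json.loads(handle(request))
```

A real server also handles capability negotiation, tool discovery (`tools/list`), and error responses, but every tool invocation ultimately reduces to this request/response pair.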

Consuming the FastMCP Server with an ADK Agent

An ADK agent can be set up as an MCP client to use a running FastMCP server. This requires configuring HttpServerParameters with the FastMCP server’s network address, which is usually http://localhost:8000.

A tool_filter parameter can be included to restrict the agent’s tool usage to specific tools offered by the server, such as ‘greet’. When prompted with a request like “Greet John Doe,” the agent’s embedded LLM identifies the ‘greet’ tool available via MCP, invokes it with the argument “John Doe,” and returns the server’s response. This process demonstrates the integration of user-defined tools exposed through MCP with an ADK agent.

To establish this configuration, an agent file (e.g., agent.py located in ./adk_agent_samples/fastmcp_client_agent/) is required. This file will instantiate an ADK agent and use HttpServerParameters to establish a connection with the operational FastMCP server.

```python
# ./adk_agent_samples/fastmcp_client_agent/agent.py
from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, HttpServerParameters

# Define the FastMCP server's address.
# Make sure your fastmcp_server.py (defined previously) is running on this port.
FASTMCP_SERVER_URL = "http://localhost:8000"

root_agent = LlmAgent(
    model='gemini-2.0-flash',  # Or your preferred model
    name='fastmcp_greeter_agent',
    instruction='You are a friendly assistant that can greet people by their name. Use the "greet" tool.',
    tools=[
        MCPToolset(
            connection_params=HttpServerParameters(
                url=FASTMCP_SERVER_URL,
            ),
            # Optional: Filter which tools from the MCP server are exposed.
            # For this example, we're expecting only 'greet'.
            tool_filter=['greet']
        )
    ],
)
```

The script defines an Agent named fastmcp_greeter_agent that uses a Gemini language model. It’s given a specific instruction to act as a friendly assistant whose purpose is to greet people. Crucially, the code equips this agent with a tool to perform its task. It configures an MCPToolset to connect to a separate server running on localhost:8000, which is expected to be the FastMCP server from the previous example. The agent is specifically granted access to the greet tool hosted on that server. In essence, this code sets up the client side of the system, creating an intelligent agent that understands its goal is to greet people and knows exactly which external tool to use to accomplish it.

Creating an `__init__.py` file within the fastmcp_client_agent directory is necessary. This ensures the agent is recognized as a discoverable Python package for the ADK.

To begin, open a new terminal and run `python fastmcp_server.py` to start the FastMCP server. Next, go to the parent directory of `fastmcp_client_agent` (for example, `adk_agent_samples`) in your terminal and execute `adk web`. Once the ADK Web UI loads in your browser, select the `fastmcp_greeter_agent` from the agent menu. You can then test it by entering a prompt like “Greet John Doe.” The agent will use the `greet` tool on your FastMCP server to create a response.

At a Glance

What: To function as effective agents, LLMs must move beyond simple text generation. They require the ability to interact with the external environment to access current data and utilize external software. Without a standardized communication method, each integration between an LLM and an external tool or data source becomes a custom, complex, and non-reusable effort. This ad-hoc approach hinders scalability and makes building complex, interconnected AI systems difficult and inefficient.

Why: The Model Context Protocol (MCP) offers a standardized solution by acting as a universal interface between LLMs and external systems. It establishes an open, standardized protocol that defines how external capabilities are discovered and used. Operating on a client-server model, MCP allows servers to expose tools, data resources, and interactive prompts to any compliant client. LLM-powered applications act as these clients, dynamically discovering and interacting with available resources in a predictable manner. This standardized approach fosters an ecosystem of interoperable and reusable components, dramatically simplifying the development of complex agentic workflows.

Rule of thumb: Use the Model Context Protocol (MCP) when building complex, scalable, or enterprise-grade agentic systems that need to interact with a diverse and evolving set of external tools, data sources, and APIs. It is ideal when interoperability between different LLMs and tools is a priority, and when agents require the ability to dynamically discover new capabilities without being redeployed. For simpler applications with a fixed and limited number of predefined functions, direct tool function calling may be sufficient.

Visual summary

Fig. 1: Model Context Protocol

Key Takeaways

These are the key takeaways:

* MCP is an open standard that standardizes how LLMs communicate with external applications, data sources, and tools.
* It uses a client-server architecture in which servers expose resources, prompts, and tools that any compliant client can discover and consume.
* Unlike proprietary tool function calling, MCP supports dynamic discovery and promotes interoperable, reusable integrations across LLMs and tools.
* An MCP wrapper is only as effective as the API beneath it: agent-friendly data formats and deterministic features such as filtering and sorting remain essential.

Conclusion

The Model Context Protocol (MCP) is an open standard that facilitates communication between Large Language Models (LLMs) and external systems. It employs a client-server architecture, enabling LLMs to access resources, utilize prompts, and execute actions through standardized tools. MCP allows LLMs to interact with databases, manage generative media workflows, control IoT devices, and automate financial services. Practical examples demonstrate setting up agents to communicate with MCP servers, including filesystem servers and servers built with FastMCP, illustrating its integration with the Agent Development Kit (ADK). MCP is a key component for developing interactive AI agents that extend beyond basic language capabilities.

References

  1. Model Context Protocol (MCP) Documentation. https://google.github.io/adk-docs/mcp/
  2. FastMCP Documentation. https://github.com/jlowin/fastmcp
  3. MCP Tools for Genmedia Services. https://google.github.io/adk-docs/mcp/#mcp-servers-for-google-cloud-genmedia
  4. MCP Toolbox for Databases Documentation. https://google.github.io/adk-docs/mcp/databases/
