---
title: MCP tools
callout-appearance: simple
---

[Model Context Protocol (MCP)](https://modelcontextprotocol.io) provides a standard way to build services that LLMs can use to gain context.
Most significantly, MCP provides a standard way to serve [tools](../get-started/tools.qmd) (i.e., functions) for an LLM to call from another program or machine.
As a result, there are now [many useful MCP server implementations available](https://github.com/punkpeye/awesome-mcp-servers?tab=readme-ov-file#server-implementations) to help extend the capabilities of your chat application.

In this article, you'll learn the basics of implementing and using MCP tools in chatlas.

::: callout-note
## Prerequisites

To leverage MCP tools from chatlas, you'll need to install the `mcp` library.

```bash
pip install 'chatlas[mcp]'
```
:::

## Basic usage

Chatlas provides two ways to register MCP tools: [`.register_mcp_tools_http_stream_async()`](../reference/Chat.qmd#register_mcp_tools_http_stream_async) and [`.register_mcp_tools_stdio_async()`](../reference/Chat.qmd#register_mcp_tools_stdio_async).

The main difference is how they interact with the MCP server: the former connects to an already running HTTP server, while the latter executes a system command to run the server locally.
Roughly speaking, usage looks something like this:

::: panel-tabset

### HTTP (Streamable HTTP)

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()

# Assuming you have an MCP server running at the specified URL
await chat.register_mcp_tools_http_stream_async(
    url="http://localhost:8000/mcp",
)
```

### Stdio (Standard Input/Output)

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()

# Assuming my_mcp_server.py is a valid MCP server script
await chat.register_mcp_tools_stdio_async(
    command="mcp",
    args=["run", "my_mcp_server.py"],
)
```

:::

::: callout-warning
### Async methods

For performance reasons, the methods for registering MCP tools are asynchronous, so you'll need to use `await` when calling them.
In some environments, such as Jupyter notebooks and the [Positron IDE](https://positron.posit.co/) console, you can simply use `await` directly (as is done above).
However, in other environments, you may need to wrap your code in an `async` function and use `asyncio.run()` to execute it.
The examples below use `asyncio.run()` to run the asynchronous code, but you can adapt them to your environment as needed.
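As a minimal, self-contained sketch of that wrapping pattern (the coroutine body here is a stand-in, not actual chatlas code):

```python
import asyncio


async def main():
    # In a real script, this is where you'd await the registration call,
    # e.g. `await chat.register_mcp_tools_stdio_async(...)`; a no-op
    # await stands in for it here.
    await asyncio.sleep(0)
    return "tools registered"


# Outside of notebooks/consoles with a running event loop, drive the
# coroutine with asyncio.run():
result = asyncio.run(main())
print(result)
```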
### Cleanup

When you no longer need the MCP tools, it's important to clean up the connection to the MCP server, as well as the `Chat`'s tool state.
This is done by calling [`.cleanup_mcp_tools()`](../reference/Chat.qmd#cleanup_mcp_tools) at the end of your chat session (the examples demonstrate how to do this).
:::

## Basic example

Let's walk through a full-fledged example of using MCP tools in chatlas, including implementing our own MCP server.

### Basic server {#basic-server}

The `mcp` library provides a CLI tool to run the MCP server over HTTP transport.
As long as you have `mcp` installed, and the [server above](#basic-server) saved as `my_mcp_server.py`, this can be done as follows:
```bash
$ mcp run -t sse my_mcp_server.py
INFO: Started server process [19144]
INFO: Waiting for application startup.
INFO: Application startup complete.
```
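The client code that produced the dialogue below is elided from this excerpt.
A hypothetical reconstruction, assuming the server started above is reachable at `http://localhost:8000/mcp` (the URL and the exact shape of `do_chat` are guesses inferred from the surrounding text):

```python
import asyncio

from chatlas import ChatOpenAI


async def do_chat(prompt: str):
    chat = ChatOpenAI()
    # Connect to the already-running MCP server (assumed address)
    await chat.register_mcp_tools_http_stream_async(
        url="http://localhost:8000/mcp",
    )
    try:
        await chat.chat_async(prompt)
    finally:
        # Release the MCP connection and deregister its tools
        await chat.cleanup_mcp_tools()


asyncio.run(do_chat("What is 5 - 3?"))
```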

::: chatlas-response-container

```python
# 🔧 tool request
add(x=5, y=-3)
```

```python
# ✅ tool result
2
```

5 - 3 equals 2.
:::
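The second dialogue is produced the same way; a hypothetical Stdio-based variant of `do_chat`, reusing the registration call shown earlier, might look like:

```python
import asyncio

from chatlas import ChatOpenAI


async def do_chat(prompt: str):
    chat = ChatOpenAI()
    # Launch the server as a subprocess instead of connecting over HTTP
    await chat.register_mcp_tools_stdio_async(
        command="mcp",
        args=["run", "my_mcp_server.py"],
    )
    try:
        await chat.chat_async(prompt)
    finally:
        await chat.cleanup_mcp_tools()


asyncio.run(do_chat("What is 5 - 3?"))
```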

::: chatlas-response-container

```python
# 🔧 tool request
add(x=5, y=-3)
```

```python
# ✅ tool result
2
```

5 - 3 equals 2.
:::

## Motivating example

Let's look at a more compelling use case for MCP tools: code execution.
A tool that can execute code and return the results is a powerful way to extend the capabilities of an LLM.
This way, LLMs can generate code based on natural language prompts (which they are quite good at!) and then execute that code to get precise and reliable results from data (which LLMs are not so good at!).
However, allowing an LLM to execute arbitrary code is risky, as the generated code could potentially be destructive, harmful, or even malicious.

To mitigate these risks, it's important to implement safeguards around code execution.
This can include running code in isolated environments, restricting access to sensitive resources, and carefully validating and sanitizing inputs to the code execution tool.
One such implementation is Pydantic's [Run Python MCP server](https://github.com/pydantic/pydantic-ai/tree/main/mcp-run-python), which provides a sandboxed environment for executing Python code safely via [Pyodide](https://pyodide.org/en/stable/) and [Deno](https://deno.com/).
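Registering that server with chatlas might look like the following sketch; the `deno` arguments are taken from the mcp-run-python README at the time of writing, so double-check them against the project's current documentation before use:

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()

# Run Pydantic's sandboxed Python server as a subprocess via Deno
await chat.register_mcp_tools_stdio_async(
    command="deno",
    args=[
        "run", "-N", "-R=node_modules", "-W=node_modules",
        "--node-modules-dir=auto",
        "jsr:@pydantic/mcp-run-python", "stdio",
    ],
)
```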

```python
async def _(user_input: str):
    ...
    await chat.append_message_stream(stream)
```

{class="shadow rounded"}