Commit efda95c

cpsievert and claude committed
Revert MCP docs changes (split into separate PR)
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
1 parent 94b6723 commit efda95c

File tree

1 file changed: +38 -77 lines changed


docs/misc/mcp-tools.qmd

Lines changed: 38 additions & 77 deletions
````diff
@@ -3,105 +3,67 @@ title: MCP tools
 callout-appearance: simple
 ---
 
-[Model Context Protocol (MCP)](https://modelcontextprotocol.io) provides a standard 
+[Model Context Protocol (MCP)](https://modelcontextprotocol.io) provides a standard
 way to build services that LLMs can use to gain context.
-This includes a standard way to provide [tools](../get-started/tools.qmd) (i.e., functions) for an LLM to call from another program or machine.
-There are now [many useful MCP server implementations available](https://glama.ai/mcp/servers) to help extend the capabilities of your chat application with minimal effort.
-
-In this article, you'll learn how to both register existing MCP tools with chatlas as well as author your own custom MCP tools.
+Most significantly, MCP provides a standard way to serve [tools](../get-started/tools.qmd) (i.e., functions) for an LLM to call from another program or machine.
+As a result, there are now [many useful MCP server implementations available](https://github.com/punkpeye/awesome-mcp-servers?tab=readme-ov-file#server-implementations) to help extend the capabilities of your chat application.
+In this article, you'll learn the basics of implementing and using MCP tools in chatlas.
 
 
 ::: callout-note
 ## Prerequisites
 
-To leverage MCP tools from chatlas, you'll want to install the `mcp` extra.
+To leverage MCP tools from chatlas, you'll need to install the `mcp` library.
 
 ```bash
 pip install 'chatlas[mcp]'
 ```
 :::
 
 
-## Registering tools
+## Basic usage
 
-### Quick start {#quick-start}
+Chatlas provides two ways to register MCP tools: [`.register_mcp_tools_http_stream_async()`](../reference/Chat.qmd#register_mcp_tools_http_stream_async) and [`.register_mcp_tools_stdio_async()`](../reference/Chat.qmd#register_mcp_tools_stdio_async).
 
-Let's start with a practical example: using the [MCP Fetch server](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch) to give an LLM the ability to fetch and read web pages.
-This server is maintained by Anthropic and can be run via `uvx` (which comes with [uv](https://docs.astral.sh/uv/)).
 
-For simplicity and convenience, we'll use the [`.register_mcp_tools_stdio_async()`](../reference/Chat.qmd#register_mcp_tools_stdio_async) method to both run the MCP Fetch server locally and register its tools with our `ChatOpenAI` instance:
+The main difference is how they interact with the MCP server: the former connects to an already running HTTP server, while the latter executes a system command to run the server locally.
+Roughly speaking, usage looks something like this:
 
-```python
-import asyncio
-from chatlas import ChatOpenAI
+::: panel-tabset
 
-async def main():
-    chat = ChatOpenAI()
-    await chat.register_mcp_tools_stdio_async(
-        command="uvx",
-        args=["mcp-server-fetch"],
-    )
-    await chat.chat_async(
-        "Summarize the first paragraph of https://en.wikipedia.org/wiki/Python_(programming_language)"
-    )
-    await chat.cleanup_mcp_tools()
+### Streaming HTTP
 
-asyncio.run(main())
-```
-
-::: chatlas-response-container
 ```python
-# 🔧 tool request
-fetch(url="https://en.wikipedia.org/wiki/Python_(programming_language)")
-```
-
-Python is a high-level, general-purpose programming language known for its emphasis on code readability through significant indentation. It supports multiple programming paradigms including structured, object-oriented, and functional programming, and is dynamically typed with garbage collection.
-:::
-
-::: callout-tip
-### Built-in fetch/search tools
-
-For providers with native web fetch support (Claude, Google), consider using [`tool_web_fetch()`](../reference/tool_web_fetch.qmd) instead -- it's simpler and doesn't require MCP setup.
-Similarly, [`tool_web_search()`](../reference/tool_web_search.qmd) provides native web search for OpenAI, Claude, and Google.
-:::
-
-
-### Basic usage {#basic-usage}
-
-Chatlas provides three ways to register MCP tools:
-
-1. Stdio ([`.register_mcp_tools_stdio_async()`](../reference/Chat.qmd#register_mcp_tools_stdio_async))
-2. Streamble HTTP [`.register_mcp_tools_http_stream_async()`](../reference/Chat.qmd#register_mcp_tools_http_stream_async).
-3. JSON configuration file [`.register_mcp_tools_from_config_async()`](../reference/Chat.qmd#register_mcp_tools_from_config_async).
-
-The main difference is how they communicate with the MCP server: the former (Stdio) executes a system command to run the server locally, while the latter (HTTP) connects to an already running HTTP server.
+from chatlas import ChatOpenAI
 
-This makes the Stdio method more ergonomic for local development and testing. For instance, recall the example above, which runs `uvx mcp-server-fetch` locally to provide web fetching capabilities to the chat instance:
+chat = ChatOpenAI()
 
-```python
-# Run a server via uvx, npx, or any other command
-await chat.register_mcp_tools_stdio_async(
-    command="uvx",
-    args=["mcp-server-fetch"],
+# Assuming you have an MCP server running at the specified URL
+await chat.register_mcp_tools_http_stream_async(
+    url="http://localhost:8000/mcp",
 )
 ```
 
-On the other hand, the HTTP method is better for production environments where the server is hosted remotely or in a longer-running process.
-For example, if you have an MCP server already running at `http://localhost:8000/mcp`, you can connect to it as follows:
+### Stdio (Standard Input/Output)
 
 ```python
-# Connect to a server already running at the specified URL
-await chat.register_mcp_tools_http_stream_async(
-    url="http://localhost:8000/mcp",
+from chatlas import ChatOpenAI
+
+chat = ChatOpenAI()
+
+# Assuming my_mcp_server.py is a valid MCP server script
+await chat.register_mcp_tools_stdio_async(
+    command="mcp",
+    args=["run", "my_mcp_server.py"],
 )
 ```
 
-Later on in this article, you'll learn
+:::
 
 ::: callout-warning
 ### Async methods
 
-For performance reasons, the methods for registering MCP tools are asynchronous, so you'll need to use `await` when calling them. 
+For performance reasons, the methods for registering MCP tools are asynchronous, so you'll need to use `await` when calling them.
 In some environments, such as Jupyter notebooks and the [Positron IDE](https://positron.posit.co/) console, you can simply use `await` directly (as is done above).
 However, in other environments, you may need to wrap your code in an `async` function and use `asyncio.run()` to execute it.
 The examples below use `asyncio.run()` to run the asynchronous code, but you can adapt them to your environment as needed.
@@ -118,14 +80,13 @@ Note that these methods work by:
 ### Cleanup
 
 When you no longer need the MCP tools, it's important to clean up the connection to the MCP server, as well `Chat`'s tool state.
-This is done by calling [`.cleanup_mcp_tools()`](../reference/Chat.qmd#cleanup_mcp_tools) at the end of your chat session (the examples demonstrate how to do this). 
+This is done by calling [`.cleanup_mcp_tools()`](../reference/Chat.qmd#cleanup_mcp_tools) at the end of your chat session (the examples demonstrate how to do this).
 :::
 
 
-## Authoring tools
+## Basic example
 
-If existing MCP servers don't meet your needs, you can implement your own.
-Let's walk through a full-fledged example, including implementing a simple MCP server.
+Let's walk through a full-fledged example of using MCP tools in chatlas, including implementing our own MCP server.
 
 ### Basic server {#basic-server}
 
@@ -149,7 +110,7 @@ The `mcp` library provides a CLI tool to run the MCP server over HTTP transport.
 As long as you have `mcp` installed, and the [server above](#basic-server) saved as `my_mcp_server.py`, this can be done as follows:
 
 ```bash
-$ mcp run -t sse my_mcp_server.py 
+$ mcp run -t sse my_mcp_server.py
 INFO: Started server process [19144]
 INFO: Waiting for application startup.
 INFO: Application startup complete.
@@ -180,12 +141,12 @@ asyncio.run(do_chat("What is 5 - 3?"))
 ::: chatlas-response-container
 
 ```python
-# 🔧 tool request 
+# 🔧 tool request
 add(x=5, y=-3)
 ```
 
 ```python
-# ✅ tool result 
+# ✅ tool result
 2
 ```
 
@@ -225,27 +186,27 @@ asyncio.run(do_chat("What is 5 - 3?"))
 ::: chatlas-response-container
 
 ```python
-# 🔧 tool request 
+# 🔧 tool request
 add(x=5, y=-3)
 ```
 
 ```python
-# ✅ tool result 
+# ✅ tool result
 2
 ```
 
 5 - 3 equals 2.
 :::
 
 
-## Advanced example: Code execution
+## Motivating example
 
 Let's look at a more compelling use case for MCP tools: code execution.
 A tool that can execute code and return the results is a powerful way to extend the capabilities of an LLM.
 This way, LLMs can generate code based on natural language prompts (which they are quite good at!) and then execute that code to get precise and reliable results from data (which LLMs are not so good at!).
 However, allowing an LLM to execute arbitrary code is risky, as the generated code could potentially be destructive, harmful, or even malicious.
 
-To mitigate these risks, it's important to implement safeguards around code execution. 
+To mitigate these risks, it's important to implement safeguards around code execution.
 This can include running code in isolated environments, restricting access to sensitive resources, and carefully validating and sanitizing inputs to the code execution tool.
 One such implementation is Pydantic's [Run Python MCP server](https://github.com/pydantic/pydantic-ai/tree/main/mcp-run-python), which provides a sandboxed environment for executing Python code safely via [Pyodide](https://pyodide.org/en/stable/) and [Deno](https://deno.com/).
 
@@ -281,4 +242,4 @@ async def _(user_input: str):
     await chat.append_message_stream(stream)
 ```
 
-![Screenshot of a LLM executing Python code via a tool call in a Shiny chatbot](../images/shiny-mcp-run-python.png){class="shadow rounded"} 
+![Screenshot of a LLM executing Python code via a tool call in a Shiny chatbot](../images/shiny-mcp-run-python.png){class="shadow rounded"}
````
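
The "Basic server" section referenced in the hunks above ({#basic-server}) lies outside this diff, so the server implementation itself is not shown. For context, a `my_mcp_server.py` consistent with the `add(x=5, y=-3)` transcripts could look like the following sketch, which assumes the FastMCP helper from the official `mcp` Python SDK; the actual file in the repo may differ:

```python
# Hypothetical reconstruction of my_mcp_server.py -- not part of this commit.
# Assumes the FastMCP API from the official `mcp` Python SDK.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my_mcp_server")

@mcp.tool()
def add(x: int, y: int) -> int:
    """Add two numbers and return the result."""
    return x + y

if __name__ == "__main__":
    mcp.run()
```

A server like this is what the diff's bash block then launches over HTTP transport with `mcp run -t sse my_mcp_server.py`.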
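
Similarly, the async-methods callout notes that the examples wrap registration in `asyncio.run()`, but no complete post-revert example appears in these hunks. A minimal end-to-end sketch using only the methods named in the diff (`register_mcp_tools_stdio_async`, `chat_async`, `cleanup_mcp_tools`); the prompt and server script are placeholders:

```python
import asyncio

from chatlas import ChatOpenAI


async def main():
    chat = ChatOpenAI()
    # Run the server script sketched above via the mcp CLI and
    # register its tools over stdio.
    await chat.register_mcp_tools_stdio_async(
        command="mcp",
        args=["run", "my_mcp_server.py"],
    )
    await chat.chat_async("What is 5 - 3?")
    # Close the MCP server connection and clear the Chat's tool state.
    await chat.cleanup_mcp_tools()


asyncio.run(main())
```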

0 commit comments