A Model Context Protocol (MCP) server for PostgreSQL databases with enhanced capabilities for AI agents.
More info on the pg-mcp project here:
PG-MCP is a server implementation of the Model Context Protocol for PostgreSQL databases. It provides a comprehensive API for AI agents to discover, connect to, query, and understand PostgreSQL databases through MCP's resource-oriented architecture.
This implementation builds upon and extends the reference Postgres MCP implementation with several key enhancements:
- Dual Transport Support: Supports both SSE (Server-Sent Events) for server mode and stdio for local development
- Multi-database Support: Connect to multiple PostgreSQL databases simultaneously
- Rich Catalog Information: Extracts and exposes table/column descriptions from the database catalog
- Extension Context: Provides detailed YAML-based knowledge about PostgreSQL extensions like PostGIS and pgvector
- Query Explanation: Includes a dedicated tool for analyzing query execution plans
- Robust Connection Management: Proper lifecycle for database connections with secure connection ID handling
- Connect Tool: Register PostgreSQL connection strings and get a secure connection ID
- Disconnect Tool: Explicitly close database connections when done
- Connection Pooling: Efficient connection management with pooling
- pg_query: Execute read-only SQL queries using a connection ID
- pg_explain: Analyze query execution plans in JSON format
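A minimal client sketch of this tool flow, assuming the server is running in SSE mode on localhost:8000 and using the MCP Python SDK. The tool argument names (connection_string, conn_id, query), the connection string, and the shape of the connect result are assumptions based on the descriptions above, so check the server's actual tool schemas:
```python
# Sketch only: tool argument names and the connect result shape are assumptions.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Register a database and receive an opaque connection ID.
            connect_result = await session.call_tool(
                "connect",
                {"connection_string": "postgresql://user:pass@localhost:5432/mydb"},
            )
            conn_id = connect_result.content[0].text  # assumed to carry the connection ID

            # Run a read-only query through the server.
            query_result = await session.call_tool(
                "pg_query",
                {"conn_id": conn_id, "query": "SELECT now()"},
            )
            print(query_result.content)

            # Release the connection when done.
            await session.call_tool("disconnect", {"conn_id": conn_id})


asyncio.run(main())
```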
- List schemas with descriptions
- List tables with descriptions and row counts
- Get column details with data types and descriptions
- View table constraints and indexes
- Explore database extensions
- Sample table data (with pagination)
- Get approximate row counts
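As a rough illustration of the kind of catalog query that introspection features like these can be built on (not taken from the pg-mcp source), here is an asyncpg sketch that lists the tables in a schema with their comments and approximate row counts; the connection string is a placeholder:
```python
# Illustrative pg_catalog query, not taken from the pg-mcp source.
import asyncio

import asyncpg


async def list_tables_with_descriptions(dsn: str, schema: str = "public"):
    conn = await asyncpg.connect(dsn)
    try:
        # obj_description() returns the COMMENT attached to each table, if any;
        # reltuples is the planner's approximate row count.
        return await conn.fetch(
            """
            SELECT c.relname AS table_name,
                   obj_description(c.oid, 'pg_class') AS description,
                   c.reltuples::bigint AS approx_row_count
            FROM pg_class c
            JOIN pg_namespace n ON n.oid = c.relnamespace
            WHERE n.nspname = $1 AND c.relkind IN ('r', 'p')
            ORDER BY c.relname
            """,
            schema,
        )
    finally:
        await conn.close()


for row in asyncio.run(
    list_tables_with_descriptions("postgresql://user:pass@localhost:5432/mydb")
):
    print(row["table_name"], row["description"], row["approx_row_count"])
```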
Built-in contextual information for PostgreSQL extensions like:
- PostGIS: Spatial data types, functions, and examples
- pgvector: Vector similarity search functions and best practices
Additional extensions can be easily added via YAML config files.
- Python 3.13+ (for manual installation)
- PostgreSQL database(s)
- Docker (for Docker installation)
# Pull the latest image from GitHub Container Registry
docker pull ghcr.io/growthspace-engineering/pg-mcp-server:latest
# Run the container
docker run -d \
--name pg-mcp \
-p 8000:8000 \
-e LOG_LEVEL=DEBUG \
ghcr.io/growthspace-engineering/pg-mcp-server:latest
Or use a specific version:
docker pull ghcr.io/growthspace-engineering/pg-mcp-server:0.1.0
docker run -d --name pg-mcp -p 8000:8000 ghcr.io/growthspace-engineering/pg-mcp-server:0.1.0
Note: If the package is private, you'll need to authenticate with GitHub Container Registry first:
echo $GITHUB_TOKEN | docker login ghcr.io -u USERNAME --password-stdin
# Clone the repository
git clone https://github.com/growthspace-engineering/pg-mcp-server.git
cd pg-mcp-server
# Build and run with Docker Compose
docker-compose up -d
# Clone the repository
git clone https://github.com/growthspace-engineering/pg-mcp-server.git
cd pg-mcp-server
# Install dependencies and create a virtual environment (.venv)
uv sync
# Activate the virtual environment
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Run the server (SSE mode)
python -m server.app
# Or run in stdio mode
python -m server.stdio
For local development with Cursor, you can use stdio mode, which doesn't require Docker or a separate server process. See CURSOR_SETUP.md for detailed instructions.
Quick setup:
- Install uv if you don't have it: curl -LsSf https://astral.sh/uv/install.sh | sh
- Add to your ~/.cursor/mcp.json:
{
"mcpServers": {
"pg-mcp-server": {
"command": "uvx",
"args": [
"--from",
"git+https://github.com/growthspace-engineering/pg-mcp-server.git",
"python",
"-m",
"server.stdio"
],
"env": {
"LOG_LEVEL": "DEBUG"
}
}
}
}
The repository includes test scripts to verify server functionality:
# Basic server functionality test
python test.py "postgresql://username:password@hostname:port/database"
# Claude-powered natural language to SQL conversion
python example-clients/claude_cli.py "Show me the top 5 customers by total sales"
The claude_cli.py script requires environment variables:
# .env file
DATABASE_URL=postgresql://username:password@hostname:port/database
ANTHROPIC_API_KEY=your-anthropic-api-key
PG_MCP_URL=http://localhost:8000/sse
Example prompt for use with agents:
Use the PostgreSQL MCP server to analyze the database.
Available tools:
- connect: Register a database connection string and get a connection ID
- disconnect: Close a database connection
- pg_query: Execute SQL queries using a connection ID
- pg_explain: Get query execution plans
You can explore schema resources via:
pgmcp://{conn_id}/schemas
pgmcp://{conn_id}/schemas/{schema}/tables
pgmcp://{conn_id}/schemas/{schema}/tables/{table}/columns
A comprehensive database description is available at this resource:
pgmcp://{conn_id}/
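Assuming an initialized ClientSession and a conn_id obtained from the connect tool (as in the client sketch earlier), reading these resources might look like the following; "public" and "customers" are placeholder schema and table names:
```python
# Sketch only: reuses an initialized ClientSession and a conn_id from the connect tool.
from pydantic import AnyUrl

from mcp import ClientSession


async def explore(session: ClientSession, conn_id: str) -> None:
    # URI templates come from this README; "public" and "customers" are placeholders.
    overview = await session.read_resource(AnyUrl(f"pgmcp://{conn_id}/"))
    schemas = await session.read_resource(AnyUrl(f"pgmcp://{conn_id}/schemas"))
    tables = await session.read_resource(
        AnyUrl(f"pgmcp://{conn_id}/schemas/public/tables")
    )
    columns = await session.read_resource(
        AnyUrl(f"pgmcp://{conn_id}/schemas/public/tables/customers/columns")
    )
    print(overview, schemas, tables, columns)
```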
This server is built on:
- MCP: The Model Context Protocol foundation
- FastMCP: Python library for MCP
- asyncpg: Asynchronous PostgreSQL client
- YAML: For extension context information
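For orientation, here is a minimal FastMCP sketch of how a query tool and a schema resource could be registered and served over SSE. This is illustrative only, with placeholder names, and is not the pg-mcp server's actual code:
```python
# Illustrative FastMCP registration sketch; not the pg-mcp server's actual code.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("pg-mcp-sketch")


@mcp.tool()
async def pg_query(conn_id: str, query: str) -> str:
    """Execute a read-only SQL query against a registered connection (stub)."""
    return f"would run {query!r} on connection {conn_id}"


@mcp.resource("pgmcp://{conn_id}/schemas")
async def list_schemas(conn_id: str) -> str:
    """List schemas for a registered connection (stub)."""
    return f"schemas for connection {conn_id}"


if __name__ == "__main__":
    # SSE server mode; pass transport="stdio" for local clients such as Cursor.
    mcp.run(transport="sse")
```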
- The server runs in read-only mode by default (enforced via transaction settings; see the sketch after this list)
- Connection details are never exposed in resource URLs, only opaque connection IDs
- Database credentials only need to be sent once during the initial connection
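One way to implement this read-only guarantee with asyncpg's pooling and transaction settings looks roughly like this; a sketch under stated assumptions, not the server's actual code, with a placeholder connection string:
```python
# Sketch of read-only enforcement via asyncpg transaction settings; placeholder DSN.
import asyncio

import asyncpg


async def fetch_readonly(dsn: str, sql: str):
    pool = await asyncpg.create_pool(dsn, min_size=1, max_size=5)
    try:
        async with pool.acquire() as conn:
            # readonly=True makes PostgreSQL reject any write inside this transaction.
            async with conn.transaction(readonly=True):
                return await conn.fetch(sql)
    finally:
        await pool.close()


print(asyncio.run(fetch_readonly(
    "postgresql://user:pass@localhost:5432/mydb",
    "SELECT current_user, now()",
)))
```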
Contributions are welcome! Areas for expansion:
- Additional PostgreSQL extension context files
- More schema introspection resources
- Query optimization suggestions