This monorepo hosts the THELEADAI development stack that powers the LeadAI Trust Framework. It brings together:
- A Next.js front-end (`apps/web`) with an AI chatbot experience.
- A FastAPI core service (`apps/core-svc`) that serves projects, scorecards, trends, and admin data.
- A Model Context Protocol (MCP) server (`apps/mcp`) that delivers retrieval-augmented generation (RAG) over company documents.
- A Docker-first infrastructure layer (PostgreSQL, MinIO S3, Redis, Qdrant vector DB) plus Ollama for local LLM chat/embeddings.
The stack runs on Windows-native tooling and PowerShell automation to give you a one-command developer spin-up.
- Stack Overview
- Prerequisites
- Local Infrastructure
- Application Services
- Using the MCP AI Chatbot
- Scripts & Automation
- Environment Variables
- Troubleshooting
- Reference Docs
## Stack Overview

- **Front-end** (`apps/web`): Next.js 14 app with authenticated dashboards, `/health`, and `/chat` for the AI assistant.
- **Core API** (`apps/core-svc`): FastAPI service exposing structured data (projects, scorecards, admin metadata) and backing the UI.
- **MCP / AI Agent** (`apps/mcp`):
  - Express + FastMCP bridge exposing MCP-compatible tool endpoints over HTTP.
  - Integrates Ollama (chat + embeddings) and Qdrant (vector search) to deliver responses with citations.
  - Publishes document resources so any MCP-capable client (Claude Desktop, VS Code, etc.) can connect.
- **Data & Storage**:
  - PostgreSQL 15 (`docker-compose.yml` service `postgres` / container `leadai-postgres`).
  - Redis 7 (`leadai-redis`) for caching and queues.
  - MinIO (S3-compatible) object storage with built-in console (`leadai-minio`).
  - Qdrant vector database (`qdrant_instance`, started via `scripts/start_all.ps1` or standalone Docker).
  - Ollama for local LLM hosting (models: `llama3.1:8b` and `nomic-embed-text` by default).
## Prerequisites

- Windows 11 / PowerShell 7 (scripts target PowerShell; Windows Terminal optional).
- Docker Desktop with the WSL2 backend enabled.
- Node.js 18+ and `pnpm` (run `corepack enable` after installing Node).
- Python 3.11+ with `uv`/`pip` for the FastAPI service virtual environment.
- Ollama installed locally (`ollama serve` should listen on `localhost:11434`; see the check below).
- Git LFS if you need to sync binary docs that the chatbot references.
- Optional: Qdrant CLI tools for advanced management.
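To confirm Ollama is up and the expected models are pulled, you can query its local API (a quick sketch; the model names are the stack defaults listed above):

```powershell
# List models known to the local Ollama daemon (GET /api/tags).
$tags = Invoke-RestMethod -Uri "http://localhost:11434/api/tags"
$tags.models | Select-Object -ExpandProperty name

# Pull the defaults if they are missing.
ollama pull llama3.1:8b
ollama pull nomic-embed-text
```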
> **Tip:** Run `scripts\pre_startup.ps1` after installing prerequisites; it checks Docker, volumes, Ollama, and required ports.
## Local Infrastructure

The repo ships with a `docker-compose.yml` that provisions:

```text
postgres: postgres://xxxxx:xxxxx@localhost:5432/leadai
redis:    redis://localhost:6379
minio:    http://localhost:9000 (console on 9001)
```

Start them with:

```powershell
docker compose up -d   # spins up Postgres, Redis, MinIO
```
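Once the containers are up, a quick port check confirms each service is listening (a minimal sketch using built-in PowerShell cmdlets; the ports are the defaults above):

```powershell
# Verify each infrastructure port is accepting connections.
5432, 6379, 9000, 9001 | ForEach-Object {
    Test-NetConnection -ComputerName localhost -Port $_ |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```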
Use `scripts\start_all.ps1` for a complete bring-up:

- Ensures Docker Desktop is running.
- Creates persistent volumes (`pgdata`, `minio`, `qdrant-storage`).
- Starts/rehydrates Docker containers:
  - `leadai_pg` (Postgres; optional `-SkipPostgres` toggle).
  - `leadai_minio` and `leadai_redis` from compose.
  - `qdrant_instance` (HTTP 6333, gRPC 6334).
- Launches Ollama if needed.
- Opens dedicated terminals for:
  - the FastAPI core service (`uvicorn` on :8001),
  - the MCP server (`pnpm --filter ./apps/mcp dev` on :8787),
  - the Next.js web app (`pnpm --filter ./apps/web dev` on :3000).
Shut everything down with `scripts\stop_all.ps1`.
If you prefer manual control:

```powershell
docker run -d --name qdrant_instance `
  -p 6333:6333 -p 6334:6334 `
  -v qdrant-storage:/qdrant/storage `
  qdrant/qdrant
```

The MCP server expects `QDRANT_URL=http://localhost:6333`.
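To verify Qdrant is reachable at that URL, you can hit its REST API directly (a quick sketch; `leadai_docs` is the default collection name from the Environment Variables table and only exists after the first ingest):

```powershell
# Qdrant lists its collections over REST; an empty list is fine before the first ingest.
Invoke-RestMethod -Uri "http://localhost:6333/collections"

# Inspect the default collection once documents have been ingested.
Invoke-RestMethod -Uri "http://localhost:6333/collections/leadai_docs"
```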
## Application Services

### Core API (`apps/core-svc`)

```powershell
cd apps/core-svc
python -m venv .venv
. .\.venv\Scripts\activate
pip install -r requirements.txt
uvicorn app.main:app --reload --port 8001
```

Configuration relies on `DATABASE_URL` (set automatically by `start_all.ps1`). The service exposes `/healthz`.
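With the service running, a quick probe of the health endpoint confirms it is serving (a minimal sketch; `/healthz` is the endpoint named above):

```powershell
# Expect an HTTP 200 with the service's health payload.
Invoke-RestMethod -Uri "http://localhost:8001/healthz"
```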
### MCP Server (`apps/mcp`)

```powershell
cd apps/mcp
pnpm install
cp .env.example .env   # configure Ollama & Qdrant URLs
pnpm dev               # http://localhost:8787
```

Endpoints follow MCP tool semantics: `POST /tools/ingest.upsert`, `POST /tools/chat.answer`, `GET /tools/admin.status`, etc. Health checks surface Qdrant/Ollama status and the active collection.
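As a smoke test you can hit the status endpoint and ask a question over HTTP (a sketch only: the `question` field in the request body is an assumption, so check the handler in `apps/mcp` for the real schema):

```powershell
# Surface Qdrant/Ollama status and the active collection.
Invoke-RestMethod -Uri "http://localhost:8787/tools/admin.status"

# Ask a question through the chat tool. NOTE: the body shape below is a guess;
# adjust the field names to match the actual chat.answer schema.
$body = @{ question = "What does the LeadAI Trust Framework cover?" } | ConvertTo-Json
Invoke-RestMethod -Uri "http://localhost:8787/tools/chat.answer" `
    -Method Post -Body $body -ContentType "application/json"
```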
### Web (`apps/web`)

```powershell
cd apps/web
pnpm install
cp .env.example .env.local
pnpm dev --port 3000
```

Key routes:

- `/` – main dashboard.
- `/chat` – AI assistant front-end consuming the MCP server.
- `/health` – diagnostics page confirming Postgres, Redis, MinIO, and Qdrant connectivity.

Ensure `NEXT_PUBLIC_MCP_SERVER_URL` points to the MCP server base URL.
## Using the MCP AI Chatbot

1. Prepare source documents: place `.pdf`, `.docx`, `.xlsx`, `.txt`, and `.csv` files inside directories covered by `MCP_DOC_ROOTS`.
2. Ingest via MCP (see the sketch after this list):
   - Scan directories: `POST /tools/ingest.scan`.
   - Upsert documents: `POST /tools/ingest.upsert`.
   - Delete documents: `POST /tools/ingest.delete`.
3. Chat:
   - Visit `http://localhost:3000/chat`, choose documents to include, and ask questions.
   - Responses include citations back to the original files via signed MinIO URLs.
4. External MCP clients:
   - Point them to `http://localhost:8787`.
   - Tools: `ingest.*`, `retriever.search`, `chat.answer`, `watch.*`.
   - Resources: `/resources/doc?path=...`, `/resources/chunk/<doc_hash>/<chunk_id>`.
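Here is what the ingest flow can look like from PowerShell (a sketch under assumptions: the empty request bodies are guesses and the tools may accept filters; the routes themselves are the ones listed above):

```powershell
$mcp = "http://localhost:8787"

# 1. Scan the configured document roots for new or changed files.
Invoke-RestMethod -Uri "$mcp/tools/ingest.scan" -Method Post `
    -Body "{}" -ContentType "application/json"

# 2. Chunk, embed, and upsert the scanned documents into Qdrant.
Invoke-RestMethod -Uri "$mcp/tools/ingest.upsert" -Method Post `
    -Body "{}" -ContentType "application/json"

# 3. Confirm the ingestion run via the status tool.
Invoke-RestMethod -Uri "$mcp/tools/admin.status"
```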
See `docs/mcp-chatbot-setup.md` for end-to-end instructions, including recommended Ollama models and chunking heuristics.
## Scripts & Automation

- `scripts/pre_startup.ps1`: verifies Docker, Ollama, and Qdrant volumes, and lists blocking issues before launch.
- `scripts/start_all.ps1`: one-stop orchestration for infrastructure + app processes (described above).
- `scripts/stop_all.ps1`: stops Docker containers (`leadai_pg`, `qdrant_instance`, etc.) and closes spawned terminals.
- `scripts/backup/*.ps1`: utilities for exporting/importing database snapshots (add your scripts here).

Run scripts from the repo root (`C:\apps\_TheLeadAI`) with PowerShell.
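A typical session, using only the scripts and the `-SkipPostgres` switch described above:

```powershell
# From C:\apps\_TheLeadAI in PowerShell 7:
.\scripts\pre_startup.ps1               # preflight: Docker, volumes, Ollama, ports
.\scripts\start_all.ps1                 # full bring-up (infra + app terminals)
.\scripts\start_all.ps1 -SkipPostgres   # variant: reuse an external Postgres
.\scripts\stop_all.ps1                  # stop containers and spawned terminals
```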
## Environment Variables

| Variable | Component | Default | Notes |
|---|---|---|---|
| `DATABASE_URL` | Core API | `postgresql://xxxxxx:xxxxxx@localhost:5432/leadai` | Injected by `start_all.ps1`. |
| `OLLAMA_URL` | MCP | `http://localhost:11434` | Ensure Ollama is serving and models are pulled. |
| `OLLAMA_CHAT_MODEL` | MCP | `llama3.1:8b` | Set in `apps/mcp/.env`. |
| `OLLAMA_EMBED_MODEL` | MCP | `nomic-embed-text` | Must match the embedding dimension (768). |
| `QDRANT_URL` | MCP/Web | `http://localhost:6333` | Qdrant REST endpoint. |
| `QDRANT_COLLECTION` | MCP/Web | `leadai_docs` | Created lazily on first ingest. |
| `NEXT_PUBLIC_MCP_SERVER_URL` | Web | `http://localhost:8787` | Used by the `/chat` UI. |
| `MINIO_ROOT_USER` / `MINIO_ROOT_PASSWORD` | Docker | `minioadmin` | Change for production. |
Store secrets outside version control (e.g., `.env` files, Windows Credential Manager, or Azure Key Vault).
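For one-off overrides while debugging locally, you can also set any of these for the current PowerShell session only (nothing is persisted or committed):

```powershell
# Session-scoped overrides; they vanish when the terminal closes.
$env:QDRANT_URL = "http://localhost:6333"
$env:OLLAMA_CHAT_MODEL = "llama3.1:8b"
```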
## Troubleshooting

- **Qdrant connection errors**: confirm the container is running (`docker ps`), ports 6333/6334 are free, and `QDRANT_URL` matches.
- **Embedding dimension mismatch**: ensure `OLLAMA_EMBED_MODEL` matches the model you pulled (default 768 dims; see the check after this list).
- **MinIO access denied**: log into the console at `http://localhost:9001` using `minioadmin`/`minioadmin`, create the bucket the app expects, and regenerate credentials.
- **Postgres schema migrations**: run Alembic migrations from `apps/core-svc/alembic` after adjusting the database URL.
- **Ports already in use**: re-run `scripts/pre_startup.ps1` to identify conflicts; pass `-SkipPostgres` if you are using an external DB.
- **Chatbot missing documents**: confirm the paths fall under `MCP_DOC_ROOTS` and re-run `ingest.upsert`. Use `/tools/admin.status` for ingestion logs.
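To check the embedding dimension directly against Ollama (a quick sketch using Ollama's embeddings endpoint; the prompt text is arbitrary):

```powershell
# Request an embedding and count its components; nomic-embed-text should yield 768.
$body = @{ model = "nomic-embed-text"; prompt = "dimension check" } | ConvertTo-Json
$resp = Invoke-RestMethod -Uri "http://localhost:11434/api/embeddings" `
    -Method Post -Body $body -ContentType "application/json"
$resp.embedding.Count
```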
## Reference Docs

- `docs/mcp-chatbot-setup.md` – detailed MCP + Qdrant + Ollama configuration.
- `docs/LeadAI_dev/` – product specifications, UI flows, and draft content.
- `scripts/` – Windows automation for local development.
- `apps/core-svc/alembic/` – database migration history.
This README replaces the previous ODT-based document to ensure Markdown renders correctly on GitHub and in editors.
Happy building! Let the PowerShell launcher do the heavy lifting while you iterate on the core services and the MCP-powered chatbot.