This repository will be made publicly available and will serve as a hub for various n8n-related projects. Our goal is to provide a comprehensive resource for both beginners and experienced users looking to leverage the power of n8n.
This repository offers multiple Docker Compose configurations to suit different needs:
The Standard Setup provides a streamlined workflow automation environment with core components:
- n8n: Workflow automation engine with webhook support
- Postgres: Relational database for persistent storage of workflows and data
- Redis: In-memory data store for queue management and improved performance
Best for: General automation workflows, production deployments, and standard n8n implementations.
The AI-Enhanced Setup provides a comprehensive environment with built-in AI capabilities:
- n8n: Core workflow automation engine
- Postgres: Relational database for workflow data storage
- Redis: In-memory data store for queues and caching
- Ollama: Local LLM runtime for on-premise AI capabilities without external API dependencies
- Qdrant: Vector database for semantic search and AI-powered similarity queries
Best for: AI-enhanced workflows, semantic search implementations, and projects requiring local LLM processing.
The LightRAG Setup provides a specialized environment for Retrieval-Augmented Generation (RAG) implementations:
- n8n: Core workflow automation engine
- Postgres: Relational database for workflow data storage
- Redis: In-memory data store and database for LightRAG
- LightRAG: Retrieval-Augmented Generation system for AI-powered document search and knowledge retrieval
Best for: Knowledge-intensive workflows, document processing, graph-based RAG implementations, and integration with OpenAI models.
Below is a brief introduction to each component and its purpose:
- n8n: Workflow automation tool for connecting apps and automating tasks.
- Ollama: Local LLM (Large Language Model) runtime for AI-powered features and chatbots.
- Postgres: Relational database for storing workflow data and application state.
- Redis: In-memory data store for caching, queues, and fast data access.
- Qdrant: Vector database for semantic search and AI-powered similarity queries.
- LightRAG: Retrieval-Augmented Generation system with graph representation for AI-powered document search and knowledge retrieval, using OpenAI models.
Each component is included in the Docker Compose setups to provide a complete environment for building, testing, and running advanced automation workflows with AI and data capabilities.
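For orientation, a stripped-down `docker-compose.yml` for the standard stack might look roughly like this. This is a sketch only: service names, image tags, and placeholder values are illustrative, and the actual compose files in each directory are authoritative.

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: change-me   # placeholder; set via .env in practice
      POSTGRES_DB: n8n
    volumes:
      - ./postgres_data:/var/lib/postgresql/data

  redis:
    image: redis:7
    volumes:
      - ./redis_data:/data

  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: change-me
    depends_on:
      - postgres
      - redis
    volumes:
      - ./n8n_data:/home/node/.n8n
```

The bind-mounted `*_data` folders correspond to the directories created in the setup steps below, which is why those directories must exist before the first `docker-compose up`.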
To use any of the provided Docker Compose configurations:

1. Navigate to the desired setup directory:

   ```
   cd local-docker-options/01_n8n-postgres-redis
   # OR
   cd local-docker-options/02_n8n-postgres-redis-ollama-qdrant
   # OR
   cd local-docker-options/03_n8n-postgres-redis-lightrag
   ```

2. Create the required data directories (see each directory's README for the specific folders).

   For the Standard Setup:

   ```
   mkdir -p n8n_data postgres_data redis_data
   ```

   For the AI-Enhanced Setup:

   ```
   mkdir -p shared n8n_data postgres_data redis_data qdrant_data ollama_data
   ```

   For the LightRAG Setup:

   ```
   mkdir -p n8n_data postgres_data redis_data
   mkdir -p lightrag_data/rag_storage lightrag_data/inputs lightrag_data/tiktoken
   ```

3. Create a `.env` file with the necessary environment variables (see the README in each directory for details).

4. Start the services:

   ```
   docker-compose up -d
   ```

5. Access the n8n web interface at `http://localhost:5678` (or your configured port).

6. To stop the services:

   ```
   docker-compose down
   ```
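The directory-creation commands above can be collected into a small helper script. This is a sketch of our own, not part of the repository: the `prepare_dirs` function name and the `standard`/`ai`/`lightrag` keywords are invented for illustration.

```shell
#!/bin/sh
# Hypothetical helper: create the data directories for a chosen setup.
# Run from inside the corresponding local-docker-options/ subdirectory.
prepare_dirs() {
  case "$1" in
    standard)
      mkdir -p n8n_data postgres_data redis_data ;;
    ai)
      mkdir -p shared n8n_data postgres_data redis_data qdrant_data ollama_data ;;
    lightrag)
      mkdir -p n8n_data postgres_data redis_data
      mkdir -p lightrag_data/rag_storage lightrag_data/inputs lightrag_data/tiktoken ;;
    *)
      echo "usage: prepare_dirs standard|ai|lightrag" >&2
      return 1 ;;
  esac
}

# Example: prepare the LightRAG directory layout
prepare_dirs lightrag
```

Because `mkdir -p` is idempotent, the script is safe to re-run against an existing layout.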
Each configuration directory contains its own README with more detailed information about the specific setup and environment variables needed.
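For illustration only, a minimal `.env` for the Standard Setup might contain entries like the following. The variable names follow the public n8n and Postgres image documentation, but the exact set required here is defined by each directory's README; all values below are placeholders.

```env
# Illustrative .env sketch -- consult the setup's README for the authoritative list
POSTGRES_USER=n8n
POSTGRES_PASSWORD=change-me
POSTGRES_DB=n8n
N8N_ENCRYPTION_KEY=generate-a-long-random-string
N8N_PORT=5678
```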
- Docker and Docker Compose installed on your system
- Minimum 4GB RAM (8GB recommended for AI-enhanced setup)
- At least 10GB of free disk space (more if using multiple LLM models with Ollama)
- Internet connection for initial container downloads
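As a quick sanity check on the first prerequisite, a small shell snippet (our own, hypothetical) can report whether the Docker tooling is on your `PATH` without failing either way:

```shell
#!/bin/sh
# Hypothetical preflight check: report whether Docker tooling is installed.
# Prints "found" or "missing" per tool; does not abort if one is absent.
for tool in docker docker-compose; do
  if command -v "$tool" >/dev/null 2>&1; then
    status="found"
  else
    status="missing"
  fi
  echo "$tool: $status"
done
```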
- Standard Setup: Perfect for building automations that connect various services, process data, or create custom API workflows.
- AI-Enhanced Setup: Ideal for:
  - Automated content generation
  - Document analysis and summarization
  - Sentiment analysis
  - Knowledge base creation with semantic search
  - Text classification and extraction
  - AI-powered decision making
- LightRAG Setup: Specialized for:
  - Graph-based knowledge representation
  - Advanced document retrieval systems
  - Question-answering over documents
  - Integration with OpenAI models for LLMs and embeddings
  - Custom RAG implementations via n8n API integration
  - Interactive knowledge exploration via the LightRAG UI
Contributions to this repository are welcome. Please feel free to submit pull requests or open issues for bugs, feature requests, or documentation improvements.