n8n Bootcamp Repository

This repository is publicly available and serves as a hub for various n8n-related projects. Our goal is to provide a comprehensive resource for both beginners and experienced users looking to leverage the power of n8n.

Available Docker Compose Flavors

This repository offers multiple Docker Compose configurations to suit different needs:

1. Standard Setup (01_n8n-postgres-redis)

A streamlined workflow automation environment with core components:

  • n8n: Workflow automation engine with webhook support
  • Postgres: Relational database for persistent storage of workflows and data
  • Redis: In-memory data store for queue management and improved performance

Best for: General automation workflows, production deployments, and standard n8n implementations.
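
As a rough illustration of how n8n is typically wired to Postgres and Redis in a setup like this, the fragment below lists environment variables n8n commonly uses for its database and queue connections. The host names, credentials, and values are placeholders; the variables this repository actually expects are defined in the directory's docker-compose file and README.

    # Illustrative .env fragment (placeholder values)
    DB_TYPE=postgresdb
    DB_POSTGRESDB_HOST=postgres        # Postgres service name in docker-compose
    DB_POSTGRESDB_PORT=5432
    DB_POSTGRESDB_DATABASE=n8n
    DB_POSTGRESDB_USER=n8n
    DB_POSTGRESDB_PASSWORD=change-me
    EXECUTIONS_MODE=queue              # queue mode hands executions to Redis
    QUEUE_BULL_REDIS_HOST=redis        # Redis service name in docker-compose
    QUEUE_BULL_REDIS_PORT=6379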

2. AI-Enhanced Setup (02_n8n-postgres-redis-ollama-qdrant)

A comprehensive environment with built-in AI capabilities:

  • n8n: Core workflow automation engine
  • Postgres: Relational database for workflow data storage
  • Redis: In-memory data store for queues and caching
  • Ollama: Local LLM runtime for on-premise AI capabilities without external API dependencies
  • Qdrant: Vector database for semantic search and AI-powered similarity queries

Best for: AI-enhanced workflows, semantic search implementations, and projects requiring local LLM processing.
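
Once this stack is up, Ollama and Qdrant can be exercised directly from the host. The commands below are a minimal sketch: the service name `ollama`, the model tag, and the default ports (11434 for Ollama, 6333 for Qdrant) are assumptions that may differ from this repository's compose file.

    # Pull a model into the Ollama container (service name is an assumption)
    docker-compose exec ollama ollama pull llama3.2

    # List the models Ollama has available via its HTTP API
    curl http://localhost:11434/api/tags

    # List Qdrant collections via its REST API
    curl http://localhost:6333/collections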

3. LightRAG Setup (03_n8n-postgres-redis-lightrag)

A specialized environment for Retrieval-Augmented Generation (RAG) implementations:

  • n8n: Core workflow automation engine
  • Postgres: Relational database for workflow data storage
  • Redis: In-memory data store and database for LightRAG
  • LightRAG: Retrieval-Augmented Generation system for AI-powered document search and knowledge retrieval

Best for: Knowledge-intensive workflows, document processing, graph-based RAG implementations, and integration with OpenAI models.
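
As a rough sketch of how a workflow (for example via an n8n HTTP Request node, or curl) might query the LightRAG server once documents have been ingested, see below. The port, route, and payload shown are assumptions based on LightRAG's common defaults and may differ in this setup; check the directory's README and the LightRAG UI first.

    # Hypothetical query against the LightRAG HTTP API (port and route are assumptions)
    curl -X POST http://localhost:9621/query \
      -H "Content-Type: application/json" \
      -d '{"query": "What does the handbook say about onboarding?", "mode": "hybrid"}'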

Component Overview

Below is a brief introduction to each component and its purpose:

  • n8n: Workflow automation tool for connecting apps and automating tasks.
  • Ollama: Local LLM (Large Language Model) runtime for AI-powered features and chatbots.
  • Postgres: Relational database for storing workflow data and application state.
  • Redis: In-memory data store for caching, queues, and fast data access.
  • Qdrant: Vector database for semantic search and AI-powered similarity queries.
  • LightRAG: Retrieval-Augmented Generation system with graph-based knowledge representation for AI-powered document search and knowledge retrieval, using OpenAI models.

Each component is included in the docker-compose setups to provide a complete environment for building, testing, and running advanced automation workflows with AI and data capabilities.
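
To make the vector-database role concrete, here is a generic example of Qdrant's REST API being used to create a collection, insert a point, and run a similarity search. It is not specific to this repository and assumes Qdrant is reachable on its default port 6333.

    # Create a collection of 4-dimensional vectors compared by cosine similarity
    curl -X PUT http://localhost:6333/collections/demo \
      -H "Content-Type: application/json" \
      -d '{"vectors": {"size": 4, "distance": "Cosine"}}'

    # Insert a point with an attached payload
    curl -X PUT http://localhost:6333/collections/demo/points \
      -H "Content-Type: application/json" \
      -d '{"points": [{"id": 1, "vector": [0.1, 0.2, 0.3, 0.4], "payload": {"doc": "hello"}}]}'

    # Retrieve the nearest neighbours of a query vector
    curl -X POST http://localhost:6333/collections/demo/points/search \
      -H "Content-Type: application/json" \
      -d '{"vector": [0.1, 0.2, 0.3, 0.4], "limit": 3}'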

Getting Started

To use any of the provided Docker Compose configurations:

  1. Navigate to the desired setup directory:

    cd local-docker-options/01_n8n-postgres-redis
    # OR
    cd local-docker-options/02_n8n-postgres-redis-ollama-qdrant
    # OR
    cd local-docker-options/03_n8n-postgres-redis-lightrag
  2. Create the required data directories (see each directory's README for specific folders)

    For Standard Setup:

    mkdir -p n8n_data postgres_data redis_data

    For AI-Enhanced Setup:

    mkdir -p shared n8n_data postgres_data redis_data qdrant_data ollama_data

    For LightRAG Setup:

    mkdir -p n8n_data postgres_data redis_data
    mkdir -p lightrag_data/rag_storage lightrag_data/inputs lightrag_data/tiktoken
  3. Create a .env file with the necessary environment variables (see README in each directory for details)

  4. Start the services (a short verification sketch follows this list):

    docker-compose up -d
  5. Access the n8n web interface at http://localhost:5678 (or your configured port)

  6. To stop the services:

    docker-compose down
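
After starting the services, the stack can be sanity-checked from the command line. The sketch below assumes a service named n8n (taken from the directory names) and the default port 5678; recent n8n versions expose a /healthz endpoint for this purpose.

    # Check container status and recent n8n logs (service name is an assumption)
    docker-compose ps
    docker-compose logs --tail=50 n8n

    # n8n health endpoint (returns {"status":"ok"} on recent versions)
    curl http://localhost:5678/healthz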

Each configuration directory contains its own README with more detailed information about the specific setup and environment variables needed.

System Requirements

  • Docker and Docker Compose installed on your system
  • Minimum 4GB RAM (8GB recommended for AI-enhanced setup)
  • At least 10GB of free disk space (more if using multiple LLM models with Ollama)
  • Internet connection for initial container downloads
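
To confirm these prerequisites from the command line before starting, a minimal check might look like this:

    # Verify Docker and Docker Compose are installed
    docker --version
    docker-compose --version   # or: docker compose version

    # Check free disk space in the current directory
    df -h .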

Use Cases

  • Standard Setup: Perfect for building automations that connect various services, process data, or create custom API workflows.
  • AI-Enhanced Setup: Ideal for:
    • Automated content generation
    • Document analysis and summarization
    • Sentiment analysis
    • Knowledge base creation with semantic search
    • Text classification and extraction
    • AI-powered decision making
  • LightRAG Setup: Specialized for:
    • Graph-based knowledge representation
    • Advanced document retrieval systems
    • Question-answering over documents
    • Integration with OpenAI models for LLM and embeddings
    • Custom RAG implementations via n8n API integration
    • Interactive knowledge exploration via LightRAG UI

Contributing

Contributions to this repository are welcome. Please feel free to submit pull requests or open issues for bugs, feature requests, or documentation improvements.
