This repository demonstrates industry-grade usage of Celery with FastAPI and Redis, built step-by-step from basics to advanced production concepts.
The project focuses on reliability, scalability, and real-world deployment practices, mirroring how Celery is used in modern backend systems.
This repo is designed to clearly show hands-on Celery expertise to recruiters and backend engineering teams.
- Python 3.12
- FastAPI (Async API layer)
- Celery 5.6.0
- Redis (Broker & Result Backend)
- Docker & Docker Compose
- Uvicorn
- Linux (Ubuntu)
```
PythonCeleryTutorial/
├── app/
│   ├── __init__.py
│   ├── main.py              # FastAPI application
│   ├── celery_app.py        # Celery app & configuration
│   └── tasks/
│       ├── __init__.py
│       ├── simple_tasks.py
│       ├── retry_tasks.py
│       ├── rate_limit_tasks.py
│       ├── chain_tasks.py
│       ├── chord_tasks.py
│       └── beat_tasks.py
├── Dockerfile
├── docker-compose.yml
├── requirements.txt
└── README.md
```
- Celery architecture (Producer, Broker, Worker, Backend)
- Redis as broker & result backend
- Celery app initialization (production-safe config)
- Task registration & discovery (explicit imports)
- Running Celery workers
- Async task execution
- FastAPI → Celery integration
- Task status & result polling API
- Task retries with exponential backoff & jitter
- Idempotent task design
- Late acknowledgements (`acks_late`) for crash safety
- Worker crashes & delivery guarantees
- Prefetch multiplier & fair scheduling
- Worker concurrency tuning
- Multiple queues & task routing
- Priority-based worker design
- Task chaining, groups & chords
- Task time limits & graceful cancellation
- Rate limiting & throttling
- Monitoring with logs, events & Flower
- Celery Beat for periodic & cron-like jobs
- Persistent Beat scheduler (restart-safe)
- Graceful worker shutdown & SIGTERM handling
- Zero task loss during deployments
- Dockerized production setup
- Separate containers for:
  - FastAPI API
  - Celery Worker
  - Celery Beat
  - Redis
- Environment-based configuration
- Docker networking best practices
```
Client
  |
FastAPI (Container)
  |
Redis (Broker)
  |
Celery Worker (Container)
  |
Redis (Result Backend)
```
Each component runs in its own container, following the one-process-per-container rule.
Run `docker-compose up --build`, then open:

- FastAPI: http://localhost:8000
- Flower (optional): http://localhost:5555
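A `docker-compose.yml` along these lines wires the containers together, one process per container; the service names and commands here are assumptions, and the repo's actual file may differ:

```yaml
services:
  redis:
    image: redis:7

  api:
    build: .
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000
    ports:
      - "8000:8000"
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/1
    depends_on:
      - redis

  worker:
    build: .
    command: celery -A app.celery_app worker --loglevel=info
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/1
    depends_on:
      - redis

  beat:
    build: .
    command: celery -A app.celery_app beat --loglevel=info
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
    depends_on:
      - redis
```

Containers reach Redis via the service name `redis` on Compose's default network, which is why the broker URL differs from the `localhost` URL used outside Docker.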
- Non-blocking async APIs
- Background processing with Celery
- Safe retries & failure handling
- Fair task scheduling
- Graceful shutdowns
- Queue-based scaling
- Containerized deployment
- Environment-driven configuration
This project shows real-world Celery usage, not toy examples:
- Matches how companies run Celery in production
- Covers reliability, scaling & deployment
- Demonstrates strong backend engineering fundamentals
Ideal for:
- Backend Engineer roles
- Python / FastAPI roles
- Distributed systems discussions
Built as a hands-on learning & showcase project to demonstrate industry-ready Celery expertise.