This project demonstrates a time-series streaming pipeline using RabbitMQ, MongoDB, Pandas, and Flask.
It simulates how electricity grid data (consumption, production, energy mix) can be ingested, persisted, analyzed, and visualized, a workflow typical of downstream IoT data analytics.
The focus is on systems engineering: message brokers, persistence, analytics, and containerization.
- Message Broker Integration: RabbitMQ streams simulated time-series data.
- Database Persistence: MongoDB stores incoming messages for durability.
- Analytics Engine: Pandas computes export/import hours, energy mix, and hourly consumption patterns.
- Visualization: Matplotlib plots (daily averages, energy distribution pie, hourly consumption).
- Frontend: Flask app with login (hardcoded credentials) and report page.
- Testing: Pytest suite for routes and analytics functions.
- Containerization: Docker + Docker Compose for reproducible deployment.
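The producer side of the pipeline above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the queue name `grid_data` and the message schema (field names, units, mix shares) are assumptions.

```python
import json
import random
from datetime import datetime, timezone

QUEUE = "grid_data"  # assumed queue name

def simulate_reading():
    """Generate one fake grid reading (field names are assumptions)."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "consumption_mw": round(random.uniform(400, 900), 1),
        "production_mw": round(random.uniform(300, 1000), 1),
        "mix": {"wind": 0.35, "solar": 0.15, "gas": 0.30, "hydro": 0.20},
    }

def publish_one(host="localhost"):
    """Publish a single reading; requires a running RabbitMQ broker."""
    import pika  # deferred so simulate_reading() works without the broker client

    connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE, durable=True)
    channel.basic_publish(
        exchange="",
        routing_key=QUEUE,
        body=json.dumps(simulate_reading()),
        properties=pika.BasicProperties(delivery_mode=2),  # persistent message
    )
    connection.close()
```

In practice the producer would loop with a sleep to emit readings at a fixed interval.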
- Language: Python 3.14
- Libraries: Flask, Pandas, Matplotlib, Pytest, Pika (RabbitMQ), PyMongo
- Message Broker: RabbitMQ
- Database: MongoDB
- Containerization: Docker, Docker Compose
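On the consuming side, persisting broker messages into MongoDB with Pika and PyMongo might look like the sketch below. The database and collection names (`grid`, `readings`) and the queue name are assumptions, not taken from the project.

```python
import json

def handle_message(body, collection):
    """Decode one broker message and persist it (sketch; schema is assumed)."""
    doc = json.loads(body)
    collection.insert_one(doc)
    return doc

def consume(rabbit_host="localhost", mongo_uri="mongodb://localhost:27017"):
    """Blocking consumer loop; requires live RabbitMQ and MongoDB services."""
    import pika  # deferred: only needed with live services
    from pymongo import MongoClient

    collection = MongoClient(mongo_uri)["grid"]["readings"]  # assumed names
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=rabbit_host))
    channel = connection.channel()
    channel.queue_declare(queue="grid_data", durable=True)

    def on_message(ch, method, properties, body):
        handle_message(body, collection)
        ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after persisting

    channel.basic_consume(queue="grid_data", on_message_callback=on_message)
    channel.start_consuming()
```

Acknowledging only after the insert succeeds means a crashed consumer redelivers rather than drops messages.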
```
[Producer] → RabbitMQ → [Consumer] → MongoDB → [Analytics (Pandas)] → Flask Dashboard
```
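The analytics stage of this pipeline could compute export hours and hourly consumption patterns with Pandas along these lines (column names are assumptions mirroring the simulated schema, not the project's actual field names):

```python
import pandas as pd

def export_hours(df):
    """Rows where production exceeded consumption (net export hours)."""
    return df[df["production_mw"] > df["consumption_mw"]]

def hourly_consumption(df):
    """Mean consumption grouped by hour of day."""
    hours = pd.to_datetime(df["timestamp"]).dt.hour
    return df.groupby(hours)["consumption_mw"].mean()
```

The resulting series from `hourly_consumption` plugs directly into a Matplotlib bar chart for the hourly-pattern plot.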
```bash
python app.py
```
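The Flask frontend with hardcoded credentials and a report page, as described above, can be sketched as follows. The routes, credential values, and secret key are illustrative assumptions only.

```python
from flask import Flask, redirect, request, session, url_for

app = Flask(__name__)
app.secret_key = "dev-only"  # placeholder; never hardcode secrets in production

# Hardcoded credentials, mirroring the README's description (values assumed)
USERNAME, PASSWORD = "admin", "admin"

@app.route("/login", methods=["GET", "POST"])
def login():
    if request.method == "POST":
        if (request.form.get("username") == USERNAME
                and request.form.get("password") == PASSWORD):
            session["user"] = USERNAME
            return redirect(url_for("report"))
        return "Invalid credentials", 401
    return "Login page"

@app.route("/report")
def report():
    if "user" not in session:
        return redirect(url_for("login"))
    return "Energy report"  # the real page would render the Matplotlib plots

if __name__ == "__main__":
    app.run()
```

The Pytest suite mentioned above can exercise these routes without a server via `app.test_client()`.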