
⚡ Simulation Control Centre CLI

A command‑line interface (CLI) for orchestrating end‑to‑end power‑grid simulations, KPI aggregation, and exploratory analytics.


Table of contents

  1. Quick start (Docker)
  2. CLI overview
  3. Running native (optional)
  4. Data & result folders
  5. Troubleshooting

Quick start (Docker)

Prerequisites: Docker ≥ 20.10 on Linux, macOS, or Windows.

Disk usage: A full batch of simulations plus intermediate Parquet files can exceed 10 GB. Make sure you mount a host directory with sufficient space.

1 Build the image

# Inside the project root (where the Dockerfile lives)
docker build -t sim-centre .

Tip (faster builds): Pass --build-arg PYPI_MIRROR=<url> or rely on the Docker build cache if you rebuild often.

2 Run the interactive CLI

# Linux/macOS
mkdir -p ./data/results

docker run --rm -it \
  -v "$(pwd)/data/results:/app/data/results" \
  sim-centre
# PowerShell on Windows
mkdir data\results

docker run --rm -it ^
  -v "${PWD}\data\results:/app/data/results" ^
  sim-centre

  • --rm cleans up the container after exit.
  • -it attaches an interactive TTY so you can use the menu‑driven interface.
  • The /app/data/results volume keeps plots, Parquet dumps, and KPI files on the host.

CLI overview

When you start the container (or run python main.py locally) you’ll see:

╔════════════════════════════════════════╗
║   ⚡  Simulation Control Centre  ⚡   ║
╠════════════════════════════════════════╣
║ 1 – run  **all** simulations           ║
║ 2 – run  **some** simulations          ║
║ 3 – build KPIs/graphs for **all**      ║
║ 4 – build KPIs/graphs for **some**     ║
║ 5 – summary plots from parquet         ║
║ 6 – generate KPI parquet file          ║
║ q – quit                               ║
╚════════════════════════════════════════╝

| Option | What it does | Typical extra prompts |
|--------|--------------|-----------------------|
| 1 | Executes every predefined simulation scenario (can be thousands). | Parallel? · Save individual Parquet? · Generate graphs? · Gurobi threads |
| 2 | Same as 1, but lets you filter by strategy, benchmark, season, seed, etc. | Same as 1 + list filters |
| 3 | Reads previously saved raw results and (re)builds KPIs & diagnostic plots for all scenarios. | Parallel? |
| 4 | As 3, but with filters. | Parallel? + list filters |
| 5 | Produces publication-ready summary plots from data/results/kpi_master.parquet. | Path to parquet, output folder |
| 6 | Scans all individual Parquet files, calculates KPIs, and rewrites kpi_master.parquet from scratch. | |
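
Under the hood, option 6 is a scan-and-aggregate pass over the per-scenario Parquet files. A minimal sketch of that idea in pandas; the compute_kpis placeholder and the traversal details are illustrative assumptions, not the project's actual implementation:

# Conceptual sketch of option 6 (hypothetical helper, not the CLI's code).
# Requires pandas with a Parquet engine such as pyarrow.
from pathlib import Path
import pandas as pd

RESULTS = Path("data/results/simulation_results")

def compute_kpis(df: pd.DataFrame) -> dict:
    # Placeholder KPI; the real metrics live in the project code.
    return {"n_rows": len(df)}

rows = []
for pq in RESULTS.rglob("*.parquet"):      # walk the deeply nested raw outputs
    kpis = compute_kpis(pd.read_parquet(pq))
    kpis["scenario"] = pq.stem             # one row per scenario
    rows.append(kpis)

pd.DataFrame(rows).to_parquet("data/results/kpi_master.parquet", index=False)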

Simulation filters

| Filter | Allowed values |
|--------|----------------|
| Strategy | centralized · decentralized |
| Failure type | comm · device · mixed · comm_distributed · device_distributed · mixed_distributed |
| Failure % | 0 · 15 · 30 · 50 |
| Benchmark | cigre · simbench |
| Season | spring · summer · fall · winter |
| State estimation variant | normal · request · response · device · random · WLS |
| Seeds | 0 – 34 (inclusive) |

Filter exceptions

Some scenarios are skipped because they either don't need a simulation or a simulation is not possible (a sketch of these rules follows the list):

  • decentralized + comm scenarios
  • winter + any state-estimation variant other than normal
  • cigre + 50% + mixed
  • cigre + any *_distributed failure type
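
Expressed as code, the skip rules amount to a simple predicate. The sketch below uses hypothetical field names (strategy, failure_type, and so on) that may not match the project's internal schema:

# Hypothetical predicate mirroring the skip rules above; argument names
# are illustrative, not the project's actual schema.
def is_simulated(strategy, failure_type, failure_pct, benchmark, season, se_variant):
    if strategy == "decentralized" and failure_type == "comm":
        return False                                  # decentralized + comm
    if season == "winter" and se_variant != "normal":
        return False                                  # winter + non-normal SE
    if benchmark == "cigre" and failure_pct == 50 and failure_type == "mixed":
        return False                                  # cigre + 50% + mixed
    if benchmark == "cigre" and failure_type.endswith("_distributed"):
        return False                                  # cigre + distributed
    return True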

Navigation hints

  • Hit ENTER to accept the default shown in square brackets.
  • Type q, quit, or exit at any prompt to abort gracefully.
  • For list inputs you can paste Python‑style literals (["spring", "summer"]) or space‑separated tokens (spring summer).
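
One common way to accept both list styles is to try a Python literal first and fall back to whitespace splitting. A minimal sketch of that approach (the CLI's actual parser may differ):

# Illustrative list-prompt parser; the CLI's real implementation may differ.
import ast

def parse_list(raw: str) -> list[str]:
    raw = raw.strip()
    try:
        value = ast.literal_eval(raw)      # handles ["spring", "summer"]
        if isinstance(value, (list, tuple)):
            return [str(v) for v in value]
    except (ValueError, SyntaxError):
        pass
    return raw.split()                     # handles: spring summer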

Running native (optional)

If you prefer a host install (e.g. for IDE debugging):

python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
python -m pip install --upgrade pip
pip install -r requirements.txt
python main.py

You still need a valid gurobi.lic either in the project root or pointed to via GRB_LICENSE_FILE.
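
Before launching, you can sanity-check that the license is discoverable. A small sketch assuming the project-root fallback described above:

# Quick Gurobi license check; the gurobi.lic fallback path is an
# assumption based on the note above.
import os
from pathlib import Path

lic = os.environ.get("GRB_LICENSE_FILE", "gurobi.lic")
print("license found" if Path(lic).is_file() else f"missing: {lic}")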


Data & result folders

.
├── data/
│   ├── aFRR/                     # raw aFRR signal archives (CSV)
│   ├── benchmark_grids/          # static grid definitions (JSON or SimBench CSV)
│   └── results/
│       ├── kpi_master.parquet    # rolled‑up KPI store (one row per scenario)
│       └── simulation_results/   # deeply nested raw outputs
└── scenarios/                    # YAML definitions of failure scenarios

Mounting ./data/results into the container lets you keep heavy artefacts out of the image and rerun analyses without recomputing.
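
Since kpi_master.parquet holds one row per scenario, it is also handy for ad-hoc analysis outside the CLI. A hedged example with pandas; inspect the columns first, because the schema is not documented here:

# Ad-hoc exploration of the rolled-up KPI store; no column names are
# assumed, so print the schema before filtering or grouping.
import pandas as pd

df = pd.read_parquet("data/results/kpi_master.parquet")
print(df.shape)                  # (n_scenarios, n_columns)
print(df.columns.tolist())       # discover the actual schema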
