Figure 1 | Pipeline overview. (a) DSPy takes HELM's baseline prompt and produces structured prompt variants. (b) HELM evaluates models under each prompt variant. With structured prompting, we observe more robust performance evaluation.
A comprehensive framework for optimizing language model performance on benchmarks using DSPy, enabling more robust evaluation. The toolkit supports automated prompt optimization (APO) for benchmarks from the HELM (Holistic Evaluation of Language Models) ecosystem.
```bash
# Clone the repository
git clone https://github.com/StanfordMIMI/dspy-helm.git
cd dspy-helm

# Install dependencies
pip install -r requirements.txt
```

Configure settings in `run.sh`:

```bash
scenarios=("mmlu_pro" "gpqa" "gsm8k" "wildbench" "medcalc_bench" "medec" "head_qa" "medbullets")
optimizers=("MIPROv2" "BootstrapFewShotWithRandomSearch" "GEPA")
# Language model to be optimized
model=openai/gpt-4o
api_base="your_api_base_here"
api_key="your_api_key_here"
# Teacher language model for proposing instruction candidates (MIPROv2)
prompt_model=openai/gpt-4o
prompt_api_base="your_api_base_here"
prompt_api_key="your_api_key_here"
max_bootstrapped_demos=3
max_labeled_demos=3
num_threads=1
```

Run optimization:

```bash
./run.sh
```

Supported benchmarks:

| Benchmark | Input → Output | Task |
|---|---|---|
| MMLU-Pro | Reasoning Question → Answer | Multi-Task Reasoning |
| GPQA | Graduate Question → Answer | Graduate-Level QA |
| GSM8K | Math Problem → Solution | Numeric Problem-Solving |
| WildBench | User Conversation → Response | Instruction Following |
| MedCalc-Bench | Patient Note → Computed Value | Computational Reasoning |
| Medec | Medical Narrative → Errors | Error Classification |
| HeadQA | Medical Question → Answer | USMLE-Style QA |
| MedBullets | Medical Question → Answer | USMLE-Style QA |
Key options:

- `--max_bootstrapped_demos`: Number of bootstrapped demonstrations
- `--max_labeled_demos`: Number of labeled demonstrations
- `--num_threads`: Parallel processing threads (default: 16)
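Under the hood, `run.sh` invokes `main.py` with these settings to run the selected DSPy optimizers. The following is a minimal sketch, not the actual `main.py` logic, of how such settings might map onto a DSPy `MIPROv2` call; the model names, keys, program signature, toy trainset, and output path are placeholders:

```python
import dspy

# Task model (the model being optimized) and teacher model for instruction proposals.
lm = dspy.LM("openai/gpt-4o", api_base="your_api_base_here", api_key="your_api_key_here")
prompt_lm = dspy.LM("openai/gpt-4o", api_base="your_api_base_here", api_key="your_api_key_here")
dspy.configure(lm=lm)

# A simple question -> answer program to optimize (illustrative signature).
program = dspy.ChainOfThought("question -> answer")

# Toy examples standing in for a scenario's load_data() output.
trainset = [
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
    dspy.Example(question="What is 3 * 5?", answer="15").with_inputs("question"),
]

# MIPROv2 uses the teacher (prompt) model to propose instruction candidates.
optimizer = dspy.MIPROv2(
    metric=dspy.evaluate.metrics.answer_exact_match,
    prompt_model=prompt_lm,
    num_threads=1,
)
optimized = optimizer.compile(
    program,
    trainset=trainset,
    max_bootstrapped_demos=3,
    max_labeled_demos=3,
)
optimized.save("agents/gsm8k/gpt-4o/MIPROv2.json")  # hypothetical output path
```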
```
dspy-helm/
├── main.py             # Main optimization script
├── scenarios.py        # Benchmark implementations
├── run.sh              # Batch optimization runner
├── requirements.txt    # Python dependencies
├── agents/             # Optimized DSPy agents
│   ├── medcalc_bench/
│   ├── head_qa/
│   ├── medbullets/
│   └── ...
└── README.md           # This file
```
Optimized agents are automatically saved to the `agents/` directory:

```
agents/
└── {scenario}/
    └── {model_name}/
        ├── MIPROv2.json
        ├── BootstrapFewShotWithRandomSearch.json
        └── GEPA.json
```
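To reuse an optimized agent, the saved JSON state can be loaded back into a DSPy program. A minimal sketch, assuming a `question -> answer` program and an illustrative path (the module you load into must match the structure of the saved agent):

```python
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o", api_key="your_api_key_here"))

# The program structure must match the one that was optimized and saved.
program = dspy.ChainOfThought("question -> answer")
program.load("agents/gsm8k/gpt-4o/MIPROv2.json")  # hypothetical path

print(program(question="What is 12 * 7?").answer)
```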
To add a new HELM benchmark, implement the scenario class in `scenarios.py`:

```python
import dspy


class my_benchmark:
    def __init__(self, test_size=0.1, seed=42):
        self.test_size = test_size
        self.seed = seed

    @staticmethod
    def make_prompt(row):
        # Format a raw dataset row into the model's input prompt.
        return f"Question: {row['question']}\nAnswer:"

    @staticmethod
    def metric(example, pred, trace=None):
        # Your evaluation metric
        return dspy.evaluate.metrics.answer_exact_match(example, pred, trace)

    def load_data(self):
        # Load and return trainset, valset
        pass
```

This repository is built using DSPy for prompt optimization.
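As a rough sketch of how such a scenario could plug into DSPy's built-in evaluator, assuming `load_data` returns lists of `dspy.Example` objects (the actual wiring lives in `main.py`):

```python
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o", api_key="your_api_key_here"))

scenario = my_benchmark()
trainset, valset = scenario.load_data()  # assumed to return (trainset, valset) of dspy.Example

# Score a baseline program on the validation split with the scenario's metric.
evaluate = dspy.Evaluate(devset=valset, metric=my_benchmark.metric, num_threads=16)
print(evaluate(dspy.ChainOfThought("question -> answer")))
```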
If you find this repository useful for your work, please cite the following paper:
```bibtex
@article{aali2025structured,
  title={Structured Prompting Enables More Robust Evaluation of Language Models},
  author={Aali, Asad and Mohsin, Muhammad Ahmed and Bikia, Vasiliki and Singhvi, Arnav and Gaus, Richard and Bedi, Suhana and Cui, Hejie and Fuentes, Miguel and Unell, Alyssa and Mai, Yifan and others},
  journal={arXiv preprint arXiv:2511.20836},
  year={2025}
}
```