SAGE Benchmark

Comprehensive benchmarking tools and RAG examples for the SAGE framework


📋 Overview

SAGE Benchmark provides a comprehensive suite of benchmarking tools and RAG (Retrieval-Augmented Generation) examples for evaluating SAGE framework performance. This package enables researchers and developers to:

  • Benchmark RAG pipelines with multiple retrieval strategies (dense, sparse, hybrid)
  • Compare vector databases (Milvus, ChromaDB, FAISS) for RAG applications
  • Evaluate multimodal retrieval with text, image, and video data
  • Run reproducible experiments with standardized configurations and metrics

This package is designed for both research experiments and production system evaluation.

✨ Key Features

  • Multiple RAG Implementations: Dense, sparse, hybrid, and multimodal retrieval
  • Vector Database Support: Milvus, ChromaDB, FAISS integration
  • Experiment Framework: Automated benchmarking with configurable experiments
  • Evaluation Metrics: Comprehensive metrics for RAG performance
  • Sample Data: Included test data for quick start
  • Extensible Design: Easy to add new benchmarks and retrieval methods

📦 Package Structure

sage-benchmark/
├── src/
│   └── sage/
│       └── benchmark/
│           ├── __init__.py
│           └── benchmark_rag/           # RAG benchmarking
│               ├── __init__.py
│               ├── implementations/     # RAG implementations
│               │   ├── pipelines/       # RAG pipeline scripts
│               │   │   ├── qa_dense_retrieval_milvus.py
│               │   │   ├── qa_sparse_retrieval_milvus.py
│               │   │   ├── qa_multimodal_fusion.py
│               │   │   └── ...
│               │   └── tools/           # Supporting tools
│               │       ├── build_chroma_index.py
│               │       ├── build_milvus_dense_index.py
│               │       └── loaders/
│               ├── evaluation/          # Experiment framework
│               │   ├── pipeline_experiment.py
│               │   ├── evaluate_results.py
│               │   └── config/
│               ├── config/              # RAG configurations
│               └── data/                # Test data
│           # Future benchmarks:
│           # ├── benchmark_agent/       # Agent benchmarking
│           # └── benchmark_anns/        # ANNS benchmarking
├── tests/
├── pyproject.toml
└── README.md

🚀 Installation

Install the benchmark package:

pip install -e packages/sage-benchmark

Or with development dependencies:

pip install -e "packages/sage-benchmark[dev]"

Note: The sage.data module is included as a submodule in the package and will be installed automatically. It contains datasets for various benchmarks including LibAMM datasets.
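
To confirm the install, a minimal smoke test (assuming the editable install has put the sage.benchmark package on your path, as in the layout above) is:

# Quick smoke test: these imports should succeed after installation.
import sage.benchmark
import sage.benchmark.benchmark_rag

print("sage-benchmark imported from:", sage.benchmark.__file__)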

📊 RAG Benchmarking

The benchmark_rag module provides comprehensive RAG benchmarking capabilities:

RAG Implementations

Various RAG approaches for performance comparison:

Vector Databases:

  • Milvus: Dense, sparse, and hybrid retrieval
  • ChromaDB: Local vector database with simple setup
  • FAISS: Efficient similarity search

Retrieval Methods:

  • Dense retrieval (embeddings-based)
  • Sparse retrieval (BM25, sparse vectors)
  • Hybrid retrieval (combining dense + sparse; see the fusion sketch after this list)
  • Multimodal fusion (text + image + video)
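
Hybrid retrieval merges the rankings produced by the dense and sparse retrievers. The exact fusion used by qa_hybrid_retrieval_milvus is not shown here; as one common approach, a minimal reciprocal rank fusion (RRF) sketch looks like this (function and variable names are illustrative, not the package's API):

from collections import defaultdict

def reciprocal_rank_fusion(dense_ids, sparse_ids, k=60):
    """Fuse two ranked lists of document ids with reciprocal rank fusion.

    Each document scores 1 / (k + rank) in every list it appears in;
    the fused ranking sorts documents by the summed score.
    """
    scores = defaultdict(float)
    for ranking in (dense_ids, sparse_ids):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: ids returned by a dense and a sparse retriever.
dense = ["doc3", "doc1", "doc7"]
sparse = ["doc1", "doc9", "doc3"]
print(reciprocal_rank_fusion(dense, sparse))  # doc1 and doc3 rise to the top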

Quick Start

1. Build Vector Index

First, prepare your vector index:

# Build ChromaDB index (simplest)
python -m sage.benchmark.benchmark_rag.implementations.tools.build_chroma_index

# Or build Milvus dense index
python -m sage.benchmark.benchmark_rag.implementations.tools.build_milvus_dense_index
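
The build script's internals are not reproduced here; for orientation, a minimal sketch of building a local ChromaDB collection directly with the chromadb client (path and collection name are illustrative) looks roughly like this:

import chromadb

# Persist the index to a local directory (path is illustrative).
client = chromadb.PersistentClient(path="./chroma_index")
collection = client.get_or_create_collection(name="qa_knowledge_base")

# Add a few documents; ChromaDB embeds them with its default embedding
# function unless you pass your own embeddings.
collection.add(
    ids=["doc-0", "doc-1"],
    documents=[
        "SAGE is a framework for building data processing pipelines.",
        "Milvus and ChromaDB are vector databases used for retrieval.",
    ],
)

# Query the collection to verify the index works.
results = collection.query(query_texts=["What is SAGE?"], n_results=1)
print(results["documents"])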

2. Run a RAG Pipeline

Test individual RAG pipelines:

# Dense retrieval with Milvus
python -m sage.benchmark.benchmark_rag.implementations.pipelines.qa_dense_retrieval_milvus

# Sparse retrieval
python -m sage.benchmark.benchmark_rag.implementations.pipelines.qa_sparse_retrieval_milvus

# Hybrid retrieval (dense + sparse)
python -m sage.benchmark.benchmark_rag.implementations.pipelines.qa_hybrid_retrieval_milvus

3. Run Benchmark Experiments

Execute the full benchmark suite:

# Run comprehensive benchmark
python -m sage.benchmark.benchmark_rag.evaluation.pipeline_experiment

# Evaluate and generate reports
python -m sage.benchmark.benchmark_rag.evaluation.evaluate_results

4. View Results

Results are saved in benchmark_results/:

  • experiment_TIMESTAMP/ - Individual experiment runs
  • metrics.json - Performance metrics
  • comparison_report.md - Comparison report
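
The exact schema of metrics.json is not documented here; a minimal way to load and inspect the metrics of the most recent run (directory naming taken from the list above) is:

import json
from pathlib import Path

# Pick the most recent experiment_* directory under benchmark_results/.
runs = sorted(Path("benchmark_results").glob("experiment_*"))
latest = runs[-1]

with (latest / "metrics.json").open(encoding="utf-8") as f:
    metrics = json.load(f)

print(f"Metrics for {latest.name}:")
for name, value in metrics.items():   # assumes a flat name -> value mapping
    print(f"  {name}: {value}")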

📖 Usage Examples

Basic Example

from sage.benchmark.benchmark_rag.implementations.pipelines import (
    qa_dense_retrieval_milvus,
)
from sage.benchmark.benchmark_rag.config import load_config

# Load configuration
config = load_config("config_dense_milvus.yaml")

# Run RAG pipeline
results = qa_dense_retrieval_milvus.run_pipeline(query="What is SAGE?", config=config)

# View results
print(f"Retrieved {len(results)} documents")
for doc in results:
    print(f"- {doc.content[:100]}...")

Run Custom Benchmark

from sage.benchmark.benchmark_rag.evaluation import PipelineExperiment

# Define experiment configuration
experiment = PipelineExperiment(
    name="custom_rag_benchmark",
    pipelines=["dense", "sparse", "hybrid"],
    queries=["query1.txt", "query2.txt"],
    metrics=["precision", "recall", "latency"],
)

# Run experiment
results = experiment.run()

# Generate report
experiment.generate_report(results)
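
The metric names above map onto standard retrieval measures. For reference, here is a standalone sketch of precision@k and recall@k, independent of the package's own evaluator:

def precision_recall_at_k(retrieved_ids, relevant_ids, k=5):
    """Standard retrieval metrics over document ids.

    precision@k: fraction of the top-k retrieved docs that are relevant.
    recall@k:    fraction of all relevant docs found in the top-k.
    """
    top_k = retrieved_ids[:k]
    hits = sum(1 for doc_id in top_k if doc_id in relevant_ids)
    precision = hits / k
    recall = hits / len(relevant_ids) if relevant_ids else 0.0
    return precision, recall

retrieved = ["doc1", "doc4", "doc3", "doc8", "doc2"]
relevant = {"doc1", "doc2", "doc5"}
print(precision_recall_at_k(retrieved, relevant, k=5))  # (0.4, 0.666...)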

Configuration

Configuration files are located in sage/benchmark/benchmark_rag/config/:

  • config_dense_milvus.yaml - Dense retrieval configuration
  • config_sparse_milvus.yaml - Sparse retrieval configuration
  • config_hybrid_milvus.yaml - Hybrid retrieval configuration
  • config_qa_chroma.yaml - ChromaDB configuration

Experiment configurations are located in sage/benchmark/benchmark_rag/evaluation/config/:

  • experiment_config.yaml - Benchmark experiment settings
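
The keys inside these YAML files are not reproduced here; to inspect one, the load_config helper from the basic example above can be pointed at any of the listed files (this assumes load_config returns a dict-like mapping):

from sage.benchmark.benchmark_rag.config import load_config

# Load one of the shipped configurations and list its top-level keys.
config = load_config("config_hybrid_milvus.yaml")
print(sorted(config))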

📖 Data

Test data is included in the package:

  • Benchmark Data (benchmark_rag/data/):

    • queries.jsonl - Sample queries for testing (see the reading example below)
    • qa_knowledge_base.* - Knowledge base in multiple formats (txt, md, pdf, docx)
    • sample/ - Additional sample documents for testing
  • Benchmark Config (benchmark_rag/config/):

    • config_*.yaml - RAG pipeline configurations (dense, sparse, hybrid, ChromaDB)
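
The record layout of queries.jsonl is not documented here; assuming one JSON object per line (the usual JSONL convention), it can be read like this (the path is illustrative and depends on where your checkout lives):

import json
from pathlib import Path

# Hypothetical location inside a source checkout; adjust as needed.
queries_path = Path("src/sage/benchmark/benchmark_rag/data/queries.jsonl")

queries = []
with queries_path.open(encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:                              # skip blank lines
            queries.append(json.loads(line))  # one JSON object per line

print(f"Loaded {len(queries)} queries")
print(queries[0])  # inspect the first record to see its actual fields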

🔧 Development

Running Tests

pytest packages/sage-benchmark/

Code Formatting

# Format code
black packages/sage-benchmark/

# Lint code
ruff check packages/sage-benchmark/

📚 Documentation

For detailed documentation on each component:

  • See src/sage/benchmark/benchmark_rag/README.md for RAG examples and benchmark details

🔮 Future Components

  • benchmark_agent: Agent system performance benchmarking
  • benchmark_anns: Approximate Nearest Neighbor Search benchmarking
  • benchmark_llm: LLM inference performance benchmarking

🤝 Contributing

This package follows the same contribution guidelines as the main SAGE project. See the main repository's CONTRIBUTING.md.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🔗 Related Packages

  • sage-kernel: Core computation engine for running benchmarks
  • sage-libs: RAG components and utilities
  • sage-middleware: Vector database services (Milvus, ChromaDB)
  • sage-common: Common utilities and data types

📮 Support


Part of the SAGE Framework | Main Repository
