Improvement of benchmarking scripts #123

@achyu-dev

Description

📌 Feature Summary

Improve the benchmarking scripts: configurable output destinations and consistent file names.

💡 Motivation

Currently, our benchmarking scripts save generated CSV files and graphs directly in the same directory as the script. This clutters the folder and makes it harder to organize results, especially when running multiple benchmarks. File names are also inconsistent, making it difficult to distinguish between runs.

🧩 Suggested Implementation

  • Add a configurable output_dir parameter (default: benchmarks/outputs/).
  • Create the directory if it does not exist.
  • Define a naming convention, e.g. {script_name}_{date}_{time}.{ext}
  • Apply the same convention for graph images and other generated files.
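The steps above could be sketched roughly as follows. This is only an illustration, not the project's actual code: the function name `make_output_path` and its parameters are hypothetical, and the underscore-separated naming scheme is one possible reading of the suggested convention.

```python
from datetime import datetime
from pathlib import Path


def make_output_path(script_name, ext,
                     output_dir="benchmarks/outputs", tag=None):
    """Build a timestamped path under output_dir, creating it if needed.

    Hypothetical helper: names and defaults are illustrative only.
    """
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)  # create benchmarks/outputs/ if missing
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")  # {date}_{time}
    name = f"{script_name}_{stamp}"
    if tag:  # optional identifier for experimental runs (see --tag note below)
        name += f"_{tag}"
    return out / f"{name}.{ext}"
```

The same helper would cover CSVs, graph images, and any other generated files, since only `ext` changes per artifact.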

🧠 Estimated Complexity

Medium (Changes multiple files, some model updates)

🔗 Related Issues / PRs

No response

🧠 Additional Notes

  • Add appropriate logging where required.
  • Consider adding an optional --tag CLI argument to append a custom identifier to output filenames for easier tracking of experimental runs.
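A minimal sketch of how the `--tag` (and an `--output-dir`) argument might be wired up with `argparse`; the flag names beyond `--tag` and all defaults are assumptions, not decided interface:

```python
import argparse

parser = argparse.ArgumentParser(description="Run a benchmark script")
parser.add_argument("--tag", default=None,
                    help="optional identifier appended to output filenames")
parser.add_argument("--output-dir", default="benchmarks/outputs",
                    help="directory for generated CSVs and graphs (assumed default)")

# Example invocation for illustration:
args = parser.parse_args(["--tag", "expA"])
```

The parsed `args.tag` would then be passed through to whatever builds the output filenames.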

Metadata

Labels

enhancement (New feature or request)

Projects

Status

In Progress
