Is your feature request related to a problem? Please describe.
FloPy currently has a very small set of benchmarks using pytest-benchmark, including:

- init/load/write time for a small test model (`def test_model_load_time(function_tmpdir, benchmark):` in `autotest/test_modflow.py`, line 1352 at 3e176d0)
- MP7 pathline/endpoint data load time (`def test_get_destination_pathline_data(` in `autotest/test_modpathfile.py`, line 264 at 3e176d0)
It might be worthwhile to a) benchmark a broader set of models/utils, and b) minimize ad hoc code needed to achieve this.
Describe the solution you'd like
Maybe benchmark load/write for all test models provided by a models API as proposed in #1872, as well as any widely used pre/post-processing utils. Could also try ASV — it has been adopted by other projects like numpy, shapely, and pywatershed.
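As a sketch of what the ASV route could look like: ASV benchmarks are plain Python classes in a `benchmarks/` directory, discovered by method-name prefix (`time_`, `mem_`, `peakmem_`). The model names and the load step below are hypothetical placeholders:

```python
class ModelLoadSuite:
    """Hypothetical ASV suite timing model load across several models."""

    # ASV runs one benchmark per parameter value
    params = ["freyberg", "mf6-freyberg"]  # placeholder model names
    param_names = ["model"]

    def setup(self, model):
        # called before each timing run; locate the model's namefile
        self.namefile = f"{model}.nam"

    def time_load(self, model):
        # a real benchmark would load the model here, e.g. via
        # flopy's load machinery; left as a placeholder in this sketch
        pass
```

ASV then tracks these timings across commits (`asv run`, `asv compare`) and can publish an HTML dashboard (`asv publish`), which is how projects like numpy monitor regressions.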
Describe alternatives you've considered
We could just stick with pytest-benchmark and a bit of scripting instead of moving to ASV.
Additional context
This would help quantify performance improvements from the ongoing effort to use pandas for file IO.