add pytest-benchmark to benchmarking section #596
Conversation
docs/pages/benchmarking-profiling.md
Outdated
| | [asv](https://asv.readthedocs.io/en/stable/) | A tool for benchmarking Python packages over their lifetime. Allows you to write benchmarks and then run them against every commit in the repository, to identify where performance increased or decreased. Comparative benchmarks can also be run, which can be useful for [running them in CI using GitHub runners](https://labs.quansight.org/blog/2021/08/github-actions-benchmarks). | <span class="label label-green">Best</span> | | ||
| | [pytest-benchmark](https://pytest-benchmark.readthedocs.io/en/stable) | Provides a `benchmark` fixture with full `pytest` integration. A simple, light-weight alternative to `asv`, for cases that don't require tracking performance over many commits. | <span class="label label-yellow">Good</span> | |
You can also run comparative benchmarks (regression tests) using pytest-benchmark. We've done this in the glass project.
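For reference, the comparison workflow mentioned here is driven by pytest-benchmark's documented CLI flags. A hedged sketch, invoking them programmatically via `pytest.main` (the test path and function names are illustrative):

```python
# Sketch of comparative (regression-test) benchmarking with pytest-benchmark.
# --benchmark-autosave, --benchmark-compare, and --benchmark-compare-fail are
# documented pytest-benchmark options; "tests/test_perf.py" is illustrative.
import pytest

def save_baseline(test_path: str = "tests/test_perf.py") -> int:
    """Run the benchmarks and save this run's results under .benchmarks/."""
    return pytest.main([test_path, "--benchmark-autosave"])

def compare_against_baseline(test_path: str = "tests/test_perf.py") -> int:
    """Re-run the benchmarks, compare against the most recent saved run,
    and fail if any benchmark's mean time regressed by more than 5%."""
    return pytest.main([
        test_path,
        "--benchmark-compare",
        "--benchmark-compare-fail=mean:5%",
    ])
```

In CI this maps onto two plain `pytest` invocations (save on the base branch, compare on the PR branch), which I'd guess is roughly the regression-test setup described above.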
We are a little sceptical of their reliability, though, but I think that matches what the linked blog says.
Thanks @connoraird - good to know! I guess it's still not as fully featured as asv's comparison, though? I.e. it's more geared towards comparing specific runs or subsets of runs, rather than the full project history?
How about something like:
Provides a benchmark fixture with full pytest integration, and the ability to compare performance between multiple runs. A simple, light-weight alternative to asv.
| | [pytest-benchmark](https://pytest-benchmark.readthedocs.io/en/stable) | Provides a benchmark fixture with full pytest integration, and the ability to compare performance between multiple runs. A simple, light-weight alternative to asv that we've found to be useful. | <span class="label label-yellow">Good</span> | |
Because you said

> It was a useful alternative to asv

... it feels worth mentioning.
Thanks both - I've updated the text now 👍
@all-contributors please add @connoraird for review.
I've put up a pull request to add @connoraird! 🎉
Adds @connoraird as a contributor for review. This was requested by samcunliffe [in this comment](#596 (comment))

Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
Co-authored-by: Sam Cunliffe <samcunliffe@users.noreply.github.com>
Adds pytest-benchmark as another benchmarking option.
@ruaridhg and I used this recently on the HEFTIE project to benchmark reading / writing ome-zarr images. It was a useful alternative to
asv, as we didn't need to track performance over time (rather, we just wanted to benchmark specific functions / packages). Potentially @connoraird used this on a project recently too (?) - if so, do double-check that you agree with my points here. If you ended up going for another benchmarking package, maybe we can add that in too 😄