[ENH] Simple benchmarking suite. #1510
Conversation
Current coverage is 88.26% (diff: 100%)

@@          master   #1510   diff @@
===================================
  Files         77      77
  Lines       7624    7624
  Methods        0       0
  Messages       0       0
  Branches       0       0
===================================
  Hits        6729    6729
  Misses       895     895
  Partials       0       0
Could this be shaped into a set of performance-guarantee tests that always run and fail if some change degrades performance beyond a threshold?
You say this as if it were a feature? 😃
I don't think it can be done, at least not without significant effort. There is no Travis-like tool that integrates with GitHub; the only similar one I found was asv, which seems to be local-only. Running on Travis is impossible because, as far as I'm aware, it does not provide a consistent performance environment. The threshold is the other problem: you have to allow for some noise, and even then you'd get a bazillion PRs that each worsen performance slightly, until one final PR, perhaps only because of noise, crosses the threshold and fails through no fault of its own.
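For illustration only, here is roughly what such a threshold test could look like when run locally with unittest. The baseline time and tolerance factor are invented numbers, and, as noted above, a shared CI machine would not give stable enough timings for this to be reliable:

```python
import time
import unittest


def best_time(func, repeats=5):
    # Best-of-N wall-clock timing; taking the minimum damps scheduler noise somewhat.
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        func()
        timings.append(time.perf_counter() - start)
    return min(timings)


class TestPerformanceGuard(unittest.TestCase):
    # Both numbers are invented for illustration; a real baseline would have to
    # be measured on the same machine that later runs the test.
    BASELINE = 0.05   # seconds
    TOLERANCE = 1.5   # fail only if >50% slower than baseline, to allow for noise

    def test_string_join_speed(self):
        elapsed = best_time(lambda: "".join(str(i) for i in range(10_000)))
        self.assertLess(elapsed, self.BASELINE * self.TOLERANCE)
```

This shows exactly the weakness raised above: the baseline constant must be re-measured per machine, and small regressions accumulate silently until an unlucky PR trips the tolerance.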
# noinspection PyStatementEffect
class BenchBasic(Benchmark):
    def setUp(self):
        self.setup_test_string = "sttss"
What is this?
To ease my benchmarking of pandas code, I created a simple benchmark suite. It's based on unittest and is timeit-like, but does not use timeit.
I've included some basic tests. See if this is something we want included with Orange. I'm only not sure about the location; for now, I've put it in /benchmark/.
Example output on current master:
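The example output is not reproduced here. As a rough sketch of the idea described above, a unittest-style harness that times rather than asserts, here is one possible shape; the method-naming convention, the `repeats` count, and the `run` signature are all assumptions for illustration, not the PR's actual API:

```python
import time


class Benchmark:
    """Time every method whose name starts with 'bench_'; no asserts, just timings."""

    repeats = 3

    def setUp(self):
        # Subclasses override this to build fixtures, mirroring unittest's setUp.
        pass

    def run(self):
        results = {}
        for name in sorted(dir(self)):
            if name.startswith("bench_"):
                timings = []
                for _ in range(self.repeats):
                    self.setUp()  # rebuild fixtures so runs don't contaminate each other
                    start = time.perf_counter()
                    getattr(self, name)()
                    timings.append(time.perf_counter() - start)
                results[name] = min(timings)  # report best-of-N, as timeit's docs recommend
        return results


class BenchBasic(Benchmark):
    def setUp(self):
        self.setup_test_string = "sttss"

    def bench_count(self):
        self.setup_test_string.count("s")
```

Mimicking unittest's class-per-suite, method-per-case layout keeps the benchmarks familiar to anyone who writes the project's tests, while reporting the minimum over repeats follows timeit's convention for reducing timing noise.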