add benchmarks for some collection operations #109
Conversation
```julia
else
    g[cstr, tstr, "push!", "overwrite"] = @benchmarkable push!(d, $eltin) setup=(d=copy($c))
    g[cstr, tstr, "push!", "new"] = @benchmarkable push!(d, $eltout) setup=(d=copy($c)) evals=1
    g[cstr, tstr, "pop!", "specified"] = @benchmarkable pop!(d, $(askey(C, eltin))) setup=(d=copy($c)) evals=1
```
So this line seems to generate a benchmark with `evals=2` according to the Travis failure, but it can only work with `evals == 1`: `setup` runs once per sample, so a second evaluation would try to pop a key the first evaluation already removed. It passes locally; how can I make sure `evals` is set to 1?
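For illustration, a minimal sketch of the failure mode, using a plain `Dict` as a hypothetical stand-in for the PR's collections:

```julia
using BenchmarkTools

c = Dict(1 => 1, 2 => 2)

# `setup` runs once per *sample*; `evals` re-runs the body within that
# sample without re-running `setup`, so a second evaluation tries to pop
# a key that the first evaluation already removed.
b = @benchmarkable pop!(d, 1) setup=(d = copy($c)) evals=1
run(b)   # fine: one evaluation per fresh copy

b2 = @benchmarkable pop!(d, 1) setup=(d = copy($c)) evals=2
run(b2)  # errors with KeyError on the second evaluation
```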
Bump: I'm stuck on this one... Is it my misunderstanding of how `evals` works, or a current limitation of BaseBenchmarks.jl? Should I just disable this test until a solution is found?
Can someone help me with this one? Otherwise I will disable it until a solution is found.
I don't think it makes sense to use `evals=1` here. I understand that you are trying to benchmark a single `push!` or `pop!`, but that operation is too cheap to measure accurately with a single evaluation. It would probably be better to just benchmark `pop!(push!(d, $eltin), $(askey(C, eltin)))`, so that BenchmarkTools can evaluate it as many times as it wants: each evaluation pushes the element and then pops it back out, leaving `d` unchanged.
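For concreteness, a minimal sketch of that round-trip pattern, again with a plain `Dict` and a made-up element standing in for the PR's collections:

```julia
using BenchmarkTools

c = Dict("a" => 1, "b" => 2)
elt = "c" => 3   # hypothetical element not already in the collection

# Each evaluation pushes the pair and immediately pops it back out by key,
# so the body leaves `d` unchanged and BenchmarkTools is free to tune
# `evals` to whatever it wants.
b = @benchmarkable pop!(push!(d, $elt), $(first(elt))) setup=(d = copy($c))
run(b)
```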
Thanks @stevengj. I think I would then prefer to benchmark `pop!` on a set of elements one by one, so as not to mix the measurement with the performance of `push!`, but your solution is simpler and could be good enough (and does not require `evals == 1`).
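A sketch of that one-by-one alternative, under my interpretation that an unspecified `pop!` is acceptable: each evaluation removes a different remaining element, so no `push!` enters the measurement, at the cost of capping `evals` by the collection size.

```julia
using BenchmarkTools

c = Dict(i => i for i in 1:1000)

# Each evaluation pops some remaining element from the same copy, so
# `evals` must not exceed length(c) or the dict empties mid-sample.
b = @benchmarkable pop!(d) setup=(d = copy($c)) evals=100
run(b)
```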
I updated the code according to your suggestion.
Force-pushed from f0d548a to 907940f.
Force-pushed from 7aa4e57 to 718d549.
I finally updated the code to fix the 0.6 failures; this should be good to go, I think.
Bump
Bump.
I don't really know anything about this code and no one else has said anything, so I'll assume it's alright. Thanks!
Thanks so much! I will finally be able to push a sleeping PR forward. If a problem pops up with these benchmarks, I will try to be quick to fix it.
I probably should have rerun CI before merging. It looks like the benchmark tuning is running into errors. I'll investigate.
Oops, sorry. If it's simpler for you, you can revert and I will open a new, updated PR tomorrow (it's getting late here!).
It's erroring on this line: https://github.com/JuliaCI/BaseBenchmarks.jl/pull/109/files#diff-3df55ebeb77acc9ed1d8b0ef1bfef2daR188
Oh OK, could it be that the tuning runs more evals than CI? You could try to add
Maybe not everything here makes sense (so cc-ing @JeffBezanson, who may have suggestions); I will review again by tomorrow. Also, using `setup` with mutating benchmarks is still a bit tricky for me, and I'm not sure I've got it right.
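For reference, a minimal sketch of the copy-in-`setup` pattern these benchmarks rely on (the `Dict` is again a hypothetical stand-in for the PR's collections):

```julia
using BenchmarkTools

c = Dict(1 => 1, 2 => 2)

# `copy` inside `setup` gives every sample its own `d`, so mutating it in
# the body never corrupts the shared collection `c`. Within a sample the
# copy is still shared across evaluations, which is why the destructive
# benchmarks above need evals=1 or a push!/pop! round trip.
b = @benchmarkable push!(d, 3 => 3) setup=(d = copy($c))
run(b)
```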