
add benchmarks for some collection operations #109

Merged · 9 commits · Dec 8, 2017

Conversation

@rfourquet (Contributor):

Maybe not everything here makes sense (so cc'ing @JeffBezanson, who may have suggestions); I will review again by tomorrow. Also, using setup with mutating benchmarks is still a bit tricky for me, and I'm not sure I got it right.

@rfourquet changed the title from "add set-like operations benchmarks" to "add benchmarks for some collection operations" on Aug 31, 2017
else
g[cstr, tstr, "push!", "overwrite"] = @benchmarkable push!(d, $eltin) setup=(d=copy($c))
g[cstr, tstr, "push!", "new"] = @benchmarkable push!(d, $eltout) setup=(d=copy($c)) evals=1
g[cstr, tstr, "pop!", "specified"] = @benchmarkable pop!(d, $(askey(C, eltin))) setup=(d=copy($c)) evals=1
@rfourquet (Author):

So this line seems to generate a benchmark with evals=2 according to the Travis failure, but it can only work with evals == 1. It passes locally; how can I make sure evals is set to 1?
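For context, a minimal sketch of the setup/evals interaction being discussed (the dictionary below is a hypothetical stand-in for the interpolated `$c` and `$(askey(C, eltin))` in the PR code): in BenchmarkTools, `setup` runs once per *sample*, while the body runs `evals` times per sample, so a destructive `pop!` is only valid when evals is 1.

```julia
using BenchmarkTools

# Hypothetical stand-in for the collections benchmarked in this PR.
c = Dict(1 => 1, 2 => 2, 3 => 3)

# `setup` runs once per sample, but the body runs `evals` times per sample.
# With evals >= 2, the second evaluation pops a key that is already gone and
# throws a KeyError, so this benchmark only makes sense with evals = 1.
b = @benchmarkable pop!(d, 2) setup=(d = copy($c)) evals=1
run(b)
```

The sketch uses a plain `Dict`; the PR applies the same pattern across several collection types.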

@rfourquet (Author):

Bump: I'm stuck on this one... Is it my misunderstanding of how evals works, or a current limitation of BaseBenchmarks.jl? Should I just disable this test until a solution is found?

@rfourquet (Author):

Can someone help me with this one? Otherwise I will disable it until a solution is found.

@stevengj (Contributor):

I don't think it makes sense to use evals=1 here. I understand that you are trying to benchmark a single push! or pop!, but that operation is too cheap to time reliably with a single evaluation.

Possibly it would be better to just benchmark pop!(push!(d, $eltin), $(askey(C, eltin))), so that BenchmarkTools can evaluate it as many times as it wants.
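Sketched with a hypothetical `Dict` (the real code interpolates `$c`, `$eltin`, and `askey(C, eltin)`), the suggestion looks like this; since the pair of operations leaves `d` unchanged between evaluations, BenchmarkTools is free to pick any number of evaluations per sample:

```julia
using BenchmarkTools

# Hypothetical stand-ins for $c, $eltin, and askey(C, eltin).
c = Dict("a" => 1, "b" => 2)
eltin = "x" => 3

# push! followed by pop! of the same key restores d to its original state,
# so the benchmark no longer needs evals = 1.
b = @benchmarkable pop!(push!(d, $eltin), "x") setup=(d = copy($c))
run(b)
```

The trade-off, as noted below, is that the reported time covers the push!/pop! pair rather than either operation alone.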

@rfourquet (Author):

Thanks @stevengj. I think I would then prefer to benchmark pop! on a set of elements one by one, so as not to mix the measurement with the performance of push!, but your solution is simpler and could be good enough (and does not require evals == 1).
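The one-by-one alternative mentioned here could be sketched as follows (hypothetical keys, not code from the PR): popping a pre-inserted batch makes each evaluation long enough to time without mixing in push!, though it still drains the collection and therefore still requires evals = 1.

```julia
using BenchmarkTools

ks = collect(1:1000)   # hypothetical batch of keys

# Each evaluation pops 1000 elements, so the per-call cost of pop! dominates
# the measurement without any push! in the timed body. The dict is drained
# afterwards, so this variant still needs evals = 1.
b = @benchmarkable (for k in $ks; pop!(d, k); end) setup=(d = Dict(k => k for k in $ks)) evals=1
run(b)
```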

@rfourquet (Author):

I updated according to your suggestion.

@rfourquet (Author):

I finally updated the code to fix the 0.6 failures; this should be good to go, I think.

@rfourquet (Author):

Bump

@rfourquet (Author):

Bump.

@ararslan (Member) commented Dec 8, 2017:

I don't really know anything about this code and no one else has said anything, so I'll assume it's alright. Thanks!

@ararslan ararslan merged commit e4f54fc into JuliaCI:master Dec 8, 2017
@rfourquet rfourquet deleted the rf/collection branch December 8, 2017 21:02
@rfourquet (Author):

Thanks so much! I will finally be able to push a sleeping PR forward. If a problem pops up with these benchmarks, I will try to be responsive in fixing it.

@ararslan (Member) commented Dec 8, 2017:

I probably should have rerun CI before merging. It looks like the benchmark tuning is running into errors. I'll investigate.

@rfourquet (Author):

Oops, sorry. If it's simpler for you, you can revert and I will open a new, updated PR tomorrow (it's getting late here!).

@ararslan (Member) commented Dec 8, 2017:

It's erroring on this line: https://github.com/JuliaCI/BaseBenchmarks.jl/pull/109/files#diff-3df55ebeb77acc9ed1d8b0ef1bfef2daR188

ERROR: LoadError: ArgumentError: dict must be non-empty
Stacktrace:
 [1] pop!(::Dict{String,String}) at ./dict.jl:655
 [2] ##core#11925(::Dict{String,String}) at /home/nanosoldier/.julia/v0.7/BenchmarkTools/src/execution.jl:312
 [3] ##sample#11926(::BenchmarkTools.Parameters) at /home/nanosoldier/.julia/v0.7/BenchmarkTools/src/execution.jl:320
 [4] #_lineartrial#24(::Int64, ::NamedTuple{(),Tuple{}}, ::Function, ::BenchmarkTools.Benchmark{Symbol("##benchmark#11924")}, ::BenchmarkTools.Parameters) at /home/nanosoldier/.julia/v0.7/BenchmarkTools/src/execution.jl:88
 [5] (::getfield(Base, Symbol("#inner#4")){NamedTuple{(),Tuple{}},typeof(BenchmarkTools._lineartrial),Tuple{BenchmarkTools.Benchmark{Symbol("##benchmark#11924")},BenchmarkTools.Parameters}})() at ./essentials.jl:649
 [6] #tune!#27(::Bool, ::String, ::NamedTuple{(),Tuple{}}, ::Function, ::BenchmarkTools.Benchmark{Symbol("##benchmark#11924")}, ::BenchmarkTools.Parameters) at /home/nanosoldier/.julia/v0.7/BenchmarkTools/src/execution.jl:152
 [7] (::getfield(BenchmarkTools, Symbol("#kw##tune!")))(::NamedTuple{(:verbose, :pad),Tuple{Bool,String}}, ::typeof(tune!), ::BenchmarkTools.Benchmark{Symbol("##benchmark#11924")}) at ./<missing>:0
 [8] macro expansion at ./util.jl:225 [inlined]
 [9] #tune!#26(::Bool, ::String, ::NamedTuple{(),Tuple{}}, ::Function, ::BenchmarkGroup) at /home/nanosoldier/.julia/v0.7/BenchmarkTools/src/execution.jl:143
 [10] (::getfield(BenchmarkTools, Symbol("#kw##tune!")))(::NamedTuple{(:verbose, :pad),Tuple{Bool,String}}, ::typeof(tune!), ::BenchmarkGroup) at ./<missing>:0
 [11] macro expansion at ./util.jl:225 [inlined]
 [12] #tune!#26(::Bool, ::String, ::NamedTuple{(),Tuple{}}, ::Function, ::BenchmarkGroup) at /home/nanosoldier/.julia/v0.7/BenchmarkTools/src/execution.jl:143
 [13] (::getfield(BenchmarkTools, Symbol("#kw##tune!")))(::NamedTuple{(:verbose, :pad),Tuple{Bool,String}}, ::typeof(tune!), ::BenchmarkGroup) at ./<missing>:0
...

@rfourquet (Author):

Oh OK, could it be that tuning runs more evals than CI? You could try adding evals=1, or comment out the line until we find a better way. I'm not yet clear on how to handle this kind of benchmark.
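A minimal reproduction of the failure mode, assuming the behavior discussed in this thread (tuning bumping evals above 1 for a destructive body): setup runs once per sample, so once evals exceeds 1, the second evaluation sees the dict already emptied by the first.

```julia
using BenchmarkTools

# setup runs once per sample; with evals = 2 the first evaluation empties
# the one-element dict and the second throws
# "ArgumentError: dict must be non-empty", matching the CI log above.
b = @benchmarkable pop!(d) setup=(d = Dict("k" => "v")) evals=2
run(b)   # errors during the second evaluation of each sample
```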
