
tune! on benchmarkable with evals set. #328

Closed
vchuravy opened this issue Aug 26, 2023 · 4 comments

@vchuravy (Member)

The docs state:

Note that the setup and teardown phases are executed for each sample, not each evaluation. Thus, the sorting example above wouldn't produce the intended results if evals/sample > 1 (it'd suffer from the same problem of benchmarking against an already sorted vector).

So looking at the example:

julia> using BenchmarkTools

julia> x = rand(100000);

julia> b = @benchmarkable (@assert !issorted(y); sort!(y)) setup=(y = copy($x)) evals=1
Benchmark(evals=1, seconds=5.0, samples=10000)

julia> run(b) # Succeeds since evals is set to 1 
# ...

julia> tune!(b)
ERROR: AssertionError: !(issorted(y))
Stacktrace:
  [1] var"##core#228"(y::Vector{Float64})
    @ Main ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:489
  [2] var"##sample#229"(::Tuple{}, __params::BenchmarkTools.Parameters)
    @ Main ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:497
  [3] _lineartrial(b::BenchmarkTools.Benchmark, p::BenchmarkTools.Parameters; maxevals::Int64, kwargs::@Kwargs{})
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:161
  [4] _lineartrial(b::BenchmarkTools.Benchmark, p::BenchmarkTools.Parameters)
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:152
  [5] #invokelatest#2
    @ BenchmarkTools ./essentials.jl:887 [inlined]
  [6] invokelatest
    @ BenchmarkTools ./essentials.jl:884 [inlined]
  [7] lineartrial
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:35 [inlined]
  [8] tune!(b::BenchmarkTools.Benchmark, p::BenchmarkTools.Parameters; progressid::Nothing, nleaves::Float64, ndone::Float64, verbose::Bool, pad::String, kwargs::@Kwargs{})
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:251
  [9] tune!
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:247 [inlined]
 [10] tune!(b::BenchmarkTools.Benchmark)
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:247
 [11] top-level scope
    @ REPL[7]:1

The docs also state:

evals: The number of evaluations per sample. For best results, this should be kept consistent between trials. A good guess for this value can be automatically set on a benchmark via tune!, but using tune! can be less consistent than setting evals manually (which bypasses tuning). Defaults to BenchmarkTools.DEFAULT_PARAMETERS.evals = 1. If the function you study mutates its input, it is probably a good idea to set evals=1 manually.

So why is this a problem? PkgBenchmarks.jl calls tune! on every benchmarkable, which changes evals and leads to false results when benchmarking mutating functions.
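
For illustration, a rough sketch of the failure mode (using a small vector so the benchmark is fast enough that tuning typically picks evals > 1; the exact tuned value will vary):

julia> using BenchmarkTools

julia> x = rand(100);

julia> b = @benchmarkable sort!(y) setup=(y = copy($x)) evals=1;

julia> b.params.evals
1

julia> tune!(b);

julia> b.params.evals  # typically > 1 now, so within a sample every evaluation after the first sorts an already sorted vector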

Not sure who maintains PkgBenchmarks these days but cc: @KristofferC @shashi @DilumAluthge

A potential solution would be to add a new parameter, say tuneable, that skips tuning in all instances; a rough sketch is below.
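
Sketch only, tuneable is hypothetical and not part of the current API:

julia> b = @benchmarkable sort!(y) setup=(y = copy($x)) evals=1 tuneable=false;  # hypothetical flag

julia> tune!(b);  # would become a no-op here (including PkgBenchmarks' blanket tuning), leaving b.params.evals == 1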

@gdalle (Collaborator) commented Aug 28, 2023

@vchuravy I think that's precisely what I addressed with #318, and also the reason why I think BenchmarkTools.jl v2.0 is long overdue. But I would appreciate a discussion around it, because that would mean a breaking release for a very central package.

@gdalle (Collaborator) commented Aug 28, 2023

@gdalle (Collaborator) commented Sep 5, 2023

@vchuravy can you confirm that the fix works for you?

@gdalle (Collaborator) commented Sep 18, 2023

Closing, since the example works on master.
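
That is, on master the snippet from the top of the issue should now go through without the assertion firing, and the manually set evals is kept:

julia> x = rand(100000);

julia> b = @benchmarkable (@assert !issorted(y); sort!(y)) setup=(y = copy($x)) evals=1;

julia> tune!(b);  # no AssertionError anymore

julia> b.params.evals
1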

gdalle closed this as completed on Sep 18, 2023