Commit

Update benchmark annotations
And stop benchmarking the Generator::State re-use, as it
no longer makes a sizeable difference.
byroot committed Nov 6, 2024
1 parent 3f950f2 commit f5812d8
Showing 2 changed files with 21 additions and 26 deletions.
26 changes: 11 additions & 15 deletions benchmark/encoder.rb
@@ -17,7 +17,6 @@
def implementations(ruby_obj)
state = JSON::State.new(JSON.dump_default_options)
{
json_state: ["json (reuse)", proc { state.generate(ruby_obj) }],
json: ["json", proc { JSON.generate(ruby_obj) }],
oj: ["oj", proc { Oj.dump(ruby_obj) }],
}
@@ -58,27 +57,24 @@ def benchmark_encoding(benchmark_name, ruby_obj, check_expected: true, except: [
# NB: Notes are based on ruby 3.3.4 (2024-07-09 revision be1089c8ec) +YJIT [arm64-darwin23]

# On the first two micro-benchmarks, the limiting factor is the fixed cost of initializing the
# generator state. Since `JSON.generate` now lazily allocates the `State` object we're now ~10% faster
# generator state. Since `JSON.generate` now lazily allocates the `State` object we're now ~10-20% faster
# than `Oj.dump`.
benchmark_encoding "small mixed", [1, "string", { a: 1, b: 2 }, [3, 4, 5]]
benchmark_encoding "small nested array", [[1,2,3,4,5]]*10
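For context, the dropped `json_state` variant can still be reproduced by hand. A rough, hypothetical micro-benchmark sketch (the object and iteration count here are arbitrary, not from the harness) comparing plain `JSON.generate` against an explicitly reused `JSON::State`:

```ruby
require "json"
require "benchmark"

# Sketch only: compares a fresh JSON.generate call (which now allocates
# its State lazily) against the explicitly reused State this commit
# stops benchmarking.
obj = [1, "string", { a: 1, b: 2 }, [3, 4, 5]]
state = JSON::State.new(JSON.dump_default_options)

n = 10_000
Benchmark.bm(12) do |x|
  x.report("generate")    { n.times { JSON.generate(obj) } }
  x.report("state reuse") { n.times { state.generate(obj) } }
end
```

Per the commit message, the two rows should now land within noise of each other.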

# On small hash specifically, we're just on par with `Oj.dump`. It would be worth investigating why
# Hash serialization doesn't perform as well as other types.
benchmark_encoding "small hash", { "username" => "jhawthorn", "id" => 123, "event" => "wrote json serializer" }

# On string encoding we're ~20% faster when dealing with mostly ASCII, but ~10% slower when dealing
# with mostly multi-byte characters. This is a tradeoff.
benchmark_encoding "mixed utf8", ([("a" * 5000) + "€" + ("a" * 5000)] * 500), except: %i(json_state)
benchmark_encoding "mostly utf8", ([("€" * 3333)] * 500), except: %i(json_state)
# On string encoding we're ~20% faster when dealing with mostly ASCII, but ~50% slower when dealing
# with mostly multi-byte characters. There are likely some gains left to be had in multi-byte handling.
benchmark_encoding "mixed utf8", ([("a" * 5000) + "€" + ("a" * 5000)] * 500)
benchmark_encoding "mostly utf8", ([("€" * 3333)] * 500)

# On these benchmarks we perform well: we're on par or better.
benchmark_encoding "integers", (1_000_000..1_001_000).to_a, except: %i(json_state)
benchmark_encoding "activitypub.json", JSON.load_file("#{__dir__}/data/activitypub.json"), except: %i(json_state)
benchmark_encoding "citm_catalog.json", JSON.load_file("#{__dir__}/data/citm_catalog.json"), except: %i(json_state)
benchmark_encoding "activitypub.json", JSON.load_file("#{__dir__}/data/activitypub.json")
benchmark_encoding "citm_catalog.json", JSON.load_file("#{__dir__}/data/citm_catalog.json")

# On twitter.json we're still about 10% slower; this is worth investigating.
benchmark_encoding "twitter.json", JSON.load_file("#{__dir__}/data/twitter.json"), except: %i(json_state)
# On twitter.json we're still about 6% slower; this is worth investigating.
benchmark_encoding "twitter.json", JSON.load_file("#{__dir__}/data/twitter.json")

# This benchmark spends the overwhelming majority of its time in `ruby_dtoa`. We rely on Ruby's implementation
# which uses a relatively old version of dtoa.c from David M. Gay.
@@ -89,8 +85,8 @@ def benchmark_encoding(benchmark_name, ruby_obj, check_expected: true, except: [
# but all these are implemented in C++11 or newer, making it hard if not impossible to include them.
# Short of a pure C99 implementation of these newer algorithms, there isn't much that can be done to match
# Oj speed without losing precision.
benchmark_encoding "canada.json", JSON.load_file("#{__dir__}/data/canada.json"), check_expected: false, except: %i(json_state)
benchmark_encoding "canada.json", JSON.load_file("#{__dir__}/data/canada.json"), check_expected: false
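The precision constraint mentioned above comes from dtoa's shortest-round-trip property: the emitted decimal text is compact, yet parsing it back yields the exact same Float. A small illustrative check (the float values are chosen arbitrarily):

```ruby
require "json"

# Float serialization goes through Ruby's dtoa, which emits the shortest
# decimal string that round-trips to the exact same Float.
floats = [54.3, 0.1, 1.0 / 3.0, 1e-300]
json = JSON.generate(floats)

# Round-trip is lossless despite the compact text form.
raise "precision lost" unless JSON.parse(json) == floats
```

Any faster replacement would have to preserve exactly this round-trip guarantee, which is what rules out simply truncating digits.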

# We're about 10% faster when `to_json` calls are involved, but this path wasn't particularly optimized,
# so there might be opportunities here.
benchmark_encoding "many #to_json calls", [{object: Object.new, int: 12, float: 54.3, class: Float, time: Time.now, date: Date.today}] * 20, except: %i(json_state)
benchmark_encoding "many #to_json calls", [{object: Object.new, int: 12, float: 54.3, class: Float, time: Time.now, date: Date.today}] * 20
21 changes: 10 additions & 11 deletions benchmark/parser.rb
@@ -28,27 +28,26 @@ def benchmark_parsing(name, json_output)

# NB: Notes are based on ruby 3.3.4 (2024-07-09 revision be1089c8ec) +YJIT [arm64-darwin23]

# Oj::Parser is very significantly faster (1.80x) on the nested array benchmark.
benchmark_parsing "small nested array", JSON.dump([[1,2,3,4,5]]*10)

# Oj::Parser is significantly faster (~1.5x) on the next 4 benchmarks in large part because its
# Oj::Parser is significantly faster (~1.3x) on the next 3 micro-benchmarks in large part because its
# cache is persisted across calls. That's not something we can do with the current API; we'd
# need to expose a stateful API as well, but that's not really desirable.
# Other than that we're faster than regular `Oj.load` by a good margin.
# Other than that we're faster than regular `Oj.load` by a good margin (between 1.3x and 2.4x).
benchmark_parsing "small nested array", JSON.dump([[1,2,3,4,5]]*10)
benchmark_parsing "small hash", JSON.dump({ "username" => "jhawthorn", "id" => 123, "event" => "wrote json serializer" })
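To make the API difference concrete, here is a sketch of the two call shapes. The `Oj::Parser` lines are shown as comments since `oj` is a third-party gem and the `:usual` mode name is an assumption about its API:

```ruby
require "json"

# Oj's stateful parser keeps its key cache alive across calls:
#
#   parser = Oj::Parser.new(:usual)  # assumed Oj API; cache persists
#   parser.parse(doc)                # later calls reuse cached key strings
#
# JSON's public API is stateless, so every call starts with a cold cache:
doc = '{"username":"jhawthorn","id":123,"event":"wrote json serializer"}'
3.times { JSON.parse(doc) } # keys are re-materialized on each call
```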

benchmark_parsing "test from oj", <<JSON
{"a":"Alpha","b":true,"c":12345,"d":[true,[false,[-123456789,null],3.9676,["Something else.",false],null]],"e":{"zero":null,"one":1,"two":2,"three":[3],"four":[0,1,2,3,4]},"f":null,"h":{"a":{"b":{"c":{"d":{"e":{"f":{"g":null}}}}}}},"i":[[[[[[[null]]]]]]]}
{"a":"Alpha","b":true,"c":12345,"d":[true,[false,[-123456789,null],3.9676,["Something else.",false],null]],
"e":{"zero":null,"one":1,"two":2,"three":[3],"four":[0,1,2,3,4]},"f":null,
"h":{"a":{"b":{"c":{"d":{"e":{"f":{"g":null}}}}}}},"i":[[[[[[[null]]]]]]]}
JSON

# On these macro-benchmarks, we're on par with `Oj::Parser` and significantly
# faster than `Oj.load`.
# On these macro-benchmarks, we're on par with `Oj::Parser`, except on `twitter.json` where we're `1.14x` faster,
# and between 1.3x and 1.5x faster than `Oj.load`.
benchmark_parsing "activitypub.json", File.read("#{__dir__}/data/activitypub.json")
benchmark_parsing "twitter.json", File.read("#{__dir__}/data/twitter.json")
benchmark_parsing "citm_catalog.json", File.read("#{__dir__}/data/citm_catalog.json")

# rapidjson is 8x faster thanks to it's much more performant float parser.
# rapidjson is 8x faster thanks to its much more performant float parser.
# Unfortunately, there aren't many existing fast float parsers in pure C,
# and including C++ is problematic.
# Aside from that, we're much faster than other alternatives here.
# Aside from that, we're close to the alternatives here.
benchmark_parsing "float parsing", File.read("#{__dir__}/data/canada.json")
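To see why this benchmark is float-bound, a rough sketch (array sizes and value ranges are arbitrary) contrasting float-heavy and integer-heavy documents of the same element count:

```ruby
require "json"
require "benchmark"

# Sketch: float-heavy JSON spends most of its parse time converting
# decimal strings to Floats, which is what dominates canada.json
# (a document made almost entirely of coordinate pairs).
floats   = JSON.generate(Array.new(5_000) { rand * 180 - 90 })
integers = JSON.generate(Array.new(5_000) { rand(1_000_000) })

Benchmark.bm(8) do |x|
  x.report("floats")   { 20.times { JSON.parse(floats) } }
  x.report("integers") { 20.times { JSON.parse(integers) } }
end
```

The gap between the two rows approximates the cost attributable to float conversion alone.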
