feat(testing): Add HiveDistributedBenchmarkRunner for optimizer benchmarks#27344

Merged
feilong-liu merged 1 commit into prestodb:master from kaikalur:hive-distributed-benchmark-runner
Mar 16, 2026
Conversation


@kaikalur kaikalur commented Mar 16, 2026

Summary

  • Add HiveDistributedBenchmarkRunner, a reusable utility that runs SQL benchmarks against a Hive-backed DistributedQueryRunner with side-by-side comparison of multiple session property configurations

The runner supports warmup/measured iterations, writes results to a temp file, and includes correctness verification (all scenarios must produce identical results).
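The warmup/measured iteration pattern described above is standard; as a minimal plain-Java sketch (illustrative names, with a `Supplier` standing in for query execution against the real `DistributedQueryRunner`):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Illustrative sketch of the warmup/measured loop; not the runner's actual API.
public class IterationSketch
{
    public static double runScenario(Supplier<Object> query, int warmup, int measured)
    {
        for (int i = 0; i < warmup; i++) {
            query.get();                         // warm caches and JIT; timings discarded
        }
        List<Long> timings = new ArrayList<>();
        for (int i = 0; i < measured; i++) {
            long start = System.nanoTime();
            query.get();
            timings.add(System.nanoTime() - start);
        }
        return average(timings);
    }

    static double average(List<Long> nanos)
    {
        // average latency in milliseconds
        return nanos.stream().mapToLong(Long::longValue).average().orElse(0) / 1_000_000.0;
    }
}
```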

Test plan

  • HiveDistributedBenchmarkRunner compiles successfully
  • Runner creates a DistributedQueryRunner and supports multiple named scenarios
  • Correctness verification via runWithVerification works as expected

== NO RELEASE NOTE ==

@kaikalur kaikalur requested a review from a team as a code owner March 16, 2026 17:48

sourcery-ai bot commented Mar 16, 2026

Reviewer's Guide

Introduces a reusable HiveDistributedBenchmarkRunner utility for running and comparing Hive-backed optimizer benchmarks with correctness verification, and a specific BenchmarkGroupingSetsPreAggregation test that benchmarks grouping sets pre-aggregation strategies under different session configurations.

File-Level Changes

Add a reusable Hive-backed distributed benchmark runner with multi-scenario support and correctness verification.
  • Create HiveDistributedBenchmarkRunner that instantiates a Hive DistributedQueryRunner with TPCH tables and manages its lifecycle via AutoCloseable
  • Support registration of multiple named benchmark scenarios by building derived Sessions from the runner's default session using a SessionConfigurator functional interface
  • Implement run and runWithVerification methods that execute warmup and measured iterations per scenario, compute average latencies, format a comparative summary, and optionally verify result equality across scenarios
  • Record per-run timings and averages in an internal StringBuilder, print them to stdout, and persist them to a temporary file under the system temp directory
  • Provide a helper to compare MaterializedResult outputs by row count and row-set equality, ignoring order
presto-hive/src/test/java/com/facebook/presto/hive/benchmark/HiveDistributedBenchmarkRunner.java
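The scenario-registration pattern (a `SessionConfigurator` functional interface deriving a session per named scenario) can be sketched with plain-Java stand-ins; a `Map` of properties replaces Presto's `Session`, and the property name below is purely illustrative:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: in the actual runner the configurator derives a Presto Session
// from the runner's default session; a mutable property map stands in here.
public class ScenarioRegistry
{
    @FunctionalInterface
    public interface SessionConfigurator
    {
        Map<String, String> configure(Map<String, String> defaultSession);
    }

    private final Map<String, SessionConfigurator> scenarios = new LinkedHashMap<>();

    public void addScenario(String name, SessionConfigurator configurator)
    {
        scenarios.put(name, configurator);
    }

    public Map<String, Map<String, String>> buildSessions(Map<String, String> defaults)
    {
        Map<String, Map<String, String>> sessions = new LinkedHashMap<>();
        // each scenario gets its own copy of the defaults to mutate
        scenarios.forEach((name, cfg) -> sessions.put(name, cfg.configure(new HashMap<>(defaults))));
        return sessions;
    }
}
```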
Introduce a concrete benchmark for grouping sets pre-aggregation strategies using the new runner.
  • Define a GROUP BY CUBE benchmark query over the lineitem table that cross-joins with UNNEST to amplify data volume without triggering PushPartialAggregationThroughExchange
  • Add a TestNG-based benchmark method that constructs a HiveDistributedBenchmarkRunner with fixed warmup and measured iterations and configures three optimizer scenarios via system properties: baseline, pre_aggregate_before_groupid, and add_exchange_below_agg
  • Invoke runWithVerification on the runner to ensure all optimizer scenarios produce identical query results while measuring performance
  • Provide a main method to run the benchmark outside TestNG using the same benchmark logic
presto-hive/src/test/java/com/facebook/presto/hive/benchmark/BenchmarkGroupingSetsPreAggregation.java
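The query shape described above (not the PR's actual text, which is not reproduced here) can be sketched as follows: a CROSS JOIN UNNEST over a constant array multiplies the lineitem rows feeding the GROUP BY CUBE, without adding an aggregation the PushPartialAggregationThroughExchange rule would rewrite:

```java
// Hypothetical query shape for illustration; column names are from TPCH lineitem.
public class GroupingSetsQuery
{
    public static final String QUERY = "" +
            "SELECT returnflag, linestatus, sum(quantity) " +
            "FROM lineitem " +
            "CROSS JOIN UNNEST(sequence(1, 10)) AS t(copy) " +   // 10x row amplification
            "GROUP BY CUBE (returnflag, linestatus)";
}
```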


@kaikalur kaikalur requested a review from feilong-liu March 16, 2026 17:48
Add a reusable HiveDistributedBenchmarkRunner utility that runs SQL
benchmarks against a Hive-backed DistributedQueryRunner, supporting
side-by-side comparison of multiple session property configurations
with warmup/measured iterations and correctness verification.

Also add BenchmarkGroupingSetsPreAggregation which uses the runner to
compare PreAggregateBeforeGroupId vs AddExchangesBelowPartialAggregation
vs baseline on a GROUP BY CUBE query.

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 1 issue, and left some high level feedback:

  • In HiveDistributedBenchmarkRunner.run(), baselineAvg is taken from averages.values().iterator().next() without checking that any scenarios were registered; consider throwing a clear exception if scenarios is empty to avoid a confusing NoSuchElementException.
  • The resultsMatch method compares MaterializedResult instances via HashSet of rows, which ignores both ordering and duplicate row counts; if duplicates are possible, consider a multiset-style comparison or using an existing Presto result equality helper instead of HashSet.
  • For the filesystem write in HiveDistributedBenchmarkRunner.run(), consider using local import statements for Files and Paths (rather than fully qualified names) and optionally including a timestamp or query identifier in the filename to avoid overwriting previous benchmark results.
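The two mechanical fixes suggested above could look like this (method names are illustrative, not the PR's code): a fail-fast guard for an empty scenario map, and a timestamped results file under the system temp directory:

```java
import java.nio.file.Path;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Map;

// Sketch of the review's suggested guards; not the actual runner implementation.
public class RunGuards
{
    public static double baselineAverage(Map<String, Double> averages)
    {
        if (averages.isEmpty()) {
            // clearer than the NoSuchElementException from iterator().next()
            throw new IllegalStateException("No benchmark scenarios registered");
        }
        return averages.values().iterator().next();
    }

    public static Path resultsFile(String benchmarkName)
    {
        // timestamp in the name avoids overwriting earlier runs
        String timestamp = LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss"));
        return Path.of(System.getProperty("java.io.tmpdir"), benchmarkName + "-" + timestamp + ".txt");
    }
}
```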


Comment on lines +152 to +156
private static boolean resultsMatch(MaterializedResult a, MaterializedResult b)
{
    return a.getMaterializedRows().size() == b.getMaterializedRows().size()
            && new java.util.HashSet<>(a.getMaterializedRows())
                    .equals(new java.util.HashSet<>(b.getMaterializedRows()));
}

issue (bug_risk): The correctness check ignores row multiplicity; consider tightening it and adding tests to lock in the intended semantics.

resultsMatch compares only row count and HashSet equality, so it treats results that differ only in row multiplicities as equal. Since this underpins runWithVerification used by benchmarks, this can hide optimizer correctness issues.

Please either:

  • Tighten the comparison to reflect the intended contract (e.g., multiset or ordered equality), or
  • Clearly document that multiplicities are intentionally ignored and add unit tests that cover differing multiplicities and assert the expected outcome.

This will make the verification contract explicit and guard against accidental changes in future edits.
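A multiset-style comparison along the lines suggested could be sketched as below (an assumption, not the PR's code; plain `List<Object>` rows stand in for `MaterializedRow`): counting occurrences per row makes duplicate multiplicities participate in equality.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a multiset comparison: rows with differing duplicate counts compare unequal.
public class MultisetCompare
{
    public static boolean resultsMatch(List<List<Object>> a, List<List<Object>> b)
    {
        return counts(a).equals(counts(b));
    }

    private static Map<List<Object>, Long> counts(List<List<Object>> rows)
    {
        Map<List<Object>, Long> counts = new HashMap<>();
        for (List<Object> row : rows) {
            counts.merge(row, 1L, Long::sum);   // occurrence count per distinct row
        }
        return counts;
    }
}
```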

@kaikalur kaikalur force-pushed the hive-distributed-benchmark-runner branch from 7967758 to 2c6763e on March 16, 2026 at 17:50
@kaikalur kaikalur changed the title feat(benchmark): Add HiveDistributedBenchmarkRunner for optimizer benchmarks feat(testing): Add HiveDistributedBenchmarkRunner for optimizer benchmarks Mar 16, 2026
@feilong-liu feilong-liu merged commit d5988a1 into prestodb:master Mar 16, 2026
116 of 122 checks passed
@ethanyzhang ethanyzhang added the from:Meta PR from Meta label Mar 25, 2026

Labels

from:Meta PR from Meta
