
Improve TSDB codec benchmarks with full encoder and compression metrics#140299

Merged
salvatore-campagna merged 30 commits into elastic:main from
salvatore-campagna:benchmarks/tsdb-codec-compression-throughput-metrics
Jan 9, 2026
Conversation

@salvatore-campagna (Contributor) commented Jan 7, 2026

The existing TSDB codec benchmarks only measure the bit-packing step (DocValuesForUtil), which doesn't reflect actual codec performance. The real encoder applies multiple compression stages before bit-packing: delta encoding to reduce value magnitudes, minimum offset removal, and GCD computation to find common divisors. This PR updates the benchmarks to use TSDBDocValuesEncoder, which runs the full encoding pipeline and gives us realistic performance numbers.
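For orientation, here is a rough sketch of the round trip the updated benchmarks exercise. The `TSDBDocValuesEncoder` constructor, method signatures, and package location are approximated from the PR description rather than copied from the source, so treat this as illustrative only:

```java
import java.io.IOException;

import org.apache.lucene.store.ByteArrayDataInput;
import org.apache.lucene.store.ByteBuffersDataOutput;
import org.elasticsearch.index.codec.tsdb.TSDBDocValuesEncoder;

public class EncoderRoundTripSketch {
    public static void main(String[] args) throws IOException {
        int blockSize = 128;                       // assumed TSDB numeric block size
        TSDBDocValuesEncoder encoder = new TSDBDocValuesEncoder(blockSize); // constructor assumed

        long[] block = new long[blockSize];        // timestamp-like values: offset + regular stride
        for (int i = 0; i < blockSize; i++) {
            block[i] = 1_700_000_000_000L + i * 1000L;
        }

        // Encode: delta encoding, min-offset removal, GCD division, then bit packing.
        ByteBuffersDataOutput out = new ByteBuffersDataOutput();
        encoder.encode(block, out);                // note: mutates `block` in place

        long encodedBytes = out.size();            // the size the compression metrics record
        System.out.println("bytes/value = " + (double) encodedBytes / blockSize);

        // Decode: the path the decode benchmarks measure.
        long[] decoded = new long[blockSize];
        encoder.decode(decoded, new ByteArrayDataInput(out.toArrayCopy()));
    }
}
```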

These benchmarks are designed to capture compression/throughput tradeoffs when evaluating new encoding techniques. By measuring both compression efficiency and encoding speed, we can make informed decisions about whether a new compression algorithm is worth the performance cost. A technique that achieves 20% better compression but runs 50% slower may not be worthwhile for high-throughput indexing workloads. Moreover, measuring throughput at both encode and decode time is important because encoding throughput directly impacts indexing performance, while decoding throughput affects query latency.

With these improvements, we can run the benchmarks against the current encoder implementation to establish baseline results. Future encoder changes can then use these baselines as a reference point for comparison.

This PR introduces two auxiliary counter classes and refactors the benchmark base class:

  • CompressionMetrics: uses @AuxCounters(Type.EVENTS) to report compression statistics: encodedBytesPerValue, compressionRatio (raw size divided by encoded size), encodedBitsPerValue, and overheadRatio (actual encoded size over the theoretical minimum, where 1.0 means optimal encoding); see the sketch after this list
  • ThroughputMetrics: uses @AuxCounters(Type.OPERATIONS) to report encodedBytes and valuesProcessed, which JMH normalizes by elapsed time into bytes/s and values/s rates, enabling fair comparison across different block sizes
  • AbstractTSDBCodecBenchmark: renamed from AbstractDocValuesForUtilBenchmark and refactored around the Template Method pattern: subclasses implement run() for the core operation (encode or decode) and getOutput() for blackhole consumption, while benchmark(Blackhole) combines the two
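A hedged reconstruction of what the EVENTS-typed counter class might look like; the metric names come from the bullet above, everything else is an assumption:

```java
import org.openjdk.jmh.annotations.AuxCounters;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@AuxCounters(AuxCounters.Type.EVENTS)
public class CompressionMetrics {
    private long totalEncodedBytes;
    private long totalValues;
    private int nominalBits;

    // Called once per benchmark operation with values from the benchmark context.
    public void recordOperation(int blockSize, long encodedBytes, int nominalBits) {
        this.totalEncodedBytes += encodedBytes;
        this.totalValues += blockSize;
        this.nominalBits = nominalBits;
    }

    public double encodedBytesPerValue() {
        return (double) totalEncodedBytes / totalValues;
    }

    // Raw size (8 bytes per long) divided by encoded size.
    public double compressionRatio() {
        return (totalValues * Long.BYTES) / (double) totalEncodedBytes;
    }

    public double encodedBitsPerValue() {
        return encodedBytesPerValue() * Byte.SIZE;
    }

    // Actual bits per value over the nominal bit width; 1.0 means optimal.
    public double overheadRatio() {
        return encodedBitsPerValue() / nominalBits;
    }
}
```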

Encode benchmarks provide two methods: throughput() for measuring encoding speed with ThroughputMetrics, and compression() for measuring compression efficiency with CompressionMetrics. Decode benchmarks provide only throughput() since compression metrics are a property of the encoded data, not the decoding process.

Benchmarks are parameterized by bitsPerValue at encoding boundaries where ForUtil and DocValuesForUtil switch between different code paths. Testing at these boundaries ensures we catch performance regressions at all the important transitions where the encoder selects different strategies.
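In JMH terms this presumably reduces to a @Param declaration like the following sketch (the boundary values are listed in a commit message below; class and field names are assumptions):

```java
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class BoundaryParams {
    // Bit widths where ForUtil / DocValuesForUtil switch code paths.
    @Param({ "1", "4", "8", "9", "16", "17", "24", "25", "32", "33", "40", "48", "56", "57", "64" })
    public int bitsPerValue;
}
```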

Running the benchmarks

# Encode throughput
./gradlew :benchmarks:run --args='.*Encode.*IntegerBenchmark.throughput'

# Encode compression
./gradlew :benchmarks:run --args='.*Encode.*IntegerBenchmark.compression'

# Decode throughput
./gradlew :benchmarks:run --args='.*Decode.*IntegerBenchmark.throughput'

The previous benchmarks only measured the bit-packing step using
DocValuesForUtil. This change switches to TSDBDocValuesEncoder to
measure the full encoding pipeline including delta encoding, offset
removal, GCD compression, and bit packing.

Changes:
- Replace DocValuesForUtil with TSDBDocValuesEncoder
- Add MetricsConfig for setup configuration
- Add CompressionMetrics (@AuxCounters) for benchmark metrics reporting
- Switch to SampleTime mode for latency distribution
- Test at encoding boundaries (1,4,8,9,16,17,24,25,32,33,40,48,56,57,64)
salvatore-campagna and others added 9 commits January 7, 2026 18:43
Add comprehensive javadoc documentation to internal benchmark classes:

- CompressionMetrics: document all metrics fields, usage pattern, and
  JMH auxiliary counters integration
- MetricsConfig: document configuration parameters and injection pattern
- AbstractTSDBCodecBenchmark: document template method pattern and
  encoding pipeline stages
- EncodeBenchmark/DecodeBenchmark: add class-level documentation
MetricsConfig uses @State(Scope.Benchmark), meaning a single instance
is shared across all benchmark threads. While JMH ensures @Setup
completes before @Benchmark methods run, establishing the happens-before
relationship that makes those writes visible to reader threads requires
volatile.

Added volatile modifier to all fields with documentation explaining
the JMH lifecycle and why volatile is sufficient (no synchronization
needed since there is no write contention).
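A sketch of the pattern this commit describes (MetricsConfig is deleted again in a later commit on this PR, and the field names are assumptions):

```java
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)                 // one instance shared by all benchmark threads
public class MetricsConfig {
    // Written by the @Setup thread, read by every benchmark thread.
    // volatile (not synchronization) is enough: there is no write contention,
    // the setup writes only need to be visible to readers.
    public volatile int blockSize;
    public volatile long encodedSizePerBlock;
    public volatile int nominalBits;
}
```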

Extract the magic number 64 into a named constant EXTRA_METADATA_SIZE
with documentation explaining its purpose: buffer headroom for encoding
metadata written by TSDBDocValuesEncoder during delta, offset, and GCD
compression steps.

Also simplify and align javadoc across EncodeBenchmark and DecodeBenchmark
for consistency, removing redundant details while keeping essential
information about what each class measures.
…arity

Rename getEncodedBytes() to getEncodedSize() across the benchmark API
to better communicate that this method returns a size value (number of
bytes) rather than the bytes themselves.

Also rename getEncodedBytesPerBlock() to getEncodedSizePerBlock() in
MetricsConfig and CompressionMetrics for consistency.

We always pass a non-null reference, hence the null check is not needed.
JMH guarantees @TearDown(Level.Iteration) runs after benchmark operations
complete. Since recordOperation is called on every operation, config will
always be set before computeMetrics runs.
public long totalValuesProcessed;

private MetricsConfig config;
@salvatore-campagna (Contributor, Author) commented:

I don't like having this here...I will see if I can move it outside or pass values to the benchmark directly.

salvatore-campagna and others added 7 commits January 8, 2026 15:14

Remove the MetricsConfig intermediary class and simplify the benchmark
architecture by passing values directly to CompressionMetrics.recordOperation().

Changes:
- Delete MetricsConfig.java entirely
- Update CompressionMetrics.recordOperation() to accept blockSize, encodedBytes,
  and nominalBits parameters directly instead of a MetricsConfig object
- Simplify all 8 benchmark classes by removing MetricsConfig from method
  signatures and passing values directly from the benchmark context

This eliminates an unnecessary abstraction layer and makes the data flow
more explicit.

Change metric fields from public to private and expose them via public
getter methods for better encapsulation. JMH @AuxCounters supports both
public fields and public getters for metric discovery.
Remove throughput metrics that only report raw iteration totals. These
cannot be converted to per-operation metrics without breaking the
compression efficiency metrics. Keep only the meaningful compression
metrics: encodedBytesPerValue, compressionRatio, encodedBitsPerValue,
and overheadRatio.
Add ThroughputMetrics class using JMH @AuxCounters with Type.OPERATIONS
to track bytes/s and values/s throughput rates.

Update all TSDB codec benchmarks to use Mode.Throughput for compatibility
with both metric types.

Encode benchmarks provide two methods:
- throughput() reports encodedBytes and valuesProcessed rates
- compression() reports compressionRatio, encodedBitsPerValue, etc.

Decode benchmarks provide only throughput() since compression metrics
are a property of the encoded data, not the decoding process.

This allows running benchmarks selectively:
- ./gradlew :benchmarks:jmh -Pjmh.includes='.*Encode.*throughput'
- ./gradlew :benchmarks:jmh -Pjmh.includes='.*Encode.*compression'
- ./gradlew :benchmarks:jmh -Pjmh.includes='.*Decode.*throughput'
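For reference, a minimal sketch of an OPERATIONS-typed counter class as described above; with Type.OPERATIONS, JMH divides each counter by the elapsed time of the iteration, which is what turns raw totals into bytes/s and values/s (names from the commit message, internals assumed):

```java
import org.openjdk.jmh.annotations.AuxCounters;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@AuxCounters(AuxCounters.Type.OPERATIONS)
public class ThroughputMetrics {
    public long encodedBytes;       // reported as bytes/s
    public long valuesProcessed;    // reported as values/s

    public void record(long bytes, int values) {
        encodedBytes += bytes;
        valuesProcessed += values;
    }
}
```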
@salvatore-campagna marked this pull request as ready for review January 8, 2026 17:45
@elasticsearchmachine added the needs:triage label Jan 8, 2026
@salvatore-campagna added the :StorageEngine/TSDB label and removed the needs:triage label Jan 8, 2026
@elasticsearchmachine (Collaborator) commented:
Pinging @elastic/es-storage-engine (Team:StorageEngine)

@salvatore-campagna added the >test label and removed the Team:StorageEngine label Jan 8, 2026
@martijnvg (Member) left a comment:
Thanks Salvatore 👍

I do wonder whether we should also benchmark encodeOrdinals and decodeOrdinals? Maybe in a follow up?

@salvatore-campagna (Contributor, Author) commented Jan 9, 2026:

> Thanks Salvatore 👍
>
> I do wonder whether we should also benchmark encodeOrdinals and decodeOrdinals? Maybe in a follow up?

Thanks @martijnvg. Yes, I am planning to benchmark those too; I am splitting the work into separate PRs to keep the scope small and make reviewing easier.

salvatore-campagna and others added 9 commits January 9, 2026 11:49
The TSDB encoder mutates the input array in-place during encoding
(subtracting min offset, dividing by GCD, computing deltas). This
caused incorrect compression metrics because after the first encoding,
subsequent operations encoded already-zeroed data.

Changes:
- Store original input array and restore via System.arraycopy
- Make EncodeBenchmark and DecodeBenchmark final classes since they
  use composition and are not designed for inheritance

Add method-level @Warmup(iterations = 0) and @Measurement(iterations = 1)
to compression benchmarks. Compression metrics are deterministic: the
same input data always produces the same encoded size, unlike throughput
measurements, which vary due to JIT compilation and CPU state.
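A plausible shape for such a method, assuming the run()/getEncodedSize()/getOutput() hooks named earlier in this PR (a fragment, not a verbatim excerpt):

```java
// Method-level annotations override the class-level warmup/measurement
// settings, so only this deterministic benchmark skips warmup.
@Benchmark
@Warmup(iterations = 0)         // output never changes, warmup adds nothing
@Measurement(iterations = 1)    // a single iteration yields exact encoded sizes
public void compression(CompressionMetrics metrics, Blackhole bh) throws IOException {
    run();                                                     // encode one block
    metrics.recordOperation(blockSize, getEncodedSize(), bitsPerValue);
    bh.consume(getOutput());
}
```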
Change benchmark setup from Level.Iteration to Level.Trial since the input
data is deterministic (fixed seed) and does not need to be regenerated before
each iteration. This reduces setup overhead while maintaining correct behavior.

The setupInvocation() method continues to restore the input array via
System.arraycopy before each benchmark invocation.
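Putting the two commits together, the setup presumably looks roughly like this (generateRandomBlock and the field names are hypothetical):

```java
@Setup(Level.Trial)                 // once per trial: data is seeded, no need to regenerate
public void setupTrial() {
    original = generateRandomBlock(blockSize, bitsPerValue);   // fixed seed
    input = new long[original.length];
}

@Setup(Level.Invocation)            // before every call: undo in-place mutations
public void setupInvocation() {
    System.arraycopy(original, 0, input, 0, original.length);
}
```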
Java masks long shift distances to six bits, so 1L << 64 wraps to 1
instead of producing a 64-bit range. This made bitsPerValue=64
benchmarks generate only zeros, skewing results.

Use unbounded nextLong() for 64-bit values to get proper full-range
random numbers.
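The wraparound is easy to demonstrate; a hedged sketch of the fix (helper name hypothetical; the bounded nextLong(long) is the JDK 17+ RandomGenerator API):

```java
import java.util.Random;

static long randomValueFor(int bitsPerValue, Random random) {
    // 1L << 64 == 1L << (64 & 63) == 1L, so a bound of 1L << 64 collapses to 1
    // and nextLong(bound) would only ever return 0.
    return bitsPerValue == 64
        ? random.nextLong()                      // unbounded: full 64-bit range
        : random.nextLong(1L << bitsPerValue);   // bounded for narrower widths
}
```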
@salvatore-campagna merged commit 7e32f61 into elastic:main Jan 9, 2026
35 checks passed
szybia added a commit to szybia/elasticsearch that referenced this pull request Jan 9, 2026
jimczi pushed a commit to jimczi/elasticsearch that referenced this pull request Jan 12, 2026
Improve TSDB codec benchmarks with full encoder and compression metrics (elastic#140299)

## TSDB codec benchmarks: full encoding pipeline & improved metrics

### Core change
This update improves TSDB codec benchmarks by measuring the **entire encoding pipeline** instead of only the bit-packing step.

Benchmarks now use `TSDBDocValuesEncoder` rather than `DocValuesForUtil`, capturing all encoding stages:
- delta encoding  
- offset removal  
- GCD compression  
- bit packing  

This provides realistic compression ratios and performance measurements.

Key changes:
- Replace `DocValuesForUtil` with `TSDBDocValuesEncoder`
- Add detailed compression metrics via JMH `@AuxCounters`
- Use SampleTime mode for latency distribution
- Benchmark encoding boundary sizes (1–64 bits)
- Introduce a named constant for encoder metadata buffer headroom

### Correctness fixes
- Restore the input array before each encode to avoid encoder in-place mutations
- Fix 64-bit value generation to avoid overflow producing invalid benchmark data
- Ensure proper memory visibility for shared benchmark state

### Metrics & benchmark structure
- Separate **compression metrics** from **throughput metrics**
- Compression benchmarks are deterministic and run with a single measurement iteration
- Throughput benchmarks run in `Mode.Throughput` and report bytes/s and values/s
- Decode benchmarks report throughput only (compression is an encode-time property)

### Refactoring & cleanup
- Simplify benchmark APIs and remove unnecessary abstractions
- Improve naming clarity (`getEncodedSize` instead of `getEncodedBytes`)
- Remove redundant null checks and unused metrics
- Improve encapsulation and consistency across benchmark classes
- Add comprehensive Javadoc documenting benchmark design and metrics

### Miscellaneous
- Reduce setup overhead by switching to trial-level initialization
- Minor refactors, cleanups, and CI formatting updates

Labels

:StorageEngine/TSDB, Team:StorageEngine, >test, v9.4.0
