fix(prune): count unique addresses instead of changesets in RocksDB pruner limiter#21694
Closed
Conversation
Adds a test that demonstrates the RocksDB pruning limiter issue: the limiter increments per changeset scanned from static files, not per RocksDB shard actually modified.

The test creates a scenario with:
- 10 blocks × 1000 changesets = 10,000 total changesets
- Only 10 unique addresses (high repetition)
- Limit set to 100

Expected behavior (after fix):
- 10 unique addresses = 10 RocksDB shard operations
- Should process all 10 blocks, since 10 < 100 limit

Current behavior (bug):
- Stops after ~100 changesets scanned (< 1 block)
- Because the limiter counts input scans, not output work

This test documents the current buggy behavior. After fixing the limiter to count unique addresses instead of changesets, the test assertion should be updated to verify all blocks are processed.

Amp-Thread-ID: https://ampcode.com/threads/T-019c1ca0-1067-7584-a10b-649ca5b1c5cb
Co-authored-by: Amp <amp@ampcode.com>
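The divergence the test above relies on can be sketched in a few lines (illustrative only, not reth code): the number of changesets scanned and the number of unique addresses differ by three orders of magnitude, so a limiter that charges per scan exhausts its budget long before one that charges per unique address.

```rust
use std::collections::HashSet;

// Model the test scenario: 10 blocks × 1000 changesets per block,
// but every changeset touches one of only 10 distinct addresses.
// Returns (total changesets scanned, unique addresses seen).
fn scenario() -> (usize, usize) {
    let mut total = 0usize;
    // A small integer id stands in for a real 20-byte address here.
    let mut unique: HashSet<u8> = HashSet::new();
    for _block in 0..10 {
        for i in 0..1000u32 {
            total += 1;
            unique.insert((i % 10) as u8);
        }
    }
    (total, unique.len())
}

fn main() {
    let (scanned, unique) = scenario();
    // The buggy limiter charges `scanned` (10,000) against a limit of 100
    // and stops inside the first block; the fixed limiter charges
    // `unique` (10), which stays well under the limit.
    println!("scanned={scanned} unique={unique}");
}
```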
…runer limiter

Fixes RETH-292: The RocksDB pruning path was incrementing the limiter for every changeset scanned from static files, not for each unique address (account_history) or (address, slot) pair (storage_history) that corresponds to actual RocksDB shard work. This caused the pruner to stop prematurely when there was high changeset repetition (e.g., popular contracts touched many times per block).

Changes:
- account_history.rs: Use HashMap::entry() to only increment the limiter on the first occurrence of each address
- storage_history.rs: Use HashMap::entry() to only increment the limiter on the first occurrence of each (address, slot) pair
- Update the regression test to verify the fix works

Before: 10k changesets with 10 unique addresses, limit=100 → stops at <1 block
After: Same scenario → processes all 10 blocks (10 < 100 limit)

Amp-Thread-ID: https://ampcode.com/threads/T-019c1ca0-1067-7584-a10b-649ca5b1c5cb
Co-authored-by: Amp <amp@ampcode.com>
Closing: the underlying issue is mitigated by PR #19141, which sets […]. The real concern (throughput for large backlogs) is tracked in RETH-296.
Summary
Fixes RETH-292: The RocksDB pruning path was incrementing the limiter for every changeset scanned from static files, not for each unique address that corresponds to actual RocksDB shard work.
Problem
The pruner limiter counted changesets scanned instead of RocksDB operations performed. With ~7k changesets/block but far fewer unique addresses, the limiter stopped prematurely.
Solution
Use HashMap::entry() to only increment the limiter on the first occurrence of each unique key.
Test Results
Changes
- account_history.rs: Increment limiter only on new unique addresses
- storage_history.rs: Increment limiter only on new unique (address, slot) pairs
- All prune tests pass
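The entry()-based pattern described above can be sketched as follows. This is a hedged, self-contained illustration: `Limiter` and `collect_unique` are hypothetical stand-ins, and the real limiter and key types in reth differ.

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// Hypothetical stand-in for the pruner's deletion limiter.
struct Limiter {
    count: usize,
    limit: usize,
}

impl Limiter {
    fn is_limit_reached(&self) -> bool {
        self.count >= self.limit
    }
    fn increment(&mut self) {
        self.count += 1;
    }
}

/// Group changesets by address, charging the limiter once per unique
/// address (one RocksDB shard operation) rather than once per changeset.
fn collect_unique(
    changesets: &[(u64, [u8; 20])],
    limiter: &mut Limiter,
) -> HashMap<[u8; 20], Vec<u64>> {
    let mut by_address: HashMap<[u8; 20], Vec<u64>> = HashMap::new();
    for &(block, address) in changesets {
        match by_address.entry(address) {
            // First occurrence of this address: new shard work,
            // so check and charge the limiter.
            Entry::Vacant(e) => {
                if limiter.is_limit_reached() {
                    break;
                }
                limiter.increment();
                e.insert(vec![block]);
            }
            // Repeat occurrence: no new shard work, no limiter charge.
            Entry::Occupied(mut e) => e.get_mut().push(block),
        }
    }
    by_address
}

fn main() {
    // The PR's scenario: 10 blocks × 1000 changesets, 10 unique
    // addresses, limit 100 → only 10 limiter charges.
    let changesets: Vec<(u64, [u8; 20])> = (0..10u64)
        .flat_map(|b| (0..1000u32).map(move |i| (b, [(i % 10) as u8; 20])))
        .collect();
    let mut limiter = Limiter { count: 0, limit: 100 };
    let grouped = collect_unique(&changesets, &mut limiter);
    println!("unique={} charged={}", grouped.len(), limiter.count);
}
```

The key design point is that `Entry::Vacant` marks exactly the moment new output work appears, so the limit check and increment both live there, while repeat occurrences merely append to the existing group.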