
Perf/simple flat#9854

Open
asdacap wants to merge 15 commits into master from perf/simple-flat

Conversation

@asdacap
Contributor

@asdacap asdacap commented Dec 1, 2025

Note

  • Some optimizations were removed, either to make this PR clearer or because they do not improve performance significantly. These are:
    • Warming up slots from the prewarmer thread. This will be added in a separate PR, as it modifies the world state a bit.
    • HintSet. Very hard to tell whether it helps.
    • A custom bloom filter for addresses and slots. Although faster, it is redundant with the RocksDB bloom filter and only improves performance by a tiny amount compared to the complexity it brings.

Design

global_state

  • The global state orchestration lives in FlatDbManager.
    • At the back of it is IPersistence, which represents the persistence layer.
      • It can create a write batch and a reader, the latter relying on a RocksDB snapshot.
    • On top of it there are two layers of Snapshot, backed by SnapshotContent.
      • A Snapshot is just the written accounts, slots, and trie nodes for a range of blocks.
      • The first layer consists of exactly one-block-sized snapshots.
      • The second layer contains compactions of multiple blocks' worth of snapshots. These are called compacted snapshots, and their default size is either 32 blocks (16 in the diagram) for full compaction or 4 blocks for "mid" compaction.
    • In front of it is a TrieNodeCache, which is a special TrieNode... cache. It stores nodes in a sharded hashtable array, addressed by both path and hash. It sits in front because it is updated on the head instead of on persist, but it is reorg safe as the trie node's keccak still needs to match.
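The cache described above can be illustrated with a small sketch. This is not the PR's actual TrieNodeCache — just a hypothetical Python model of a sharded, path-addressed cache whose entries stay reorg safe because a node is only returned when its hash still matches (sha3_256 stands in for keccak-256, which Python's hashlib does not ship):

```python
import hashlib

class ShardedNodeCache:
    """Illustrative sketch of a sharded, hash-verified trie node cache."""

    def __init__(self, shard_count=256):
        self.shards = [dict() for _ in range(shard_count)]

    def _shard(self, path: bytes):
        # First path byte picks the shard, spreading contention across the array.
        return self.shards[path[0] % len(self.shards)] if path else self.shards[0]

    def put(self, path: bytes, node_rlp: bytes):
        # Keyed by path; the stored value carries its own hash so stale entries
        # left behind by a reorged head are detected on read.
        self._shard(path)[path] = (hashlib.sha3_256(node_rlp).digest(), node_rlp)

    def get(self, path: bytes, expected_hash: bytes):
        entry = self._shard(path).get(path)
        if entry is None:
            return None
        stored_hash, node_rlp = entry
        # Reorg safety: only return the node if the hash still matches.
        return node_rlp if stored_hash == expected_hash else None
```

A stale entry from an abandoned branch simply misses the hash check and behaves like a cache miss, which is why updating the cache at the head rather than on persist is safe.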

readonly_snapshot_bundle

  • The ReadOnlySnapshotBundle is a bundle of snapshots.
    • It is the reader used by StateReader and the various world states. It is read-only, refcounted, and shared.
    • The compacted second layer allows the ReadOnlySnapshotBundle to traverse a small-ish number of layers — about 10 instead of 128 on mainnet. You can make it lower at the expense of about 50% of a full CPU core and a good amount of memory, but it does not appear to help much.
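The layer count above can be sanity-checked with a bit of arithmetic. The sketch below is illustrative only, assuming history is greedily packed into full (32-block) tiers, then mid (4-block) tiers, then single-block snapshots:

```python
def layer_count(history_blocks, full=32, mid=4):
    """Estimate how many snapshot layers a reader must traverse, assuming
    greedy compaction into full tiers, then mid tiers, then single blocks."""
    full_layers, rest = divmod(history_blocks, full)
    mid_layers, single_layers = divmod(rest, mid)
    return full_layers + mid_layers + single_layers
```

With these defaults the read path for 128 blocks of history ranges from 4 layers (right after a full compaction) up to 13 for the worst-case tail (e.g. 127 blocks), which is consistent with the "about 10" figure above.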

scope_with_bundle(1)

  • A worldstate scope consist of:
    • SnapshotBundle which consists of:
      • TransientResource which are some pooled large variables.
      • SnapshotContent which are mutable that will be put into a Snapshot on commit.
      • ReadOnlySnapshotBundle which are collection of snapshot and persistence's reader.
    • TrieNodeWarmer which is a dedicated special warmer just for node.
    • Read goes through ReadOnlySnapshotBundle.
    • A special HintGet method is added to the world state scope interface which allow it to trigger the trie node warmer on what key it should warm up.
    • (Not in this PR) Additionally, a WarmUpOutOfScope is added to the existing scope provider interface which allow prewarmer to send key to trie node warmer from outside the scope (prewarmer kinda runs outside the main block processing).
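The hint-driven warming above can be sketched as a small producer/consumer. This is a hypothetical Python model, not the PR's API: execution hints keys it is about to touch, and a background thread pre-loads them so the later real read is a cache hit:

```python
import queue
import threading

class TrieNodeWarmer:
    """Illustrative HintGet-style warmer; names and shape are assumptions."""

    def __init__(self, load_node):
        self._load_node = load_node        # e.g. a read through the snapshot bundle
        self._queue = queue.Queue()
        self.warmed = {}
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def hint(self, key: bytes):
        # Called from the HintGet path: fire-and-forget, never blocks execution.
        self._queue.put(key)

    def stop(self):
        # Sentinel drains the queue in order, then the worker exits.
        self._queue.put(None)
        self._thread.join()

    def _run(self):
        while (key := self._queue.get()) is not None:
            # Pre-load the node so the subsequent real read hits the cache.
            self.warmed[key] = self._load_node(key)
```

The important property is that `hint` costs the caller almost nothing; all the I/O latency is absorbed by the background thread.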

Persistence

  • Technically you can put the persistence on RocksDB, LMDB, or Paprika, but so far I can't find a faster configuration than plain RocksDB.
  • There are three "layouts" right now.

Flat

  • The Flat layout is the default: a RocksDB database with 7 columns.
    • Metadata
    • Account
      • 14GB uncompressed.
      • Stores the account encoded with slim RLP encoding (the same as snap sync). This makes the database very small, so compression might as well be disabled.
      • Key is <20-byte-hashed-address-prefix> (first 20 bytes of the hashed address).
    • Storage
      • 62GB compressed.
      • Stores the slot value with leading zeros removed.
      • Key is <4-byte-hashed-address-prefix><32-byte-hashed-slotindex><16-byte-hashed-address-suffix> (52 bytes total). The address hash is split to help RocksDB's comparator skip bytes during comparison and help with index key shortening.
    • StateTopNodes
      • 648M compressed.
      • Stores state trie nodes where path nibble count is 5 or less.
      • Separate from StateNodes to reduce compaction and improve block cache hit rate.
      • Key is <3-byte-path> where the last 4 bits encode the path length.
    • StateNodes
      • 34GB compressed.
      • Stores state trie nodes where path nibble count is 6-15.
      • Key is <8-byte-path> where the last 4 bits encode the path length.
    • StorageNodes
      • 143GB compressed.
      • Stores storage trie nodes where path nibble count is 0-15.
      • Key is <4-byte-hashed-address-prefix><8-byte-path><16-byte-hashed-address-suffix> (28 bytes total).
    • FallbackNodes
      • Stores both state and storage trie nodes where path nibble count is 16+.
      • State nodes key: <0x00><32-byte-path><1-byte-length> (34 bytes total).
      • Storage nodes key: <0x01><4-byte-address-prefix><32-byte-path><1-byte-length><16-byte-address-suffix> (54 bytes total).
  • It assumes reasonably high memory: at least 32GB on mainnet.
    • RocksDB's partitioned index is disabled, meaning a large in-memory index.
    • A 1 GB block cache, configurable via --FlatDb.BlockCacheSizeBudget, is set on the account and storage db at a 30/70 split.
    • For the flat columns, optimize_filters_for_hits is set to false to keep last-level blooms, which means even more memory usage.
    • A dedicated 1GB block cache is given to the flat columns (might be removed).
    • Index + bloom take up around 2.3GB on mainnet. This is proportional to the database size.
    • Total database size is 260 GB.
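The key layouts above can be sketched in a few lines. This is an illustrative model, not the PR's encoder: sha3_256 stands in for keccak-256 (hashlib has no keccak), and the choice of which 16 bytes of the address hash form the suffix is an assumption.

```python
import hashlib

def account_key(address: bytes) -> bytes:
    # <20-byte-hashed-address-prefix>: first 20 bytes of the hashed address.
    return hashlib.sha3_256(address).digest()[:20]

def storage_key(address: bytes, slot_index: bytes) -> bytes:
    # <4-byte-prefix><32-byte-hashed-slot><16-byte-suffix> = 52 bytes total.
    # Splitting the address hash puts the slot hash in the middle, which lets
    # RocksDB's comparator skip shared prefix bytes and shorten index keys.
    addr_hash = hashlib.sha3_256(address).digest()
    slot_hash = hashlib.sha3_256(slot_index).digest()
    # Assumption: the suffix is the trailing 16 bytes of the address hash.
    return addr_hash[:4] + slot_hash + addr_hash[16:32]
```

Because all slots of one account share the same 4-byte prefix, an account's storage stays clustered while the expensive byte-by-byte comparison happens on the slot hash, which differs early.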

PreimageFlat

  • Exactly like Flat, but account and storage keys use the raw address/slot instead of their hashes.
  • It is the fastest configuration, but it cannot snap sync.

FlatInTrie

  • The FlatInTrie layout caters to lower-spec machines or very large networks by relying more on database compression and using a database configuration that consumes less memory.
  • To do that, the account and storage data is embedded within StateNodes and StorageNodes (reusing the same columns as trie nodes). The key encoding is the same.
  • There are 5 columns in total.
    • Metadata
    • StateTopNodes
    • StateNodes
    • StorageNodes
    • FallbackNodes
  • RocksDB's partitioned index and optimize_filters_for_hits are both turned on. This significantly reduces memory at the expense of latency.
  • The same 1GB block cache remains, but is applied to the state nodes and storage nodes columns.
  • This results in better compression and, since the partitioned index is enabled for this layout, lower memory usage.
  • It consumes around 16MB of fixed index size (a partitioned index means most of the index is lazily loaded as part of the block cache). optimize_filters_for_hits means the last-level SSTs have no bloom filter, which slows down null-slot checks.
  • The freshly imported database size is 193GB. For reference, the state db size at this point (a backup from a few months back) is 186GB.
  • However, this comes at the cost of lower performance.

Performance

Screenshot_2026-01-28_12-42-59
  • The graph shows Flat, FlatInTrie, and HalfPath. Note: HalfPath has a bug here; it does not prune reliably.
  • The test runs with a 48GB RAM limit via systemd-run.
  • In general, it is about 20% higher MGas/sec than HalfPath.

Types of changes

  • New feature (a non-breaking change that adds functionality)
  • Optimization

Testing

Requires testing

  • Yes

If yes, did you write tests?

  • Yes

@benaadams
Member

Unexpected old snapshot created. Lease count 7 at    at System.Environment.get_StackTrace()
   at Nethermind.State.Flat.Persistence.RefCountingPersistenceReader..ctor(IPersistenceReader innerReader, ILogger logger) in D:\GitHub\nethermind\src\Nethermind\Nethermind.State\Flat\Persistence\RefCountingPersistenceReader.cs:line 21
   at Nethermind.State.Flat.FlatDiffRepository.LeaseReader() in D:\GitHub\nethermind\src\Nethermind\Nethermind.State\Flat\FlatDiffRepository.cs:line 116
   at Nethermind.State.Flat.FlatDiffRepository.GatherCache(StateId baseBlock, SnapshotBundleUsage usage, Nullable`1 earliestExclusive) in D:\GitHub\nethermind\src\Nethermind\Nethermind.State\Flat\FlatDiffRepository.cs:line 412
   at Nethermind.State.WorldState.BeginScope(BlockHeader baseBlock)
   at Nethermind.Consensus.Processing.BranchProcessor.Process(BlockHeader baseBlock, IReadOnlyList`1 suggestedBlocks, ProcessingOptions options, IBlockTracer blockTracer, CancellationToken token) in D:\GitHub\nethermind\src\Nethermind\Nethermind.Consensus\Processing\BranchProcessor.cs:line 53
   at Nethermind.Consensus.Processing.BlockchainProcessor.ProcessBranch(ProcessingBranch& processingBranch, ProcessingOptions options, IBlockTracer tracer, CancellationToken token, String& error)
   at Nethermind.Consensus.Processing.BlockchainProcessor.Process(Block suggestedBlock, ProcessingOptions options, IBlockTracer tracer, CancellationToken token, String& error) in D:\GitHub\nethermind\src\Nethermind\Nethermind.Consensus\Processing\BlockchainProcessor.cs:line 462
   at Nethermind.Consensus.Processing.BlockchainProcessor.ProcessBlocks() in D:\GitHub\nethermind\src\Nethermind\Nethermind.Consensus\Processing\BlockchainProcessor.cs:line 353
   at Nethermind.Consensus.Processing.BlockchainProcessor.RunProcessingLoop() in D:\GitHub\nethermind\src\Nethermind\Nethermind.Consensus\Processing\BlockchainProcessor.cs:line 332
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1.AsyncStateMachineBox`1.MoveNext()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
   at System.Threading.Thread.StartCallback()

Seems to continue fine; but not happy if restarted

@benaadams
Member

Maybe Pruning, unless it doesn't output a complete message?

15 Dec 23:33:49 | Pruning trie cache from 548027472 to 478784872
15 Dec 23:33:49 | Received ForkChoice: 24021312 (0x8cca6c...bbcf1c), Safe: 24021271 (0x80e42d...ac2c55), Finalized: 24021239 (0x773b0e...10b446)
15 Dec 23:33:49 | Synced Chain Head to 24021312 (0x8cca6c...bbcf1c)
15 Dec 23:33:49 | Received New Block:  24021313 (0x7f0082...9a99c3)      | limit    60,000,000    | Extra Data: Titan (titanbuilder.xyz)
15 Dec 23:33:49 | Received ForkChoice: 24021313 (0x7f0082...9a99c3), Safe: 24021271 (0x80e42d...ac2c55), Finalized: 24021239 (0x773b0e...10b446)
15 Dec 23:33:49 | Synced Chain Head to 24021313 (0x7f0082...9a99c3)
15 Dec 23:33:49 | Received New Block:  24021314 (0xe43566...0881b7)      | limit    60,000,000    | Extra Data: BuilderNet (Beaver)
15 Dec 23:33:49 | Received ForkChoice: 24021314 (0xe43566...0881b7), Safe: 24021271 (0x80e42d...ac2c55), Finalized: 24021239 (0x773b0e...10b446)
15 Dec 23:33:49 | Synced Chain Head to 24021314 (0xe43566...0881b7)
15 Dec 23:33:59 | Received New Block:  24021315 (0xd61c3c...72c060)      | limit    60,000,000    | Extra Data: ؃ geth go1.25.4 linux
15 Dec 23:33:59 | Processed      24021311... 24021315   |      157.3 ms  | slot         10,653 ms |⛽ Gas gwei: 0.042 .. 0.267 (0.866) .. 5.000
15 Dec 23:33:59 |  Blocks  x5            125.96 MGas    |      863   txs | calls     19,362 (167) | sload  60,973 | sstore 17,180 | create   6( -1)
15 Dec 23:33:59 |  Block throughput      800.79 MGas/s  |    5,486.5 tps | blobs  24 | exec code cache  39,393 | new      3 | ops   9,990,911
15 Dec 23:33:59 | Received ForkChoice: 24021315 (0xd61c3c...72c060), Safe: 24021271 (0x80e42d...ac2c55), Finalized: 24021239 (0x773b0e...10b446)
15 Dec 23:33:59 | Synced Chain Head to 24021315 (0xd61c3c...72c060)
15 Dec 23:34:13 | Received New Block:  24021316 (0x172e84...537459)      | limit    60,000,000    | Extra Data: BuilderNet (Beaver)
15 Dec 23:34:13 | Processed            24021316         |       76.1 ms  | slot         13,819 ms |⛽ Gas gwei: 0.036 .. 0.043 (0.801) .. 45.000
15 Dec 23:34:13 |  Block mb 0.0158 ETH    59.97 MGas    |      255   txs | calls     12,977 ( 76) | sload  40,991 | sstore 10,158 | create   0
15 Dec 23:34:13 |  Block throughput      788.31 MGas/s  |    3,352.2 tps | blobs   3 | exec code cache  26,115 | new      0 | ops   6,026,633
15 Dec 23:34:13 | Received ForkChoice: 24021316 (0x172e84...537459), Safe: 24021271 (0x80e42d...ac2c55), Finalized: 24021239 (0x773b0e...10b446)
15 Dec 23:34:13 | Synced Chain Head to 24021316 (0x172e84...537459)
15 Dec 23:34:23 | Received New Block:  24021317 (0x7127c9...4ae57c)      | limit    60,000,000    | Extra Data: Titan (titanbuilder.xyz)
15 Dec 23:34:23 | Processed            24021317         |       17.8 ms  | slot         10,689 ms |⛽ Gas gwei: 0.040 .. 0.053 (0.510) .. 45.000
15 Dec 23:34:23 |  Block mb 0.0054 ETH    22.74 MGas    |      195   txs | calls        999 ( 87) | sload   3,323 | sstore    964 | create   4
15 Dec 23:34:23 |  Block throughput     1274.60 MGas/s🔥|   10,928.0 tps | blobs   0 | exec code cache   2,179 | new      5 | ops     759,268
15 Dec 23:34:23 | Received ForkChoice: 24021317 (0x7127c9...4ae57c), Safe: 24021271 (0x80e42d...ac2c55), Finalized: 24021239 (0x773b0e...10b446)
15 Dec 23:34:23 | Synced Chain Head to 24021317 (0x7127c9...4ae57c)
15 Dec 23:34:29 | Peers: 16 | node diversity :  Geth (62 %), Nethermind (38 %)
15 Dec 23:34:36 | Received New Block:  24021318 (0x6cadf3...bfeed6)      | limit    59,941,408 👇 | Extra Data: Titan (titanbuilder.xyz)
15 Dec 23:34:36 | Processed            24021318         |       81.5 ms  | slot         12,405 ms |⛽ Gas gwei: 0.039 .. 0.042 (0.592) .. 3.817
15 Dec 23:34:36 |  Block mb 0.0065 ETH    57.65 MGas    |      199   txs | calls     14,684 ( 23) | sload  47,384 | sstore 13,219 | create   2
15 Dec 23:34:36 |  Block throughput      706.96 MGas/s  |    2,440.3 tps | blobs   6 | exec code cache  29,555 | new      4 | ops   6,655,344
15 Dec 23:34:36 | Received ForkChoice: 24021318 (0x6cadf3...bfeed6), Safe: 24021271 (0x80e42d...ac2c55), Finalized: 24021239 (0x773b0e...10b446)
15 Dec 23:34:36 | Synced Chain Head to 24021318 (0x6cadf3...bfeed6)
15 Dec 23:34:47 | Received New Block:  24021319 (0xadb078...74d1e2)      | limit    59,999,943 👆 | Extra Data: BuilderNet (Flashbots)
15 Dec 23:34:47 | Processed            24021319         |       53.3 ms  | slot         11,783 ms |⛽ Gas gwei: 0.043 .. 0.043 (0.481) .. 12.000
15 Dec 23:34:47 |  Block mb 0.0106 ETH    55.47 MGas    |      257   txs | calls      4,696 (123) | sload   9,860 | sstore  2,437 | create   1
15 Dec 23:34:47 |  Block throughput     1040.13 MGas/s🔥|    4,818.9 tps | blobs   2 | exec code cache   8,225 | new      2 | ops   4,821,305
15 Dec 23:34:48 | Received ForkChoice: 24021319 (0xadb078...74d1e2), Safe: 24021271 (0x80e42d...ac2c55), Finalized: 24021239 (0x773b0e...10b446)
15 Dec 23:34:48 | Synced Chain Head to 24021319 (0xadb078...74d1e2)
15 Dec 23:34:49 | Unexpected old snapshot created. Lease count 7 at    at System.Environment.get_StackTrace()
   at Nethermind.State.Flat.Persistence.RefCountingPersistenceReader..ctor(IPersistenceReader innerReader, ILogger logger) in D:\GitHub\nethermind\src\Nethermind\Nethermind.State\Flat\Persistence\RefCountingPersistenceReader.cs:line 21
   at Nethermind.State.Flat.FlatDiffRepository.LeaseReader() in D:\GitHub\nethermind\src\Nethermind\Nethermind.State\Flat\FlatDiffRepository.cs:line 116
   at Nethermind.State.Flat.FlatDiffRepository.GatherCache(StateId baseBlock, SnapshotBundleUsage usage, Nullable`1 earliestExclusive) in D:\GitHub\nethermind\src\Nethermind\Nethermind.State\Flat\FlatDiffRepository.cs:line 412
   at Nethermind.State.WorldState.BeginScope(BlockHeader baseBlock)
   at Nethermind.Consensus.Processing.BranchProcessor.Process(BlockHeader baseBlock, IReadOnlyList`1 suggestedBlocks, ProcessingOptions options, IBlockTracer blockTracer, CancellationToken token) in D:\GitHub\nethermind\src\Nethermind\Nethermind.Consensus\Processing\BranchProcessor.cs:line 53
   at Nethermind.Consensus.Processing.BlockchainProcessor.ProcessBranch(ProcessingBranch& processingBranch, ProcessingOptions options, IBlockTracer tracer, CancellationToken token, String& error)
   at Nethermind.Consensus.Processing.BlockchainProcessor.Process(Block suggestedBlock, ProcessingOptions options, IBlockTracer tracer, CancellationToken token, String& error) in D:\GitHub\nethermind\src\Nethermind\Nethermind.Consensus\Processing\BlockchainProcessor.cs:line 462
   at Nethermind.Consensus.Processing.BlockchainProcessor.ProcessBlocks() in D:\GitHub\nethermind\src\Nethermind\Nethermind.Consensus\Processing\BlockchainProcessor.cs:line 353
   at Nethermind.Consensus.Processing.BlockchainProcessor.RunProcessingLoop() in D:\GitHub\nethermind\src\Nethermind\Nethermind.Consensus\Processing\BlockchainProcessor.cs:line 332

@benaadams
Copy link
Member

benaadams commented Dec 15, 2025

Different source continuing

15 Dec 23:40:48 | Unexpected old snapshot created. Lease count 6 at    at System.Environment.get_StackTrace()
   at Nethermind.State.Flat.Persistence.RefCountingPersistenceReader..ctor(IPersistenceReader innerReader, ILogger logger) in D:\GitHub\nethermind\src\Nethermind\Nethermind.State\Flat\Persistence\RefCountingPersistenceReader.cs:line 21
   at Nethermind.State.Flat.FlatDiffRepository.LeaseReader() in D:\GitHub\nethermind\src\Nethermind\Nethermind.State\Flat\FlatDiffRepository.cs:line 116
   at Nethermind.State.Flat.FlatDiffRepository.GatherCache(StateId baseBlock, SnapshotBundleUsage usage, Nullable`1 earliestExclusive) in D:\GitHub\nethermind\src\Nethermind\Nethermind.State\Flat\FlatDiffRepository.cs:line 412
   at Nethermind.State.Flat.FlatStateReader.TryGetAccount(BlockHeader baseBlock, Address address, AccountStruct& account) in D:\GitHub\nethermind\src\Nethermind\Nethermind.State\Flat\FlatStateReader.cs:line 26
   at Nethermind.TxPool.TxPool.AccountCache.TryGetAccount(Address address, AccountStruct& account) in D:\GitHub\nethermind\src\Nethermind\Nethermind.TxPool\TxPool.cs:line 1004
   at Nethermind.TxPool.Filters.BalanceZeroFilter.Accept(Transaction tx, TxFilteringState& state, TxHandlingOptions handlingOptions) in D:\GitHub\nethermind\src\Nethermind\Nethermind.TxPool\Filters\BalanceZeroFilter.cs:line 26
   at Nethermind.TxPool.TxPool.FilterTransactions(Transaction tx, TxHandlingOptions handlingOptions, TxFilteringState& state) in D:\GitHub\nethermind\src\Nethermind\Nethermind.TxPool\TxPool.cs:line 599
   at Nethermind.TxPool.TxPool.SubmitTx(Transaction tx, TxHandlingOptions handlingOptions) in D:\GitHub\nethermind\src\Nethermind\Nethermind.TxPool\TxPool.cs:line 521
   at Nethermind.Network.P2P.Subprotocols.Eth.V62.Eth62ProtocolHandler.HandleSlow(ValueTuple`2 request, CancellationToken cancellationToken) in D:\GitHub\nethermind\src\Nethermind\Nethermind.Network\P2P\Subprotocols\Eth\V62\Eth62ProtocolHandler.cs:line 244
   at Nethermind.Network.P2P.Utils.BackgroundTaskSchedulerWrapper.BackgroundTaskFailureHandlerValueTask[TReq](ValueTuple`2 input, CancellationToken cancellationToken) in D:\GitHub\nethermind\src\Nethermind\Nethermind.Network\P2P\Utils\BackgroundTaskSchedulerWrapper.cs:line 51
   at Nethermind.Consensus.Scheduler.BackgroundTaskScheduler.Activity`1.Do(CancellationToken cancellationToken)
   at Nethermind.Consensus.Scheduler.BackgroundTaskScheduler.StartChannel() in D:\GitHub\nethermind\src\Nethermind\Nethermind.Consensus\Scheduler\BackgroundTaskScheduler.cs:line 142
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1.AsyncStateMachineBox`1.MoveNext()
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
   at Nethermind.Consensus.Scheduler.BackgroundTaskScheduler.BelowNormalPriorityTaskScheduler.ProcessBackgroundTasks(Object _) in D:\GitHub\nethermind\src\Nethermind\Nethermind.Consensus\Scheduler\BackgroundTaskScheduler.cs:line 290
   at System.Threading.Thread.StartCallback()

@asdacap
Contributor Author

asdacap commented Dec 15, 2025

Yeah, don't worry about that for now — that was for debugging before. The pruning is the trie cache pruning. It uses some shards of concurrent dictionaries and clears them shard by shard until the total memory is under budget.
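That shard-by-shard pruning strategy can be sketched minimally (illustrative names, assuming a fixed per-entry size estimate, not the actual implementation):

```python
def prune(shards, entry_size, budget):
    """Clear whole shards, one at a time, until estimated memory is under
    budget. Clearing a shard is one cheap operation, so pruning never has to
    stop the world for the entire cache."""
    total = sum(len(s) for s in shards) * entry_size
    for shard in shards:
        if total <= budget:
            break
        total -= len(shard) * entry_size
        shard.clear()
    return total
```

The trade-off is granularity: a whole shard is dropped even if clearing half of it would have reached the budget, in exchange for never iterating individual entries.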

Base automatically changed from feature/worldstate-backend to master December 23, 2025 23:29
@asdacap asdacap force-pushed the perf/simple-flat branch 3 times, most recently from e1ded6d to 0a87340 Compare January 16, 2026 01:14
@asdacap asdacap mentioned this pull request Jan 19, 2026
12 tasks
@asdacap asdacap force-pushed the perf/simple-flat branch 4 times, most recently from a0e4951 to 0deb695 Compare January 23, 2026 12:38
@asdacap asdacap force-pushed the perf/simple-flat branch 2 times, most recently from b1b7a35 to 5ff55a4 Compare January 27, 2026 00:28
@asdacap asdacap marked this pull request as ready for review January 27, 2026 12:58
@asdacap asdacap requested a review from Copilot January 27, 2026 13:27
Contributor

Copilot AI left a comment

Pull request overview

This PR introduces a comprehensive FlatDB storage architecture as an alternative to the existing pruning trie store. The implementation provides ~20% performance improvement over HalfPath with three different persistence layouts (Flat, FlatInTrie, PreimageFlat) to support different hardware configurations.

Changes:

  • Adds complete FlatDB implementation with snapshot-based state management, compaction, and trie node caching
  • Introduces new persistence abstraction layer supporting multiple storage layouts
  • Adds trie warming infrastructure and ring buffer concurrency primitives
  • Integrates FlatDB into DI container with configuration support

Reviewed changes

Copilot reviewed 118 out of 119 changed files in this pull request and generated 63 comments.

Show a summary per file
File | Description
--- | ---
Nethermind.State.Flat/* | New FlatDB core implementation including snapshots, persistence, scope providers, and trie integration
Nethermind.Trie/* | Extended trie functionality with warmup paths, leaf iteration, and child reference control
Nethermind.State/* | Modified state management to support pluggable world state backends
Nethermind.Db/* | Added FlatDB configuration and database layout enums
Nethermind.Init/Modules/* | DI registration for FlatDB components and RocksDB tuning
Nethermind.Core/* | New utility classes for ref counting, read-write locks, and concurrency control
Tests | Test infrastructure updates and new test suites for FlatDB components


Comment on lines +96 to +94
foreach (StateId id in _snapshotRepository.GetStatesAtBlockNumber(blockNumber - _compactSize))
{
if (_snapshotRepository.RemoveAndReleaseCompactedKnownState(id))
{
}
}
Copilot AI Jan 27, 2026

This foreach loop implicitly filters its target sequence - consider filtering the sequence explicitly using '.Where(...)'.

Comment on lines 246 to 253
if (path.Length <= ShortenedPathThreshold)
{
return storageNodes.Get(EncodeShortenedStorageNodeKey(stackalloc byte[ShortenedStorageNodesKeyLength], address, in path), flags: flags);
}
else
{
return fallbackNodes.Get(EncodeFullStorageNodeKey(stackalloc byte[FullStorageNodesKeyLength], address, in path), flags: flags);
}
Copilot AI Jan 27, 2026

Both branches of this 'if' statement return - consider using '?' to express intent better.

Comment on lines 232 to 241
if ((ulong)numBlocks <= uint.MaxValue)
{
// FastRange32-style: floor(h1 * numBlocks / 2^32)
block = (((ulong)h1 * (ulong)(uint)numBlocks) >> 32);
}
else
{
// 64-bit multiply-high: floor(x * numBlocks / 2^64)
block = (ulong)(((UInt128)x * (ulong)numBlocks) >> 64);
}
Copilot AI Jan 27, 2026

Both branches of this 'if' statement write to the same variable - consider using '?' to express intent better.

Comment on lines +326 to +333
if (value is null || Bytes.AreEqual(value, StorageTree.ZeroBytes))
{
_changedSlots[(address, index)] = null;
}
else
{
_changedSlots[(address, index)] = SlotValue.FromSpanWithoutLeadingZero(value);
}
Copilot AI Jan 27, 2026

Both branches of this 'if' statement write to the same variable - consider using '?' to express intent better.

Comment on lines 63 to 70
if (address is null)
{
shardIdx = path.Path.Bytes[0];
}
else
{
shardIdx = address.Bytes[0];
}
Copilot AI Jan 27, 2026

Both branches of this 'if' statement write to the same variable - consider using '?' to express intent better.

Comment on lines 54 to 64
if (_printNodes)
{
return $"{_operationName,-25} {percentage.ToString("P2", CultureInfo.InvariantCulture),8} " +
Progress.GetMeter(percentage, 1) +
$" nodes: {workStr,8}";
}
else
{
return $"{_operationName,-25} {percentage.ToString("P2", CultureInfo.InvariantCulture),8} " +
Progress.GetMeter(percentage, 1);
}
Copilot AI Jan 27, 2026

Both branches of this 'if' statement return - consider using '?' to express intent better.

asdacap and others added 2 commits January 28, 2026 15:32
Co-authored-by: Ben {chmark} Adams <thundercat@illyriad.co.uk>
Co-authored-by: Ben {chmark} Adams <thundercat@illyriad.co.uk>
@asdacap
Copy link
Contributor Author

asdacap commented Jan 28, 2026

@copilot open a new pull request to apply changes based on the comments in this thread

Contributor

Copilot AI commented Jan 28, 2026

@asdacap I've opened a new pull request, #10342, to work on those changes. Once the pull request is ready, I'll request review from you.

/// Make storing slot value smaller than a byte[].
/// </summary>
[StructLayout(LayoutKind.Sequential, Pack = 32, Size = 32)]
public struct SlotValue
Contributor

Might be inline array, not a big diff though

Comment on lines +71 to +74
/// <summary>
/// Disposes it once, decreasing the lease count by 1.
/// </summary>
public void Dispose() => ReleaseLeaseOnce();
Member

A bit weird semantics.
Normally Dispose means we are done with the object.
Maybe Acquire should return something disposable that would decrease the count?

Contributor Author

Maybe, don't know. This is from Paprika, btw.

Comment on lines 21 to 22
// Temporary disable to see if it fix crash
// RocksDbSharp.Native.Instance.rocksdb_cache_destroy(handle);
Member

Sounds bad

Contributor Author

It crashes on exit.

Contributor Author

Actually it still crashes even with this commented out.

FlatLayout.Flat => ctx.Resolve<RocksDbPersistence>(),
FlatLayout.FlatInTrie => ctx.Resolve<FlatInTriePersistence>(),
FlatLayout.PreimageFlat => ctx.Resolve<PreimageRocksdbPersistence>(),
_ => throw new Exception($"Unsupported layout {flatDbConfig.Layout}")
Member

avoid base Exception

Contributor Author

@copilot do it.

Comment on lines 145 to 146
HashSet<Address> addressToClear = new HashSet<Address>();
HashSet<Hash256AsKey> addressHashToClear = new HashSet<Hash256AsKey>();
Member

Looks like a big opportunity for some pooling?
We could use PooledCollections, which has a pooled hash set.

Contributor

* Some initial refactors and optimizations

* more
Contributor

Copilot AI commented Jan 28, 2026

@asdacap I've opened a new pull request, #10346, to work on those changes. Once the pull request is ready, I'll request review from you.

…eModule (#10346)

* Initial plan

* Replace generic Exception with NotSupportedException

Co-authored-by: asdacap <1841324+asdacap@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: asdacap <1841324+asdacap@users.noreply.github.com>
asdacap and others added 9 commits January 30, 2026 08:58
* Parallelize FlatTrieVerifier with partition-based verification

Partition the 256-bit key space into 8 ranges for parallel account
co-iteration in hashed mode. Add bounded iterator support to
IPersistenceReader and TrieLeafIterator. Add HashVerifyingTrieStore
wrapper for hash integrity checks during verification. Improve
diagnostic logging for trie path traversal issues.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Add parameterless iterator defaults to IPersistenceReader

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Revert 3 files to use parameterless iterator overloads

NoopPersistenceReader, RefCountingPersistenceReader, and
PersistenceScenario don't need bounded signatures — the default
interface methods on IPersistenceReader handle the dispatch.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Merge TrieLeafIterator constructors and fix build errors

Combine two nearly-identical constructors into one with optional
startPath/endPath parameters. Fix NoopPersistenceReader and
RefCountingPersistenceReader to implement the required ranged
iterator overloads. Remove unused System.Linq import from tests.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Simplify TrieLeafIterator comparators using TreePath bound methods

Replace ~80 lines of manual nibble-level byte comparison logic with
TreePath.ToUpperBoundPath()/ToLowerBoundPath() one-liners.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Fix spelling: Childs -> Children, simplify GetStartChildIndex

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Remove redundant CompareWithCorrectLength after upstream fix

The upstream fix in 190f688 corrected Bytes.BytesComparer.Compare
to use SequenceCompareTo, making CompareWithCorrectLength redundant.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>