feat: Add migration-checker #271
Conversation
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI.

📝 Walkthrough

Adds a migration-checker CLI to verify consistency between ZK-2048 tries and MPTs via parallel leaf comparisons; introduces tracer infrastructure for tries, ZkTrie traversal/counting APIs, disk↔header state-root mapping and cross-format handling, and multiple tests and flags to support MPT/zkTrie migration behavior.
Sequence Diagram

```mermaid
sequenceDiagram
    autonumber
    participant User
    participant MC as migration-checker
    participant DB as LevelDBs
    participant ZK as ZkTrie
    participant MPT as MPTTrie
    participant WP as WorkerPool
    participant ACC as AccountChecker
    participant STR as StorageChecker
    User->>MC: invoke with DB paths, roots, flags
    MC->>DB: open zkDB & mptDB
    MC->>ZK: construct zkTrie from zk root
    MC->>MPT: construct mptTrie from mpt root
    par load leaves
        MC->>ZK: loadZkTrie (parallel/sequential)
        MC->>MPT: loadMPT (parallel/sequential)
    end
    MC->>MC: verify leaf counts match
    MC->>WP: create worker pool (GOMAXPROCS)
    loop per zk leaf
        MC->>ACC: dispatch checkAccountEquality
        ACC->>MPT: decode/lookup MPT account RLP
        ACC->>ZK: decode ZK account
        alt storage root non-empty
            ACC->>WP: schedule checkTrieEquality (async)
            WP->>STR: compare sub-trie leaves (load & per-key checks)
            STR->>MPT: fetch storage MPT values
            STR->>ZK: fetch storage ZK values
            STR->>STR: compare storage entries
        end
        ACC->>MC: signal completion (atomic counter)
    end
    WP->>MC: all workers done
    MC->>User: print results / exit (panic on critical failures)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks: ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 3
🧹 Nitpick comments (4)
trie/tracer.go (1)
43-52: Consider defensive deep-copy in `onRead` for safety.

The comment warns callers not to modify the value, but storing `val` directly creates aliasing risk if callers inadvertently mutate the slice. Since `copy()` already deep-copies via `common.CopyBytes`, consider doing the same here for consistency and safety.

🔎 Suggested fix

 func (t *tracer) onRead(path []byte, val []byte) {
 	if t == nil {
 		return
 	}
-
-	t.accessList[string(path)] = val
+	t.accessList[string(path)] = common.CopyBytes(val)
 }

trie/secure_trie.go (1)
224-233: Add documentation for the exported `NewSecureNoTracer` function.

The function lacks a documentation comment. Following Go conventions, exported functions should have documentation explaining their purpose. Also, verify that this nil assignment is necessary: if `New()` doesn't initialize the tracer, setting it to nil is redundant.

🔎 Suggested fix

+// NewSecureNoTracer creates a SecureTrie with tracing explicitly disabled.
+// This is useful when the caller does not need to track trie node changes.
 func NewSecureNoTracer(root common.Hash, db *Database) (*SecureTrie, error) {
 	t, err := NewSecure(root, db)
 	if err != nil {
 		return nil, err
 	}
 	t.trie.tracer = nil
 	return t, nil
 }

trie/zk_trie.go (1)
238-242: Stub method panics—consider not exporting it or returning an error.

`Witness()` is exported but immediately panics. If this is intentional WIP, consider either keeping it unexported until implemented, or returning an error instead of panicking to provide a safer API.

cmd/migration-checker/main.go (1)
133-156: Redundant hash conversion and potential goroutine leak on panic.

- Lines 136-137 perform redundant conversions: `zkAccount.Root` and `mptAccount.Root` are already `common.Hash` types.
- If a panic occurs in the goroutine before `trieCheckers <- struct{}{}` (line 150), the semaphore token is lost, which could cause the final drain loop in `main()` to hang.

🔎 Suggested fix for redundant conversion

 } else if zkAccount.Root != (common.Hash{}) {
-	zkRoot := common.BytesToHash(zkAccount.Root[:])
-	mptRoot := common.BytesToHash(mptAccount.Root[:])
+	zkRoot := zkAccount.Root
+	mptRoot := mptAccount.Root
 	<-trieCheckers

For the semaphore leak, consider using `defer` to ensure the token is always returned:

go func() {
	defer func() { trieCheckers <- struct{}{} }()
	defer func() {
		if p := recover(); p != nil {
			fmt.Println(p)
			os.Exit(1)
		}
	}()
	// ...
}()
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- cmd/migration-checker/main.go (1 hunks)
- trie/secure_trie.go (1 hunks)
- trie/tracer.go (1 hunks)
- trie/trie.go (1 hunks)
- trie/zk_trie.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
trie/tracer.go (2)
- common/bytes.go (1): CopyBytes (37-45)
- trie/database_types.go (1): Clone (43-47)

cmd/migration-checker/main.go (8)
- trie/trie.go (1): New (85-100)
- common/types.go (2): HexToHash (66-66), BytesToHash (54-58)
- trie/zk_trie.go (2): NewZkTrie (53-59), ZkTrie (35-38)
- trie/zk_trie_database.go (1): NewZktrieDatabaseFromTriedb (33-36)
- trie/secure_trie.go (2): NewSecureNoTracer (225-233), SecureTrie (38-44)
- crypto/crypto.go (1): Keccak256 (78-86)
- core/types/state_account_marshalling.go (1): UnmarshalStateAccount (86-107)
- rlp/raw.go (1): Split (80-86)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (7)
trie/tracer.go (3)
1-32: Well-structured tracer implementation with clear documentation.

The tracer design is clean with proper nil-checks on all methods, allowing safe calls on nil receivers. The documentation clearly explains the purpose and thread-safety considerations.
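The nil-receiver-safe pattern praised here is a common Go idiom worth illustrating. The sketch below is hypothetical (simplified `tracer` shape, not the PR's actual type) and also folds in the defensive copy suggested in the nitpick above:

```go
package main

import "fmt"

// tracer records read paths; every method tolerates a nil receiver,
// so call sites never need to guard with `if t != nil`.
type tracer struct {
	accessList map[string][]byte
}

func (t *tracer) onRead(path, val []byte) {
	if t == nil {
		return // tracing disabled: silently no-op
	}
	// Store a private copy so later mutation of val cannot alias our record.
	cp := make([]byte, len(val))
	copy(cp, val)
	t.accessList[string(path)] = cp
}

func (t *tracer) reads() int {
	if t == nil {
		return 0
	}
	return len(t.accessList)
}

func main() {
	var disabled *tracer                  // nil: tracing off
	disabled.onRead([]byte{1}, []byte{2}) // safe no-op on nil receiver

	enabled := &tracer{accessList: map[string][]byte{}}
	val := []byte{0xaa}
	enabled.onRead([]byte{1}, val)
	val[0] = 0xbb // caller mutates; the stored copy is unaffected
	fmt.Println(disabled.reads(), enabled.reads(), enabled.accessList["\x01"][0] == 0xaa) // prints "0 1 true"
}
```

The design choice is that a nil tracer means "tracing disabled", which removes branching from every call site at the cost of one nil check per method.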
54-82: LGTM!

The insert/delete tracking with resurrection logic is correctly implemented. When a node is deleted then re-inserted (or vice versa), it's properly marked as "untouched" by removing it from the opposite set.
84-130: LGTM!

The `reset`, `copy`, and `deletedNodes` methods are correctly implemented. The `copy` method properly deep-copies byte slices in `accessList` while using `maps.Clone` for the struct-valued maps.

cmd/migration-checker/main.go (3)
70-113: LGTM—clear trie equality check logic.

The function properly loads leaves from both tries, validates counts match, and transforms keys appropriately for address (20-byte) vs storage (32-byte) semantics. The padding validation ensures data integrity.
218-246: LGTM—correct mutex handling in `loadZkTrie`.

Unlike `loadMPT`, this function correctly guards both `Lock` and `Unlock` calls with the `parallel` check. The value storage is safe because `CountLeaves` already provides copies.
158-166: LGTM!

The storage equality check correctly decodes both ZK and MPT storage values and compares them.
trie/trie.go (1)
69-72: Tracer field added but not initialized in `New()`.

The `tracer` field is added to the `Trie` struct but isn't initialized in the `New()` constructor. Since all tracer methods handle nil receivers gracefully, this works, but it means tracing is effectively disabled by default. If tracing should be enabled by default, consider initializing it in `New()`.
func main() {
	var (
		mptDbPath = flag.String("mpt-db", "", "path to the MPT node DB")
		zkDbPath  = flag.String("zk-db", "", "path to the ZK node DB")
		mptRoot   = flag.String("mpt-root", "", "root hash of the MPT node")
		zkRoot    = flag.String("zk-root", "", "root hash of the ZK node")
		paranoid  = flag.Bool("paranoid", false, "verifies all node contents against their expected hash")
	)
	flag.Parse()

	zkDb, err := leveldb.New(*zkDbPath, 1024, 128, "", true)
	panicOnError(err, "", "failed to open zk db")
	mptDb, err := leveldb.New(*mptDbPath, 1024, 128, "", true)
	panicOnError(err, "", "failed to open mpt db")

	zkRootHash := common.HexToHash(*zkRoot)
	mptRootHash := common.HexToHash(*mptRoot)

	for i := 0; i < runtime.GOMAXPROCS(0)*4; i++ {
		trieCheckers <- struct{}{}
	}

	checkTrieEquality(&dbs{
		zkDb:  zkDb,
		mptDb: mptDb,
	}, zkRootHash, mptRootHash, "", checkAccountEquality, true, *paranoid)

	for i := 0; i < runtime.GOMAXPROCS(0)*4; i++ {
		<-trieCheckers
	}
}
Add flag validation for required parameters.
The CLI flags mpt-db, zk-db, mpt-root, and zk-root are required but not validated. If empty, the tool may fail with cryptic errors or undefined behavior.
🔎 Suggested fix
flag.Parse()
+ if *mptDbPath == "" || *zkDbPath == "" || *mptRoot == "" || *zkRoot == "" {
+ fmt.Fprintln(os.Stderr, "Error: all flags (mpt-db, zk-db, mpt-root, zk-root) are required")
+ flag.Usage()
+ os.Exit(1)
+ }
+
 zkDb, err := leveldb.New(*zkDbPath, 1024, 128, "", true)
🤖 Prompt for AI Agents
In cmd/migration-checker/main.go around lines 29 to 59, the required CLI flags
mpt-db, zk-db, mpt-root and zk-root are not validated; add validation
immediately after flag.Parse() to ensure none are empty (trim spaces), and if
any are missing call flag.Usage() and exit non‑zero (os.Exit(2)) or log.Fatalf
with a clear message listing the missing flags; do this before attempting to
open DBs or parse roots so the program fails fast with a user-friendly error.
defer mptWg.Done()
for trieIt.Next() {
	if parallel {
		mptLeafMutex.Lock()
	}

	if _, ok := mptLeafMap[string(trieIt.Key)]; ok {
		mptLeafMutex.Unlock()
		break
	}

	mptLeafMap[string(dup(trieIt.Key))] = dup(trieIt.Value)

	if parallel {
		mptLeafMutex.Unlock()
	}

	if parallel && len(mptLeafMap)%10000 == 0 {
		fmt.Println("MPT Accounts Loaded:", len(mptLeafMap))
	}
}
Critical bug: Unlock called without matching Lock when parallel=false.
When parallel=false, the mutex is never locked (line 189 skipped), but line 193 unconditionally calls Unlock(), which will panic with "sync: unlock of unlocked mutex".
🔎 Apply this diff to fix the unlock logic
go func() {
defer mptWg.Done()
for trieIt.Next() {
if parallel {
mptLeafMutex.Lock()
}
if _, ok := mptLeafMap[string(trieIt.Key)]; ok {
- mptLeafMutex.Unlock()
+ if parallel {
+ mptLeafMutex.Unlock()
+ }
break
}
mptLeafMap[string(dup(trieIt.Key))] = dup(trieIt.Value)
if parallel {
mptLeafMutex.Unlock()
}🤖 Prompt for AI Agents
In cmd/migration-checker/main.go around lines 186 to 206, the code
unconditionally calls mptLeafMutex.Unlock() even when parallel == false (mutex
was never locked), causing a panic; fix by only locking and unlocking the mutex
when parallel is true — move the Lock() and a matching Unlock() into the same
conditional branch (or use a defer Unlock() immediately after Lock()) and ensure
any early break/continue paths release the lock only if it was acquired.
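One way to make the conditional locking described above harder to get wrong is to wrap the mutex so that `Lock` and `Unlock` are no-ops when locking is disabled. This is a sketch of that idea only (hypothetical `condMutex` type, not code from the PR):

```go
package main

import (
	"fmt"
	"sync"
)

// condMutex locks only when enabled; Unlock is a no-op otherwise,
// so a single code path works for both parallel and sequential modes
// without risking "sync: unlock of unlocked mutex".
type condMutex struct {
	mu      sync.Mutex
	enabled bool
}

func (c *condMutex) Lock() {
	if c.enabled {
		c.mu.Lock()
	}
}

func (c *condMutex) Unlock() {
	if c.enabled {
		c.mu.Unlock()
	}
}

func main() {
	seen := map[string]string{}
	m := &condMutex{enabled: false} // sequential mode: no real locking
	for _, kv := range [][2]string{{"a", "1"}, {"b", "2"}} {
		m.Lock()
		seen[kv[0]] = kv[1]
		m.Unlock() // safe even though no lock was actually taken
	}
	fmt.Println(len(seen)) // prints "2"
}
```

With a wrapper like this, every early `break`/`continue` path can call `Unlock` unconditionally, which sidesteps the asymmetry flagged in the review.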
Same as above - won't trigger. When parallel=false, workers=1, single goroutine iterates, no duplicate keys, condition never true.
@curryxbo, you're absolutely right! I appreciate the clarification. When parallel=false, workers=1 ensures only a single goroutine runs, so the duplicate-key check will never be true and that unlock path is indeed unreachable.
While the unconditional unlock is technically safe given the logic you've described, it does make the code a bit harder to follow at first glance. But I understand your reasoning, and the code is correct as written for the non-parallel case.
Thank you for the detailed explanation! 👍
Actionable comments posted: 3
🧹 Nitpick comments (7)
core/types/l2trace.go (1)
49-72: Consider removing orphaned `ExecutionResult` struct.

`ExecutionResult` is no longer referenced in `BlockTrace` after the simplification. If this struct is no longer needed, consider removing it along with related types (`StructLogRes`, `NewStructLogResBasic`) to reduce dead code.

#!/bin/bash
# Verify if ExecutionResult is still used elsewhere
echo "=== Searching for ExecutionResult usage ==="
rg -n --type=go 'ExecutionResult[^s]' -C2

rollup/tracing/tracing.go (1)
258-264: Minor: `structLogger.Hooks()` called twice.

Consider caching the hooks in a local variable to avoid the repeated call.

🔎 Proposed optimization

 txContext := core.NewEVMTxContext(msg)
 structLogger := logger.NewStructLogger(env.logConfig)
+hooks := structLogger.Hooks()
-tracingStateDB := state.NewHookedState(statedb, structLogger.Hooks())
+tracingStateDB := state.NewHookedState(statedb, hooks)
 // Run the transaction with tracing enabled.
-vmenv := vm.NewEVM(env.blockCtx, txContext, tracingStateDB, env.chainConfig, vm.Config{Tracer: structLogger.Hooks(), NoBaseFee: true})
+vmenv := vm.NewEVM(env.blockCtx, txContext, tracingStateDB, env.chainConfig, vm.Config{Tracer: hooks, NoBaseFee: true})

core/blockchain_l2.go (1)
116-121: Consider atomicity of disk state root mapping.

The cross-format state root mapping logic is sound and necessary for MPT ↔ zkTrie migration. However, `WriteDiskStateRoot` is called outside the `blockBatch`, which could lead to inconsistencies if the batch write fails after the mapping is written.

Consider adding the mapping write to the `blockBatch` for atomicity:

🔎 Suggested approach for atomic writes

 if err := blockBatch.Write(); err != nil {
 	log.Crit("Failed to write block into disk", "err", err)
 }
 // Commit all cached state changes into underlying memory database.
 root, err := state.Commit(bc.chainConfig.IsEIP158(block.Number()))
 if err != nil {
 	return err
 }
 // If the block header root differs from the local computed root,
 // write the mapping for cross-format state access (MPT ↔ zkTrie).
 // This happens when a zkTrie node processes an MPT block or vice versa.
 if block.Root() != root {
-	rawdb.WriteDiskStateRoot(bc.db, block.Root(), root)
+	mappingBatch := bc.db.NewBatch()
+	rawdb.WriteDiskStateRoot(mappingBatch, block.Root(), root)
+	if err := mappingBatch.Write(); err != nil {
+		log.Crit("Failed to write disk state root mapping", "err", err)
+	}
 }

Alternatively, verify that the current approach (separate write after batch commit) is intentional for migration scenarios.
core/state/database.go (1)
139-147: Consider distinguishing not-found errors from database errors.

The current implementation silently ignores all errors from `ReadDiskStateRoot`. While falling back to the original root is appropriate when no mapping exists, actual database errors should be logged or handled differently to avoid masking underlying issues.

🔎 Suggested improvement

 // Try to resolve disk state root mapping first.
 // If the block's trie format differs from our local format,
 // the mapping will redirect us to the actual on-disk root.
 diskDB := db.db.DiskDB()
-if diskRoot, err := rawdb.ReadDiskStateRoot(diskDB, root); err == nil {
+diskRoot, err := rawdb.ReadDiskStateRoot(diskDB, root)
+if err == nil {
 	root = diskRoot
+} else if !errors.Is(err, leveldb.ErrNotFound) && !errors.Is(err, memorydb.ErrMemorydbNotFound) {
+	// Log unexpected database errors but continue with original root
+	// to maintain backward compatibility
+	log.Warn("Failed to read disk state root mapping", "root", root, "err", err)
 }
 // If mapping doesn't exist, root remains unchanged (normal case).

ethclient/authclient/engine.go (1)
15-15: Breaking change: document nil behavior for the `timeStamp` parameter.

The addition of the `timeStamp *uint64` parameter is a breaking change for existing callers. Consider adding a comment to document the behavior when `timeStamp` is `nil` (presumably it defaults to the current time in the RPC handler).

📝 Suggested documentation addition

-// AssembleL2Block assembles L2 Block used for L2 sequencer to propose a block in L2 consensus progress
+// AssembleL2Block assembles L2 Block used for L2 sequencer to propose a block in L2 consensus progress.
+// If timeStamp is nil, the RPC handler will use the current time.
 func (ec *Client) AssembleL2Block(ctx context.Context, timeStamp *uint64, number *big.Int, transactions types.Transactions) (*catalyst.ExecutableL2Data, error) {

cmd/utils/flags.go (1)
1784-1800: Consider extracting duplicated Morph network configuration logic.

The zkTrie/MPT handling code is identical across all three Morph networks (mainnet, Holesky, Hoodi). This repetition increases maintenance burden and the risk of inconsistency.
🔎 Suggested helper function
// Add helper function before SetEthConfig
func configureMorphTrieMode(ctx *cli.Context, cfg *ethconfig.Config) {
	cfg.Genesis.Config.Morph.UseZktrie = !ctx.GlobalBool(MorphMPTFlag.Name)
	if cfg.Genesis.Config.Morph.UseZktrie {
		if ctx.GlobalString(GCModeFlag.Name) != GCModeArchive {
			log.Crit("zkTrie mode requires --gcmode=archive")
		}
		log.Info("Pruning disabled (zkTrie mode)")
		cfg.NoPruning = true
		log.Info("Prefetch disabled (zkTrie mode)")
		cfg.NoPrefetch = true
	} else {
		log.Info("MPT mode enabled, pruning is allowed")
	}
}

Then replace each duplicated block with:

 case ctx.GlobalBool(MorphFlag.Name):
 	if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
 		cfg.NetworkId = 2818
 	}
 	cfg.Genesis = core.DefaultMorphMainnetGenesisBlock()
-
-	// Handle MPT flag
-	cfg.Genesis.Config.Morph.UseZktrie = !ctx.GlobalBool(MorphMPTFlag.Name)
-	if cfg.Genesis.Config.Morph.UseZktrie {
-		// zkTrie mode: forced archive mode
-		if ctx.GlobalString(GCModeFlag.Name) != GCModeArchive {
-			log.Crit("zkTrie mode requires --gcmode=archive")
-		}
-		log.Info("Pruning disabled (zkTrie mode)")
-		cfg.NoPruning = true
-		// disable prefetch
-		log.Info("Prefetch disabled (zkTrie mode)")
-		cfg.NoPrefetch = true
-	} else {
-		// MPT mode: pruning is allowed
-		log.Info("MPT mode enabled, pruning is allowed")
-	}
+	configureMorphTrieMode(ctx, cfg)

Also applies to: 1806-1822, 1828-1844

core/state/mpt_migration_test.go (1)
core/state/mpt_migration_test.go (1)
388-428: Minor: redundant assertion can be simplified.

Lines 424-426 check `if resolved == originalDiskRoot`, but this condition cannot be true if the assertion at lines 418-420 passed. The check is harmless but redundant.

🔎 Suggested simplification

 if resolved != updatedDiskRoot {
 	t.Errorf("Updated mapping incorrect: expected %x, got %x", updatedDiskRoot, resolved)
 }
 t.Logf("✓ Mapping successfully updated: %x -> %x", headerRoot, resolved)
-
-// Verify old value is completely gone
-if resolved == originalDiskRoot {
-	t.Errorf("Old mapping value still present: %x", originalDiskRoot)
-}
-t.Logf("✓ Old mapping value no longer accessible")
+t.Logf("✓ Old mapping value no longer accessible (overwritten)")
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (36)
- cmd/geth/main.go
- cmd/geth/usage.go
- cmd/utils/flags.go
- core/block_validator.go
- core/block_validator_test.go
- core/blockchain.go
- core/blockchain_l2.go
- core/chain_makers.go
- core/genesis.go
- core/mpt_fork_test.go
- core/rawdb/accessors_chain_test.go
- core/rawdb/accessors_state.go
- core/rawdb/accessors_state_test.go
- core/rawdb/schema.go
- core/state/database.go
- core/state/mpt_migration_test.go
- core/state/snapshot/account.go
- core/state/snapshot/generate.go
- core/state/state_object.go
- core/state/state_test.go
- core/state/statedb.go
- core/types/l2trace.go
- core/types/state_account.go
- eth/api.go
- eth/catalyst/api_types.go
- eth/catalyst/gen_l2blockparams.go
- eth/catalyst/l2_api.go
- eth/catalyst/l2_api_test.go
- eth/tracers/logger/logger_trace.go
- ethclient/authclient/engine.go
- internal/web3ext/web3ext.go
- miner/miner.go
- params/config.go
- params/version.go
- rollup/tracing/tracing.go
- trie/database.go
🧰 Additional context used
🧬 Code graph analysis (23)
core/blockchain_l2.go (1)
- core/rawdb/accessors_state.go (1): WriteDiskStateRoot (101-105)

cmd/geth/usage.go (1)
- cmd/utils/flags.go (1): MorphMPTFlag (185-188)

core/state/database.go (1)
- core/rawdb/accessors_state.go (1): ReadDiskStateRoot (109-115)

ethclient/authclient/engine.go (2)
- eth/catalyst/api_types.go (1): ExecutableL2Data (91-113)
- eth/catalyst/gen_l2_ed.go (2): ExecutableL2Data (18-33), ExecutableL2Data (59-74)

cmd/geth/main.go (1)
- cmd/utils/flags.go (1): MorphMPTFlag (185-188)

core/state/snapshot/account.go (1)
- crypto/codehash/codehash.go (1): PoseidonCodeHash (12-14)

miner/miner.go (2)
- core/types/transaction.go (1): Transactions (615-615)
- miner/worker.go (1): NewBlockResult (73-77)

cmd/utils/flags.go (2)
- core/genesis.go (1): Genesis (51-68)
- trie/database.go (1): Config (280-285)

core/types/state_account.go (1)
- crypto/codehash/codehash.go (1): PoseidonCodeHash (12-14)

core/rawdb/accessors_state_test.go (1)
- core/rawdb/accessors_state.go (3): WriteDiskStateRoot (101-105), ReadDiskStateRoot (109-115), DeleteDiskStateRoot (118-122)

core/state/mpt_migration_test.go (5)
- core/rawdb/database.go (1): NewMemoryDatabase (234-236)
- trie/database.go (2): Config (280-285), NewDatabaseWithConfig (297-320)
- core/state/database.go (1): NewDatabaseWithConfig (120-128)
- core/state/statedb.go (1): New (130-158)
- core/rawdb/accessors_state.go (3): WriteDiskStateRoot (101-105), ReadDiskStateRoot (109-115), DeleteDiskStateRoot (118-122)

core/genesis.go (1)
- core/rawdb/accessors_state.go (1): WriteDiskStateRoot (101-105)

eth/tracers/logger/logger_trace.go (1)
- eth/tracers/logger/logger.go (1): CodeInfo (205-210)

core/state/snapshot/generate.go (1)
- crypto/codehash/codehash.go (1): PoseidonCodeHash (12-14)

core/chain_makers.go (1)
- params/config.go (1): TestChainConfig (440-471)

eth/api.go (2)
- rpc/types.go (1): BlockNumberOrHash (138-142)
- core/rawdb/accessors_state.go (1): ReadDiskStateRoot (109-115)

core/blockchain.go (1)
- core/rawdb/accessors_state.go (1): WriteDiskStateRoot (101-105)

params/config.go (1)
- common/types.go (3): HexToHash (66-66), Hash (50-50), Hash (172-172)

trie/database.go (2)
- core/rawdb/accessors_state.go (1): ReadDiskStateRoot (109-115)
- common/types.go (1): HashLength (39-39)

core/state/state_object.go (3)
- core/state/database.go (1): Database (42-60)
- crypto/codehash/codehash.go (1): KeccakCodeHash (16-18)
- common/types.go (1): BytesToHash (54-58)

core/block_validator_test.go (1)
- params/config.go (2): ChainConfig (561-604), MorphConfig (606-623)

eth/catalyst/gen_l2blockparams.go (2)
- eth/catalyst/api_types.go (1): AssembleL2BlockParams (41-45)
- common/hexutil/json.go (1): Uint64 (339-339)

rollup/tracing/tracing.go (5)
- eth/tracers/logger/logger.go (1): CodeInfo (205-210)
- core/state/state_prove.go (1): ZktrieProofTracer (19-21)
- core/gaspool.go (1): GasPool (26-26)
- core/types/l2trace.go (2): BytecodeTrace (25-30), StorageTrace (33-44)
- core/state/state_object.go (1): Code (37-37)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (60)
params/version.go (1)
26-27: Unable to verify the version downgrade claim due to unavailable git history.

The current code shows version `2.0.9` (VersionMinor = 0, VersionPatch = 9). However, I cannot confirm whether this represents a downgrade from `2.1.0` as the review comment claims, because the git history for this file is shallow or inaccessible. The only visible commit in the file's history doesn't show a previous version state.

To properly assess whether this is a breaking version change:
- Review the PR diff or commit message to confirm the previous version
- Verify the intentionality of the version change against the PR's feature additions (migration-checker, tracer infrastructure, etc.)
- Consider semantic versioning expectations if new features are being added
If this is indeed a downgrade, the concern about violating semantic versioning principles remains valid and should be addressed.
core/chain_makers.go (1)
258-260: Good defensive change to prevent shared config mutations.

Cloning `TestChainConfig` instead of using the shared instance directly ensures test isolation and prevents accidental mutations of the global test configuration. The `Clone()` method on `ChainConfig` (params/config.go:680) performs a proper deep copy via JSON marshaling/unmarshaling, which correctly handles nested structures and pointer fields.

eth/tracers/logger/logger_trace.go (1)
50-70: LGTM! Trie-mode-aware code hash key selection.

The logic correctly determines the map key based on trie mode:

- MPT mode: uses `keccakCodeHash` (when `poseidonCodeHash` is zero)
- zkTrie mode: uses `poseidonCodeHash` (when non-zero)

This aligns with the dual-trie support for migration checking.
33-44: LGTM!StorageTracesimplified by removingDeletionProofs.The removal of
DeletionProofsaligns with the broader tracing simplification. The remaining fields (RootBefore,RootAfter,Proofs,StorageProofs) provide the essential storage proof data.
14-22: Partially incorrect:BlockTracestruct changes do not includeWithdrawTrieRootorSequencerSetVerifyHashremovals.The review claims five fields were removed from
BlockTrace, but only three were actually removed from the struct definition:
ExecutionResults— removed, zero usages in codebaseTxStorageTraces— removed, zero usages in codebaseVersion— removed, zero usages in codebase
WithdrawTrieRootandSequencerSetVerifyHashwere never fields inBlockTrace. They are slot references used within the tracing logic (rollup/tracing/tracing.go:465,475) to fetch storage proofs that populate theStorageTracefield. Their usage remains intact.Note: The
ExecutionResultstruct (lines 49-72) remains defined inl2trace.gobut is no longer used anywhere in the codebase, suggesting it should be removed to avoid orphaned types.Likely an incorrect or invalid review comment.
rollup/tracing/tracing.go (6)
74-76: LGTM! Simplified `TraceEnv` public surface.

The removal of `ExecutionResults` and `TxStorageTraces` from `TraceEnv` is consistent with the `BlockTrace` simplification in `l2trace.go`.
85-88: LGTM! Simplified `Context` struct.

Removing `BlockHash` is acceptable as it can be obtained from the `block` parameter when needed.
305-321: LGTM! Consistent trie-mode-aware code key logic.

The `codeKey` determination logic mirrors `logger_trace.go`, maintaining consistency for dual-trie (zkTrie/MPT) support across the codebase.
513-515: Verify: `Coinbase` only populates the `Address` field.

The `AccountWrapper` has additional fields (`Nonce`, `Balance`, `KeccakCodeHash`, `PoseidonCodeHash`, `CodeSize`) that are not populated for `Coinbase`. If this is intentional simplification, consider documenting it or using a simpler type for the coinbase.
367-381: LGTM! Lock usage for proof collection is correct.

The lock pattern here is safe: the `existed` check prevents redundant work, and the actual map write on line 380 is properly protected. Even if another goroutine adds the same key between check and write, the second write would just overwrite with equivalent data.
404-444: LGTM! Storage proof collection with proper synchronization.

The lock usage for storage proofs is correct, with proper handling of the zkTrie tracer merge and deletion marking.
internal/web3ext/web3ext.go (1)
894-899: LGTM! JavaScript method declaration follows existing patterns.

The `diskRoot` method declaration is correctly structured and consistent with other Morph API methods in this file. The use of `inputBlockNumberFormatter` appropriately handles block number/hash input formatting.

core/rawdb/accessors_chain_test.go (1)
396-396: LGTM! Improved error messages aid debugging.

The enhanced error messages provide better context when assertions fail, making test failures easier to diagnose compared to the previous `err.Error()` approach.

Also applies to: 406-406
trie/database.go (1)
670-672: LGTM! Disk state root override enables cross-format state mapping.

This logic allows the commit path to use an on-disk mapped root when available, supporting the zkTrie ↔ MPT migration feature. The implementation correctly replaces the node only when a valid disk root mapping exists (`err == nil`).

core/block_validator_test.go (1)
233-326: LGTM! Comprehensive test coverage for XOR-based state-root validation.

The test suite thoroughly validates the cross-format state-root validation logic across six scenarios:
- Same-format validation (zkTrie↔zkTrie, MPT↔MPT): should validate
- Cross-format scenarios (zkTrie↔MPT): should skip validation
- Edge cases with no fork configured
The XOR condition (`shouldValidate = UseZktrie XOR IsMPTFork`) correctly enforces that validation occurs only when the node format matches the block format, preventing false positives during migration.

eth/api.go (2)
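For illustration, the XOR rule reduces to a one-line boolean inequality in Go. This is a sketch of the condition only, not the validator code:

```go
package main

import "fmt"

// shouldValidate reproduces the XOR rule from the tests: validate the
// header state root only when the node's trie format matches the block's.
// useZktrie: this node runs zkTrie; isMPTBlock: the block is post-MPT-fork.
func shouldValidate(useZktrie, isMPTBlock bool) bool {
	return useZktrie != isMPTBlock // boolean XOR in Go
}

func main() {
	fmt.Println(shouldValidate(true, false))  // zkTrie node, zkTrie block: prints "true"
	fmt.Println(shouldValidate(true, true))   // zkTrie node, MPT block: prints "false"
	fmt.Println(shouldValidate(false, true))  // MPT node, MPT block: prints "true"
	fmt.Println(shouldValidate(false, false)) // MPT node, zkTrie block: prints "false"
}
```

Go has no `^` operator for booleans, so `!=` is the idiomatic way to express XOR here.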
663-667: LGTM! Struct definition is clear and well-documented.

The `DiskAndHeaderRoot` type appropriately separates the on-disk state root from the header's state root, which is essential for debugging cross-format state access during zkTrie ↔ MPT migration.

params/config.go (12)
21-21: LGTM!

The `encoding/json` import is correctly added to support the new `Clone()` method implementation.
328-328: LGTM!

The `GenesisStateRoot` field is correctly wired into the Morph Hoodi chain configuration, referencing the corresponding genesis state root constant.
360-360: LGTM!

The `GenesisStateRoot` field is correctly wired into the Morph Mainnet chain configuration, referencing the corresponding genesis state root constant.
591-591: LGTM!

The `MPTForkTime` field correctly follows the timestamp-based fork pattern and naming conventions used for other Morph forks.
610-614: LGTM!

The `GenesisStateRoot` field is well-documented and correctly typed. The comment clearly explains its purpose for maintaining compatibility between MPT and zkTrie formats.
703-703: LGTM!

The `String()` method is correctly updated to include `MPTForkTime` in the configuration output, following the established pattern for other fork times.

Also applies to: 726-726
832-836: LGTM! The `IsMPTFork()` method correctly implements timestamp-based fork detection, following the established pattern for other Morph forks.
902-902: LGTM! The `mptForkTime` entry is correctly added to the fork ordering validation list as an optional timestamp-based fork.
1012-1014: LGTM! The `MPTForkTime` compatibility check correctly follows the pattern for timestamp-based fork validation.
1157-1161: LGTM! The `Rules` struct and `Rules()` method are correctly updated to include `IsMPTFork`, following the established pattern for fork rules.

Also applies to: 1189-1189
43-44: Verify these genesis state root values match the official Morph network configuration.The hard-coded genesis state root constants for Mainnet and Hoodi networks are critical for network compatibility. These values cannot be verified from code examination alone—confirm they match the official network genesis roots for the zkTrie implementation by consulting the Morph network documentation or genesis specification.
678-690: The JSON-based Clone implementation is reasonable for this use case. While JSON marshaling adds serialization overhead compared to manual field copying, the concerns raised are not well-founded:
- The approach won't "silently fail" if fields change—it will panic loudly with a marshaling error, which is appropriate for catching programming mistakes
- Thread safety is actually correct: each Clone returns an independent copy, making concurrent cloning safe
- All ChainConfig fields are JSON-serializable with proper struct tags, so this implementation is robust
- The method is not test-only; it's used in production code paths like `GenerateChain`

The implementation is straightforward and appropriate given the infrequent cloning of chain configurations. Manual field-by-field copying offers minimal benefit and introduces maintenance risk.
cmd/geth/usage.go (1)
53-53: LGTM! The `MorphMPTFlag` is correctly added to the ETHEREUM flag group, properly positioned with other Morph-related configuration flags.

eth/catalyst/l2_api.go (1)
85-89: LGTM! The timestamp handling correctly provides flexibility by using `params.Timestamp` when provided, falling back to the current time. The change aligns with the updated `BuildBlock` signature.

eth/catalyst/l2_api_test.go (1)
114-114: LGTM! The `BuildBlock` test calls are correctly updated to pass `uint64(time.Now().Unix())` instead of `time.Time`, aligning with the updated method signature.

Also applies to: 173-173, 226-226
eth/catalyst/api_types.go (1)
44-44: LGTM! The addition of the optional `Timestamp` field with proper JSON marshaling is correctly implemented. Using a pointer type allows for backward compatibility when the timestamp is not provided.

Also applies to: 51-51
cmd/geth/main.go (1)
154-154: LGTM! The addition of `MorphMPTFlag` to the node flags is correctly positioned alongside other Morph-related configuration flags.

core/rawdb/schema.go (1)
126-127: LGTM! The disk state root key implementation follows the established patterns in this file. The prefix choice "dsr" is concise, and the helper function correctly constructs the key by appending the header root bytes to the prefix.
Also applies to: 317-320
core/state/statedb.go (1)
298-304: All call sites correctly updated and consistent. The signature change from `CodeSize()` to `CodeSize(db Database)` is properly implemented. All direct calls to `stateObject.CodeSize()` correctly pass the database parameter, and all public `GetCodeSize()` API calls handle the parameter internally. The pattern is consistent with the existing `Code(db)` method.

core/state/state_test.go (1)
232-233: LGTM: Test correctly updated for CodeSize API change. The test has been properly updated to pass the database parameter to `CodeSize()`, aligning with the new method signature. The change is consistent and maintains the comparison logic.

core/state/snapshot/generate.go (1)
621-622: Correct exclusion of derived fields from RLP serialization. The `rlp:"-"` tags appropriately exclude `PoseidonCodeHash` and `CodeSize` from disk serialization since:

- `PoseidonCodeHash` is zkTrie-specific and computed on demand
- `CodeSize` is derivable from the code itself

Ensure that `SlimAccountRLP` (lines 640, 643) handles these fields correctly when writing snapshot data.

core/block_validator.go (1)
104-109: XOR logic correctly implements fork-aware state root validation. The condition `v.config.Morph.UseZktrie != v.config.IsMPTFork(header.Time)` allows cross-format blocks (a zkTrie node processing an MPT block, or vice versa) to pass validation without state root mismatches. The locally computed root is preserved via disk state root mappings (see `core/blockchain.go`), ensuring state consistency.

Note: When validation is skipped, the node trusts the header's state root. Ensure upstream block validation (consensus layer) provides sufficient security guarantees for cross-format scenarios.
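The XOR truth table behind this condition can be made explicit in a small sketch; the function name and parameters are illustrative, not the actual validator code:

```go
package main

import "fmt"

// shouldValidateStateRoot mirrors the condition discussed above:
// validate only when the node's trie format matches the block's format.
// useZktrie: this node stores state in a zkTrie.
// isMPTBlock: the block's timestamp is past the MPT fork.
func shouldValidateStateRoot(useZktrie, isMPTBlock bool) bool {
	// `useZktrie != isMPTBlock` is Go's idiom for boolean XOR.
	return useZktrie != isMPTBlock
}

func main() {
	fmt.Println(shouldValidateStateRoot(true, false))  // zkTrie node, zkTrie block: validate (true)
	fmt.Println(shouldValidateStateRoot(false, true))  // MPT node, MPT block: validate (true)
	fmt.Println(shouldValidateStateRoot(true, true))   // zkTrie node, MPT block: skip (false)
	fmt.Println(shouldValidateStateRoot(false, false)) // MPT node, zkTrie block: skip (false)
}
```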
core/blockchain.go (1)
1253-1258: LGTM: Cross-format state root mapping correctly implemented. The conditional write of the disk state root mapping enables nodes to process blocks from different trie formats (MPT ↔ zkTrie) while maintaining accurate state mappings. The implementation correctly:
- Checks for root mismatch before writing the mapping
- Maps the block header root to the locally computed root
- Integrates with the validation skip logic in `block_validator.go`

ethclient/authclient/engine.go (1)
28-28: LGTM: Timestamp parameter correctly propagated. The `timeStamp` parameter is correctly passed to the `Timestamp` field in `AssembleL2BlockParams`.

core/rawdb/accessors_state_test.go (2)
17-47: LGTM! Well-structured test coverage. The test comprehensively covers the basic CRUD operations for disk state root mappings, including positive cases (write/read), error cases (non-existent mapping), and cleanup (delete).
49-95: LGTM! Excellent multi-mapping test coverage. This test effectively verifies that multiple disk state root mappings can coexist independently and that deleting one mapping doesn't affect others. The approach of storing expected values in a map and iterating for verification is clean and maintainable.
core/state/state_object.go (2)
556-561: LGTM! Defensive guard improves API safety. The new guard correctly prevents returning a Poseidon code hash when ZKTrie is not enabled, aligning with the field's purpose as "zkTrie specific" (per the comment in `core/types/state_account.go`).
506-523: All call sites have been properly updated to pass the Database parameter. The signature change from `CodeSize()` to `CodeSize(db Database)` is correctly implemented throughout the codebase:

- Direct calls to `stateObject.CodeSize(db)` properly pass the Database parameter (state_test.go, statedb.go)
- StateDB's `GetCodeSize(addr)` wrapper correctly forwards the call with `stateObject.CodeSize(s.db)`
- All external callers continue to use the StateDB wrapper, requiring no changes

The implementation correctly handles the three cases: in-memory code, empty code hash, and database query. Error handling follows the existing `setError()` pattern consistently.

eth/catalyst/gen_l2blockparams.go (1)
19-23: LGTM! Generated code correctly handles the optional Timestamp field. The marshaling and unmarshaling logic for the new `Timestamp *uint64` field follows the established pattern for optional fields in this generated file. Since this is auto-generated code (line 1), ensure the source definition in `eth/catalyst/api_types.go` is correct.

Also applies to: 38-48
miner/miner.go (1)
170-177: All BuildBlock call sites have been properly updated to use uint64 timestamps. The signature change is consistently implemented across the codebase:

- Test cases (eth/catalyst/l2_api_test.go): converting `time.Now().Unix()` to `uint64` at call sites
- API layer (eth/catalyst/l2_api.go): declaring the timestamp as `uint64` before passing it to BuildBlock
- Implementation (miner/miner.go): receiving and using `uint64` directly

This aligns with blockchain conventions, and the change is complete with no stray `time.Time` calls remaining.

core/types/state_account.go (1)
34-35: The `rlp:"-"` tags are correctly applied and backward compatible. The exclusion of `PoseidonCodeHash` and `CodeSize` from RLP serialization is appropriate since:

- `PoseidonCodeHash` is zkTrie-specific and unnecessary in MPT mode
- `CodeSize` can be derived from the code

Go's RLP library automatically handles deserialization safely: old state data lacking these fields will initialize them to zero, and any hypothetical old data containing them will be safely ignored. No additional migration logic is required.
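The effect of a `-` struct tag can be demonstrated with a stdlib-runnable analog: go-ethereum's RLP encoder is not in the standard library, but `encoding/json`'s `-` tag drops fields from the serialized form in the same way. The struct below is a simplified stand-in, not the real `StateAccount`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// account loosely mirrors StateAccount; the json:"-" tags stand in for
// rlp:"-" to show how tagged fields never reach the encoded output.
type account struct {
	Nonce            uint64 `json:"nonce"`
	CodeSize         uint64 `json:"-"` // derivable from code, never persisted
	PoseidonCodeHash []byte `json:"-"` // zkTrie-specific, computed on demand
}

// encoded returns the serialized form of an account.
func encoded(a account) string {
	raw, err := json.Marshal(a)
	if err != nil {
		panic(err)
	}
	return string(raw)
}

func main() {
	a := account{Nonce: 5, CodeSize: 1024, PoseidonCodeHash: []byte{1}}
	fmt.Println(encoded(a)) // {"nonce":5} — excluded fields never hit disk
}
```

On decode, the excluded fields are simply left at their zero values, matching the backward-compatibility behavior described above.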
core/state/snapshot/account.go (1)
36-37: LGTM - Correct RLP exclusion for zkTrie-specific fields. The `rlp:"-"` tags properly exclude `PoseidonCodeHash` and `CodeSize` from serialization. The inline comments clearly document the rationale. The surrounding functions (`SlimAccount`, `FullAccount`) correctly handle these fields for in-memory operations while respecting the serialization boundary.

core/genesis.go (1)
389-392: LGTM - Genesis state root override with proper disk mapping. The logic correctly:
- Checks for Morph configuration with a custom GenesisStateRoot
- Overrides the block header root with the configured value
- Persists the mapping from header root (external format) to disk root (local format)
This enables cross-format genesis handling where the block uses one trie format's root while the disk stores another format.
core/rawdb/accessors_state.go (1)
97-122: LGTM - Well-designed disk state root mapping APIs. The three new functions follow established patterns in this file:

- `WriteDiskStateRoot`: uses `log.Crit` on failure (consistent with `WriteCode`, `WriteTrieNode`)
- `ReadDiskStateRoot`: returns an error for missing mappings (documented as normal for same-format blocks)
- `DeleteDiskStateRoot`: uses `log.Crit` on failure (consistent with `DeleteCode`, `DeleteTrieNode`)

The documentation clearly explains the cross-format mapping purpose for MPT migration.
core/mpt_fork_test.go (4)
32-87: LGTM - Comprehensive fork transition tests. The table-driven tests cover the essential boundary conditions for `IsMPTFork`:
- Before fork time → false
- At exact fork time → true
- After fork time → true
- No fork configured (nil) → false
Good use of descriptive test names and logging for debugging.
91-180: LGTM - Good coverage of same-format block processing. The test correctly verifies that same-format block processing (pure MPT or pure zkTrie) does not create disk state root mappings. The comment at lines 114-116 appropriately notes that cross-format scenarios are tested separately.
187-293: LGTM - Thorough cross-format sync test. This test validates the complete workflow:

- Generate zkTrie chain with transactions
- MPT node uses zkTrie genesis root via `GenesisStateRoot` config
- Sync zkTrie blocks to MPT chain
- Verify mappings exist and roots differ
- Verify state accessibility via zkTrie header roots
The assertions at lines 276-279 correctly verify that cross-format mapping produces different roots.
301-415: LGTM - Symmetric reverse-direction test. This test complements `TestMPTNodeSyncsZkTrieBlocks` by testing the reverse scenario: a zkTrie node syncing MPT blocks. The test correctly:
- Creates shared zkTrie genesis
- Generates MPT blocks from an MPT node
- Has zkTrie node sync these MPT blocks
- Verifies MPT header root → zkTrie disk root mappings
cmd/utils/flags.go (1)
185-188: LGTM - Flag definition is clear and well-documented. The flag name `morph-mpt` and usage description clearly communicate the purpose: switching from the default zkTrie to MPT for state storage.

core/state/mpt_migration_test.go (5)
31-117: LGTM - Good structure with appropriate test skipping. The test structure is well-organized with a table-driven approach. The zkTrie tests are correctly skipped with a clear explanation (line 53: "zkTrie requires special initialization not available in unit tests"). This is pragmatic - testing MPT-to-MPT validates the infrastructure.
120-168: LGTM - Comprehensive mapping CRUD test. Good test coverage for the basic write/read flow. The simulated zkTrie root approach (line 149) is a reasonable way to test the mapping mechanism without requiring actual zkTrie initialization.
171-239: LGTM - Good mapping resolution test with clear documentation. The test clearly documents expected behavior at lines 224-226, acknowledging that opening the simulated root directly won't work (as expected), while demonstrating that the mapping mechanism itself functions correctly.
242-290: LGTM - Important edge case coverage. Testing with `common.Hash{}` and `types.EmptyRootHash` validates that the mapping mechanism handles boundary cases correctly. The `TestMissingMapping` function properly verifies error handling for non-existent mappings.
293-385: LGTM - Thorough deletion and isolation tests. `TestMappingDeletion` and `TestMultipleMappings` provide good coverage:
- Deletion correctly removes only the targeted mapping
- Other mappings remain intact after partial deletion
- Independent mappings don't interfere with each other
```go
if diskRoot, _ := rawdb.ReadDiskStateRoot(api.eth.ChainDb(), block.Root()); diskRoot != (common.Hash{}) {
	return DiskAndHeaderRoot{
		DiskRoot:   diskRoot,
		HeaderRoot: block.Root(),
	}, nil
}
```
Surface database errors instead of silently ignoring them.
Line 681 discards the error from `ReadDiskStateRoot`, making database errors indistinguishable from "no mapping exists". This could hide real database issues from API consumers.
🔎 Proposed fix to check and handle the error
```diff
- if diskRoot, _ := rawdb.ReadDiskStateRoot(api.eth.ChainDb(), block.Root()); diskRoot != (common.Hash{}) {
+ diskRoot, err := rawdb.ReadDiskStateRoot(api.eth.ChainDb(), block.Root())
+ if err != nil && err != errors.New("not found") {
+     return DiskAndHeaderRoot{}, fmt.Errorf("failed to read disk state root: %w", err)
+ }
+ if diskRoot != (common.Hash{}) {
      return DiskAndHeaderRoot{
          DiskRoot:   diskRoot,
          HeaderRoot: block.Root(),
      }, nil
  }
```

Note: You'll need to verify the exact error returned by `ReadDiskStateRoot` when no mapping exists, as it might return nil or a specific error type. Adjust the error check accordingly.
🤖 Prompt for AI Agents
In eth/api.go around lines 681-686, the call to rawdb.ReadDiskStateRoot
currently discards its error, conflating genuine DB failures with "no mapping"
results; change the code to capture the returned error, and if err != nil return
that error immediately (so callers see DB issues), otherwise proceed to check
diskRoot against the zero Hash and return DiskAndHeaderRoot when appropriate; if
the underlying API uses a specific "not found" error value instead of a nil
error for missing mappings, explicitly treat that case as a non-fatal condition
and only suppress it, while propagating any other errors.
```diff
  blockTrace.Bytecodes = append(blockTrace.Bytecodes, &types.BytecodeTrace{
-     CodeSize:         0,
-     KeccakCodeHash:   codehash.EmptyKeccakCodeHash,
-     PoseidonCodeHash: codehash.EmptyPoseidonCodeHash,
      Code: hexutil.Bytes{},
  })
  for _, codeInfo := range env.Codes {
      blockTrace.Bytecodes = append(blockTrace.Bytecodes, &types.BytecodeTrace{
-         CodeSize:         codeInfo.CodeSize,
-         KeccakCodeHash:   codeInfo.KeccakCodeHash,
-         PoseidonCodeHash: codeInfo.PoseidonCodeHash,
          Code: codeInfo.Code,
      })
  }
```
Data loss: BytecodeTrace only populates Code field.
The codeInfo contains CodeSize, KeccakCodeHash, PoseidonCodeHash, and Code, but only Code is transferred to BytecodeTrace. The BytecodeTrace struct still defines all these fields (see core/types/l2trace.go:25-30), so consumers may expect them to be populated.
🔎 Proposed fix to include all fields
```diff
  blockTrace.Bytecodes = append(blockTrace.Bytecodes, &types.BytecodeTrace{
-     Code: hexutil.Bytes{},
+     CodeSize:         0,
+     KeccakCodeHash:   common.Hash{},
+     PoseidonCodeHash: common.Hash{},
+     Code:             hexutil.Bytes{},
  })
  for _, codeInfo := range env.Codes {
      blockTrace.Bytecodes = append(blockTrace.Bytecodes, &types.BytecodeTrace{
-         Code: codeInfo.Code,
+         CodeSize:         codeInfo.CodeSize,
+         KeccakCodeHash:   codeInfo.KeccakCodeHash,
+         PoseidonCodeHash: codeInfo.PoseidonCodeHash,
+         Code:             codeInfo.Code,
      })
  }
```

If this is intentional (e.g., to reduce payload size), consider removing the unused fields from the `BytecodeTrace` struct to avoid confusion.
```diff
  c.db.dirtiesSize -= common.StorageSize(common.HashLength + int(node.size))
  if node.children != nil {
-     c.db.dirtiesSize -= common.StorageSize(cachedNodeChildrenSize + len(node.children)*(common.HashLength+2))
+     c.db.childrenSize -= common.StorageSize(cachedNodeChildrenSize + len(node.children)*(common.HashLength+2))
```
🛠️ Refactor suggestion (🟠 Major)
Essential size accounting correction for external children.
The change ensures children metadata size is subtracted from childrenSize rather than dirtiesSize, aligning with how this size is tracked when nodes are added (see lines 473, 480 where children are added to childrenSize). This maintains consistent accounting across the node lifecycle.
🤖 Prompt for AI Agents
In trie/database.go around line 788, the code subtracts external children
metadata size from the wrong counter; change the subtraction to
c.db.childrenSize instead of c.db.dirtiesSize so the metadata removed uses the
same childrenSize counter used when nodes are added (use cachedNodeChildrenSize
+ len(node.children)*(common.HashLength+2) as the amount to subtract). Ensure
the updated line updates c.db.childrenSize with that computed value and
preserves any existing locking or atomic semantics around size updates.
# Conflicts:
#	eth/catalyst/api_types.go
#	eth/catalyst/gen_l2blockparams.go
#	eth/catalyst/l2_api.go
Force-pushed from 2f6cd3b to 5277c38
* feat: change zktrie to mpt
* fix: add diskroot mapping for consensus api new L2 block
* fix: fixed alttxfee stroage trace in mpt mode
* refactor: remove useless block trace struct fields
* refactor: retain executionResults in blocktrace but remove structlog
* fix: fixed ci code scanning alerts
* feat: add eth_config with morph extension
* feat: add mptForkTime override flag
* fix: fixed getblockTrace stateroot mismatch
* Update AssembleL2Block (#259)
  * update AssembleL2BlockParams
  Co-authored-by: corey <corey.zhang@bitget.com>
* feat: Add migration-checker (#271)
  * update AssembleL2BlockParams
  * update ec clent
  * add migration-checker
  * add gen preimages
  * clean changes
  * optimize
  * update readme
  * clean readme
  * clean trie changes
  Co-authored-by: corey <corey.zhang@bitget.com>
* limit AssembleL2Block (#285)
  * limit AssembleL2Block
  * fmt
  Co-authored-by: corey <corey.zhang@bitget.com>
* fix(zktrie): add max recursion depth limit to countLeaves (#286)
  Add maxZkTrieDepth (256) constant and depth check in the countLeaves function to prevent potential stack overflow, node crashes, and DoS attacks. The theoretical maximum depth of zkTrie is 256, so this limit ensures safe recursion while covering all valid use cases.
  Co-authored-by: corey <corey.zhang@bitget.com>
* Add Progress bar (#280)
  * add progress bar
  * clean default value
  * fix
  Co-authored-by: corey <corey.zhang@bitget.com>

Co-authored-by: corey <coreyx1992@gmail.com>
Co-authored-by: corey <corey.zhang@bitget.com>
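The depth-limited traversal from the countLeaves commit (#286) can be sketched as below; the node shape, names, and error value are illustrative assumptions, not the real zktrie API:

```go
package main

import (
	"errors"
	"fmt"
)

// maxTrieDepth mirrors the maxZkTrieDepth = 256 bound from the commit
// above: a binary zkTrie keyed by 256-bit hashes can never legitimately
// be deeper than this.
const maxTrieDepth = 256

// node is a simplified binary trie node for illustration.
type node struct {
	left, right *node
	isLeaf      bool
}

var errDepthExceeded = errors.New("max trie depth exceeded")

// countLeaves recurses with an explicit depth check so a malformed or
// malicious trie cannot drive unbounded recursion and crash the node.
func countLeaves(n *node, depth int) (uint64, error) {
	if depth > maxTrieDepth {
		return 0, errDepthExceeded
	}
	if n == nil {
		return 0, nil
	}
	if n.isLeaf {
		return 1, nil
	}
	left, err := countLeaves(n.left, depth+1)
	if err != nil {
		return 0, err
	}
	right, err := countLeaves(n.right, depth+1)
	if err != nil {
		return 0, err
	}
	return left + right, nil
}

func main() {
	root := &node{
		left:  &node{isLeaf: true},
		right: &node{left: &node{isLeaf: true}, right: &node{isLeaf: true}},
	}
	n, err := countLeaves(root, 0)
	fmt.Println(n, err) // 3 <nil>
}
```

Returning an error instead of panicking lets the migration-checker report the corrupt trie and continue with the remaining work.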
1. Purpose or design rationale of this PR
...
2. PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
3. Deployment tag versioning
Has the version in `params/version.go` been updated?

4. Breaking change label
Does this PR have the `breaking-change` label?

Summary by CodeRabbit