Conversation
Previously the reporting interval for the InfluxDB metrics exporter was hard-coded to 10s in SetupMetrics, which did not align with upstream go-ethereum, where the value is configurable via both a CLI flag and the metrics Config struct.
- Add InfluxDBInterval to metrics.Config and default to 10s.
- Add --metrics.influxdb.interval DurationFlag.
- Propagate the value through applyMetricConfig and SetupMetrics.
- Cover the default and custom values with a config_test unit test.
Made-with: Cursor
eth_getTransactionByHash and related APIs previously returned only the block hash and number for mined transactions, forcing clients to issue an extra eth_getBlockByNumber call just to obtain the block time. This change mirrors upstream PR #33709 by adding a blockTimestamp field that is populated alongside blockHash/blockNumber when the transaction is mined, and left as null for pending transactions. The new field is placed adjacent to BlockNumber to preserve a natural JSON ordering.
- Add BlockTimestamp *hexutil.Uint64 to RPCTransaction.
- Populate from blockTime inside NewRPCTransaction's mined branch.
- Cover mined and pending shapes in api_blocktimestamp_test.
Made-with: Cursor
…eam #33163)
Upstream go-ethereum exposes the same block-range cap under --rpc.rangelimit, while morph has historically used --rpc.getlogs.maxrange. Introducing the upstream-aligned name as an alias lets operators migrate launch scripts without churn and gives the existing flag a well-known counterpart.
- Add RPCRangeLimitFlag (int64) sharing the same ethconfig.MaxBlockRange.
- When both are supplied, --rpc.rangelimit wins.
- Normalize 0 / negative values from --rpc.rangelimit to -1 (unlimited) so the upstream "0 = unlimited" convention interoperates cleanly with morph's -1 sentinel.
- Cover all resolution paths in a dedicated unit test.
Made-with: Cursor
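The resolution rules in the bullets above can be condensed into one function. This is a hedged sketch with an invented name and signature, modeling only what the commit message states: the new flag wins when both are set, and its 0/negative values normalize to morph's -1 "unlimited" sentinel.

```go
package main

import "fmt"

// resolveMaxBlockRange sketches the described precedence between the
// upstream-aligned --rpc.rangelimit and the legacy --rpc.getlogs.maxrange.
// Names and the exact default-branch behavior are illustrative.
func resolveMaxBlockRange(rangeLimitSet bool, rangeLimit int64, legacySet bool, legacy int64) int64 {
	switch {
	case rangeLimitSet:
		if rangeLimit <= 0 {
			return -1 // upstream "0 = unlimited" maps onto morph's -1 sentinel
		}
		return rangeLimit
	case legacySet:
		return legacy // legacy flag value kept verbatim
	default:
		return -1
	}
}

func main() {
	fmt.Println(resolveMaxBlockRange(true, 0, true, 5000))  // -1: rangelimit wins, 0 means unlimited
	fmt.Println(resolveMaxBlockRange(false, 0, true, 5000)) // 5000: only the legacy flag is set
}
```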
…33704)
The single-file TxPool used to unconditionally stamp pool.beats[addr]
from the pending-replacement branch of add() and from promoteTx(). That
introduced two problems:
* Accounts with only pending transactions accumulated a heartbeat
entry that their queue (the thing Lifetime actually tracks) never
held, skewing eviction bookkeeping.
* enqueueTx only recorded the heartbeat on the very first enqueue, so
long-lived but still-active queues could be evicted prematurely
because their heartbeat was pinned to a stale timestamp.
This change ports upstream PR #33704's intent to the single-file pool:
* Introduce pool.bumpBeats(addr) that refreshes the heartbeat iff the
account already has an entry. The queue-empty cleanup paths
(removeTx / promoteExecutables) already delete entries as needed,
so "beats[addr] exists" is a reliable proxy for "queue non-empty".
* Use bumpBeats at the add() replacement path (L957) and at
promoteTx() (L1065).
* Replace the "only if missing" write in enqueueTx (L1015) with an
unconditional refresh so every enqueue from the same sender extends
the Lifetime window.
Covered by new unit tests for the helper, for the enqueueTx refresh
path, and for the queue-less pending replacement path.
Made-with: Cursor
eth_getStorageValues returns multiple storage slot values for multiple
accounts against a single consistent state snapshot, avoiding the N+1
eth_getStorageAt round-trips that DApps and indexers issue today.
The method:
* accepts a map[Address][]Hash of slot requests plus a block
number/hash selector,
* rejects empty requests and requests exceeding 1024 total slots
(maxGetStorageSlots) before touching the state snapshot,
* preserves per-address slot ordering in the response.
Behavior mirrors upstream go-ethereum PR #32591 verbatim. The method is
automatically exposed as eth_getStorageValues via the existing reflect-
based RPC registration for PublicBlockChainAPI.
A new dedicated test file covers the happy path, missing slots, empty
requests (both forms), the limit boundary, and the over-limit rejection.
Made-with: Cursor
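The pre-state validation described above (reject empty requests and anything over 1024 total slots before touching the snapshot) can be sketched with simplified types. `string` stands in for common.Address and common.Hash, and the function name is illustrative; the per-address empty-list check follows the behavior described in this thread.

```go
package main

import (
	"errors"
	"fmt"
)

// maxGetStorageSlots caps the total number of slots per request, as above.
const maxGetStorageSlots = 1024

// validateStorageRequests performs the cheap checks before any state access:
// non-empty request, non-empty slot list per address, and the global cap.
func validateStorageRequests(requests map[string][]string) error {
	if len(requests) == 0 {
		return errors.New("empty request")
	}
	total := 0
	for addr, keys := range requests {
		if len(keys) == 0 {
			return fmt.Errorf("no slots requested for %s", addr)
		}
		total += len(keys)
	}
	if total > maxGetStorageSlots {
		return fmt.Errorf("requested %d slots, limit is %d", total, maxGetStorageSlots)
	}
	return nil
}

func main() {
	ok := map[string][]string{"0xaa": {"0x0", "0x1"}}
	fmt.Println(validateStorageRequests(ok))
	fmt.Println(validateStorageRequests(nil))
}
```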
…8892)
Ports ethereum/go-ethereum#28892 to morph's v1.10.26-era trie package. The implementation matches upstream's public API surface while adapting to the differences in morph's trie opening / reader layer and to the ZKTrie-aware chain config.

New building blocks
-------------------
trie/levelstats.go
  Per-depth classifier that counts short / full / value nodes and raw byte size per level, backed by atomic counters for concurrent writers. AddLeaf / LeafDepths preserve the witness-stats API so the same type can be reused by stateless verification if morph ever pulls in that path.
trie/inspect.go
  - Inspect(triedb, root, config): two-pass inspector. Pass 1 walks the account trie with bounded parallelism (16 concurrent walkers via semaphore.Weighted + errgroup), streams one fixed-size record per non-empty storage trie into a dump file on disk, and emits the account trie as a sentinel record. A ticker logs progress every 8s for long scans.
  - InspectContract(triedb, db, stateRoot, address): inspects a single contract's storage footprint by running the storage trie walk in parallel with an iterator over the snapshot storage prefix, so operators see both views side-by-side.
  - Summarize(dumpPath, config): pass 2 aggregates per-level totals, builds three top-N rankings (by max depth, total node count, value-node count), and emits a human-readable table via internal/tablewriter or a JSON blob when --output is supplied.
  - ErrUnsupportedTrieFormat signals ZKTrie-encoded state (morph pre-JadeFork) so the CLI can refuse before producing bogus statistics.
internal/tablewriter
  Self-contained text/tabwriter-backed stub mirroring the public API used by upstream's tablewriter call sites (SetHeader / AppendBulk / SetFooter / Render). Avoids pulling in a third-party dependency.
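The per-depth classifier with atomic counters for concurrent walkers can be sketched as below. This is a hedged illustration of the technique, not the actual trie/levelstats.go API: the struct layout, field names, and method are invented here, and only the value-node and size counters are shown.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// maxDepth bounds the number of tracked trie levels (illustrative value).
const maxDepth = 64

// levelStats keeps one atomic counter per depth so concurrent trie walkers
// can record nodes without taking a lock.
type levelStats struct {
	short, full, value, size [maxDepth]atomic.Uint64
}

// addValueNode records one value node of rawSize bytes at the given depth.
func (s *levelStats) addValueNode(depth int, rawSize uint64) {
	s.value[depth].Add(1)
	s.size[depth].Add(rawSize)
}

func main() {
	var s levelStats
	var wg sync.WaitGroup
	for w := 0; w < 16; w++ { // bounded parallelism, as in the inspector
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 100; i++ {
				s.addValueNode(3, 32)
			}
		}()
	}
	wg.Wait()
	fmt.Println(s.value[3].Load(), s.size[3].Load())
}
```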
CLI surface (cmd/geth db inspect-trie)
--------------------------------------
  latest | <blocknum> | snapshot   select state root
  --exclude-storage               skip per-account storage walks
  --top N                         top-N ranking size (default 10)
  --output <path>                 write JSON report instead of stdout tables
  --dump-path <path>              custom pass-1 dump location (default <datadir>/trie-dump.bin)
  --summarize <path>              re-summarize an existing dump, skipping the walk
  --contract 0xADDR               run InspectContract against the resolved state root

Tests
-----
TestInspectRoundTripsDump — Inspect output and a standalone Summarize over the same dump produce byte-identical JSON.
TestInspectNoStorageSkipsWalk — NoStorage short-circuits the storage walk but still emits the account sentinel.
TestInspectEmptyRootEmitsAccountSentinel — one 352-byte record on an empty trie.
TestInspectRejectsMissingRoot — clean error on unknown root.
TestSummarizeRejectsTruncatedDump — dump file size validation.
TestSummarizeRejectsMissingAccountSentinel — sentinel record is mandatory.
TestInspectContract — end-to-end run on a synthetic contract with snapshot data.
TestInspectContractRejectsMissingAccount / TestInspectContractRejectsStorageless — refuse absent or empty storage tries.
Level-stats and tablewriter unit tests round out the bucket.

Intentionally out of scope
--------------------------
- core/stateless/stats.go does not currently use LevelStats on morph; migrating it is a separate change that can follow once stateless verification tracks upstream more closely.

Made-with: Cursor
Follow-up to f9eba28: the JSON-RPC method is registered automatically via reflection on PublicBlockChainAPI, but the JS console relies on web3.js extensions for any non-built-in eth_* methods. Without this registration, `eth.getStorageValues(...)` fails inside `geth attach` even though raw HTTP/IPC requests succeed. The new web3._extend.Method block matches upstream PR #32591 verbatim (params: 2, second arg formatted via inputBlockNumberFormatter), so console scripts written against an upstream go-ethereum node now work unchanged against morph. Made-with: Cursor
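The shape of the console registration described above can be sketched in plain JavaScript. The real block lives as a Go string in internal/web3ext/web3ext.go and is evaluated inside the attach console's web3 environment; here the formatter reference is a string placeholder because web3._extend.formatters is not available outside that environment.

```javascript
// Illustrative method descriptor matching the fields named above:
// params: 2, with the second argument run through inputBlockNumberFormatter
// so "latest", block numbers, and block hashes are all accepted.
const getStorageValuesMethod = {
  name: 'getStorageValues',
  call: 'eth_getStorageValues',
  params: 2,
  // In the real registration this is web3._extend.formatters.inputBlockNumberFormatter.
  inputFormatter: [null, 'inputBlockNumberFormatter'],
};

console.log(getStorageValuesMethod.call, getStorageValuesMethod.params);
```

In the console this makes `eth.getStorageValues({ '0x…': ['0x0'] }, 'latest')` resolve to the eth_getStorageValues RPC call, the same as any built-in eth_* method.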
📝 Walkthrough
Adds a two‑pass trie inspection tool and CLI subcommand, eth_getStorageValues RPC and web3 binding, configurable InfluxDB export interval and RPC range alias flags, transaction pool heartbeat refinements, table rendering and trie level‑stats utilities, metrics config/validation, RPC transaction block timestamp enrichment, and many tests.
Sequence Diagram(s)
sequenceDiagram
participant User as User/CLI
participant DBCmd as db inspect-trie
participant Trie as Trie Inspector
participant DB as Database
participant Dump as Dump File
participant Output as Console/JSON
User->>DBCmd: run `db inspect-trie` with flags
alt Summarize mode
DBCmd->>Trie: Summarize(dumpPath, config)
Trie->>Dump: read binary dump
Trie->>Trie: parse records, aggregate stats
Trie->>Output: emit JSON/table summary
else Inspect mode
DBCmd->>DB: resolve state root (block/snapshot)
DBCmd->>Trie: Inspect(root, config)
Trie->>DB: walk account trie and storage tries (concurrent)
Trie->>Dump: write fixed-size dump records
Trie->>Trie: finalize and Summarize(dump)
Trie->>Output: emit summary and progress logs
else Contract mode
DBCmd->>Trie: InspectContract(root, address)
Trie->>DB: scan snapshot prefix
Trie->>Trie: walk contract storage trie
Trie->>Output: print per-depth storage report
end
sequenceDiagram
participant Client as RPC Client
participant Server as RPC Server
participant API as PublicBlockChainAPI
participant State as State DB
participant Response as Response
Client->>Server: eth_getStorageValues(requests, blockNrOrHash)
Server->>API: GetStorageValues(ctx, requests, block)
API->>API: validate requests (non-empty, <=1024 slots)
alt invalid
API->>Response: return invalidParamsError / clientLimitExceededError
Response->>Client: error
else valid
API->>State: StateAndHeaderByNumberOrHash(block)
State->>API: state snapshot
loop per address & key
API->>State: GetState(address, key)
State->>API: value
API->>API: accumulate results preserving input order
end
API->>Response: map[address][]hexutil.Bytes
Response->>Client: success
end
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (1 warning, 1 inconclusive)
⚠️ Warning: there were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it is a critical failure.
🔧 golangci-lint (2.11.4): Error: can't load config: unsupported version of the configuration: "". See https://golangci-lint.run/docs/product/migration-guide for migration instructions.
…spectTarget
This commit introduces a new test file, dbcmd_test.go, which includes unit tests for the snapshot inspection functionality. The tests validate the behavior of resolveInspectTarget when handling different metadata scenarios, ensuring correct retrieval of snapshot roots and associated block metadata.
Additionally, the resolveInspectTarget function has been updated to return a boolean indicating whether the block metadata is known, improving error handling and clarity in the inspection process. The inspectTrie function has also been modified to utilize this new metadata information.
Made-with: Cursor
…uxDB interval checks
This commit introduces a new function, validateInspectTrieTopN, to validate the top-N parameter for the inspectTrie command, ensuring it must be greater than zero. The usage message for the corresponding CLI flag has been updated to reflect this requirement. A new test, TestValidateInspectTrieTopN, has been added to cover various scenarios for the top-N validation.
Furthermore, the InfluxDB reporting interval is now validated in the SetupMetrics function, ensuring that only positive intervals are accepted. A corresponding test, TestValidateInfluxDBInterval, has been added to verify this behavior. These changes improve error handling and user feedback for both the inspectTrie command and InfluxDB metrics configuration.
Made-with: Cursor
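The top-N validator described above is a one-liner in spirit. This sketch uses an invented error message and a stand-in for the real cmd/geth function; only the stated rule (greater than zero) is modeled.

```go
package main

import "fmt"

// validateInspectTrieTopN rejects non-positive top-N values, as the updated
// flag usage message requires. Illustrative stand-in, not the real function.
func validateInspectTrieTopN(n int) error {
	if n <= 0 {
		return fmt.Errorf("invalid --top value %d: must be greater than zero", n)
	}
	return nil
}

func main() {
	fmt.Println(validateInspectTrieTopN(10)) // the documented default passes
	fmt.Println(validateInspectTrieTopN(0))  // zero is rejected
}
```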
Actionable comments posted: 6
🧹 Nitpick comments (3)
internal/tablewriter/tablewriter.go (1)
105-114: Separator width is fixed at 9 runes per column regardless of actual cell width.
`strings.Repeat("─────────\t", cols)` emits a fixed 9-rune bar per column, but `text/tabwriter` sizes columns to the widest cell in each column. When a header or data cell exceeds 9 characters, the separator will visibly under-run the column (and be padded with spaces by tabwriter), producing a ragged rule. Also, when a footer is set but there are no data rows, two separator lines are emitted back-to-back. Neither is a correctness issue, but if the output is user-facing in the CLI reporters this will look off on realistic data. Consider computing per-column widths (max of header/rows/footer lengths) and emitting the rule sized to each column, or dropping the rule and letting tabwriter's column spacing do the separation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/tablewriter/tablewriter.go` around lines 105 - 114, The separator uses a fixed 9-rune bar so rules misalign with tabwriter columns and you also emit two back-to-back separators when there are no rows; update the rendering where tabwriter.NewWriter is used (variables tw, t.header, t.rows, t.footer) to compute per-column max widths from header, rows and footer, then build the separator by repeating the box-drawing rune to each column's width and joining with "\t" (instead of strings.Repeat("─────────\t", cols)); also only emit the footer separator once (i.e., emit header rule, data rows, and a single footer rule if footer exists, skipping the extra rule when rows are empty).

cmd/utils/flags.go (1)
1578-1600: Optional: legacy flag's zero-value handling is asymmetric with the new flag's documented semantics.
`RPCRangeLimitFlag`'s usage advertises "-1 or 0 = unlimited", and the new-flag branch coerces `v <= 0` to `-1`. The legacy branch, however, assigns `ctx.GlobalInt64(MaxBlockRangeFlag.Name)` verbatim, so `--rpc.getlogs.maxrange 0` yields `cfg.MaxBlockRange = 0` rather than `-1`. That's likely the historical behavior you want to preserve (the doc comment says so), but an operator migrating from `--rpc.rangelimit 0` to the legacy name may be surprised. If that's a concern, apply the same `<= 0 → -1` coercion in the legacy branch too; otherwise consider a brief note in `MaxBlockRangeFlag.Usage` clarifying that `0` is not treated as unlimited there.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/utils/flags.go` around lines 1578 - 1600, The legacy branch in setMaxBlockRange treats MaxBlockRangeFlag verbatim, causing --rpc.getlogs.maxrange 0 to set cfg.MaxBlockRange=0 while RPCRangeLimitFlag coerces v<=0 to -1; update the MaxBlockRangeFlag handling in setMaxBlockRange to mirror the RPCRangeLimitFlag branch by reading ctx.GlobalInt64(MaxBlockRangeFlag.Name) into a local v and setting cfg.MaxBlockRange = -1 if v <= 0, otherwise set it to v (or alternatively add a Usage note on MaxBlockRangeFlag if you prefer to keep asymmetric behavior).

core/tx_pool_heartbeat_test.go (1)
160-173: Minor: reaching into `pendingList.txs.items` is fragile.

Accessing `pendingList.txs.items[0]` directly couples the test to internal map layout. Given the `Len() == 1` guard above, it works today, but a no-op rewrite via `Flatten()` keeps the test resilient to internal restructuring of `txSortedMap`.

Optional refactor:

```diff
- if got, want := pendingList.txs.items[0].Hash(), replacement.Hash(); got != want {
-     t.Fatalf("pending head mismatch: got %x want %x", got, want)
- }
+ if got, want := pendingList.Flatten()[0].Hash(), replacement.Hash(); got != want {
+     t.Fatalf("pending head mismatch: got %x want %x", got, want)
+ }
```
Verify each finding against the current code and only fix it if needed. In `@core/tx_pool_heartbeat_test.go` around lines 160 - 173, The test reaches into the internal map via pendingList.txs.items[0], which is fragile; instead call the provided Flatten() on the txSortedMap to get a stable slice. Replace direct access to pendingList.txs.items with calling pendingList.txs.Flatten() (or pendingList.Flatten() if available) and use the first element's Hash() for the comparison so the test no longer depends on internal map layout.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cmd/geth/dbcmd.go`:
- Around line 489-496: The zkTrie gate currently only rejects legacy trie
handling when blockMetaKnown is true, allowing a garbage dump if metadata is
unknown; update the check in the db inspection path (see chainConfig,
chainConfig.Morph.UseZktrie, blockTime and blockMetaKnown in cmd/geth/dbcmd.go)
so that a Morph chain with UseZktrie=true is refused whenever we cannot prove we
are past JadeFork — i.e. change the condition to treat unknown metadata as
unsafe (fail when !blockMetaKnown OR !chainConfig.IsJadeFork(blockTime)) and
return trie.ErrUnsupportedTrieFormat; also ensure the behavior applies to the
snapshot inspection codepath that calls resolveSnapshotInspectTarget.
In `@cmd/utils/flags.go`:
- Around line 2059-2069: The InfluxDB interval in SetupMetrics is only using
metrics.DefaultConfig.InfluxDBInterval and the CLI flag
(ctx.GlobalIsSet/ctx.GlobalDuration) and thus ignores a value set via the parsed
config (applyMetricConfig); update the resolution to prefer the TOML/parsed
config value when present by checking cfg.Metrics.InfluxDBInterval (or
equivalent parsed config field) before falling back to the default and before
applying the CLI override, keeping the CLI flag
(MetricsInfluxDBIntervalFlag.Name) highest precedence; ensure the variable
interval (currently seeded from metrics.DefaultConfig.InfluxDBInterval) is
initialized from cfg.Metrics.InfluxDBInterval if non-zero, then overridden by
ctx.GlobalDuration, and still validated via metrics.ValidateInfluxDBInterval
with the same Fatalf error path when enableExport or enableExportV2 are set.
In `@core/tx_pool.go`:
- Around line 1017-1020: The unconditional heartbeat refresh in enqueueTx
(pool.beats[from] = time.Now()) causes internal reshuffles to extend queue
lifetime; change enqueueTx so the beat is updated only when the enqueue
represents external activity by gating it behind the addAll flag (i.e., if
addAll { pool.beats[from] = time.Now() }). Update call sites/paths that pass
addAll=false (removeTx, demoteUnexecutables) to ensure they keep passing
addAll=false so internal demotions and reorg-driven moves do not reset the
heartbeat.
In `@internal/ethapi/api_blocktimestamp_test.go`:
- Around line 56-61: Update the comment in TestRPCTransactionBlockTimestamp to
reflect the actual asserted JSON shape for pending transactions: change the
sentence that currently reads "must not expose the field" to state that pending
transactions should include the "blockTimestamp" field with a null value (e.g.,
"pending transactions should expose the field as null so
eth_getTransactionByHash returns a stable shape and clients can distinguish
pending from confirmed txs"), keeping the rest of the comment intact.
In `@internal/ethapi/api.go`:
- Around line 1017-1044: The loop that only sums totalSlots (using requests and
maxGetStorageSlots) must also reject per-address empty slot lists to prevent
unbounded address iteration and map allocation; update the validation in the
initial loop over requests (before calling StateAndHeaderByNumberOrHash and
before allocating result) to return an invalidParamsError when any keys slice is
empty (e.g., len(keys) == 0) while still enforcing the overall
maxGetStorageSlots cap, so that result and subsequent iteration (state.GetState
/ result map population) only happen for addresses with at least one slot.
In `@trie/inspect.go`:
- Around line 446-465: writeDumpRecord currently truncates per-level counters to
uint32 silently; add explicit overflow guards before packing to prevent silent
data loss: in writeDumpRecord (the block that iterates s.level[i] and calls
binary.LittleEndian.PutUint32) check each loaded counter
(s.level[i].short.Load(), s.level[i].full.Load(), s.level[i].value.Load())
against math.MaxUint32 and if any exceeds it call in.setError(fmt.Errorf("dump
record overflow: level %d %s counter %d exceeds uint32 max", i,
"<short|full|value>", value)) and return without writing; keep the existing
write path unchanged so the on-disk wire format remains compatible.
---
Nitpick comments:
In `@cmd/utils/flags.go`:
- Around line 1578-1600: The legacy branch in setMaxBlockRange treats
MaxBlockRangeFlag verbatim, causing --rpc.getlogs.maxrange 0 to set
cfg.MaxBlockRange=0 while RPCRangeLimitFlag coerces v<=0 to -1; update the
MaxBlockRangeFlag handling in setMaxBlockRange to mirror the RPCRangeLimitFlag
branch by reading ctx.GlobalInt64(MaxBlockRangeFlag.Name) into a local v and
setting cfg.MaxBlockRange = -1 if v <= 0, otherwise set it to v (or
alternatively add a Usage note on MaxBlockRangeFlag if you prefer to keep
asymmetric behavior).
In `@core/tx_pool_heartbeat_test.go`:
- Around line 160-173: The test reaches into the internal map via
pendingList.txs.items[0], which is fragile; instead call the provided Flatten()
on the txSortedMap to get a stable slice. Replace direct access to
pendingList.txs.items with calling pendingList.txs.Flatten() (or
pendingList.Flatten() if available) and use the first element's Hash() for the
comparison so the test no longer depends on internal map layout.
In `@internal/tablewriter/tablewriter.go`:
- Around line 105-114: The separator uses a fixed 9-rune bar so rules misalign
with tabwriter columns and you also emit two back-to-back separators when there
are no rows; update the rendering where tabwriter.NewWriter is used (variables
tw, t.header, t.rows, t.footer) to compute per-column max widths from header,
rows and footer, then build the separator by repeating the box-drawing rune to
each column's width and joining with "\t" (instead of
strings.Repeat("─────────\t", cols)); also only emit the footer separator once
(i.e., emit header rule, data rows, and a single footer rule if footer exists,
skipping the extra rule when rows are empty).
ℹ️ Review info: configuration used: defaults · review profile: CHILL · plan: Pro · run ID: 9d2a1865-ec32-432b-bba2-180d016c0d8c
📒 Files selected for processing (22)
- cmd/geth/chaincmd.go
- cmd/geth/config.go
- cmd/geth/dbcmd.go
- cmd/geth/dbcmd_test.go
- cmd/geth/main.go
- cmd/utils/flags.go
- cmd/utils/rangelimit_test.go
- core/tx_pool.go
- core/tx_pool_heartbeat_test.go
- internal/ethapi/api.go
- internal/ethapi/api_blocktimestamp_test.go
- internal/ethapi/api_getstoragevalues_test.go
- internal/ethapi/errors.go
- internal/tablewriter/tablewriter.go
- internal/tablewriter/tablewriter_test.go
- internal/web3ext/web3ext.go
- metrics/config.go
- metrics/config_test.go
- trie/inspect.go
- trie/inspect_test.go
- trie/levelstats.go
- trie/levelstats_test.go
…ds, etc.)
cmd/geth/dbcmd.go: Treat unknown block metadata as unsafe for zkTrie chains; the passes-JadeFork gate now requires blockMetaKnown to be true (fail on !blockMetaKnown || !IsJadeFork) so the snapshot path with no metadata does not silently dump garbage.
cmd/utils/flags.go: Normalize MaxBlockRangeFlag value 0 to -1, matching RPCRangeLimitFlag behavior so --rpc.getlogs.maxrange 0 means "no limit" consistently.
core/tx_pool.go: Gate heartbeat refresh in enqueueTx behind the addAll flag; internal reshuffles (demoteUnexecutables, removeTx) pass addAll=false and must not extend queue lifetime.
core/tx_pool_heartbeat_test.go: Replace fragile pendingList.txs.items[0] with Flatten()[0]; update the invariant comment to reflect the conditional heartbeat refresh.
internal/ethapi/api.go: Reject per-address empty slot lists in GetStorageValues to prevent unbounded address iteration and map allocation.
internal/ethapi/api_blocktimestamp_test.go: Fix comment: pending txs expose blockTimestamp as null (not absent).
internal/tablewriter/tablewriter.go: Compute per-column max widths for separators; emit only one separator before the footer (no double separator when rows are empty).
trie/inspect.go: Add explicit math.MaxUint32 overflow guards in writeDumpRecord before packing level counters to uint32 to prevent silent data loss.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 3
♻️ Duplicate comments (1)
cmd/utils/flags.go (1)
2065-2074: ⚠️ Potential issue | 🟡 Minor: still ignoring config-only InfluxDB interval values.

`interval` is resolved from the default plus CLI only, so a TOML/parsed config value is not honored unless the CLI flag is set.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/utils/flags.go` around lines 2065 - 2074, The code always falls back to metrics.DefaultConfig.InfluxDBInterval and only respects a CLI flag, ignoring any TOML/parsed config value; change the resolution so interval is initialized from the parsed metrics configuration (the loaded config struct for metrics) rather than DefaultConfig, then override it only if ctx.GlobalIsSet(MetricsInfluxDBIntervalFlag.Name) and ctx.GlobalDuration(...) is provided; keep the existing validation call to metrics.ValidateInfluxDBInterval(interval) when enableExport or enableExportV2 is true and keep the Fatalf on error. Reference the symbols to update: replace use of metrics.DefaultConfig.InfluxDBInterval with the parsed metrics config field (the loaded config struct's InfluxDBInterval), retain MetricsInfluxDBIntervalFlag.Name, ctx.GlobalIsSet, ctx.GlobalDuration, metrics.ValidateInfluxDBInterval, enableExport, enableExportV2, and Fatalf.
🧹 Nitpick comments (3)
internal/ethapi/api.go (1)
1016-1049: Consider honoring `ctx` cancellation during slot reads.

Validation and the `StateAndHeaderByNumberOrHash` call are ctx-aware, but the main read loop (`state.GetState` up to 1024 times) doesn't observe `ctx.Done()`. On a cold state / large snapshot the worst case can pin a request thread well past client cancellation. A cheap `select` every N slots (or per-address) avoids wasted work without materially hurting the happy path.

♻️ Optional: cooperative cancellation in the read loop

```diff
 result := make(map[common.Address][]hexutil.Bytes, len(requests))
 for addr, keys := range requests {
+    select {
+    case <-ctx.Done():
+        return nil, ctx.Err()
+    default:
+    }
     vals := make([]hexutil.Bytes, len(keys))
     for i, key := range keys {
         v := state.GetState(addr, key)
         vals[i] = v[:]
     }
```

Also a minor nit: the pre-state validation loop iterates a map, so when a request violates both rules (an empty slot list and total-slot overflow), the returned error is non-deterministic because Go randomizes map iteration. Not a correctness bug, but worth being aware of if you add client-facing error contracts/tests.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/ethapi/api.go` around lines 1016 - 1049, In PublicBlockChainAPI.GetStorageValues: make the state.GetState loop respect ctx cancellation by checking ctx.Done() periodically (e.g., per-address or every N slots) and returning ctx.Err() if canceled so long-running snapshot reads stop promptly; also address the nondeterministic pre-validation by making the empty-slot check deterministic (for example validate empty slot lists before accumulating totalSlots or iterate addresses in a stable order) so the returned error for compound violations is predictable.

trie/inspect.go (2)
183-221: Prefer `defer close(done)` to avoid leaking the progress reporter on a panic / early return.

`close(done)` is placed near the bottom of `Inspect` (and similarly at lines 317/320 of `InspectContract`). If `writeDumpRecord`, `closeDump`, `g.Wait`, or any future addition between the `go func()` launch and `close(done)` panics, the ticker goroutine runs forever. A `defer close(done)` immediately after the goroutine is spawned eliminates that class of leak without changing the happy-path behavior.

🛠️ Proposed fix (Inspect):

```diff
 done := make(chan struct{})
+defer close(done)
 go func() {
     ticker := time.NewTicker(8 * time.Second)
     ...
 }()
@@
 in.writeDumpRecord(common.Hash{}, in.accountStat)
 if err := in.closeDump(); err != nil {
     in.setError(err)
 }
-
-close(done)
```

Apply the equivalent change in `InspectContract` (drop both `close(done)` calls and `defer close(done)` after launching the ticker goroutine).
Verify each finding against the current code and only fix it if needed. In `@trie/inspect.go` around lines 183 - 221, The progress-reporter goroutine started in Inspect (and InspectContract) can leak if code between starting the goroutine and the current close(done) panics or returns early; to fix, immediately defer close(done) right after spawning the goroutine (i.e., insert defer close(done) immediately after the go func() { ... }() in Inspect and InspectContract) and remove the later explicit close(done) calls so you don't close the channel twice; reference the done channel and the ticker goroutine launched in Inspect and InspectContract when making the change.
335-361: Minor: consolidate the two passes over `storageStat.level`.

Lines 336-340 compute `trieTotal`/`trieSize` and lines 347-361 build the per-depth table — both call `storageStat.level[i].load()` for every level. Since the load is `atomic.Uint64` reads this isn't expensive, but merging the loops keeps the per-depth rendering and the totals from drifting if the shape ever changes (e.g., a new counter added). Purely a readability nit — feel free to skip.
Verify each finding against the current code and only fix it if needed. In `@trie/inspect.go` around lines 335 - 361, Consolidate the two loops that call storageStat.level[i].load() into one: while iterating levels, accumulate trieTotal and trieSize (the current trieTotal/trieSize variables) and also build the table rows for the per-depth output (the code that uses table.AppendBulk and common.StorageSize). Update the existing loop that constructs the table to also add the loaded short/full/value/size into trieTotal/trieSize so we only call storageStat.level[i].load() once per level and keep totals and per-depth rows consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cmd/utils/flags.go`:
- Around line 1578-1605: The default branch in setMaxBlockRange currently
unconditionally sets cfg.MaxBlockRange = -1, overwriting any value loaded from
config/defaults; change the logic so only when neither RPCRangeLimitFlag nor
MaxBlockRangeFlag are set you leave cfg.MaxBlockRange unchanged (do not assign
-1), and keep the existing behavior for the cases when RPCRangeLimitFlag or
MaxBlockRangeFlag are set (set -1 when <=0 else the provided value). Update the
switch/default in setMaxBlockRange to preserve cfg.MaxBlockRange when no flag is
present.
In `@internal/tablewriter/tablewriter.go`:
- Around line 79-81: AppendBulk currently appends caller-owned [][]string rows
directly to Table.rows, causing later mutations by the caller to affect the
table; change AppendBulk (on type Table) to snapshot the input by deep-copying
the outer slice and each inner []string (e.g. allocate new [][]string and for
each row allocate a new []string and copy its contents) before appending to
t.rows so behavior matches SetHeader/SetFooter and is safe until Render is
called.
In `@trie/inspect.go`:
- Around line 476-485: The progress counter is incremented even when
in.dumpBuf.Write fails; change the control flow in the block that writes the
dump so that after calling in.dumpBuf.Write(buf[:]) you check err and call
in.setError(...) and return/skip further work for that record (or use an else)
so storageRecordsWritten.Add(1) only runs when err == nil and owner !=
(common.Hash{}); update the section surrounding in.dumpMu.Lock()/Unlock(), the
Write call, the in.setError(...) call, and the storageRecordsWritten.Add(1) to
ensure the counter only increments on successful writes.
---
Duplicate comments:
In `@cmd/utils/flags.go`:
- Around line 2065-2074: The code always falls back to
metrics.DefaultConfig.InfluxDBInterval and only respects a CLI flag, ignoring
any TOML/parsed config value; change the resolution so interval is initialized
from the parsed metrics configuration (the loaded config struct for metrics)
rather than DefaultConfig, then override it only if
ctx.GlobalIsSet(MetricsInfluxDBIntervalFlag.Name) and ctx.GlobalDuration(...) is
provided; keep the existing validation call to
metrics.ValidateInfluxDBInterval(interval) when enableExport or enableExportV2
is true and keep the Fatalf on error. Reference the symbols to update: replace
use of metrics.DefaultConfig.InfluxDBInterval with the parsed metrics config
field (the loaded config struct's InfluxDBInterval), retain
MetricsInfluxDBIntervalFlag.Name, ctx.GlobalIsSet, ctx.GlobalDuration,
metrics.ValidateInfluxDBInterval, enableExport, enableExportV2, and Fatalf.
---
Nitpick comments:
In `@internal/ethapi/api.go`:
- Around line 1016-1049: In PublicBlockChainAPI.GetStorageValues: make the
state.GetState loop respect ctx cancellation by checking ctx.Done() periodically
(e.g., per-address or every N slots) and returning ctx.Err() if canceled so
long-running snapshot reads stop promptly; also address the nondeterministic
pre-validation by making the empty-slot check deterministic (for example
validate empty slot lists before accumulating totalSlots or iterate addresses in
a stable order) so the returned error for compound violations is predictable.
In `@trie/inspect.go`:
- Around line 183-221: The progress-reporter goroutine started in Inspect (and
InspectContract) can leak if code between starting the goroutine and the current
close(done) panics or returns early; to fix, immediately defer close(done) right
after spawning the goroutine (i.e., insert defer close(done) immediately after
the go func() { ... }() in Inspect and InspectContract) and remove the later
explicit close(done) calls so you don't close the channel twice; reference the
done channel and the ticker goroutine launched in Inspect and InspectContract
when making the change.
- Around line 335-361: Consolidate the two loops that call
storageStat.level[i].load() into one: while iterating levels, accumulate
trieTotal and trieSize (the current trieTotal/trieSize variables) and also build
the table rows for the per-depth output (the code that uses table.AppendBulk and
common.StorageSize). Update the existing loop that constructs the table to also
add the loaded short/full/value/size into trieTotal/trieSize so we only call
storageStat.level[i].load() once per level and keep totals and per-depth rows
consistent.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e9f68ace-dbd0-442e-b86d-5e8370cfafad
📒 Files selected for processing (8)
- cmd/geth/dbcmd.go
- cmd/utils/flags.go
- core/tx_pool.go
- core/tx_pool_heartbeat_test.go
- internal/ethapi/api.go
- internal/ethapi/api_blocktimestamp_test.go
- internal/tablewriter/tablewriter.go
- trie/inspect.go
✅ Files skipped from review due to trivial changes (2)
- internal/ethapi/api_blocktimestamp_test.go
- core/tx_pool_heartbeat_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
- core/tx_pool.go
- cmd/geth/dbcmd.go
…x cancel
- setMaxBlockRange: drop default=-1 so TOML-loaded MaxBlockRange is preserved
- AppendBulk: deep-copy input rows to match SetHeader/SetFooter semantics
- writeDumpRecord: only increment storageRecordsWritten on successful write
- SetupMetrics: accept metrics.Config; init InfluxDBInterval from parsed config rather than DefaultConfig; add makeMetricsConfig helper in cmd/geth/config.go
- GetStorageValues: two-pass validation (empty-slot before slot-count) + ctx cancellation check per address in the state-reading loop
- Inspect/InspectContract: defer close(done) to prevent progress-goroutine leak
- InspectContract: merge two storageStat level loops into one to halve load() calls
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 3
🧹 Nitpick comments (3)
internal/tablewriter/tablewriter.go (1)
132-146: Separator won't span inter-column padding (cosmetic).
tabwriter is configured with padding=2, so rendered columns are separated by at least two spaces. The separator row only emits ─ runes up to each column's content width and a tab between them, so the separator will show visible gaps between columns rather than a continuous rule. If a continuous rule is desired, consider widening each sepParts[i] by the padding (e.g. append two ─ to all but the last column) or rendering the separator outside tabwriter after computing final column widths. Non-blocking — purely visual.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/tablewriter/tablewriter.go` around lines 132 - 146, The separator row (sepParts/sep) is too short because tabwriter.NewWriter is configured with padding=2 so tabs become at least two spaces; extend each sepParts[i] by the padding width (append two "─" for all but the last column) before joining so the visual rule spans the inter-column padding, e.g. use the same padding value used in tabwriter.NewWriter and adjust sepParts for t.header/t.rows/t.footer output; alternatively render the separator outside the tabwriter after computing final column widths if you prefer to avoid tweaking sepParts.
cmd/geth/main.go (1)
197-200: Consider marking --rpc.getlogs.maxrange and --rpc.rangelimit as mutually exclusive (or deprecating one). Both flags are now registered side-by-side. setMaxBlockRange already defines a precedence order (rangelimit wins over maxrange), but nothing surfaces a warning/error to operators who accidentally set both with conflicting values — the maxrange value will silently be ignored. Consider either:
- calling utils.CheckExclusive(ctx, utils.RPCRangeLimitFlag, utils.MaxBlockRangeFlag) in SetEthConfig, or
- logging a warning in setMaxBlockRange when both are set, or
- marking MaxBlockRangeFlag as deprecated in its Usage string (since the PR comment calls --rpc.rangelimit the upstream-aligned name).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/geth/main.go` around lines 197 - 200, Set the two RPC range flags as mutually exclusive so operators aren't silently ignored: in SetEthConfig call utils.CheckExclusive(ctx, utils.RPCRangeLimitFlag, utils.MaxBlockRangeFlag) to surface an error when both are provided (alternative: add a warning in setMaxBlockRange if both flags are set, or mark utils.MaxBlockRangeFlag as deprecated in its Usage string to prefer --rpc.rangelimit). Ensure references to SetEthConfig, setMaxBlockRange, utils.RPCRangeLimitFlag, and utils.MaxBlockRangeFlag are used so the change is applied in the correct places.
trie/inspect.go (1)
542-548: Use hexToKeybytes to extract the storage-trie owner hash. hexToCompact(path) produces a 33-byte compact-encoded buffer with a flag byte prefix for a 64-nibble path; hexToKeybytes(path) directly produces the 32-byte key. Using hexToKeybytes makes intent explicit and is the idiomatic choice for this conversion.
♻️ Proposed refactor
- owner := common.BytesToHash(hexToCompact(path))
+ owner := common.BytesToHash(hexToKeybytes(path))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@trie/inspect.go` around lines 542 - 548, The code builds the storage-trie owner hash using common.BytesToHash(hexToCompact(path)), but hexToCompact returns a 33-byte compact buffer; replace that call with common.BytesToHash(hexToKeybytes(path)) so the owner hash is created from the 32-byte key bytes. Update the owner assignment in the block that calls New(account.Root, in.triedb) to use hexToKeybytes(path) instead of hexToCompact(path) to make the intent explicit and idiomatic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cmd/utils/flags.go`:
- Around line 1578-1604: The switch in setMaxBlockRange does not set
cfg.MaxBlockRange when neither RPCRangeLimitFlag nor MaxBlockRangeFlag is
present, leaving a zero value; modify setMaxBlockRange (function name) to
include a default branch (or an explicit fallback after the switch) that sets
cfg.MaxBlockRange = -1 when both RPCRangeLimitFlag and MaxBlockRangeFlag are
unset, so fresh ethconfig.Config instances match the expected unlimited
semantics used by the enforcement in eth/filters/filter.go and the test in
rangelimit_test.go.
In `@trie/inspect.go`:
- Around line 183-219: The progress ticker goroutine's done channel is closed
only by a defer, so it remains active during the costly Summarize call and
continues to log; change the lifecycle to close done explicitly when pass 1
finishes: remove the defer close(done) and instead call close(done) right after
the final dump/err checks (after in.writeDumpRecord(common.Hash{},
in.accountStat), in.closeDump(), and in.getError() handling) and before calling
Summarize(config.DumpPath, config) so the ticker stops before summary; ensure
you only close the done channel once (remove the deferred close to avoid
double-close).
- Around line 486-569: inspect() can call stat.addSize/stat.add with height >=
trieStatLevels (16), causing out-of-range panics; add a depth guard to stop
recursion past the last valid level. At the top of inspect (or before any
stat.addSize/stat.add calls) check if height >= trieStatLevels and if so either
return immediately (prevent further recursion and stats) or clamp further calls
so no stat methods are invoked with height >= trieStatLevels; reference the
inspect method and the trieStatLevels/LevelStats usage to locate the places to
short-circuit before calling stat.addSize or stat.add.
---
Nitpick comments:
In `@cmd/geth/main.go`:
- Around line 197-200: Set the two RPC range flags as mutually exclusive so
operators aren't silently ignored: in SetEthConfig call
utils.CheckExclusive(ctx, utils.RPCRangeLimitFlag, utils.MaxBlockRangeFlag) to
surface an error when both are provided (alternative: add a warning in
setMaxBlockRange if both flags are set, or mark utils.MaxBlockRangeFlag as
deprecated in its Usage string to prefer --rpc.rangelimit). Ensure references to
SetEthConfig, setMaxBlockRange, utils.RPCRangeLimitFlag, and
utils.MaxBlockRangeFlag are used so the change is applied in the correct places.
In `@internal/tablewriter/tablewriter.go`:
- Around line 132-146: The separator row (sepParts/sep) is too short because
tabwriter.NewWriter is configured with padding=2 so tabs become at least two
spaces; extend each sepParts[i] by the padding width (append two "─" for all but
the last column) before joining so the visual rule spans the inter-column
padding, e.g. use the same padding value used in tabwriter.NewWriter and adjust
sepParts for t.header/t.rows/t.footer output; alternatively render the separator
outside the tabwriter after computing final column widths if you prefer to avoid
tweaking sepParts.
In `@trie/inspect.go`:
- Around line 542-548: The code builds the storage-trie owner hash using
common.BytesToHash(hexToCompact(path)), but hexToCompact returns a 33-byte
compact buffer; replace that call with common.BytesToHash(hexToKeybytes(path))
so the owner hash is created from the 32-byte key bytes. Update the owner
assignment in the block that calls New(account.Root, in.triedb) to use
hexToKeybytes(path) instead of hexToCompact(path) to make the intent explicit
and idiomatic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a294fbef-04bd-4d97-ba91-f7f04b1cedbc
📒 Files selected for processing (7)
- cmd/geth/chaincmd.go
- cmd/geth/config.go
- cmd/geth/main.go
- cmd/utils/flags.go
- internal/ethapi/api.go
- internal/tablewriter/tablewriter.go
- trie/inspect.go
🚧 Files skipped from review as they are similar to previous changes (2)
- cmd/geth/chaincmd.go
- cmd/geth/config.go
func setMaxBlockRange(ctx *cli.Context, cfg *ethconfig.Config) {
-	if ctx.GlobalIsSet(MaxBlockRangeFlag.Name) {
-		cfg.MaxBlockRange = ctx.GlobalInt64(MaxBlockRangeFlag.Name)
-	} else {
-		cfg.MaxBlockRange = -1
-	}
+	// Resolution order: --rpc.rangelimit takes precedence when both are set,
+	// so operators can opt into the upstream-aligned name without having to
+	// remove the legacy --rpc.getlogs.maxrange flag from their launch
+	// scripts first. When neither flag is set, cfg.MaxBlockRange is left
+	// unchanged so TOML/defaults are respected.
+	var (
+		rangelimitSet = ctx.GlobalIsSet(RPCRangeLimitFlag.Name)
+		maxrangeSet   = ctx.GlobalIsSet(MaxBlockRangeFlag.Name)
+	)
+	switch {
+	case rangelimitSet:
+		v := ctx.GlobalInt64(RPCRangeLimitFlag.Name)
+		if v <= 0 {
+			cfg.MaxBlockRange = -1
+		} else {
+			cfg.MaxBlockRange = v
+		}
+	case maxrangeSet:
+		v := ctx.GlobalInt64(MaxBlockRangeFlag.Name)
+		if v <= 0 {
+			cfg.MaxBlockRange = -1
+		} else {
+			cfg.MaxBlockRange = v
+		}
+	}
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Inspect how MaxBlockRange is consumed (esp. 0 vs -1 semantics) in filters.
rg -nP -C4 '\bMaxBlockRange\b'
ast-grep --pattern 'func NewFilterAPI($$$) $_ { $$$ }'
Repository: morph-l2/go-ethereum
Length of output: 15183
🏁 Script executed:
# Check how maxBlockRange is used in FilterAPI
sed -n '62,150p' eth/filters/api.go
# Look for comparisons with maxBlockRange (0 vs -1 semantics)
rg -n 'maxBlockRange' eth/filters/api.go -A 3 -B 1
# Check the test case for "neither flag set" behavior
cat -n cmd/utils/rangelimit_test.go
Repository: morph-l2/go-ethereum
Length of output: 7371
🏁 Script executed:
# Find NewRangeFilter implementation
rg -n 'func.*NewRangeFilter' eth/filters/
# See the implementation of NewRangeFilter to understand 0 vs -1 semantics
rg -n -A 30 'func.*NewRangeFilter' eth/filters/filter.go
# Check if there's additional context around setMaxBlockRange function
sed -n '1570,1610p' cmd/utils/flags.go
Repository: morph-l2/go-ethereum
Length of output: 2684
🏁 Script executed:
# Check the complete NewRangeFilter to see how it handles maxBlockRange when not provided
sed -n '48,90p' eth/filters/filter.go
# Look for where NewRangeFilter checks or uses maxBlockRange value
rg -n 'maxBlockRange' eth/filters/filter.go -B 2 -A 5
Repository: morph-l2/go-ethereum
Length of output: 2956
🏁 Script executed:
# Check if ethconfig.Config has any special initialization or if Defaults are applied elsewhere
rg -n 'Defaults' eth/ethconfig/config.go -B 2 -A 2
# Look for how NewFilterAPI is called and whether MaxBlockRange is processed before the call
rg -n 'NewFilterAPI' cmd/utils/flags.go -B 5 -A 5
# Check if there's code that ensures MaxBlockRange is set before calling setMaxBlockRange
sed -n '2000,2030p' cmd/utils/flags.go
Repository: morph-l2/go-ethereum
Length of output: 2387
🏁 Script executed:
# Check how ethcfg is initialized before being passed to setMaxBlockRange in the actual code flow
rg -n 'setMaxBlockRange' cmd/utils/flags.go -B 10 -A 2
# See if there's initialization of cfg before setMaxBlockRange is called
rg -n 'func Configure' cmd/utils/flags.go -A 50 | head -80
# Check if cfg is cloned from Defaults somewhere
rg -n 'ethconfig.Defaults' cmd/ -B 2 -A 2
Repository: morph-l2/go-ethereum
Length of output: 14720
🏁 Script executed:
# Verify how cfg is created before setMaxBlockRange is called
sed -n '1630,1675p' cmd/utils/flags.go
Repository: morph-l2/go-ethereum
Length of output: 1907
🏁 Script executed:
# Find where SetEthConfig is called and how cfg is initialized
rg -n 'SetEthConfig' cmd/ -B 5 -A 2 | head -50
# Look for the callers to see cfg initialization
rg -n 'func.*SetEthConfig' cmd/ -B 2
# Search in cmd/geth for cfg initialization
sed -n '110,150p' cmd/geth/config.go
Repository: morph-l2/go-ethereum
Length of output: 2727
🏁 Script executed:
# Run the specific test to see if it passes
cd cmd/utils && go test -v -run TestSetMaxBlockRangeResolution rangelimit_test.go flags.go 2>&1 | head -100
Repository: morph-l2/go-ethereum
Length of output: 3904
🏁 Script executed:
# Let's check what the test actually does more carefully
cat -n cmd/utils/rangelimit_test.go | head -90
# Also check if there's any initialization in setMaxBlockRange that I missed
cat -n cmd/utils/flags.go | sed -n '1575,1610p'
# Let's also check the actual config initialization more carefully
cat -n eth/ethconfig/config.go | sed -n '1,120p'
Repository: morph-l2/go-ethereum
Length of output: 10114
Add a default case to ensure MaxBlockRange defaults to -1 when neither flag is set.
The test at cmd/utils/rangelimit_test.go:60 expects MaxBlockRange = -1 when no CLI flags are provided (the "default_unlimited" case with nil args), but the current implementation has no default case in the switch statement. This means when cfg is a fresh ethconfig.Config{} (which zero-initializes MaxBlockRange to 0), the value remains 0 instead of being set to -1.
In eth/filters/filter.go:176, the enforcement logic checks if f.maxBlockRange != -1, so a value of 0 is treated as a hard limit of zero blocks—not unlimited. This differs from the upstream semantics where 0 could mean unlimited. The test will fail, and fresh deployments without TOML overrides and no CLI flags will silently restrict range queries to 0 blocks rather than leaving them unlimited.
Add a default case that sets cfg.MaxBlockRange = -1 when neither flag is set, or ensure the config is initialized from ethconfig.Defaults which already has -1 as the value.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cmd/utils/flags.go` around lines 1578 - 1604, The switch in setMaxBlockRange
does not set cfg.MaxBlockRange when neither RPCRangeLimitFlag nor
MaxBlockRangeFlag is present, leaving a zero value; modify setMaxBlockRange
(function name) to include a default branch (or an explicit fallback after the
switch) that sets cfg.MaxBlockRange = -1 when both RPCRangeLimitFlag and
MaxBlockRangeFlag are unset, so fresh ethconfig.Config instances match the
expected unlimited semantics used by the enforcement in eth/filters/filter.go
and the test in rangelimit_test.go.
start := time.Now()
done := make(chan struct{})
go func() {
	ticker := time.NewTicker(8 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			accountNodes := in.accountStat.TotalNodes()
			storageRecords := in.storageRecordsWritten.Load()
			log.Info("Inspecting trie",
				"accountNodes", accountNodes,
				"storageRecords", storageRecords,
				"elapsed", common.PrettyDuration(time.Since(start)))
		case <-done:
			return
		}
	}
}()
defer close(done)

in.recordRootSize(root, in.accountStat)
in.inspect(trie, trie.root, 0, []byte{}, in.accountStat)

// inspect is synchronous: it waits for every spawned goroutine in
// its subtree before returning, so no extra wait is needed here.

// Sentinel record: zero owner hash marks the account trie stats.
in.writeDumpRecord(common.Hash{}, in.accountStat)
if err := in.closeDump(); err != nil {
	in.setError(err)
}

if err := in.getError(); err != nil {
	return err
}
return Summarize(config.DumpPath, config)
Progress reporter keeps ticking during Summarize.
defer close(done) fires when Inspect returns, but Summarize(config.DumpPath, config) is invoked on line 219 before the defer runs. On large states Summarize takes a while and the ticker keeps emitting "Inspecting trie" log lines with frozen accountNodes / storageRecords counters, which is misleading — the dump phase is already finished. Close done explicitly once pass 1 is complete so the summary phase is not mislabelled.
🛠️ Proposed fix
start := time.Now()
done := make(chan struct{})
+ var doneOnce sync.Once
+ stopProgress := func() { doneOnce.Do(func() { close(done) }) }
go func() {
ticker := time.NewTicker(8 * time.Second)
defer ticker.Stop()
for {
select {
case <-ticker.C:
accountNodes := in.accountStat.TotalNodes()
storageRecords := in.storageRecordsWritten.Load()
log.Info("Inspecting trie",
"accountNodes", accountNodes,
"storageRecords", storageRecords,
"elapsed", common.PrettyDuration(time.Since(start)))
case <-done:
return
}
}
}()
- defer close(done)
+ defer stopProgress()
in.recordRootSize(root, in.accountStat)
in.inspect(trie, trie.root, 0, []byte{}, in.accountStat)
...
in.writeDumpRecord(common.Hash{}, in.accountStat)
if err := in.closeDump(); err != nil {
in.setError(err)
}
+ stopProgress()
if err := in.getError(); err != nil {
return err
}
return Summarize(config.DumpPath, config)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@trie/inspect.go` around lines 183 - 219, The progress ticker goroutine's done
channel is closed only by a defer, so it remains active during the costly
Summarize call and continues to log; change the lifecycle to close done
explicitly when pass 1 finishes: remove the defer close(done) and instead call
close(done) right after the final dump/err checks (after
in.writeDumpRecord(common.Hash{}, in.accountStat), in.closeDump(), and
in.getError() handling) and before calling Summarize(config.DumpPath, config) so
the ticker stops before summary; ensure you only close the done channel once
(remove the deferred close to avoid double-close).
func (in *inspector) inspect(trie *Trie, n node, height uint32, path []byte, stat *LevelStats) {
	if n == nil {
		return
	}

	// Goroutines spawned at this level are tracked locally so we block
	// on them before recording n itself. This preserves the invariant
	// "stat is complete on return", which downstream writeDumpRecord
	// calls rely on.
	var wg sync.WaitGroup

	switch n := (n).(type) {
	case *shortNode:
		nextPath := slices.Concat(path, n.Key)
		in.inspect(trie, n.Val, height+1, nextPath, stat)
	case *fullNode:
		for idx, child := range n.Children {
			if child == nil {
				continue
			}
			childPath := slices.Concat(path, []byte{byte(idx)})
			childNode := child
			if in.trySpawn(&wg, func() {
				in.inspect(trie, childNode, height+1, childPath, stat)
			}) {
				continue
			}
			in.inspect(trie, childNode, height+1, childPath, stat)
		}
	case hashNode:
		// Resolve the raw node via the shared database; count its
		// on-disk byte size against the current level before decoding
		// and recursing. This mirrors upstream's reader.Node() path.
		hash := common.BytesToHash(n)
		blob, err := in.triedb.Node(hash)
		if err != nil {
			log.Error("Failed to resolve HashNode", "err", err, "trie", trie.Hash(), "height", height+1, "path", path)
			return
		}
		stat.addSize(height, uint64(len(blob)))
		resolved := mustDecodeNode(n, blob)
		in.inspect(trie, resolved, height, path, stat)
		// Return early: hash nodes are a transparent indirection and
		// must not be counted twice at this depth.
		return
	case valueNode:
		if !hasTerm(path) {
			break
		}
		var account types.StateAccount
		if err := rlp.Decode(bytes.NewReader(n), &account); err != nil {
			break
		}
		if account.Root == (common.Hash{}) || account.Root == emptyRoot {
			break
		}
		if !in.config.NoStorage {
			owner := common.BytesToHash(hexToCompact(path))
			storage, err := New(account.Root, in.triedb)
			if err != nil {
				log.Error("Failed to open account storage trie", "node", n, "error", err, "height", height, "path", common.Bytes2Hex(path))
				break
			}
			storageStat := NewLevelStats()
			run := func() {
				in.recordRootSize(account.Root, storageStat)
				in.inspect(storage, storage.root, 0, []byte{}, storageStat)
				in.writeDumpRecord(owner, storageStat)
			}
			if in.trySpawn(&wg, run) {
				break
			}
			run()
		}
	default:
		panic(fmt.Sprintf("%T: invalid node: %v", n, n))
	}

	wg.Wait()

	// Record stats for the current node once the subtree (and any
	// spawned siblings) have been fully accounted for.
	stat.add(n, height)
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Determine trieStatLevels, how it is used to size LevelStats arrays,
# and whether stat.add / addSize bounds-check the depth argument.
rg -nP '\btrieStatLevels\b' --type=go -C2
ast-grep --pattern 'func ($_ *LevelStats) add($_, $_) { $$$ }'
ast-grep --pattern 'func ($_ *LevelStats) addSize($_, $_) { $$$ }'
ast-grep --pattern 'func ($_ *stat) add($_) { $$$ }'
Repository: morph-l2/go-ethereum
Length of output: 7812
🏁 Script executed:
#!/bin/bash
# Check if inspect() function has any depth/height validation or guards
rg -n "height" trie/inspect.go | head -40
# Search for any panic or guard that stops recursion
rg -n "panic\|guard\|clamp\|max.*height\|height.*check" trie/inspect.go
# Get the full add() and addSize() implementations to confirm no bounds check
cat -n trie/levelstats.go | sed -n '75,95p'
Repository: morph-l2/go-ethereum
Length of output: 1493
🏁 Script executed:
#!/bin/bash
# Check if there's any validation in inspector.inspect that prevents deep recursion
sed -n '486,569p' trie/inspect.go | grep -n "height\|panic\|guard\|>=\|>\|limit"
Repository: morph-l2/go-ethereum
Length of output: 759
🏁 Script executed:
#!/bin/bash
# Verify: does the reviewer claim height can exceed 15, and what evidence exists?
# Check comments about maximum depth in inspect.go
rg -n "depth\|height" trie/inspect.go -C2 | grep -A2 -B2 "max\|limit\|64\|16\|exceed"
Repository: morph-l2/go-ethereum
Length of output: 46
🏁 Script executed:
#!/bin/bash
# Check if there's evidence of actual maximum depth observed
rg -n "depth.*16\|max.*depth\|production trie\|observed" trie/ -A2 -B2
Repository: morph-l2/go-ethereum
Length of output: 46
🏁 Script executed:
#!/bin/bash
# Look for any initialization or guard that might validate height in inspect
rg -n "inspect.*height\|NewInspector\|inspector{" trie/inspect.go -A5 | head -50
Repository: morph-l2/go-ethereum
Length of output: 209
🏁 Script executed:
#!/bin/bash
# Check test files for depth-related tests
fd -e "*test.go" trie/ -x grep -l "depth\|height" {} \;
rg -n "depth.*16\|trieStatLevels" trie/*test.go
Repository: morph-l2/go-ethereum
Length of output: 389
🏁 Script executed:
#!/bin/bash
# Search for depth-related comments and limits in trie package
rg -n "depth.*16|max.*depth|production trie|observed" trie/ -A2 -B2
Repository: morph-l2/go-ethereum
Length of output: 2161
🏁 Script executed:
#!/bin/bash
# Look for NewInspector initialization and height handling
rg -n "NewInspector|inspector.*{" trie/inspect.go -A5 | head -50
Repository: morph-l2/go-ethereum
Length of output: 177
🏁 Script executed:
#!/bin/bash
# Find test files in trie directory
fd . trie/ -name "*test.go" | head -10
Repository: morph-l2/go-ethereum
Length of output: 294
🏁 Script executed:
#!/bin/bash
# Check if any bounds validation exists for height parameter
rg -n "height.*>=|height.*>|height.*<|height.*<=|depth.*guard|depth.*clamp" trie/inspect.go
Repository: morph-l2/go-ethereum
Length of output: 46
🏁 Script executed:
#!/bin/bash
# Search for test files and check levelstats_test to understand the expected behavior
cat -n trie/levelstats_test.go | sed -n '30,50p'
Repository: morph-l2/go-ethereum
Length of output: 666
🏁 Script executed:
#!/bin/bash
# Check how inspect is called and if there's any depth validation at call sites
rg -n "\.inspect\(" trie/inspect.go | head -20
Repository: morph-l2/go-ethereum
Length of output: 493
🏁 Script executed:
#!/bin/bash
# Look for any depth-related initialization or configuration in inspector struct
sed -n '100,200p' trie/inspect.go | head -80
Repository: morph-l2/go-ethereum
Length of output: 2527
🏁 Script executed:
#!/bin/bash
# Check if there's any documentation in inspect.go about depth limits
sed -n '1,150p' trie/inspect.go | grep -n "depth\|height\|level"
Repository: morph-l2/go-ethereum
Length of output: 298
Add a depth guard in inspect() or document the depth limit assumption.
The inspect() function calls stat.add() and stat.addSize() at lines 568 and 525 without validating that height < trieStatLevels (16). Both methods access s.level[depth] directly and panic on out-of-range access. While LevelStats documentation (trie/levelstats.go:37–39) states that "callers that operate on arbitrary trie data should pre-validate depth," inspect() performs no such validation. The code implicitly assumes that production tries never reach depth 16 (per trie/levelstats.go:24–26), but this assumption is not enforced. Consider either:
- Adding a bounds check that clamps or stops recursion past depth 15, or
- Adding an explicit comment in inspect() documenting that it requires height < 16 as a precondition.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@trie/inspect.go` around lines 486 - 569, inspect() can call
stat.addSize/stat.add with height >= trieStatLevels (16), causing out-of-range
panics; add a depth guard to stop recursion past the last valid level. At the
top of inspect (or before any stat.addSize/stat.add calls) check if height >=
trieStatLevels and if so either return immediately (prevent further recursion
and stats) or clamp further calls so no stat methods are invoked with height >=
trieStatLevels; reference the inspect method and the trieStatLevels/LevelStats
usage to locate the places to short-circuit before calling stat.addSize or
stat.add.
Summary by CodeRabbit
New Features
Bug Fixes
Tests