fix: harden Hello & ChainExchange protocol handling #6976
Conversation
Walkthrough

Introduce a per-protocol

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Peer as Peer
    participant Codec as CborRequestResponse (param C)
    participant Timed as timed_decode (uses C::MAX_* / C::DECODE_TIMEOUT)
    participant Reader as DagCborDecodingReader
    participant Decoder as CBOR Decoder
    Peer->>Codec: deliver raw bytes (request/response)
    Codec->>Timed: invoke timed_decode(bytes)
    Timed->>Reader: wrap stream with size/timeout limits
    Reader->>Decoder: stream bytes into CBOR decoder
    Decoder-->>Reader: decoded message
    Reader-->>Timed: return decoded message
    Timed-->>Codec: Ok(decoded) or Err(io::ErrorKind::TimedOut / Io)
    Codec-->>Peer: return decoded message or error
```
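The size guard in this flow can be sketched in isolation. The wrapper below is a hypothetical stand-in for the byte cap that `DagCborDecodingReader` enforces (the real type also drives the CBOR decoder, and the wall-clock limit would be layered on top with `tokio::time::timeout`); the name `SizeLimitedReader` and its exact semantics are illustrative, not the crate's actual API:

```rust
use std::io::{self, Read};

/// Hypothetical stand-in for a size-capped decoding reader: bytes pass
/// through until `remaining` is exhausted, and any byte beyond the cap
/// becomes an error instead of an unbounded allocation.
struct SizeLimitedReader<R> {
    inner: R,
    remaining: usize,
}

impl<R: Read> Read for SizeLimitedReader<R> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if self.remaining == 0 {
            // Cap reached: one more byte from the peer means the message is too big.
            let mut probe = [0u8; 1];
            if self.inner.read(&mut probe)? > 0 {
                return Err(io::Error::new(
                    io::ErrorKind::InvalidData,
                    "message exceeds size cap",
                ));
            }
            return Ok(0); // clean EOF exactly at the cap
        }
        let n = buf.len().min(self.remaining);
        let read = self.inner.read(&mut buf[..n])?;
        self.remaining -= read;
        Ok(read)
    }
}
```

With this shape, a 10-byte payload read through a 4-byte cap fails, while a 16-byte cap passes it through untouched.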
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Force-pushed: b36fd6b to 74dfa72
PR title changed: "Hello protocol handling" → "Hello & ChainExchange protocol handling"
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/libp2p/rpc/mod.rs`:
- Around line 161-163: The current shared MAX_BYTES_ALLOWED (2 MiB) passed into
timed_decode lets inbound Hello requests still consume large resources; make the
request cap protocol-specific by adding a second const generic (e.g.,
MAX_REQUEST_BYTES) to HelloCodec (and other protocol codec types) and use that
constant when decoding inbound requests instead of the global MAX_BYTES_ALLOWED;
update the call sites that construct HelloCodec and the invocation of
timed_decode for request paths to pass the new MAX_REQUEST_BYTES and keep the
existing const for response/read_response behavior unchanged.
- Around line 166-173: The hardcoded 30s in timed_decode (using
DagCborDecodingReader) is too short for large ChainExchange payloads; change
timed_decode to accept a timeout parameter or compute the timeout from max_bytes
(e.g., derive Duration from max_bytes and a bytes-per-second heuristic) and
update its callers so Hello uses a short fixed timeout while ChainExchange uses
the larger derived timeout or an explicitly larger value; adjust the
timed_decode signature and the invocations in Hello and ChainExchange
accordingly.
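The first comment's per-protocol cap could be sketched as a codec type carrying separate request and response limits as const generics. All names and cap values below are hypothetical (the real `HelloCodec` integrates with libp2p's request-response traits); this only shows the two const generics and where each would be consulted:

```rust
use std::io;

/// Hypothetical codec carrying separate inbound-request and response caps
/// as const generics, so Hello can use a tight request limit while
/// ChainExchange instantiates the same type with larger values.
struct CborCodec<const MAX_REQUEST_BYTES: usize, const MAX_RESPONSE_BYTES: usize>;

impl<const MAX_REQUEST_BYTES: usize, const MAX_RESPONSE_BYTES: usize>
    CborCodec<MAX_REQUEST_BYTES, MAX_RESPONSE_BYTES>
{
    /// Guard applied when decoding an inbound request.
    fn check_request_len(len: usize) -> io::Result<()> {
        if len > MAX_REQUEST_BYTES {
            return Err(io::Error::new(io::ErrorKind::InvalidData, "request too large"));
        }
        Ok(())
    }

    /// Guard applied when decoding a response we asked for.
    fn check_response_len(len: usize) -> io::Result<()> {
        if len > MAX_RESPONSE_BYTES {
            return Err(io::Error::new(io::ErrorKind::InvalidData, "response too large"));
        }
        Ok(())
    }
}

// Illustrative instantiations; the actual cap values are a design decision.
type HelloCodecSketch = CborCodec<1024, 32>;
type ChainExchangeCodecSketch = CborCodec<1024, { 2 * 1024 * 1024 }>;
```

The decode path for inbound requests would call `check_request_len` (or pass `MAX_REQUEST_BYTES` into `timed_decode`), leaving the existing response-side constant untouched.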
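For the second comment, deriving the timeout from `max_bytes` might look like the sketch below. The bytes-per-second floor and the 5-second lower bound are illustrative assumptions, not measured or agreed values:

```rust
use std::time::Duration;

/// Hypothetical floor on how slowly we are willing to receive a message;
/// the 1 MiB/s figure is an illustrative assumption, not a measured value.
const MIN_BYTES_PER_SEC: u64 = 1024 * 1024;

/// Hypothetical lower bound so tiny messages (e.g. Hello) still get a
/// sane window on a slow link.
const MIN_DECODE_TIMEOUT: Duration = Duration::from_secs(5);

/// Derive a decode timeout from the message size cap, as the review
/// suggests: bigger caps (ChainExchange) get proportionally more time.
fn decode_timeout(max_bytes: u64) -> Duration {
    let derived = Duration::from_secs(max_bytes.div_ceil(MIN_BYTES_PER_SEC));
    derived.max(MIN_DECODE_TIMEOUT)
}
```

`timed_decode` would then accept this `Duration` as a parameter instead of hardcoding 30 s, with Hello callers passing the short floor and ChainExchange the derived (or an explicitly larger) value.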
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 2f937b0a-83ff-4b92-8c60-ffefa12e5b28
📒 Files selected for processing (4)
- CHANGELOG.md
- src/libp2p/chain_exchange/mod.rs
- src/libp2p/hello/codec.rs
- src/libp2p/rpc/mod.rs
Codecov Report: ❌ Patch coverage is
Additional details and impacted files
... and 7 files with indirect coverage changes.
Force-pushed: 277882c to 734eafb
Force-pushed: 734eafb to 9ef2dc1
🧹 Nitpick comments (2)
src/libp2p/rpc/mod.rs (1)

178-183: Include config context in the timeout log.

Once both `Hello` and `ChainExchange` go through this helper, "timed out" alone will be hard to action in production. Logging at least `{max_bytes, timeout}` here would make it obvious which guard fired.

💡 Possible refinement

```diff
- tracing::debug!("{err}");
+ tracing::debug!(max_bytes, ?timeout, "{err}");
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/libp2p/rpc/mod.rs` around lines 178-183: The timeout branch in the match around `tokio::time::timeout` calling `DagCborDecodingReader::new` currently logs only the error string; update the `Err(_)` arm to include the timeout parameters (`timeout` and `max_bytes`) in the log message so callers (Hello/ChainExchange) can see which guard fired; specifically, modify the `tracing::debug` call in the `Err` branch to log a clear message like "DagCborDecodingReader timed out" and include the `timeout` and `max_bytes` values, and keep returning the `io::Error` as before.

src/libp2p/hello/codec.rs (1)
9-25: Please lock the 32-byte assumption down with a serialization test.

This cap is intentionally tight. A later change in `HelloResponse` encoding shape could still compile but start rejecting peers at runtime. A small test that serializes the largest plausible `HelloResponse` and asserts it stays within `HelloCodecConfig::MAX_RESPONSE_BYTES` would make that failure mode explicit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libp2p/hello/codec.rs` around lines 9 - 25, Add a unit test that serializes the largest plausible HelloResponse and asserts its CBOR byte length does not exceed HelloCodecConfig::MAX_RESPONSE_BYTES; create (for example) a test named test_hello_response_size that constructs the maximal HelloResponse (e.g., using u64::MAX or whatever the response fields’ max values are), serialize it using the same CBOR serializer the codec uses, measure serialized.len(), and assert serialized.len() <= HelloCodecConfig::MAX_RESPONSE_BYTES so future changes to HelloResponse encoding will fail the test if they exceed the 32-byte cap.
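The suggested test could look roughly like this. It assumes, purely for illustration, that `HelloResponse` encodes as a DAG-CBOR array of two unsigned integers (e.g. arrival/sent timestamps); the hand-rolled encoder below exists only to make the worst-case arithmetic concrete without pulling in the codec's real serializer, which the actual test should use instead:

```rust
/// Append the CBOR (major type 0) encoding of an unsigned integer.
fn push_cbor_u64(out: &mut Vec<u8>, v: u64) {
    match v {
        0..=23 => out.push(v as u8),
        24..=0xff => { out.push(0x18); out.push(v as u8); }
        0x100..=0xffff => { out.push(0x19); out.extend_from_slice(&(v as u16).to_be_bytes()); }
        0x1_0000..=0xffff_ffff => { out.push(0x1a); out.extend_from_slice(&(v as u32).to_be_bytes()); }
        _ => { out.push(0x1b); out.extend_from_slice(&v.to_be_bytes()); }
    }
}

/// Worst-case encoding of the assumed two-field response shape:
/// array(2) header plus two maximal u64s.
fn max_hello_response_encoding() -> Vec<u8> {
    let mut out = vec![0x82]; // CBOR array of length 2
    push_cbor_u64(&mut out, u64::MAX);
    push_cbor_u64(&mut out, u64::MAX);
    out
}

// Illustrative stand-in for HelloCodecConfig::MAX_RESPONSE_BYTES.
const MAX_RESPONSE_BYTES: usize = 32;
```

Under this assumed shape the worst case is 1 + 9 + 9 = 19 bytes, comfortably inside the 32-byte cap, and the assertion would start failing the moment a field is added or the encoding widens.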
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 7ca5c768-4e94-4c73-a686-25174e880160
📒 Files selected for processing (4)
- CHANGELOG.md
- src/libp2p/chain_exchange/mod.rs
- src/libp2p/hello/codec.rs
- src/libp2p/rpc/mod.rs
✅ Files skipped from review due to trivial changes (1)
- CHANGELOG.md
🚧 Files skipped from review as they are similar to previous changes (1)
- src/libp2p/chain_exchange/mod.rs
Summary of changes
Changes introduced in this pull request:
- `Hello` and `ChainExchange` to avoid excessive allocations

Reference issue to close (if applicable)
Closes
Other information and links
Change checklist
Outside contributions