limits to data_header.size when combining shreds' payloads #16708
behzadnouri merged 1 commit into solana-labs:master
Conversation
9c53133 to 0032d95
Codecov Report

@@            Coverage Diff            @@
##           master   #16708     +/-   ##
=========================================
+ Coverage    82.8%    82.9%     +0.1%
=========================================
  Files         419      416        -3
  Lines      116251   115467      -784
=========================================
- Hits        96323    95830      -493
+ Misses      19928    19637      -291
b1c0221 to e67ae3b
e67ae3b to e6a813b
shreds.iter().map(|shred| &shred.payload).collect()
let index = shreds.first().ok_or(TooFewDataShards)?.index();
let aligned = shreds.iter().zip(index..).all(|(s, i)| s.index() == i);
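The alignment check in the snippet above can be exercised in isolation. The following is a hedged sketch, not the actual ledger code: `Shred` here is a minimal stand-in exposing only `index()`, and where the real code would return a `TooFewDataShards` error for an empty slice, this sketch simply returns `false`.

```rust
// Minimal stand-in for the ledger's data shred; only index() is modeled.
struct Shred {
    index: u32,
}

impl Shred {
    fn index(&self) -> u32 {
        self.index
    }
}

// Returns true iff the shreds' indices form a contiguous run starting at
// the first shred's index, mirroring the zip(index..) check above.
fn indices_aligned(shreds: &[Shred]) -> bool {
    match shreds.first() {
        // The real code errors out here instead of returning false.
        None => false,
        Some(first) => {
            let index = first.index();
            shreds.iter().zip(index..).all(|(s, i)| s.index() == i)
        }
    }
}

fn main() {
    let good: Vec<Shred> = (7..10).map(|index| Shred { index }).collect();
    assert!(indices_aligned(&good));
    // A gap in the indices (3 then 5) fails the check.
    assert!(!indices_aligned(&[Shred { index: 3 }, Shred { index: 5 }]));
    println!("ok");
}
```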
If the indexes don't align, that would imply something is disastrously wrong with the blockstore insertion code, or that corruption has occurred. In these cases, can we hard panic?
You are right, but since this is old code and a public function I would rather not introduce a hard panic for now (at least not in this commit), until all current call paths are vetted.
// For backward compatibility. This is needed when the data shred
// payload is None, so that deserializing to Vec<Entry> results in
// an empty vector.
Ok(vec![0u8; SIZE_OF_DATA_SHRED_PAYLOAD])
We know the input shreds aren't empty, because otherwise verify_consistent_shred_payload_sizes() above would have errored out with TooFewShardsPresent. So regarding this comment:
This is needed when the data shred payload is None
Does this mean the case where all of the shred.payload[offset..size] slices above come back empty? If so, can we add a test case for it?
For backward compatibility...
This is probably something subtle, but I think I'm missing the equivalent case in the current code 😄. Is there a scenario where we actually return vec![0u8; SIZE_OF_DATA_SHRED_PAYLOAD]?
This is needed in the case data payload is None (already mentioned in the comment):
https://github.com/solana-labs/solana/blob/68d5aec55/ledger/src/shred.rs#L252
You can have one shred with data == None, and so data_header.size == 0, which results in an empty output, but existing code assumes that we get a vector of zeros.
For example in broadcast:
https://github.com/solana-labs/solana/blob/68d5aec55/core/src/broadcast_stage/standard_broadcast_run.rs#L79
and without the above code branch, this test fails:
https://github.com/solana-labs/solana/blob/68d5aec55/core/src/broadcast_stage/standard_broadcast_run.rs#L580
As commented, this is just to maintain backward compatibility with existing callers, and not to break them.
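The behavior discussed in this thread can be sketched with a simplified shred layout. This is an illustrative sketch, not the ledger's actual code: the `Shred` struct, its `size` field (a stand-in for `data_header.size`), the `OFFSET` constant, and the value of `SIZE_OF_DATA_SHRED_PAYLOAD` are all hypothetical. Each payload contributes only up to its declared size, and an empty combined result falls back to a zeroed buffer for backward compatibility.

```rust
// Illustrative stand-ins for the real layout constants.
const SIZE_OF_DATA_SHRED_PAYLOAD: usize = 8;
const OFFSET: usize = 0; // stand-in for the real data-header offset

struct Shred {
    payload: Vec<u8>,
    size: usize, // stand-in for data_header.size
}

// Combine payloads, taking only payload[OFFSET..size] from each shred.
fn deshred(shreds: &[Shred]) -> Vec<u8> {
    let data: Vec<u8> = shreds
        .iter()
        .flat_map(|s| s.payload[OFFSET..s.size].iter().copied())
        .collect();
    if data.is_empty() {
        // Backward compatibility: with data == None, data_header.size == 0,
        // and existing callers expect a vector of zeros, not an empty vector.
        vec![0u8; SIZE_OF_DATA_SHRED_PAYLOAD]
    } else {
        data
    }
}

fn main() {
    let shreds = vec![
        Shred { payload: vec![1, 2, 3, 0, 0], size: 3 },
        Shred { payload: vec![4, 5, 0, 0, 0], size: 2 },
    ];
    assert_eq!(deshred(&shreds), vec![1, 2, 3, 4, 5]);
    // A single shred with size == 0 (data == None) yields the zeroed buffer.
    let empty = vec![Shred { payload: vec![0; 5], size: 0 }];
    assert_eq!(deshred(&empty), vec![0u8; SIZE_OF_DATA_SHRED_PAYLOAD]);
    println!("ok");
}
```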
Shredder::deshred is ignoring data_header.size when combining shreds' payloads:
https://github.com/solana-labs/solana/blob/37b8587d4/ledger/src/shred.rs#L940-L961

Also adding more sanity checks on the alignment of data shreds indices.

(cherry picked from commit 0f3ac51)

# Conflicts:
#   ledger/src/shred.rs
…16708) (#16870)

* limits to data_header.size when combining shreds' payloads (#16708)

  Shredder::deshred is ignoring data_header.size when combining shreds' payloads:
  https://github.com/solana-labs/solana/blob/37b8587d4/ledger/src/shred.rs#L940-L961

  Also adding more sanity checks on the alignment of data shreds indices.

  (cherry picked from commit 0f3ac51)

  # Conflicts:
  #   ledger/src/shred.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
Problem
deshred is ignoring data_header.size when combining shreds' payloads:
https://github.com/solana-labs/solana/blob/37b8587d4/ledger/src/shred.rs#L940-L961
Summary of Changes
Limits the combined payloads to data_header.size, and adds sanity checks on the alignment of data shred indices.
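The problem and the fix can be contrasted in a small sketch, assuming a simplified fixed-size payload layout; all names and sizes here are hypothetical, not the ledger's actual types.

```rust
const PAYLOAD_CAPACITY: usize = 8; // hypothetical fixed payload capacity

struct Shred {
    payload: [u8; PAYLOAD_CAPACITY],
    size: usize, // stand-in for data_header.size
}

// Old behavior: concatenate whole payloads, dragging zero padding along.
fn deshred_ignoring_size(shreds: &[Shred]) -> Vec<u8> {
    shreds.iter().flat_map(|s| s.payload.iter().copied()).collect()
}

// Fixed behavior: each shred contributes only its first `size` bytes.
fn deshred_limited(shreds: &[Shred]) -> Vec<u8> {
    shreds
        .iter()
        .flat_map(|s| s.payload[..s.size].iter().copied())
        .collect()
}

fn main() {
    let shreds = [Shred { payload: [1, 2, 3, 0, 0, 0, 0, 0], size: 3 }];
    // Ignoring the size keeps the trailing padding bytes.
    assert_eq!(deshred_ignoring_size(&shreds).len(), PAYLOAD_CAPACITY);
    // Limiting to the declared size returns only the real data.
    assert_eq!(deshred_limited(&shreds), vec![1, 2, 3]);
    println!("ok");
}
```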