Fix turbine to sign last fec set if last entry in slot #6887
maheshr merged 6 commits into anza-xyz:master from maheshr:fix_turbine_to_sign_last_fec_set_if_last_slot_in_entry
Conversation
Codecov Report

❌ Patch coverage is …

Additional details and impacted files

@@            Coverage Diff             @@
##            master    #6887     +/-   ##
==========================================
- Coverage     83.1%    83.0%    -0.1%
==========================================
  Files          812      812
  Lines       356900   356979      +79
==========================================
+ Hits        296612   296639      +27
- Misses       60288    60340      +52
Would it be possible to add a test that creates varied (random) amounts of data for a slot via the shredder API, makes shreds, and then confirms that exactly 64 last shreds are resigned, no more and no less?
Hey Alex - There are several existing tests that check if the shreds are last in slot. I modified them to only sign the last FEC set. See test_make_shreds_from_data, test_make_shreds_from_data_paranoid, and test_make_shreds_from_data_rand. They use run_make_shreds_from_data, and I made the change there to only check whether the last batch is signed when the data is last in slot. Let me know if that doesn't cover the test you've asked for. Thanks!
…LOCK instead of hardcoding 64.
Yeah, those should do fine.
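For reference, a minimal, self-contained sketch of the kind of test discussed above. This is not the actual Agave test code: `build_shreds` is a stand-in for the shredder API, `Shred` is a placeholder for the real shred type, and `SHREDS_PER_FEC_SET = 64` is assumed to be the size (data plus coding shreds) of a full merkle FEC set.

```rust
// Hypothetical sketch only; the real tests live alongside the shredder code.
const SHREDS_PER_FEC_SET: usize = 64; // assumed: data + coding shreds per full FEC set

/// Placeholder shred carrying just the fields this sketch needs.
#[derive(Debug)]
struct Shred {
    index: usize,
    resigned: bool,
}

/// Stand-in for the shredder API: emits `num_fec_sets` full FEC sets and,
/// when the payload is last in slot, marks only the final set as resigned.
fn build_shreds(num_fec_sets: usize, is_last_in_slot: bool) -> Vec<Shred> {
    let total = num_fec_sets * SHREDS_PER_FEC_SET;
    (0..total)
        .map(|index| Shred {
            index,
            resigned: is_last_in_slot && index >= total - SHREDS_PER_FEC_SET,
        })
        .collect()
}

#[test]
fn only_last_fec_set_is_resigned() {
    // Varied slot sizes (here 1..=8 FEC sets), all marked last in slot.
    for num_fec_sets in 1..=8 {
        let shreds = build_shreds(num_fec_sets, /*is_last_in_slot:*/ true);
        let resigned: Vec<_> = shreds.iter().filter(|s| s.resigned).collect();
        // Exactly 64 shreds carry the resigned flag, no more and no less ...
        assert_eq!(resigned.len(), SHREDS_PER_FEC_SET);
        // ... and they are precisely the shreds of the last FEC set.
        assert!(resigned
            .iter()
            .all(|s| s.index >= shreds.len() - SHREDS_PER_FEC_SET));
    }
}
```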
let resigned = chained && is_last_in_slot;

// only sign last batch if it is chained and is the last in slot
// let resigned = chained && is_last_in_slot;
nit: looks like this was left in. can be deleted:
// let resigned = chained && is_last_in_slot;
^ you can do this in a follow-up PR
Problem
Addressing issue #6698
Turbine creates more resigned shreds than necessary.
Summary of Changes
Changed make_shreds_from_data so that only the last FEC set is resigned when the data is last in slot. Updated the unit tests to check the resigned flag only on the last FEC set when the data is last in slot.
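As a rough illustration of the intent (not the actual make_shreds_from_data code), the sketch below shows the per-FEC-set decision. The names `chained` and `is_last_in_slot` mirror the excerpt in the review above; the function and its signature are hypothetical.

```rust
// Hypothetical helper: computes the resigned flag for each FEC set of a payload.
fn resigned_flags(num_fec_sets: usize, chained: bool, is_last_in_slot: bool) -> Vec<bool> {
    (0..num_fec_sets)
        .map(|i| {
            // This set counts as "last in slot" only if the whole payload is
            // last in slot and this is its final FEC set.
            let is_last_fec_set = i + 1 == num_fec_sets;
            // Before the change every set of a last-in-slot payload was
            // resigned; after it, only the final set is.
            chained && is_last_in_slot && is_last_fec_set
        })
        .collect()
}

fn main() {
    // Five FEC sets of chained merkle shreds, last entry in slot:
    // only the final set gets the resigned flag.
    assert_eq!(
        resigned_flags(5, true, true),
        vec![false, false, false, false, true]
    );
    // Not last in slot: nothing is resigned.
    assert!(resigned_flags(5, true, false).iter().all(|r| !r));
}
```

Making the decision per FEC set, rather than once for the whole payload, is what limits the retransmitter signature to the final set's shreds.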
Ran Agave on testnet with and without my changes. Inspected several metrics (gen_data_time, gen_coding_time, data_bytes, padding_bytes, num_coding_shreds, num_data_shreds, num_merkle_data_shreds, num_merkle_coding_shreds) to ensure there was no performance degradation. Inspected num_repaired on another Anza validator to confirm there were no unusual repair requests around the times the validator running my changes was leader.