implement validator custody with column refill mechanism #7127
Conversation
for i in startIndex..<SLOTS_PER_EPOCH:
  let blck = vcus.dag.getForkedBlock(blocks[int(i)]).valueOr: continue
  withBlck(blck):
    when typeof(forkyBlck).kind < ConsensusFork.Fulu: continue
Should this ever happen? It's much cheaper to detect this before the block is ever fetched from the database, based on the slot number + fork schedule.
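For illustration, a minimal sketch of that cheaper pre-check, reusing the names from the quoted loop and the existing consensusForkAtEpoch helper on the runtime config; the slot arithmetic is an assumption about the surrounding code:

# Sketch only: skip pre-Fulu slots from slot number + fork schedule,
# before any database read.
for i in startIndex..<SLOTS_PER_EPOCH:
  let slot = dataColumnRefillEpoch.start_slot + i
  if vcus.dag.cfg.consensusForkAtEpoch(slot.epoch) < ConsensusFork.Fulu:
    continue
  let blck = vcus.dag.getForkedBlock(blocks[int(i)]).valueOr: continue
  withBlck(blck):
    discard # proceed with the Fulu-only refill logic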
"detect this before the block is ever fetched from the database"
The only way to make this cheaper is to check whether the earliest data column in the database is against a Fulu block; only then does it make sense to skip this check, imo. Otherwise, every time we pull a finalized block from the DB it's safe to check the fork. Also, there can be a situation just a couple of epochs after the Fulu transition where the wanted block range looks like EEEEEEFFFFFFF: I want to refill only against the trailing Fulu (F) blocks and continue past the first few Electra (E) blocks. Note that the Es and Fs should ideally be far greater in number; this is just an example.
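To make that alternative concrete, a hypothetical sketch; earliestDataColumnSlot is an invented helper (no such API is assumed to exist), and GENESIS_SLOT stands in for the no-columns case:

# Hypothetical: decide once per refill pass whether the per-block fork
# check is redundant, i.e. the earliest stored data column is already
# against a Fulu block. earliestDataColumnSlot is assumed, not real.
let earliestColumnSlot = vcus.db.earliestDataColumnSlot().valueOr:
  GENESIS_SLOT
let skipForkCheck =
  vcus.dag.cfg.consensusForkAtEpoch(earliestColumnSlot.epoch) >=
    ConsensusFork.Fulu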
…on for the validator custody loop
-  dataColumnRefillEpoch.start_slot, blocks.toOpenArray(0, SLOTS_PER_EPOCH - 1))
-  for i in startIndex..<SLOTS_PER_EPOCH:
+  dataColumnRefillEpoch.start_slot, blocks.toOpenArray(0, slot.epoch().int - 1))
+  for i in startIndex..<slot.epoch().int:
Practically safe, but not that great to do this because (a) in theory there are 32/64-bit differences and (b) it introduces a potential Defect due to the int conversion.
I see i also has to be int(i) a couple of lines later; worth looking at the types here holistically.
it's the same technique used in pruneSidecars
It's not really great there either, but sure, it's consistent
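For what (a) and (b) refer to, a small sketch, assuming Epoch is a distinct uint64 as in the spec types and that startIndex can be widened:

# slot.epoch() is uint64-based, so .int narrows it:
# (a) on a 32-bit target int is 32 bits, so the representable range differs;
# (b) the conversion is range-checked and raises a Defect on overflow.
# Staying in uint64 sidesteps both:
let epochCount = uint64(slot.epoch())
for i in uint64(startIndex) ..< epochCount:
  discard # work with i as uint64 throughout the loop body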
* some clarifications
* push event topic update for data column sidecar
* prevent inhibiting validator custody
* increase vcus poll interval
* not so cool devnet hack for now
* some validator custody and status v fixes
* few more fixes to event stream and vcus
* fixes
* few more changes
* clarifications regarding blob parameters
* revert reqman hack and remove assertion
* add more logging and reduce getblobs timeout
* cancel blob loop post fulu fork epoch
* rework validator custody counting logic and remove another assertion
* rman hack 2
* clarifications in validator custody logic
* reduce validator custody polling duration
* have validator custody detection and custody backfill on separate loops
* oops
* added extra logging for clarity
* use total attached balance instead of active balance
* some more rework on getBlobsV2
* other fixes
* off vcus for supernodes
* make the BN pass min DA requirements to catch missing blocks
* some fixes to rman
* reword peer filtering and scoring
* revise score
* bump up parallel requests as there are more number of cancellations
* omit minDA criteria
* bump down ll requests for supernodes
* fix beacon block broadcast not using BPO forkdigests (#7285)
* gate status vx
* patch blob schedule
* oops bug

---------

Co-authored-by: tersec <[email protected]>