feat: peerDAS - implement validator custody #7640
Conversation
- VALIDATOR_CUSTODY_REQUIREMENT
- BALANCE_PER_ADDITIONAL_CUSTODY_GROUP
- define getCustodyColumnsMeta as private method
Add targetCustodyGroupCount and advertisedCustodyGroupCount
- use validator effective balances instead of raw balances, assuming spec changes
- for now, do not use cached state to access effective balances; update if cached state is available when triggering updates to CustodyConfig
- return CUSTODY_REQUIREMENT if validatorIndices is empty
- set target CGC based on latest finalized state - convert getValidatorsCustodyRequirement to use CachedBeaconStateAllForks - TODO: the chain does not currently have a means of identifying validator indices for the node
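The requirement calculation these commits describe can be sketched roughly as follows. The constant values and function shape here are illustrative placeholders, not Lodestar's actual implementation: custody scales with the node's total effective balance, clamped between a baseline and the total number of custody groups, and falls back to CUSTODY_REQUIREMENT when no validators are attached.

```typescript
// Illustrative constants (placeholder values, not real network config)
const CUSTODY_REQUIREMENT = 4;
const VALIDATOR_CUSTODY_REQUIREMENT = 8;
const BALANCE_PER_ADDITIONAL_CUSTODY_GROUP = 32_000_000_000; // 32 ETH in Gwei
const NUMBER_OF_CUSTODY_GROUPS = 128;

function getValidatorsCustodyRequirement(effectiveBalances: number[]): number {
  // No attached validators: fall back to the node baseline
  if (effectiveBalances.length === 0) return CUSTODY_REQUIREMENT;

  const totalBalance = effectiveBalances.reduce((sum, b) => sum + b, 0);
  const count = Math.floor(totalBalance / BALANCE_PER_ADDITIONAL_CUSTODY_GROUP);
  // Clamp between the validator baseline and the total number of groups
  return Math.min(Math.max(count, VALIDATOR_CUSTODY_REQUIREMENT), NUMBER_OF_CUSTODY_GROUPS);
}
```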
- define LocalValidatorRegistry to track indices of validators that have registered with the node
- read indices of local validators from LocalValidatorRegistry when updating validator custody requirements
- borrow logic from ValidatorMonitor for registering and pruning validators from the map in LocalValidatorRegistry
- register and prune validators in tandem from LocalValidatorRegistry and ValidatorMonitor
- TODO: re-use the validator registry in ValidatorMonitor; we need to be able to determine the indices of connected validators even if metrics are disabled
- use existing BeaconProposerCache to access the list of indices of connected validators
- add method to BeaconProposerCache to access validator indices
- connected validator clients call `prepareBeaconProposer` every epoch, which adds or updates an entry in the BeaconProposerCache; we can use this cache instead of introducing a duplicative data structure
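A rough sketch of what exposing connected validator indices from a proposer-preparation cache could look like. The class shape, method names, and pruning policy are hypothetical, not Lodestar's actual BeaconProposerCache:

```typescript
type ValidatorIndex = number;

// Hypothetical sketch: entries keyed by validator index, refreshed each epoch
// by connected validator clients calling prepareBeaconProposer.
class BeaconProposerCacheSketch {
  private readonly entries = new Map<ValidatorIndex, {epoch: number; feeRecipient: string}>();

  // Add or update an entry for a connected validator
  add(epoch: number, validatorIndex: ValidatorIndex, feeRecipient: string): void {
    this.entries.set(validatorIndex, {epoch, feeRecipient});
  }

  // Drop validators that have not re-registered within the retention window
  prune(currentEpoch: number, retentionEpochs = 2): void {
    for (const [index, {epoch}] of this.entries) {
      if (epoch < currentEpoch - retentionEpochs) this.entries.delete(index);
    }
  }

  // New accessor: indices of validators currently attached to this node
  getValidatorIndices(): ValidatorIndex[] {
    return [...this.entries.keys()];
  }
}
```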
matthewkeil
left a comment
Great start !!!!
I spent some time thinking about what you have here and the architecture of Lodestar. There are a couple things that I think will help long-term. I also spent some time speaking with @twoeths to make sure we go down the right path.
Here are some design considerations that we may want to incorporate
- Always using the `custodyConfig` directly to pull values
- Keeping a `custodyConfig` on the main thread and one on the network thread
- Maintain update of attached validators and balances happening on the chain in `onFinalizedEpoch`
- Pass update of custody to the network (on main thread) via the `ChainEventEmitter`
- Pass update of custody to the network (on network thread) via worker api
Using the method that Tuyen employed we can use the custodyConfig from the chain object directly on the main thread network object which will keep those two in sync.
```ts
this.custodyConfig = modules.chain.custodyConfig;
```
I'm also thinking about breaking updateCustodyRequirement into two parts. First will be calculating the custody requirement and the second will be updating the custodyConfig(s). This way we can calculate the update in onFinalizedEpoch and then event the custodyRequirement value to the network object on both main and network threads easily.
```ts
const custodyRequirement = calculateCustodyRequirement(state, validatorIndices, config);
this.custodyConfig.updateTargetCustody(custodyRequirement);
this.events.emit(ChainEvent.UpdateTargetCustody, custodyRequirement);
```

In the network object there would be an `onUpdateTargetCustody` handler for that event (similar to what you have now, but listening to the chain event bus instead of the `custodyConfig` object). The handler would call `this.getApi().updateTargetCustody(custodyRequirement)` to send the value to the network thread, so that object could be updated for the PeerManager and other consumers on the network thread.
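The proposed chain-to-network flow might be sketched as below. The types and event name are illustrative stand-ins, and the worker API call is mocked with a plain array; the real wiring would go through Lodestar's actual event bus and network API:

```typescript
import {EventEmitter} from "node:events";

enum ChainEvent {
  UpdateTargetCustody = "chain.updateTargetCustody",
}

class CustodyConfigSketch {
  targetCustodyGroupCount = 0;
  updateTargetCustody(count: number): void {
    this.targetCustodyGroupCount = count;
  }
}

class NetworkSketch {
  readonly custodyConfig: CustodyConfigSketch;
  readonly forwarded: number[] = []; // stands in for this.getApi().updateTargetCustody(...)

  constructor(chain: {custodyConfig: CustodyConfigSketch; events: EventEmitter}) {
    // Main-thread network shares the chain's custodyConfig by reference
    this.custodyConfig = chain.custodyConfig;
    chain.events.on(ChainEvent.UpdateTargetCustody, (count: number) => this.onUpdateTargetCustody(count));
  }

  private onUpdateTargetCustody(count: number): void {
    // Would forward to the network thread via the worker api
    this.forwarded.push(count);
  }
}

// Chain side: calculate, apply locally, then event the value outward
function onFinalizedEpoch(chain: {custodyConfig: CustodyConfigSketch; events: EventEmitter}, custodyRequirement: number): void {
  chain.custodyConfig.updateTargetCustody(custodyRequirement);
  chain.events.emit(ChainEvent.UpdateTargetCustody, custodyRequirement);
}
```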
reverts changes leftover from an earlier approach to tracking local validators alongside the ValidatorMonitor:
- removes `getLocalValidatorIndices` method from ValidatorMonitor
- sleeps during onClockSlot only if metrics enabled
- define CustodyConfigOpts type to encapsulate options for CustodyConfig
- by default, enable validator custody
- add separate method for calculating target custody group count, to prevent the calculation if validator custody is disabled
- add CLI flag, `chain.noValidatorCustody`, to disable validator custody
- remove CustodyConfigOpts
- remove calculateTargetCustodyGroupCount
- condition all custody update logic in onForkChoiceFinalized on the noValidatorCustody flag
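The flag conditioning described above could look roughly like this sketch; the option and helper names are hypothetical, standing in for the real onForkChoiceFinalized logic:

```typescript
// Hypothetical options object carrying the CLI flag
interface ChainOpts {
  noValidatorCustody?: boolean;
}

// Gate all custody update work on the flag; returns whether an update ran.
function maybeUpdateValidatorCustody(
  opts: ChainOpts,
  computeRequirement: () => number,
  applyRequirement: (count: number) => void
): boolean {
  // --chain.noValidatorCustody disables validator custody updates entirely,
  // so the requirement is never even calculated
  if (opts.noValidatorCustody) return false;
  applyRequirement(computeRequirement());
  return true;
}
```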
matthewkeil
left a comment
Just a small nit about using the getter so it's consistent with how it's done for the chain config. Looks great otherwise!!
matthewkeil
left a comment
LGTM!! 🚀
🎉 This PR is included in v1.34.0 🎉
Motivation
@hughy and I have a basic implementation of validator custody for peerDAS.
We're planning to do more testing on this, but would appreciate a review on the architecture since we're still pretty new!
This relates to #7632; if it merges, we'll update our PR to account for it.
Description
- Consolidated `sampledGroups` and `custodyGroups` into `CustodyConfig`; `CustodyConfig` is now treated as a singleton.
- Added `advertisedGroupCount` in `CustodyConfig`, used for the custody group count in the node's metadata/ENR.
- Added `setSamplingGroupCount` and `setAdvertisedGroupCount` to the NetworkCore API. Updated by an `EventEmitter` on `CustodyConfig`.
- Custody updates are triggered from `chain.onForkChoiceFinalized`.

Not Included
I'll open separate issues for these if we're okay merging this PR without them.
Closes #7619