blockchain: Cleanup and optimize stake node logic. #1504
Merged
This reworks the stake node handling logic a bit to clean it up, slightly optimize it, and to make it easier to decouple the chain processing and connection code from the download logic in the future.
To accomplish this, the `fetchStakeNode` dependence on the `getReorganizeNodes` function is removed in favor of directly iterating the block nodes in the function as needed. Not only is this more efficient, it also allows the function to return stake nodes for branches regardless of their validation status. Currently, this is irrelevant due to the connection and download logic being tightly coupled. However, it will be necessary in the future when those are separated.
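To illustrate the idea of iterating block nodes directly rather than going through a reorganize helper, here is a minimal sketch. The `blockNode` type, its `inBestChain` field, and the `attachNodesToFork` helper are assumptions for illustration only, not the actual dcrd types or API; the point is that the walk back to the fork point never consults validation status, so side-chain branches are handled the same way as main-chain extensions.

```go
package main

import "fmt"

// blockNode is a simplified stand-in for a block index node.
type blockNode struct {
	parent      *blockNode
	height      int64
	inBestChain bool // assumption: some way to tell a node is on the main chain
}

// attachNodesToFork walks backwards from node until it reaches a block that
// is part of the best chain (the fork point) and returns the intermediate
// nodes ordered from the fork point toward node. No validation status is
// consulted along the way.
func attachNodesToFork(node *blockNode) []*blockNode {
	var attach []*blockNode
	for n := node; n != nil && !n.inBestChain; n = n.parent {
		attach = append(attach, n)
	}
	// Reverse so the nodes are ordered from oldest to newest.
	for i, j := 0, len(attach)-1; i < j; i, j = i+1, j-1 {
		attach[i], attach[j] = attach[j], attach[i]
	}
	return attach
}

func main() {
	// Tiny example: genesis (best chain) <- b <- c (side chain branch).
	genesis := &blockNode{height: 0, inBestChain: true}
	b := &blockNode{parent: genesis, height: 1}
	c := &blockNode{parent: b, height: 2}
	for _, n := range attachNodesToFork(c) {
		fmt.Println("attach node at height", n.height)
	}
}
```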
Also, the flushing and pruning logic is modified to no longer rely on the download logic being tightly coupled to the connection logic. In particular, a new function named `flushBlockIndex` is added as a wrapper to the index flush function which populates stake information in a node before flushing it to the database as needed, and all flushing invocations use the wrapper instead.
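The following sketch shows the shape of such a wrapper under stated assumptions; the `blockIndex`, `loadStakeNode`, and field names here are placeholders rather than the real dcrd implementation. The wrapper populates any missing stake data on dirty nodes and then delegates to the regular flush, so callers never have to remember to do that themselves.

```go
package main

import "fmt"

// stakeNode and blockIndexNode are simplified placeholders.
type stakeNode struct{}

type blockIndexNode struct {
	hash      string
	stakeNode *stakeNode
	dirty     bool
}

type blockIndex struct {
	dirtyNodes []*blockIndexNode
}

// flush is a stand-in for the existing low-level index flush.
func (bi *blockIndex) flush() error {
	for _, node := range bi.dirtyNodes {
		fmt.Println("writing node", node.hash)
		node.dirty = false
	}
	bi.dirtyNodes = nil
	return nil
}

// loadStakeNode is a placeholder for loading stake data from the database.
func loadStakeNode(node *blockIndexNode) (*stakeNode, error) {
	return &stakeNode{}, nil
}

// flushBlockIndex populates missing stake information on dirty nodes and
// then delegates to the regular flush, mirroring the wrapper described
// above (details assumed for illustration).
func (bi *blockIndex) flushBlockIndex() error {
	for _, node := range bi.dirtyNodes {
		if node.stakeNode == nil {
			sn, err := loadStakeNode(node)
			if err != nil {
				return err
			}
			node.stakeNode = sn
		}
	}
	return bi.flush()
}

func main() {
	bi := &blockIndex{dirtyNodes: []*blockIndexNode{{hash: "abc123", dirty: true}}}
	if err := bi.flushBlockIndex(); err != nil {
		fmt.Println("flush failed:", err)
	}
}
```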
Finally, the `stakeUndoData` field is removed since it is only used to avoid some database loads on reorgs that are large enough to exceed the pruning depth, which never happens in practice anyway, so there is no point in using up extra memory for it.