EIP7594: p2p-interface #6358
base: kzgpeerdas
Conversation
…ger, fix: gossipValidation rules
debug "Data column root request done", | ||
peer, roots = columnIds.len, count, found | ||
|
||
# https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.2/specs/_features/eip7594/p2p-interface.md#datacolumnsidecarsbyrange-v1 |
var blockIds: array[int(MAX_REQUEST_DATA_COLUMNS), BlockId]
let
  count = int min(reqCount, blockIds.lenu64)
If feasible, it's better to avoid these conversions without some kind of protection against crashing on out-of-range uint64-to-int conversions with a RangeDefect:
Error: unhandled exception: value out of range [RangeDefect]
If it's provably not hittable (e.g., blockIds.lenu64 is bounded in a sensible way), that should be noted.
In this case, blockIds.lenu64 == MAX_REQUEST_DATA_COLUMNS, so that part is known at compile time. One approach might be to wrap it in static(blockIds.lenu64), adding a compile-time assertion that it's not really a variable but is known ahead of time.
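A minimal, self-contained sketch of the static() suggestion (the constant's value, the BlockId stand-in, and clampCount are placeholders for illustration, not the actual nimbus-eth2 definitions; the real code uses the lenu64 helper):

const MAX_REQUEST_DATA_COLUMNS = 128  # placeholder value for illustration

type BlockId = object                 # stand-in for the real BlockId
  slot: uint64

var blockIds: array[MAX_REQUEST_DATA_COLUMNS, BlockId]

proc clampCount(reqCount: uint64): int =
  # static() fails to compile unless its argument is a compile-time constant,
  # documenting that the uint64 -> int conversion below is bounded by the
  # array length and therefore cannot raise a RangeDefect
  int min(reqCount, static(blockIds.len.uint64))

echo clampCount(1_000_000'u64)  # -> 128, clamped instead of crashing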
for i in startIndex..endIndex:
  for j in 0..<MAX_REQUEST_DATA_COLUMNS:
    if dag.db.getDataColumnSidecarSZ(blockIds[i].root, ColumnIndex(j), bytes):
      if blockIds[i].slot.epoch >= dag.cfg.BELLATRIX_FORK_EPOCH and
Why is blockIds[i].slot.epoch >= dag.cfg.BELLATRIX_FORK_EPOCH being checked? It's neither sufficient nor (I think?) necessary.
It's not sufficient because the Bellatrix fork isn't coterminous with the merge -- the merge happened within the fork, not at its beginning. And it shouldn't be necessary, because getDataColumnSidecarSZ has already been called, and it should not be possible for that to have content in a pre-Electra fork.
If it is worth keeping as a sanity check, maybe check against when PeerDAS actually starts, or at least bound it at ELECTRA_FORK_EPOCH rather than BELLATRIX_FORK_EPOCH.
There's another oddity here: it makes the ByRange req/resp call inconsistent in its checks with the ByRoot one. Both have the same information about slots/epochs, could perform the same check, and should allow/disallow similarly based on it.
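A rough, runnable sketch of the suggested tightening (the fork-epoch values are illustrative only, ELECTRA_FORK_EPOCH here is a made-up number, and the two procs are hypothetical helpers rather than the codebase's API):

type Epoch = uint64

const
  BELLATRIX_FORK_EPOCH = Epoch(144896)  # mainnet Bellatrix epoch, for illustration
  ELECTRA_FORK_EPOCH = Epoch(300000)    # placeholder, not a real network value

# current check: admits any post-Bellatrix epoch, including pre-Electra ones
proc currentBound(epoch: Epoch): bool = epoch >= BELLATRIX_FORK_EPOCH

# suggested tightening: data column sidecars cannot exist before Electra/PeerDAS
proc suggestedBound(epoch: Epoch): bool = epoch >= ELECTRA_FORK_EPOCH

echo currentBound(Epoch(200_000)), " ", suggestedBound(Epoch(200_000))  # true false
echo currentBound(Epoch(310_000)), " ", suggestedBound(Epoch(310_000))  # true true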
  for j in 0..<MAX_REQUEST_DATA_COLUMNS:
    if dag.db.getDataColumnSidecarSZ(blockIds[i].root, ColumnIndex(j), bytes):
      if blockIds[i].slot.epoch >= dag.cfg.BELLATRIX_FORK_EPOCH and
          not dag.head.executionValid:
Why is this checked in ByRange but not ByRoot?
trace "got data columns range request", peer, len = columnIds.len | ||
if columnIds.len == 0: | ||
raise newException(InvalidInputsError, "No data columns requested") | ||
|
Should also check here for:
No more than MAX_REQUEST_DATA_COLUMN_SIDECARS may be requested at a time.
from https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.3/specs/_features/eip7594/p2p-interface.md#datacolumnsidecarsbyroot-v1, in which case the request is also invalid.
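A small, self-contained sketch of the missing bound (the constant's value and the error type are placeholders standing in for the codebase's definitions):

const MAX_REQUEST_DATA_COLUMN_SIDECARS = 16384  # placeholder value for illustration

type InvalidInputsError = object of CatchableError  # stand-in for the codebase's error type

proc checkColumnRequestLen(requested: int) =
  if requested == 0:
    raise newException(InvalidInputsError, "No data columns requested")
  if requested > MAX_REQUEST_DATA_COLUMN_SIDECARS:
    # the spec bound the comment above refers to
    raise newException(InvalidInputsError, "Too many data columns requested")

checkColumnRequestLen(8)          # accepted
try:
  checkColumnRequestLen(20_000)   # rejected: exceeds the spec limit
except InvalidInputsError as e:
  echo e.msg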
workers[i] = rman.fetchDataColumnsFromNetwork(columnIds)

await allFutures(workers)
let finish = SyncMoment.now(uint64(len(columnIds)))
Might be worth moving this into the debug statement so it's not run if it's not used; it's possible (albeit slightly pathological) for this to be a slow operation.
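A rough, standard-library-only illustration of the idea; the real code uses chronicles, whose lazy argument evaluation is assumed here rather than shown, and debugLog/expensiveSummary are hypothetical stand-ins:

import std/times

var debugEnabled = false  # stand-in for the logger's runtime level check

template debugLog(msg: string, value: untyped) =
  # `value` is only evaluated when the line is actually emitted
  if debugEnabled:
    echo msg, ": ", value

proc expensiveSummary(): string =
  # stand-in for SyncMoment.now(uint64(len(columnIds)))-style bookkeeping
  $now()

debugLog("data columns fetched", expensiveSummary())  # skipped: not evaluated
debugEnabled = true
debugLog("data columns fetched", expensiveSummary())  # evaluated and printed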
@@ -1410,6 +1411,24 @@ proc pruneBlobs(node: BeaconNode, slot: Slot) =
    count = count + 1
  debug "pruned blobs", count, blobPruneEpoch

proc pruneDataColumns(node: BeaconNode, slot: Slot) =
  let dataColumnPruneEpoch = (slot.epoch -
This seems to underflow if called too close to genesis? Even if no current network has launched with PeerDAS at genesis, presumably at some point there will be an Electra-genesis, PeerDAS network, just as there are now Deneb-genesis, KZG-launched networks.
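A minimal sketch of one way to avoid the wraparound, using a saturating subtraction (the constant's value and the helper name are placeholders, not the actual nimbus-eth2 code):

const MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS = 4096'u64  # placeholder value

proc dataColumnPruneEpoch(currentEpoch: uint64): uint64 =
  # saturate at zero instead of letting the unsigned subtraction wrap around
  # on a network younger than the retention window
  if currentEpoch < MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS:
    0'u64
  else:
    currentEpoch - MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS

echo dataColumnPruneEpoch(10'u64)      # 0, no underflow right after genesis
echo dataColumnPruneEpoch(10_000'u64)  # 5904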
beacon_chain/nimbus_beacon_node.nim
@@ -1444,6 +1463,7 @@ proc onSlotEnd(node: BeaconNode, slot: Slot) {.async.} =
  # the pruning for later
  node.dag.pruneHistory()
  node.pruneBlobs(slot)
  node.pruneDataColumns(slot)
And here, it will trigger the underflow in a new devnet. This code won't, I think, be hit during syncing, so existing testnets won't have a problem, but new devnets will
beacon_chain/sync/sync_manager.nim
@@ -548,7 +548,7 @@ proc syncStep[A, B](man: SyncManager[A, B], index: int, peer: A)
  withBlck(blck[]):
    when consensusFork >= ConsensusFork.Deneb:
      if forkyBlck.message.body.blob_kzg_commitments.len > 0:
-        hasColumns = true
+        hasColumns = true 
adds trailing space
beacon_chain/sync/sync_manager.nim
# of valid peerIds
# validPeerIds.add(peer.peerId)
# validPeerIds
return true
Can use just true here.
# peer.updateScore(PeerScoreNoValues)
man.queue.push(req)
# warn "Received data columns is inconsistent",
#   data_columns_map = getShortMap(req, dataColumnData), request = req, msg=groupedDataColumns.error()
Come Nim 2.0.10, it should be possible to uncomment this, because the foo.error() won't trigger nim-lang/Nim#23790.
beacon_chain/sync/sync_manager.nim
# peer.updateScore(PeerScoreNoValues)
man.queue.push(req)
# warn "Received data columns is inconsistent",
#   data_columns_map = getShortMap(req, dataColumnData), request = req, msg=groupedDataColumns.error()
See also Nim 2.0.10 / nim-lang/Nim#23790
  return
let groupedDataColumns = groupDataColumns(req, blockData, dataColumnData)
if groupedDataColumns.isErr():
  # peer.updateScore(PeerScoreNoValues)
I guess these were and remain commented, so no change, but is this still the Deneb blob back or something else?
…ataColumn for better logs
…ssues (#6468)
* reworked some of the das core specs, pr'd to check whether the conflicting type issue is centric to my machine or not
* bumped nim-blscurve to 9c6e80c6109133c0af3025654f5a8820282cff05, same as unstable
* bumped nim-eth2-scenarios, nim-nat-traversal at par with unstable, added more patches, made peerdas devnet branch backward compatible, peerdas passing new ssz tests as per alpha3, disabled electra fixture tests, as branch hasn't been rebased for a while
* refactor test fixture files
* rm: serializeDataColumn
* refactor: took data columns extracted from blobs during block proposal to the heap
* disable blob broadcast in pd devnet
* fix addBlock in message router
* fix: data column iterator
* added debug checkpoints to check CI
* refactor if else conditions
* add: updated das core specs to alpha 3, and unit tests pass
* save commit, decouples reconstruction and broadcasting
* save progress
* add: reconstruction event loop, previous reconstruction related cleanups
* init: add metadatav3, save progress
* fix import issues
* fix spec version
* fix: metadata_v2 to return altair.MetaData
* update metadata function, backward compatible now
No description provided.