
EIP7594: p2p-interface #6358

Draft · wants to merge 110 commits into base: kzgpeerdas

Conversation

@agnxsh (Contributor) commented Jun 14, 2024

No description provided.

github-actions bot commented Jun 14, 2024

Unit Test Results

0 tests   0 ✔️  0s ⏱️
0 suites  0 💤
0 files    0

Results for commit 20e6b18.

♻️ This comment has been updated with latest results.

@agnxsh agnxsh marked this pull request as ready for review June 17, 2024 18:30
@agnxsh agnxsh marked this pull request as draft June 17, 2024 19:13
@agnxsh agnxsh requested a review from tersec June 19, 2024 07:16
debug "Data column root request done",
peer, roots = columnIds.len, count, found

# https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.2/specs/_features/eip7594/p2p-interface.md#datacolumnsidecarsbyrange-v1
Contributor commented:

In general, checking that all of these are consistent with v1.5.0-alpha.3 and updating the links accordingly would be worthwhile, because the next PeerDAS devnet will be based on this version.


var blockIds: array[int(MAX_REQUEST_DATA_COLUMNS), BlockId]
let
  count = int min(reqCount, blockIds.lenu64)
@tersec (Contributor) commented Jun 21, 2024

If feasible, it's better to avoid these conversions without some kind of protection against crashing with a RangeDefect on an out-of-range uint64 to int conversion:

Error: unhandled exception: value out of range [RangeDefect]

If it's provably not hittable (e.g., blockIds.lenu64 is bounded in a sensible way), that should be noted.

In this case, blockIds.lenu64 == MAX_REQUEST_DATA_COLUMNS, so that part is compile-time known. One approach might be to wrap it in static(blockIds.lenu64) to add a kind of compile-time assertion that it's not really a variable, but is known ahead of time.
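
For illustration, a minimal self-contained Nim sketch of the guarded conversion (BlockId, lenu64 and the constant's value are stand-ins for the codebase's real definitions; the static() variant mentioned above would additionally assert at compile time that the bound is a constant):

type BlockId = object

const MAX_REQUEST_DATA_COLUMNS = 128

template lenu64(x: untyped): uint64 = uint64(x.len)

proc boundedCount(reqCount: uint64): int =
  var blockIds: array[MAX_REQUEST_DATA_COLUMNS, BlockId]
  # clamping with min() keeps the value within the compile-time-known array
  # bound, so the uint64 -> int conversion cannot raise RangeDefect
  int min(reqCount, blockIds.lenu64)

when isMainModule:
  doAssert boundedCount(1_000_000'u64) == MAX_REQUEST_DATA_COLUMNS
  doAssert boundedCount(3'u64) == 3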

for i in startIndex..endIndex:
  for j in 0..<MAX_REQUEST_DATA_COLUMNS:
    if dag.db.getDataColumnSidecarSZ(blockIds[i].root, ColumnIndex(j), bytes):
      if blockIds[i].slot.epoch >= dag.cfg.BELLATRIX_FORK_EPOCH and
Contributor commented:

Why is blockIds[i].slot.epoch >= dag.cfg.BELLATRIX_FORK_EPOCH being checked? It's neither sufficient nor (I think?) necessary.

It's not sufficient because the Bellatrix fork isn't coterminous with the merge -- the merge happened within the fork, not at its beginning. And it shouldn't be necessary, because getDataColumnSidecarSZ has already been called, and it should not be possible for that to have content in a pre-Electra fork.

If it is worth checking this as a sanity check, maybe check against when PeerDAS actually starts, or at least bound it at ELECTRA_FORK_EPOCH rather than BELLATRIX_FORK_EPOCH.

There's another oddity here, which is that it makes the ByRange req/resp call inconsistent in its checks with the ByRoot req/resp call. Both have this information about the slots/epochs, could perform the same check, and should allow/disallow similarly based on it.
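
For illustration, a hypothetical self-contained sketch of that bound (the epoch values are placeholders, not real network parameters):

type Epoch = uint64

const
  BELLATRIX_FORK_EPOCH = Epoch(144896)  # placeholder value
  ELECTRA_FORK_EPOCH = Epoch(400000)    # placeholder value

proc mayHaveDataColumns(blockEpoch: Epoch): bool =
  # data column sidecars cannot exist before Electra / PeerDAS, so this is
  # the meaningful lower bound for a sanity check; Bellatrix is not
  blockEpoch >= ELECTRA_FORK_EPOCH

when isMainModule:
  doAssert not mayHaveDataColumns(BELLATRIX_FORK_EPOCH)
  doAssert mayHaveDataColumns(ELECTRA_FORK_EPOCH)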

for j in 0..<MAX_REQUEST_DATA_COLUMNS:
  if dag.db.getDataColumnSidecarSZ(blockIds[i].root, ColumnIndex(j), bytes):
    if blockIds[i].slot.epoch >= dag.cfg.BELLATRIX_FORK_EPOCH and
        not dag.head.executionValid:
Contributor commented:

Why is this checked in ByRange but not ByRoot?

trace "got data columns range request", peer, len = columnIds.len
if columnIds.len == 0:
raise newException(InvalidInputsError, "No data columns requested")

Contributor commented:

Should check here for:

No more than MAX_REQUEST_DATA_COLUMN_SIDECARS may be requested at a time.

from https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.3/specs/_features/eip7594/p2p-interface.md#datacolumnsidecarsbyroot-v1, in which case the request is also invalid.
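
For illustration, a minimal sketch of that additional bound check, mirroring the InvalidInputsError already raised for the empty-request case (the constant's value and the proc name are placeholders):

const MAX_REQUEST_DATA_COLUMN_SIDECARS = 16384  # placeholder value

type InvalidInputsError = object of CatchableError

proc checkColumnRequestLen(requested: int) =
  if requested == 0:
    raise newException(InvalidInputsError, "No data columns requested")
  if requested > MAX_REQUEST_DATA_COLUMN_SIDECARS:
    # the spec bounds a single request, so an oversized request is also invalid
    raise newException(InvalidInputsError, "Too many data columns requested")

when isMainModule:
  checkColumnRequestLen(1)  # accepted
  try:
    checkColumnRequestLen(MAX_REQUEST_DATA_COLUMN_SIDECARS + 1)
    doAssert false, "should have raised"
  except InvalidInputsError:
    discard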

workers[i] = rman.fetchDataColumnsFromNetwork(columnIds)

await allFutures(workers)
let finish = SyncMoment.now(uint64(len(columnIds)))
Contributor commented:

Might be worth moving this into the debug statement so it's not run if it's not used; it's possible (albeit slightly pathological) for this to be a slow operation
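
For illustration, a toy self-contained sketch of the evaluation behaviour being relied on here (debugEnabled, myDebug and expensiveSummary are stand-ins, not the real chronicles machinery):

var debugEnabled = false

template myDebug(msg: string, value: untyped) =
  # the argument expression is only evaluated if this level is enabled
  if debugEnabled:
    echo msg, ": ", value

proc expensiveSummary(): string =
  echo "computing summary"  # visible side effect, to show when this runs
  "0 columns"

myDebug("data column request done", expensiveSummary())  # computes nothing
debugEnabled = true
myDebug("data column request done", expensiveSummary())  # now it runs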

@@ -1410,6 +1411,24 @@ proc pruneBlobs(node: BeaconNode, slot: Slot) =
count = count + 1
debug "pruned blobs", count, blobPruneEpoch

proc pruneDataColumns(node: BeaconNode, slot: Slot) =
  let dataColumnPruneEpoch = (slot.epoch -
Contributor commented:

This seems like it would underflow if called too close to genesis? Even if no networks have currently launched with PeerDAS at genesis, presumably at some point there will be an Electra-genesis, PeerDAS network, just as there are now Deneb-genesis, KZG-launched networks.
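
For illustration, a self-contained sketch of the underflow concern with a saturating variant (the retention window and names are illustrative, not the codebase's):

type Epoch = uint64

const dataColumnRetentionEpochs = 4096'u64  # placeholder window

proc pruneEpochUnsafe(current: Epoch): Epoch =
  # unsigned subtraction wraps around when called too close to genesis
  current - dataColumnRetentionEpochs

proc pruneEpochSafe(current: Epoch): Epoch =
  if current < dataColumnRetentionEpochs:
    Epoch(0)  # nothing old enough to prune yet
  else:
    current - dataColumnRetentionEpochs

when isMainModule:
  doAssert pruneEpochSafe(Epoch(10)) == Epoch(0)
  # the unchecked form silently wraps to a huge epoch number instead
  doAssert pruneEpochUnsafe(Epoch(10)) > Epoch(1_000_000_000)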

@@ -1444,6 +1463,7 @@ proc onSlotEnd(node: BeaconNode, slot: Slot) {.async.} =
  # the pruning for later
  node.dag.pruneHistory()
  node.pruneBlobs(slot)
  node.pruneDataColumns(slot)
Contributor commented:

And here, it will trigger the underflow in a new devnet. This code won't, I think, be hit during syncing, so existing testnets won't have a problem, but new devnets will

@agnxsh agnxsh changed the title from "req/res domain for peerdas" to "EIP7594: p2p-interface" Jul 16, 2024
@@ -548,7 +548,7 @@ proc syncStep[A, B](man: SyncManager[A, B], index: int, peer: A)
withBlck(blck[]):
  when consensusFork >= ConsensusFork.Deneb:
    if forkyBlck.message.body.blob_kzg_commitments.len > 0:
-     hasColumns = true
+     hasColumns = true 
Contributor commented:

adds trailing space

# of valid peerIds
# validPeerIds.add(peer.peerId)
# validPeerIds
return true
Contributor commented:

Can use just true here
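
A tiny illustration of the style nit: in Nim the last expression of a proc is its result, so the explicit return isn't needed (the proc name here is made up):

proc alwaysAccept(): bool =
  # ... peer filtering elided ...
  true

doAssert alwaysAccept()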

# peer.updateScore(PeerScoreNoValues)
man.queue.push(req)
# warn "Received data columns is inconsistent",
# data_columns_map = getShortMap(req, dataColumnData), request = req, msg=groupedDataColumns.error()
Contributor commented:

Come Nim 2.0.10, it should be possible to uncomment this, because the foo.error() won't trigger nim-lang/Nim#23790.

# peer.updateScore(PeerScoreNoValues)
man.queue.push(req)
# warn "Received data columns is inconsistent",
# data_columns_map = getShortMap(req, dataColumnData), request = req, msg=groupedDataColumns.error()
Contributor commented:

See also Nim 2.0.10 / nim-lang/Nim#23790

  return
let groupedDataColumns = groupDataColumns(req, blockData, dataColumnData)
if groupedDataColumns.isErr():
  # peer.updateScore(PeerScoreNoValues)
Contributor commented:

I guess these were and remain commented out, so no change, but is this still the Deneb blob back or something else?

agnxsh and others added 16 commits July 23, 2024 12:36
…ssues (#6468)

* reworked some of the das core specs, pr'd to check whether the conflicting type issue is specific to my machine or not

* bumped nim-blscurve to 9c6e80c6109133c0af3025654f5a8820282cff05, same as unstable

* bumped nim-eth2-scenarios, nim-nat-traversal at par with unstable, added more patches, made peerdas devnet branch backward compatible, peerdas passing new ssz tests as per alpha3, disabled electra fixture tests, as the branch hasn't been rebased for a while

* refactor test fixture files

* rm: serializeDataColumn

* refactor: took data columns extracted from blobs during block proposal to the heap

* disable blob broadcast in pd devnet

* fix addBlock in message router

* fix: data column iterator

* added debug checkpoints to check CI

* refactor if else conditions

* add: updated das core specs to alpha 3, and unit tests pass
* save commit, decouples reconstruction and broadcasting

* save progress

* add: reconstruction event loop, previous reconstruction related cleanups
* init: add metadatav3, save progress

* fix import issues

* fix spec version

* fix: metadata_v2 to return altair.MetaData

* update metadata function, backward compatible now