Update EIP-7892: harden the spec with p2p details. #9840
Conversation
EIPS/eip-7892.md
Outdated
| "baseFeeUpdateFraction": 5007716 | ||
| }, | ||
| "1740693335": { | ||
| "12000000": { |
This formatting is not what we have agreed on. It would be something more like this:

```json
"blobSchedule": {
    "cancun": {
        "target": 3,
        "max": 6,
        "baseFeeUpdateFraction": 3338477
    },
    "prague": {
        "target": 6,
        "max": 9,
        "baseFeeUpdateFraction": 5007716
    },
    "osaka": {
        "target": 6,
        "max": 9,
        "baseFeeUpdateFraction": 5007716
    },
    "bpo1": {
        "target": 9,
        "max": 12,
        "baseFeeUpdateFraction": 5007716
    },
    "bpo2": {
        "target": 4,
        "max": 6,
        "baseFeeUpdateFraction": 5007716
    }
```

...as well as:

```json
"pragueTime": 0,
"osakaTime": 1748609665,
"bpo1Time": 1748609665,
"bpo2Time": 1748610049,
...
```
Hm, I was going by @jtraglia's comment that these weren't timestamps.
I do recall seeing this format discussed somewhere, but I can't find it. Could you link to it, @barnabasbusa?
Thanks, @barnabasbusa. I'll assume there's good consensus on that format, so we can take the opportunity to integrate it into the EIP here.
@barnabasbusa can you pls check I got the spec right?
I still don’t see the `bpo1` and `bpo1Time` fields.
GitHub UI glitch? They were introduced in this commit: 95d580a (#9840).
Edit: up for debate. Follow here: https://discord.com/channels/595666850260713488/1372894564087758949

Regarding this section:

```json
"prague": {
    "target": 6,
    "max": 9,
    "baseFeeUpdateFraction": 5007716
},
"osaka": {
    "target": 6,
    "max": 9,
    "baseFeeUpdateFraction": 5007716
},
```

We do not need to define osaka here if it's the same, only when there's a change.
EIPS/eip-7892.md
Outdated
```python
class BlobScheduleEntry(NamedTuple):
    epoch: Epoch
    max_blobs_per_block: int  # uint32
```
Currently, `max_blobs_per_block` is a `uint64` in the specs, though it could be a `uint16` if we wanted, since `MAX_BLOB_COMMITMENTS_PER_BLOCK` is 4,096 and unlikely to ever be greater than 65,535.
I'd prefer to align the type here with the existing specs (`uint64`). However, that makes applying the bitmask unsafe, so to bring safety back we'd need to assert that it can be coerced into a uint32. Are we good with that?
Alternatively we could hash the value and bitmask a slice of it, but IMO not worth it.
Yeah this is tricky. I suppose an extra assert would be fine.
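A minimal sketch of that extra guard (the helper name is hypothetical; the same assert appears in the spec excerpt further down):

```python
def blob_count_mask(max_blobs_per_block: int) -> bytes:
    # The field is a uint64 in the specs; assert it fits in 32 bits
    # so the 4-byte mask below cannot silently truncate it.
    assert 0 <= max_blobs_per_block <= 0xFFFFFFFF
    return max_blobs_per_block.to_bytes(4, 'big')
```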
EIPS/eip-7892.md
Outdated
```python
blob_params = next(
    (entry for entry in sorted_schedule if current_epoch >= entry.epoch),
    None
)
```
Using `next` is pythonic, which I like, but in the specs we try not to use language-specific features like this. It makes implementing in other languages a little more difficult/confusing. I would recommend using a more traditional method for this, e.g. a for-loop.
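For illustration, a for-loop version of the lookup quoted above (a sketch; it assumes `sorted_schedule` is ordered by descending epoch, as the `next()` form implies):

```python
blob_params = None
for entry in sorted_schedule:
    # The first entry activating at or before the current epoch is the active one.
    if current_epoch >= entry.epoch:
        blob_params = entry
        break
```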
Makes sense. Do we have these conventions / standards written up somewhere, by the way?
Nah, to my knowledge this isn't written down anywhere.
EIPS/eip-7892.md
Outdated
```python
class BlobScheduleEntry(NamedTuple):
    epoch: Epoch
    max_blobs_per_block: int  # uint64, aligning with the type of MAX_BLOBS_PER_BLOCK.
```
Let's just use `uint64` in the type hint.

```diff
- max_blobs_per_block: int  # uint64, aligning with the type of MAX_BLOBS_PER_BLOCK.
+ max_blobs_per_block: uint64
```
Done. Clarifying that we can use this type because we assume https://pypi.org/project/remerkleable/ as a prelude.
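For example, with remerkleable as the prelude, the entry could look like this (a sketch; the `Epoch` alias is assumed from the consensus specs):

```python
from typing import NamedTuple

from remerkleable.basic import uint64

Epoch = uint64  # alias, as in the consensus specs

class BlobScheduleEntry(NamedTuple):
    epoch: Epoch
    max_blobs_per_block: uint64
```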
jtraglia left a comment:
LGTM 👍 thanks @raulk!
```python
# This check enables us to roll out the BPO mechanism without a concurrent parameter change.
if blob_params is None:
    return ForkDigest(base_digest)
# Safely bitmask blob parameters into the digest.
assert 0 <= blob_params.max_blobs_per_block <= 0xFFFFFFFF
mask = blob_params.max_blobs_per_block.to_bytes(4, 'big')
masked_digest = bytes(a ^ b for a, b in zip(base_digest, mask))
```
To clarify, this wouldn’t have worked if we had tried to backport Electra and Deneb into the blob schedule?
The network could easily split if a node rolled this out while others didn’t. That’s why we did this PR.
That PR probably removed the backport requirement to help alleviate implementation effort for some clients (maybe for the ones whose fork handling logic made the backport painful). But to your point: backporting past forks here would be problematic while those forks were still active. Once they become historical, it doesn't matter any longer. If that's a concern (e.g. if we want to have backport optionality), we can gate the masking logic behind an epoch gate.
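A hypothetical sketch of such a gate (the constant and helper are illustrative, not part of the spec):

```python
BPO_MASKING_ACTIVATION_EPOCH = 123456  # illustrative value

def maybe_mask_digest(base_digest: bytes, current_epoch: int,
                      max_blobs_per_block: int) -> bytes:
    # Leave digests untouched before the gate, so backported (historical)
    # forks keep their original digests.
    if current_epoch < BPO_MASKING_ACTIVATION_EPOCH:
        return base_digest
    assert 0 <= max_blobs_per_block <= 0xFFFFFFFF
    mask = max_blobs_per_block.to_bytes(4, 'big')
    return bytes(a ^ b for a, b in zip(base_digest, mask))
```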
(And yeah, bugs in fork_digest can cause network partitions.)
Thanks for the reply. I thought I had found a bug here, then realized that Electra was removed from the BPO schedule. :)
I'll remember to remove them for Prysm.
Looks great! I had only a couple of thoughts on the current construction:
BPOs will change the fork digest on the EL side. On the EL side, each BPO will be treated as a separate fork, so even if BPO3 == BPO5, they would have different fork IDs.
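For context, the EL behavior follows EIP-2124's rolling-checksum fork identifier, which is why two BPO forks with identical blob parameters still yield distinct fork IDs. A sketch of that mechanism (the helper is illustrative, not from this PR):

```python
import zlib

def fork_hash(genesis_hash: bytes, passed_fork_activations: list[int]) -> bytes:
    # EIP-2124: start from the CRC32 of the genesis hash, then fold in each
    # passed fork's activation (block number or timestamp) as a big-endian uint64.
    checksum = zlib.crc32(genesis_hash)
    for activation in passed_fork_activations:
        checksum = zlib.crc32(activation.to_bytes(8, 'big'), checksum)
    return checksum.to_bytes(4, 'big')

# Each BPO activation time is folded into the checksum, so BPO3 and BPO5
# produce different fork hashes even if their blob parameters are equal.
```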
@ethDreamer Re (2): Good flag. I don't see any major downsides in that edge case. Those nodes are technically running software that's again compatible with the current consensus rules, so the payloads routed via the "recycled" topics would be valid according to the topic validation rules. They just wouldn't be able to sync the interim portion of the chain (assuming blocks with higher blob counts were effectively included). Re (1): Looks like @barnabasbusa already covered it.
ethDreamer left a comment:
Amazing work @raulk! Thank you so much for your hard work on this!
eth-bot left a comment:
All Reviewers Have Approved; Performing Automatic Merge...
Closes #7467. This PR primarily addresses [the P2P changes](ethereum/EIPs#9840) in [fusaka-devnet-2](https://fusaka-devnet-2.ethpandaops.io/). Specifically:

* [the new `nfd` parameter added to the `ENR`](ethereum/EIPs#9840)
* [the modified `compute_fork_digest()` changes for every BPO fork](ethereum/EIPs#9840)

90% of this PR was absolutely hacked together as fast as I could during Berlinterop while running between Glamsterdam debates. Luckily, it seems to work, but I was unable to be as careful in avoiding bugs as I usually am. I've cleaned up the things *I remember* wanting to come back and have a closer look at, but I'm still working on this. Progress:

* [x] get it working on `fusaka-devnet-2`
* [ ] [*optional* disconnect from peers with incorrect `nfd` at the fork boundary](ethereum/consensus-specs#4407) - can be addressed in a future PR if necessary
* [x] first-pass clean-up
* [x] fix up all the broken tests
* [x] final self-review
* [x] more thorough review from people more familiar with the affected code
Supersedes #9818.
Based on conclusions captured in this comment, confirmed in ACDC #158.
To reconcile BPO forks at the p2p level, we make changes to:

* the `ENR`, adding the new `nfd` field; and
* `compute_fork_digest`, to apply an XOR mask mixing in the currently-active `MAX_BLOBS_PER_BLOCK`.

Furthermore, we require that all future blob parameter changes (even those happening at regular forks) be applied through the blob schedule. This is necessary to guarantee consistency and correct calculation of the updated `ForkDigest` in all circumstances.
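Putting the review excerpts together, a minimal sketch of the masked digest computation (names follow the snippets quoted above; this is illustrative, not the normative spec text):

```python
from typing import NamedTuple, Optional

ForkDigest = bytes  # 4-byte digest, as in the consensus specs

class BlobScheduleEntry(NamedTuple):
    epoch: int  # uint64
    max_blobs_per_block: int  # uint64

def compute_masked_fork_digest(base_digest: bytes,
                               schedule: list[BlobScheduleEntry],
                               current_epoch: int) -> ForkDigest:
    # Find the active entry: the latest one activating at or before current_epoch.
    blob_params: Optional[BlobScheduleEntry] = None
    for entry in sorted(schedule, key=lambda e: e.epoch, reverse=True):
        if current_epoch >= entry.epoch:
            blob_params = entry
            break
    # An empty (or not-yet-active) schedule rolls out the BPO mechanism
    # without a concurrent parameter change.
    if blob_params is None:
        return ForkDigest(base_digest)
    # Safely bitmask blob parameters into the digest.
    assert 0 <= blob_params.max_blobs_per_block <= 0xFFFFFFFF
    mask = blob_params.max_blobs_per_block.to_bytes(4, 'big')
    return ForkDigest(bytes(a ^ b for a, b in zip(base_digest, mask)))
```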