
Conversation

Member

@raulk raulk commented May 30, 2025

Supersedes #9818.

Based on conclusions captured in this comment, confirmed in ACDC #158.


To reconcile BPO forks at the p2p level, we make changes to:

  • compute_fork_digest, to apply an XOR mask mixing in the currently-active MAX_BLOBS_PER_BLOCK.
  • The ENR record, to preserve its ability to accurately convey the next fork.

Furthermore, we require that all future blob parameter changes (even those happening at regular forks) be applied through the blob schedule. This is necessary to guarantee consistency and correct calculation of the updated ForkDigest in all circumstances.
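
For illustration, here is a minimal, self-contained sketch of the masking approach described above. It is illustrative only: the function name, the standalone `base_digest` argument, and the schedule-lookup loop are assumptions for this sketch; the EIP's actual `compute_fork_digest` derives the 4-byte base digest from the fork version and genesis validators root, and the masking itself mirrors the spec excerpt quoted later in this thread.

```python
from typing import List, NamedTuple, Optional

class BlobScheduleEntry(NamedTuple):
    epoch: int                # epoch at which the new parameters activate
    max_blobs_per_block: int  # MAX_BLOBS_PER_BLOCK in force from that epoch

def masked_fork_digest(base_digest: bytes,
                       blob_schedule: List[BlobScheduleEntry],
                       current_epoch: int) -> bytes:
    # Pick the most recent schedule entry that is already active.
    blob_params: Optional[BlobScheduleEntry] = None
    for entry in sorted(blob_schedule, key=lambda e: e.epoch, reverse=True):
        if current_epoch >= entry.epoch:
            blob_params = entry
            break
    # No active entry: keep the plain digest, so the mechanism can ship
    # without a concurrent blob parameter change.
    if blob_params is None:
        return base_digest
    # Safely bitmask the currently-active MAX_BLOBS_PER_BLOCK into the digest.
    assert 0 <= blob_params.max_blobs_per_block <= 0xFFFFFFFF
    mask = blob_params.max_blobs_per_block.to_bytes(4, 'big')
    return bytes(a ^ b for a, b in zip(base_digest, mask))
```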

@raulk raulk requested a review from eth-bot as a code owner May 30, 2025 11:00
@github-actions github-actions bot added c-update Modifies an existing proposal s-draft This EIP is a Draft t-informational labels May 30, 2025
Collaborator

eth-bot commented May 30, 2025

✅ All reviewers have approved.

@eth-bot eth-bot added the a-review Waiting on author to review label May 30, 2025
@eth-bot eth-bot changed the title update EIP-7892: harden the spec with p2p details. Update EIP-7892: harden the spec with p2p details. May 30, 2025
EIPS/eip-7892.md Outdated
"baseFeeUpdateFraction": 5007716
},
"1740693335": {
"12000000": {
Member

@barnabasbusa barnabasbusa May 30, 2025

This formatting is not what we have agreed on.

It would be something more like this:

"blobSchedule": {
      "cancun": {
        "target": 3,
        "max": 6,
        "baseFeeUpdateFraction": 3338477
      },
      "prague": {
        "target": 6,
        "max": 9,
        "baseFeeUpdateFraction": 5007716
      },
      "osaka": {
        "target": 6,
        "max": 9,
        "baseFeeUpdateFraction": 5007716
      },
      "bpo1": {
        "target": 9,
        "max": 12,
        "baseFeeUpdateFraction": 5007716
      },
      "bpo2": {
        "target": 4,
        "max": 6,
        "baseFeeUpdateFraction": 5007716
      }
....

as well as

     "pragueTime": 0,
     "osakaTime": 1748609665,
     "bpo1Time": 1748609665,
     "bpo2Time": 1748610049,
...

Member Author

Hm, I was going by @jtraglia's comment that these weren't timestamps.

I do recall seeing this format discussed somewhere, but I can't find it. Could you link to it, @barnabasbusa?

Member

Member Author

Thanks, @barnabasbusa. I'll assume there's good consensus on that format, so we can take the opportunity to integrate it in the EIP here.

Member Author

@barnabasbusa can you pls check I got the spec right?

Member

I still don’t see bpo1 and bpo1Time fields

Member Author

GitHub UI glitch? They were introduced in this commit: 95d580a (#9840).

Member

@jtraglia jtraglia Jun 2, 2025

Edit: up to debate. Follow here: https://discord.com/channels/595666850260713488/1372894564087758949

Regarding this section:

      "prague": {
        "target": 6,
        "max": 9,
        "baseFeeUpdateFraction": 5007716
      },
      "osaka": {
        "target": 6,
        "max": 9,
        "baseFeeUpdateFraction": 5007716
      },

We do not need to define osaka here if it's the same. Only when there's a change.

EIPS/eip-7892.md Outdated
```python
class BlobScheduleEntry(NamedTuple):
    epoch: Epoch
    max_blobs_per_block: int  # uint32
```
Member

Currently, max_blobs_per_block is a uint64 in the specs though it could be a uint16 if we wanted, since MAX_BLOB_COMMITMENTS_PER_BLOCK is 4,096 and unlikely to ever be greater than 65,535.

Member Author

I'd prefer to align the type here with the existing specs (uint64). However, that makes applying the bitmask unsafe, so to bring safety back we'd need to assert it can be coerced into a uint32. Are we good with that?

Member Author

Alternatively we could hash the value and bitmask a slice of it, but IMO not worth it.
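
For reference, a minimal sketch of that rejected alternative (the function name and parameters are illustrative, not from the EIP): hashing first means any uint64 value can be mixed in without a range assertion.

```python
import hashlib

def mask_via_hash(base_digest: bytes, max_blobs_per_block: int) -> bytes:
    # Hash the value, then XOR a 4-byte slice of the hash into the digest.
    mask = hashlib.sha256(max_blobs_per_block.to_bytes(8, 'big')).digest()[:4]
    return bytes(a ^ b for a, b in zip(base_digest, mask))
```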

Member

Yeah this is tricky. I suppose an extra assert would be fine.

EIPS/eip-7892.md Outdated
Comment on lines 141 to 144
```python
blob_params = next(
    (entry for entry in sorted_schedule if current_epoch >= entry.epoch),
    None
)
```
Member

Using next is pythonic, which I like, but in the specs we try not to use language specific features like this. It makes implementing in other languages a little more difficult/confusing. I would recommend using a more traditional method for this, eg a for-loop.
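
For illustration, the same lookup written as a plain for-loop (assuming `sorted_schedule` is ordered by descending epoch, as in the snippet above):

```python
blob_params = None
for entry in sorted_schedule:
    if current_epoch >= entry.epoch:
        blob_params = entry
        break
```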

Member Author

Makes sense. Do we have these conventions / standards written up somewhere, by the way?

Member

Nah, to my knowledge this isn't written down anywhere.

EIPS/eip-7892.md Outdated
```python
class BlobScheduleEntry(NamedTuple):
    epoch: Epoch
    max_blobs_per_block: int  # uint64, aligning with the type of MAX_BLOBS_PER_BLOCK.
```
Member

Let's just use uint64 in the type hint.

Suggested change:

```diff
-    max_blobs_per_block: int  # uint64, aligning with the type of MAX_BLOBS_PER_BLOCK.
+    max_blobs_per_block: uint64
```

Member Author

Done. Clarifying that we can use this type because we assume https://pypi.org/project/remerkleable/ as a prelude.

Member

@jtraglia jtraglia left a comment

LGTM 👍 thanks @raulk!

Comment on lines +154 to +161
```python
# This check enables us to roll out the BPO mechanism without a concurrent parameter change.
if blob_params is None:
    return ForkDigest(base_digest)
# Safely bitmask blob parameters into the digest.
assert 0 <= blob_params.max_blobs_per_block <= 0xFFFFFFFF
mask = blob_params.max_blobs_per_block.to_bytes(4, 'big')
masked_digest = bytes(a ^ b for a, b in zip(base_digest, mask))
```
Contributor

To clarify: this wouldn't have worked if we had tried to backport Electra and Deneb into the blob schedule?
The network could easily split if one node rolled this out while others didn't. That's why we did this PR.

Member Author

@raulk raulk May 30, 2025

That PR probably removed the backport requirement to help alleviate implementation effort for some clients (maybe for the ones whose fork handling logic made the backport painful). But to your point: backporting past forks here would be problematic while those forks were still active. Once they become historical, it doesn't matter any longer. If that's a concern (eg if we want to have backport optionality), we can gate the masking logic behind an epoch gate.
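
A rough sketch of what such a gate could look like (the `BPO_ACTIVATION_EPOCH` constant and the function name are hypothetical, not part of the EIP as merged):

```python
BPO_ACTIVATION_EPOCH = 123456  # hypothetical epoch before which masking is skipped

def apply_blob_mask(base_digest: bytes, mask: bytes, current_epoch: int) -> bytes:
    # Leave digests for pre-gate epochs untouched so that backported
    # historical forks keep their original digests.
    if current_epoch < BPO_ACTIVATION_EPOCH:
        return base_digest
    return bytes(a ^ b for a, b in zip(base_digest, mask))
```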

Member Author

(And yeah, bugs in fork_digest can cause network partitions.)

Contributor

Thanks for the reply. I thought I had found a bug here, then realized Electra was removed from the BPO schedule :)
I'll remember to remove them for Prysm.

@ethDreamer
Contributor

Looks great! I had only a couple of thoughts on the current construction:

  1. If we only mix in the max blobs (because that's all the consensus layer can access), then a BPO fork that only changes the target would not change the fork digest. This is probably fine as such a fork could be implemented entirely on the execution layer.
  2. If we end up reverting to a lower max blob count after previously raising it, this construction would cause the fork digest to match an earlier fork's digest (see the sketch after this list). I would think this is also probably fine: only a minority of nodes fail to update for new forks, and only a minority of those would hang around long enough to also miss the next fork. And this scenario is a rare case to begin with, so I'm not that worried about it.
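
A toy illustration of the revert scenario in point 2 (assuming the 4-byte base digest stays the same across the BPO forks involved): reverting to an earlier max value reproduces the earlier digest bit-for-bit.

```python
def mask_digest(base_digest: bytes, max_blobs: int) -> bytes:
    mask = max_blobs.to_bytes(4, 'big')
    return bytes(a ^ b for a, b in zip(base_digest, mask))

base = bytes.fromhex("6a95a1a9")        # example 4-byte base digest
digest_max6 = mask_digest(base, 6)      # fork with max = 6
digest_max9 = mask_digest(base, 9)      # BPO raises max to 9
digest_reverted = mask_digest(base, 6)  # later BPO reverts max to 6
assert digest_reverted == digest_max6   # collides with the earlier fork's digest
```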

@barnabasbusa
Member

BPOs will change the fork digest on the EL side.

On the EL side, each BPO will be treated as a separate fork, so even if BPO3 == BPO5 they would have different fork IDs.

Member Author

raulk commented Jun 2, 2025

@ethDreamer Re (2): Good flag. I don't see any major downsides in that edge case. Those nodes are technically running software that's again compatible with the current consensus rules. So the payloads routed via the "recycled" topics would be valid according to the topic validation rules. They just wouldn't be able to sync the interim portion of the chain (assuming blocks with higher blob counts were effectively included).

Re (1): Looks like @barnabasbusa already covered it.

Contributor

@ethDreamer ethDreamer left a comment

Amazing work @raulk! Thank you so much for your hard work on this!

@eth-bot eth-bot enabled auto-merge (squash) June 3, 2025 12:23
Collaborator

@eth-bot eth-bot left a comment

All Reviewers Have Approved; Performing Automatic Merge...

@eth-bot eth-bot merged commit f37240a into ethereum:master Jun 3, 2025
22 checks passed
mergify bot pushed a commit to sigp/lighthouse that referenced this pull request Jul 10, 2025
Closes #7467.

This PR primarily addresses [the P2P changes](ethereum/EIPs#9840) in [fusaka-devnet-2](https://fusaka-devnet-2.ethpandaops.io/). Specifically:

* [the new `nfd` parameter added to the `ENR`](ethereum/EIPs#9840)
* [the modified `compute_fork_digest()` changes for every BPO fork](ethereum/EIPs#9840)

90% of this PR was hacked together as fast as possible during the Berlinterop while running between Glamsterdam debates. Luckily, it seems to work. But I was unable to be as careful in avoiding bugs as I usually am. I've cleaned up the things *I remember* wanting to come back and have a closer look at. But still working on this.

Progress:
* [x] get it working on `fusaka-devnet-2`
* [ ] [*optional* disconnect from peers with incorrect `nfd` at the fork boundary](ethereum/consensus-specs#4407) - Can be addressed in a future PR if necessary
* [x] first pass clean-up
* [x] fix up all the broken tests
* [x] final self-review
* [x] more thorough review from people more familiar with affected code
ethDreamer added a commit to ethDreamer/lighthouse that referenced this pull request Jul 28, 2025