Merged (27 commits; changes shown from 26 commits)
- `41f2322`: modify fork digest to distinguish BPO forks + add entry to ENR. (raulk, Jun 2, 2025)
- `b2e8dba`: simplify compute_fork_digest to reuse get_max_blobs_per_block. (raulk, Jun 3, 2025)
- `803460b`: Merge branch 'dev' into raulk/fulu-latest-bpo (jtraglia, Jun 3, 2025)
- `45b2ee1`: Combine sections (jtraglia, Jun 4, 2025)
- `77d981e`: Move get_max_blobs_per_block before compute_fork_digest (jtraglia, Jun 4, 2025)
- `0a9b17d`: Fix test_compute_fork_digest (jtraglia, Jun 4, 2025)
- `0be632f`: tests: fix README typo. (raulk, Jun 6, 2025)
- `738cca0`: add tests for compute_fork_digest @ fulu. (raulk, Jun 6, 2025)
- `b00d6a1`: address review comments on beacon-chain.md. (raulk, Jun 9, 2025)
- `fc0990b`: address review comments on p2p-interface.md. (raulk, Jun 9, 2025)
- `4c841df`: address review comments in tests. (raulk, Jun 9, 2025)
- `5d21628`: add note about `nfd` pre-upgrade. (raulk, Jun 9, 2025)
- `35fd1df`: Merge branch 'dev' into raulk/fulu-latest-bpo (raulk, Jun 9, 2025)
- `d2e7d5e`: Add "fulu" to "new in" comments (jtraglia, Jun 9, 2025)
- `a3ec2b5`: Join comments into paragraph (jtraglia, Jun 9, 2025)
- `a674c3e`: rename parameters. (raulk, Jun 9, 2025)
- `ae4d000`: Update fork digest tests (jtraglia, Jun 10, 2025)
- `a428187`: Hash epoch & blob limit into digest (jtraglia, Jun 10, 2025)
- `2deef95`: Merge remote-tracking branch 'upstream/dev' into raulk/fulu-latest-bpo (jtraglia, Jun 10, 2025)
- `82bf29b`: Use existing spec functions (jtraglia, Jun 10, 2025)
- `0b6cd8e`: Update comments (jtraglia, Jun 10, 2025)
- `c910bf1`: Rename to fork epoch (jtraglia, Jun 10, 2025)
- `e5dc7a2`: Rename version to fork_version (jtraglia, Jun 10, 2025)
- `353e407`: Apply suggestions from in-person feedback (jtraglia, Jun 10, 2025)
- `a33876a`: Update compute_fork_digest() usage in fulu p2p specs (jtraglia, Jun 10, 2025)
- `d2e1b50`: Fix lint & feedback (jtraglia, Jun 10, 2025)
- `1e79dbc`: Address review feedback (jtraglia, Jun 11, 2025)
84 changes: 83 additions & 1 deletion specs/fulu/beacon-chain.md
@@ -6,6 +6,7 @@

- [Introduction](#introduction)
- [Configuration](#configuration)
- [Blob schedule](#blob-schedule)
- [Beacon chain state transition function](#beacon-chain-state-transition-function)
- [Block processing](#block-processing)
- [Execution payload](#execution-payload)
@@ -15,6 +16,9 @@
- [`BeaconState`](#beaconstate)
- [Helper functions](#helper-functions)
- [Misc](#misc)
- [New `BlobParameters`](#new-blobparameters)
- [New `get_blob_parameters`](#new-get_blob_parameters)
- [Modified `compute_fork_digest`](#modified-compute_fork_digest)
- [New `compute_proposer_indices`](#new-compute_proposer_indices)
- [Beacon state accessors](#beacon-state-accessors)
- [Modified `get_beacon_proposer_index`](#modified-get_beacon_proposer_index)
@@ -32,6 +36,23 @@ and is under active development.

## Configuration

### Blob schedule

*[New in Fulu:EIP7892]* This schedule defines the maximum blobs per block limit
for a given epoch.

There MUST NOT exist multiple blob schedule entries with the same epoch value.
The maximum blobs per block limit for blob schedule entries MUST be less than
or equal to `MAX_BLOB_COMMITMENTS_PER_BLOCK`. The blob schedule entries SHOULD
be sorted by epoch in ascending order. The blob schedule MAY be empty.

*Note*: The blob schedule is to be determined.

<!-- list-of-records:blob_schedule -->

| Epoch | Max Blobs Per Block | Description |
| ----- | ------------------- | ----------- |
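The schedule constraints above can be sketched as a validation check. This is a minimal illustration, not spec code: the schedule shape (a list of `{"EPOCH": ..., "MAX_BLOBS_PER_BLOCK": ...}` records) mirrors the list-of-records format, and the `MAX_BLOB_COMMITMENTS_PER_BLOCK` value and example entries are placeholders.

```python
# Placeholder value for illustration; the real constant lives in the preset.
MAX_BLOB_COMMITMENTS_PER_BLOCK = 4096


def validate_blob_schedule(schedule: list) -> None:
    epochs = [entry["EPOCH"] for entry in schedule]
    # MUST NOT: multiple entries with the same epoch value.
    assert len(epochs) == len(set(epochs)), "duplicate epoch in blob schedule"
    # MUST: each limit is <= MAX_BLOB_COMMITMENTS_PER_BLOCK.
    for entry in schedule:
        assert entry["MAX_BLOBS_PER_BLOCK"] <= MAX_BLOB_COMMITMENTS_PER_BLOCK
    # SHOULD: entries sorted by epoch in ascending order.
    assert epochs == sorted(epochs), "blob schedule not sorted by epoch"


# The schedule MAY be empty.
validate_blob_schedule([])
validate_blob_schedule([
    {"EPOCH": 400000, "MAX_BLOBS_PER_BLOCK": 12},  # illustrative entries only
    {"EPOCH": 450000, "MAX_BLOBS_PER_BLOCK": 24},
])
```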

## Beacon chain state transition function

### Block processing
@@ -53,7 +74,10 @@ def process_execution_payload(
     # Verify timestamp
     assert payload.timestamp == compute_timestamp_at_slot(state, state.slot)
     # [Modified in Fulu:EIP7892] Verify commitments are under limit
-    assert len(body.blob_kzg_commitments) <= get_max_blobs_per_block(get_current_epoch(state))
+    assert (
+        len(body.blob_kzg_commitments)
+        <= get_blob_parameters(get_current_epoch(state)).max_blobs_per_block
+    )
     # Verify the execution payload is valid
     versioned_hashes = [
         kzg_commitment_to_versioned_hash(commitment) for commitment in body.blob_kzg_commitments
@@ -151,6 +175,64 @@ class BeaconState(Container):

### Misc

#### New `BlobParameters`

```python
@dataclass
class BlobParameters:
    epoch: Epoch
    max_blobs_per_block: uint64
```

#### New `get_blob_parameters`

```python
def get_blob_parameters(epoch: Epoch) -> BlobParameters:
    """
    Return the blob parameters at a given epoch.
    """
    for entry in sorted(BLOB_SCHEDULE, key=lambda e: e["EPOCH"], reverse=True):
        if epoch >= entry["EPOCH"]:
            return BlobParameters(entry["EPOCH"], entry["MAX_BLOBS_PER_BLOCK"])
    return BlobParameters(ELECTRA_FORK_EPOCH, MAX_BLOBS_PER_BLOCK_ELECTRA)
```
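To illustrate the lookup semantics, here is a self-contained sketch with a hypothetical schedule. The `ELECTRA_FORK_EPOCH` and `MAX_BLOBS_PER_BLOCK_ELECTRA` values below are placeholders for illustration, not the configured constants; the point is the reverse-sorted scan and the Electra fallback.

```python
from dataclasses import dataclass

ELECTRA_FORK_EPOCH = 364032      # placeholder value for illustration
MAX_BLOBS_PER_BLOCK_ELECTRA = 9  # placeholder value for illustration
BLOB_SCHEDULE = [                # hypothetical schedule
    {"EPOCH": 400000, "MAX_BLOBS_PER_BLOCK": 12},
    {"EPOCH": 450000, "MAX_BLOBS_PER_BLOCK": 24},
]


@dataclass
class BlobParameters:
    epoch: int
    max_blobs_per_block: int


def get_blob_parameters(epoch: int) -> BlobParameters:
    # Scan the schedule from the latest entry backwards; the first entry
    # whose epoch is <= the requested epoch is the one in force.
    for entry in sorted(BLOB_SCHEDULE, key=lambda e: e["EPOCH"], reverse=True):
        if epoch >= entry["EPOCH"]:
            return BlobParameters(entry["EPOCH"], entry["MAX_BLOBS_PER_BLOCK"])
    # Before any scheduled entry, the Electra-era parameters apply.
    return BlobParameters(ELECTRA_FORK_EPOCH, MAX_BLOBS_PER_BLOCK_ELECTRA)


assert get_blob_parameters(399999).max_blobs_per_block == 9   # pre-schedule fallback
assert get_blob_parameters(400000).max_blobs_per_block == 12  # boundary is inclusive
assert get_blob_parameters(500000) == BlobParameters(450000, 24)
```

Note that an unsorted schedule still resolves correctly here, since the function sorts before scanning; the SHOULD-be-sorted requirement is about config hygiene, not lookup correctness.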

#### Modified `compute_fork_digest`

*Note:* The `compute_fork_digest` helper is updated to account for
Blob-Parameters-Only forks. In addition, the `fork_version` parameter has been
removed; the fork version is now derived for the given epoch via
`compute_fork_version`.

```python
def compute_fork_digest(
    genesis_validators_root: Root,
    epoch: Epoch,  # [New in Fulu:EIP7892]
) -> ForkDigest:
    """
    Return the 4-byte fork digest for the fork version at ``epoch`` and
    ``genesis_validators_root``, XOR'd with the hash of the blob parameters
    for ``epoch``.

    This is a digest primarily used for domain separation on the p2p layer.
    Four bytes suffice for practical separation of forks/chains.
    """
    fork_version = compute_fork_version(epoch)
    base_digest = compute_fork_data_root(fork_version, genesis_validators_root)
    blob_parameters = get_blob_parameters(epoch)

    # Bitmask digest with hash of blob parameters
    return ForkDigest(
        bytes(
            xor(
                base_digest,
                hash(
                    uint_to_bytes(uint64(blob_parameters.epoch))
                    + uint_to_bytes(uint64(blob_parameters.max_blobs_per_block))
                ),
            )
        )[:4]
    )
```
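The XOR-and-truncate construction can be demonstrated standalone. This sketch uses `hashlib.sha256` in place of the spec's `hash`, serializes the two `uint64` fields little-endian as SSZ's `uint_to_bytes` does, and feeds in placeholder values for the base digest and blob parameters.

```python
import hashlib


def compute_bpo_fork_digest(base_digest: bytes, bp_epoch: int, max_blobs: int) -> bytes:
    # SSZ serializes uint64 values little-endian, matching uint_to_bytes.
    bp_bytes = bp_epoch.to_bytes(8, "little") + max_blobs.to_bytes(8, "little")
    mask = hashlib.sha256(bp_bytes).digest()
    # XOR the 32-byte base digest with the hash, then keep the first 4 bytes.
    return bytes(a ^ b for a, b in zip(base_digest, mask))[:4]


base = hashlib.sha256(b"placeholder fork data root").digest()
d1 = compute_bpo_fork_digest(base, 400000, 12)
d2 = compute_bpo_fork_digest(base, 450000, 24)
assert len(d1) == 4 and d1 != d2
```

The design consequence is that two BPO forks sharing a `fork_version` still produce distinct digests, because the blob-parameters hash differs; this is what lets BPO forks be distinguished on the p2p layer without minting a new fork version.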

#### New `compute_proposer_indices`

```python
```
32 changes: 0 additions & 32 deletions specs/fulu/das-core.md
@@ -10,13 +10,11 @@
- [Configuration](#configuration)
- [Data size](#data-size)
- [Custody setting](#custody-setting)
- [Blob schedule](#blob-schedule)
- [Containers](#containers)
- [`DataColumnSidecar`](#datacolumnsidecar)
- [`MatrixEntry`](#matrixentry)
- [Helper functions](#helper-functions)
- [`get_custody_groups`](#get_custody_groups)
- [`get_max_blobs_per_block`](#get_max_blobs_per_block)
- [`compute_columns_for_custody_group`](#compute_columns_for_custody_group)
- [`compute_matrix`](#compute_matrix)
- [`recover_matrix`](#recover_matrix)
@@ -70,23 +68,6 @@ specification.
| `NUMBER_OF_CUSTODY_GROUPS` | `128` | Number of custody groups available for nodes to custody |
| `CUSTODY_REQUIREMENT` | `4` | Minimum number of custody groups an honest node custodies and serves samples from |

### Blob schedule

*[New in EIP7892]* This schedule defines the maximum blobs per block limit for a
given epoch.

There MUST NOT exist multiple blob schedule entries with the same epoch value.
The maximum blobs per block limit for blob schedules entries MUST be less than
or equal to `MAX_BLOB_COMMITMENTS_PER_BLOCK`. The blob schedule entries SHOULD
be sorted by epoch in ascending order. The blob schedule MAY be empty.

*Note*: The blob schedule is to be determined.

<!-- list-of-records:blob_schedule -->

| Epoch | Max Blobs Per Block | Description |
| ----- | ------------------- | ----------- |

### Containers

#### `DataColumnSidecar`
@@ -137,19 +118,6 @@ def get_custody_groups(node_id: NodeID, custody_group_count: uint64) -> Sequence
    return sorted(custody_groups)
```

### `get_max_blobs_per_block`

```python
def get_max_blobs_per_block(epoch: Epoch) -> uint64:
    """
    Return the maximum number of blobs that can be included in a block for a given epoch.
    """
    for entry in sorted(BLOB_SCHEDULE, key=lambda e: e["EPOCH"], reverse=True):
        if epoch >= entry["EPOCH"]:
            return entry["MAX_BLOBS_PER_BLOCK"]
    return MAX_BLOBS_PER_BLOCK_ELECTRA
```

### `compute_columns_for_custody_group`

```python
```
43 changes: 35 additions & 8 deletions specs/fulu/p2p-interface.md
@@ -34,6 +34,7 @@
- [The discovery domain: discv5](#the-discovery-domain-discv5)
- [ENR structure](#enr-structure)
- [Custody group count](#custody-group-count)
- [Next fork digest](#next-fork-digest)

<!-- mdformat-toc end -->

@@ -304,13 +305,13 @@ During the deprecation transition period:
**Protocol ID:** `/eth2/beacon_chain/req/data_column_sidecars_by_range/1/`

The `<context-bytes>` field is calculated as
-`context = compute_fork_digest(fork_version, genesis_validators_root)`:
+`context = compute_fork_digest(genesis_validators_root, fork_epoch)`:

<!-- eth2spec: skip -->

-| `fork_version`      | Chunk SSZ type           |
-| ------------------- | ------------------------ |
-| `FULU_FORK_VERSION` | `fulu.DataColumnSidecar` |
+| `fork_epoch`      | Chunk SSZ type           |
+| ----------------- | ------------------------ |
+| `FULU_FORK_EPOCH` | `fulu.DataColumnSidecar` |

Request Content:

@@ -409,13 +410,13 @@ the request.
*[New in Fulu:EIP7594]*

The `<context-bytes>` field is calculated as
-`context = compute_fork_digest(fork_version, genesis_validators_root)`:
+`context = compute_fork_digest(genesis_validators_root, fork_epoch)`:

<!-- eth2spec: skip -->

-| `fork_version`      | Chunk SSZ type           |
-| ------------------- | ------------------------ |
-| `FULU_FORK_VERSION` | `fulu.DataColumnSidecar` |
+| `fork_epoch`      | Chunk SSZ type           |
+| ----------------- | ------------------------ |
+| `FULU_FORK_EPOCH` | `fulu.DataColumnSidecar` |

Request Content:

@@ -494,3 +495,29 @@ column discovery.
| Key | Value |
| ----- | ----------------------------------------------------------------------------------------------------------------- |
| `cgc` | Custody group count, `uint64` big endian integer with no leading zero bytes (`0` is encoded as empty byte string) |

##### Next fork digest

A new entry is added to the ENR under the key `nfd`, short for _next fork
digest_. This entry communicates the digest of the next scheduled fork,
regardless of whether it is a regular or a Blob-Parameters-Only fork.

If no next fork is scheduled, the `nfd` entry contains the default value for the
type (i.e., the SSZ representation of a zero-filled array).

| Key | Value |
| :---- | :---------------------- |
| `nfd` | SSZ Bytes4 `ForkDigest` |

Furthermore, the existing `next_fork_epoch` field under the `eth2` entry MUST be
set to the epoch of the next fork, whether a regular fork, _or a BPO fork_.

When discovering and interfacing with peers, nodes MUST evaluate `nfd`
alongside the `ENRForkID::next_*` fields they already consider under the
`eth2` key, to form a more accurate view of the peer's intended next fork for
the purposes of sustained peering. On a mismatch, the node MUST disconnect
from the peer at the fork boundary, but not sooner.

Nodes unprepared to follow the Fulu fork will be unaware of `nfd` entries.
However, their existing comparison of `eth2` entries (concretely
`next_fork_epoch`) is sufficient to detect upcoming divergence.
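A sketch of the peer-evaluation rule described above. The `ForkView` record and field names here are illustrative stand-ins for the decoded ENR entries, not the wire encoding; `nfd is None` models a pre-Fulu peer that does not publish the key.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ForkView:
    next_fork_epoch: int     # from the eth2 ENR entry (ENRForkID.next_fork_epoch)
    nfd: Optional[bytes]     # next fork digest; None if the peer does not publish it


def must_disconnect(local: ForkView, peer: ForkView, current_epoch: int) -> bool:
    """Return True once a fork-view mismatch requires dropping the peer.

    A mismatch alone does not force an immediate disconnect: the peer is
    dropped at the fork boundary, but not sooner.
    """
    if peer.nfd is None:
        # Pre-Fulu peer: only the eth2 next_fork_epoch comparison is available.
        mismatch = local.next_fork_epoch != peer.next_fork_epoch
    else:
        mismatch = (local.nfd != peer.nfd
                    or local.next_fork_epoch != peer.next_fork_epoch)
    at_boundary = current_epoch >= local.next_fork_epoch
    return mismatch and at_boundary


local = ForkView(next_fork_epoch=500000, nfd=b"\x12\x34\x56\x78")
agreeing = ForkView(next_fork_epoch=500000, nfd=b"\x12\x34\x56\x78")
diverging = ForkView(next_fork_epoch=500000, nfd=b"\xaa\xbb\xcc\xdd")

assert not must_disconnect(local, diverging, current_epoch=499999)  # not sooner
assert must_disconnect(local, diverging, current_epoch=500000)      # at the boundary
assert not must_disconnect(local, agreeing, current_epoch=500000)
```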
> **Review comment** on lines +522 to +523 (Member): does any client actually
> implement that check right now? I don't see how that works in practice since
> if you publish a release that has FULU_FORK_EPOCH set and other nodes in the
> network haven't upgraded yet then the next_fork_epoch wouldn't match with
> those nodes.
>
> Or do clients ignore this check if next_fork_epoch is still set to
> 18446744073709551615 in either their config or the ENR of the peer?
2 changes: 1 addition & 1 deletion tests/core/pyspec/README.md
@@ -33,7 +33,7 @@ Or, to run a specific test function specify `k=<test-name>`:
make test k=test_verify_kzg_proof
```

-Or, to run a specific test function under a single fork specify `k=<test-name>`:
+Or, to run all tests under a single fork specify `fork=<name>`:

```shell
make test fork=phase0
```
@@ -85,7 +85,7 @@ def test_blob_kzg_commitments_merkle_proof__random_block_1(spec, state):
 @with_fulu_and_later
 @spec_state_test
 def test_blob_kzg_commitments_merkle_proof__multiple_blobs(spec, state):
-    blob_count = spec.get_max_blobs_per_block(spec.get_current_epoch(state)) // 2
+    blob_count = spec.get_blob_parameters(spec.get_current_epoch(state)).max_blobs_per_block // 2
     rng = random.Random(2222)
     yield from _run_blob_kzg_commitments_merkle_proof_test(
         spec, state, rng=rng, blob_count=blob_count
@@ -96,7 +96,7 @@ def test_blob_kzg_commitments_merkle_proof__multiple_blobs(spec, state):
 @with_fulu_and_later
 @spec_state_test
 def test_blob_kzg_commitments_merkle_proof__max_blobs(spec, state):
-    max_blobs = spec.get_max_blobs_per_block(spec.get_current_epoch(state))
+    max_blobs = spec.get_blob_parameters(spec.get_current_epoch(state)).max_blobs_per_block
     rng = random.Random(3333)
     yield from _run_blob_kzg_commitments_merkle_proof_test(
         spec, state, rng=rng, blob_count=max_blobs