EIPS/eip-7892.md (89 changes: 76 additions & 13 deletions)

This EIP introduces **Blob Parameter Only (BPO) Hardforks**, a lightweight mechanism for scaling Ethereum's blob capacity through targeted parameter adjustments.

Ethereum's scaling strategy relies on Layer 2 (L2) solutions for transaction execution while using Ethereum as a **data availability (DA) layer**. However, the demand for DA has increased rapidly, and the current approach of only modifying blob parameters in large, infrequent hard forks is **not agile enough** to keep up with L2 growth.

The key motivations for BPO forks are as follows:

1. **Continuous Scaling**
- L2 DA demand is growing rapidly, leading to ongoing saturation of blob capacity.
- Large, infrequent blob parameter changes create high costs and inefficiencies.
- BPO forks allow for more frequent, safer capacity increases.

2. **Reduced Operational Overhead**
- Performance improvements and further testing will continue to unlock additional capacity.
- It is desirable to reduce the time between core devs agreeing on a parameter increase and its effective deployment.
- Full Ethereum hard forks require significant coordination, testing, and upgrade efforts across clients.
- By isolating blob parameter changes, BPO forks reduce the complexity of upgrades.


## Specification

### Definition

BPO hardforks are defined as hardforks that only change certain protocol parameters at a designated point in time without requiring client-side code changes. The new parameters are activated instantly, with no transition function. These hardforks are consensus-breaking, and therefore receive a `ForkVersion` in the consensus layer.

### Managed parameters

The following blob-related parameters are now managed by parametric configuration:

- **Blob Target (`blob_target`)**: The expected number of blobs per block.
- **Blob Limit (`blob_limit`)**: The maximum number of blobs per block.
- **Blob Base Fee Update Fraction (`baseFeeUpdateFraction`)**: Determines how blob gas pricing adjusts per block.

To ensure consistency, when a regular hardfork changes any of these parameters, it MUST do so by adding an entry to the blob schedule.

### Execution layer configuration

To facilitate these changes on the execution layer, the `blobSchedule` object specified in [EIP-7840](./eip-7840.md) is extended to allow for an arbitrary number of block timestamps at which these parameters **MAY** change.

```json
"blobSchedule": {
  "cancun": {
    "target": 3,
    "max": 6,
    "baseFeeUpdateFraction": 3338477
  },
  "prague": {
    "target": 6,
    "max": 9,
    "baseFeeUpdateFraction": 5007716
  },
  "1747387400": {
    "target": 9,
    "max": 12,
    "baseFeeUpdateFraction": 5007716
  },
  "1754659200": {
    "target": 12,
    "max": 16,
    "baseFeeUpdateFraction": 5007716
  }
}
```

The timestamp keys and parameter values above are purely illustrative.
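
For illustration, here is a minimal sketch of how an execution client might resolve the active blob parameters for a block once this schedule is parsed. The `BlobParams` type, field names, and the assumption that named-fork keys (`cancun`, `prague`) have already been resolved to activation timestamps are hypothetical, not part of this specification:

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class BlobParams:
    target: int
    max: int
    base_fee_update_fraction: int


def active_blob_params(schedule: Dict[int, BlobParams], block_timestamp: int) -> BlobParams:
    """Return the parameters of the latest entry activated at or before the block's timestamp."""
    active: Optional[BlobParams] = None
    for activation_time in sorted(schedule):
        if activation_time <= block_timestamp:
            active = schedule[activation_time]
        else:
            break
    if active is None:
        raise ValueError("no blob schedule entry is active at this timestamp")
    return active
```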

### Consensus layer configuration

A new `BLOB_PARAMETER_SCHEDULE` field is added to the consensus layer configuration, containing a sequence of entries representing blob parameter changes after `ELECTRA_FORK_EPOCH`. This replaces the deprecated `MAX_BLOBS_PER_BLOCK{_FORKNAME}` convention for future forks, while preserving existing entries for previous forks.

> **Reviewer:** Why are we renaming the blob schedule? We can just do this under blob_schedule still, unless we're worried about old parsers breaking or something... The blob_schedule already resulted in objects and we're just adding an attribute imo.
>
> **Author:** This is a leftover from a previous version that backported Deneb and Electra. Fixing.


Entry rules:

- One entry is added per fork that changes blob parameters (regular or BPO forks).
- `EPOCH` and `MAX_BLOBS_PER_BLOCK` are required for all entries.
- `FORK_VERSION` is required only for BPO forks (regular forks define their fork version in source).

```yaml
BLOB_PARAMETER_SCHEDULE:
  - EPOCH: 400000  # A future anonymous BPO fork carrying a fork version
    FORK_VERSION: 0x09000000
    MAX_BLOBS_PER_BLOCK: 24
  - EPOCH: 420000  # A future anonymous BPO fork carrying a fork version
    FORK_VERSION: 0x0A000000
    MAX_BLOBS_PER_BLOCK: 56
  - EPOCH: 440000  # A future named fork introducing blob parameter changes
    MAX_BLOBS_PER_BLOCK: 72
```

The parameters and schedules above are purely illustrative. Actual values and schedules are beyond the scope of this specification.
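
As a companion sketch (the helper itself is hypothetical; `MAX_BLOBS_PER_BLOCK_ELECTRA` is the existing Electra configuration value), the blob limit for a given epoch can be resolved by taking the latest schedule entry at or before that epoch:

```python
def max_blobs_per_block(epoch: Epoch, blob_schedule: Sequence[BlobScheduleEntry]) -> int:
    # Take the latest schedule entry whose epoch is <= the requested epoch.
    for entry in sorted(blob_schedule, key=lambda e: e.epoch, reverse=True):
        if epoch >= entry.epoch:
            return entry.max_blobs_per_block
    # No entry applies yet; fall back to the value in force before the schedule.
    return MAX_BLOBS_PER_BLOCK_ELECTRA
```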

**Requirements:**

- Execution and consensus clients **MUST** share consistent BPO fork schedules.
- The timestamp in EL's `blobSchedule` **MUST** align with the start of the epoch specified in the consensus layer configuration.
- The `max` field in `blobSchedule` **MUST** equal the `MAX_BLOBS_PER_BLOCK` value in the consensus layer configuration.
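
These requirements lend themselves to a startup-time sanity check. A sketch under the usual beacon-chain time constants (`SECONDS_PER_SLOT`, `SLOTS_PER_EPOCH`), with hypothetical argument names:

```python
def check_bpo_alignment(el_activation_timestamp: int,
                        el_max: int,
                        cl_entry_epoch: int,
                        cl_max_blobs_per_block: int,
                        genesis_time: int) -> None:
    # The EL activation timestamp must coincide with the first slot of the CL epoch.
    epoch_start_time = genesis_time + cl_entry_epoch * SLOTS_PER_EPOCH * SECONDS_PER_SLOT
    assert el_activation_timestamp == epoch_start_time
    # Both layers must agree on the blob limit.
    assert el_max == cl_max_blobs_per_block
```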

### Modified `compute_fork_version`

The `compute_fork_version` helper is updated to account for BPO forks:

```python
def compute_fork_version(epoch: Epoch, blob_schedule: Sequence[BlobScheduleEntry]) -> Version:
    # Start with named forks.
    forks = [
        (ELECTRA_FORK_EPOCH, ELECTRA_FORK_VERSION),
        (DENEB_FORK_EPOCH, DENEB_FORK_VERSION),
        (CAPELLA_FORK_EPOCH, CAPELLA_FORK_VERSION),
        (BELLATRIX_FORK_EPOCH, BELLATRIX_FORK_VERSION),
        (ALTAIR_FORK_EPOCH, ALTAIR_FORK_VERSION),
    ]

    # Add blob schedule entries that define fork versions (therefore representing BPO forks).
    bpo_forks = [
        (entry.epoch, entry.fork_version)
        for entry in blob_schedule
        if entry.fork_version is not None
    ]
    forks.extend(bpo_forks)

    forks.sort(reverse=True)

    # Find the most recent fork for this epoch.
    for fork_epoch, fork_version in forks:
        if epoch >= fork_epoch:
            return fork_version

    return GENESIS_FORK_VERSION
```
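
For example, with the illustrative schedule above (assuming a `BlobScheduleEntry` container with `epoch`, optional `fork_version`, and `max_blobs_per_block` fields), resolution behaves as follows:

```python
schedule = [
    BlobScheduleEntry(epoch=400000, fork_version=Version('0x09000000'), max_blobs_per_block=24),
    BlobScheduleEntry(epoch=420000, fork_version=Version('0x0A000000'), max_blobs_per_block=56),
    BlobScheduleEntry(epoch=440000, fork_version=None, max_blobs_per_block=72),  # named fork
]

assert compute_fork_version(Epoch(410000), schedule) == Version('0x09000000')
# The named fork at epoch 440000 carries no FORK_VERSION in the schedule: its
# version is expected to be added to the named-fork list in source instead.
assert compute_fork_version(Epoch(450000), schedule) == Version('0x0A000000')
```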

### P2P Networking

In the consensus layer:

- The ENR fields `next_fork_version` and `next_fork_epoch` are set from the configuration for the next BPO fork, if applicable.
- Note that p2p topics will roll over when a BPO fork is activated, as the `fork_digest` parameter is derived from the `fork_version` (modified above to account for BPO forks); see the sketch below.
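
A sketch of deriving the ENR advertisement from the schedule, considering only BPO forks for brevity. `FAR_FUTURE_EPOCH` and `compute_fork_digest` are existing consensus-spec definitions; the helper itself is hypothetical:

```python
def next_bpo_fork(current_epoch: Epoch,
                  blob_schedule: Sequence[BlobScheduleEntry]) -> Tuple[Epoch, Version]:
    # Upcoming schedule entries that carry a fork version are BPO forks.
    upcoming = sorted(
        (entry for entry in blob_schedule
         if entry.fork_version is not None and entry.epoch > current_epoch),
        key=lambda entry: entry.epoch,
    )
    if upcoming:
        return upcoming[0].epoch, upcoming[0].fork_version  # next_fork_epoch, next_fork_version
    return FAR_FUTURE_EPOCH, compute_fork_version(current_epoch, blob_schedule)

# Topic rollover: gossip topic names change at each BPO fork because
# fork_digest = compute_fork_digest(compute_fork_version(epoch, blob_schedule),
#                                   genesis_validators_root)
```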

## Rationale
