
Add Cell Dissemination via Partial Message Specification #4558

Open

MarcoPolo wants to merge 23 commits into ethereum:master from MarcoPolo:cell-dissemination

Conversation


@MarcoPolo MarcoPolo commented Sep 3, 2025

In draft until the libp2p/specs work is merged and we have multiple implementations of this. Also requires TODOs to be addressed.

This PR specifies how consensus clients use Gossipsub's Partial Messages to disseminate cells rather than only full columns.

More context in this ethresearch post.

Comment on lines 88 to 102
#### `PartialDataColumnSidecar`

```python
class PartialDataColumnSidecar(Container):
    # Only provided if the index can not be inferred from the Gossipsub topic
    index: ColumnIndex | None
    # Encoded the same as an IHAVE bitmap
    cells_present_bitmap: ByteVector[NUMBER_OF_COLUMNS/8]  # ceiling if NUMBER_OF_COLUMNS is not divisible by 8
    column: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    # The following are only provided on eager pushes.
    kzg_commitments: None | List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    signed_block_header: None | SignedBeaconBlockHeader
    kzg_commitments_inclusion_proof: None | Vector[Bytes32, KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH]
```
Contributor

@fradamt fradamt Sep 4, 2025

I think there are three possible ways to go with this:

1. Add an Optional SSZ type, which we are currently missing (see https://ethereum-magicians.org/t/eip-6475-ssz-optional/12891)

```python
class PartialDataColumnSidecar(Container):
    index: ColumnIndex
    cells_bitmap: Bitlist[MAX_BLOB_COMMITMENTS_PER_BLOCK]
    cells: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_commitments: Optional[List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]]
    signed_block_header: Optional[SignedBeaconBlockHeader]
    kzg_commitments_inclusion_proof: Optional[Vector[Bytes32, KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH]]
```
2. Separate the data part from the kzg commitments, header and proof part

```python
class PartialDataColumnSidecar(Container):
    index: ColumnIndex
    cells_bitmap: Bitlist[MAX_BLOB_COMMITMENTS_PER_BLOCK]
    cells: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]

class KZGCommitmentsSidecar(Container):
    kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    signed_block_header: SignedBeaconBlockHeader
    kzg_commitments_inclusion_proof: Vector[Bytes32, KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH]
```
3. Have two different types for a column with and without commitments (hopefully with nicer names)

```python
class PartialDataColumnSidecar(Container):
    index: ColumnIndex
    cells_bitmap: Bitlist[MAX_BLOB_COMMITMENTS_PER_BLOCK]
    cells: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]

class PartialDataColumnSidecarWithCommitments(Container):
    index: ColumnIndex
    cells_bitmap: Bitlist[MAX_BLOB_COMMITMENTS_PER_BLOCK]
    cells: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    signed_block_header: SignedBeaconBlockHeader
    kzg_commitments_inclusion_proof: Vector[Bytes32, KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH]
```

Also:

  • changed ByteVector to Bitlist (but could also be Bitvector)
  • NUMBER_OF_COLUMNS isn't the length/max-length of a column, should be MAX_BLOB_COMMITMENTS_PER_BLOCK
  • I'd just leave the column_index in all the time, 8 bytes doesn't seem worth worrying about imho

Author

Preference for 1, as having two containers makes things a bit more annoying: you then need to define when one or the other (or both) are transmitted.

> I'd just leave the column_index in all the time, 8 bytes doesn't seem worth worrying about imho

The only reason we would need it is when the subnet != column index, right? 8 bytes is small, but a couple bytes here and a couple bytes there and soon we have a whole packet of redundant data. My preference might be to just remove this field and leave it as future work to define how to derive a column index when the subnet count != column count.
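
To illustrate why the field is redundant today, here is a sketch using the existing `compute_subnet_for_data_column_sidecar` mapping (the constants are just example values, not normative):

```python
# Illustrative sketch: the subnet mapping is a simple modulo, so the column
# index is only recoverable from the topic when the subnet count equals the
# column count.
DATA_COLUMN_SIDECAR_SUBNET_COUNT = 128  # example value
NUMBER_OF_COLUMNS = 128                 # example value

def compute_subnet_for_data_column_sidecar(column_index: int) -> int:
    return column_index % DATA_COLUMN_SIDECAR_SUBNET_COUNT

def columns_for_subnet(subnet_id: int) -> list:
    # All column indices that map to this subnet/topic.
    return [
        c for c in range(NUMBER_OF_COLUMNS)
        if compute_subnet_for_data_column_sidecar(c) == subnet_id
    ]

# With equal counts, every subnet carries exactly one column, so `index` is redundant.
assert all(len(columns_for_subnet(s)) == 1 for s in range(DATA_COLUMN_SIDECAR_SUBNET_COUNT))
```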

Member

Option 1 may seem better, but the other options move the complexity from the type system (and the requisite push to add a new concept to the SSZ spec) to other layers of the stack.

That said, we have had Option as a non-canonical type for some time now, and it may be time to go ahead and add formal support for it.

Author

Including the commitments here is an optimization. We could rely on just receiving them in the block body.

Contributor

@fradamt fradamt Sep 5, 2025

Something worth paying attention to here is also how DataColumnSidecar evolves in the gloas spec (though these changes will likely pre-date gloas). At the moment it seems like header + inclusion proof will be removed in favor of beacon_block_root only, requiring the beacon block to be seen in advance (though this still will not contain the kzg commitments), which is somewhat natural with epbs.

Contributor

@fradamt fradamt Sep 5, 2025

We could do the same thing here, requiring the block (or a full sidecar) to have been seen before receiving a partial column (and excluding the commitments, since at this point they're still in the beacon block). Similar to what you did below but with beacon_block_root instead of the whole signed_beacon_block_header

```python
class PartialDataColumnSidecar(Container):
    index: ColumnIndex
    cells_present_bitmap: Bitlist[MAX_BLOB_COMMITMENTS_PER_BLOCK]
    partial_column: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    beacon_block_root: Root
```
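
A rough sketch of what the receiving side would do in that case, assuming a local cache of already-seen blocks keyed by root (names here are hypothetical, not spec text):

```python
# Hypothetical sketch: resolve the commitments for a root-referenced partial
# column from blocks we have already seen; otherwise the message has to be
# buffered or ignored until the block arrives.
seen_blocks = {}  # beacon_block_root -> signed beacon block

def resolve_commitments(partial_sidecar):
    block = seen_blocks.get(partial_sidecar.beacon_block_root)
    if block is None:
        return None  # block not seen yet, cannot verify the cells
    return block.message.body.blob_kzg_commitments
```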


Author

I've updated this to include the KZG Commitments and block header as an optional field that will be eagerly pushed. This is called the PartialDataColumnHeader, and it's essentially the DataColumnSidecar without cells and proofs. In Gloas it will change to align with the modified DataColumnSidecar as well (or maybe we can just mandate partial data columns?).

Member

@raulk raulk left a comment

While this mechanistically introduces cell-level deltas, how they precisely integrate into existing PeerDAS processes isn't very clear. How about a section covering the end-to-end behaviour:

  • When nodes push DataColumnSidecars
  • When nodes send partial IWANTs
  • When nodes send partial IHAVEs
  • When nodes send PartialDataColumnSidecars
  • Dependency on getBlobsV3

@MarcoPolo MarcoPolo force-pushed the cell-dissemination branch 2 times, most recently from 8be8bff to 829184f on September 5, 2025 17:07
@AgeManning
Contributor

We've been in the process of implementing this and have run into a few issues specifically around the validation. We probably need to think about and add the validation rules to this spec.

Our current thinking is to probably include the kzg-commitments into the cell data so that we can verify each cell as we get them.

If we don't have them in there (as the current design), we have to wait for the block data before we can verify if the cells are accurate or not. This also entails adding some kind of cache to keep track of which peer sent what in order to penalize them at a future time if we find out the data is invalid.

This then opens up a few attack vectors, where peers can send us invalid data, filling the cache and preventing us from getting valid data, etc. We would have to be careful with the cache also, to account for periods of non-finality with a high degree of forking where there can be many valid chains.

Currently it seems the complexity of handling all these edge cases might outweigh the relatively small bandwidth savings we are making by excluding the kzg commitments.

Maybe there is a clear implementation path here we are missing however.

cc @dknopik

@MarcoPolo
Author

> We’ve been in the process of implementing this and have run into a few issues specifically around the validation. We probably need to think about and add the validation rules to this spec.

Agreed.

> Our current thinking is to probably include the kzg-commitments into the cell data so that we can verify each cell as we get them.
>
> If we don’t have them in there (as the current design), we have to wait for the block data before we can verify if the cells are accurate or not. This also entails adding some kind of cache to keep track of which peer sent what in order to penalize them at a future time if we find out the data is invalid.

One thing to point out is that this is only an issue for cells you get via an eager push. Without eager push, peers only give you cells that they know you’re missing, and giving a peer this information only happens after getting the block.

> This then opens up a few attack vectors, where peers can send us invalid data, filling the cache and preventing us from getting valid data, etc.

The cache is only relevant in the eager push case. If the cache is full, it wouldn’t prevent you from getting valid data after you’ve received your block.

> We would have to be careful with the cache also, to account for periods of non-finality with a high degree of forking where there can be many valid chains.

I’m unfamiliar with this scenario. Can you explain a bit more please? Do we expect multiple valid proposers as well?

> Currently it seems the complexity of handling all these edge cases might outweigh the relatively small bandwidth savings we are making by excluding the kzg commitments.

No strong opinion on what we do here overall, but I’ll point out that the gloas spec also removes some of the fields and requires the validator to wait for the builder’s bid first.

https://github.com/ethereum/consensus-specs/blob/master/specs/gloas/p2p-interface.md#modified-datacolumnsidecar.

> Maybe there is a clear implementation path here we are missing however.

My plan was to limit the eager push cache to 2-3 messages per topic. On first partial publish, we can validate the cache and down score peers appropriately.

I don’t think poisoning this cache is an interesting attack: worst case, it costs the victim 1 RTT of delay to receive the cells, while the attacker gets down-scored and pruned.
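
A minimal sketch of the bounded cache I have in mind (limits and names are illustrative, not spec text):

```python
# Illustrative only: buffer a small number of unverified eager pushes per
# topic, then validate and score once the header/commitments are known.
from collections import defaultdict, deque

MAX_EAGER_PUSHES_PER_TOPIC = 3  # assumed bound, per the 2-3 messages above

eager_push_cache = defaultdict(lambda: deque(maxlen=MAX_EAGER_PUSHES_PER_TOPIC))

def penalize(peer_id):
    ...  # hook into gossipsub peer scoring (hypothetical)

def on_eager_push(topic, peer_id, message):
    # Unverified cells are only buffered, never forwarded before validation.
    eager_push_cache[topic].append((peer_id, message))

def on_commitments_available(topic, is_valid):
    # Validate everything buffered for this topic and down-score bad senders.
    for peer_id, message in eager_push_cache.pop(topic, []):
        if not is_valid(message):
            penalize(peer_id)
```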

That said, if we wanted to include additional information for validating eager pushes of cells before we see the block, we could send them the relevant parts (KZG Commitments, Inclusion Proof, Signed Beacon block header) only when it’s an eager push.

@fradamt, do you have any extra context on why gloas removes from the DataColumnSidecar those fields useful for validating the column before the block?

@fradamt
Contributor

fradamt commented Oct 24, 2025

> Our current thinking is to probably include the kzg-commitments into the cell data so that we can verify each cell as we get them.

> Currently it seems the complexity of handling all these edge cases might outweigh the relatively small bandwidth savings we are making by excluding the kzg commitments.

I don't see how this is viable. Cells are 2 KB (and might get down to even 1 KB in the future), so including all KZG commitments means 48 bytes * number of blobs, which is 100% overhead as soon as we get to ~40 blobs. The bandwidth savings are very large, if we care about being able to efficiently send individual cells and not just large-ish partial columns (which imo is essential).

Something else we could do is to only include the kzg_commitments of the cells we have, and an inclusion proof per cell, i.e. the inclusion proof of the single kzg_commitment for each cell. This is more reasonable, but still quite expensive, because the proof depth is 12+ (at >= 128 blobs), so 12*32 + 48 = 432 bytes, or ~21% extra overhead (~23% total, counting the kzg proof as well) per 2 KB cell. As it is, it's not great but might be fine, but it might prevent us from making cells even thinner, since at 1 KB cells we'd be at ~46% total overhead.
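
For concreteness, the arithmetic behind those percentages (2 KB cells, 48-byte commitments and proofs, depth-12 per-commitment inclusion proofs; figures approximate):

```python
# Overhead of a per-cell commitment + inclusion proof, as estimated above.
CELL = 2048        # bytes per cell
COMMITMENT = 48    # bytes per KZG commitment
PROOF = 48         # bytes per KZG proof
BRANCH = 12 * 32   # depth-12 inclusion proof, 32-byte nodes

extra = BRANCH + COMMITMENT            # 432 bytes per cell
print(extra / CELL)                    # ~0.21 extra overhead
print((extra + PROOF) / CELL)          # ~0.23 total, counting the kzg proof
print((extra + PROOF) / 1024)          # ~0.47 total at 1 KB cells
```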

```python
class PartialDataColumnSidecar(Container):
    cells_present_bitmap: Bitlist[MAX_BLOB_COMMITMENTS_PER_BLOCK]
    partial_column: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    inclusion_proofs: List[Vector[Bytes32, SINGLE_KZG_COMMITMENT_INCLUSION_PROOF_DEPTH], MAX_BLOB_COMMITMENTS_PER_BLOCK]
```

```python
def verify_partial_data_column_sidecar(sidecar: PartialDataColumnSidecar) -> bool:
    """
    Verify that the KZG proofs are correct.
    """
    # The column index also represents the cell index
    cell_indices = [i for i, b in enumerate(sidecar.cells_present_bitmap) if b]

    # Batch verify that the cells match the corresponding commitments and proofs
    return verify_cell_kzg_proof_batch(
        commitments_bytes=sidecar.kzg_commitments,
        cell_indices=cell_indices,
        cells=sidecar.partial_column,
        proofs_bytes=sidecar.kzg_proofs,
    )


def verify_data_column_sidecar_inclusion_proofs(sidecar: PartialDataColumnSidecar, body_root: Root) -> bool:
    """
    Verify that the given KZG commitments are included in the given beacon block body.
    `body_root` comes from the corresponding beacon block, since this container carries no block header.
    """
    return all(
        is_valid_merkle_branch(
            leaf=hash_tree_root(sidecar.kzg_commitments[i]),
            branch=sidecar.inclusion_proofs[i],
            depth=SINGLE_KZG_COMMITMENT_INCLUSION_PROOF_DEPTH,
            index=get_subtree_index(get_generalized_index(BeaconBlockBody, "blob_kzg_commitments", i)),
            root=body_root,
        )
        for i in range(len(sidecar.kzg_commitments))
    )
```

> @fradamt, do you have any extra context on why gloas removes from the DataColumnSidecar those fields useful for validating the column before the block?

Yes, the thinking was that in gloas the beacon block is explicitly intended to come first; the columns are not even supposed to be sent before the beacon block has reached everyone (because the structure of the slot becomes propagate beacon block -> attest -> propagate payload and blobs -> ptc vote). However, for gloas purposes we could go back to "self-verifying columns" (kzg commitments + inclusion proof) if the added complexity turns out not to be worth it (I don't think the current work on gloas devnet0 has surfaced these issues, so it may be worth bringing it up to the people working on it). Whether we do that or not doesn't change things for partial messages though: including all kzg commitments + inclusion proof is fine in a full column, since that is just one kzg commitment per cell, but it is not fine in a small partial message.

Comment on lines 309 to 315
```python
class PartialDataColumnSidecar(Container):
    cells_present_bitmap: Bitlist[MAX_BLOB_COMMITMENTS_PER_BLOCK]
    partial_column: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]
```


@MarcoPolo
Author

I've updated the spec to add an optional PartialDataColumnHeader, which contains the KZG Commitments, Inclusion Proof, and Signed block header. It is recommended to eagerly push this to peers.
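
Roughly, the header contains the following (see the diff for the authoritative definition; this is essentially DataColumnSidecar minus the cells and proofs):

```python
class PartialDataColumnHeader(Container):
    kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    signed_block_header: SignedBeaconBlockHeader
    kzg_commitments_inclusion_proof: Vector[Bytes32, KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH]
```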

Without this, peers need to wait for the beacon block before being able to request cells or respond to a peer's partial message request. From observed data on Xatu and from simulations, DataColumns on mainnet propagate faster than beacon blocks (by around 100 ms).

This does introduce overhead that increases at a rate of 48 bytes * number of commitments, but:

  • This only happens in the eager push case where we haven't heard from a peer.
  • It is not sent when responding to a peer's request for missing cells.

It also aligns a bit more with the existing validation model of DataColumnSidecar which should make implementation a bit easier.

The main downside in my opinion is that this data is duplicated on all column subnets. Maybe in the future we could consider a separate gossipsub topic for disseminating just this information.

More details here: https://hackmd.io/Q9xQfN8zTtywJ75g312t8Q

cc @fradamt because we were going back and forth on this.

Comment on lines 355 to 356
```python
# Optional header, only sent on eager pushes
header: List[PartialDataColumnHeader, 1]
```

I assume the primary motivation of not using Union[None, PartialDataColumnHeader] is missing support in some SSZ implementations, and the fact that it is not currently used elsewhere in Ethereum?

Contributor

Why don't we always have the header though? It doesn't seem that large (336 fixed bytes + 48 * number of commitments). Even with 128 commitments, which is a very high estimate for bpo3, it would be ~6 KB of extra data per subnet if we always include the header.

Author

> I assume the primary motivation of not using Union[None, PartialDataColumnHeader] is missing support in some SSZ implementations, and the fact that it is not currently used elsewhere in Ethereum?

Correct. See also this PR: #3906
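
For context, the `List[PartialDataColumnHeader, 1]` field is the usual way to emulate optionality without a Union; a sketch of how the receiver treats it:

```python
# Sketch: a max-length-1 list as a stand-in for Optional.
# An empty list means the header was omitted; one element means it is present.
def get_optional_header(sidecar):
    return sidecar.header[0] if len(sidecar.header) > 0 else None
```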

Author

> Why don't we always have the header though? It doesn't seem that large (336 fixed bytes + 48 * number of commitments). Even with 128 commitments, which is a very high estimate for bpo3, it would be ~6 KB of extra data per subnet if we always include the header.

If we eagerly push it, the peer will always have the header when it needs it. We don't need to always include it, and we don't gain anything by doing so; it is purely redundant.

Here's an example exchange between two peers A and B:

```
+------------------+                            +------------------+
|        A         |                            |        B         |
+------------------+                            +------------------+

      |                                                  |
      |  (1) Parts Advertise                             |
      |  Header + Bitmap_A (what A has)                  |
      |------------------------------------------------->|
      |                                                  |
      |                                                  | (2) Parts Reply
      |                                                  | Bitmap_B (what B has)
      |                                                  | + Optional: Cells B_not_in_A
      |<-------------------------------------------------|
      |                                                  |
      |  (3) Send Missing Parts                          |
      |  Cells A_not_in_B (parts B lacks that A has)     |
      |------------------------------------------------->|
```

That hopefully makes it clear that messages 2 & 3 don't need a header.
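
The bitmap logic behind steps (2) and (3) is just set difference; a small illustrative example with plain Python ints standing in for the bitlists:

```python
# Each bit i means "cell i is present". Illustrative values only.
def cells_to_send(ours: int, theirs: int) -> int:
    # Cells we have that the other peer is missing.
    return ours & ~theirs

bitmap_a = 0b1011  # A has cells 0, 1, 3
bitmap_b = 0b0110  # B has cells 1, 2

step2_cells = cells_to_send(bitmap_b, bitmap_a)  # B -> A: cell 2
step3_cells = cells_to_send(bitmap_a, bitmap_b)  # A -> B: cells 0 and 3
assert (bitmap_a | step2_cells) == (bitmap_b | step3_cells) == 0b1111
```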

Contributor

I understand that it is redundant. My point is that:

1. it's just simpler to always send it
2. it's not that much data
3. this might be a premature optimization

Member

Yeah I think I agree with @pawanjay176 here. I'd prefer to just include those fields in PartialDataColumnSidecar.

Author

@MarcoPolo MarcoPolo Jan 13, 2026

> 1. it's just simpler to always send it

Marginally simpler. In my current implementation in Gossipsub, the application gets an eagerPush boolean param. It just needs to add the header if eagerPush is true. On the receiving side, you need to keep state across messages anyway (for the whole partial thing to eventually become complete), so there aren't changes needed there. I don't think it's much harder to avoid sending useless data.
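
A minimal sketch of that sender-side branch (a plain dict stands in for the SSZ container; the names are mine, not the spec's):

```python
# Hypothetical helper: the header is attached only when Gossipsub signals an
# eager push; replies to requests carry cells only.
def build_partial_sidecar(bitmap, cells, proofs, header, eager_push: bool) -> dict:
    return {
        "cells_present_bitmap": bitmap,
        "partial_column": cells,
        "kzg_proofs": proofs,
        # Optional header modeled as a list of length 0 or 1, as in the spec draft.
        "header": [header] if eager_push else [],
    }
```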

The behavior of the receiver should be to ignore the header data if it already has the header anyways, as there is no point in duplicating earlier compute work.

> 2. it's not that much data

~25 commitments equates to another packet of data. We need to receive all packets of a message before we can decode it. More packets not only mean more delay before we can process the message, but also a chance for that packet to have to be retransmitted. This is a simplistic model (packet drops aren't random), but the point is that small bits of extra data aren't free.

> 3. this might be a premature optimization

I don't think of it as an optimization at all. It's about not doing extra useless work.

Is there something specific you think is gained by always including the header? I'm not convinced by the "simpler" argument.

Contributor

Thanks for the response.
I'll look into implementing this and get back to you!

Comment on lines 397 to 398
- _[IGNORE]_ If the received partial message contains only cell data, the
node has not yet seen the corresponding `PartialDataColumnHeader`.

Do you mean

- _[IGNORE]_ If the received partial message contains only cell data and the
  node has not yet seen the corresponding `PartialDataColumnHeader`.

Author

Sorry, I always get tripped up with the phrasing of these rules.

I mean to say: if the message only contains cell data and we haven't seen the header yet, we should ignore it.

Which, with the way the rules are formatted (`<action> <expectation>`, where the action is only taken if the expectation is not met), I believe it should be:

If the received partial message contains only cell data, the node has seen the corresponding `PartialDataColumnHeader`.

So the quoted version is incorrect (it has an extra "not").

Author

Updated.

@fradamt
Contributor

fradamt commented Jan 13, 2026

> Maybe in the future we could consider a separate gossipsub topic for disseminating just this information.

Why not do this already?

@MarcoPolo
Author

> > Maybe in the future we could consider a separate gossipsub topic for disseminating just this information.
>
> Why not do this already?

In my opinion, it makes the changes needed for cell-dissemination bigger than they need to be for the initial MVP. I also think the upgrade path from the optional header to a separate topic is fairly straightforward (if a peer is in the data column header topic, then you do not need to push the header).

Having this optional header here simplifies the design a bit because it's similar to the existing data column sidecar approach of being able to act on the message immediately (except in the case where you are missing the parent block). With the separate topic design, we'd need to stage any partial messages received before the header (which could be common, as the initial partial message is very small) for later processing after receiving the header, as well as penalize peers that sent us partial messages that never corresponded to any header.

I'm sure we can figure it all out, but it seems like it could delay shipping something that will give us the majority of the benefits.

@MarcoPolo
Author

Moving this out of draft as I think it's ready for some critical review. We still need to merge the partial message extension here: libp2p/specs#685, but that's very close to done.

@MarcoPolo
Author

reminder to update this after https://github.com/ethereum/consensus-specs/pull/4880/changes is merged.
