From 492553498b72c645fbb7447d836e429eef867aad Mon Sep 17 00:00:00 2001
From: Dankrad Feist
Date: Wed, 5 Jan 2022 19:43:43 +0000
Subject: [PATCH 01/66] Rough structure

---
 specs/sharding/beacon-chain.md | 806 ++++++++-------------------
 1 file changed, 186 insertions(+), 620 deletions(-)

diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md
index ede369b957..cde30a3f94 100644
--- a/specs/sharding/beacon-chain.md
+++ b/specs/sharding/beacon-chain.md
@@ -88,10 +88,8 @@ We define the following Python custom types for type hinting and readability:

| Name | SSZ equivalent | Description |
| - | - | - |
-| `Shard` | `uint64` | A shard number |
| `BLSCommitment` | `Bytes48` | A G1 curve point |
-| `BLSPoint` | `uint256` | A number `x` in the range `0 <= x < MODULUS` |
-| `BuilderIndex` | `uint64` | Builder registry index |
+| `BLSFieldElement` | `uint256` | A number `x` in the range `0 <= x < MODULUS` |

## Constants

The following values are (non-configurable) constants used throughout the specification.

| Name | Value | Notes |
| - | - | - |
| `PRIMITIVE_ROOT_OF_UNITY` | `7` | Primitive root of unity of the BLS12_381 (inner) modulus |
| `DATA_AVAILABILITY_INVERSE_CODING_RATE` | `2**1` (= 2) | Factor by which samples are extended for data availability encoding |
-| `POINTS_PER_SAMPLE` | `uint64(2**3)` (= 8) | 31 * 8 = 248 bytes |
+| `POINTS_PER_SAMPLE` | `uint64(2**4)` (= 16) | 31 * 16 = 496 bytes |
| `MODULUS` | `0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001` (curve order of BLS12_381) |

-### Domain types
-
-| Name | Value |
-| - | - |
-| `DOMAIN_SHARD_BLOB` | `DomainType('0x80000000')` |
-
-### Shard Work Status
-
-| Name | Value | Notes |
-| - | - | - |
-| `SHARD_WORK_UNCONFIRMED` | `0` | Unconfirmed, nullified after confirmation time elapses |
-| `SHARD_WORK_CONFIRMED` | `1` | Confirmed, reduced to just the commitment |
-| `SHARD_WORK_PENDING` | `2` | Pending, a list of competing headers |
-
-### Misc
-
-TODO: `PARTICIPATION_FLAG_WEIGHTS` backwards-compatibility is difficult, depends on usage.
-
-| Name | Value |
-| - | - |
-| `PARTICIPATION_FLAG_WEIGHTS` | `[TIMELY_SOURCE_WEIGHT, TIMELY_TARGET_WEIGHT, TIMELY_HEAD_WEIGHT, TIMELY_SHARD_WEIGHT]` |
-
-### Participation flag indices
-
-| Name | Value |
-| - | - |
-| `TIMELY_SHARD_FLAG_INDEX` | `3` |
-
-### Incentivization weights
-
-TODO: determine weight for shard attestations
-
-| Name | Value |
-| - | - |
-| `TIMELY_SHARD_WEIGHT` | `uint64(8)` |
-
-TODO: `WEIGHT_DENOMINATOR` needs to be adjusted, but this breaks a lot of Altair code.
-
## Preset

### Misc

| Name | Value | Notes |
| - | - | - |
-| `MAX_SHARDS` | `uint64(2**10)` (= 1,024) | Theoretical max shard count (used to determine data structure sizes) |
-| `INITIAL_ACTIVE_SHARDS` | `uint64(2**6)` (= 64) | Initial shard count |
-| `SAMPLE_PRICE_ADJUSTMENT_COEFFICIENT` | `uint64(2**3)` (= 8) | Sample price may decrease/increase by at most exp(1 / this value) *per epoch* |
-| `MAX_SHARD_PROPOSER_SLASHINGS` | `2**4` (= 16) | Maximum amount of shard proposer slashing operations per block |
-| `MAX_SHARD_HEADERS_PER_SHARD` | `4` | |
+| `MAX_SHARDS` | `uint64(2**12)` (= 4,096) | Theoretical max shard count (used to determine data structure sizes) |
+| `ACTIVE_SHARDS` | `uint64(2**8)` (= 256) | Initial shard count |
+| `TARGET_SHARDS` | `uint64(ACTIVE_SHARDS // 2)` (= 128) | Target shard count |
+| `DATAGAS_PRICE_ADJUSTMENT_COEFFICIENT` | `uint64(2**3)` (= 8) | Datagas price may decrease/increase by at most exp(1 / this value) *per epoch* |
| `SHARD_STATE_MEMORY_SLOTS` | `uint64(2**8)` (= 256) | Number of slots for which shard commitments and confirmation status is directly available in the state |
-| `BLOB_BUILDER_REGISTRY_LIMIT` | `uint64(2**40)` (= 1,099,511,627,776) | shard blob builders |
+
+### Time parameters
+
+With the introduction of intermediate blocks, the number of slots per epoch is doubled (it counts both beacon blocks and intermediate blocks).
+
+| Name | Value | Unit | Duration |
+| - | - | :-: | :-: |
+| `SLOTS_PER_EPOCH` | `uint64(2**6)` (= 64) | slots | 8:32 minutes |

### Shard blob samples

| Name | Value | Notes |
| - | - | - |
-| `MAX_SAMPLES_PER_BLOB` | `uint64(2**11)` (= 2,048) | 248 * 2,048 = 507,904 bytes |
-| `TARGET_SAMPLES_PER_BLOB` | `uint64(2**10)` (= 1,024) | 248 * 1,024 = 253,952 bytes |
+| `SAMPLES_PER_BLOB` | `uint64(2**9)` (= 512) | 496 * 512 = 253,952 bytes |

### Precomputed size verification points

| Name | Value |
| - | - |
| `G1_SETUP` | Type `List[G1]`. The G1-side trusted setup `[G, G*s, G*s**2....]`; note that the first point is the generator. |
| `G2_SETUP` | Type `List[G2]`. The G2-side trusted setup `[G, G*s, G*s**2....]` |
-| `ROOT_OF_UNITY` | `pow(PRIMITIVE_ROOT_OF_UNITY, (MODULUS - 1) // int(MAX_SAMPLES_PER_BLOB * POINTS_PER_SAMPLE), MODULUS)` |
+| `ROOT_OF_UNITY` | `pow(PRIMITIVE_ROOT_OF_UNITY, (MODULUS - 1) // int(SAMPLES_PER_BLOB * POINTS_PER_SAMPLE), MODULUS)` |

## Configuration

Note: Some preset variables may become run-time configurable for testnets, but default to a preset while the spec is unstable.
-E.g. `INITIAL_ACTIVE_SHARDS`, `MAX_SAMPLES_PER_BLOB` and `TARGET_SAMPLES_PER_BLOB`.
+E.g. `ACTIVE_SHARDS` and `SAMPLES_PER_BLOB`.

-## Updated containers
+### Time parameters

-The following containers have updated definitions to support Sharding.
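As a sanity check of the sampling presets above, the byte sizes follow from the fact that a `BLSFieldElement` can safely carry 31 bytes of payload (since `2**248 < MODULUS`). This is an illustrative sketch, not spec code; the constant values are taken from the preset tables:

```python
# Sketch: derive sample and blob byte sizes from the sampling presets.
# Each BLSFieldElement carries 31 bytes of data (2**248 < MODULUS, so a
# 31-byte chunk always encodes to a valid field element).
POINTS_PER_SAMPLE = 2**4      # 16 field elements per sample
SAMPLES_PER_BLOB = 2**9       # 512 samples per blob
BYTES_PER_FIELD_ELEMENT = 31

sample_size = BYTES_PER_FIELD_ELEMENT * POINTS_PER_SAMPLE  # 496 bytes per sample
blob_size = sample_size * SAMPLES_PER_BLOB                 # 253,952 bytes per blob
```

With `DATA_AVAILABILITY_INVERSE_CODING_RATE = 2`, the extended (erasure-coded) data is twice `blob_size`; only the original half counts toward usable payload.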
- -### `AttestationData` +| Name | Value | Unit | Duration | +| - | - | :-: | :-: | +| `SECONDS_PER_SLOT` | `uint64(8)` | seconds | 8 seconds | -```python -class AttestationData(Container): - slot: Slot - index: CommitteeIndex - # LMD GHOST vote - beacon_block_root: Root - # FFG vote - source: Checkpoint - target: Checkpoint - # Hash-tree-root of ShardBlob - shard_blob_root: Root # [New in Sharding] -``` -### `BeaconBlockBody` - -```python -class BeaconBlockBody(bellatrix.BeaconBlockBody): # [extends Bellatrix block body] - shard_proposer_slashings: List[ShardProposerSlashing, MAX_SHARD_PROPOSER_SLASHINGS] - shard_headers: List[SignedShardBlobHeader, MAX_SHARDS * MAX_SHARD_HEADERS_PER_SHARD] -``` - -### `BeaconState` - -```python -class BeaconState(bellatrix.BeaconState): - # Blob builder registry. - blob_builders: List[Builder, BLOB_BUILDER_REGISTRY_LIMIT] - blob_builder_balances: List[Gwei, BLOB_BUILDER_REGISTRY_LIMIT] - # A ring buffer of the latest slots, with information per active shard. - shard_buffer: Vector[List[ShardWork, MAX_SHARDS], SHARD_STATE_MEMORY_SLOTS] - shard_sample_price: uint64 -``` - -## New containers - -### `Builder` - -```python -class Builder(Container): - pubkey: BLSPubkey - # TODO: fields for either an expiry mechanism (refunding execution account with remaining balance) - # and/or a builder-transaction mechanism. 
-``` - -### `DataCommitment` - -```python -class DataCommitment(Container): - # KZG10 commitment to the data - point: BLSCommitment - # Length of the data in samples - samples_count: uint64 -``` - -### `AttestedDataCommitment` - -```python -class AttestedDataCommitment(Container): - # KZG10 commitment to the data, and length - commitment: DataCommitment - # hash_tree_root of the ShardBlobHeader (stored so that attestations can be checked against it) - root: Root - # The proposer who included the shard-header - includer_index: ValidatorIndex -``` +## Updated containers -### `ShardBlobBody` +The following containers have updated definitions to support Sharding. -Unsigned shard data, bundled by a shard-builder. -Unique, signing different bodies as shard proposer for the same `(slot, shard)` is slashable. +### `IntermediateBlockBid` ```python -class ShardBlobBody(Container): - # The actual data commitment - commitment: DataCommitment - # Proof that the degree < commitment.samples_count * POINTS_PER_SAMPLE - degree_proof: BLSCommitment - # The actual data. Should match the commitment and degree proof. - data: List[BLSPoint, POINTS_PER_SAMPLE * MAX_SAMPLES_PER_BLOB] - # fee payment fields (EIP 1559 like) - # TODO: express in MWei instead? - max_priority_fee_per_sample: Gwei - max_fee_per_sample: Gwei -``` +class IntermediateBlockBid(Container): + execution_payload_root: Root -### `ShardBlobBodySummary` + sharded_data_commitment: Root # Root of the sharded data (only data, not beacon/intermediate block commitments) -Summary version of the `ShardBlobBody`, omitting the data payload, while preserving the data-commitments. + sharded_data_commitments: uint64 # Count of sharded data commitments -The commitments are not further collapsed to a single hash, -to avoid an extra network roundtrip between proposer and builder, to include the header on-chain more quickly. 
- -```python -class ShardBlobBodySummary(Container): - # The actual data commitment - commitment: DataCommitment - # Proof that the degree < commitment.samples_count * POINTS_PER_SAMPLE - degree_proof: BLSCommitment - # Hash-tree-root as summary of the data field - data_root: Root - # fee payment fields (EIP 1559 like) - # TODO: express in MWei instead? - max_priority_fee_per_sample: Gwei - max_fee_per_sample: Gwei + bid: Gwei # Block builder bid paid to proposer + + # Block builders use an Eth1 address -- need signature as + # block builder fees and data gas base fees will be charged to this address + signature_y_parity: bool + signature_r: uint256 + signature_s: uint256 ``` -### `ShardBlob` - -`ShardBlobBody` wrapped with the header data that is unique to the shard blob proposal. - -```python -class ShardBlob(Container): - slot: Slot - shard: Shard - # Builder of the data, pays data-fee to proposer - builder_index: BuilderIndex - # Proposer of the shard-blob - proposer_index: ValidatorIndex - # Blob contents - body: ShardBlobBody -``` - -### `ShardBlobHeader` - -Header version of `ShardBlob`. +### `BeaconBlockBody` ```python -class ShardBlobHeader(Container): - slot: Slot - shard: Shard - # Builder of the data, pays data-fee to proposer - builder_index: BuilderIndex - # Proposer of the shard-blob - proposer_index: ValidatorIndex - # Blob contents, without the full data - body_summary: ShardBlobBodySummary +class BeaconBlockBody(altair.BeaconBlockBody): # Not from bellatrix because we don't want the payload + intermediate_block_bid: IntermediateBlockBid ``` -### `SignedShardBlob` - -Full blob data, signed by the shard builder (ensuring fee payment) and shard proposer (ensuring a single proposal). 
+### `ShardedCommitmentsContainer` ```python -class SignedShardBlob(Container): - message: ShardBlob - signature: BLSSignature -``` +class ShardedCommitmentsContainer(Container): + sharded_commitments: List[KZGCommitment, 2 * MAX_SHARDS] -### `SignedShardBlobHeader` + # Aggregate degree proof for all sharded_commitments + degree_proof: KZGCommitment -Header of the blob, the signature is equally applicable to `SignedShardBlob`. -Shard proposers can accept `SignedShardBlobHeader` as a data-transaction by co-signing the header. + # The sizes of the blocks encoded in the commitments (last intermediate and all beacon blocks since) + included_beacon_block_sizes: List[uint64, MAX_BEACON_BLOCKS_BETWEEN_INTERMEDIATE_BLOCKS + 1] + + # Number of commitments that are for sharded data (no blocks) + included_sharded_data_commitments: uint64 -```python -class SignedShardBlobHeader(Container): - message: ShardBlobHeader - # Signature by builder. - # Once accepted by proposer, the signatures is the aggregate of both. - signature: BLSSignature + # Random evaluation of beacon blocks + execution payload (this helps with quick verification) + block_verification_y: uint256 + block_verification_kzg_proof: KZGCommitment ``` -### `PendingShardHeader` +### `IntermediateBlockBody` ```python -class PendingShardHeader(Container): - # The commitment that is attested - attested: AttestedDataCommitment - # Who voted for the header - votes: Bitlist[MAX_VALIDATORS_PER_COMMITTEE] - # Sum of effective balances of votes - weight: Gwei - # When the header was last updated, as reference for weight accuracy - update_slot: Slot +class IntermediateBlockBody(phase0.BeaconBlockBody): + attestation: Attestation + execution_payload: ExecutionPayload + sharded_commitments_container: ShardedCommitmentsContainer ``` -### `ShardBlobReference` - -Reference version of `ShardBlobHeader`, substituting the body for just a hash-tree-root. 
+### `IntermediateBlockHeader` ```python -class ShardBlobReference(Container): +class IntermediateBlockHeader(Container): slot: Slot - shard: Shard - # Builder of the data - builder_index: BuilderIndex - # Proposer of the shard-blob - proposer_index: ValidatorIndex - # Blob hash-tree-root for slashing reference - body_root: Root + parent_root: Root + state_root: Root + body: Root ``` -### `ShardProposerSlashing` +### `IntermediateBlock` ```python -class ShardProposerSlashing(Container): +class IntermediateBlock(Container): slot: Slot - shard: Shard - proposer_index: ValidatorIndex - builder_index_1: BuilderIndex - builder_index_2: BuilderIndex - body_root_1: Root - body_root_2: Root - signature_1: BLSSignature - signature_2: BLSSignature -``` - -### `ShardWork` - -```python -class ShardWork(Container): - # Upon confirmation the data is reduced to just the commitment. - status: Union[ # See Shard Work Status enum - None, # SHARD_WORK_UNCONFIRMED - AttestedDataCommitment, # SHARD_WORK_CONFIRMED - List[PendingShardHeader, MAX_SHARD_HEADERS_PER_SHARD] # SHARD_WORK_PENDING - ] + parent_root: Root + state_root: Root + body: IntermediateBlockBody ``` -## Helper functions - -### Misc - -#### `next_power_of_two` +### `SignedIntermediateBlock` ```python -def next_power_of_two(x: int) -> int: - return 2 ** ((x - 1).bit_length()) -``` - -#### `compute_previous_slot` +class SignedIntermediateBlock(Container): # + message: IntermediateBlock -```python -def compute_previous_slot(slot: Slot) -> Slot: - if slot > 0: - return Slot(slot - 1) - else: - return Slot(0) + signature_y_parity: bool + signature_r: uint256 + signature_s: uint256 ``` -#### `compute_updated_sample_price` +### `SignedIntermediateBlockHeader` ```python -def compute_updated_sample_price(prev_price: Gwei, samples_length: uint64, active_shards: uint64) -> Gwei: - adjustment_quotient = active_shards * SLOTS_PER_EPOCH * SAMPLE_PRICE_ADJUSTMENT_COEFFICIENT - if samples_length > TARGET_SAMPLES_PER_BLOB: - delta = 
max(1, prev_price * (samples_length - TARGET_SAMPLES_PER_BLOB) // TARGET_SAMPLES_PER_BLOB // adjustment_quotient) - return min(prev_price + delta, MAX_SAMPLE_PRICE) - else: - delta = max(1, prev_price * (TARGET_SAMPLES_PER_BLOB - samples_length) // TARGET_SAMPLES_PER_BLOB // adjustment_quotient) - return max(prev_price, MIN_SAMPLE_PRICE + delta) - delta -``` +class SignedIntermediateBlockHeader(Container): # + message: IntermediateBlockHeader -#### `compute_committee_source_epoch` - -```python -def compute_committee_source_epoch(epoch: Epoch, period: uint64) -> Epoch: - """ - Return the source epoch for computing the committee. - """ - source_epoch = Epoch(epoch - epoch % period) - if source_epoch >= period: - source_epoch -= period # `period` epochs lookahead - return source_epoch + signature_y_parity: bool + signature_r: uint256 + signature_s: uint256 ``` -#### `batch_apply_participation_flag` +### `BeaconState` ```python -def batch_apply_participation_flag(state: BeaconState, bits: Bitlist[MAX_VALIDATORS_PER_COMMITTEE], - epoch: Epoch, full_committee: Sequence[ValidatorIndex], flag_index: int): - if epoch == get_current_epoch(state): - epoch_participation = state.current_epoch_participation - else: - epoch_participation = state.previous_epoch_participation - for bit, index in zip(bits, full_committee): - if bit: - epoch_participation[index] = add_flag(epoch_participation[index], flag_index) +class BeaconState(bellatrix.BeaconState): + beacon_blocks_since_intermediate_block: List[BeaconBlock, MAX_BEACON_BLOCKS_BETWEEN_INTERMEDIATE_BLOCKS] + last_intermediate_block: IntermediateBlock ``` ### Beacon state accessors -#### Updated `get_committee_count_per_slot` - -```python -def get_committee_count_per_slot(state: BeaconState, epoch: Epoch) -> uint64: - """ - Return the number of committees in each slot for the given ``epoch``. 
- """ - return max(uint64(1), min( - get_active_shard_count(state, epoch), - uint64(len(get_active_validator_indices(state, epoch))) // SLOTS_PER_EPOCH // TARGET_COMMITTEE_SIZE, - )) -``` - #### `get_active_shard_count` ```python @@ -493,70 +269,38 @@ def get_active_shard_count(state: BeaconState, epoch: Epoch) -> uint64: Return the number of active shards. Note that this puts an upper bound on the number of committees per slot. """ - return INITIAL_ACTIVE_SHARDS -``` - -#### `get_shard_proposer_index` - -```python -def get_shard_proposer_index(state: BeaconState, slot: Slot, shard: Shard) -> ValidatorIndex: - """ - Return the proposer's index of shard block at ``slot``. - """ - epoch = compute_epoch_at_slot(slot) - seed = hash(get_seed(state, epoch, DOMAIN_SHARD_BLOB) + uint_to_bytes(slot) + uint_to_bytes(shard)) - indices = get_active_validator_indices(state, epoch) - return compute_proposer_index(state, indices, seed) + return ACTIVE_SHARDS ``` -#### `get_start_shard` - -```python -def get_start_shard(state: BeaconState, slot: Slot) -> Shard: - """ - Return the start shard at ``slot``. 
- """ - epoch = compute_epoch_at_slot(Slot(slot)) - committee_count = get_committee_count_per_slot(state, epoch) - active_shard_count = get_active_shard_count(state, epoch) - return committee_count * slot % active_shard_count -``` +### Block processing -#### `compute_shard_from_committee_index` +#### `process_beacon_block` ```python -def compute_shard_from_committee_index(state: BeaconState, slot: Slot, index: CommitteeIndex) -> Shard: - active_shards = get_active_shard_count(state, compute_epoch_at_slot(slot)) - assert index < active_shards - return Shard((index + get_start_shard(state, slot)) % active_shards) -``` - -#### `compute_committee_index_from_shard` +def process_beacon_block(state: BeaconState, block: BeaconBlock) -> None: + process_block_header(state, block) + process_randao(state, block.body) + process_eth1_data(state, block.body) + process_operations(state, block.body) + process_sync_aggregate(state, block.body.sync_aggregate) -```python -def compute_committee_index_from_shard(state: BeaconState, slot: Slot, shard: Shard) -> CommitteeIndex: - epoch = compute_epoch_at_slot(slot) - active_shards = get_active_shard_count(state, epoch) - index = CommitteeIndex((active_shards + shard - get_start_shard(state, slot)) % active_shards) - assert index < get_committee_count_per_slot(state, epoch) - return index + state.beacon_blocks_since_intermediate_block.append(block) ``` - -### Block processing +#### `process_intermediate_block` ```python -def process_block(state: BeaconState, block: BeaconBlock) -> None: - process_block_header(state, block) - # is_execution_enabled is omitted, execution is enabled by default. 
+def process_intermediate_block(state: BeaconState, block: IntermediateBlock) -> None: + process_intermediate_block_header(state, block) + process_intermediate_block_bid_commitment(state, block) + process_sharded_data(state, block) process_execution_payload(state, block.body.execution_payload, EXECUTION_ENGINE) - process_randao(state, block.body) - process_eth1_data(state, block.body) - process_operations(state, block.body) # [Modified in Sharding] - process_sync_aggregate(state, block.body.sync_aggregate) + process_intermediate_block_attestations(state, block) + + state.last_intermediate_block = block ``` -#### Operations +#### Beacon Block Operations ```python def process_operations(state: BeaconState, body: BeaconBlockBody) -> None: @@ -569,6 +313,7 @@ def process_operations(state: BeaconState, body: BeaconBlockBody) -> None: for_ops(body.proposer_slashings, process_proposer_slashing) for_ops(body.attester_slashings, process_attester_slashing) + # New shard proposer slashing processing for_ops(body.shard_proposer_slashings, process_shard_proposer_slashing) @@ -580,185 +325,68 @@ def process_operations(state: BeaconState, body: BeaconBlockBody) -> None: for_ops(body.attestations, process_attestation) for_ops(body.deposits, process_deposit) for_ops(body.voluntary_exits, process_voluntary_exit) +``` + +#### Intermediate Block Operations - # TODO: to avoid parallel shards racing, and avoid inclusion-order problems, - # update the fee price per slot, instead of per header. 
- # state.shard_sample_price = compute_updated_sample_price(state.shard_sample_price, ?, shard_count) +```python +def process_intermediate_block_attestations(state: BeaconState, body: IntermediateBlockBody) -> None: + + for attestation in block.body.attestations: + process_attestation(state, block.body.attestation) ``` -##### Extended Attestation processing +#### Intermediate Block Bid Commitment ```python -def process_attestation(state: BeaconState, attestation: Attestation) -> None: - altair.process_attestation(state, attestation) - process_attested_shard_work(state, attestation) +def process_intermediate_block_bid_commitment(state: BeaconState, body: IntermediateBlockBody) -> None: + # Get last intermediate block bid + intermediate_block_bid = state.beacon_blocks_since_intermediate_block[-1].body.intermediate_block_bid + + assert intermediate_block_bid.execution_payload_root == hash_tree_root(body.execution_payload) + + assert intermediate_block_bid.sharded_data_commitments == body.sharded_commitments_container.included_sharded_data_commitments + + assert intermediate_block_bid.sharded_data_commitment == hash_tree_root(body.sharded_commitments_container.sharded_commitments[-intermediate_block_bid.sharded_data_commitments:]) ``` +#### Intermediate Block header + ```python -def process_attested_shard_work(state: BeaconState, attestation: Attestation) -> None: - attestation_shard = compute_shard_from_committee_index( - state, - attestation.data.slot, - attestation.data.index, +def process_intermediate_block_header(state: BeaconState, block: BeaconBlock) -> None: + # Verify that the slots match + assert block.slot == state.slot + + # Verify that the block is newer than latest block header + assert block.slot == state.latest_block_header.slot + 1 + + # Verify that the parent matches + assert block.parent_root == hash_tree_root(state.latest_block_header) + + # Cache current block as the new latest block + # TODO! 
Adapt this to support intermediate block headers + state.latest_block_header = BeaconBlockHeader( + slot=block.slot, + proposer_index=block.proposer_index, + parent_root=block.parent_root, + state_root=Bytes32(), # Overwritten in the next process_slot call + body_root=hash_tree_root(block.body), ) - full_committee = get_beacon_committee(state, attestation.data.slot, attestation.data.index) - - buffer_index = attestation.data.slot % SHARD_STATE_MEMORY_SLOTS - committee_work = state.shard_buffer[buffer_index][attestation_shard] - - # Skip attestation vote accounting if the header is not pending - if committee_work.status.selector != SHARD_WORK_PENDING: - # If the data was already confirmed, check if this matches, to apply the flag to the attesters. - if committee_work.status.selector == SHARD_WORK_CONFIRMED: - attested: AttestedDataCommitment = committee_work.status.value - if attested.root == attestation.data.shard_blob_root: - batch_apply_participation_flag(state, attestation.aggregation_bits, - attestation.data.target.epoch, - full_committee, TIMELY_SHARD_FLAG_INDEX) - return - - current_headers: Sequence[PendingShardHeader] = committee_work.status.value - - # Find the corresponding header, abort if it cannot be found - header_index = len(current_headers) - for i, header in enumerate(current_headers): - if attestation.data.shard_blob_root == header.attested.root: - header_index = i - break - - # Attestations for an unknown header do not count towards shard confirmations, but can otherwise be valid. - if header_index == len(current_headers): - # Note: Attestations may be re-included if headers are included late. 
- return - - pending_header: PendingShardHeader = current_headers[header_index] - - # The weight may be outdated if it is not the initial weight, and from a previous epoch - if pending_header.weight != 0 and compute_epoch_at_slot(pending_header.update_slot) < get_current_epoch(state): - pending_header.weight = sum(state.validators[index].effective_balance for index, bit - in zip(full_committee, pending_header.votes) if bit) - - pending_header.update_slot = state.slot - - full_committee_balance = Gwei(0) - # Update votes bitfield in the state, update weights - for i, bit in enumerate(attestation.aggregation_bits): - weight = state.validators[full_committee[i]].effective_balance - full_committee_balance += weight - if bit: - if not pending_header.votes[i]: - pending_header.weight += weight - pending_header.votes[i] = True - - # Check if the PendingShardHeader is eligible for expedited confirmation, requiring 2/3 of balance attesting - if pending_header.weight * 3 >= full_committee_balance * 2: - # participants of the winning header are remembered with participation flags - batch_apply_participation_flag(state, pending_header.votes, attestation.data.target.epoch, - full_committee, TIMELY_SHARD_FLAG_INDEX) - - if pending_header.attested.commitment == DataCommitment(): - # The committee voted to not confirm anything - state.shard_buffer[buffer_index][attestation_shard].status.change( - selector=SHARD_WORK_UNCONFIRMED, - value=None, - ) - else: - state.shard_buffer[buffer_index][attestation_shard].status.change( - selector=SHARD_WORK_CONFIRMED, - value=pending_header.attested, - ) ``` -##### `process_shard_header` +#### Sharded data + ```python -def process_shard_header(state: BeaconState, signed_header: SignedShardBlobHeader) -> None: - header: ShardBlobHeader = signed_header.message - slot = header.slot - shard = header.shard - - # Verify the header is not 0, and not from the future. 
- assert Slot(0) < slot <= state.slot - header_epoch = compute_epoch_at_slot(slot) - # Verify that the header is within the processing time window - assert header_epoch in [get_previous_epoch(state), get_current_epoch(state)] - # Verify that the shard is valid - shard_count = get_active_shard_count(state, header_epoch) - assert shard < shard_count - # Verify that a committee is able to attest this (slot, shard) - start_shard = get_start_shard(state, slot) - committee_index = (shard_count + shard - start_shard) % shard_count - committees_per_slot = get_committee_count_per_slot(state, header_epoch) - assert committee_index <= committees_per_slot - - # Check that this data is still pending - committee_work = state.shard_buffer[slot % SHARD_STATE_MEMORY_SLOTS][shard] - assert committee_work.status.selector == SHARD_WORK_PENDING - - # Check that this header is not yet in the pending list - current_headers: List[PendingShardHeader, MAX_SHARD_HEADERS_PER_SHARD] = committee_work.status.value - header_root = hash_tree_root(header) - assert header_root not in [pending_header.attested.root for pending_header in current_headers] - - # Verify proposer matches - assert header.proposer_index == get_shard_proposer_index(state, slot, shard) - - # Verify builder and proposer aggregate signature - blob_signing_root = compute_signing_root(header, get_domain(state, DOMAIN_SHARD_BLOB)) - builder_pubkey = state.blob_builders[header.builder_index].pubkey - proposer_pubkey = state.validators[header.proposer_index].pubkey - assert bls.FastAggregateVerify([builder_pubkey, proposer_pubkey], blob_signing_root, signed_header.signature) - - # Verify the length by verifying the degree. 
- body_summary = header.body_summary - points_count = body_summary.commitment.samples_count * POINTS_PER_SAMPLE - if points_count == 0: - assert body_summary.degree_proof == G1_SETUP[0] - assert ( - bls.Pairing(body_summary.degree_proof, G2_SETUP[0]) - == bls.Pairing(body_summary.commitment.point, G2_SETUP[-points_count]) - ) +def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: + # Verify the degree proof - # Charge EIP 1559 fee, builder pays for opportunity, and is responsible for later availability, - # or fail to publish at their own expense. - samples = body_summary.commitment.samples_count - # TODO: overflows, need bigger int type - max_fee = body_summary.max_fee_per_sample * samples - - # Builder must have sufficient balance, even if max_fee is not completely utilized - assert state.blob_builder_balances[header.builder_index] >= max_fee - - base_fee = state.shard_sample_price * samples - # Base fee must be paid - assert max_fee >= base_fee - - # Remaining fee goes towards proposer for prioritizing, up to a maximum - max_priority_fee = body_summary.max_priority_fee_per_sample * samples - priority_fee = min(max_fee - base_fee, max_priority_fee) - - # Burn base fee, take priority fee - # priority_fee <= max_fee - base_fee, thus priority_fee + base_fee <= max_fee, thus sufficient balance. 
- state.blob_builder_balances[header.builder_index] -= base_fee + priority_fee - # Pay out priority fee - increase_balance(state, header.proposer_index, priority_fee) - - # Initialize the pending header - index = compute_committee_index_from_shard(state, slot, shard) - committee_length = len(get_beacon_committee(state, slot, index)) - initial_votes = Bitlist[MAX_VALIDATORS_PER_COMMITTEE]([0] * committee_length) - pending_header = PendingShardHeader( - attested=AttestedDataCommitment( - commitment=body_summary.commitment, - root=header_root, - includer_index=get_beacon_proposer_index(state), - ), - votes=initial_votes, - weight=0, - update_slot=state.slot, - ) + # Verify that the 2*N commitments lie on a degree N-1 polynomial + + # Verify that last intermediate block has been included + + # Verify that beacon block (blocks if intermediate blocks were missing) have been included - # Include it in the pending list - current_headers.append(pending_header) ``` The degree proof works as follows. For a block `B` with length `l` (so `l` values in `[0...l - 1]`, seen as a polynomial `B(X)` which takes these values), @@ -766,40 +394,42 @@ the length proof is the commitment to the polynomial `B(X) * X**(MAX_DEGREE + 1 where `MAX_DEGREE` is the maximum power of `s` available in the setup, which is `MAX_DEGREE = len(G2_SETUP) - 1`. The goal is to ensure that a proof can only be constructed if `deg(B) < l` (there are not hidden higher-order terms in the polynomial, which would thwart reconstruction). 
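The degree-proof argument reduces to simple degree arithmetic: the trusted setup only contains powers `s**0 .. s**MAX_DEGREE`, so a prover can only commit to `B(X) * X**(MAX_DEGREE + 1 - l)` when that shifted polynomial has degree at most `MAX_DEGREE`, which holds exactly when `deg(B) < l`. A toy sketch (not the spec's pairing check; the `MAX_DEGREE` value here is arbitrary):

```python
# Illustrative sketch of why a degree proof exists iff deg(B) < l.
MAX_DEGREE = 15  # toy setup with powers s**0 .. s**15

def shifted_degree(deg_b: int, length: int) -> int:
    """Degree of B(X) * X**(MAX_DEGREE + 1 - length)."""
    return deg_b + MAX_DEGREE + 1 - length

def proof_constructible(deg_b: int, length: int) -> bool:
    # Only polynomials of degree <= MAX_DEGREE can be committed to,
    # because the setup has no higher powers of s.
    return shifted_degree(deg_b, length) <= MAX_DEGREE

assert proof_constructible(deg_b=3, length=4)      # deg(B) = l - 1: proof exists
assert not proof_constructible(deg_b=4, length=4)  # hidden higher-order term: no proof
```

This is what the pairing check above enforces cryptographically: `e(degree_proof, G2[0]) == e(commitment, G2[MAX_DEGREE + 1 - l])` holds only if the shift by `X**(MAX_DEGREE + 1 - l)` stays within the setup.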
-##### `process_shard_proposer_slashing` - -```python -def process_shard_proposer_slashing(state: BeaconState, proposer_slashing: ShardProposerSlashing) -> None: - slot = proposer_slashing.slot - shard = proposer_slashing.shard - proposer_index = proposer_slashing.proposer_index - - reference_1 = ShardBlobReference(slot=slot, shard=shard, - proposer_index=proposer_index, - builder_index=proposer_slashing.builder_index_1, - body_root=proposer_slashing.body_root_1) - reference_2 = ShardBlobReference(slot=slot, shard=shard, - proposer_index=proposer_index, - builder_index=proposer_slashing.builder_index_2, - body_root=proposer_slashing.body_root_2) - - # Verify the signed messages are different - assert reference_1 != reference_2 - - # Verify the proposer is slashable - proposer = state.validators[proposer_index] - assert is_slashable_validator(proposer, get_current_epoch(state)) - - # The builders are not slashed, the proposer co-signed with them - builder_pubkey_1 = state.blob_builders[proposer_slashing.builder_index_1].pubkey - builder_pubkey_2 = state.blob_builders[proposer_slashing.builder_index_2].pubkey - domain = get_domain(state, DOMAIN_SHARD_PROPOSER, compute_epoch_at_slot(slot)) - signing_root_1 = compute_signing_root(reference_1, domain) - signing_root_2 = compute_signing_root(reference_2, domain) - assert bls.FastAggregateVerify([builder_pubkey_1, proposer.pubkey], signing_root_1, proposer_slashing.signature_1) - assert bls.FastAggregateVerify([builder_pubkey_2, proposer.pubkey], signing_root_2, proposer_slashing.signature_2) - - slash_validator(state, proposer_index) +#### Execution payload + +```python +def process_execution_payload(state: BeaconState, payload: ExecutionPayload, execution_engine: ExecutionEngine) -> None: + # Verify consistency of the parent hash with respect to the previous execution payload header + if is_merge_transition_complete(state): + assert payload.parent_hash == state.latest_execution_payload_header.block_hash + # Verify 
random + assert payload.random == get_randao_mix(state, get_current_epoch(state)) + # Verify timestamp + assert payload.timestamp == compute_timestamp_at_slot(state, state.slot) + + # Get sharded data headers + + # Get all unprocessed intermediate block bids + + + # Verify the execution payload is valid + assert execution_engine.execute_payload(payload) + # Cache execution payload header + state.latest_execution_payload_header = ExecutionPayloadHeader( + parent_hash=payload.parent_hash, + fee_recipient=payload.fee_recipient, + state_root=payload.state_root, + receipt_root=payload.receipt_root, + logs_bloom=payload.logs_bloom, + random=payload.random, + block_number=payload.block_number, + gas_limit=payload.gas_limit, + gas_used=payload.gas_used, + timestamp=payload.timestamp, + extra_data=payload.extra_data, + base_fee_per_gas=payload.base_fee_per_gas, + block_hash=payload.block_hash, + transactions_root=hash_tree_root(payload.transactions), + ) ``` ### Epoch transition @@ -808,14 +438,10 @@ This epoch transition overrides Bellatrix epoch transition: ```python def process_epoch(state: BeaconState) -> None: - # Sharding pre-processing - process_pending_shard_confirmations(state) - reset_pending_shard_work(state) - # Base functionality process_justification_and_finalization(state) process_inactivity_updates(state) - process_rewards_and_penalties(state) # Note: modified, see new TIMELY_SHARD_FLAG_INDEX + process_rewards_and_penalties(state) process_registry_updates(state) process_slashings(state) process_eth1_data_reset(state) @@ -826,63 +452,3 @@ def process_epoch(state: BeaconState) -> None: process_participation_flag_updates(state) process_sync_committee_updates(state) ``` - -#### `process_pending_shard_confirmations` - -```python -def process_pending_shard_confirmations(state: BeaconState) -> None: - # Pending header processing applies to the previous epoch. - # Skip if `GENESIS_EPOCH` because no prior epoch to process. 
- if get_current_epoch(state) == GENESIS_EPOCH: - return - - previous_epoch = get_previous_epoch(state) - previous_epoch_start_slot = compute_start_slot_at_epoch(previous_epoch) - - # Mark stale headers as unconfirmed - for slot in range(previous_epoch_start_slot, previous_epoch_start_slot + SLOTS_PER_EPOCH): - buffer_index = slot % SHARD_STATE_MEMORY_SLOTS - for shard_index in range(len(state.shard_buffer[buffer_index])): - committee_work = state.shard_buffer[buffer_index][shard_index] - if committee_work.status.selector == SHARD_WORK_PENDING: - winning_header = max(committee_work.status.value, key=lambda header: header.weight) - if winning_header.attested.commitment == DataCommitment(): - committee_work.status.change(selector=SHARD_WORK_UNCONFIRMED, value=None) - else: - committee_work.status.change(selector=SHARD_WORK_CONFIRMED, value=winning_header.attested) -``` - -#### `reset_pending_shard_work` - -```python -def reset_pending_shard_work(state: BeaconState) -> None: - # Add dummy "empty" PendingShardHeader (default vote if no shard header is available) - next_epoch = get_current_epoch(state) + 1 - next_epoch_start_slot = compute_start_slot_at_epoch(next_epoch) - committees_per_slot = get_committee_count_per_slot(state, next_epoch) - active_shards = get_active_shard_count(state, next_epoch) - - for slot in range(next_epoch_start_slot, next_epoch_start_slot + SLOTS_PER_EPOCH): - buffer_index = slot % SHARD_STATE_MEMORY_SLOTS - - # Reset the shard work tracking - state.shard_buffer[buffer_index] = [ShardWork() for _ in range(active_shards)] - - start_shard = get_start_shard(state, slot) - for committee_index in range(committees_per_slot): - shard = (start_shard + committee_index) % active_shards - # a committee is available, initialize a pending shard-header list - committee_length = len(get_beacon_committee(state, slot, CommitteeIndex(committee_index))) - state.shard_buffer[buffer_index][shard].status.change( - selector=SHARD_WORK_PENDING, - 
value=List[PendingShardHeader, MAX_SHARD_HEADERS_PER_SHARD]( - PendingShardHeader( - attested=AttestedDataCommitment(), - votes=Bitlist[MAX_VALIDATORS_PER_COMMITTEE]([0] * committee_length), - weight=0, - update_slot=slot, - ) - ) - ) - # a shard without committee available defaults to SHARD_WORK_UNCONFIRMED. -``` From 2058a50a6b82c6120d9179ca1a60bb70608f6fcc Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Fri, 7 Jan 2022 00:34:03 +0000 Subject: [PATCH 02/66] Most of the KZG checks in --- specs/sharding/beacon-chain.md | 195 ++++++++++++++++++++++++++++++--- 1 file changed, 182 insertions(+), 13 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index cde30a3f94..e52a6e5e50 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -113,8 +113,6 @@ The following values are (non-configurable) constants used throughout the specif | `MAX_SHARDS` | `uint64(2**12)` (= 4,096) | Theoretical max shard count (used to determine data structure sizes) | | `ACTIVE_SHARDS` | `uint64(2**8)` (= 256) | Initial shard count | | `TARGET_SHARDS` | `uint64(ACTIVE_SHARDS // 2)` (= 256) | Initial shard count | -| `DATAGAS_PRICE_ADJUSTMENT_COEFFICIENT` | `uint64(2**3)` (= 8) | Sample price may decrease/increase by at most exp(1 / this value) *per epoch* | -| `SHARD_STATE_MEMORY_SLOTS` | `uint64(2**8)` (= 256) | Number of slots for which shard commitments and confirmation status is directly available in the state | ### Time parameters @@ -196,7 +194,6 @@ class ShardedCommitmentsContainer(Container): included_sharded_data_commitments: uint64 # Random evaluation of beacon blocks + execution payload (this helps with quick verification) - block_verification_y: uint256 block_verification_kzg_proof: KZGCommitment ``` @@ -259,6 +256,145 @@ class BeaconState(bellatrix.BeaconState): last_intermediate_block: IntermediateBlock ``` +## Helper functions + +*Note*: The definitions below are for specification purposes and are not 
necessarily optimal implementations.
+
+### KZG
+
+#### `hash_to_field`
+
+```python
+def hash_to_field(x: Container):
+    return int.from_bytes(hash_tree_root(x), "little") % MODULUS
+```
+
+#### `compute_powers`
+
+```python
+def compute_powers(x: uint256, n: uint64):
+    current_power = 1
+    powers = []
+    for i in range(n):
+        powers.append(uint256(current_power))
+        current_power = current_power * int(x) % MODULUS
+    return powers
+```
+
+#### `verify_kzg_proof`
+
+```python
+def verify_kzg_proof(commitment: KZGCommitment, x: uint256, y: uint256, proof: KZGCommitment) -> List[uint256]:
+    zero_poly = G2_SETUP[1].add(G2_SETUP[0].mult(x).neg())
+
+    assert (
+        bls.Pairing(proof, zero_poly)
+        == bls.Pairing(commitment.add(G1_SETUP[0].mult(y).neg()), G2_SETUP[0])
+    )
+```
+
+#### `verify_degree_proof`
+
+```python
+def verify_degree_proof(commitment: KZGCommitment, degree: uint64, proof: KZGCommitment):
+
+    if degree == -1: # Zero polynomial
+        assert body_summary.degree_proof == G1_SETUP[0]
+
+    # TODO! Check for off by one error
+    assert (
+        bls.Pairing(proof, G2_SETUP[0])
+        == bls.Pairing(commitment, G2_SETUP[-degree + 1])
+    )
+```
+
+#### `block_to_field_elements`
+
+```python
+def block_to_field_elements(block: bytes) -> List[uint256]:
+    """
+    Slices a block into 31 byte chunks that can fit into field elements
+    """
+    sliced_block = [block[i:i + 31] for i in range(0, len(block), 31)]
+    return [uint256(int.from_bytes(x, "little")) for x in sliced_block]
+```
+
+#### `roots_of_unity`
+
+```python
+def roots_of_unity() -> List[uint256]:
+    r = []
+    current_root_of_unity = 1
+    for i in range(SAMPLES_PER_BLOB * POINTS_PER_SAMPLE):
+        r.append(current_root_of_unity)
+        current_root_of_unity = current_root_of_unity * ROOT_OF_UNITY % MODULUS
+    return r
+```
+
+#### `modular_inverse`
+
+```python
+def modular_inverse(a):
+    assert a % MODULUS != 0
+    lm, hm = 1, 0
+    low, high = a % MODULUS, MODULUS
+    while low > 1:
+        r = high // low
+        nm, new = hm - lm * r, high - low * r
+        lm, low, hm, high = nm, new, lm, low
+    return lm % MODULUS
+```
+
+#### `eval_poly_at`
+
+```python
+def eval_poly_at(poly: List[uint256], x: uint256) -> uint256:
+    """
+    Evaluates a polynomial (in evaluation form) at an arbitrary point
+    """
+    roots = roots_of_unity()
+    def A(z):
+        r = 1
+        for w in roots:
+            r = r * (z - w) % MODULUS
+        return r
+
+    def Aprime(z):
+        return pow(z, SAMPLES_PER_BLOB * POINTS_PER_SAMPLE - 1, MODULUS)
+
+    r = 0
+    inverses = [modular_inverse(z - x) for z in roots]
+    for i, inv in enumerate(inverses):
+        r = (r + poly[i] * modular_inverse(Aprime(roots[i])) * inv) % MODULUS
+    r = r * A(x) % MODULUS
+    return r
+```
+
+#### `vector_lincomb`
+
+```python
+def vector_lincomb(vectors: List[List[uint256]], scalars: List[uint256]) -> List[uint256]:
+    """
+    Compute a linear combination of field element vectors
+    """
+    r = [0 for i in range(len(vectors[0]))]
+    for v, a in zip(vectors, scalars):
+        for i, x in enumerate(v):
+            r[i] = (r[i] + a * x) % MODULUS
+    return [uint256(x) for x in r]
+```
+
+#### `multiscalar_multiplication`
+
+```python
+def multiscalar_multiplication(points: List[KZGCommitment], scalars: List[uint256]) -> KZGCommitment:
+    """
+    BLS multiscalar multiplication. This function can be optimized using Pippenger's algorithm and variants.
+    """
+    r = bls.Z1()
+    for x, a in zip(points, scalars):
+        r = r.add(x.mult(a))
+    return r
+```
+
 ### Beacon state accessors
 
 #### `get_active_shard_count`
@@ -353,7 +489,7 @@ def process_intermediate_block_bid_commitment(state: BeaconState, body: Intermed
 #### Intermediate Block header
 
 ```python
-def process_intermediate_block_header(state: BeaconState, block: BeaconBlock) -> None:
+def process_intermediate_block_header(state: BeaconState, block: IntermediateBlock) -> None:
     # Verify that the slots match
     assert block.slot == state.slot
 
@@ -378,15 +514,37 @@ def process_intermediate_block_header(state: BeaconState, block: BeaconBlock) ->
 
 
 ```python
-def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None:
-    # Verify the degree proof
-
-    # Verify that the 2*N commitments lie on a degree N-1 polynomial
+def process_sharded_data(state: BeaconState, body: IntermediateBlockBody) -> None:
+    sharded_commitments_container = body.sharded_commitments_container
 
-    # Verify that last intermediate block has been included
+    # Verify the degree proof
+    r = hash_to_field(sharded_commitments_container.sharded_commitments)
+    r_powers = compute_powers(r, len(sharded_commitments_container.sharded_commitments))
+    combined_commitment = multiscalar_multiplication(sharded_commitments_container.sharded_commitments, r_powers)
 
-    # Verify that beacon block (blocks if intermediate blocks were missing) have been included
+    verify_degree_proof(combined_commitment, SAMPLES_PER_BLOB * POINTS_PER_SAMPLE, sharded_commitments_container.degree_proof)
 
+    # Verify that the 2*N commitments lie on a degree N-1 polynomial
+    # TODO! 
Compute combined barycentric formula for this + + # Verify that last intermediate block and beacon block (blocks if intermediate blocks were missing) have been included + intermediate_block_chunked = block_to_field_elements(ssz_serialize(state.last_intermediate_block)) + beacon_blocks_chunked = [block_to_field_elements(ssz_serialize(block)) for block in state.beacon_blocks_since_intermediate_block] + block_vectors = [] + for block_chunked in [intermediate_block_chunked] + beacon_blocks_chunked: + for i in range(0, len(block_chunked), SAMPLES_PER_BLOB * POINTS_PER_SAMPLE): + block_vectors.append(block_chunked[i:i + SAMPLES_PER_BLOB * POINTS_PER_SAMPLE]) + + number_of_blobs = len(block_vectors) + r = hash_to_field([sharded_commitments_container.sharded_commitments[:number_of_blobs], 0]) + x = hash_to_field([sharded_commitments_container.sharded_commitments[:number_of_blobs], 1]) + + r_powers = compute_powers(r, number_of_blobs) + combined_vector = vector_lincomb(block_vectors, r_powers) + combined_commitment = multiscalar_multiplication(sharded_commitments_container.sharded_commitments[:number_of_blobs], r_powers) + y = eval_poly_at(combined_vector, x) + + verify_kzg_proof(combined_commitment, x, y, block_verification_kzg_proof) ``` The degree proof works as follows. 
For a block `B` with length `l` (so `l` values in `[0...l - 1]`, seen as a polynomial `B(X)` which takes these values), @@ -397,7 +555,10 @@ The goal is to ensure that a proof can only be constructed if `deg(B) < l` (ther #### Execution payload ```python -def process_execution_payload(state: BeaconState, payload: ExecutionPayload, execution_engine: ExecutionEngine) -> None: +def process_execution_payload(state: BeaconState, block: IntermediateBlock, execution_engine: ExecutionEngine) -> None: + + payload = block.body.execution_payload + # Verify consistency of the parent hash with respect to the previous execution payload header if is_merge_transition_complete(state): assert payload.parent_hash == state.latest_execution_payload_header.block_hash @@ -406,13 +567,21 @@ def process_execution_payload(state: BeaconState, payload: ExecutionPayload, exe # Verify timestamp assert payload.timestamp == compute_timestamp_at_slot(state, state.slot) - # Get sharded data headers + # Get sharded data commitments + sharded_commitments_container = block.body.sharded_commitments_container + sharded_data_commitments = sharded_commitments_container.sharded_commitments[-sharded_commitments_container.included_sharded_data_commitments:] # Get all unprocessed intermediate block bids + unprocessed_intermediate_block_bids = [] + for block in state.beacon_blocks_since_intermediate_block: + unprocessed_intermediate_block_bids.append(block.body.intermediate_block_bid) # Verify the execution payload is valid - assert execution_engine.execute_payload(payload) + assert execution_engine.execute_payload(payload, + sharded_data_commitments, + unprocessed_intermediate_block_bids) + # Cache execution payload header state.latest_execution_payload_header = ExecutionPayloadHeader( parent_hash=payload.parent_hash, From 739c57ca82aeb84572c3a751a2e67fb949a41a7b Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Fri, 7 Jan 2022 12:16:50 +0000 Subject: [PATCH 03/66] Fixes suggested by Vitalik --- 
specs/sharding/beacon-chain.md | 55 +++++++++++++++++----------------- 1 file changed, 28 insertions(+), 27 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index e52a6e5e50..5809ee967b 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -112,7 +112,6 @@ The following values are (non-configurable) constants used throughout the specif | - | - | - | | `MAX_SHARDS` | `uint64(2**12)` (= 4,096) | Theoretical max shard count (used to determine data structure sizes) | | `ACTIVE_SHARDS` | `uint64(2**8)` (= 256) | Initial shard count | -| `TARGET_SHARDS` | `uint64(ACTIVE_SHARDS // 2)` (= 256) | Initial shard count | ### Time parameters @@ -126,7 +125,7 @@ With the introduction of intermediate blocks the number of slots per epoch is do | Name | Value | Notes | | - | - | - | -| `SAMPLES_PER_BLOB` | `uint64(2**9)` (= 2,048) | 248 * 2,048 = 507,904 bytes | +| `SAMPLES_PER_BLOB` | `uint64(2**9)` (= 512) | 248 * 512 = 126,976 bytes | ### Precomputed size verification points @@ -148,19 +147,19 @@ E.g. `ACTIVE_SHARDS` and `SAMPLES_PER_BLOB`. | `SECONDS_PER_SLOT` | `uint64(8)` | seconds | 8 seconds | -## Updated containers +## Containers -The following containers have updated definitions to support Sharding. 
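The byte figures in the preset tables follow from packing 31 usable bytes into each BLS field element. A quick check of the corrected `SAMPLES_PER_BLOB` note above (this sketch assumes the 248-byte sample, i.e. 8 points per sample, that the "248 * 512" figure in the table is based on):

```python
# Reproducing the preset table's byte arithmetic. Assumes 31 usable bytes per
# BLS field element and 8 points per 248-byte sample, matching the table note;
# these constants are taken from the tables, not newly defined here.
BYTES_PER_FIELD_ELEMENT = 31
POINTS_PER_SAMPLE = 2**3          # 8 field elements per sample
SAMPLES_PER_BLOB = 2**9           # 512 samples per blob

sample_bytes = BYTES_PER_FIELD_ELEMENT * POINTS_PER_SAMPLE
blob_bytes = sample_bytes * SAMPLES_PER_BLOB

assert sample_bytes == 248        # 31 * 8 = 248 bytes per sample
assert blob_bytes == 126_976      # matches "248 * 512 = 126,976 bytes"
```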
+### New Containers -### `IntermediateBlockBid` +#### `IntermediateBlockBid` ```python class IntermediateBlockBid(Container): execution_payload_root: Root - sharded_data_commitment: Root # Root of the sharded data (only data, not beacon/intermediate block commitments) + sharded_data_commitment_root: Root # Root of the sharded data (only data, not beacon/intermediate block commitments) - sharded_data_commitments: uint64 # Count of sharded data commitments + sharded_data_commitment_count: uint64 # Count of sharded data commitments bid: Gwei # Block builder bid paid to proposer @@ -171,14 +170,7 @@ class IntermediateBlockBid(Container): signature_s: uint256 ``` -### `BeaconBlockBody` - -```python -class BeaconBlockBody(altair.BeaconBlockBody): # Not from bellatrix because we don't want the payload - intermediate_block_bid: IntermediateBlockBid -``` - -### `ShardedCommitmentsContainer` +#### `ShardedCommitmentsContainer` ```python class ShardedCommitmentsContainer(Container): @@ -197,7 +189,7 @@ class ShardedCommitmentsContainer(Container): block_verification_kzg_proof: KZGCommitment ``` -### `IntermediateBlockBody` +#### `IntermediateBlockBody` ```python class IntermediateBlockBody(phase0.BeaconBlockBody): @@ -206,7 +198,7 @@ class IntermediateBlockBody(phase0.BeaconBlockBody): sharded_commitments_container: ShardedCommitmentsContainer ``` -### `IntermediateBlockHeader` +#### `IntermediateBlockHeader` ```python class IntermediateBlockHeader(Container): @@ -216,7 +208,7 @@ class IntermediateBlockHeader(Container): body: Root ``` -### `IntermediateBlock` +#### `IntermediateBlock` ```python class IntermediateBlock(Container): @@ -226,7 +218,7 @@ class IntermediateBlock(Container): body: IntermediateBlockBody ``` -### `SignedIntermediateBlock` +#### `SignedIntermediateBlock` ```python class SignedIntermediateBlock(Container): # @@ -237,7 +229,7 @@ class SignedIntermediateBlock(Container): # signature_s: uint256 ``` -### `SignedIntermediateBlockHeader` +#### 
`SignedIntermediateBlockHeader` ```python class SignedIntermediateBlockHeader(Container): # @@ -248,7 +240,9 @@ class SignedIntermediateBlockHeader(Container): # signature_s: uint256 ``` -### `BeaconState` +### Extended Containers + +#### `BeaconState` ```python class BeaconState(bellatrix.BeaconState): @@ -256,6 +250,13 @@ class BeaconState(bellatrix.BeaconState): last_intermediate_block: IntermediateBlock ``` +#### `BeaconBlockBody` + +```python +class BeaconBlockBody(altair.BeaconBlockBody): # Not from bellatrix because we don't want the payload + intermediate_block_bid: IntermediateBlockBid +``` + ## Helper functions *Note*: The definitions below are for specification purposes and are not necessarily optimal implementations. @@ -382,10 +383,10 @@ def vector_lincomb(vectors: List[List[uint256]], scalars: List[uint256]) -> List return [uint256(x) for x in r] ``` -#### `multiscalar_multiplication` +#### `elliptic_curve_lincomb` ```python -def multiscalar_multiplication(points: List[KZGCommitment], scalars: List[uint256]) -> KZGCommitment: +def elliptic_curve_lincomb(points: List[KZGCommitment], scalars: List[uint256]) -> KZGCommitment: """ BLS multiscalar multiplication. This function can be optimized using Pippenger's algorithm and variants. 
"""
 
@@ -481,9 +482,9 @@ def process_intermediate_block_bid_commitment(state: BeaconState, body: Intermed
 
     assert intermediate_block_bid.execution_payload_root == hash_tree_root(body.execution_payload)
 
-    assert intermediate_block_bid.sharded_data_commitments == body.sharded_commitments_container.included_sharded_data_commitments
+    assert intermediate_block_bid.sharded_data_commitment_count == body.sharded_commitments_container.included_sharded_data_commitments
 
-    assert intermediate_block_bid.sharded_data_commitment == hash_tree_root(body.sharded_commitments_container.sharded_commitments[-intermediate_block_bid.sharded_data_commitments:])
+    assert intermediate_block_bid.sharded_data_commitment_root == hash_tree_root(body.sharded_commitments_container.sharded_commitments[-intermediate_block_bid.sharded_data_commitment_count:])
 ```
 
 #### Intermediate Block header
@@ -520,7 +521,7 @@ def process_sharded_data(state: BeaconState, body: IntermediateBlockBody) -> Non
     # Verify the degree proof
     r = hash_to_field(sharded_commitments_container.sharded_commitments)
     r_powers = compute_powers(r, len(sharded_commitments_container.sharded_commitments))
-    combined_commitment = multiscalar_multiplication(sharded_commitments_container.sharded_commitments, r_powers)
+    combined_commitment = elliptic_curve_lincomb(sharded_commitments_container.sharded_commitments, r_powers)
 
     verify_degree_proof(combined_commitment, SAMPLES_PER_BLOB * POINTS_PER_SAMPLE, sharded_commitments_container.degree_proof)
 
@@ -541,7 +542,7 @@ def process_sharded_data(state: BeaconState, body: IntermediateBlockBody) -> Non
 
     r_powers = compute_powers(r, number_of_blobs)
     combined_vector = vector_lincomb(block_vectors, r_powers)
-    combined_commitment = multiscalar_multiplication(sharded_commitments_container.sharded_commitments[:number_of_blobs], r_powers)
+    combined_commitment = elliptic_curve_lincomb(sharded_commitments_container.sharded_commitments[:number_of_blobs], r_powers)
     y = eval_poly_at(combined_vector, x)
 
     verify_kzg_proof(combined_commitment, x, y, block_verification_kzg_proof)

From c6b5e28bb10c228da88822aba522a4e23f7d3f58 Mon Sep 17 00:00:00 2001
From: Dankrad Feist
Date: Sat, 8 Jan 2022 19:12:36 +0000
Subject: [PATCH 04/66] Add low degree check for commitments

---
 specs/sharding/beacon-chain.md | 44 ++++++++++++++++++++++++++++++----
 1 file changed, 39 insertions(+), 5 deletions(-)

diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md
index 5809ee967b..6128ef3f64 100644
--- a/specs/sharding/beacon-chain.md
+++ b/specs/sharding/beacon-chain.md
@@ -322,12 +322,14 @@ def block_to_field_elements(block: bytes) -> List[uint256]:
 #### `roots_of_unity`
 
 ```python
-def roots_of_unity() -> List[uint256]:
+def roots_of_unity(order: uint64) -> List[uint256]:
     r = []
+    root_of_unity = pow(PRIMITIVE_ROOT_OF_UNITY, (MODULUS - 1) // order, MODULUS)
+
     current_root_of_unity = 1
-    for i in range(SAMPLES_PER_BLOB * POINTS_PER_SAMPLE):
+    for i in range(order):
         r.append(current_root_of_unity)
-        current_root_of_unity = current_root_of_unity * ROOT_OF_UNITY % MODULUS
+        current_root_of_unity = current_root_of_unity * root_of_unity % MODULUS
     return r
 ```
 
@@ -352,14 +354,15 @@ def eval_poly_at(poly: List[uint256], x: uint256) -> uint256:
     """
     Evaluates a polynomial (in evaluation form) at an arbitrary point
     """
-    roots = roots_of_unity()
+    points_per_blob = SAMPLES_PER_BLOB * POINTS_PER_SAMPLE
+    roots = roots_of_unity(points_per_blob)
     def A(z):
         r = 1
         for w in roots:
             r = r * (z - w) % MODULUS
         return r
 
     def Aprime(z):
-        return pow(z, SAMPLES_PER_BLOB * POINTS_PER_SAMPLE - 1, MODULUS)
+        return points_per_blob * pow(z, points_per_blob - 1, MODULUS)
 
     r = 0
     inverses = [modular_inverse(z - x) for z in roots]
@@ -369,6 +372,37 @@ def eval_poly_at(poly: List[uint256], x: uint256) -> uint256:
 
+#### `next_power_of_two`
+
+```python
+def next_power_of_two(x: int) -> int:
+    return 2 ** ((x - 1).bit_length())
+```
+
+#### `low_degree_check`
+
+```python
+def low_degree_check(commitments: List[KZGCommitment]):
+    """
+    Checks that the commitments lie on a low-degree polynomial
+    If there are 2*N commitments, they should lie on a polynomial
+    of degree d - N - 1, where d = next_power_of_two(2*N)
+    (The remaining positions are filled with 0, this is to make FFTs usable)
+    """
+    result = 0
+    N = len(commitments) // 2
+    r = hash_to_field(commitments)
+    domain_size = next_power_of_two(2 * N)
+    r_to_domain_size = pow(r, domain_size, MODULUS)
+    roots = roots_of_unity(domain_size)
+
+    coefs = []
+    for i in range(2 * N):
+        coefs.append(((-1)**i * r_to_domain_size - 1) * modular_inverse(roots[i * (domain_size // 2 - 1) % domain_size] * (r - roots[i])) % MODULUS)
+
+    assert elliptic_curve_lincomb(commitments, coefs) == bls.Z1()
+```
+
 #### `vector_lincomb`
 
 ```python
@@ -526,7 +560,7 @@ def process_sharded_data(state: BeaconState, body: IntermediateBlockBody) -> Non
     verify_degree_proof(combined_commitment, SAMPLES_PER_BLOB * POINTS_PER_SAMPLE, sharded_commitments_container.degree_proof)
 
     # Verify that the 2*N commitments lie on a degree N-1 polynomial
-    # TODO! 
Compute combined barycentric formula for this + low_degree_check(sharded_commitments_container.sharded_commitments) # Verify that last intermediate block and beacon block (blocks if intermediate blocks were missing) have been included intermediate_block_chunked = block_to_field_elements(ssz_serialize(state.last_intermediate_block)) From 9242727e8cafc6baefe8e714fd2b86057862aa7f Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Sat, 8 Jan 2022 21:34:17 +0000 Subject: [PATCH 05/66] Remove -1 check from degree proof (not needed) --- specs/sharding/beacon-chain.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 6128ef3f64..cabb78fae9 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -297,9 +297,9 @@ def verify_kzg_proof(commitment: KZGCommitment, x: uint256, y: uint256, proof: K ```python def verify_degree_proof(commitment: KZGCommitment, degree: uint64, proof: KZGCommitment): - - if degree == -1: # Zero polynomial - assert body_summary.degree_proof == G1_SETUP[0] + """ + Verifies that the commitment is of polynomial degree <= degree. + """ # TODO! 
Check for off by one error assert ( From cc8d81d1398d496a338ff82bcdc526aa312a2674 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Thu, 13 Jan 2022 00:48:37 +0000 Subject: [PATCH 06/66] Require block builders to be validators to simplify things --- specs/sharding/beacon-chain.md | 392 +++++++++++++-------------------- 1 file changed, 158 insertions(+), 234 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index cabb78fae9..9a5a398962 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -88,7 +88,7 @@ We define the following Python custom types for type hinting and readability: | Name | SSZ equivalent | Description | | - | - | - | -| `BLSCommitment` | `Bytes48` | A G1 curve point | +| `KZGCommitment` | `Bytes48` | A G1 curve point | | `BLSFieldElement` | `uint256` | A number `x` in the range `0 <= x < MODULUS` | ## Constants @@ -162,6 +162,8 @@ class IntermediateBlockBid(Container): sharded_data_commitment_count: uint64 # Count of sharded data commitments bid: Gwei # Block builder bid paid to proposer + + validator_index: ValidatorIndex # Validator index for this bid # Block builders use an Eth1 address -- need signature as # block builder fees and data gas base fees will be charged to this address @@ -180,7 +182,7 @@ class ShardedCommitmentsContainer(Container): degree_proof: KZGCommitment # The sizes of the blocks encoded in the commitments (last intermediate and all beacon blocks since) - included_beacon_block_sizes: List[uint64, MAX_BEACON_BLOCKS_BETWEEN_INTERMEDIATE_BLOCKS + 1] + included_block_sizes: List[uint64, MAX_BEACON_BLOCKS_BETWEEN_INTERMEDIATE_BLOCKS + 1] # Number of commitments that are for sharded data (no blocks) included_sharded_data_commitments: uint64 @@ -189,78 +191,37 @@ class ShardedCommitmentsContainer(Container): block_verification_kzg_proof: KZGCommitment ``` -#### `IntermediateBlockBody` - -```python -class IntermediateBlockBody(phase0.BeaconBlockBody): - attestation: 
Attestation - execution_payload: ExecutionPayload - sharded_commitments_container: ShardedCommitmentsContainer -``` - -#### `IntermediateBlockHeader` - -```python -class IntermediateBlockHeader(Container): - slot: Slot - parent_root: Root - state_root: Root - body: Root -``` - -#### `IntermediateBlock` - -```python -class IntermediateBlock(Container): - slot: Slot - parent_root: Root - state_root: Root - body: IntermediateBlockBody -``` - -#### `SignedIntermediateBlock` - -```python -class SignedIntermediateBlock(Container): # - message: IntermediateBlock - - signature_y_parity: bool - signature_r: uint256 - signature_s: uint256 -``` - -#### `SignedIntermediateBlockHeader` - -```python -class SignedIntermediateBlockHeader(Container): # - message: IntermediateBlockHeader - - signature_y_parity: bool - signature_r: uint256 - signature_s: uint256 -``` - ### Extended Containers #### `BeaconState` ```python class BeaconState(bellatrix.BeaconState): - beacon_blocks_since_intermediate_block: List[BeaconBlock, MAX_BEACON_BLOCKS_BETWEEN_INTERMEDIATE_BLOCKS] - last_intermediate_block: IntermediateBlock + blocks_since_intermediate_block: List[BeaconBlock, MAX_BEACON_BLOCKS_BETWEEN_INTERMEDIATE_BLOCKS] ``` #### `BeaconBlockBody` ```python -class BeaconBlockBody(altair.BeaconBlockBody): # Not from bellatrix because we don't want the payload - intermediate_block_bid: IntermediateBlockBid +class BeaconBlockBody(bellatrix.BeaconBlockBody): + execution_payload: Union[None, ExecutionPayload] + sharded_commitments_container: Union[None, ShardedCommitmentsContainer] + intermediate_block_bid: Union[None, IntermediateBlockBid] ``` ## Helper functions *Note*: The definitions below are for specification purposes and are not necessarily optimal implementations. 
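The KZG helpers below repeatedly use one pattern: derive a Fiat-Shamir challenge `r`, expand it into powers, and fold many vectors (or commitments) into a single object that can be checked once. A minimal sketch of that pattern, using a small toy prime in place of the BLS12-381 modulus (all `toy_*` names are illustrative, not part of the spec):

```python
# Toy sketch of the random-linear-combination pattern used by compute_powers /
# vector_lincomb / elliptic_curve_lincomb: combine many vectors with powers of
# a challenge r and operate on the single combined vector instead.
TOY_MODULUS = 101  # stand-in for the BLS12-381 scalar field modulus

def toy_compute_powers(x: int, n: int) -> list:
    powers, current = [], 1
    for _ in range(n):
        powers.append(current)
        current = current * x % TOY_MODULUS
    return powers

def toy_vector_lincomb(vectors: list, scalars: list) -> list:
    result = [0] * len(vectors[0])
    for vector, scalar in zip(vectors, scalars):
        for i, value in enumerate(vector):
            result[i] = (result[i] + scalar * value) % TOY_MODULUS
    return result

vectors = [[1, 2, 3], [4, 5, 6]]
r_powers = toy_compute_powers(7, 2)   # challenge r = 7 -> [1, 7]
combined = toy_vector_lincomb(vectors, r_powers)
assert combined == [(1 + 4 * 7) % 101, (2 + 5 * 7) % 101, (3 + 6 * 7) % 101]
```

Because commitments are (additively) homomorphic, the same scalars applied to the commitments yield a commitment to the combined vector, which is what makes the single combined check sound.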
+
+### Block processing
+
+#### `is_intermediate_block_slot`
+
+```python
+def is_intermediate_block_slot(slot: Slot) -> bool:
+    return slot % 2 == 1
+```
+
 ### KZG
 
 #### `hash_to_field`
 
 ```python
@@ -273,18 +234,18 @@ def hash_to_field(x: Container):
 #### `compute_powers`
 
 ```python
-def compute_powers(x: uint256, n: uint64):
+def compute_powers(x: BLSFieldElement, n: uint64) -> List[BLSFieldElement]:
     current_power = 1
     powers = []
     for i in range(n):
-        powers.append(uint256(current_power))
+        powers.append(BLSFieldElement(current_power))
         current_power = current_power * int(x) % MODULUS
     return powers
 ```
 
 #### `verify_kzg_proof`
 
 ```python
-def verify_kzg_proof(commitment: KZGCommitment, x: uint256, y: uint256, proof: KZGCommitment) -> List[uint256]:
+def verify_kzg_proof(commitment: KZGCommitment, x: BLSFieldElement, y: BLSFieldElement, proof: KZGCommitment) -> None:
     zero_poly = G2_SETUP[1].add(G2_SETUP[0].mult(x).neg())
 
     assert (
@@ -311,18 +272,18 @@ def verify_degree_proof(commitment: KZGCommitment, degree: uint64, proof: KZGCom
 #### `block_to_field_elements`
 
 ```python
-def block_to_field_elements(block: bytes) -> List[uint256]:
+def block_to_field_elements(block: bytes) -> List[BLSFieldElement]:
     """
     Slices a block into 31 byte chunks that can fit into field elements
     """
     sliced_block = [block[i:i + 31] for i in range(0, len(block), 31)]
-    return [uint256(int.from_bytes(x, "little")) for x in sliced_block]
+    return [BLSFieldElement(int.from_bytes(x, "little")) for x in sliced_block]
 ```
 
 #### `roots_of_unity`
 
 ```python
-def roots_of_unity(order: uint64) -> List[uint256]:
+def roots_of_unity(order: uint64) -> List[BLSFieldElement]:
     r = []
     root_of_unity = pow(PRIMITIVE_ROOT_OF_UNITY, (MODULUS - 1) // order, MODULUS)
 
@@ -350,7 +311,7 @@ def modular_inverse(a):
 #### `eval_poly_at`
 
 ```python
-def eval_poly_at(poly: List[uint256], x: uint256) -> uint256:
+def eval_poly_at(poly: List[BLSFieldElement], x: BLSFieldElement) -> BLSFieldElement:
     """
     Evaluates a polynomial (in evaluation form) at an arbitrary point
     """
@@ -389,6 +350,7 @@ def low_degree_check(commitments: List[KZGCommitment]):
     of degree d - N - 1, where d = next_power_of_two(2*N)
     (The remaining positions are filled with 0, this is to make FFTs usable)
     """
+    # TODO! -- this is currently wrong for non-power-of-two lists; need to implement the last part of this: https://notes.ethereum.org/N-4E_VbaSy2Iqcug9ke8PQ
     result = 0
     N = len(commitments) // 2
     r = hash_to_field(commitments)
@@ -406,7 +368,7 @@ def low_degree_check(commitments: List[KZGCommitment]):
 #### `vector_lincomb`
 
 ```python
-def vector_lincomb(vectors: List[List[uint256]], scalars: List[uint256]) -> List[uint256]:
+def vector_lincomb(vectors: List[List[BLSFieldElement]], scalars: List[BLSFieldElement]) -> List[BLSFieldElement]:
     """
     Compute a linear combination of field element vectors
     """
@@ -414,13 +376,13 @@ def vector_lincomb(vectors: List[List[uint256]], scalars: List[uint256]) -> List
     for v, a in zip(vectors, scalars):
         for i, x in enumerate(v):
             r[i] = (r[i] + a * x) % MODULUS
-    return [uint256(x) for x in r]
+    return [BLSFieldElement(x) for x in r]
 ```
 
 #### `elliptic_curve_lincomb`
 
 ```python
-def elliptic_curve_lincomb(points: List[KZGCommitment], scalars: List[uint256]) -> KZGCommitment:
+def elliptic_curve_lincomb(points: List[KZGCommitment], scalars: List[BLSFieldElement]) -> KZGCommitment:
     """
     BLS multiscalar multiplication. This function can be optimized using Pippenger's algorithm and variants.
     """
 
@@ -450,136 +412,116 @@ def get_active_shard_count(state: BeaconState, epoch: Epoch) -> uint64:
 ```python
 def process_beacon_block(state: BeaconState, block: BeaconBlock) -> None:
     process_block_header(state, block)
+    verify_intermediate_block_bid_commitment(state, block)
+    verify_intermediate_block_bid(state, block)
+    process_sharded_data(state, block)
+    if is_execution_enabled(state, block.body):
+        process_execution_payload(state, block, EXECUTION_ENGINE)
+
     process_randao(state, block.body)
     process_eth1_data(state, block.body)
     process_operations(state, block.body)
     process_sync_aggregate(state, block.body.sync_aggregate)
 
-    state.beacon_blocks_since_intermediate_block.append(block)
+    if is_intermediate_block_slot(block.slot):
+        state.blocks_since_intermediate_block = []
+    state.blocks_since_intermediate_block.append(block)
 ```
 
-#### `process_intermediate_block`
-
-```python
-def process_intermediate_block(state: BeaconState, block: IntermediateBlock) -> None:
-    process_intermediate_block_header(state, block)
-    process_intermediate_block_bid_commitment(state, block)
-    process_sharded_data(state, block)
-    process_execution_payload(state, block.body.execution_payload, EXECUTION_ENGINE)
-    process_intermediate_block_attestations(state, block)
-
-    state.last_intermediate_block = block
-```
-
-#### Beacon Block Operations
-
-```python
-def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
-    # Verify that outstanding deposits are processed up to the maximum number of deposits
-    assert len(body.deposits) == min(MAX_DEPOSITS, state.eth1_data.deposit_count - state.eth1_deposit_index)
-
-    def for_ops(operations: Sequence[Any], fn: Callable[[BeaconState, Any], None]) -> None:
-        for operation in operations:
-            fn(state, operation)
-
-    for_ops(body.proposer_slashings, process_proposer_slashing)
-    for_ops(body.attester_slashings, process_attester_slashing)
-
-    # New shard proposer slashing processing
-    for_ops(body.shard_proposer_slashings, 
process_shard_proposer_slashing) - - # Limit is dynamic: based on active shard count - assert len(body.shard_headers) <= MAX_SHARD_HEADERS_PER_SHARD * get_active_shard_count(state, get_current_epoch(state)) - for_ops(body.shard_headers, process_shard_header) +def process_block_header(state: BeaconState, block: BeaconBlock) -> None: + # Verify that the slots match + assert block.slot == state.slot + # Verify that the block is newer than latest block header + assert block.slot > state.latest_block_header.slot + # Verify that proposer index is the correct index + if not is_intermediate_block_slot(block.slot): + assert block.proposer_index == get_beacon_proposer_index(state) + # Verify that the parent matches + assert block.parent_root == hash_tree_root(state.latest_block_header) + # Cache current block as the new latest block + state.latest_block_header = BeaconBlockHeader( + slot=block.slot, + proposer_index=block.proposer_index, + parent_root=block.parent_root, + state_root=Bytes32(), # Overwritten in the next process_slot call + body_root=hash_tree_root(block.body), + ) - # New attestation processing - for_ops(body.attestations, process_attestation) - for_ops(body.deposits, process_deposit) - for_ops(body.voluntary_exits, process_voluntary_exit) + # Verify proposer is not slashed + proposer = state.validators[block.proposer_index] + assert not proposer.slashed ``` -#### Intermediate Block Operations +#### Intermediate Block Bid Commitment ```python -def process_intermediate_block_attestations(state: BeaconState, body: IntermediateBlockBody) -> None: +def verify_intermediate_block_bid_commitment(state: BeaconState, block: BeaconBlock) -> None: + if is_intermediate_block_slot(block.slot): - for attestation in block.body.attestations: - process_attestation(state, block.body.attestation) ``` -#### Intermediate Block Bid Commitment +#### Intermediate Block Bid ```python -def process_intermediate_block_bid_commitment(state: BeaconState, body: IntermediateBlockBody) -> 
None:
-    # Get last intermediate block bid
-    intermediate_block_bid = state.beacon_blocks_since_intermediate_block[-1].body.intermediate_block_bid
-
-    assert intermediate_block_bid.execution_payload_root == hash_tree_root(body.execution_payload)
-
-    assert intermediate_block_bid.sharded_data_commitment_number == body.sharded_commitments_container.included_sharded_data_commitments
-
-    assert intermediate_block_bid.sharded_data_commitment_root == hash_tree_root(body.sharded_commitments_container.sharded_commitments[-intermediate_block_bid.sharded_data_commitments:])
-```
+def verify_intermediate_block_bid(state: BeaconState, block: BeaconBlock) -> None:
+    if is_intermediate_block_slot(block.slot):
+        # Get last intermediate block bid
+        intermediate_block_bid = state.blocks_since_intermediate_block[-1].body.intermediate_block_bid
 
-#### Intermediate Block header
+        assert intermediate_block_bid.execution_payload_root == hash_tree_root(block.body.execution_payload)
 
-```python
-def process_intermediate_block_header(state: BeaconState, block: IntermediateBlock) -> None:
-    # Verify that the slots match
-    assert block.slot == state.slot
+        assert intermediate_block_bid.sharded_data_commitment_number == block.body.sharded_commitments_container.included_sharded_data_commitments
 
-    # Verify that the block is newer than latest block header
-    assert block.slot == state.latest_block_header.slot + 1
+        assert intermediate_block_bid.sharded_data_commitment_root == hash_tree_root(block.body.sharded_commitments_container.sharded_commitments[-intermediate_block_bid.sharded_data_commitment_number:])
 
-    # Verify that the parent matches
-    assert block.parent_root == hash_tree_root(state.latest_block_header)
+        assert intermediate_block_bid.validator_index == block.proposer_index
 
-    # Cache current block as the new latest block
-    # TODO! 
Adapt this to support intermediate block headers - state.latest_block_header = BeaconBlockHeader( - slot=block.slot, - proposer_index=block.proposer_index, - parent_root=block.parent_root, - state_root=Bytes32(), # Overwritten in the next process_slot call - body_root=hash_tree_root(block.body), - ) + assert block.body.intermediate_block_bid.selector == 0 # Verify that intermediate block does not contain bid + else: + assert block.body.intermediate_block_bid.selector == 1 ``` #### Sharded data - ```python -def process_sharded_data(state: BeaconState, body: IntermediateBlockBody) -> None: - sharded_commitments_container = body.sharded_commitments_container - - # Verify the degree proof - r = hash_to_field(sharded_commitments_container.sharded_commitments) - r_powers = compute_powers(r, len(sharded_commitments_container.sharded_commitments)) - combined_commitment = elliptic_curve_lincomb(sharded_commitments_container.sharded_commitments, r_powers) - - verify_degree_proof(combined_commitments, SAMPLES_PER_BLOB * POINTS_PER_SAMPLE, sharded_commitments_container.degree_proof) - - # Verify that the 2*N commitments lie on a degree N-1 polynomial - low_degree_check(sharded_commitments_container.sharded_commitments) - - # Verify that last intermediate block and beacon block (blocks if intermediate blocks were missing) have been included - intermediate_block_chunked = block_to_field_elements(ssz_serialize(state.last_intermediate_block)) - beacon_blocks_chunked = [block_to_field_elements(ssz_serialize(block)) for block in state.beacon_blocks_since_intermediate_block] - block_vectors = [] - for block_chunked in [intermediate_block_chunked] + beacon_blocks_chunked: - for i in range(0, len(block_chunked), SAMPLES_PER_BLOB * POINTS_PER_SAMPLE): - block_vectors.append(block_chunked[i:i + SAMPLES_PER_BLOB * POINTS_PER_SAMPLE]) - - number_of_blobs = len(block_vectors) - r = hash_to_field([sharded_commitments_container.sharded_commitments[:number_of_blobs], 0]) - x = 
hash_to_field([sharded_commitments_container.sharded_commitments[:number_of_blobs], 1]) - - r_powers = compute_powers(r, number_of_blobs) - combined_vector = vector_lincomb(block_vectors, r_powers) - combined_commitment = elliptic_curve_lincomb(sharded_commitments_container.sharded_commitments[:number_of_blobs], r_powers) - y = eval_poly_at(combined_vector, x) - - verify_kzg_proof(combined_commitment, x, y, block_verification_kzg_proof) +def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: + if is_intermediate_block_slot(block.slot): + assert block.body.sharded_commitments_container.selector == 1 + sharded_commitments_container = block.body.sharded_commitments_container.value + + # Verify the degree proof + r = hash_to_field(sharded_commitments_container.sharded_commitments) + r_powers = compute_powers(r, len(sharded_commitments_container.sharded_commitments)) + combined_commitment = elliptic_curve_lincomb(sharded_commitments_container.sharded_commitments, r_powers) + + verify_degree_proof(combined_commitment, SAMPLES_PER_BLOB * POINTS_PER_SAMPLE, sharded_commitments_container.degree_proof) + + # Verify that the 2*N commitments lie on a degree N-1 polynomial + low_degree_check(sharded_commitments_container.sharded_commitments) + + # Verify that last intermediate block and beacon block (blocks if intermediate blocks were missing) have been included + intermediate_block_chunked = block_to_field_elements(ssz_serialize(state.last_intermediate_block)) + beacon_blocks_chunked = [block_to_field_elements(ssz_serialize(block)) for block in state.blocks_since_intermediate_block] + block_vectors = [] + for block_chunked in [intermediate_block_chunked] + beacon_blocks_chunked: + for i in range(0, len(block_chunked), SAMPLES_PER_BLOB * POINTS_PER_SAMPLE): + block_vectors.append(block_chunked[i:i + SAMPLES_PER_BLOB * POINTS_PER_SAMPLE]) + + number_of_blobs = len(block_vectors) + r = 
hash_to_field([sharded_commitments_container.sharded_commitments[:number_of_blobs], 0]) + x = hash_to_field([sharded_commitments_container.sharded_commitments[:number_of_blobs], 1]) + + r_powers = compute_powers(r, number_of_blobs) + combined_vector = vector_lincomb(block_vectors, r_powers) + combined_commitment = elliptic_curve_lincomb(sharded_commitments_container.sharded_commitments[:number_of_blobs], r_powers) + y = eval_poly_at(combined_vector, x) + + verify_kzg_proof(combined_commitment, x, y, block_verification_kzg_proof) + else: + assert block.body.sharded_commitments_container.selector == 0 # Only intermediate blocks have sharded commitments ``` The degree proof works as follows. For a block `B` with length `l` (so `l` values in `[0...l - 1]`, seen as a polynomial `B(X)` which takes these values), @@ -590,69 +532,51 @@ The goal is to ensure that a proof can only be constructed if `deg(B) < l` (ther #### Execution payload ```python -def process_execution_payload(state: BeaconState, block: IntermediateBlock, execution_engine: ExecutionEngine) -> None: - - payload = block.body.execution_payload - - # Verify consistency of the parent hash with respect to the previous execution payload header - if is_merge_transition_complete(state): - assert payload.parent_hash == state.latest_execution_payload_header.block_hash - # Verify random - assert payload.random == get_randao_mix(state, get_current_epoch(state)) - # Verify timestamp - assert payload.timestamp == compute_timestamp_at_slot(state, state.slot) - - # Get sharded data commitments - sharded_commitments_container = block.body.sharded_commitments_container - sharded_data_commitments = sharded_commitments_container.sharded_commitments[-sharded_commitments_container.included_sharded_data_commitments:] - - # Get all unprocessed intermediate block bids - unprocessed_intermediate_block_bids = [] - for block in state.beacon_blocks_since_intermediate_block: - 
unprocessed_intermediate_block_bids.append(block.body.intermediate_block_bid) - - - # Verify the execution payload is valid - assert execution_engine.execute_payload(payload, - sharded_data_commitments, - unprocessed_intermediate_block_bids) - - # Cache execution payload header - state.latest_execution_payload_header = ExecutionPayloadHeader( - parent_hash=payload.parent_hash, - fee_recipient=payload.fee_recipient, - state_root=payload.state_root, - receipt_root=payload.receipt_root, - logs_bloom=payload.logs_bloom, - random=payload.random, - block_number=payload.block_number, - gas_limit=payload.gas_limit, - gas_used=payload.gas_used, - timestamp=payload.timestamp, - extra_data=payload.extra_data, - base_fee_per_gas=payload.base_fee_per_gas, - block_hash=payload.block_hash, - transactions_root=hash_tree_root(payload.transactions), - ) -``` - -### Epoch transition - -This epoch transition overrides Bellatrix epoch transition: - -```python -def process_epoch(state: BeaconState) -> None: - # Base functionality - process_justification_and_finalization(state) - process_inactivity_updates(state) - process_rewards_and_penalties(state) - process_registry_updates(state) - process_slashings(state) - process_eth1_data_reset(state) - process_effective_balance_updates(state) - process_slashings_reset(state) - process_randao_mixes_reset(state) - process_historical_roots_update(state) - process_participation_flag_updates(state) - process_sync_committee_updates(state) -``` +def process_execution_payload(state: BeaconState, block: BeaconBlock, execution_engine: ExecutionEngine) -> None: + if is_intermediate_block_slot(block.slot): + assert block.body.execution_payload.selector == 1 + payload = block.body.execution_payload.value + + # Verify consistency of the parent hash with respect to the previous execution payload header + if is_merge_transition_complete(state): + assert payload.parent_hash == state.latest_execution_payload_header.block_hash + # Verify random + assert 
payload.random == get_randao_mix(state, get_current_epoch(state)) + # Verify timestamp + assert payload.timestamp == compute_timestamp_at_slot(state, state.slot) + + # Get sharded data commitments + sharded_commitments_container = block.body.sharded_commitments_container + sharded_data_commitments = sharded_commitments_container.sharded_commitments[-sharded_commitments_container.included_sharded_data_commitments:] + + # Get all unprocessed intermediate block bids + unprocessed_intermediate_block_bids = [] + for block in state.blocks_since_intermediate_block: + unprocessed_intermediate_block_bids.append(block.body.intermediate_block_bid) + + + # Verify the execution payload is valid + assert execution_engine.execute_payload(payload, + sharded_data_commitments, + unprocessed_intermediate_block_bids) + + # Cache execution payload header + state.latest_execution_payload_header = ExecutionPayloadHeader( + parent_hash=payload.parent_hash, + fee_recipient=payload.fee_recipient, + state_root=payload.state_root, + receipt_root=payload.receipt_root, + logs_bloom=payload.logs_bloom, + random=payload.random, + block_number=payload.block_number, + gas_limit=payload.gas_limit, + gas_used=payload.gas_used, + timestamp=payload.timestamp, + extra_data=payload.extra_data, + base_fee_per_gas=payload.base_fee_per_gas, + block_hash=payload.block_hash, + transactions_root=hash_tree_root(payload.transactions), + ) + else: + assert block.body.execution_payload.selector == 0 # Only intermediate blocks have execution payloads +``` \ No newline at end of file From 2c001347a0f3a1c17a2673041562e8e87b3c0bd9 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Thu, 13 Jan 2022 00:53:36 +0000 Subject: [PATCH 07/66] remove verify_intermediate_block_bid_commitment --- specs/sharding/beacon-chain.md | 1 - 1 file changed, 1 deletion(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 9a5a398962..5ac7063393 100644 --- a/specs/sharding/beacon-chain.md +++ 
b/specs/sharding/beacon-chain.md @@ -412,7 +412,6 @@ def get_active_shard_count(state: BeaconState, epoch: Epoch) -> uint64: ```python def process_beacon_block(state: BeaconState, block: BeaconBlock) -> None: process_block_header(state, block) - verify_intermediate_block_bid_commitment(state, block) verify_intermediate_block_bid(state, block) process_sharded_data(state, block) if is_execution_enabled(state, block.body): From 71f1021b8ebacfbb6d7f5906534f10553b6ac28c Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Thu, 13 Jan 2022 00:56:43 +0000 Subject: [PATCH 08/66] Rename back to process_block --- specs/sharding/beacon-chain.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 5ac7063393..694d4af04f 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -407,10 +407,10 @@ def get_active_shard_count(state: BeaconState, epoch: Epoch) -> uint64: ### Block processing -#### `process_beacon_block` +#### `process_block` ```python -def process_beacon_block(state: BeaconState, block: BeaconBlock) -> None: +def process_block(state: BeaconState, block: BeaconBlock) -> None: process_block_header(state, block) verify_intermediate_block_bid(state, block) process_sharded_data(state, block) From aba8f444c920c034e77c2daad129092623c6e6ce Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Thu, 13 Jan 2022 17:11:10 +0000 Subject: [PATCH 09/66] Degree check formula corrections --- specs/sharding/beacon-chain.md | 37 +++++++++++++++++++++++++--------- 1 file changed, 28 insertions(+), 9 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 694d4af04f..90b3724f58 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -262,10 +262,9 @@ def verify_degree_proof(commitment: KZGCommitment, degree: uint64, proof: KZGCom Verifies that the commitment is of polynomial degree <= degree. """ - # TODO! 
Check for off by one error
     assert (
         bls.Pairing(proof, G2_SETUP[0])
-        == bls.Pairing(commitment, G2_SETUP[-degree + 1])
+        == bls.Pairing(commitment, G2_SETUP[-degree - 1])
     )
 ```
@@ -321,6 +320,7 @@ def eval_poly_at(poly: List[BLSFieldElement], x: BLSFieldElement) -> BLSFieldEle
         r = 1
         for w in roots:
             r = r * (z - w) % MODULUS
+        return r
 
     def Aprime(z):
         return points_per_blob * pow(z, points_per_blob - 1, MODULUS)
@@ -347,20 +347,39 @@ def low_degree_check(commitments: List[KZGCommitment]):
     """
     Checks that the commitments are on a low-degree polynomial
     If there are 2*N commitments, that means they should lie on a polynomial
-    of degree d - N - 1, where d = next_power_of_two(2*N)
+    of degree d = K - N - 1, where K = next_power_of_two(2*N)
     (The remaining positions are filled with 0, this is to make FFTs usable)
     """
     # TODO! -- this is currently wrong non power of two lists, need to implement last part of this: https://notes.ethereum.org/N-4E_VbaSy2Iqcug9ke8PQ
-    result = 0
+    assert len(commitments) % 2 == 1
     N = len(commitments) // 2
     r = hash_to_field(commitments)
-    domain_size = next_power_of_two(2 * N)
-    r_to_domain_size = pow(r, N, domain_size)
-    roots = roots_of_unity(domain_size)
+    K = next_power_of_two(2 * N)
+    d = K - N - 1
+    r_to_K = pow(r, K, MODULUS)
+    roots = roots_of_unity(K)
+
+    # For an efficient implementation, B and Bprime should be precomputed
+    def B(z):
+        r = 1
+        for w in roots[:d + 1]:
+            r = r * (z - w) % MODULUS
+        return r
+
+    def Bprime(z):
+        r = 0
+        for i in range(d + 1):
+            m = 1
+            for w in roots[:i] + roots[i+1:d + 1]:
+                m = m * (z - w) % MODULUS
+            r = (r + m) % MODULUS
+        return r
 
     coefs = []
-    for i in range(2 * N):
-        coefs.append(((-1)**i * r_to_domain_size - 1) * modular_inverse(roots[i * (domain_size // 2 - 1) % domain_size] * (r - roots[i])) % MODULUS)
+    for i in range(K):
+        coefs.append( - (r_to_K - 1) * modular_inverse(K * roots[i * (K - 1) % K] * (r - roots[i])) % MODULUS)
+    for i in range(d + 1):
+        coefs.append( B(r) * modular_inverse(Bprime(r) * 
(r - roots[i])) % MODULUS) assert elliptic_curve_lincomb(commitments, coefs) == bls.Z1() ``` From 359b376fc9b8cb8cd52afc0f337a5969ce6cdf62 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Fri, 14 Jan 2022 00:02:27 +0000 Subject: [PATCH 10/66] Updated TOC, bid processing corrections --- specs/sharding/beacon-chain.md | 170 +++++++++++++++++---------------- 1 file changed, 87 insertions(+), 83 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 90b3724f58..d59eb432c6 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -13,57 +13,45 @@ - [Custom types](#custom-types) - [Constants](#constants) - [Misc](#misc) - - [Domain types](#domain-types) - - [Shard Work Status](#shard-work-status) - - [Misc](#misc-1) - - [Participation flag indices](#participation-flag-indices) - - [Incentivization weights](#incentivization-weights) - [Preset](#preset) - - [Misc](#misc-2) + - [Misc](#misc-1) + - [Time parameters](#time-parameters) - [Shard blob samples](#shard-blob-samples) - [Precomputed size verification points](#precomputed-size-verification-points) - - [Gwei values](#gwei-values) - [Configuration](#configuration) -- [Updated containers](#updated-containers) - - [`AttestationData`](#attestationdata) - - [`BeaconBlockBody`](#beaconblockbody) - - [`BeaconState`](#beaconstate) -- [New containers](#new-containers) - - [`Builder`](#builder) - - [`DataCommitment`](#datacommitment) - - [`AttestedDataCommitment`](#attesteddatacommitment) - - [`ShardBlobBody`](#shardblobbody) - - [`ShardBlobBodySummary`](#shardblobbodysummary) - - [`ShardBlob`](#shardblob) - - [`ShardBlobHeader`](#shardblobheader) - - [`SignedShardBlob`](#signedshardblob) - - [`SignedShardBlobHeader`](#signedshardblobheader) - - [`PendingShardHeader`](#pendingshardheader) - - [`ShardBlobReference`](#shardblobreference) - - [`ShardProposerSlashing`](#shardproposerslashing) - - [`ShardWork`](#shardwork) + - [Time parameters](#time-parameters-1) 
+- [Containers](#containers) + - [New Containers](#new-containers) + - [`IntermediateBlockBid`](#intermediateblockbid) + - [`IntermediateBlockBidWithRecipientAddress`](#intermediateblockbidwithrecipientaddress) + - [`ShardedCommitmentsContainer`](#shardedcommitmentscontainer) + - [Extended Containers](#extended-containers) + - [`BeaconState`](#beaconstate) + - [`BeaconBlockBody`](#beaconblockbody) - [Helper functions](#helper-functions) - - [Misc](#misc-3) + - [Block processing](#block-processing) + - [`is_intermediate_block_slot`](#is_intermediate_block_slot) + - [KZG](#kzg) + - [`hash_to_field`](#hash_to_field) + - [`compute_powers`](#compute_powers) + - [`verify_kzg_proof`](#verify_kzg_proof) + - [`verify_degree_proof`](#verify_degree_proof) + - [`block_to_field_elements`](#block_to_field_elements) + - [`roots_of_unity`](#roots_of_unity) + - [`modular_inverse`](#modular_inverse) + - [`eval_poly_at`](#eval_poly_at) - [`next_power_of_two`](#next_power_of_two) - - [`compute_previous_slot`](#compute_previous_slot) - - [`compute_updated_sample_price`](#compute_updated_sample_price) - - [`compute_committee_source_epoch`](#compute_committee_source_epoch) - - [`batch_apply_participation_flag`](#batch_apply_participation_flag) + - [`low_degree_check`](#low_degree_check) + - [`vector_lincomb`](#vector_lincomb) + - [`elliptic_curve_lincomb`](#elliptic_curve_lincomb) - [Beacon state accessors](#beacon-state-accessors) - - [Updated `get_committee_count_per_slot`](#updated-get_committee_count_per_slot) - [`get_active_shard_count`](#get_active_shard_count) - - [`get_shard_proposer_index`](#get_shard_proposer_index) - - [`get_start_shard`](#get_start_shard) - - [`compute_shard_from_committee_index`](#compute_shard_from_committee_index) - - [`compute_committee_index_from_shard`](#compute_committee_index_from_shard) - - [Block processing](#block-processing) - - [Operations](#operations) - - [Extended Attestation processing](#extended-attestation-processing) - - 
[`process_shard_header`](#process_shard_header) - - [`process_shard_proposer_slashing`](#process_shard_proposer_slashing) - - [Epoch transition](#epoch-transition) - - [`process_pending_shard_confirmations`](#process_pending_shard_confirmations) - - [`reset_pending_shard_work`](#reset_pending_shard_work) + - [Block processing](#block-processing-1) + - [`process_block`](#process_block) + - [Block header](#block-header) + - [Intermediate Block Bid](#intermediate-block-bid) + - [Sharded data](#sharded-data) + - [Execution payload](#execution-payload) @@ -101,7 +89,7 @@ The following values are (non-configurable) constants used throughout the specif | - | - | - | | `PRIMITIVE_ROOT_OF_UNITY` | `7` | Primitive root of unity of the BLS12_381 (inner) modulus | | `DATA_AVAILABILITY_INVERSE_CODING_RATE` | `2**1` (= 2) | Factor by which samples are extended for data availability encoding | -| `POINTS_PER_SAMPLE` | `uint64(2**4)` (= 16) | 31 * 16 = 496 bytes | +| `FIELD_ELEMENTS_PER_SAMPLE` | `uint64(2**4)` (= 16) | 31 * 16 = 496 bytes | | `MODULUS` | `0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001` (curve order of BLS12_381) | ## Preset @@ -133,7 +121,7 @@ With the introduction of intermediate blocks the number of slots per epoch is do | - | - | | `G1_SETUP` | Type `List[G1]`. The G1-side trusted setup `[G, G*s, G*s**2....]`; note that the first point is the generator. | | `G2_SETUP` | Type `List[G2]`. The G2-side trusted setup `[G, G*s, G*s**2....]` | -| `ROOT_OF_UNITY` | `pow(PRIMITIVE_ROOT_OF_UNITY, (MODULUS - 1) // int(SAMPLES_PER_BLOB * POINTS_PER_SAMPLE), MODULUS)` | +| `ROOT_OF_UNITY` | `pow(PRIMITIVE_ROOT_OF_UNITY, (MODULUS - 1) // int(SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE), MODULUS)` | ## Configuration @@ -155,6 +143,9 @@ E.g. `ACTIVE_SHARDS` and `SAMPLES_PER_BLOB`. 
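As a sanity check of the root-of-unity construction used for `ROOT_OF_UNITY` above, here is a standalone sketch over a toy field (illustrative parameters only — `TOY_MODULUS = 17` and `TOY_PRIMITIVE_ROOT = 3` stand in for the BLS12-381 constants):

```python
# Toy stand-ins for MODULUS and PRIMITIVE_ROOT_OF_UNITY (NOT the real values).
TOY_MODULUS = 17
TOY_PRIMITIVE_ROOT = 3

def toy_roots_of_unity(order):
    # Same construction as the spec: raise the primitive root to
    # (MODULUS - 1) // order to obtain a generator of the order-n subgroup,
    # then enumerate its powers.
    root_of_unity = pow(TOY_PRIMITIVE_ROOT, (TOY_MODULUS - 1) // order, TOY_MODULUS)
    roots, current = [], 1
    for _ in range(order):
        roots.append(current)
        current = current * root_of_unity % TOY_MODULUS
    return roots

roots = toy_roots_of_unity(4)
assert roots == [1, 13, 16, 4]                          # the four distinct 4th roots of unity mod 17
assert all(pow(w, 4, TOY_MODULUS) == 1 for w in roots)  # each satisfies w**4 == 1
```

The same identity holds for the real constants: `ROOT_OF_UNITY` raised to `SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE` is `1 (mod MODULUS)`, which is what makes the evaluation domain a multiplicative subgroup.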
```python
 class IntermediateBlockBid(Container):
+    slot: Slot
+    parent_block_root: Root
+
     execution_payload_root: Root
 
     sharded_data_commitment_root: Root # Root of the sharded data (only data, not beacon/intermediate block commitments)
@@ -172,6 +163,14 @@ class IntermediateBlockBid(Container):
     signature_s: uint256
 ```
 
+#### `IntermediateBlockBidWithRecipientAddress`
+
+```python
+class IntermediateBlockBidWithRecipientAddress(Container):
+    intermediate_block_bid: Union[None, IntermediateBlockBid]
+    ethereum_address: Bytes20 # Address to receive the block builder bid
+```
+
 #### `ShardedCommitmentsContainer`
 
 ```python
@@ -206,7 +205,7 @@ class BeaconState(bellatrix.BeaconState):
 class BeaconBlockBody(bellatrix.BeaconBlockBody):
     execution_payload: Union[None, ExecutionPayload]
     sharded_commitments_container: Union[None, ShardedCommitmentsContainer]
-    intermediate_block_bid: Union[None, IntermediateBlockBid]
+    intermediate_block_bid_with_recipient_address: Union[None, IntermediateBlockBidWithRecipientAddress]
 ```
 
 ## Helper functions
 
@@ -287,7 +286,7 @@ def roots_of_unity(order: uint64) -> List[BLSFieldElement]:
     root_of_unity = pow(PRIMITIVE_ROOT_OF_UNITY, (MODULUS - 1) // order, MODULUS)
 
     current_root_of_unity = 1
-    for i in range(len(SAMPLES_PER_BLOB * POINTS_PER_SAMPLE)):
+    for i in range(order):
         r.append(current_root_of_unity)
         current_root_of_unity = current_root_of_unity * root_of_unity % MODULUS
     return r
@@ -314,8 +313,9 @@ def eval_poly_at(poly: List[BLSFieldElement], x: BLSFieldElement) -> BLSFieldEle
     """
     Evaluates a polynomial (in evaluation form) at an arbitrary point
     """
-    points_per_blob = SAMPLES_PER_BLOB * POINTS_PER_SAMPLE
-    roots = roots_of_unity(points_per_blob)
+    field_elements_per_blob = SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE
+    roots = roots_of_unity(field_elements_per_blob)
+
     def A(z):
         r = 1
         for w in roots:
@@ -323,7 +323,7 @@ def eval_poly_at(poly: List[BLSFieldElement], x: BLSFieldElement) -> 
BLSFieldEle return r def Aprime(z): - return points_per_blob * pow(z, points_per_blob - 1, MODULUS) + return field_elements_per_blob * pow(z, field_elements_per_blob - 1, MODULUS) r = 0 inverses = [modular_inverse(z - x) for z in roots] @@ -349,9 +349,10 @@ def low_degree_check(commitments: List[KZGCommitment]): If there are 2*N commitments, that means they should lie on a polynomial of degree d = K - N - 1, where K = next_power_of_two(2*N) (The remaining positions are filled with 0, this is to make FFTs usable) + + For details see here: https://notes.ethereum.org/@dankrad/barycentric_low_degree_check """ - # TODO! -- this is currently wrong non power of two lists, need to implement last part of this: https://notes.ethereum.org/N-4E_VbaSy2Iqcug9ke8PQ - assert len(commitments) % 2 == 1 + assert len(commitments) % 2 == 0 N = len(commitments) // 2 r = hash_to_field(commitments) K = next_power_of_two(2 * N) @@ -379,7 +380,7 @@ def low_degree_check(commitments: List[KZGCommitment]): for i in range(K): coefs.append( - (r_to_K - 1) * modular_inverse(K * roots[i * (K - 1) % K] * (r - roots[i])) % MODULUS) for i in range(d + 1): - coefs.append( B(r) * modular_inverse(Bprime(r) * (r - roots[i])) % MODULUS) + coefs[i] = (coefs[i] + B(r) * modular_inverse(Bprime(r) * (r - roots[i]))) % MODULUS assert elliptic_curve_lincomb(commitments, coefs) == bls.Z1() ``` @@ -403,7 +404,7 @@ def vector_lincomb(vectors: List[List[BLSFieldElement]], scalars: List[BLSFieldE ```python def elliptic_curve_lincomb(points: List[KZGCommitment], scalars: List[BLSFieldElement]) -> KZGCommitment: """ - BLS multiscalar multiplication. This function can be optimized using Pippenger's algorithm and variants. + BLS multiscalar multiplication. This function can be optimized using Pippenger's algorithm and variants. This is a non-optimized implementation. 
""" r = bls.Z1() for x, a in zip(points, scalars): @@ -441,7 +442,7 @@ def process_block(state: BeaconState, block: BeaconBlock) -> None: process_operations(state, block.body) process_sync_aggregate(state, block.body.sync_aggregate) - if is_intermediat_block_slot(block.slot): + if is_intermediate_block_slot(block.slot): state.blocks_since_intermediate_block = [] state.blocks_since_intermediate_block.append(block) ``` @@ -473,21 +474,15 @@ def process_block_header(state: BeaconState, block: BeaconBlock) -> None: assert not proposer.slashed ``` -#### Intermediate Block Bid Commitment - -```python -def verify_intermediate_block_bid_commitment(state: BeaconState, block: BeaconBlock) -> None: - if is_intermediate_block_slot(block.slot): - -``` - #### Intermediate Block Bid ```python def verify_intermediate_block_bid(state: BeaconState, block: BeaconBlock) -> None: if is_intermediate_block_slot(block.slot): # Get last intermediate block bid - intermediate_block_bid = state.blocks_since_intermediate_block[-1].body.intermediate_block_bid + assert state.blocks_since_intermediate_block[-1].body.intermediate_block_bid_with_recipient_address.selector == 1 + intermediate_block_bid = state.blocks_since_intermediate_block[-1].body.intermediate_block_bid_with_recipient_address.value.intermediate_block_bid + assert intermediate_block_bid.slot + 1 == block.slot assert intermediate_block_bid.execution_payload_root == hash_tree_root(block.body.execution_payload) @@ -497,9 +492,13 @@ def verify_intermediate_block_bid(state: BeaconState, block: BeaconBlock) -> Non assert intermediate_block_bid.validator_index == block.proposer_index - assert block.body.intermediate_block_bid.selector == 0 # Verify that intermediate block does not contain bid + assert block.body.intermediate_block_bid_with_recipient_address.selector == 0 # Verify that intermediate block does not contain bid else: - assert block.body.intermediate_block_bid.selector == 1 + assert 
block.body.intermediate_block_bid_with_recipient_address.selector == 1 + + intermediate_block_bid = block.body.intermediate_block_bid_with_recipient_address.value.intermediate_block_bid + assert intermediate_block_bid.slot == block.slot + assert intermediate_block_bid.parent_block_root == block.parent_root ``` #### Sharded data @@ -510,23 +509,26 @@ def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: assert block.body.sharded_commitments_container.selector == 1 sharded_commitments_container = block.body.sharded_commitments_container.value + # Verify not too many commitments + assert len(sharded_commitments_container.sharded_commitments) // 2 <= get_active_shard_count(state, get_current_epoch(state)) + # Verify the degree proof r = hash_to_field(sharded_commitments_container.sharded_commitments) r_powers = compute_powers(r, len(sharded_commitments_container.sharded_commitments)) combined_commitment = elliptic_curve_lincomb(sharded_commitments_container.sharded_commitments, r_powers) - verify_degree_proof(combined_commitment, SAMPLES_PER_BLOB * POINTS_PER_SAMPLE, sharded_commitments_container.degree_proof) + verify_degree_proof(combined_commitment, SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE, sharded_commitments_container.degree_proof) # Verify that the 2*N commitments lie on a degree N-1 polynomial low_degree_check(sharded_commitments_container.sharded_commitments) - # Verify that last intermediate block and beacon block (blocks if intermediate blocks were missing) have been included - intermediate_block_chunked = block_to_field_elements(ssz_serialize(state.last_intermediate_block)) - beacon_blocks_chunked = [block_to_field_elements(ssz_serialize(block)) for block in state.blocks_since_intermediate_block] + # Verify that blocks since the last intermediate block have been included + blocks_chunked = [block_to_field_elements(ssz_serialize(block)) for block in state.blocks_since_intermediate_block] block_vectors = [] - for block_chunked in 
[intermediate_block_chunked] + beacon_blocks_chunked: - for i in range(0, len(block_chunked), SAMPLES_PER_BLOB * POINTS_PER_SAMPLE): - block_vectors.append(block_chunked[i:i + SAMPLES_PER_BLOB * POINTS_PER_SAMPLE]) + field_elements_per_blob = SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE + for block_chunked in blocks_chunked: + for i in range(0, len(block_chunked), field_elements_per_blob): + block_vectors.append(block_chunked[i:i + field_elements_per_blob]) number_of_blobs = len(block_vectors) r = hash_to_field([sharded_commitments_container.sharded_commitments[:number_of_blobs], 0]) @@ -538,15 +540,14 @@ def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: y = eval_poly_at(combined_vector, x) verify_kzg_proof(combined_commitment, x, y, block_verification_kzg_proof) + + # Verify that number of sharded data commitments is correctly indicated + assert 2 * (number_of_blobs + included_sharded_data_commitments) == len(sharded_commitments_container.sharded_commitments) + else: assert block.body.sharded_commitments_container.selector == 0 # Only intermediate blocks have sharded commitments ``` -The degree proof works as follows. For a block `B` with length `l` (so `l` values in `[0...l - 1]`, seen as a polynomial `B(X)` which takes these values), -the length proof is the commitment to the polynomial `B(X) * X**(MAX_DEGREE + 1 - l)`, -where `MAX_DEGREE` is the maximum power of `s` available in the setup, which is `MAX_DEGREE = len(G2_SETUP) - 1`. -The goal is to ensure that a proof can only be constructed if `deg(B) < l` (there are not hidden higher-order terms in the polynomial, which would thwart reconstruction). 
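For intuition, the degree bound enforced by `verify_degree_proof` (called from `process_sharded_data` above) can be sketched with plain coefficient lists instead of curve points. This is an illustration only, not spec code; `MAX_DEGREE` stands in for `len(G2_SETUP) - 1`:

```python
# Illustration (not spec code): the degree check behind verify_degree_proof,
# using plain coefficient lists instead of curve points. MAX_DEGREE stands in
# for the largest power of s available in the trusted setup.
MAX_DEGREE = 15

def shift(poly, length):
    # Multiply B(X) by X**(MAX_DEGREE + 1 - length)
    return [0] * (MAX_DEGREE + 1 - length) + list(poly)

def degree(poly):
    nonzero = [i for i, c in enumerate(poly) if c != 0]
    return max(nonzero) if nonzero else -1

B = [3, 1, 4, 1]                                # deg(B) = 3 < length = 4
assert degree(shift(B, 4)) <= MAX_DEGREE        # shifted polynomial fits in the setup
B_hidden = [3, 1, 4, 1, 5]                      # hidden degree-4 term
assert degree(shift(B_hidden, 4)) > MAX_DEGREE  # no commitment to the shift can be formed
```

Multiplying by `X**(MAX_DEGREE + 1 - l)` pushes any term of degree `>= l` beyond the powers of `s` available in the setup, so a proof can only be constructed when `deg(B) < l`.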
- #### Execution payload ```python @@ -568,15 +569,18 @@ def process_execution_payload(state: BeaconState, block: BeaconBlock, execution_ sharded_data_commitments = sharded_commitments_container.sharded_commitments[-sharded_commitments_container.included_sharded_data_commitments:] # Get all unprocessed intermediate block bids - unprocessed_intermediate_block_bids = [] - for block in state.blocks_since_intermediate_block: - unprocessed_intermediate_block_bids.append(block.body.intermediate_block_bid) - + unprocessed_intermediate_block_bid_with_recipient_addresses = [] + for block in state.blocks_since_intermediate_block[1:]: + unprocessed_intermediate_block_bid_with_recipient_addresses.append(block.body.intermediate_block_bid_with_recipient_address.value) # Verify the execution payload is valid + # The execution engine gets two extra payloads: One for the sharded data commitments (these are needed to verify type 3 transactions) + # and one for all so far unprocessed intermediate block bids: + # * The execution engine needs to transfer the balance from the bidder to the proposer. 
+ # * The execution engine needs to deduct data gas fees from the bidder balances assert execution_engine.execute_payload(payload, sharded_data_commitments, - unprocessed_intermediate_block_bids) + unprocessed_intermediate_block_bid_with_recipient_addresses) # Cache execution payload header state.latest_execution_payload_header = ExecutionPayloadHeader( From 178694d9c1eb57b449ce2030af2bf9c4432424a9 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Fri, 14 Jan 2022 00:26:42 +0000 Subject: [PATCH 11/66] Link to latest sharding doc --- specs/sharding/beacon-chain.md | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index d59eb432c6..7a6f125999 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -60,15 +60,13 @@ ## Introduction This document describes the extensions made to the Phase 0 design of The Beacon Chain to support data sharding, -based on the ideas [here](https://hackmd.io/G-Iy5jqyT7CXWEz8Ssos8g) and more broadly [here](https://arxiv.org/abs/1809.09044), +based on the ideas [here](https://notes.ethereum.org/@dankrad/new_sharding) and more broadly [here](https://arxiv.org/abs/1809.09044), using KZG10 commitments to commit to data to remove any need for fraud proofs (and hence, safety-critical synchrony assumptions) in the design. ### Glossary - **Data**: A list of KZG points, to translate a byte string into - **Blob**: Data with commitments and meta-data, like a flattened bundle of L2 transactions. -- **Builder**: Independent actor that builds blobs and bids for proposal slots via fee-paying blob-headers, responsible for availability. -- **Shard proposer**: Validator taking bids from blob builders for shard data opportunity, co-signs with builder to propose the blob. 
## Custom types From a664402e992f663de0858975ee3f159defb05098 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Mon, 17 Jan 2022 19:19:09 +0000 Subject: [PATCH 12/66] Add shard samples + P2P --- specs/sharding/beacon-chain.md | 13 +++ specs/sharding/p2p-interface.md | 163 +++++++------------------------- 2 files changed, 45 insertions(+), 131 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 7a6f125999..24b1980b31 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -188,6 +188,19 @@ class ShardedCommitmentsContainer(Container): block_verification_kzg_proof: KZGCommitment ``` +#### `ShardSample` + +```python +class SignedShardSample(Container): + slot: Slot + row: uint64 + column: uint64 + data: Vector[BLSFieldElement, FIELD_ELEMENTS_PER_SAMPLE] + proof: KZGCommitment + builder: ValidatorIndex + signature: BLSSignature +``` + ### Extended Containers #### `BeaconState` diff --git a/specs/sharding/p2p-interface.md b/specs/sharding/p2p-interface.md index ab32c37fa9..c9d45f0e23 100644 --- a/specs/sharding/p2p-interface.md +++ b/specs/sharding/p2p-interface.md @@ -23,7 +23,6 @@ - ## Introduction The specification of these changes continues in the same format as the network specifications of previous upgrades, and assumes them as pre-requisite. @@ -33,12 +32,10 @@ The adjustments and additions for Shards are outlined in this document. ### Misc -| Name | Value | Description | -| ---- | ----- | ----------- | -| `SHARD_BLOB_SUBNET_COUNT` | `64` | The number of `shard_blob_{subnet_id}` subnets used in the gossipsub protocol. 
| -| `SHARD_TX_PROPAGATION_GRACE_SLOTS` | `4` | The number of slots for a late transaction to propagate | -| `SHARD_TX_PROPAGATION_BUFFER_SLOTS` | `8` | The number of slots for an early transaction to propagate | - +| Name | Value | Description | +| --------------------------- | ----- | -------------------------------------------------------------------------------- | +| `SHARD_ROW_SUBNET_COUNT` | `512` | The number of `shard_row_{subnet_id}` subnets used in the gossipsub protocol. | +| `SHARD_COLUMN_SUBNET_COUNT` | `512` | The number of `shard_column_{subnet_id}` subnets used in the gossipsub protocol. | ## Gossip domain @@ -48,130 +45,34 @@ Following the same scheme as the [Phase0 gossip topics](../phase0/p2p-interface. | Name | Message Type | |---------------------------------|--------------------------| -| `shard_blob_{subnet_id}` | `SignedShardBlob` | -| `shard_blob_header` | `SignedShardBlobHeader` | -| `shard_blob_tx` | `SignedShardBlobHeader` | -| `shard_proposer_slashing` | `ShardProposerSlashing` | +| `shard_row_{subnet_id}` | `SignedShardSample` | +| `shard_column_{subnet_id}` | `SignedShardSample` | The [DAS network specification](./das-p2p.md) defines additional topics. -#### Shard blob subnets - -Shard blob subnets are used by builders to make their blobs available after selection by shard proposers. - -##### `shard_blob_{subnet_id}` - -Shard blob data, in the form of a `SignedShardBlob` is published to the `shard_blob_{subnet_id}` subnets. - -```python -def compute_subnet_for_shard_blob(state: BeaconState, slot: Slot, shard: Shard) -> uint64: - """ - Compute the correct subnet for a shard blob publication. - Note, this mimics compute_subnet_for_attestation(). 
- """ - committee_index = compute_committee_index_from_shard(state, slot, shard) - committees_per_slot = get_committee_count_per_slot(state, compute_epoch_at_slot(slot)) - slots_since_epoch_start = Slot(slot % SLOTS_PER_EPOCH) - committees_since_epoch_start = committees_per_slot * slots_since_epoch_start - - return uint64((committees_since_epoch_start + committee_index) % SHARD_BLOB_SUBNET_COUNT) -``` - -The following validations MUST pass before forwarding the `signed_blob`, -on the horizontal subnet or creating samples for it. Alias `blob = signed_blob.message`. - -- _[IGNORE]_ The `blob` is published 1 slot early or later (with a `MAXIMUM_GOSSIP_CLOCK_DISPARITY` allowance) -- - i.e. validate that `blob.slot <= current_slot + 1` - (a client MAY queue future blobs for propagation at the appropriate slot). -- _[IGNORE]_ The `blob` is new enough to still be processed -- - i.e. validate that `compute_epoch_at_slot(blob.slot) >= get_previous_epoch(state)` -- _[REJECT]_ The shard blob is for an active shard -- - i.e. `blob.shard < get_active_shard_count(state, compute_epoch_at_slot(blob.slot))` -- _[REJECT]_ The `blob.shard` MUST have a committee at the `blob.slot` -- - i.e. validate that `compute_committee_index_from_shard(state, blob.slot, blob.shard)` doesn't raise an error -- _[REJECT]_ The shard blob is for the correct subnet -- - i.e. `compute_subnet_for_shard_blob(state, blob.slot, blob.shard) == subnet_id` -- _[IGNORE]_ The blob is the first blob with valid signature received for the `(blob.proposer_index, blob.slot, blob.shard)` combination. -- _[REJECT]_ The blob is not too large -- the data MUST NOT be larger than the SSZ list-limit, and a client MAY apply stricter bounds. -- _[REJECT]_ The `blob.body.data` MUST NOT contain any point `p >= MODULUS`. Although it is a `uint256`, not the full 256 bit range is valid. -- _[REJECT]_ The blob builder defined by `blob.builder_index` exists and has sufficient balance to back the fee payment. 
-- _[REJECT]_ The blob signature, `signed_blob.signature`, is valid for the aggregate of proposer and builder -- - i.e. `bls.FastAggregateVerify([builder_pubkey, proposer_pubkey], blob_signing_root, signed_blob.signature)`. -- _[REJECT]_ The blob is proposed by the expected `proposer_index` for the blob's `slot` and `shard`, - in the context of the current shuffling (defined by the current node head state and `blob.slot`). - If the `proposer_index` cannot immediately be verified against the expected shuffling, - the blob MAY be queued for later processing while proposers for the blob's branch are calculated -- - in such a case _do not_ `REJECT`, instead `IGNORE` this message. - -#### Global topics - -There are three additional global topics for Sharding. - -- `shard_blob_header`: co-signed headers to be included on-chain and to serve as a signal to the builder to publish full data. -- `shard_blob_tx`: builder-signed headers, also known as "data transaction". -- `shard_proposer_slashing`: slashings of duplicate shard proposals. - -##### `shard_blob_header` - -Shard header data, in the form of a `SignedShardBlobHeader` is published to the global `shard_blob_header` subnet. -Shard blob headers select shard blob bids by builders -and should be timely to ensure builders can publish the full shard blob before subsequent attestations. - -The following validations MUST pass before forwarding the `signed_blob_header` on the network. Alias `header = signed_blob_header.message`. - -- _[IGNORE]_ The `header` is published 1 slot early or later (with a `MAXIMUM_GOSSIP_CLOCK_DISPARITY` allowance) -- - i.e. validate that `header.slot <= current_slot + 1` - (a client MAY queue future headers for propagation at the appropriate slot). -- _[IGNORE]_ The header is new enough to still be processed -- - i.e. validate that `compute_epoch_at_slot(header.slot) >= get_previous_epoch(state)` -- _[REJECT]_ The shard header is for an active shard -- - i.e. 
`header.shard < get_active_shard_count(state, compute_epoch_at_slot(header.slot))` -- _[REJECT]_ The `header.shard` MUST have a committee at the `header.slot` -- - i.e. validate that `compute_committee_index_from_shard(state, header.slot, header.shard)` doesn't raise an error. -- _[IGNORE]_ The header is the first header with valid signature received for the `(header.proposer_index, header.slot, header.shard)` combination. -- _[REJECT]_ The blob builder defined by `blob.builder_index` exists and has sufficient balance to back the fee payment. -- _[REJECT]_ The header signature, `signed_blob_header.signature`, is valid for the aggregate of proposer and builder -- - i.e. `bls.FastAggregateVerify([builder_pubkey, proposer_pubkey], blob_signing_root, signed_blob_header.signature)`. -- _[REJECT]_ The header is proposed by the expected `proposer_index` for the blob's `header.slot` and `header.shard` - in the context of the current shuffling (defined by the current node head state and `header.slot`). - If the `proposer_index` cannot immediately be verified against the expected shuffling, - the blob MAY be queued for later processing while proposers for the blob's branch are calculated -- - in such a case _do not_ `REJECT`, instead `IGNORE` this message. - -##### `shard_blob_tx` - -Shard data-transactions in the form of a `SignedShardBlobHeader` are published to the global `shard_blob_tx` subnet. -These shard blob headers are signed solely by the blob-builder. - -The following validations MUST pass before forwarding the `signed_blob_header` on the network. Alias `header = signed_blob_header.message`. - -- _[IGNORE]_ The header is not propagating more than `SHARD_TX_PROPAGATION_BUFFER_SLOTS` slots ahead of time -- - i.e. validate that `header.slot <= current_slot + SHARD_TX_PROPAGATION_BUFFER_SLOTS`. -- _[IGNORE]_ The header is not propagating later than `SHARD_TX_PROPAGATION_GRACE_SLOTS` slots too late -- - i.e. 
validate that `header.slot + SHARD_TX_PROPAGATION_GRACE_SLOTS >= current_slot` -- _[REJECT]_ The shard header is for an active shard -- - i.e. `header.shard < get_active_shard_count(state, compute_epoch_at_slot(header.slot))` -- _[REJECT]_ The `header.shard` MUST have a committee at the `header.slot` -- - i.e. validate that `compute_committee_index_from_shard(state, header.slot, header.shard)` doesn't raise an error. -- _[IGNORE]_ The header is not stale -- i.e. the corresponding shard proposer has not already selected a header for `(header.slot, header.shard)`. -- _[IGNORE]_ The header is the first header with valid signature received for the `(header.builder_index, header.slot, header.shard)` combination. -- _[REJECT]_ The blob builder, define by `header.builder_index`, exists and has sufficient balance to back the fee payment. -- _[IGNORE]_ The header fee SHOULD be higher than previously seen headers for `(header.slot, header.shard)`, from any builder. - Propagating nodes MAY increase fee increments in case of spam. -- _[REJECT]_ The header signature, `signed_blob_header.signature`, is valid for ONLY the builder -- - i.e. `bls.Verify(builder_pubkey, blob_signing_root, signed_blob_header.signature)`. The signature is not an aggregate with the proposer. -- _[REJECT]_ The header is designated for proposal by the expected `proposer_index` for the blob's `header.slot` and `header.shard` - in the context of the current shuffling (defined by the current node head state and `header.slot`). - If the `proposer_index` cannot immediately be verified against the expected shuffling, - the blob MAY be queued for later processing while proposers for the blob's branch are calculated -- - in such a case _do not_ `REJECT`, instead `IGNORE` this message. - -##### `shard_proposer_slashing` - -Shard proposer slashings, in the form of `ShardProposerSlashing`, are published to the global `shard_proposer_slashing` topic. 
- -The following validations MUST pass before forwarding the `shard_proposer_slashing` on to the network. -- _[IGNORE]_ The shard proposer slashing is the first valid shard proposer slashing received - for the proposer with index `proposer_slashing.proposer_index`. - The `proposer_slashing.slot` and `proposer_slashing.shard` are ignored, there are no repeated or per-shard slashings. -- _[REJECT]_ All of the conditions within `process_shard_proposer_slashing` pass validation. +#### Shard sample subnets + +Shard sample (row/column) subnets are used by builders to make their samples available as part of their intermediate block release after selection by beacon block proposers. + +##### `shard_row_{subnet_id}` + +Shard sample data, in the form of a `SignedShardSample` is published to the `shard_row_{subnet_id}` and `shard_column_{subnet_id}` subnets. + +The following validations MUST pass before forwarding the `sample`. + +- _[IGNORE]_ The `sample` is published 1 slot early or later (with a `MAXIMUM_GOSSIP_CLOCK_DISPARITY` allowance) -- + i.e. validate that `sample.slot <= current_slot + 1` + (a client MAY queue future samples for propagation at the appropriate slot). +- _[IGNORE]_ The `sample` is new enough to still be processed -- + i.e. validate that `compute_epoch_at_slot(sample.slot) >= get_previous_epoch(state)` +- _[REJECT]_ The shard sample is for the correct subnet -- + i.e. `sample.row == subnet_id` for `shard_row_{subnet_id}` and `sample.column == subnet_id` for `shard_column_{subnet_id}` +- _[IGNORE]_ The sample is the first sample with valid signature received for the `(sample.builder, sample.slot, sample.row, sample.column)` combination. +- _[REJECT]_ The `sample.data` MUST NOT contain any point `x >= MODULUS`. Although it is a `uint256`, not the full 256 bit range is valid. +- _[REJECT]_ The validator defined by `sample.builder` exists and is slashable. +- _[REJECT]_ The sample signature, `sample.signature`, is valid for the builder -- + i.e. 
`bls.Verify(builder_pubkey, sample_signing_root, sample.signature)`.
+- _[REJECT]_ The sample is proposed by the expected `builder` for the sample's `slot`.
+  i.e., the beacon block at `sample.slot - 1` according to the node's fork choice contains an `IntermediateBlockBid`
+  with `intermediate_block_bid.validator_index == sample.builder`
+

From e121508b250f794592ea8b645c5cf205adcd730a Mon Sep 17 00:00:00 2001
From: Dankrad Feist 
Date: Tue, 18 Jan 2022 11:56:36 +0000
Subject: [PATCH 13/66] Add validator guide for attestations and reconstruction

---
 specs/sharding/beacon-chain.md  |  33 +++++++-
 specs/sharding/p2p-interface.md |   5 +-
 specs/sharding/validator.md     | 130 ++++++++++++++++++++++++++++++++
 3 files changed, 164 insertions(+), 4 deletions(-)
 create mode 100644 specs/sharding/validator.md

diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md
index 24b1980b31..98166fb773 100644
--- a/specs/sharding/beacon-chain.md
+++ b/specs/sharding/beacon-chain.md
@@ -90,6 +90,12 @@ The following values are (non-configurable) constants used throughout the specif
 | `FIELD_ELEMENTS_PER_SAMPLE` | `uint64(2**4)` (= 16) | 31 * 16 = 496 bytes |
 | `MODULUS` | `0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001` (curve order of BLS12_381) |
 
+### Domain types
+
+| Name | Value |
+| - | - |
+| `DOMAIN_SHARD_SAMPLE` | `DomainType('0x10000000')` |
+
 ## Preset
 
 ### Misc
@@ -188,7 +194,7 @@ class ShardedCommitmentsContainer(Container):
     block_verification_kzg_proof: KZGCommitment
 ```
 
-#### `ShardSample`
+#### `SignedShardSample`
 
 ```python
 class SignedShardSample(Container):
@@ -260,7 +266,30 @@ def verify_kzg_proof(commitment: KZGCommitment, x: BLSFieldElement, y: BLSFieldE
 
     assert (
         bls.Pairing(proof, zero_poly)
-        == bls.Pairing(commitment, G2_SETUP[-degree + 1])
+        == bls.Pairing(commitment.add(G1_SETUP[0].mult(y).neg()), G2_SETUP[0])
     )
 ```
 
+#### `interpolate_poly`
+
+```python
+def interpolate_poly(xs: List[BLSFieldElement], ys: 
List[BLSFieldElement]):
+    """
+    Lagrange interpolation
+    """
+    # TODO!
+```
+
+#### `verify_kzg_multiproof`
+
+```python
+def verify_kzg_multiproof(commitment: KZGCommitment, xs: List[BLSFieldElement], ys: List[BLSFieldElement], proof: KZGCommitment) -> None:
+    zero_poly = elliptic_curve_lincomb(G2_SETUP[:len(xs)], interpolate_poly(xs, [0] * len(ys)))
+    interpolated_poly = elliptic_curve_lincomb(G1_SETUP[:len(xs)], interpolate_poly(xs, ys))
+
+    assert (
+        bls.Pairing(proof, zero_poly)
+        == bls.Pairing(commitment.add(interpolated_poly.neg()), G2_SETUP[0])
+    )
+```
+
diff --git a/specs/sharding/p2p-interface.md b/specs/sharding/p2p-interface.md
index c9d45f0e23..7de55247ed 100644
--- a/specs/sharding/p2p-interface.md
+++ b/specs/sharding/p2p-interface.md
@@ -70,9 +70,10 @@ The following validations MUST pass before forwarding the `sample`.
 - _[IGNORE]_ The sample is the first sample with valid signature received for the `(sample.builder, sample.slot, sample.row, sample.column)` combination.
 - _[REJECT]_ The `sample.data` MUST NOT contain any point `x >= MODULUS`. Although it is a `uint256`, not the full 256 bit range is valid.
 - _[REJECT]_ The validator defined by `sample.builder` exists and is slashable.
-- _[REJECT]_ The sample signature, `sample.signature`, is valid for the builder --
-  i.e. `bls.Verify(builder_pubkey, sample_signing_root, sample.signature)`.
 - _[REJECT]_ The sample is proposed by the expected `builder` for the sample's `slot`.
   i.e., the beacon block at `sample.slot - 1` according to the node's fork choice contains an `IntermediateBlockBid`
   with `intermediate_block_bid.validator_index == sample.builder`
+- _[REJECT]_ The sample signature, `sample.signature`, is valid for the builder --
+  i.e. 
`bls.Verify(builder_pubkey, sample_signing_root, sample.signature)` OR `sample.signature == Bytes96(b"\0" * 96)` AND
+  the sample verification `verify_sample` passes


diff --git a/specs/sharding/validator.md b/specs/sharding/validator.md
new file mode 100644
index 0000000000..87b97d71d0
--- /dev/null
+++ b/specs/sharding/validator.md
@@ -0,0 +1,130 @@
+# Sharding -- Honest Validator
+
+**Notice**: This document is a work-in-progress for researchers and implementers.
+
+## Table of contents
+
+
+
+
+- [Introduction](#introduction)
+- [Prerequisites](#prerequisites)
+- [Constants](#constants)
+  - [Sample counts](#sample-counts)
+- [Helpers](#helpers)
+  - [`get_validator_row_subnets`](#get_validator_row_subnets)
+  - [`get_validator_column_subnets`](#get_validator_column_subnets)
+  - [`reconstruct_polynomial`](#reconstruct_polynomial)
+- [Sample verification](#sample-verification)
+  - [`verify_sample`](#verify_sample)
+- [Beacon chain responsibilities](#beacon-chain-responsibilities)
+  - [Validator assignments](#validator-assignments)
+    - [Attesting](#attesting)
+- [Sample reconstruction](#sample-reconstruction)
+
+
+
+
+## Introduction
+
+This document represents the changes to be made in the code of an "honest validator" to implement the data sharding proposal.
+
+## Prerequisites
+
+This document is an extension of the [Bellatrix -- Honest Validator](../bellatrix/validator.md) guide.
+All behaviors and definitions defined in this document, and documents it extends, carry over unless explicitly noted or overridden.
+
+All terminology, constants, functions, and protocol mechanics defined in the updated Beacon Chain doc of [Sharding](./beacon-chain.md) are requisite for this document and used throughout.
+Please see the related Beacon Chain doc before continuing and use it as a reference throughout. 
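The row/column sample-subnet assignment defined later in this document (see `get_validator_row_subnets`) can be sketched as self-contained Python. This is an illustration under stated assumptions, not spec code: `sha256` stands in for `hash_tree_root`, and the subnet counts are the draft values from the P2P document:

```python
import hashlib

# Draft constants from this patch series; sha256 stands in for hash_tree_root.
SHARD_ROW_SUBNET_COUNT = 512
SHARD_COLUMN_SUBNET_COUNT = 512
VALIDATOR_SAMPLE_ROW_COUNT = 2
VALIDATOR_SAMPLE_COLUMN_COUNT = 2

def sample_subnets(pubkey: bytes, kind: int, count: int, subnet_count: int) -> list:
    # kind 0 selects row subnets, kind 1 column subnets; the hash output is
    # reduced modulo the subnet count to obtain a valid subnet id.
    return [
        int.from_bytes(hashlib.sha256(pubkey + bytes([kind, i])).digest(), "little") % subnet_count
        for i in range(count)
    ]

pubkey = b"\x42" * 48  # placeholder 48-byte BLS pubkey
rows = sample_subnets(pubkey, 0, VALIDATOR_SAMPLE_ROW_COUNT, SHARD_ROW_SUBNET_COUNT)
cols = sample_subnets(pubkey, 1, VALIDATOR_SAMPLE_COLUMN_COUNT, SHARD_COLUMN_SUBNET_COUNT)
assert all(0 <= s < SHARD_ROW_SUBNET_COUNT for s in rows)
assert all(0 <= s < SHARD_COLUMN_SUBNET_COUNT for s in cols)
```

The assignment is deterministic per pubkey, so any node can compute which subnets a given validator is expected to subscribe to.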
+
+## Constants
+
+### Sample counts
+
+| Name | Value |
+| - | - |
+| `VALIDATOR_SAMPLE_ROW_COUNT` | `2` |
+| `VALIDATOR_SAMPLE_COLUMN_COUNT` | `2` |
+
+## Helpers
+
+### `get_validator_row_subnets`
+
+TODO: Currently the subnets are public (i.e. anyone can derive them). This is good for a proof of custody with public verifiability, but bad for validator privacy.
+
+```python
+def get_validator_row_subnets(validator: Validator, epoch: Epoch) -> List[uint64]:
+    return [int.from_bytes(hash_tree_root([validator.pubkey, 0, i]), "little") % SHARD_ROW_SUBNET_COUNT for i in range(VALIDATOR_SAMPLE_ROW_COUNT)]
+```
+
+### `get_validator_column_subnets`
+
+```python
+def get_validator_column_subnets(validator: Validator, epoch: Epoch) -> List[uint64]:
+    return [int.from_bytes(hash_tree_root([validator.pubkey, 1, i]), "little") % SHARD_COLUMN_SUBNET_COUNT for i in range(VALIDATOR_SAMPLE_COLUMN_COUNT)]
+```
+
+### `reconstruct_polynomial`
+
+```python
+def reconstruct_polynomial(samples: List[SignedShardSample]) -> List[SignedShardSample]:
+    """
+    Reconstructs one full row/column from at least 1/2 of the samples
+    """
+
+```
+
+## Sample verification
+
+### `verify_sample`
+
+```python
+def verify_sample(state: BeaconState, block: BeaconBlock, sample: SignedShardSample) -> None:
+    assert sample.row < 2 * get_active_shard_count(state, compute_epoch_at_slot(block.slot))
+    assert sample.column < 2 * SAMPLES_PER_BLOB
+    assert block.slot == sample.slot
+
+    # Verify builder signature.
+    # TODO: We should probably not do this. This should only be done by p2p to verify samples *before* intermediate block is in
+    # builder = state.validators[signed_block.message.proposer_index]
+    # signing_root = compute_signing_root(sample, get_domain(state, DOMAIN_SHARD_SAMPLE))
+    # assert bls.Verify(sample.builder, signing_root, sample.signature)
+
+    # Verify KZG proof
+    verify_kzg_multiproof(block.body.sharded_commitments_container.value.sharded_commitments[sample.row],
+                          ??? # TODO! 
Compute the roots of unity for this sample
+                          sample.data,
+                          sample.proof)
+```
+
+# Beacon chain responsibilities
+
+## Validator assignments
+
+### Attesting
+
+Every attester is assigned `VALIDATOR_SAMPLE_ROW_COUNT` rows and `VALIDATOR_SAMPLE_COLUMN_COUNT` columns of shard samples. As part of their validator duties, they should subscribe to the subnets given by `get_validator_row_subnets` and `get_validator_column_subnets`, for the whole epoch.
+
+A row or column is *available* for a `slot` if at least half of the total number of samples were received on the subnet and passed `verify_sample`. Otherwise it is called unavailable.
+
+If a validator is assigned to an attestation at slot `attestation_slot` and had their previous attestation duty at `previous_attestation_slot`, then they should only attest under the following conditions:
+
+ * For all intermediate blocks `block` with `previous_attestation_slot < block.slot <= attestation_slot`: All sample rows and columns assigned to the validator were available.
+
+If this condition is not fulfilled, then the validator should instead attest to the last block for which the condition holds.
+
+This leads to the security property that a chain that is not fully available cannot have more than 1/16th of all validators voting for it. TODO: This claim is for an "infinite number" of validators. Compute the concrete security due to sampling bias.
+
+# Sample reconstruction
+
+A validator that has received enough samples of a row or column to mark it as available should reconstruct all samples in that row/column (if they aren't all available already). The function `reconstruct_polynomial` gives an example implementation for this.
+
+Once they have run the reconstruction function, they should distribute the samples that they reconstructed on all pubsub topics that
+the local node is subscribed to, if they have not already received that sample on that topic. 
As an example:
+
+ * The validator is subscribed to row `2` and column `5`
+ * The sample `(row, column) = (2, 5)` is missing in the column `5` pubsub
+ * After they have reconstructed row `2`, the validator should send the sample `(2, 5)` on to the row `2` pubsub (if it was missing) as well as the column `5` pubsub.
+
+TODO: We need to verify the total complexity of doing this and make sure this does not cause too much load on a validator
+
+TODO: Compute what the minimum number of validators online would be that guarantees reconstruction of all samples

From 5a2c45e93317ffb0e2a5cadeb9bc277b26e94566 Mon Sep 17 00:00:00 2001
From: dankrad 
Date: Fri, 21 Jan 2022 10:50:52 +0000
Subject: [PATCH 14/66] Update specs/sharding/beacon-chain.md

Co-authored-by: terence tsao 
---
 specs/sharding/beacon-chain.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md
index 98166fb773..8696e83a3f 100644
--- a/specs/sharding/beacon-chain.md
+++ b/specs/sharding/beacon-chain.md
@@ -526,7 +526,7 @@ def verify_intermediate_block_bid(state: BeaconState, block: BeaconBlock) -> Non
 
     assert intermediate_block_bid.execution_payload_root == hash_tree_root(block.body.execution_payload)
 
-    assert intermediate_block_bid.sharded_data_commitment_number == block.body.sharded_commitments_container.included_sharded_data_commitments
+    assert intermediate_block_bid.sharded_data_commitment_count == block.body.sharded_commitments_container.included_sharded_data_commitments
 
     assert intermediate_block_bid.sharded_data_commitment_root == hash_tree_root(block.body.sharded_commitments_container.sharded_commitments[-intermediate_block_bid.sharded_data_commitments:])
 

From 6ad009495466a1f4641928f25719a53070f5d5a0 Mon Sep 17 00:00:00 2001
From: dankrad 
Date: Fri, 21 Jan 2022 10:52:38 +0000
Subject: [PATCH 15/66] Update specs/sharding/beacon-chain.md

Co-authored-by: vbuterin 
---
 specs/sharding/beacon-chain.md | 1 +
 1 file changed, 1 
insertion(+) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 8696e83a3f..ec1f1bb4ad 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -256,6 +256,7 @@ def compute_powers(x: BLSFieldElement, n: uint64) -> List[BLSFieldElement]: for i in range(n): powers.append(BLSFieldElement(current_power)) current_power = current_power * int(x) % MODULUS + return powers ``` #### `verify_kzg_proof` From a1d809012767a7df2754b7d49fe933fae902b56e Mon Sep 17 00:00:00 2001 From: dankrad Date: Fri, 21 Jan 2022 10:58:49 +0000 Subject: [PATCH 16/66] Update specs/sharding/beacon-chain.md Co-authored-by: vbuterin --- specs/sharding/beacon-chain.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index ec1f1bb4ad..6d8538b5ce 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -560,7 +560,7 @@ def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: verify_degree_proof(combined_commitment, SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE, sharded_commitments_container.degree_proof) - # Verify that the 2*N commitments lie on a degree N-1 polynomial + # Verify that the 2*N commitments lie on a degree < N polynomial low_degree_check(sharded_commitments_container.sharded_commitments) # Verify that blocks since the last intermediate block have been included From 76640ede88b7cf6536dcf0c2b395bacafc131244 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Fri, 21 Jan 2022 14:12:26 +0000 Subject: [PATCH 17/66] Refactor polynomial operations into separate file --- specs/sharding/beacon-chain.md | 237 +---------------- specs/sharding/polynomial-commitments.md | 325 +++++++++++++++++++++++ 2 files changed, 329 insertions(+), 233 deletions(-) create mode 100644 specs/sharding/polynomial-commitments.md diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 6d8538b5ce..91a0ea4beb 100644 
--- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -68,14 +68,6 @@ using KZG10 commitments to commit to data to remove any need for fraud proofs (a - **Data**: A list of KZG points, to translate a byte string into - **Blob**: Data with commitments and meta-data, like a flattened bundle of L2 transactions. -## Custom types - -We define the following Python custom types for type hinting and readability: - -| Name | SSZ equivalent | Description | -| - | - | - | -| `KZGCommitment` | `Bytes48` | A G1 curve point | -| `BLSFieldElement` | `uint256` | A number `x` in the range `0 <= x < MODULUS` | ## Constants @@ -85,10 +77,7 @@ The following values are (non-configurable) constants used throughout the specif | Name | Value | Notes | | - | - | - | -| `PRIMITIVE_ROOT_OF_UNITY` | `7` | Primitive root of unity of the BLS12_381 (inner) modulus | -| `DATA_AVAILABILITY_INVERSE_CODING_RATE` | `2**1` (= 2) | Factor by which samples are extended for data availability encoding | | `FIELD_ELEMENTS_PER_SAMPLE` | `uint64(2**4)` (= 16) | 31 * 16 = 496 bytes | -| `MODULUS` | `0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001` (curve order of BLS12_381) | ### Domain types @@ -119,13 +108,9 @@ With the introduction of intermediate blocks the number of slots per epoch is do | - | - | - | | `SAMPLES_PER_BLOB` | `uint64(2**9)` (= 512) | 248 * 512 = 126,976 bytes | -### Precomputed size verification points +### Precomputed root of unity -| Name | Value | -| - | - | -| `G1_SETUP` | Type `List[G1]`. The G1-side trusted setup `[G, G*s, G*s**2....]`; note that the first point is the generator. | -| `G2_SETUP` | Type `List[G2]`. 
The G2-side trusted setup `[G, G*s, G*s**2....]` | -| `ROOT_OF_UNITY` | `pow(PRIMITIVE_ROOT_OF_UNITY, (MODULUS - 1) // int(SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE), MODULUS)` | +| `ROOT_OF_UNITY` | `pow(PRIMITIVE_ROOT_OF_UNITY, (BLS_MODULUS - 1) // int(SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE), BLS_MODULUS)` | ## Configuration @@ -138,7 +123,6 @@ E.g. `ACTIVE_SHARDS` and `SAMPLES_PER_BLOB`. | - | - | :-: | :-: | | `SECONDS_PER_SLOT` | `uint64(8)` | seconds | 8 seconds | - ## Containers ### New Containers @@ -238,221 +222,6 @@ def is_intermediate_block_slot(slot: Slot): return slot % 2 == 1 ``` -### KZG - -#### `hash_to_field` - -```python -def hash_to_field(x: Container): - return int.from_bytes(hash_tree_root(x), "little") % MODULUS -``` - -#### `compute_powers` - -```python -def compute_powers(x: BLSFieldElement, n: uint64) -> List[BLSFieldElement]: - current_power = 1 - powers = [] - for i in range(n): - powers.append(BLSFieldElement(current_power)) - current_power = current_power * int(x) % MODULUS - return powers -``` - -#### `verify_kzg_proof` - -```python -def verify_kzg_proof(commitment: KZGCommitment, x: BLSFieldElement, y: BLSFieldElement, proof: KZGCommitment) -> None: - zero_poly = G2_SETUP[1].add(G2_SETUP[0].mult(x).neg()) - - assert ( - bls.Pairing(proof, zero_poly) - == bls.Pairing(commitment.add(G1_SETUP[0].mult(y).neg), G2_SETUP[0]) - ) -``` - -#### `interpolate_poly` - -```python -def interpolate_poly(xs: List[BLSFieldElement], ys: List[BLSFieldElement]): - """ - Lagrange interpolation - """ - # TODO! 
-``` - -#### `verify_kzg_multiproof` - -```python -def verify_kzg_multiproof(commitment: KZGCommitment, xs: List[BLSFieldElement], ys: List[BLSFieldElement], proof: KZGCommitment) -> None: - zero_poly = elliptic_curve_lincomb(G2_SETUP[:len(xs)], interpolate_poly(xs, [0] * len(ys))) - interpolated_poly = elliptic_curve_lincomb(G2_SETUP[:len(xs)], interpolate_poly(xs, ys)) - - assert ( - bls.Pairing(proof, zero_poly) - == bls.Pairing(commitment.add(interpolated_poly.neg()), G2_SETUP[0]) - ) -``` - -#### `verify_degree_proof` - -```python -def verify_degree_proof(commitment: KZGCommitment, degree: uint64, proof: KZGCommitment): - """ - Verifies that the commitment is of polynomial degree <= degree. - """ - - assert ( - bls.Pairing(proof, G2_SETUP[0]) - == bls.Pairing(commitment, G2_SETUP[-degree - 1]) - ) -``` - -#### `block_to_field_elements` - -```python -def block_to_field_elements(block: bytes) -> List[BLSFieldElement]: - """ - Slices a block into 31 byte chunks that can fit into field elements - """ - sliced_block = [block[i:i + 31] for i in range(0, len(bytes), 31)] - return [BLSFieldElement(int.from_bytes(x, "little")) for x in sliced_block] -``` - -#### `roots_of_unity` - -```python -def roots_of_unity(order: uint64) -> List[BLSFieldElement]: - r = [] - root_of_unity = pow(PRIMITIVE_ROOT_OF_UNITY, (MODULUS - 1) // order, MODULUS) - - current_root_of_unity = 1 - for i in range(len(SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE)): - r.append(current_root_of_unity) - current_root_of_unity = current_root_of_unity * root_of_unity % MODULUS - return r -``` - -#### `modular_inverse` - -```python -def modular_inverse(a): - assert(a == 0): - lm, hm = 1, 0 - low, high = a % MODULUS, MODULUS - while low > 1: - r = high // low - nm, new = hm - lm * r, high - low * r - lm, low, hm, high = nm, new, lm, low - return lm % MODULUS -``` - -#### `eval_poly_at` - -```python -def eval_poly_at(poly: List[BLSFieldElement], x: BLSFieldElement) -> BLSFieldElement: - """ - Evaluates a 
polynomial (in evaluation form) at an arbitrary point - """ - field_elements_per_blob = SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE - roots = roots_of_unity(field_elements_per_blob) - - def A(z): - r = 1 - for w in roots: - r = r * (z - w) % MODULUS - return r - - def Aprime(z): - return field_elements_per_blob * pow(z, field_elements_per_blob - 1, MODULUS) - - r = 0 - inverses = [modular_inverse(z - x) for z in roots] - for i, x in enumerate(inverses): - r += f[i] * modular_inverse(Aprime(roots[i])) * x % self.MODULUS - r = r * A(x) % self.MODULUS - return r -``` - -#### `next_power_of_two` - -```python -def next_power_of_two(x: int) -> int: - return 2 ** ((x - 1).bit_length()) -``` - -#### `low_degree_check` - -```python -def low_degree_check(commitments: List[KZGCommitment]): - """ - Checks that the commitments are on a low-degree polynomial - If there are 2*N commitments, that means they should lie on a polynomial - of degree d = K - N - 1, where K = next_power_of_two(2*N) - (The remaining positions are filled with 0, this is to make FFTs usable) - - For details see here: https://notes.ethereum.org/@dankrad/barycentric_low_degree_check - """ - assert len(commitments) % 2 == 0 - N = len(commitments) // 2 - r = hash_to_field(commitments) - K = next_power_of_two(2 * N) - d = K - N - 1 - r_to_K = pow(r, N, K) - roots = roots_of_unity(K) - - # For an efficient implementation, B and Bprime should be precomputed - def B(z): - r = 1 - for w in roots[:d + 1]: - r = r * (z - w) % MODULUS - return r - - def Bprime(z): - r = 0 - for i in range(d + 1): - m = 1 - for w in roots[:i] + roots[i+1:d + 1]: - m = m * (z - w) % MODULUS - r = (r + M) % MODULUS - return r - - coefs = [] - for i in range(K): - coefs.append( - (r_to_K - 1) * modular_inverse(K * roots[i * (K - 1) % K] * (r - roots[i])) % MODULUS) - for i in range(d + 1): - coefs[i] = (coefs[i] + B(r) * modular_inverse(Bprime(r) * (r - roots[i]))) % MODULUS - - assert elliptic_curve_lincomb(commitments, coefs) == 
bls.Z1() -``` - -#### `vector_lincomb` - -```python -def vector_lincomb(vectors: List[List[BLSFieldElement]], scalars: List[BLSFieldElement]) -> List[BLSFieldElement]: - """ - Compute a linear combination of field element vectors - """ - r = [0 for i in len(vectors[0])] - for v, a in zip(vectors, scalars): - for i, x in enumerate(v): - r[i] = (r[i] + a * x) % MODULUS - return [BLSFieldElement(x) for x in r] -``` - -#### `elliptic_curve_lincomb` - -```python -def elliptic_curve_lincomb(points: List[KZGCommitment], scalars: List[BLSFieldElement]) -> KZGCommitment: - """ - BLS multiscalar multiplication. This function can be optimized using Pippenger's algorithm and variants. This is a non-optimized implementation. - """ - r = bls.Z1() - for x, a in zip(points, scalars): - r = r.add(x.mult(a)) - return r -``` - ### Beacon state accessors #### `get_active_shard_count` @@ -466,6 +235,8 @@ def get_active_shard_count(state: BeaconState, epoch: Epoch) -> uint64: return ACTIVE_SHARDS ``` +## Beacon chain state transition function + ### Block processing #### `process_block` diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md new file mode 100644 index 0000000000..26aff4c977 --- /dev/null +++ b/specs/sharding/polynomial-commitments.md @@ -0,0 +1,325 @@ +# Sharding -- Polynomial Commitments + +**Notice**: This document is a work-in-progress for researchers and implementers. 
+ +## Table of contents + + + + + +- [Introduction](#introduction) + - [Glossary](#glossary) +- [Custom types](#custom-types) +- [Constants](#constants) + - [Misc](#misc) +- [Preset](#preset) + - [Misc](#misc-1) + - [Time parameters](#time-parameters) + - [Shard blob samples](#shard-blob-samples) + - [Precomputed size verification points](#precomputed-size-verification-points) +- [Configuration](#configuration) + - [Time parameters](#time-parameters-1) +- [Containers](#containers) + - [New Containers](#new-containers) + - [`IntermediateBlockBid`](#intermediateblockbid) + - [`IntermediateBlockBidWithRecipientAddress`](#intermediateblockbidwithrecipientaddress) + - [`ShardedCommitmentsContainer`](#shardedcommitmentscontainer) + - [Extended Containers](#extended-containers) + - [`BeaconState`](#beaconstate) + - [`BeaconBlockBody`](#beaconblockbody) +- [Helper functions](#helper-functions) + - [Block processing](#block-processing) + - [`is_intermediate_block_slot`](#is_intermediate_block_slot) + - [KZG](#kzg) + - [`hash_to_field`](#hash_to_field) + - [`compute_powers`](#compute_powers) + - [`verify_kzg_proof`](#verify_kzg_proof) + - [`verify_degree_proof`](#verify_degree_proof) + - [`block_to_field_elements`](#block_to_field_elements) + - [`roots_of_unity`](#roots_of_unity) + - [`modular_inverse`](#modular_inverse) + - [`eval_poly_at`](#eval_poly_at) + - [`next_power_of_two`](#next_power_of_two) + - [`low_degree_check`](#low_degree_check) + - [`vector_lincomb`](#vector_lincomb) + - [`elliptic_curve_lincomb`](#elliptic_curve_lincomb) + - [Beacon state accessors](#beacon-state-accessors) + - [`get_active_shard_count`](#get_active_shard_count) + - [Block processing](#block-processing-1) + - [`process_block`](#process_block) + - [Block header](#block-header) + - [Intermediate Block Bid](#intermediate-block-bid) + - [Sharded data](#sharded-data) + - [Execution payload](#execution-payload) + + + + + +## Introduction + +This document specifies basic polynomial operations and 
KZG polynomial commitment operations as they are needed for the sharding specification. The implementations are not optimized for performance, but for readability. All practical implementations should optimize the polynomial operations, and hints on the best known algorithms for these operations are included below. + +## Constants + +### BLS Field + +| Name | Value | Notes | +| - | - | - | +| `BLS_MODULUS` | `0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001` (curve order of BLS12_381) | +| `PRIMITIVE_ROOT_OF_UNITY` | `7` | Primitive root of unity of the BLS12_381 (inner) modulus | + +### KZG Trusted setup + +| Name | Value | +| - | - | +| `G1_SETUP` | Type `List[G1]`. The G1-side trusted setup `[G, G*s, G*s**2....]`; note that the first point is the generator. | +| `G2_SETUP` | Type `List[G2]`. The G2-side trusted setup `[G, G*s, G*s**2....]` | + +## Custom types + +We define the following Python custom types for type hinting and readability: + +| Name | SSZ equivalent | Description | +| - | - | - | +| `KZGCommitment` | `Bytes48` | A G1 curve point | +| `BLSFieldElement` | `uint256` | A number `x` in the range `0 <= x < BLS_MODULUS` | +| `BLSPolynomialCoefficients` | `List[BLSFieldElement]` | A polynomial over the BLS field, given in coefficient form | +| `BLSPolynomialEvaluations` | `List[BLSFieldElement]` | A polynomial over the BLS field, given in evaluation form | + +## Helper functions + +#### `next_power_of_two` + +```python +def next_power_of_two(x: int) -> int: + return 2 ** ((x - 1).bit_length()) +``` + +## Field operations + +### Generic field operations + +#### `modular_inverse` + +```python +def modular_inverse(a): + assert(a == 0): + lm, hm = 1, 0 + low, high = a % BLS_MODULUS, BLS_MODULUS + while low > 1: + r = high // low + nm, new = hm - lm * r, high - low * r + lm, low, hm, high = nm, new, lm, low + return lm % BLS_MODULUS +``` + +#### `roots_of_unity` + +```python +def roots_of_unity(order: uint64) -> 
List[BLSFieldElement]: + r = [] + root_of_unity = pow(PRIMITIVE_ROOT_OF_UNITY, (BLS_MODULUS - 1) // order, BLS_MODULUS) + + current_root_of_unity = 1 + for i in range(len(SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE)): + r.append(current_root_of_unity) + current_root_of_unity = current_root_of_unity * root_of_unity % BLS_MODULUS + return r +``` + +### Field helper functions + +#### `compute_powers` + +```python +def compute_powers(x: BLSFieldElement, n: uint64) -> List[BLSFieldElement]: + current_power = 1 + powers = [] + for i in range(n): + powers.append(BLSFieldElement(current_power)) + current_power = current_power * int(x) % BLS_MODULUS + return powers +``` + +#### `low_degree_check` + +```python +def low_degree_check(commitments: List[KZGCommitment]): + """ + Checks that the commitments are on a low-degree polynomial + If there are 2*N commitments, that means they should lie on a polynomial + of degree d = K - N - 1, where K = next_power_of_two(2*N) + (The remaining positions are filled with 0, this is to make FFTs usable) + + For details see here: https://notes.ethereum.org/@dankrad/barycentric_low_degree_check + """ + assert len(commitments) % 2 == 0 + N = len(commitments) // 2 + r = hash_to_field(commitments) + K = next_power_of_two(2 * N) + d = K - N - 1 + r_to_K = pow(r, N, K) + roots = roots_of_unity(K) + + # For an efficient implementation, B and Bprime should be precomputed + def B(z): + r = 1 + for w in roots[:d + 1]: + r = r * (z - w) % BLS_MODULUS + return r + + def Bprime(z): + r = 0 + for i in range(d + 1): + m = 1 + for w in roots[:i] + roots[i+1:d + 1]: + m = m * (z - w) % BLS_MODULUS + r = (r + M) % BLS_MODULUS + return r + + coefs = [] + for i in range(K): + coefs.append( - (r_to_K - 1) * modular_inverse(K * roots[i * (K - 1) % K] * (r - roots[i])) % BLS_MODULUS) + for i in range(d + 1): + coefs[i] = (coefs[i] + B(r) * modular_inverse(Bprime(r) * (r - roots[i]))) % BLS_MODULUS + + assert elliptic_curve_lincomb(commitments, coefs) == bls.Z1() 
+``` + +#### `vector_lincomb` + +```python +def vector_lincomb(vectors: List[List[BLSFieldElement]], scalars: List[BLSFieldElement]) -> List[BLSFieldElement]: + """ + Compute a linear combination of field element vectors + """ + r = [0 for i in len(vectors[0])] + for v, a in zip(vectors, scalars): + for i, x in enumerate(v): + r[i] = (r[i] + a * x) % BLS_MODULUS + return [BLSFieldElement(x) for x in r] +``` + +#### `block_to_field_elements` + +```python +def block_to_field_elements(block: bytes) -> List[BLSFieldElement]: + """ + Slices a block into 31 byte chunks that can fit into field elements + """ + sliced_block = [block[i:i + 31] for i in range(0, len(bytes), 31)] + return [BLSFieldElement(int.from_bytes(x, "little")) for x in sliced_block] +``` + + +## Polynomial operations + +#### `interpolate_poly` + +```python +def interpolate_poly(xs: List[BLSFieldElement], ys: List[BLSFieldElement]): + """ + Lagrange interpolation + """ + # TODO! +``` + +#### `eval_poly_at` + +```python +def eval_poly_at(poly: List[BLSFieldElement], x: BLSFieldElement) -> BLSFieldElement: + """ + Evaluates a polynomial (in evaluation form) at an arbitrary point + """ + field_elements_per_blob = SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE + roots = roots_of_unity(field_elements_per_blob) + + def A(z): + r = 1 + for w in roots: + r = r * (z - w) % BLS_MODULUS + return r + + def Aprime(z): + return field_elements_per_blob * pow(z, field_elements_per_blob - 1, BLS_MODULUS) + + r = 0 + inverses = [modular_inverse(z - x) for z in roots] + for i, x in enumerate(inverses): + r += f[i] * modular_inverse(Aprime(roots[i])) * x % self.BLS_MODULUS + r = r * A(x) % self.BLS_MODULUS + return r +``` + +# KZG Operations + +We are using the KZG10 polynomial commitment scheme (Kate, Zaverucha and Goldberg, 2010: https://www.iacr.org/archive/asiacrypt2010/6477178/6477178.pdf). 
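The pairing checks in the functions below rest on a plain polynomial identity: `p(x) == y` exactly when `(X - x)` divides `p(X) - y`, and the KZG proof is a commitment to the quotient `q(X) = (p(X) - y) / (X - x)`. A pure-Python sketch (illustrative only, not part of the patch) of the check that `verify_kzg_proof` performs "in the exponent":

```python
# Illustrative sketch, not part of the patch: the identity behind verify_kzg_proof.
# p(x) == y exactly when (X - x) divides p(X) - y; the KZG proof is a commitment
# to the quotient q(X) = (p(X) - y) / (X - x), and the pairing check verifies
# q(s) * (s - x) == p(s) - y at the secret setup point s "in the exponent".
BLS_MODULUS = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001

def div_by_linear(poly, x):
    """Synthetic division of poly (coefficients, lowest degree first) by (X - x)."""
    quotient = []
    acc = 0
    for coef in reversed(poly):  # process highest-degree coefficients first
        acc = (acc * x + coef) % BLS_MODULUS
        quotient.append(acc)
    remainder = quotient.pop()   # the final accumulator value equals poly(x)
    return quotient[::-1], remainder

def eval_at(poly, z):
    return sum(c * pow(z, i, BLS_MODULUS) for i, c in enumerate(poly)) % BLS_MODULUS

p = [3, 1, 4, 1, 5]              # p(X) = 3 + X + 4*X**2 + X**3 + 5*X**4
x = 123456789                    # evaluation point
y = eval_at(p, x)

p_minus_y = [(p[0] - y) % BLS_MODULUS] + p[1:]
q, remainder = div_by_linear(p_minus_y, x)
assert remainder == 0            # (X - x) divides p(X) - y, so the quotient exists

s = 987654321                    # stand-in for the (unknown) trusted-setup secret
assert eval_at(q, s) * (s - x) % BLS_MODULUS == (eval_at(p, s) - y) % BLS_MODULUS
```

The pairing `e(proof, [s - x]_2) == e(commitment - [y]_1, G_2)` is exactly the last assertion, evaluated at the setup secret `s` without anyone knowing `s`.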
+ +## Elliptic curve helper functions + +#### `elliptic_curve_lincomb` + +```python +def elliptic_curve_lincomb(points: List[KZGCommitment], scalars: List[BLSFieldElement]) -> KZGCommitment: + """ + BLS multiscalar multiplication. This function can be optimized using Pippenger's algorithm and variants. This is a non-optimized implementation. + """ + r = bls.Z1() + for x, a in zip(points, scalars): + r = r.add(x.mult(a)) + return r +``` + +## Hash to field + +#### `hash_to_field` + +```python +def hash_to_field(x: Container): + return int.from_bytes(hash_tree_root(x), "little") % BLS_MODULUS +``` + +## KZG operations + + +#### `verify_kzg_proof` + +```python +def verify_kzg_proof(commitment: KZGCommitment, x: BLSFieldElement, y: BLSFieldElement, proof: KZGCommitment) -> None: + zero_poly = G2_SETUP[1].add(G2_SETUP[0].mult(x).neg()) + + assert ( + bls.Pairing(proof, zero_poly) + == bls.Pairing(commitment.add(G1_SETUP[0].mult(y).neg()), G2_SETUP[0]) + ) +``` + + +#### `verify_kzg_multiproof` + +```python +def verify_kzg_multiproof(commitment: KZGCommitment, xs: List[BLSFieldElement], ys: List[BLSFieldElement], proof: KZGCommitment) -> None: + zero_poly = elliptic_curve_lincomb(G2_SETUP[:len(xs)], interpolate_poly(xs, [0] * len(ys))) + interpolated_poly = elliptic_curve_lincomb(G2_SETUP[:len(xs)], interpolate_poly(xs, ys)) + + assert ( + bls.Pairing(proof, zero_poly) + == bls.Pairing(commitment.add(interpolated_poly.neg()), G2_SETUP[0]) + ) +``` + +#### `verify_degree_proof` + +```python +def verify_degree_proof(commitment: KZGCommitment, degree: uint64, proof: KZGCommitment): + """ + Verifies that the commitment is of polynomial degree <= degree. 
+ """ + + assert ( + bls.Pairing(proof, G2_SETUP[0]) + == bls.Pairing(commitment, G2_SETUP[-degree - 1]) + ) +``` \ No newline at end of file From 5705a1c8ac4d9d30aebafb914e23659cf765d2a0 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Fri, 21 Jan 2022 15:31:51 +0000 Subject: [PATCH 18/66] Add missing polynomial functions --- specs/sharding/beacon-chain.md | 30 ++-- specs/sharding/p2p-interface.md | 11 +- specs/sharding/polynomial-commitments.md | 182 ++++++++++++++--------- specs/sharding/validator.md | 25 ++-- 4 files changed, 137 insertions(+), 111 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 91a0ea4beb..9bc4e32625 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -8,16 +8,17 @@ + - [Introduction](#introduction) - [Glossary](#glossary) -- [Custom types](#custom-types) - [Constants](#constants) - [Misc](#misc) + - [Domain types](#domain-types) - [Preset](#preset) - [Misc](#misc-1) - [Time parameters](#time-parameters) - [Shard blob samples](#shard-blob-samples) - - [Precomputed size verification points](#precomputed-size-verification-points) + - [Precomputed root of unity](#precomputed-root-of-unity) - [Configuration](#configuration) - [Time parameters](#time-parameters-1) - [Containers](#containers) @@ -25,27 +26,16 @@ - [`IntermediateBlockBid`](#intermediateblockbid) - [`IntermediateBlockBidWithRecipientAddress`](#intermediateblockbidwithrecipientaddress) - [`ShardedCommitmentsContainer`](#shardedcommitmentscontainer) + - [`SignedShardSample`](#signedshardsample) - [Extended Containers](#extended-containers) - [`BeaconState`](#beaconstate) - [`BeaconBlockBody`](#beaconblockbody) - [Helper functions](#helper-functions) - [Block processing](#block-processing) - [`is_intermediate_block_slot`](#is_intermediate_block_slot) - - [KZG](#kzg) - - [`hash_to_field`](#hash_to_field) - - [`compute_powers`](#compute_powers) - - [`verify_kzg_proof`](#verify_kzg_proof) - - 
[`verify_degree_proof`](#verify_degree_proof) - - [`block_to_field_elements`](#block_to_field_elements) - - [`roots_of_unity`](#roots_of_unity) - - [`modular_inverse`](#modular_inverse) - - [`eval_poly_at`](#eval_poly_at) - - [`next_power_of_two`](#next_power_of_two) - - [`low_degree_check`](#low_degree_check) - - [`vector_lincomb`](#vector_lincomb) - - [`elliptic_curve_lincomb`](#elliptic_curve_lincomb) - [Beacon state accessors](#beacon-state-accessors) - [`get_active_shard_count`](#get_active_shard_count) +- [Beacon chain state transition function](#beacon-chain-state-transition-function) - [Block processing](#block-processing-1) - [`process_block`](#process_block) - [Block header](#block-header) @@ -325,7 +315,7 @@ def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: assert len(sharded_commitments_container.sharded_commitments) // 2 <= get_active_shard_count(state, get_current_epoch(state)) # Verify the degree proof - r = hash_to_field(sharded_commitments_container.sharded_commitments) + r = hash_to_bls_field(sharded_commitments_container.sharded_commitments, 0) r_powers = compute_powers(r, len(sharded_commitments_container.sharded_commitments)) combined_commitment = elliptic_curve_lincomb(sharded_commitments_container.sharded_commitments, r_powers) @@ -335,7 +325,7 @@ def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: low_degree_check(sharded_commitments_container.sharded_commitments) # Verify that blocks since the last intermediate block have been included - blocks_chunked = [block_to_field_elements(ssz_serialize(block)) for block in state.blocks_since_intermediate_block] + blocks_chunked = [bytes_to_field_elements(ssz_serialize(block)) for block in state.blocks_since_intermediate_block] block_vectors = [] field_elements_per_blob = SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE for block_chunked in blocks_chunked: @@ -343,13 +333,13 @@ def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: 
block_vectors.append(block_chunked[i:i + field_elements_per_blob]) number_of_blobs = len(block_vectors) - r = hash_to_field([sharded_commitments_container.sharded_commitments[:number_of_blobs], 0]) - x = hash_to_field([sharded_commitments_container.sharded_commitments[:number_of_blobs], 1]) + r = hash_to_bls_field(sharded_commitments_container.sharded_commitments[:number_of_blobs], 0) + x = hash_to_bls_field(sharded_commitments_container.sharded_commitments[:number_of_blobs], 1) r_powers = compute_powers(r, number_of_blobs) combined_vector = vector_lincomb(block_vectors, r_powers) combined_commitment = elliptic_curve_lincomb(sharded_commitments_container.sharded_commitments[:number_of_blobs], r_powers) - y = eval_poly_at(combined_vector, x) + y = evaluate_polynomial_in_evaluation_form(combined_vector, x) verify_kzg_proof(combined_commitment, x, y, block_verification_kzg_proof) diff --git a/specs/sharding/p2p-interface.md b/specs/sharding/p2p-interface.md index 7de55247ed..56a13db648 100644 --- a/specs/sharding/p2p-interface.md +++ b/specs/sharding/p2p-interface.md @@ -8,17 +8,14 @@ + - [Introduction](#introduction) - [Constants](#constants) - [Misc](#misc) - [Gossip domain](#gossip-domain) - [Topics and messages](#topics-and-messages) - - [Shard blob subnets](#shard-blob-subnets) - - [`shard_blob_{subnet_id}`](#shard_blob_subnet_id) - - [Global topics](#global-topics) - - [`shard_blob_header`](#shard_blob_header) - - [`shard_blob_tx`](#shard_blob_tx) - - [`shard_proposer_slashing`](#shard_proposer_slashing) + - [Shard sample subnets](#shard-sample-subnets) + - [`shard_row_{subnet_id}`](#shard_row_subnet_id) @@ -68,7 +65,7 @@ The following validations MUST pass before forwarding the `sample`. - _[REJECT]_ The shard sample is for the correct subnet -- i.e. 
`sample.row == subnet_id` for `shard_row_{subnet_id}` and `sample.column == subnet_id` for `shard_column_{subnet_id}` - _[IGNORE]_ The sample is the first sample with valid signature received for the `(sample.builder, sample.slot, sample.row, sample.column)` combination. -- _[REJECT]_ The `sample.data` MUST NOT contain any point `x >= MODULUS`. Although it is a `uint256`, not the full 256 bit range is valid. +- _[REJECT]_ The `sample.data` MUST NOT contain any point `x >= BLS_MODULUS`. Although it is a `uint256`, not the full 256 bit range is valid. - _[REJECT]_ The validator defined by `sample.builder` exists and is slashable. - _[REJECT]_ The sample is proposed by the expected `builder` for the sample's `slot`. i.e., the beacon block at `sample.slot - 1` according to the node's fork choice contains an `IntermediateBlockBid` diff --git a/specs/sharding/polynomial-commitments.md index 26aff4c977..ef9dd688d3 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -8,50 +8,37 @@ -- [Introduction](#introduction) - - [Glossary](#glossary) -- [Custom types](#custom-types) -- [Constants](#constants) - - [Misc](#misc) -- [Preset](#preset) - - [Misc](#misc-1) - - [Time parameters](#time-parameters) - - [Shard blob samples](#shard-blob-samples) - - [Precomputed size verification points](#precomputed-size-verification-points) -- [Configuration](#configuration) - - [Time parameters](#time-parameters-1) -- [Containers](#containers) - - [New Containers](#new-containers) - - [`IntermediateBlockBid`](#intermediateblockbid) - - [`IntermediateBlockBidWithRecipientAddress`](#intermediateblockbidwithrecipientaddress) - - [`ShardedCommitmentsContainer`](#shardedcommitmentscontainer) - - [Extended Containers](#extended-containers) - - [`BeaconState`](#beaconstate) - - [`BeaconBlockBody`](#beaconblockbody) - -- [Helper functions](#helper-functions) - - [Block processing](#block-processing) - - 
[`is_intermediate_block_slot`](#is_intermediate_block_slot) - - [KZG](#kzg) - - [`hash_to_field`](#hash_to_field) - - [`compute_powers`](#compute_powers) - - [`verify_kzg_proof`](#verify_kzg_proof) - - [`verify_degree_proof`](#verify_degree_proof) - - [`block_to_field_elements`](#block_to_field_elements) - - [`roots_of_unity`](#roots_of_unity) - - [`modular_inverse`](#modular_inverse) - - [`eval_poly_at`](#eval_poly_at) - - [`next_power_of_two`](#next_power_of_two) - - [`low_degree_check`](#low_degree_check) - - [`vector_lincomb`](#vector_lincomb) - - [`elliptic_curve_lincomb`](#elliptic_curve_lincomb) - - [Beacon state accessors](#beacon-state-accessors) - - [`get_active_shard_count`](#get_active_shard_count) - - [Block processing](#block-processing-1) - - [`process_block`](#process_block) - - [Block header](#block-header) - - [Intermediate Block Bid](#intermediate-block-bid) - - [Sharded data](#sharded-data) - - [Execution payload](#execution-payload) + + - [Introduction](#introduction) + - [Constants](#constants) + - [BLS Field](#bls-field) + - [KZG Trusted setup](#kzg-trusted-setup) + - [Custom types](#custom-types) + - [Helper functions](#helper-functions) + - [`next_power_of_two`](#next_power_of_two) + - [Field operations](#field-operations) + - [Generic field operations](#generic-field-operations) + - [`bls_modular_inverse`](#bls_modular_inverse) + - [`roots_of_unity`](#roots_of_unity) + - [Field helper functions](#field-helper-functions) + - [`compute_powers`](#compute_powers) + - [`low_degree_check`](#low_degree_check) + - [`vector_lincomb`](#vector_lincomb) + - [`bytes_to_field_elements`](#bytes_to_field_elements) + - [Polynomial operations](#polynomial-operations) + - [`add_polynomials`](#add_polynomials) + - [`multiply_polynomials`](#multiply_polynomials) + - [`interpolate_polynomial`](#interpolate_polynomial) + - [`evaluate_polynomial_in_evaluation_form`](#evaluate_polynomial_in_evaluation_form) +- [KZG Operations](#kzg-operations) + - [Elliptic curve 
helper functoins](#elliptic-curve-helper-functoins) + - [`elliptic_curve_lincomb`](#elliptic_curve_lincomb) + - [Hash to field](#hash-to-field) + - [`hash_to_bls_field`](#hash_to_bls_field) + - [KZG operations](#kzg-operations) + - [`verify_kzg_proof`](#verify_kzg_proof) + - [`verify_kzg_multiproof`](#verify_kzg_multiproof) + - [`verify_degree_proof`](#verify_degree_proof) @@ -85,8 +72,8 @@ We define the following Python custom types for type hinting and readability: | - | - | - | | `KZGCommitment` | `Bytes48` | A G1 curve point | | `BLSFieldElement` | `uint256` | A number `x` in the range `0 <= x < BLS_MODULUS` | -| `BLSPolynomialCoefficients` | `List[BLSFieldElement]` | A polynomial over the BLS field, given in coefficient form | -| `BLSPolynomialEvaluations` | `List[BLSFieldElement]` | A polynomial over the BLS field, given in evaluation form | +| `BLSPolynomialByCoefficients` | `List[BLSFieldElement]` | A polynomial over the BLS field, given in coefficient form | +| `BLSPolynomialByEvaluations` | `List[BLSFieldElement]` | A polynomial over the BLS field, given in evaluation form | ## Helper functions @@ -101,13 +88,15 @@ def next_power_of_two(x: int) -> int: ### Generic field operations -#### `modular_inverse` +#### `bls_modular_inverse` ```python -def modular_inverse(a): - assert(a == 0): +def bls_modular_inverse(x: BLSFieldElement) -> BLSFieldElement: + """ + Computes the modular inverse of x, i.e. y such that x * y % BLS_MODULUS == 1. Returns 0 for x == 0 + """ lm, hm = 1, 0 - low, high = a % BLS_MODULUS, BLS_MODULUS + low, high = x % BLS_MODULUS, BLS_MODULUS while low > 1: r = high // low nm, new = hm - lm * r, high - low * r @@ -119,12 +108,16 @@ def modular_inverse(a): ```python def roots_of_unity(order: uint64) -> List[BLSFieldElement]: - r = [] + """ + Computes a list of roots of unity for a given order. The order must divide the BLS multiplicative group order, i.e. 
BLS_MODULUS - 1 + """ + assert (BLS_MODULUS - 1) % order == 0 + roots = [] root_of_unity = pow(PRIMITIVE_ROOT_OF_UNITY, (BLS_MODULUS - 1) // order, BLS_MODULUS) current_root_of_unity = 1 for i in range(len(SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE)): - r.append(current_root_of_unity) + roots.append(current_root_of_unity) current_root_of_unity = current_root_of_unity * root_of_unity % BLS_MODULUS return r ``` @@ -148,7 +141,7 @@ def compute_powers(x: BLSFieldElement, n: uint64) -> List[BLSFieldElement]: ```python def low_degree_check(commitments: List[KZGCommitment]): """ - Checks that the commitments are on a low-degree polynomial + Checks that the commitments are on a low-degree polynomial. If there are 2*N commitments, that means they should lie on a polynomial of degree d = K - N - 1, where K = next_power_of_two(2*N) (The remaining positions are filled with 0, this is to make FFTs usable) @@ -157,7 +150,7 @@ def low_degree_check(commitments: List[KZGCommitment]): """ assert len(commitments) % 2 == 0 N = len(commitments) // 2 - r = hash_to_field(commitments) + r = hash_to_bls_field(commitments, 0) K = next_power_of_two(2 * N) d = K - N - 1 r_to_K = pow(r, N, K) @@ -181,9 +174,9 @@ def low_degree_check(commitments: List[KZGCommitment]): coefs = [] for i in range(K): - coefs.append( - (r_to_K - 1) * modular_inverse(K * roots[i * (K - 1) % K] * (r - roots[i])) % BLS_MODULUS) + coefs.append( - (r_to_K - 1) * bls_modular_inverse(K * roots[i * (K - 1) % K] * (r - roots[i])) % BLS_MODULUS) for i in range(d + 1): - coefs[i] = (coefs[i] + B(r) * modular_inverse(Bprime(r) * (r - roots[i]))) % BLS_MODULUS + coefs[i] = (coefs[i] + B(r) * bls_modular_inverse(Bprime(r) * (r - roots[i]))) % BLS_MODULUS assert elliptic_curve_lincomb(commitments, coefs) == bls.Z1() ``` @@ -202,10 +195,10 @@ def vector_lincomb(vectors: List[List[BLSFieldElement]], scalars: List[BLSFieldE return [BLSFieldElement(x) for x in r] ``` -#### `block_to_field_elements` +#### `bytes_to_field_elements` 
```python -def block_to_field_elements(block: bytes) -> List[BLSFieldElement]: +def bytes_to_field_elements(block: bytes) -> List[BLSFieldElement]: """ Slices a block into 31 byte chunks that can fit into field elements @@ -213,23 +206,59 @@ def block_to_field_elements(block: bytes) -> List[BLSFieldElement]: return [BLSFieldElement(int.from_bytes(x, "little")) for x in sliced_block] ``` - ## Polynomial operations -#### `interpolate_poly` +#### `add_polynomials` + +```python +def add_polynomials(a: BLSPolynomialByCoefficients, b: BLSPolynomialByCoefficients) -> BLSPolynomialByCoefficients: + """ + Sums the polynomials `a` and `b` given by their coefficients + """ + a, b = (a, b) if len(a) >= len(b) else (b, a) + return [(a[i] + (b[i] if i < len(b) else 0)) % BLS_MODULUS for i in range(len(a))] +``` + +#### `multiply_polynomials` + +```python +def multiply_polynomials(a: BLSPolynomialByCoefficients, b: BLSPolynomialByCoefficients) -> BLSPolynomialByCoefficients: + """ + Multiplies the polynomials `a` and `b` given by their coefficients + """ + r = [0] + for power, coef in enumerate(a): + summand = [0] * power + [coef * x % BLS_MODULUS for x in b] + r = add_polynomials(r, summand) + return r +``` + + +#### `interpolate_polynomial` + +```python -def interpolate_poly(xs: List[BLSFieldElement], ys: List[BLSFieldElement]): +def interpolate_polynomial(xs: List[BLSFieldElement], ys: List[BLSFieldElement]) -> BLSPolynomialByCoefficients: """ Lagrange interpolation """ - # TODO! 
+ assert len(xs) == len(ys) + r = [0] + + for i in range(len(xs)): + summand = [ys[i]] + for j in range(len(ys)): + if j != i: + weight_adjustment = bls_modular_inverse(xs[j] - xs[i]) + summand = multiply_polynomials(summand, [weight_adjustment, ((BLS_MODULUS - weight_adjustment) * xs[i])]) + r = add_polynomials(r, summand) + + return r ``` -#### `eval_poly_at` +#### `evaluate_polynomial_in_evaluation_form` ```python -def eval_poly_at(poly: List[BLSFieldElement], x: BLSFieldElement) -> BLSFieldElement: +def evaluate_polynomial_in_evaluation_form(poly: BLSPolynomialByEvaluations, x: BLSFieldElement) -> BLSFieldElement: """ Evaluates a polynomial (in evaluation form) at an arbitrary point """ @@ -246,9 +275,9 @@ def eval_poly_at(poly: List[BLSFieldElement], x: BLSFieldElement) -> BLSFieldEle return field_elements_per_blob * pow(z, field_elements_per_blob - 1, BLS_MODULUS) r = 0 - inverses = [modular_inverse(z - x) for z in roots] + inverses = [bls_modular_inverse(z - x) for z in roots] for i, x in enumerate(inverses): - r += f[i] * modular_inverse(Aprime(roots[i])) * x % self.BLS_MODULUS + r += f[i] * bls_modular_inverse(Aprime(roots[i])) * x % self.BLS_MODULUS r = r * A(x) % self.BLS_MODULUS return r ``` @@ -274,20 +303,25 @@ def elliptic_curve_lincomb(points: List[KZGCommitment], scalars: List[BLSFieldEl ## Hash to field -#### `hash_to_field` +#### `hash_to_bls_field` ```python -def hash_to_field(x: Container): - return int.from_bytes(hash_tree_root(x), "little") % BLS_MODULUS +def hash_to_bls_field(x: Container, challenge_number: uint64) -> BLSFieldElement: + """ + This function is used to generate Fiat-Shamir challenges. The output is not uniform over the BLS field. 
+    """
+    return int.from_bytes(hash(hash_tree_root(x) + int.to_bytes(challenge_number, 32, "little")), "little") % BLS_MODULUS
 ```

 ## KZG operations

-
 #### `verify_kzg_proof`

 ```python
 def verify_kzg_proof(commitment: KZGCommitment, x: BLSFieldElement, y: BLSFieldElement, proof: KZGCommitment) -> None:
+    """
+    Checks that `proof` is a valid KZG proof that the polynomial committed to by `commitment` evaluates to `y` at `x`
+    """
     zero_poly = G2_SETUP[1].add(G2_SETUP[0].mult(x).neg())

     assert (
@@ -296,13 +330,15 @@ def verify_kzg_proof(commitment: KZGCommitment, x: BLSFieldE
     )
 ```

-
 #### `verify_kzg_multiproof`

 ```python
 def verify_kzg_multiproof(commitment: KZGCommitment, xs: List[BLSFieldElement], ys: List[BLSFieldElement], proof: KZGCommitment) -> None:
-    zero_poly = elliptic_curve_lincomb(G2_SETUP[:len(xs)], interpolate_poly(xs, [0] * len(ys)))
-    interpolated_poly = elliptic_curve_lincomb(G2_SETUP[:len(xs)], interpolate_poly(xs, ys))
+    """
+    Verifies a KZG multiproof.
+ """ + zero_poly = elliptic_curve_lincomb(G2_SETUP[:len(xs)], interpolate_polynomial(xs, [0] * len(ys))) + interpolated_poly = elliptic_curve_lincomb(G2_SETUP[:len(xs)], interpolate_polynomial(xs, ys)) assert ( bls.Pairing(proof, zero_poly) diff --git a/specs/sharding/validator.md b/specs/sharding/validator.md index 87b97d71d0..bc6153616e 100644 --- a/specs/sharding/validator.md +++ b/specs/sharding/validator.md @@ -8,18 +8,21 @@ -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) -- [Helpers](#helpers) - - [`get_pow_block_at_terminal_total_difficulty`](#get_pow_block_at_terminal_total_difficulty) - - [`get_terminal_pow_block`](#get_terminal_pow_block) -- [Protocols](#protocols) - - [`ExecutionEngine`](#executionengine) - - [`get_payload`](#get_payload) + + - [Introduction](#introduction) + - [Prerequisites](#prerequisites) + - [Constants](#constants) + - [Sample counts](#sample-counts) + - [Helpers](#helpers) + - [`get_validator_row_subnets`](#get_validator_row_subnets) + - [`get_validator_column_subnets`](#get_validator_column_subnets) + - [`reconstruct_polynomial`](#reconstruct_polynomial) + - [Sample verification](#sample-verification) + - [`verify_sample`](#verify_sample) - [Beacon chain responsibilities](#beacon-chain-responsibilities) - - [Block proposal](#block-proposal) - - [Constructing the `BeaconBlockBody`](#constructing-the-beaconblockbody) - - [ExecutionPayload](#executionpayload) + - [Validator assignments](#validator-assignments) + - [Attesting](#attesting) +- [Sample reconstruction](#sample-reconstruction) From fd423c1bf3fc1473d747451c5d7a693176fe0639 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Fri, 21 Jan 2022 15:34:41 +0000 Subject: [PATCH 19/66] Fix polynomial commitment file toc levels --- specs/sharding/polynomial-commitments.md | 60 ++++++++++++------------ 1 file changed, 30 insertions(+), 30 deletions(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index 
ef9dd688d3..f639108799 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -9,36 +9,36 @@ - - [Introduction](#introduction) - - [Constants](#constants) - - [BLS Field](#bls-field) - - [KZG Trusted setup](#kzg-trusted-setup) - - [Custom types](#custom-types) - - [Helper functions](#helper-functions) - - [`next_power_of_two`](#next_power_of_two) - - [Field operations](#field-operations) - - [Generic field operations](#generic-field-operations) - - [`bls_modular_inverse`](#bls_modular_inverse) - - [`roots_of_unity`](#roots_of_unity) - - [Field helper functions](#field-helper-functions) - - [`compute_powers`](#compute_powers) - - [`low_degree_check`](#low_degree_check) - - [`vector_lincomb`](#vector_lincomb) - - [`bytes_to_field_elements`](#bytes_to_field_elements) - - [Polynomial operations](#polynomial-operations) - - [`add_polynomials`](#add_polynomials) - - [`multiply_polynomials`](#multiply_polynomials) - - [`interpolate_polynomial`](#interpolate_polynomial) - - [`evaluate_polynomial_in_evaluation_form`](#evaluate_polynomial_in_evaluation_form) +- [Introduction](#introduction) +- [Constants](#constants) + - [BLS Field](#bls-field) + - [KZG Trusted setup](#kzg-trusted-setup) +- [Custom types](#custom-types) +- [Helper functions](#helper-functions) + - [`next_power_of_two`](#next_power_of_two) +- [Field operations](#field-operations) + - [Generic field operations](#generic-field-operations) + - [`bls_modular_inverse`](#bls_modular_inverse) + - [`roots_of_unity`](#roots_of_unity) + - [Field helper functions](#field-helper-functions) + - [`compute_powers`](#compute_powers) + - [`low_degree_check`](#low_degree_check) + - [`vector_lincomb`](#vector_lincomb) + - [`bytes_to_field_elements`](#bytes_to_field_elements) +- [Polynomial operations](#polynomial-operations) + - [`add_polynomials`](#add_polynomials) + - [`multiply_polynomials`](#multiply_polynomials) + - [`interpolate_polynomial`](#interpolate_polynomial) + - 
[`evaluate_polynomial_in_evaluation_form`](#evaluate_polynomial_in_evaluation_form)
 - [KZG Operations](#kzg-operations)
   - [Elliptic curve helper functions](#elliptic-curve-helper-functions)
-    - [`elliptic_curve_lincomb`](#elliptic_curve_lincomb)
+  - [`elliptic_curve_lincomb`](#elliptic_curve_lincomb)
 - [Hash to field](#hash-to-field)
-    - [`hash_to_bls_field`](#hash_to_bls_field)
+  - [`hash_to_bls_field`](#hash_to_bls_field)
 - [KZG operations](#kzg-operations)
-    - [`verify_kzg_proof`](#verify_kzg_proof)
-    - [`verify_kzg_multiproof`](#verify_kzg_multiproof)
-    - [`verify_degree_proof`](#verify_degree_proof)
+  - [`verify_kzg_proof`](#verify_kzg_proof)
+  - [`verify_kzg_multiproof`](#verify_kzg_multiproof)
+  - [`verify_degree_proof`](#verify_degree_proof)



@@ -282,11 +282,11 @@ def evaluate_polynomial_in_evaluation_form(poly: BLSPolynomialByEvaluations, x:
     return r
 ```

-# KZG Operations
+## KZG Operations

 We are using the KZG10 polynomial commitment scheme (Kate, Zaverucha and Goldberg, 2010: https://www.iacr.org/archive/asiacrypt2010/6477178/6477178.pdf).
-## Elliptic curve helper functoins
+### Elliptic curve helper functions

 #### `elliptic_curve_lincomb`

@@ -301,7 +301,7 @@ def elliptic_curve_lincomb(points: List[KZGCommitment], scalars: List[BLSFieldEl
     return r
 ```

-## Hash to field
+### Hash to field

 #### `hash_to_bls_field`

@@ -313,7 +313,7 @@ def hash_to_bls_field(x: Container, challenge_number: uint64) -> BLSFieldElement
     return int.from_bytes(hash(hash_tree_root(x) + int.to_bytes(challenge_number, 32, "little")), "little") % BLS_MODULUS
 ```

-## KZG operations
+### KZG operations

 #### `verify_kzg_proof`


From e97fd176220e42d792f344d7fb6f46630d2aa3c2 Mon Sep 17 00:00:00 2001
From: Dankrad Feist
Date: Fri, 21 Jan 2022 16:04:16 +0000
Subject: [PATCH 20/66] Refactor the payload to make better use of unions

---
 specs/sharding/beacon-chain.md | 52 +++++++++++++++++++---------------
 1 file changed, 29 insertions(+), 23 deletions(-)

diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md
index 9bc4e32625..d62f3841a8 100644
--- a/specs/sharding/beacon-chain.md
+++ b/specs/sharding/beacon-chain.md
@@ -29,6 +29,7 @@
   - [`SignedShardSample`](#signedshardsample)
   - [Extended Containers](#extended-containers)
     - [`BeaconState`](#beaconstate)
+    - [`IntermediateBlockData`](#intermediateblockdata)
     - [`BeaconBlockBody`](#beaconblockbody)
 - [Helper functions](#helper-functions)
   - [Block processing](#block-processing)
@@ -190,19 +191,23 @@ class BeaconState(bellatrix.BeaconState):
     blocks_since_intermediate_block: List[BeaconBlock, MAX_BEACON_BLOCKS_BETWEEN_INTERMEDIATE_BLOCKS]
 ```

+#### `IntermediateBlockData`
+
+```python
+class IntermediateBlockData(Container):
+    execution_payload: ExecutionPayload
+    sharded_commitments_container: ShardedCommitmentsContainer
+```
+
 #### `BeaconBlockBody`

 ```python
-class BeaconBlockBody(bellatrix.BeaconBlockBody):
-    execution_payload: Union[None, ExecutionPayload]
-    sharded_commitments_container: Union[None, ShardedCommitmentsContainer]
-
intermediate_block_bid_with_recipient_address: Union[None, IntermediateBlockBidWithRecipientAddress] +class BeaconBlockBody(altair.BeaconBlockBody): + payload_data: Union[IntermediateBlockBid, IntermediateBlockData] ``` ## Helper functions -*Note*: The definitions below are for specification purposes and are not necessarily optimal implementations. - ### Block processing #### `is_intermediate_block_slot` @@ -282,25 +287,31 @@ def process_block_header(state: BeaconState, block: BeaconBlock) -> None: def verify_intermediate_block_bid(state: BeaconState, block: BeaconBlock) -> None: if is_intermediate_block_slot(block.slot): # Get last intermediate block bid - assert state.blocks_since_intermediate_block[-1].body.intermediate_block_bid_with_recipient_address.selector == 1 - intermediate_block_bid = state.blocks_since_intermediate_block[-1].body.intermediate_block_bid_with_recipient_address.value.intermediate_block_bid + assert state.blocks_since_intermediate_block[-1].body.payload_data.selector == 0 + intermediate_block_bid = state.blocks_since_intermediate_block[-1].body.payload_data.value.intermediate_block_bid assert intermediate_block_bid.slot + 1 == block.slot - assert intermediate_block_bid.execution_payload_root == hash_tree_root(block.body.execution_payload) + assert block.body.payload_data.selector == 1 # Verify that intermediate block does not contain bid + + intermediate_block_data = block.body.payload_data.value - assert intermediate_block_bid.sharded_data_commitment_count == block.body.sharded_commitments_container.included_sharded_data_commitments + assert intermediate_block_bid.execution_payload_root == hash_tree_root(intermediate_block_data.execution_payload) - assert intermediate_block_bid.sharded_data_commitment_root == hash_tree_root(block.body.sharded_commitments_container.sharded_commitments[-intermediate_block_bid.sharded_data_commitments:]) + assert intermediate_block_bid.sharded_data_commitment_count == 
intermediate_block_data.included_sharded_data_commitments + + assert intermediate_block_bid.sharded_data_commitment_root == hash_tree_root(intermediate_block_data.sharded_commitments[-intermediate_block_bid.sharded_data_commitments:]) assert intermediate_block_bid.validator_index == block.proposer_index - assert block.body.intermediate_block_bid_with_recipient_address.selector == 0 # Verify that intermediate block does not contain bid else: - assert block.body.intermediate_block_bid_with_recipient_address.selector == 1 + assert block.body.payload_data.selector == 0 - intermediate_block_bid = block.body.intermediate_block_bid_with_recipient_address.value.intermediate_block_bid + intermediate_block_bid = block.body.payload_data.value.intermediate_block_bid assert intermediate_block_bid.slot == block.slot assert intermediate_block_bid.parent_block_root == block.parent_root + # We do not check that the builder address exists or has sufficient balance here. + # If it does not have sufficient balance, the block proposer loses out, so it is their + # responsibility to check. 
``` #### Sharded data @@ -308,8 +319,8 @@ def verify_intermediate_block_bid(state: BeaconState, block: BeaconBlock) -> Non ```python def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: if is_intermediate_block_slot(block.slot): - assert block.body.sharded_commitments_container.selector == 1 - sharded_commitments_container = block.body.sharded_commitments_container.value + assert block.body.payload_data.selector == 1 + sharded_commitments_container = block.body.payload_data.value.sharded_commitments_container # Verify not too many commitments assert len(sharded_commitments_container.sharded_commitments) // 2 <= get_active_shard_count(state, get_current_epoch(state)) @@ -345,9 +356,6 @@ def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: # Verify that number of sharded data commitments is correctly indicated assert 2 * (number_of_blobs + included_sharded_data_commitments) == len(sharded_commitments_container.sharded_commitments) - - else: - assert block.body.sharded_commitments_container.selector == 0 # Only intermediate blocks have sharded commitments ``` #### Execution payload @@ -355,8 +363,8 @@ def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: ```python def process_execution_payload(state: BeaconState, block: BeaconBlock, execution_engine: ExecutionEngine) -> None: if is_intermediate_block_slot(block.slot): - assert block.body.execution_payload.selector == 1 - payload = block.body.execution_payload.value + assert block.body.payload_data.selector == 1 + payload = block.body.payload_data.value.execution_payload # Verify consistency of the parent hash with respect to the previous execution payload header if is_merge_transition_complete(state): @@ -401,6 +409,4 @@ def process_execution_payload(state: BeaconState, block: BeaconBlock, execution_ block_hash=payload.block_hash, transactions_root=hash_tree_root(payload.transactions), ) - else: - assert block.body.execution_payload.selector == 0 # Only 
intermediate blocks have execution payloads
 ```
\ No newline at end of file

From db02d183f2424aef99d0224ae8ef89be4e8af55e Mon Sep 17 00:00:00 2001
From: Dankrad Feist
Date: Fri, 21 Jan 2022 09:14:24 -0800
Subject: [PATCH 21/66] Add reverse bit order convention

---
 specs/sharding/polynomial-commitments.md | 24 +++++++++++++++++++++++-
 specs/sharding/validator.md              |  6 ++++--
 2 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md
index f639108799..305f212fc3 100644
--- a/specs/sharding/polynomial-commitments.md
+++ b/specs/sharding/polynomial-commitments.md
@@ -84,6 +84,28 @@ def next_power_of_two(x: int) -> int:
     return 2 ** ((x - 1).bit_length())
 ```

+#### `reverse_bit_order`
+
+```python
+def reverse_bit_order(n: int, order: int) -> int:
+    """
+    Reverse the bit order of an integer n
+    """
+    assert is_power_of_two(order)
+    # Convert n to binary with the same number of bits as "order" - 1, then reverse its bit order
+    return int(('{:0' + str(order.bit_length() - 1) + 'b}').format(n)[::-1], 2)
+```
+
+#### `list_to_reverse_bit_order`
+
+```python
+def list_to_reverse_bit_order(l):
+    """
+    Convert a list between normal and reverse bit order. This operation is its own inverse: applying it twice returns the original list.
+ """ + return [l[reverse_bit_order(i, len(l))] for i in range(len(l))] +``` + ## Field operations ### Generic field operations @@ -154,7 +176,7 @@ def low_degree_check(commitments: List[KZGCommitment]): K = next_power_of_two(2 * N) d = K - N - 1 r_to_K = pow(r, N, K) - roots = roots_of_unity(K) + roots = list_to_reverse_bit_order(roots_of_unity(K)) # For an efficient implementation, B and Bprime should be precomputed def B(z): diff --git a/specs/sharding/validator.md b/specs/sharding/validator.md index bc6153616e..6697d3cd96 100644 --- a/specs/sharding/validator.md +++ b/specs/sharding/validator.md @@ -92,9 +92,11 @@ def verify_sample(state: BeaconState, block: BeaconBlock, sample: SignedShardSam # signing_root = compute_signing_root(sample, get_domain(state, DOMAIN_SHARD_SAMPLE)) # assert bls.Verify(sample.builder, signing_root, sample.signature) + roots_in_rbo = list_to_reverse_bit_order(roots_of_unity(SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE)) + # Verify KZG proof - verify_kzg_multiproof(block.body.sharded_commitments_container.value.sharded_commitments[sample.row], - ??? # TODO! 
Compute the roots of unity for this sample
+    verify_kzg_multiproof(block.body.payload_data.value.sharded_commitments_container.sharded_commitments[sample.row],
+                          roots_in_rbo[sample.column * FIELD_ELEMENTS_PER_SAMPLE:(sample.column + 1) * FIELD_ELEMENTS_PER_SAMPLE],
                           sample.data,
                           sample.proof)
 ```

From 2454ab57db33773eba7f46c22406628a1651f623 Mon Sep 17 00:00:00 2001
From: Dankrad Feist
Date: Fri, 21 Jan 2022 09:16:35 -0800
Subject: [PATCH 22/66] Correct inequality in verify_degree_proof

---
 specs/sharding/polynomial-commitments.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md
index 305f212fc3..4468c68218 100644
--- a/specs/sharding/polynomial-commitments.md
+++ b/specs/sharding/polynomial-commitments.md
@@ -371,13 +371,13 @@ def verify_kzg_multiproof(commitment: KZGCommitment, xs: List[BLSFieldElement],

 #### `verify_degree_proof`

 ```python
-def verify_degree_proof(commitment: KZGCommitment, degree: uint64, proof: KZGCommitment):
+def verify_degree_proof(commitment: KZGCommitment, degree_bound: uint64, proof: KZGCommitment):
     """
-    Verifies that the commitment is of polynomial degree <= degree.
+    Verifies that the commitment is of polynomial degree < degree_bound.
""" assert ( bls.Pairing(proof, G2_SETUP[0]) - == bls.Pairing(commitment, G2_SETUP[-degree - 1]) + == bls.Pairing(commitment, G2_SETUP[-degree_bound]) ) ``` \ No newline at end of file From c32229fbce33bd39b3be8f9083b5a0dadcd2e4cc Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Fri, 21 Jan 2022 09:28:40 -0800 Subject: [PATCH 23/66] Small fix --- specs/sharding/beacon-chain.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index d62f3841a8..9db855b5eb 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -299,7 +299,7 @@ def verify_intermediate_block_bid(state: BeaconState, block: BeaconBlock) -> Non assert intermediate_block_bid.sharded_data_commitment_count == intermediate_block_data.included_sharded_data_commitments - assert intermediate_block_bid.sharded_data_commitment_root == hash_tree_root(intermediate_block_data.sharded_commitments[-intermediate_block_bid.sharded_data_commitments:]) + assert intermediate_block_bid.sharded_data_commitment_root == hash_tree_root(intermediate_block_data.sharded_commitments[-intermediate_block_bid.included_sharded_data_commitments:]) assert intermediate_block_bid.validator_index == block.proposer_index From 1c3ff9c1af5445a943999a75d201c7f8616a4f73 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Fri, 21 Jan 2022 09:35:39 -0800 Subject: [PATCH 24/66] Fix polynomial evaluation --- specs/sharding/polynomial-commitments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index 4468c68218..32230e2974 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -299,7 +299,7 @@ def evaluate_polynomial_in_evaluation_form(poly: BLSPolynomialByEvaluations, x: r = 0 inverses = [bls_modular_inverse(z - x) for z in roots] for i, x in enumerate(inverses): - r += f[i] * 
bls_modular_inverse(Aprime(roots[i])) * x % self.BLS_MODULUS + r += poly[i] * bls_modular_inverse(Aprime(roots[i])) * x % self.BLS_MODULUS r = r * A(x) % self.BLS_MODULUS return r ``` From 86b212dfe44342d87403a60f68c023d54bcb8a8f Mon Sep 17 00:00:00 2001 From: dankrad Date: Fri, 21 Jan 2022 17:50:19 +0000 Subject: [PATCH 25/66] Update specs/sharding/beacon-chain.md Co-authored-by: George Kadianakis --- specs/sharding/beacon-chain.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 9db855b5eb..69b36ff8d2 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -352,7 +352,7 @@ def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: combined_commitment = elliptic_curve_lincomb(sharded_commitments_container.sharded_commitments[:number_of_blobs], r_powers) y = evaluate_polynomial_in_evaluation_form(combined_vector, x) - verify_kzg_proof(combined_commitment, x, y, block_verification_kzg_proof) + verify_kzg_proof(combined_commitment, x, y, sharded_commitments_container.block_verification_kzg_proof) # Verify that number of sharded data commitments is correctly indicated assert 2 * (number_of_blobs + included_sharded_data_commitments) == len(sharded_commitments_container.sharded_commitments) From 32f1e2ac372d5899739ef6c1d710604a2b0f2d0e Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Fri, 21 Jan 2022 09:40:05 -0800 Subject: [PATCH 26/66] MAX_BEACON_BLOCKS_BETWEEN_INTERMEDIATE_BLOCKS definition added --- specs/sharding/beacon-chain.md | 1 + 1 file changed, 1 insertion(+) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 69b36ff8d2..d079fc1a7a 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -84,6 +84,7 @@ The following values are (non-configurable) constants used throughout the specif | - | - | - | | `MAX_SHARDS` | `uint64(2**12)` (= 4,096) | Theoretical max shard count 
(used to determine data structure sizes) | | `ACTIVE_SHARDS` | `uint64(2**8)` (= 256) | Initial shard count | +| `MAX_BEACON_BLOCKS_BETWEEN_INTERMEDIATE_BLOCKS` | `uint64(2**4)` (= 16) | TODO: Need to define what happens if there were more blocks without intermediate blocks | ### Time parameters From 4e2f88ec54d75a13af59a3ccd15ac6a94a33e2c9 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Fri, 21 Jan 2022 12:39:36 -0800 Subject: [PATCH 27/66] Sample reconstruction estimate --- specs/sharding/validator.md | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/specs/sharding/validator.md b/specs/sharding/validator.md index 6697d3cd96..2572f8b9c0 100644 --- a/specs/sharding/validator.md +++ b/specs/sharding/validator.md @@ -132,4 +132,10 @@ the local node is subscribed to, if they have not already received that sample o TODO: We need to verify the total complexity of doing this and make sure this does not cause too much load on a validator -TODO: Compute what the minimum number of validators online would be that guarantees reconstruction of all samples +## Minimum online validator requirement + +The data availability construction guarantees that reconstruction is possible if 75% of all samples are available. In this case, at least 50% of all rows and 50% of all columns are independently available. In practice, it is likely that some supernodes will centrally collect all samples and fill in any gaps. However, we want to build a system that reliably reconstructs even absent all supernodes. Any row or column with 50% of samples will easily be reconstructed even with only 100s of validators online; so the only question is how we get to 50% of samples for all rows and columns, when some of them might be completely unseeded. + +Each validator will transfer 4 samples between rows and columns where there is overlap. Without loss of generality, look at row 0. 
Each validator has 1/128 chance of having a sample in this row, and we need 256 samples to reconstruct it. So we expect that we need ~256 * 128 = 32,768 validators to have a fair chance of reconstructing it if it was completely unseeded. + +A more elaborate estimate [here](https://notes.ethereum.org/@dankrad/minimum-reconstruction-validators) needs about 55,000 validators to be online for high safety that each row and column will be reconstructed. \ No newline at end of file From 457cb1c0634a4bb8955d9f58e01bc3ded29c1bda Mon Sep 17 00:00:00 2001 From: dankrad Date: Sun, 23 Jan 2022 14:08:33 -0800 Subject: [PATCH 28/66] Update specs/sharding/beacon-chain.md Co-authored-by: Hsiao-Wei Wang --- specs/sharding/beacon-chain.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index d079fc1a7a..6facd62448 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -138,9 +138,9 @@ class IntermediateBlockBid(Container): # Block builders use an Eth1 address -- need signature as # block builder fees and data gas base fees will be charged to this address - signature_y_parity: bool - signature_r: uint256 - signature_s: uint256 + signature_y_parity: bool + signature_r: uint256 + signature_s: uint256 ``` #### `IntermediateBlockBidWithRecipientAddress` From 21f4e3860e110ecc68e4c01100e7bd3dd61dd9c7 Mon Sep 17 00:00:00 2001 From: dankrad Date: Sun, 23 Jan 2022 14:09:15 -0800 Subject: [PATCH 29/66] Update specs/sharding/polynomial-commitments.md Co-authored-by: Hsiao-Wei Wang --- specs/sharding/polynomial-commitments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index 32230e2974..fba64cb101 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -191,7 +191,7 @@ def low_degree_check(commitments: List[KZGCommitment]): m = 1 
for w in roots[:i] + roots[i+1:d + 1]: m = m * (z - w) % BLS_MODULUS - r = (r + M) % BLS_MODULUS + r = (r + m) % BLS_MODULUS return r coefs = [] From 373e36c42039fbdbbfca1c011fac0181d719129b Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Sun, 23 Jan 2022 14:12:15 -0800 Subject: [PATCH 30/66] Fix return value of roots_of_unity() --- specs/sharding/polynomial-commitments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index fba64cb101..4377611b9d 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -141,7 +141,7 @@ def roots_of_unity(order: uint64) -> List[BLSFieldElement]: for i in range(len(SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE)): roots.append(current_root_of_unity) current_root_of_unity = current_root_of_unity * root_of_unity % BLS_MODULUS - return r + return roots ``` ### Field helper functions From ed24410b928e856120e0b820341465fb5a534154 Mon Sep 17 00:00:00 2001 From: dankrad Date: Sun, 23 Jan 2022 14:14:28 -0800 Subject: [PATCH 31/66] Update specs/sharding/polynomial-commitments.md Co-authored-by: Hsiao-Wei Wang --- specs/sharding/polynomial-commitments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index 4377611b9d..82a3d5852b 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -222,7 +222,7 @@ def vector_lincomb(vectors: List[List[BLSFieldElement]], scalars: List[BLSFieldE ```python def bytes_to_field_elements(block: bytes) -> List[BLSFieldElement]: """ - Slices a block into 31 byte chunks that can fit into field elements + Slices a block into 31-byte chunks that can fit into field elements. 
    """
     sliced_block = [block[i:i + 31] for i in range(0, len(block), 31)]
     return [BLSFieldElement(int.from_bytes(x, "little")) for x in sliced_block]

From 1a8b1f31997326aa7e3b7d4e06bd87eb78db6b19 Mon Sep 17 00:00:00 2001
From: dankrad
Date: Sun, 23 Jan 2022 14:16:26 -0800
Subject: [PATCH 32/66] Update specs/sharding/polynomial-commitments.md

Co-authored-by: Hsiao-Wei Wang

---
 specs/sharding/polynomial-commitments.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md
index 82a3d5852b..e5f7cb39c5 100644
--- a/specs/sharding/polynomial-commitments.md
+++ b/specs/sharding/polynomial-commitments.md
@@ -238,7 +238,7 @@ def add_polynomials(a: BLSPolynomialByCoefficients, b: BLSPolynomialByCoefficien
     Sums the polynomials `a` and `b` given by their coefficients
     """
     a, b = (a, b) if len(a) >= len(b) else (b, a)
-    return [(a[i] + (b[i] if i < len(b) else 0) % BLS_MODULUS for i in range(len(a))]
+    return [(a[i] + (b[i] if i < len(b) else 0)) % BLS_MODULUS for i in range(len(a))]

From f6b142a0717b6daed808980beae14e17d4011ea5 Mon Sep 17 00:00:00 2001
From: dankrad
Date: Sun, 23 Jan 2022 14:16:57 -0800
Subject: [PATCH 33/66] Update specs/sharding/polynomial-commitments.md

Co-authored-by: Hsiao-Wei Wang

---
 specs/sharding/polynomial-commitments.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md
index e5f7cb39c5..c78561d1cd 100644
--- a/specs/sharding/polynomial-commitments.md
+++ b/specs/sharding/polynomial-commitments.md
@@ -332,7 +332,10 @@ def hash_to_bls_field(x: Container, challenge_number: uint64) -> BLSFieldElement
     """
     This function is used to generate Fiat-Shamir challenges. The output is not uniform over the BLS field.
""" - return int.from_bytes(hash(hash_tree_root(x) + int.to_bytes(challenge_number, 32, "little")), "little") % BLS_MODULUS + return ( + (int.from_bytes(hash(hash_tree_root(x) + int.to_bytes(challenge_number, 32, "little")), "little")) + % BLS_MODULUS + ) ``` ### KZG operations From 096c04b77b6bdb7ffabc6dd2739cd75e9ff9ec98 Mon Sep 17 00:00:00 2001 From: dankrad Date: Sun, 23 Jan 2022 14:17:10 -0800 Subject: [PATCH 34/66] Update specs/sharding/polynomial-commitments.md Co-authored-by: Hsiao-Wei Wang --- specs/sharding/polynomial-commitments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index c78561d1cd..c802968d5c 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -189,7 +189,7 @@ def low_degree_check(commitments: List[KZGCommitment]): r = 0 for i in range(d + 1): m = 1 - for w in roots[:i] + roots[i+1:d + 1]: + for w in roots[:i] + roots[i + 1:d + 1]: m = m * (z - w) % BLS_MODULUS r = (r + m) % BLS_MODULUS return r From e765c4a3e10156b91e2e5ffa058a9edda54e97a3 Mon Sep 17 00:00:00 2001 From: dankrad Date: Sun, 23 Jan 2022 14:17:35 -0800 Subject: [PATCH 35/66] Update specs/sharding/polynomial-commitments.md Co-authored-by: Hsiao-Wei Wang --- specs/sharding/polynomial-commitments.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index c802968d5c..54ff998950 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -131,7 +131,8 @@ def bls_modular_inverse(x: BLSFieldElement) -> BLSFieldElement: ```python def roots_of_unity(order: uint64) -> List[BLSFieldElement]: """ - Computes a list of roots of unity for a given order. The order must divide the BLS multiplicative group order, i.e. BLS_MODULUS - 1 + Compute a list of roots of unity for a given order. 
+ The order must divide the BLS multiplicative group order, i.e. BLS_MODULUS - 1 """ assert (BLS_MODULUS - 1) % order == 0 roots = [] From 7ac881ed942beab0c544475f77bc03cc472cb0de Mon Sep 17 00:00:00 2001 From: dankrad Date: Sun, 23 Jan 2022 14:19:22 -0800 Subject: [PATCH 36/66] Update specs/sharding/beacon-chain.md Co-authored-by: Hsiao-Wei Wang --- specs/sharding/beacon-chain.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 6facd62448..0ba5df8307 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -102,6 +102,8 @@ With the introduction of intermediate blocks the number of slots per epoch is do ### Precomputed root of unity +| Name | Value | Notes | +| - | - | - | | `ROOT_OF_UNITY` | `pow(PRIMITIVE_ROOT_OF_UNITY, (BLS_MODULUS - 1) // int(SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE), BLS_MODULUS)` | ## Configuration From 41756704dec3ab1346db434d9cd5e99a90c1f4d4 Mon Sep 17 00:00:00 2001 From: dankrad Date: Sun, 23 Jan 2022 14:19:34 -0800 Subject: [PATCH 37/66] Update specs/sharding/beacon-chain.md Co-authored-by: Hsiao-Wei Wang --- specs/sharding/beacon-chain.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 0ba5df8307..19a7a99e4d 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -150,7 +150,7 @@ class IntermediateBlockBid(Container): ```python class IntermediateBlockBidWithRecipientAddress(Container): intermediate_block_bid: Union[None, IntermediateBlockBid] - ethereum_address: Bytes[20] # Address to receive the block builder bid + recipient_address: ExecutionAddress # Address to receive the block builder bid ``` #### `ShardedCommitmentsContainer` From b99642c3d27d8e7f392603f1c73f8c2b0eef86e7 Mon Sep 17 00:00:00 2001 From: dankrad Date: Sun, 23 Jan 2022 14:19:53 -0800 Subject: [PATCH 38/66] Update specs/sharding/polynomial-commitments.md 
Co-authored-by: Hsiao-Wei Wang --- specs/sharding/polynomial-commitments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index 54ff998950..bf7002f6fd 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -115,7 +115,7 @@ def list_to_reverse_bit_order(l): ```python def bls_modular_inverse(x: BLSFieldElement) -> BLSFieldElement: """ - Computes the modular inverse of x, i.e. y such that x * y % BLS_MODULUS == 1. Returns 0 for x == 0 + Compute the modular inverse of x, i.e. y such that x * y % BLS_MODULUS == 1 and return 0 for x == 0 """ lm, hm = 1, 0 low, high = x % BLS_MODULUS, BLS_MODULUS From 8fe2111ed13870536e2544884ddb995fac2a6ed9 Mon Sep 17 00:00:00 2001 From: dankrad Date: Sun, 23 Jan 2022 14:20:52 -0800 Subject: [PATCH 39/66] Update specs/sharding/beacon-chain.md Co-authored-by: Hsiao-Wei Wang --- specs/sharding/beacon-chain.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 19a7a99e4d..db13695177 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -216,7 +216,7 @@ class BeaconBlockBody(altair.BeaconBlockBody): #### `is_intermediate_block_slot` ```python -def is_intermediate_block_slot(slot: Slot): +def is_intermediate_block_slot(slot: Slot) -> bool: return slot % 2 == 1 ``` From 2740ca43412744734acdcc0626239a03b3d38b18 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Sun, 13 Feb 2022 23:13:49 -0700 Subject: [PATCH 40/66] Intermediate block -> Builder block --- specs/sharding/beacon-chain.md | 98 +++++++++++++++++----------------- 1 file changed, 49 insertions(+), 49 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index db13695177..20105a6437 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -23,8 +23,8 @@ - [Time 
parameters](#time-parameters-1) - [Containers](#containers) - [New Containers](#new-containers) - - [`IntermediateBlockBid`](#intermediateblockbid) - - [`IntermediateBlockBidWithRecipientAddress`](#intermediateblockbidwithrecipientaddress) + - [`BuilderBlockBid`](#BuilderBlockbid) + - [`BuilderBlockBidWithRecipientAddress`](#BuilderBlockbidwithrecipientaddress) - [`ShardedCommitmentsContainer`](#shardedcommitmentscontainer) - [`SignedShardSample`](#signedshardsample) - [Extended Containers](#extended-containers) @@ -33,14 +33,14 @@ - [`BeaconBlockBody`](#beaconblockbody) - [Helper functions](#helper-functions) - [Block processing](#block-processing) - - [`is_intermediate_block_slot`](#is_intermediate_block_slot) + - [`is_builder_block_slot`](#is_builder_block_slot) - [Beacon state accessors](#beacon-state-accessors) - [`get_active_shard_count`](#get_active_shard_count) - [Beacon chain state transition function](#beacon-chain-state-transition-function) - [Block processing](#block-processing-1) - [`process_block`](#process_block) - [Block header](#block-header) - - [Intermediate Block Bid](#intermediate-block-bid) + - [Intermediate Block Bid](#builder-block-bid) - [Sharded data](#sharded-data) - [Execution payload](#execution-payload) @@ -84,11 +84,11 @@ The following values are (non-configurable) constants used throughout the specif | - | - | - | | `MAX_SHARDS` | `uint64(2**12)` (= 4,096) | Theoretical max shard count (used to determine data structure sizes) | | `ACTIVE_SHARDS` | `uint64(2**8)` (= 256) | Initial shard count | -| `MAX_BEACON_BLOCKS_BETWEEN_INTERMEDIATE_BLOCKS` | `uint64(2**4)` (= 16) | TODO: Need to define what happens if there were more blocks without intermediate blocks | +| `MAX_PROPOSER_BLOCKS_BETWEEN_BUILDER_BLOCKS` | `uint64(2**4)` (= 16) | TODO: Need to define what happens if there were more blocks without builder blocks | ### Time parameters -With the introduction of intermediate blocks the number of slots per epoch is doubled (it counts 
beacon blocks and intermediate blocks). +With the introduction of builder blocks the number of slots per epoch is doubled (it counts beacon blocks and builder blocks). | Name | Value | Unit | Duration | | - | - | :-: | :-: | @@ -121,16 +121,16 @@ E.g. `ACTIVE_SHARDS` and `SAMPLES_PER_BLOB`. ### New Containers -#### `IntermediateBlockBid` +#### `BuilderBlockBid` ```python -class IntermediateBlockBid(Container): +class BuilderBlockBid(Container): slot: Slot parent_block_root: Root execution_payload_root: Root - sharded_data_commitment_root: Root # Root of the sharded data (only data, not beacon/intermediate block commitments) + sharded_data_commitment_root: Root # Root of the sharded data (only data, not beacon/builder block commitments) sharded_data_commitment_count: uint64 # Count of sharded data commitments @@ -145,11 +145,11 @@ class IntermediateBlockBid(Container): signature_s: uint256 ``` -#### `IntermediateBlockBidWithRecipientAddress` +#### `BuilderBlockBidWithRecipientAddress` ```python -class IntermediateBlockBidWithRecipientAddress(Container): - intermediate_block_bid: Union[None, IntermediateBlockBid] +class BuilderBlockBidWithRecipientAddress(Container): + builder_block_bid: Union[None, BuilderBlockBid] recipient_address: ExecutionAddress # Address to receive the block builder bid ``` @@ -162,8 +162,8 @@ class ShardedCommitmentsContainer(Container): # Aggregate degree proof for all sharded_commitments degree_proof: KZGCommitment - # The sizes of the blocks encoded in the commitments (last intermediate and all beacon blocks since) - included_block_sizes: List[uint64, MAX_BEACON_BLOCKS_BETWEEN_INTERMEDIATE_BLOCKS + 1] + # The sizes of the blocks encoded in the commitments (last builder and all beacon blocks since) + included_block_sizes: List[uint64, MAX_PROPOSER_BLOCKS_BETWEEN_BUILDER_BLOCKS + 1] # Number of commitments that are for sharded data (no blocks) included_sharded_data_commitments: uint64 @@ -191,13 +191,13 @@ class SignedShardSample(Container): 
```python class BeaconState(bellatrix.BeaconState): - blocks_since_intermediate_block: List[BeaconBlock, MAX_BEACON_BLOCKS_BETWEEN_INTERMEDIATE_BLOCKS] + blocks_since_builder_block: List[BeaconBlock, MAX_PROPOSER_BLOCKS_BETWEEN_BUILDER_BLOCKS] ``` #### `IntermediatBlockData` ```python -class IntermediateBlockData(Container): +class BuilderBlockData(Container): execution_payload: ExecutionPayload sharded_commitments_container: ShardedCommitmentsContainer ``` @@ -206,17 +206,17 @@ class IntermediateBlockData(Container): ```python class BeaconBlockBody(altair.BeaconBlockBody): - payload_data: Union[IntermediateBlockBid, IntermediateBlockData] + payload_data: Union[BuilderBlockBid, BuilderBlockData] ``` ## Helper functions ### Block processing -#### `is_intermediate_block_slot` +#### `is_builder_block_slot` ```python -def is_intermediate_block_slot(slot: Slot) -> bool: +def is_builder_block_slot(slot: Slot) -> bool: return slot % 2 == 1 ``` @@ -242,7 +242,7 @@ def get_active_shard_count(state: BeaconState, epoch: Epoch) -> uint64: ```python def process_block(state: BeaconState, block: BeaconBlock) -> None: process_block_header(state, block) - verify_intermediate_block_bid(state, block) + verify_builder_block_bid(state, block) process_sharded_data(state, block) if is_execution_enabled(state, block.body): process_execution_payload(state, block, EXECUTION_ENGINE) @@ -252,9 +252,9 @@ def process_block(state: BeaconState, block: BeaconBlock) -> None: process_operations(state, block.body) process_sync_aggregate(state, block.body.sync_aggregate) - if is_intermediate_block_slot(block.slot): - state.blocks_since_intermediate_block = [] - state.blocks_since_intermediate_block.append(block) + if is_builder_block_slot(block.slot): + state.blocks_since_builder_block = [] + state.blocks_since_builder_block.append(block) ``` #### Block header @@ -266,7 +266,7 @@ def process_block_header(state: BeaconState, block: BeaconBlock) -> None: # Verify that the block is newer than latest 
block header assert block.slot > state.latest_block_header.slot # Verify that proposer index is the correct index - if not is_intermediate_block_slot(block.slot): + if not is_builder_block_slot(block.slot): assert block.proposer_index == get_beacon_proposer_index(state) # Verify that the parent matches assert block.parent_root == hash_tree_root(state.latest_block_header) @@ -287,31 +287,31 @@ def process_block_header(state: BeaconState, block: BeaconBlock) -> None: #### Intermediate Block Bid ```python -def verify_intermediate_block_bid(state: BeaconState, block: BeaconBlock) -> None: - if is_intermediate_block_slot(block.slot): - # Get last intermediate block bid - assert state.blocks_since_intermediate_block[-1].body.payload_data.selector == 0 - intermediate_block_bid = state.blocks_since_intermediate_block[-1].body.payload_data.value.intermediate_block_bid - assert intermediate_block_bid.slot + 1 == block.slot +def verify_builder_block_bid(state: BeaconState, block: BeaconBlock) -> None: + if is_builder_block_slot(block.slot): + # Get last builder block bid + assert state.blocks_since_builder_block[-1].body.payload_data.selector == 0 + builder_block_bid = state.blocks_since_builder_block[-1].body.payload_data.value.builder_block_bid + assert builder_block_bid.slot + 1 == block.slot - assert block.body.payload_data.selector == 1 # Verify that intermediate block does not contain bid + assert block.body.payload_data.selector == 1 # Verify that builder block does not contain bid - intermediate_block_data = block.body.payload_data.value + builder_block_data = block.body.payload_data.value - assert intermediate_block_bid.execution_payload_root == hash_tree_root(intermediate_block_data.execution_payload) + assert builder_block_bid.execution_payload_root == hash_tree_root(builder_block_data.execution_payload) - assert intermediate_block_bid.sharded_data_commitment_count == intermediate_block_data.included_sharded_data_commitments + assert 
builder_block_bid.sharded_data_commitment_count == builder_block_data.sharded_commitments_container.included_sharded_data_commitments

-        assert intermediate_block_bid.sharded_data_commitment_root == hash_tree_root(intermediate_block_data.sharded_commitments[-intermediate_block_bid.included_sharded_data_commitments:])
+        assert builder_block_bid.sharded_data_commitment_root == hash_tree_root(builder_block_data.sharded_commitments_container.sharded_commitments[-builder_block_bid.sharded_data_commitment_count:])

-        assert intermediate_block_bid.validator_index == block.proposer_index
+        assert builder_block_bid.validator_index == block.proposer_index
     else:
         assert block.body.payload_data.selector == 0
-        intermediate_block_bid = block.body.payload_data.value.intermediate_block_bid
-        assert intermediate_block_bid.slot == block.slot
-        assert intermediate_block_bid.parent_block_root == block.parent_root
+        builder_block_bid = block.body.payload_data.value.builder_block_bid
+        assert builder_block_bid.slot == block.slot
+        assert builder_block_bid.parent_block_root == block.parent_root

     # We do not check that the builder address exists or has sufficient balance here.
     # If it does not have sufficient balance, the block proposer loses out, so it is their
@@ -321,7 +321,7 @@ def verify_intermediate_block_bid(state: BeaconState, block: BeaconBlock) -> Non ```python def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: - if is_intermediate_block_slot(block.slot): + if is_builder_block_slot(block.slot): assert block.body.payload_data.selector == 1 sharded_commitments_container = block.body.payload_data.value.sharded_commitments_container @@ -338,8 +338,8 @@ def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: # Verify that the 2*N commitments lie on a degree < N polynomial low_degree_check(sharded_commitments_container.sharded_commitments) - # Verify that blocks since the last intermediate block have been included - blocks_chunked = [bytes_to_field_elements(ssz_serialize(block)) for block in state.blocks_since_intermediate_block] + # Verify that blocks since the last builder block have been included + blocks_chunked = [bytes_to_field_elements(ssz_serialize(block)) for block in state.blocks_since_builder_block] block_vectors = [] field_elements_per_blob = SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE for block_chunked in blocks_chunked: @@ -365,7 +365,7 @@ def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: ```python def process_execution_payload(state: BeaconState, block: BeaconBlock, execution_engine: ExecutionEngine) -> None: - if is_intermediate_block_slot(block.slot): + if is_builder_block_slot(block.slot): assert block.body.payload_data.selector == 1 payload = block.body.payload_data.value.execution_payload @@ -381,19 +381,19 @@ def process_execution_payload(state: BeaconState, block: BeaconBlock, execution_ sharded_commitments_container = block.body.sharded_commitments_container sharded_data_commitments = sharded_commitments_container.sharded_commitments[-sharded_commitments_container.included_sharded_data_commitments:] - # Get all unprocessed intermediate block bids - unprocessed_intermediate_block_bid_with_recipient_addresses = [] - for block in 
state.blocks_since_intermediate_block[1:]: - unprocessed_intermediate_block_bid_with_recipient_addresses.append(block.body.intermediate_block_bid_with_recipient_address.value) + # Get all unprocessed builder block bids + unprocessed_builder_block_bid_with_recipient_addresses = [] + for block in state.blocks_since_builder_block[1:]: + unprocessed_builder_block_bid_with_recipient_addresses.append(block.body.builder_block_bid_with_recipient_address.value) # Verify the execution payload is valid # The execution engine gets two extra payloads: One for the sharded data commitments (these are needed to verify type 3 transactions) - # and one for all so far unprocessed intermediate block bids: + # and one for all so far unprocessed builder block bids: # * The execution engine needs to transfer the balance from the bidder to the proposer. # * The execution engine needs to deduct data gas fees from the bidder balances assert execution_engine.execute_payload(payload, sharded_data_commitments, - unprocessed_intermediate_block_bid_with_recipient_addresses) + unprocessed_builder_block_bid_with_recipient_addresses) # Cache execution payload header state.latest_execution_payload_header = ExecutionPayloadHeader( From 2178b5bc6dbb650498c165682480676c5c2d519c Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Sun, 13 Feb 2022 23:17:49 -0700 Subject: [PATCH 41/66] Some small omissions in intermediate -> builder --- specs/sharding/beacon-chain.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 20105a6437..decc6ef258 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -29,7 +29,7 @@ - [`SignedShardSample`](#signedshardsample) - [Extended Containers](#extended-containers) - [`BeaconState`](#beaconstate) - - [`IntermediatBlockData`](#intermediatblockdata) + - [`BuilderBlockData`](#builderblockdata) - [`BeaconBlockBody`](#beaconblockbody) - [Helper 
functions](#helper-functions) - [Block processing](#block-processing) @@ -40,7 +40,7 @@ - [Block processing](#block-processing-1) - [`process_block`](#process_block) - [Block header](#block-header) - - [Intermediate Block Bid](#builder-block-bid) + - [Builder Block Bid](#builder-block-bid) - [Sharded data](#sharded-data) - [Execution payload](#execution-payload) @@ -194,7 +194,7 @@ class BeaconState(bellatrix.BeaconState): blocks_since_builder_block: List[BeaconBlock, MAX_PROPOSER_BLOCKS_BETWEEN_BUILDER_BLOCKS] ``` -#### `IntermediatBlockData` +#### `BuilderBlockData` ```python class BuilderBlockData(Container): @@ -284,7 +284,7 @@ def process_block_header(state: BeaconState, block: BeaconBlock) -> None: assert not proposer.slashed ``` -#### Intermediate Block Bid +#### Builder Block Bid ```python def verify_builder_block_bid(state: BeaconState, block: BeaconBlock) -> None: From 5b4c9135171bda02b539c3956e7eac2e2263400a Mon Sep 17 00:00:00 2001 From: dankrad Date: Wed, 6 Jul 2022 18:13:13 +0100 Subject: [PATCH 42/66] Update specs/sharding/polynomial-commitments.md Co-authored-by: Hsiao-Wei Wang --- specs/sharding/polynomial-commitments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index bf7002f6fd..a742a2a0ed 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -236,7 +236,7 @@ def bytes_to_field_elements(block: bytes) -> List[BLSFieldElement]: ```python def add_polynomials(a: BLSPolynomialByCoefficients, b: BLSPolynomialByCoefficients) -> BLSPolynomialByCoefficients: """ - Sums the polynomials `a` and `b` given by their coefficients + Sum the polynomials ``a`` and ``b`` given by their coefficients. 
""" a, b = (a, b) if len(a >= b) else (b, a) return [(a[i] + (b[i] if i < len(b) else 0)) % BLS_MODULUS for i in range(len(a))] From 2f54db712c00aee03cedfaef641128002d9a5619 Mon Sep 17 00:00:00 2001 From: dankrad Date: Wed, 6 Jul 2022 18:13:44 +0100 Subject: [PATCH 43/66] Update specs/sharding/polynomial-commitments.md Co-authored-by: George Kadianakis --- specs/sharding/polynomial-commitments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index a742a2a0ed..4ae0ddce97 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -238,7 +238,7 @@ def add_polynomials(a: BLSPolynomialByCoefficients, b: BLSPolynomialByCoefficien """ Sum the polynomials ``a`` and ``b`` given by their coefficients. """ - a, b = (a, b) if len(a >= b) else (b, a) + a, b = (a, b) if len(a) >= len(b) else (b, a) return [(a[i] + (b[i] if i < len(b) else 0)) % BLS_MODULUS for i in range(len(a))] ``` From ccfca9b694d341cbf8795f72a177ef048a6926cc Mon Sep 17 00:00:00 2001 From: dankrad Date: Wed, 6 Jul 2022 18:17:01 +0100 Subject: [PATCH 44/66] Update specs/sharding/polynomial-commitments.md Co-authored-by: George Kadianakis --- specs/sharding/polynomial-commitments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index 4ae0ddce97..e113c1396e 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -139,7 +139,7 @@ def roots_of_unity(order: uint64) -> List[BLSFieldElement]: root_of_unity = pow(PRIMITIVE_ROOT_OF_UNITY, (BLS_MODULUS - 1) // order, BLS_MODULUS) current_root_of_unity = 1 - for i in range(len(SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE)): + for i in range(SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE): roots.append(current_root_of_unity) current_root_of_unity = current_root_of_unity * 
root_of_unity % BLS_MODULUS return roots From 6e97e5532fc9d8fd10e90148659ca159b2cf27e3 Mon Sep 17 00:00:00 2001 From: dankrad Date: Wed, 6 Jul 2022 18:19:36 +0100 Subject: [PATCH 45/66] Update specs/sharding/polynomial-commitments.md Co-authored-by: Hsiao-Wei Wang --- specs/sharding/polynomial-commitments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index e113c1396e..e2550fd3dd 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -209,7 +209,7 @@ def low_degree_check(commitments: List[KZGCommitment]): ```python def vector_lincomb(vectors: List[List[BLSFieldElement]], scalars: List[BLSFieldElement]) -> List[BLSFieldElement]: """ - Compute a linear combination of field element vectors + Compute a linear combination of field element vectors. """ r = [0 for i in len(vectors[0])] for v, a in zip(vectors, scalars): From ac3b91d5f1aca859d29c057779a0851fec5fa3d5 Mon Sep 17 00:00:00 2001 From: dankrad Date: Wed, 6 Jul 2022 18:22:38 +0100 Subject: [PATCH 46/66] Update specs/sharding/polynomial-commitments.md Co-authored-by: Hsiao-Wei Wang --- specs/sharding/polynomial-commitments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index e2550fd3dd..133d509470 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -87,7 +87,7 @@ def next_power_of_two(x: int) -> int: #### `reverse_bit_order` ```python -def reverse_bit_order(n, order): +def reverse_bit_order(n: int, order: int) -> int: """ Reverse the bit order of an integer n """ From f1d1305c7eb66896dd5b2f399cdb36858045483c Mon Sep 17 00:00:00 2001 From: dankrad Date: Wed, 6 Jul 2022 18:23:40 +0100 Subject: [PATCH 47/66] Update specs/sharding/polynomial-commitments.md Co-authored-by: Hsiao-Wei Wang --- 
 specs/sharding/polynomial-commitments.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md
index 133d509470..7ac4d9e71d 100644
--- a/specs/sharding/polynomial-commitments.md
+++ b/specs/sharding/polynomial-commitments.md
@@ -99,7 +99,7 @@ def reverse_bit_order(n: int, order: int) -> int:
 #### `list_to_reverse_bit_order`

 ```python
-def list_to_reverse_bit_order(l):
+def list_to_reverse_bit_order(l: List[int]) -> List[int]:
     """
     Convert a list between normal and reverse bit order. This operation is idempotent.
     """

From cd87bf26064f63075ad80bf11e8c1fb1e451ea47 Mon Sep 17 00:00:00 2001
From: dankrad
Date: Wed, 6 Jul 2022 18:24:23 +0100
Subject: [PATCH 48/66] Update specs/sharding/polynomial-commitments.md

Co-authored-by: Hsiao-Wei Wang

---
 specs/sharding/polynomial-commitments.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md
index 7ac4d9e71d..b777c6292c 100644
--- a/specs/sharding/polynomial-commitments.md
+++ b/specs/sharding/polynomial-commitments.md
@@ -300,8 +300,8 @@ def evaluate_polynomial_in_evaluation_form(poly: BLSPolynomialByEvaluations, x:
     r = 0
     inverses = [bls_modular_inverse(z - x) for z in roots]
     for i, inv in enumerate(inverses):
-        r += poly[i] * bls_modular_inverse(Aprime(roots[i])) * x % self.BLS_MODULUS
-        r = r * A(x) % self.BLS_MODULUS
+        r += poly[i] * bls_modular_inverse(Aprime(roots[i])) * inv % BLS_MODULUS
+    r = r * A(x) % BLS_MODULUS
     return r
 ```

From 96787031409a187cd4e5e3d203e64fc860df7079 Mon Sep 17 00:00:00 2001
From: dankrad
Date: Wed, 6 Jul 2022 18:24:57 +0100
Subject: [PATCH 49/66] Update specs/sharding/polynomial-commitments.md

Co-authored-by: Hsiao-Wei Wang

---
 specs/sharding/polynomial-commitments.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md
index b777c6292c..110a5a5884 100644
--- a/specs/sharding/polynomial-commitments.md
+++ b/specs/sharding/polynomial-commitments.md
@@ -272,7 +272,9 @@ def interpolate_polynomial(xs: List[BLSFieldElement], ys: List[BLSFieldElement])
     for j in range(len(ys)):
         if j != i:
             weight_adjustment = bls_modular_inverse(xs[j] - xs[i])
-            summand = multiply_polynomials(summand, [weight_adjustment, ((MODULUS - weight_adjustment) * xs[i])])
+            summand = multiply_polynomials(
+                summand, [weight_adjustment, ((BLS_MODULUS - weight_adjustment) * xs[i])]
+            )
     r = add_polynomials(r, summand)
 return r

From bc307a037fc73ad3948b43c519317a1e533f5c4d Mon Sep 17 00:00:00 2001
From: dankrad
Date: Wed, 6 Jul 2022 18:25:29 +0100
Subject: [PATCH 50/66] Update specs/sharding/polynomial-commitments.md

Co-authored-by: Hsiao-Wei Wang

---
 specs/sharding/polynomial-commitments.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md
index 110a5a5884..b98ed720d8 100644
--- a/specs/sharding/polynomial-commitments.md
+++ b/specs/sharding/polynomial-commitments.md
@@ -363,7 +363,7 @@ def verify_kzg_proof(commitment: KZGCommitment, x: BLSFieldElement, y: BLSFieldE
 ```python
 def verify_kzg_multiproof(commitment: KZGCommitment, xs: List[BLSFieldElement], ys: List[BLSFieldElement], proof: KZGCommitment) -> None:
     """
-    Verifies a KZG multiproof.
+    Verify a KZG multiproof.
""" zero_poly = elliptic_curve_lincomb(G2_SETUP[:len(xs)], interpolate_polynomial(xs, [0] * len(ys))) interpolated_poly = elliptic_curve_lincomb(G2_SETUP[:len(xs)], interpolate_polynomial(xs, ys)) From e5eefe3b0825f932f80383a259c3d4c9c210fb1d Mon Sep 17 00:00:00 2001 From: dankrad Date: Wed, 6 Jul 2022 18:26:10 +0100 Subject: [PATCH 51/66] Update specs/sharding/polynomial-commitments.md Co-authored-by: Hsiao-Wei Wang --- specs/sharding/polynomial-commitments.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index b98ed720d8..9b4ed2e782 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -361,7 +361,10 @@ def verify_kzg_proof(commitment: KZGCommitment, x: BLSFieldElement, y: BLSFieldE #### `verify_kzg_multiproof` ```python -def verify_kzg_multiproof(commitment: KZGCommitment, xs: List[BLSFieldElement], ys: List[BLSFieldElement], proof: KZGCommitment) -> None: +def verify_kzg_multiproof(commitment: KZGCommitment, + xs: List[BLSFieldElement], + ys: List[BLSFieldElement], + proof: KZGCommitment) -> None: """ Verify a KZG multiproof. 
""" From 96f85999923fe47ca56341d8ef7f355cf1aaa250 Mon Sep 17 00:00:00 2001 From: dankrad Date: Wed, 6 Jul 2022 18:26:47 +0100 Subject: [PATCH 52/66] Update specs/sharding/polynomial-commitments.md Co-authored-by: Hsiao-Wei Wang --- specs/sharding/polynomial-commitments.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index 9b4ed2e782..68c7fabde2 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -348,7 +348,8 @@ def hash_to_bls_field(x: Container, challenge_number: uint64) -> BLSFieldElement ```python def verify_kzg_proof(commitment: KZGCommitment, x: BLSFieldElement, y: BLSFieldElement, proof: KZGCommitment) -> None: """ - Checks that `proof` is a valid KZG proof for the polynomial committed to by `commitment` evaluated at `x` equals `y` + Check that `proof` is a valid KZG proof for the polynomial committed to by `commitment` evaluated + at `x` equals `y`. """ zero_poly = G2_SETUP[1].add(G2_SETUP[0].mult(x).neg()) From f03bb2cdebdd96e794a624c88484f60f19f8b65c Mon Sep 17 00:00:00 2001 From: dankrad Date: Wed, 6 Jul 2022 18:27:52 +0100 Subject: [PATCH 53/66] Update specs/sharding/polynomial-commitments.md Co-authored-by: Hsiao-Wei Wang --- specs/sharding/polynomial-commitments.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index 68c7fabde2..fe6f8bb321 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -318,7 +318,8 @@ We are using the KZG10 polynomial commitment scheme (Kate, Zaverucha and Goldber ```python def elliptic_curve_lincomb(points: List[KZGCommitment], scalars: List[BLSFieldElement]) -> KZGCommitment: """ - BLS multiscalar multiplication. This function can be optimized using Pippenger's algorithm and variants. 
This is a non-optimized implementation. + BLS multiscalar multiplication. This function can be optimized using Pippenger's algorithm and variants. + This is a non-optimized implementation. """ r = bls.Z1() for x, a in zip(points, scalars): From b236e36200664b44e47543562304b110441a6bcc Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Thu, 7 Jul 2022 12:22:53 +0100 Subject: [PATCH 54/66] Z1 -> inf_G1 --- specs/sharding/polynomial-commitments.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index fe6f8bb321..fd9b588122 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -201,7 +201,7 @@ def low_degree_check(commitments: List[KZGCommitment]): for i in range(d + 1): coefs[i] = (coefs[i] + B(r) * bls_modular_inverse(Bprime(r) * (r - roots[i]))) % BLS_MODULUS - assert elliptic_curve_lincomb(commitments, coefs) == bls.Z1() + assert elliptic_curve_lincomb(commitments, coefs) == bls.inf_G1() ``` #### `vector_lincomb` @@ -321,7 +321,7 @@ def elliptic_curve_lincomb(points: List[KZGCommitment], scalars: List[BLSFieldEl BLS multiscalar multiplication. This function can be optimized using Pippenger's algorithm and variants. This is a non-optimized implementation. 
""" - r = bls.Z1() + r = bls.inf_G1() for x, a in zip(points, scalars): r = r.add(x.mult(a)) return r From 70997256c321a985177e50859001af05bd3161d4 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Thu, 7 Jul 2022 16:42:51 +0100 Subject: [PATCH 55/66] Fix degree proof bound --- specs/sharding/beacon-chain.md | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index decc6ef258..a0b6ad314c 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -333,7 +333,9 @@ def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: r_powers = compute_powers(r, len(sharded_commitments_container.sharded_commitments)) combined_commitment = elliptic_curve_lincomb(sharded_commitments_container.sharded_commitments, r_powers) - verify_degree_proof(combined_commitment, SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE, sharded_commitments_container.degree_proof) + payload_field_elements_per_blob = SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE // 2 + + verify_degree_proof(combined_commitment, payload_field_elements_per_blob, sharded_commitments_container.degree_proof) # Verify that the 2*N commitments lie on a degree < N polynomial low_degree_check(sharded_commitments_container.sharded_commitments) @@ -341,11 +343,11 @@ def process_sharded_data(state: BeaconState, block: BeaconBlock) -> None: # Verify that blocks since the last builder block have been included blocks_chunked = [bytes_to_field_elements(ssz_serialize(block)) for block in state.blocks_since_builder_block] block_vectors = [] - field_elements_per_blob = SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE + for block_chunked in blocks_chunked: - for i in range(0, len(block_chunked), field_elements_per_blob): - block_vectors.append(block_chunked[i:i + field_elements_per_blob]) - + for i in range(0, len(block_chunked), payload_field_elements_per_blob): + block_vectors.append(block_chunked[i:i + 
payload_field_elements_per_blob]) + number_of_blobs = len(block_vectors) r = hash_to_bls_field(sharded_commitments_container.sharded_commitments[:number_of_blobs], 0) x = hash_to_bls_field(sharded_commitments_container.sharded_commitments[:number_of_blobs], 1) From 8dfaf4c8b67aca8c6ac079eba21f79536f906df0 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Thu, 7 Jul 2022 16:55:49 +0100 Subject: [PATCH 56/66] SignedShardSample -> ShardSample --- specs/sharding/beacon-chain.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index a0b6ad314c..3c6cd94bae 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -26,7 +26,7 @@ - [`BuilderBlockBid`](#BuilderBlockbid) - [`BuilderBlockBidWithRecipientAddress`](#BuilderBlockbidwithrecipientaddress) - [`ShardedCommitmentsContainer`](#shardedcommitmentscontainer) - - [`SignedShardSample`](#signedshardsample) + - [`ShardSample`](#ShardSample) - [Extended Containers](#extended-containers) - [`BeaconState`](#beaconstate) - [`BuilderBlockData`](#builderblockdata) @@ -172,10 +172,10 @@ class ShardedCommitmentsContainer(Container): block_verification_kzg_proof: KZGCommitment ``` -#### `SignedShardSample` +#### `ShardSample` ```python -class SignedShardSample(Container): +class ShardSample(Container): slot: Slot row: uint64 column: uint64 From dc91ef3d3cbd8475cb9edf7a313fd7bfa22fa215 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Thu, 7 Jul 2022 17:01:15 +0100 Subject: [PATCH 57/66] Typo --- specs/sharding/polynomial-commitments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index fd9b588122..10b957f282 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -46,7 +46,7 @@ ## Introduction -This document specifies basic polynomial operations and KZG polynomial 
commitment operations as they are needed for the sharding specification. The implementations are not optimized for performance, but readability. All practical implementations shoul optimize the polynomial operations, and hints what the best known algorithms for these implementations are are included below. +This document specifies basic polynomial operations and KZG polynomial commitment operations as they are needed for the sharding specification. The implementations are not optimized for performance, but readability. All practical implementations should optimize the polynomial operations, and hints what the best known algorithms for these implementations are are included below. ## Constants From 9268a4d0d639daa75838438cd67474194ec9b06e Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Thu, 7 Jul 2022 17:03:03 +0100 Subject: [PATCH 58/66] Don't allow 0 for `next_power_of_two` --- specs/sharding/polynomial-commitments.md | 1 + 1 file changed, 1 insertion(+) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index 10b957f282..b061c31dc2 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -81,6 +81,7 @@ We define the following Python custom types for type hinting and readability: ```python def next_power_of_two(x: int) -> int: + assert x > 0 return 2 ** ((x - 1).bit_length()) ``` From ec68256c513153884fb5c1949cdbd5cbc7f729b4 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Thu, 7 Jul 2022 17:03:59 +0100 Subject: [PATCH 59/66] Remove unused `ROOT_OF_UNITY` constant --- specs/sharding/beacon-chain.md | 6 ------ 1 file changed, 6 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 3c6cd94bae..6264a17e59 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -100,12 +100,6 @@ With the introduction of builder blocks the number of slots per epoch is doubled | - | - | - | | `SAMPLES_PER_BLOB` | 
`uint64(2**9)` (= 512) | 248 * 512 = 126,976 bytes | -### Precomputed root of unity - -| Name | Value | Notes | -| - | - | - | -| `ROOT_OF_UNITY` | `pow(PRIMITIVE_ROOT_OF_UNITY, (BLS_MODULUS - 1) // int(SAMPLES_PER_BLOB * FIELD_ELEMENTS_PER_SAMPLE), BLS_MODULUS)` | - ## Configuration Note: Some preset variables may become run-time configurable for testnets, but default to a preset while the spec is unstable. From d172dcf1851d03ed0b4150be56161fbd7e093024 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Thu, 7 Jul 2022 17:06:06 +0100 Subject: [PATCH 60/66] Throwaway variable name --- specs/sharding/polynomial-commitments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index b061c31dc2..2e4aa21084 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -154,7 +154,7 @@ def roots_of_unity(order: uint64) -> List[BLSFieldElement]: def compute_powers(x: BLSFieldElement, n: uint64) -> List[BLSFieldElement]: current_power = 1 powers = [] - for i in range(n): + for _ in range(n): powers.append(BLSFieldElement(current_power)) current_power = current_power * int(x) % BLS_MODULUS return powers From 062443145c6a6e063e3d42ef737a301365467688 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Thu, 7 Jul 2022 17:09:17 +0100 Subject: [PATCH 61/66] Fix function documentation of `bls_modular_inverse` --- specs/sharding/polynomial-commitments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md index 2e4aa21084..0e2bd62c7d 100644 --- a/specs/sharding/polynomial-commitments.md +++ b/specs/sharding/polynomial-commitments.md @@ -116,7 +116,7 @@ def list_to_reverse_bit_order(l: List[int]) -> List[int]: ```python def bls_modular_inverse(x: BLSFieldElement) -> BLSFieldElement: """ - Compute the modular inverse of x, i.e. 
y such that x * y % BLS_MODULUS == 1 and return 0 for x == 0 + Compute the modular inverse of x, i.e. y such that x * y % BLS_MODULUS == 1 and return 1 for x == 0 """ lm, hm = 1, 0 low, high = x % BLS_MODULUS, BLS_MODULUS From 2a8709a8cbe4ae8c3366238cf976b1b78e627b36 Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Thu, 7 Jul 2022 22:03:20 +0100 Subject: [PATCH 62/66] Builder block bid increase by at least 1%, no RANDAO processing in builder blocks --- specs/sharding/beacon-chain.md | 11 +++++++++-- specs/sharding/p2p-interface.md | 14 ++++++++++++++ 2 files changed, 23 insertions(+), 2 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 6264a17e59..4b3ae4631c 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -133,7 +133,7 @@ class BuilderBlockBid(Container): validator_index: ValidatorIndex # Validator index for this bid # Block builders use an Eth1 address -- need signature as - # block builder fees and data gas base fees will be charged to this address + # block bid and data gas base fees will be charged to this address signature_y_parity: bool signature_r: uint256 signature_s: uint256 @@ -241,7 +241,9 @@ def process_block(state: BeaconState, block: BeaconBlock) -> None: if is_execution_enabled(state, block.body): process_execution_payload(state, block, EXECUTION_ENGINE) - process_randao(state, block.body) + if not is_builder_block_slot(block.slot): + process_randao(state, block.body) + process_eth1_data(state, block.body) process_operations(state, block.body) process_sync_aggregate(state, block.body.sync_aggregate) @@ -309,6 +311,11 @@ def verify_builder_block_bid(state: BeaconState, block: BeaconBlock) -> None: # We do not check that the builder address exists or has sufficient balance here. # If it does not have sufficient balance, the block proposer loses out, so it is their # responsibility to check. + + # Check that the builder is a slashable validator. 
We can probably reduce this requirement and only + # ensure that they have 1 ETH in their account as a DOS protection. + builder = state.validators[builder_block_bid.validator_index] + assert is_slashable_validator(builder, get_current_epoch(state)) ``` #### Sharded data diff --git a/specs/sharding/p2p-interface.md b/specs/sharding/p2p-interface.md index 56a13db648..c9574ca6ce 100644 --- a/specs/sharding/p2p-interface.md +++ b/specs/sharding/p2p-interface.md @@ -44,9 +44,23 @@ Following the same scheme as the [Phase0 gossip topics](../phase0/p2p-interface. |---------------------------------|--------------------------| | `shard_row_{subnet_id}` | `SignedShardSample` | | `shard_column_{subnet_id}` | `SignedShardSample` | +| `builder_block_bid` | `BuilderBlockBid` | The [DAS network specification](./das-p2p.md) defines additional topics. +#### Builder block bid + +##### `builder_block_bid` + +- _[IGNORE]_ The `bid` is published 1 slot early or later (with a `MAXIMUM_GOSSIP_CLOCK_DISPARITY` allowance) -- + i.e. validate that `bid.slot <= current_slot + 1` + (a client MAY queue future samples for propagation at the appropriate slot). +- _[IGNORE]_ The `bid` is for the current or next block + i.e. validate that `bid.slot >= current_slot` +- _[IGNORE]_ The `bid` is the first `bid` valid bid for `bid.slot`, or the bid is at least 1% higher than the previous known `bid` +- _[REJECT]_ The validator defined by `bid.validator_index` exists and is slashable. +- _[REJECT]_ The bid signature, which is an Eth1 signature, needs to be valid and the address needs to contain enough Ether to cover the bid and the data gas base fee. + #### Shard sample subnets Shard sample (row/column) subnets are used by builders to make their samples available as part of their intermediate block release after selection by beacon block proposers. 
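The `builder_block_bid` gossip acceptance rules introduced in [PATCH 62/66] above can be sketched in plain Python. This is an illustrative sketch only: `Bid`, `should_forward_bid`, the in-memory `best_bids` cache, and the integer `amount` field are hypothetical names introduced here, not part of the spec. The "at least 1% higher" comparison is done in integer arithmetic to avoid floating point:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Bid:
    slot: int
    validator_index: int
    amount: int  # bid value; the unit is illustrative

def should_forward_bid(bid: Bid, current_slot: int, best_bids: Dict[int, Bid]) -> bool:
    # [IGNORE] the bid is not from a slot more than 1 ahead of the current slot
    if bid.slot > current_slot + 1:
        return False
    # [IGNORE] the bid is for the current or next slot
    if bid.slot < current_slot:
        return False
    # [IGNORE] first valid bid for the slot, or at least 1% higher than the
    # best known bid (integer form of: bid.amount >= best.amount * 1.01)
    best = best_bids.get(bid.slot)
    if best is not None and bid.amount * 100 < best.amount * 101:
        return False
    best_bids[bid.slot] = bid
    return True
```

The [REJECT] conditions (Eth1 signature validity, sufficient balance) are omitted here since they need BLS/Eth1 context, and `MAXIMUM_GOSSIP_CLOCK_DISPARITY` handling is likewise left out for brevity.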
From 8da1320de39c6d5b7ce1ae574522641138f37a6b Mon Sep 17 00:00:00 2001 From: Dankrad Feist Date: Thu, 7 Jul 2022 22:19:29 +0100 Subject: [PATCH 63/66] Fix tocs --- specs/sharding/beacon-chain.md | 8 +++----- specs/sharding/p2p-interface.md | 3 ++- 2 files changed, 5 insertions(+), 6 deletions(-) diff --git a/specs/sharding/beacon-chain.md b/specs/sharding/beacon-chain.md index 4b3ae4631c..7d6df51aa8 100644 --- a/specs/sharding/beacon-chain.md +++ b/specs/sharding/beacon-chain.md @@ -8,7 +8,6 @@ - - [Introduction](#introduction) - [Glossary](#glossary) - [Constants](#constants) @@ -18,15 +17,14 @@ - [Misc](#misc-1) - [Time parameters](#time-parameters) - [Shard blob samples](#shard-blob-samples) - - [Precomputed root of unity](#precomputed-root-of-unity) - [Configuration](#configuration) - [Time parameters](#time-parameters-1) - [Containers](#containers) - [New Containers](#new-containers) - - [`BuilderBlockBid`](#BuilderBlockbid) - - [`BuilderBlockBidWithRecipientAddress`](#BuilderBlockbidwithrecipientaddress) + - [`BuilderBlockBid`](#builderblockbid) + - [`BuilderBlockBidWithRecipientAddress`](#builderblockbidwithrecipientaddress) - [`ShardedCommitmentsContainer`](#shardedcommitmentscontainer) - - [`ShardSample`](#ShardSample) + - [`ShardSample`](#shardsample) - [Extended Containers](#extended-containers) - [`BeaconState`](#beaconstate) - [`BuilderBlockData`](#builderblockdata) diff --git a/specs/sharding/p2p-interface.md b/specs/sharding/p2p-interface.md index c9574ca6ce..d0cafc45bf 100644 --- a/specs/sharding/p2p-interface.md +++ b/specs/sharding/p2p-interface.md @@ -8,12 +8,13 @@ - - [Introduction](#introduction) - [Constants](#constants) - [Misc](#misc) - [Gossip domain](#gossip-domain) - [Topics and messages](#topics-and-messages) + - [Builder block bid](#builder-block-bid) + - [`builder_block_bid`](#builder_block_bid) - [Shard sample subnets](#shard-sample-subnets) - [`shard_row_{subnet_id}`](#shard_row_subnet_id) From 
9008bfcb18f86672a12d964793eca7055dabe9c5 Mon Sep 17 00:00:00 2001
From: Dankrad Feist
Date: Thu, 7 Jul 2022 22:24:24 +0100
Subject: [PATCH 64/66] Fix tocs

---
 specs/sharding/polynomial-commitments.md | 3 ++-
 specs/sharding/validator.md | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md
index 0e2bd62c7d..9144b08e43 100644
--- a/specs/sharding/polynomial-commitments.md
+++ b/specs/sharding/polynomial-commitments.md
@@ -8,7 +8,6 @@
 
-
 - [Introduction](#introduction)
 - [Constants](#constants)
   - [BLS Field](#bls-field)
@@ -16,6 +15,8 @@
 - [Custom types](#custom-types)
 - [Helper functions](#helper-functions)
   - [`next_power_of_two`](#next_power_of_two)
+  - [`reverse_bit_order`](#reverse_bit_order)
+  - [`list_to_reverse_bit_order`](#list_to_reverse_bit_order)
 - [Field operations](#field-operations)
   - [Generic field operations](#generic-field-operations)
     - [`bls_modular_inverse`](#bls_modular_inverse)
diff --git a/specs/sharding/validator.md b/specs/sharding/validator.md
index 2572f8b9c0..38914095f4 100644
--- a/specs/sharding/validator.md
+++ b/specs/sharding/validator.md
@@ -8,7 +8,6 @@
 
-
 - [Introduction](#introduction)
 - [Prerequisites](#prerequisites)
 - [Constants](#constants)
@@ -23,6 +22,7 @@
 - [Validator assignments](#validator-assignments)
   - [Attesting](#attesting)
 - [Sample reconstruction](#sample-reconstruction)
+  - [Minimum online validator requirement](#minimum-online-validator-requirement)

From 302228cfc149cc538b8c67bb484ac23b43c38e16 Mon Sep 17 00:00:00 2001
From: Dankrad Feist
Date: Thu, 7 Jul 2022 22:25:29 +0100
Subject: [PATCH 65/66] Fix typo

---
 specs/sharding/p2p-interface.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/specs/sharding/p2p-interface.md b/specs/sharding/p2p-interface.md
index d0cafc45bf..3b627a3398 100644
--- a/specs/sharding/p2p-interface.md
+++ b/specs/sharding/p2p-interface.md
@@ -83,7 +83,7 @@ The following validations MUST pass before forwarding the `sample`.
 - _[REJECT]_ The `sample.data` MUST NOT contain any point `x >= BLS_MODULUS`. Although it is a `uint256`, not the full 256 bit range is valid.
 - _[REJECT]_ The validator defined by `sample.builder` exists and is slashable.
 - _[REJECT]_ The sample is proposed by the expected `builder` for the sample's `slot`.
-  i.e., the beacon block at `sample.slot - 1` according to the node's fork choise contains an `IntermediateBlockBid`
+  i.e., the beacon block at `sample.slot - 1` according to the node's fork choice contains an `IntermediateBlockBid`
   with `intermediate_block_bid.validator_index == sample.builder`
 - _[REJECT]_ The sample signature, `sample.signature`, is valid for the builder --
   i.e. `bls.Verify(builder_pubkey, sample_signing_root, sample.signature)` OR `sample.signature == Bytes96(b"\0" * 96)` AND

From f62571a82e29699f17d508b71f00cb34a304a33c Mon Sep 17 00:00:00 2001
From: dankrad
Date: Thu, 7 Jul 2022 22:31:15 +0100
Subject: [PATCH 66/66] Update specs/sharding/polynomial-commitments.md

Co-authored-by: George Kadianakis
---
 specs/sharding/polynomial-commitments.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/specs/sharding/polynomial-commitments.md b/specs/sharding/polynomial-commitments.md
index 9144b08e43..e2a4285caa 100644
--- a/specs/sharding/polynomial-commitments.md
+++ b/specs/sharding/polynomial-commitments.md
@@ -213,7 +213,7 @@ def vector_lincomb(vectors: List[List[BLSFieldElement]], scalars: List[BLSFieldE
     """
     Compute a linear combination of field element vectors.
    """
-    r = [0 for i in len(vectors[0])]
+    r = [0]*len(vectors[0])
     for v, a in zip(vectors, scalars):
         for i, x in enumerate(v):
             r[i] = (r[i] + a * x) % BLS_MODULUS
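[PATCH 66/66] above replaces the accumulator initialization in `vector_lincomb`: the old `[0 for i in len(vectors[0])]` raises a `TypeError` (an `int` is not iterable), while `[0] * len(vectors[0])` builds the zero vector directly. A minimal self-contained sketch of the fixed function — using a small illustrative prime in place of the real `BLS_MODULUS`, an assumption made purely for readability:

```python
from typing import List

MODULUS = 97  # small illustrative prime; the spec uses the 255-bit BLS_MODULUS

def vector_lincomb(vectors: List[List[int]], scalars: List[int]) -> List[int]:
    # Fixed initialization: a zero accumulator with one entry per coordinate
    r = [0] * len(vectors[0])
    for v, a in zip(vectors, scalars):
        for i, x in enumerate(v):
            r[i] = (r[i] + a * x) % MODULUS
    return r

# 2 * [1, 2, 3] + 3 * [4, 5, 6] = [14, 19, 24] (mod 97)
assert vector_lincomb([[1, 2, 3], [4, 5, 6]], [2, 3]) == [14, 19, 24]
```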
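The docstring fix in [PATCH 61/66] ("return 1 for x == 0") can be checked against a stand-alone version of the extended-Euclid loop. Only the first two assignments of the body appear in the diff context above, so the rest of the loop is reconstructed here along standard lines, and `modulus` is made an explicit parameter instead of the spec's global `BLS_MODULUS` — treat this as an illustrative sketch:

```python
def modular_inverse(x: int, modulus: int) -> int:
    # Extended Euclidean algorithm. For x == 0, `low` starts at 0, the loop
    # body never runs, and lm % modulus == 1 is returned -- matching the
    # corrected docstring.
    lm, hm = 1, 0
    low, high = x % modulus, modulus
    while low > 1:
        r = high // low
        nm, new = hm - lm * r, high - low * r
        lm, low, hm, high = nm, new, lm, low
    return lm % modulus

assert modular_inverse(0, 97) == 1
assert (3 * modular_inverse(3, 97)) % 97 == 1
```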