+ Visual breakdown of how the L2 slot is divided into phases. With pipelining, attestation collection and L1 submission extend into the next slot (shown as overflow), and a second diagram shows the cascading dependency between consecutive proposers.
+
+
+
+
Transaction Lifecycle
This tool models the user-perceived latency from sending a transaction to seeing its effects in the proposed
@@ -337,8 +384,9 @@ Transaction Lifecycle
L2 slot. The TX must wait until the next block building window starts, since the proposer snapshots the TX pool at
the beginning of each block.
4. Block execution. The proposer executes the transactions in the block. The actual execution
- time depends on block fill (how many and how complex the transactions are). Once done, the block proposal is
- broadcast to the network without waiting for the full block window to elapse.
+ time depends on block fill (how many and how complex the transactions are). For non-last blocks, the block
+ proposal is broadcast immediately. The last block is bundled with the checkpoint proposal, which is broadcast
+ after checkpoint assembly (adding a small delay).
5. P2P propagation back. The block proposal propagates back through the P2P network to the
user's node (another one-way propagation delay).
6. Node re-execution. The user's node re-executes the block to update its local world state
@@ -346,6 +394,20 @@ Transaction Lifecycle
proposed chain.
7. Slot wrap. If the TX arrives too late in the slot and misses all block building windows, it
must wait for the next slot's proposer to pick it up, adding up to one full slot duration of extra latency.
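The seven steps above compose additively into the user-perceived latency. A minimal sketch, using purely illustrative timings (none of these numbers are protocol constants):

```javascript
// Sums the lifecycle delays described above. All parameter values used
// below are hypothetical, chosen only to make the arithmetic concrete.
function sketchLatency({ p2pHop, blockWait, execTime, reExecTime }) {
  // steps 2+3: propagate to the proposer, wait for the next block window
  // step 4: block execution; step 5: propagate back; step 6: local re-execution
  return p2pHop + blockWait + execTime + p2pHop + reExecTime;
}

// A TX sent 2s before its block window opens, with 1s one-way P2P hops:
console.log(sketchLatency({ p2pHop: 1, blockWait: 2, execTime: 3, reExecTime: 3 })); // 10
```

Note that the P2P hop is paid twice (to the proposer and back), which is why propagation time has an outsized effect on the total.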
+
+ Pipelining. When proposer pipelining is enabled, checkpoint finalization (attestation collection and
+ L1 publishing) is deferred to the next slot. Only checkpoint assembly and one-way proposal broadcast need
+ to complete within the build slot. This frees up time for more blocks, significantly shrinking the dead zone and
+ reducing latency. Validators re-execute the last block and return attestations during a grace period that extends
+ into the target slot.
+
+ Cascading pipelining. With pipelining, the next proposer (for slot N+1) also builds
+ ahead. Non-last blocks from slot N are broadcast individually via P2P, so the next proposer's node syncs and
+ re-executes them in real time. Only the last block — bundled with the checkpoint proposal — arrives later.
+ Critically, the next proposer starts re-executing the last block as soon as the checkpoint arrives,
+ even before its slot officially begins. This overlaps re-execution with the gap before the slot boundary.
+ The effective init is: max(0, checkpointArrival + lastBlockReExecTime - slotDuration).
+ If the checkpoint arrives early enough that re-execution completes before the slot starts, effective init is 0.
+ The "Cascading (early start)" overlay shows what happens if the next proposer begins building blocks immediately
+ when ready, even before the slot boundary — further reducing the dead zone spike.
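The effective-init formula above is small enough to check by hand. A sketch with hypothetical timings (36s slot, 2s last-block re-execution; these are assumptions, not protocol values):

```javascript
// Effective init for the next proposer, as described above: the delay from
// the slot boundary until building can start. Inputs are hypothetical seconds.
function effectiveInit(checkpointArrival, lastBlockReExecTime, slotDuration) {
  return Math.max(0, checkpointArrival + lastBlockReExecTime - slotDuration);
}

console.log(effectiveInit(35, 2, 36)); // 1 (ready 1s into the next slot)
console.log(effectiveInit(32, 2, 36)); // 0 (re-execution finishes before the boundary)
```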
Note that this models "proposed chain" visibility -- the TX effects are visible locally before checkpoint
confirmation on L1 or epoch proving.
config = validateConfig(config);
const initializationOffset = config.checkpointInitializationTime;
+ // Resolve effective last block duration: 0 or >= standard block duration means "same as standard"
+ const lastBlockDur = config.lastBlockDuration > 0 && config.lastBlockDuration < config.l2BlockDuration
+ ? config.lastBlockDuration
+ : config.l2BlockDuration;
+
const checkpointFinalizationTime =
config.checkpointAssembleTime + 2 * config.p2pPropagationTime + config.l1PublishingTime;
- const timeReservedAtEnd = config.l2BlockDuration + checkpointFinalizationTime;
+ // When pipelining, checkpoint finalization (attestation collection + L1 publishing) is
+ // deferred to the next slot. Only assembly + one-way broadcast need to fit in the build slot.
+ // Without pipelining, we also reserve a full block duration for validator re-execution
+ // plus the entire checkpoint finalization time.
+ let timeReservedAtEnd;
+ if (config.pipelining) {
+ timeReservedAtEnd = config.checkpointAssembleTime + config.p2pPropagationTime;
+ } else {
+ timeReservedAtEnd = config.l2BlockDuration + checkpointFinalizationTime;
+ }
+
const timeAvailableForBlocks = config.l2SlotDuration - initializationOffset - timeReservedAtEnd;
const maxBlocks = Math.max(1, Math.floor(timeAvailableForBlocks / config.l2BlockDuration)) || 1;
+ // Build block windows: last block uses lastBlockDur instead of l2BlockDuration
const blockWindows = [];
for (let i = 0; i < maxBlocks; i++) {
- blockWindows.push({
- start: initializationOffset + i * config.l2BlockDuration,
- end: initializationOffset + (i + 1) * config.l2BlockDuration,
- });
+ const dur = (i === maxBlocks - 1) ? lastBlockDur : config.l2BlockDuration;
+ const start = initializationOffset + i * config.l2BlockDuration;
+ blockWindows.push({ start, end: start + dur });
}
+ // When pipelining, attestation collection extends into the target slot.
+ // Grace period = validator re-execution (last block duration) + one-way attestation return.
+ const pipeliningAttestationGracePeriod = config.pipelining
+ ? lastBlockDur + config.p2pPropagationTime
+ : 0;
+
+ // When pipelining, compute the effective init offset for the NEXT proposer.
+ // Non-last blocks are broadcast individually, so the next proposer's node syncs
+ // them as they arrive. Only the last block (bundled with the checkpoint) needs
+ // re-execution after the checkpoint arrives.
+ //
+ // The next proposer starts re-executing the last block as soon as the checkpoint
+ // arrives — even if that's before the slot boundary. Re-execution overlaps with
+ // the remaining time before the slot officially starts.
+ //
+ // pipeliningEffectiveInit: delay from slot boundary until the next proposer can
+ // start building. If checkpoint processing finishes before the slot starts, this is 0.
+ //
+ // pipeliningEarlyStartInit: if the next proposer could start building blocks as
+ // soon as checkpoint processing completes (before the slot boundary), this is the
+ // offset from when it starts building. Can be negative (meaning it starts early).
+ let pipeliningEffectiveInit = undefined;
+ let pipeliningEarlyStartInit = undefined;
+ if (config.pipelining) {
+ // Checkpoint broadcast: init offset + (maxBlocks-1) standard blocks + the (possibly shorter) last block + assembly
+ const checkpointBroadcastTime = initializationOffset
+ + (maxBlocks - 1) * config.l2BlockDuration + lastBlockDur
+ + config.checkpointAssembleTime;
+ const checkpointArrival = checkpointBroadcastTime + config.p2pPropagationTime;
+ const arrivalRelativeToNextBuild = checkpointArrival - config.l2SlotDuration;
+ const lastBlockReExecTime = lastBlockDur * config.l2BlockFillPercent / 100;
+
+ // Re-execution starts at checkpoint arrival, finishes lastBlockReExecTime later.
+ // The delay relative to slot boundary is (arrival + reExec - slotBoundary).
+ // If negative, the next proposer is ready before the slot starts → effectiveInit = 0.
+ pipeliningEffectiveInit = Math.max(0, arrivalRelativeToNextBuild + lastBlockReExecTime);
+
+ // Early start: the next proposer begins building as soon as it finishes
+ // processing the checkpoint, even if that's before the slot boundary.
+ // This value is the readiness offset relative to the slot boundary.
+ // When positive, it means a delay after the slot boundary (same as effectiveInit).
+ // When zero or negative, the proposer starts building early.
+ pipeliningEarlyStartInit = arrivalRelativeToNextBuild + lastBlockReExecTime;
+ }
+
+ // Dead zone excess: how much extra peak latency a wrapping TX experiences
+ // compared to a normal block cycle. When <= 0, there's no meaningful dead zone spike.
+ const wrappedInit = (config.pipelining && pipeliningEarlyStartInit !== undefined)
+ ? pipeliningEarlyStartInit
+ : initializationOffset;
+ const deadZoneExcess = config.l2SlotDuration + wrappedInit - initializationOffset - maxBlocks * config.l2BlockDuration;
+
return {
initializationOffset,
checkpointFinalizationTime,
timeAvailableForBlocks,
+ timeReservedAtEnd,
maxBlocks,
blockWindows,
+ lastBlockDur,
+ deadZoneExcess,
+ pipelining: config.pipelining,
+ pipeliningAttestationGracePeriod,
+ pipeliningEffectiveInit,
+ pipeliningEarlyStartInit,
};
}
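As a sanity check of the reservation logic in buildTimetable, here is a self-contained sketch with made-up timings (36s slot, 6s blocks, 1s init, 2s assembly, 1s P2P, 4s L1 publish; all assumptions):

```javascript
// Recomputes timeReservedAtEnd and maxBlocks the same way as above,
// for one hypothetical config. Every number here is illustrative.
const cfg = { l2SlotDuration: 36, l2BlockDuration: 6, init: 1, assembleTime: 2, p2pTime: 1, l1PublishTime: 4 };

// Without pipelining: reserve a block duration for validator re-execution
// plus full checkpoint finalization (assembly + 2 P2P hops + L1 publish).
const reservedNoPipe = cfg.l2BlockDuration + cfg.assembleTime + 2 * cfg.p2pTime + cfg.l1PublishTime; // 14
// With pipelining: only assembly + one-way broadcast must fit in the slot.
const reservedPipe = cfg.assembleTime + cfg.p2pTime; // 3

const maxBlocks = (reserved) =>
  Math.max(1, Math.floor((cfg.l2SlotDuration - cfg.init - reserved) / cfg.l2BlockDuration));

console.log(maxBlocks(reservedNoPipe), maxBlocks(reservedPipe)); // 3 5
```

With these illustrative numbers, pipelining shrinks the end-of-slot reservation from 14s to 3s, which is what turns 3 blocks per slot into 5.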
/**
* Computes the user-perceived latency for a single transaction sent at a given
* time within an L2 slot.
+ *
+ * Non-last blocks are broadcast individually right after execution.
+ * The last block is bundled with the checkpoint proposal, so it incurs an extra
+ * checkpointAssembleTime delay before its proposal is broadcast.
*/
function computeLatency(config, timetable, txSendTime) {
const arrivalAtProposer = txSendTime + config.p2pPropagationTime;
const executionTime = config.l2BlockDuration * config.l2BlockFillPercent / 100;
+ const lastBlockExecTime = timetable.lastBlockDur * config.l2BlockFillPercent / 100;
let blockIndex;
if (arrivalAtProposer <= timetable.initializationOffset) {
@@ -464,22 +611,55 @@ Transaction Lifecycle
blockIndex = Math.ceil((arrivalAtProposer - timetable.initializationOffset) / config.l2BlockDuration);
}
+ // Check if TX fits in the shorter last block window
+ if (blockIndex === timetable.maxBlocks - 1) {
+ const lastBlockEnd = timetable.initializationOffset + (timetable.maxBlocks - 1) * config.l2BlockDuration + timetable.lastBlockDur;
+ if (arrivalAtProposer > lastBlockEnd) {
+ blockIndex = timetable.maxBlocks; // will trigger wrap
+ }
+ }
+
let wrapsToNextSlot = false;
let blockEnd;
+ let isLastBlock = false;
+ let wrappedNextSlotInit;
if (blockIndex >= timetable.maxBlocks) {
wrapsToNextSlot = true;
- blockIndex = 0;
- blockEnd = config.l2SlotDuration + timetable.initializationOffset + executionTime;
+ // With pipelining, the next proposer starts building as soon as it processes the
+ // previous checkpoint — potentially before the slot boundary.
+ wrappedNextSlotInit = (timetable.pipelining && timetable.pipeliningEarlyStartInit !== undefined)
+ ? timetable.pipeliningEarlyStartInit
+ : timetable.initializationOffset;
+ const nextSlotBuildStart = config.l2SlotDuration + wrappedNextSlotInit;
+ if (arrivalAtProposer <= nextSlotBuildStart) {
+ blockIndex = 0;
+ } else {
+ blockIndex = Math.ceil((arrivalAtProposer - nextSlotBuildStart) / config.l2BlockDuration);
+ // Clamp to the last block of the next slot; a second wrap is not modeled.
+ blockIndex = Math.min(blockIndex, timetable.maxBlocks - 1);
+ }
+ isLastBlock = blockIndex === timetable.maxBlocks - 1;
+ const execTime = isLastBlock ? lastBlockExecTime : executionTime;
+ blockEnd = nextSlotBuildStart + blockIndex * config.l2BlockDuration + execTime;
} else {
const blockStart = timetable.initializationOffset + blockIndex * config.l2BlockDuration;
- blockEnd = blockStart + executionTime;
+ isLastBlock = blockIndex === timetable.maxBlocks - 1;
+ const execTime = isLastBlock ? lastBlockExecTime : executionTime;
+ blockEnd = blockStart + execTime;
}
- const proposalArrival = blockEnd + config.p2pPropagationTime;
- const effectsVisible = proposalArrival + executionTime;
+ // The last block is bundled with the checkpoint proposal, so its broadcast is
+ // delayed by checkpoint assembly time. Non-last blocks are broadcast immediately.
+ const assemblyDelay = isLastBlock ? config.checkpointAssembleTime : 0;
+ const proposalBroadcast = blockEnd + assemblyDelay;
+ const proposalArrival = proposalBroadcast + config.p2pPropagationTime;
+ // Re-execution time depends on block size: shorter last block → faster re-exec
+ const reExecTime = isLastBlock ? lastBlockExecTime : executionTime;
+ const effectsVisible = proposalArrival + reExecTime;
const latency = effectsVisible - txSendTime;
+ const blockExecTime = isLastBlock ? lastBlockExecTime : executionTime;
+
// Build human-readable timeline
const timeline = [];
timeline.push({ time: txSendTime, event: 'TX sent to node' });
@@ -493,10 +673,11 @@ Transaction Lifecycle
time: arrivalAtProposer,
event: `TX too late for current slot (missed all ${timetable.maxBlocks} blocks)`,
});
- const blockStart = config.l2SlotDuration + timetable.initializationOffset;
+ const blockStart = config.l2SlotDuration + wrappedNextSlotInit + blockIndex * config.l2BlockDuration;
+ const earlyNote = wrappedNextSlotInit < 0 ? ` (proposer starts ${Math.abs(wrappedNextSlotInit).toFixed(1)}s early via pipelining)` : '';
timeline.push({
time: blockStart,
- event: `TX picked up for inclusion in block ${blockIndex} of next slot (block starts)`,
+ event: `TX picked up for inclusion in block ${blockIndex} of next slot${earlyNote}`,
});
} else {
const blockStart = timetable.initializationOffset + blockIndex * config.l2BlockDuration;
@@ -505,23 +686,38 @@
`;
+
+ for (const phase of phases) {
+ const left = (phase.start / totalTime) * 100;
+ const width = ((phase.end - phase.start) / totalTime) * 100;
+ if (width <= 0) continue;
+
+ const overflow = phase.end > slot;
+ const border = overflow ? '2px dashed rgba(255,255,255,0.3)' : 'none';
+
+ html += ``;
+ }
+
+ // Slot boundary vertical line
+ if (totalTime > slot) {
+ html += ``;
+ }
+
+ html += `
`;
+
+ // Phase legend
+ html += `
`;
+ for (const phase of phases) {
+ const left = (phase.start / totalTime) * 100;
+ const width = ((phase.end - phase.start) / totalTime) * 100;
+ if (width <= 0) continue;
+ html += `
`;
+ html += ``;
+ html += `${phase.label} (${formatSeconds(phase.start)}–${formatSeconds(phase.end)})`;
+ html += `
`;
+ }
+ if (config.pipelining && totalTime > slot) {
+ html += `
⟵ Dashed phases overflow into target slot
`;
+ }
+ html += `
`;
+
+ html += `
`;
+
+ // When pipelining, show a second timeline for the next proposer's cascading dependency
+ if (config.pipelining && timetable.pipeliningEffectiveInit !== undefined) {
+ html += buildCascadingTimeline(config, timetable);
+ }
+
+ container.innerHTML = html;
+ }
+
+ /**
+ * Renders a two-slot cascading pipelining visualization showing how the next
+ * proposer's build window overlaps with the current slot's checkpoint broadcast.
+ */
+ function buildCascadingTimeline(config, timetable) {
+ const slot = config.l2SlotDuration;
+ const initOffset = timetable.initializationOffset;
+
+ // Key timings for Slot N (current proposer)
+ const lastBlockDur = timetable.lastBlockDur;
+ const checkpointBroadcast = initOffset + (timetable.maxBlocks - 1) * config.l2BlockDuration + lastBlockDur + config.checkpointAssembleTime;
+ const checkpointArrival = checkpointBroadcast + config.p2pPropagationTime;
+ const lastBlockReExecTime = lastBlockDur * config.l2BlockFillPercent / 100;
+ const effectiveInit = timetable.pipeliningEffectiveInit;
+ const earlyStartInit = timetable.pipeliningEarlyStartInit;
+
+ // Re-execution starts at checkpoint arrival, which may be before the slot boundary
+ const reExecStart = checkpointArrival; // absolute time
+ const reExecEnd = checkpointArrival + lastBlockReExecTime; // when proposer is ready
+
+ // We show two slots: Slot N (0..slot) and Slot N+1 (slot..2*slot)
+ const totalTime = 2 * slot;
+ const barHeight = 28;
+ const rowGap = 8;
+ const labelHeight = 18;
+ const legendHeight = 28;
+
+ let html = `
`;
+ html += `
Cascading Pipelining
`;
+ html += `
+ The next proposer starts re-executing the last block as soon as the checkpoint arrives — even before the slot boundary.
+ Re-execution overlaps with the remaining time in Slot N.
+ ${earlyStartInit <= 0
+ ? `The next proposer is ready ${formatSeconds(Math.abs(earlyStartInit))} before its slot starts and begins building blocks immediately.`
+ : `The next proposer is ready ${formatSeconds(earlyStartInit)} after its slot starts.`}
+