
fix: Ignore duplicate p2p messages post-validation#18027

Closed
spalladino wants to merge 1 commit intov2from
palla/ignore-duplicate-p2p-messages

Conversation

@spalladino spalladino commented Oct 28, 2025

Today we have three deduplication mechanisms for messages:

1. The `fastMsgId` cache, where gossipsub deduplicates messages based on a hash computed over the compressed message buffer:

```typescript
export function fastMsgIdFn(rpcMsg: RPC.Message): string {
  if (rpcMsg.data) {
    return xxhash.h64Raw(rpcMsg.data, h64Seed).toString(16);
  }
  return '0000000000000000';
}
```
2. The `msgId` cache, where gossipsub deduplicates messages based on a hash computed over the topic and the decompressed message buffer:

```typescript
export function getMsgIdFn(message: Message) {
  const { topic } = message;

  const vec = [Buffer.from(topic), message.data];
  return sha256(Buffer.concat(vec)).subarray(0, 20);
}
```
3. The `msgIdSeenValidators` that we manually implement in `LibP2PService`, where we deduplicate based on the `msgId` again (not using the custom `p2pMessageIdentifier` from `Gossipable`), but with a larger cache, bounded by capacity instead of TTL:

```typescript
const validator = topicType ? this.msgIdSeenValidators[topicType] : undefined;

if (!validator || !validator.addMessage(msgId)) {
  this.instrumentation.incMessagePrevalidationStatus(false, topicType);
  this.node.services.pubsub.reportMessageValidationResult(msgId, source.toString(), TopicValidatorResult.Ignore);
  return { result: false, topicType };
}
```
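For illustration, a capacity-bounded seen-message cache like the one described above can be sketched as follows. This is a hypothetical minimal version, not the actual `msgIdSeenValidators` implementation: entries are evicted by count rather than by TTL, so the deduplication window depends on traffic volume instead of wall-clock time.

```typescript
// Hypothetical sketch of a capacity-bounded seen-message cache.
// Names are illustrative; the real validator has its own API.
class SeenMessageCache {
  private readonly seen = new Set<string>();

  constructor(private readonly capacity: number) {}

  /** Returns true if the message is new, false if it is a duplicate. */
  addMessage(msgId: string): boolean {
    if (this.seen.has(msgId)) {
      return false;
    }
    if (this.seen.size >= this.capacity) {
      // Evict the oldest entry (Sets iterate in insertion order).
      const oldest = this.seen.values().next().value;
      if (oldest !== undefined) {
        this.seen.delete(oldest);
      }
    }
    this.seen.add(msgId);
    return true;
  }
}

const cache = new SeenMessageCache(2);
console.log(cache.addMessage('a')); // true: first time seen
console.log(cache.addMessage('a')); // false: duplicate
console.log(cache.addMessage('b')); // true
console.log(cache.addMessage('c')); // true, evicts 'a'
console.log(cache.addMessage('a')); // true again: 'a' was evicted
```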

All of these deduplications run before any validation is done. This PR adds an additional check for duplicates after validation, where we deduplicate against the attestation and tx pools using the object identifiers:

```typescript
const validationFunc: () => Promise<ReceivedMessageValidationResult<BlockProposal>> = async () => {
  const block = BlockProposal.fromBuffer(payloadData);
  const isValid = await this.validateBlockProposal(source, block);
  const exists = isValid && (await this.mempools.attestationPool!.hasBlockProposal(block));

  this.logger.trace(`Validate propagated block proposal`, {
    isValid,
    exists,
    [Attributes.SLOT_NUMBER]: block.payload.header.slotNumber.toString(),
    [Attributes.P2P_ID]: source.toString(),
  });

  if (!isValid) {
    return { result: TopicValidatorResult.Reject };
  } else if (exists) {
    return { result: TopicValidatorResult.Ignore, obj: block };
  } else {
    return { result: TopicValidatorResult.Accept, obj: block };
  }
};
```

Returning `Ignore` to gossipsub causes it to delete the message from its `mcache` (but not from its `seenCache`) without penalizing the sender, and instructs it not to re-broadcast the message.

Note that we cannot rely on the identifiers before validation, since they could be forged: e.g. an attacker could take the identifier of a valid message, replace its data with garbage, and forward it so it fails validation. When the real message is received afterwards, it would be dropped because its identifier had previously failed. We need to run these id-based deduplications after validation so that we know the identifier actually corresponds to the payload.
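The poisoning scenario above can be sketched with a toy model. All names here are hypothetical (not the actual node code); the point is only the ordering of dedup versus validation:

```typescript
// Toy model: why id-based dedup must run *after* validation.
type Msg = { id: string; data: string };
const isValid = (m: Msg) => m.data === 'real-payload';

// Broken approach: dedup on the identifier before validating.
function makeNaiveNode() {
  const seenIds = new Set<string>();
  return (m: Msg): 'accept' | 'reject' | 'ignore' => {
    if (seenIds.has(m.id)) return 'ignore'; // dedup runs pre-validation
    seenIds.add(m.id);
    return isValid(m) ? 'accept' : 'reject';
  };
}

const naive = makeNaiveNode();
// Attacker forwards a forged copy first: same id, garbage payload.
console.log(naive({ id: 'msg-1', data: 'garbage' })); // 'reject'
// The genuine message is now dropped as a "duplicate".
console.log(naive({ id: 'msg-1', data: 'real-payload' })); // 'ignore'

// Fixed approach: validate first, then dedup only valid messages by id.
function makeSafeNode() {
  const seenIds = new Set<string>();
  return (m: Msg): 'accept' | 'reject' | 'ignore' => {
    if (!isValid(m)) return 'reject';
    if (seenIds.has(m.id)) return 'ignore';
    seenIds.add(m.id);
    return 'accept';
  };
}

const safe = makeSafeNode();
console.log(safe({ id: 'msg-1', data: 'garbage' })); // 'reject'
console.log(safe({ id: 'msg-1', data: 'real-payload' })); // 'accept'
```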

This change prevents a malicious message originator from slightly modifying a message and repeatedly broadcasting it across the network, causing it to be re-processed and re-broadcast. E.g. a malicious attester could exploit non-deterministic ECDSA signatures to produce multiple distinct valid signatures for the same attestation and broadcast them all. With this change, only the first attestation is accepted and all subsequent variants are ignored.
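The re-signing attack is defeated because the pool deduplicates on an identifier derived from the payload rather than the malleable signature. A minimal sketch, with entirely hypothetical names (the real pool API differs):

```typescript
// Hypothetical sketch: pool-based dedup on a signature-independent identifier.
type Attestation = { slot: number; attester: string; signature: string };

// Identifier derived from the payload only, not the malleable signature.
const attestationId = (a: Attestation) => `${a.slot}:${a.attester}`;

class AttestationPool {
  private readonly byId = new Map<string, Attestation>();

  /** Returns 'accept' for the first attestation, 'ignore' for re-signed duplicates. */
  add(a: Attestation): 'accept' | 'ignore' {
    const id = attestationId(a);
    if (this.byId.has(id)) return 'ignore';
    this.byId.set(id, a);
    return 'accept';
  }
}

const pool = new AttestationPool();
console.log(pool.add({ slot: 7, attester: 'val-1', signature: 'sig-a' })); // 'accept'
// Same attestation re-signed with a different (also valid) signature:
console.log(pool.add({ slot: 7, attester: 'val-1', signature: 'sig-b' })); // 'ignore'
```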

Fixes A-206

@spalladino

Closing in favor of #18034

@spalladino spalladino closed this Oct 28, 2025