
chore: introduce message processing#14567

Merged
benesjan merged 2 commits into next from nv/msg-processing on May 28, 2025

Conversation

@nventuro
Contributor

This creates a new message::processing module in aztec-nr, for us to place components related to the message processing pipeline, e.g. fetching of logs, processing of notes, etc. We need such a concept in order to properly handle batched network requests, backups, etc.

This just lays the groundwork by moving some things around and hiding the complexity of log discovery behind a get_private_logs function. This will later be expanded upon, e.g. by also including a capsule array for notes to be added to the PXE.
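A minimal sketch of the module boundary described above, in hypothetical Python rather than the actual Noir code: callers see a single get_private_logs entry point, while the multi-step discovery pipeline stays private to the message-processing module. All names here are illustrative, not the real aztec-nr API.

```python
# Hypothetical sketch of hiding log discovery behind one function.
# None of these names are the real aztec-nr identifiers.

def _fetch_tagged_logs():
    # Private helper: stands in for the oracle call that asks the PXE
    # for tagged logs.
    return ["enc1", "enc2"]

def _decrypt(log):
    # Private helper: stands in for decryption/decoding of a single log.
    return log.upper()

def get_private_logs():
    """Public entry point: hides fetching and decryption behind one call."""
    return [_decrypt(log) for log in _fetch_tagged_logs()]
```

The point of the boundary is that later changes (batching, backups, capsule arrays for notes) can land inside the module without touching callers.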

@nventuro nventuro requested a review from benesjan May 27, 2025 20:58
@nventuro nventuro marked this pull request as ready for review May 27, 2025 20:58
Contributor

@benesjan benesjan left a comment

Reasonable

@benesjan benesjan enabled auto-merge May 28, 2025 07:38
@benesjan benesjan added this pull request to the merge queue May 28, 2025
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks May 28, 2025
@benesjan benesjan added this pull request to the merge queue May 28, 2025
Merged via the queue into next with commit 82ae6da May 28, 2025
4 checks passed
@benesjan benesjan deleted the nv/msg-processing branch May 28, 2025 09:15
github-merge-queue bot pushed a commit that referenced this pull request May 29, 2025
This massively improves performance of the note discovery process by
enabling batching of note validation requests (i.e. checking that notes
indeed exist in the note hash tree). We achieve this by migrating from
sequential steps to a staged approach.

This continues the trend set by
#13107, where we
began splitting the message discovery process into multiple steps (now
modeled by `aztec::message::processing` from #14567), each of which
starts and ends with a capsule array. This lets us consume and produce
arbitrary amounts of values in unconstrained Noir functions, without
having to deal with the limitations of fixed sized arrays. As of this
PR, the process looks as follows:

1) (pxe) contract simulation starts
2) (nr) message discovery calls the `fetchTaggedLogs` oracle
3) (pxe) log discovery is performed by communicating with the node; logs
are placed in a capsule array
4) (nr) each log is read from the capsule array, decrypted, decoded and
processed; note validation requests are placed in a capsule array, and
pending partial notes are placed in another
5) (nr) pending partial notes are processed one by one (slow)
6) (pxe) note validation requests are completed by communicating with the
node (in parallel!)
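The staged pipeline above can be sketched in hypothetical Python (not the actual Noir/PXE code), modeling each capsule array as a plain list that one stage appends to and the next stage drains. All function names and payloads here are placeholders; the key idea being illustrated is that validation requests accumulate in a capsule array and are then resolved in one batch rather than one round-trip per note.

```python
# Hypothetical sketch of the staged note discovery pipeline.
# Capsule arrays are modeled as plain Python lists.

def fetch_tagged_logs():
    # Stage 3: PXE talks to the node and fills a capsule array with logs.
    return ["log_a", "log_b", "log_c"]  # placeholder payloads

def process_logs(log_capsules):
    # Stage 4: each log is decrypted/decoded; validation requests and
    # pending partial notes go into their own capsule arrays.
    validation_requests, partial_notes = [], []
    for log in log_capsules:
        validation_requests.append(f"validate:{log}")
        if log.endswith("c"):  # pretend this one produced a partial note
            partial_notes.append(f"partial:{log}")
    return validation_requests, partial_notes

def complete_validation_requests(requests):
    # Stage 6: PXE resolves every accumulated request against the node in
    # a single batch, instead of one round-trip per note.
    return {r: True for r in requests}

logs = fetch_tagged_logs()
requests, partials = process_logs(logs)
results = complete_validation_requests(requests)
```

Because each stage only reads from and writes to capsule arrays, the amount of data flowing between stages is unbounded by any fixed array size.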

This pattern sidesteps having to deal with fixed-size oracles (which
would require e.g. pagination, an even more stateful PXE) by leveraging
the capsule array as the main building tool. It should be simple to
later extend this to e.g. produce a capsule array of tags to be searched
for, which would result in a second capsule array being populated with
the results, so that partial note completion log queries need not be
completed serially.
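The proposed extension could be sketched the same way, in hypothetical Python: one capsule array carries the tags to search for, and a second is populated with the results in a single batched exchange. The names and the lookup table are invented for illustration.

```python
# Hypothetical sketch of batched tag lookups via paired capsule arrays.

def search_tags_batched(tags):
    # Stands in for the PXE resolving every tag against the node at once;
    # the index below is fake illustrative data.
    fake_index = {"tag1": "log1", "tag2": "log2"}
    return [fake_index.get(t) for t in tags]

tag_capsules = ["tag1", "tag2", "tag3"]   # produced by the contract
result_capsules = search_tags_batched(tag_capsules)  # filled by the PXE
```

The result array is positionally aligned with the tag array, so the contract can consume it without any per-tag round trips.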

The obvious downside of this approach is that it introduces relatively
unusual oracle semantics: communication between the contract and the PXE
happens not just through the oracle interfaces, but also through the
values in the capsule arrays (including their serialization).

---------

Co-authored-by: Gregorio Juliana <gregojquiros@gmail.com>
Co-authored-by: thunkar <gregjquiros@gmail.com>
