fix: not reusing tags of partially reverted txs #20817
benesjan merged 8 commits into merge-train/fairies
Conversation
This stack of pull requests is managed by Graphite.
Force-pushed 56611f8 to 41614c2
Review threads on:
- yarn-project/pxe/src/contract_function_simulator/oracle/private_execution.ts
- yarn-project/pxe/src/tagging/sender_sync/utils/load_and_store_new_tagging_indexes.test.ts
- yarn-project/pxe/src/storage/tagging_store/sender_tagging_store.ts (two threads, both outdated)
```ts
let highestSurvivingIndex: number | undefined;

for (let index = pendingEntry.lowestIndex; index <= pendingEntry.highestIndex; index++) {
  const siloedTag = await SiloedTag.compute({ extendedSecret, index });
  // …
```
I think this will kill the IndexedDB tx. In forEachSecretWithPendingData you execute the callback at the end, so this should be fine as long as we can be 100% sure that the callback will not interact with the DB at all. Said another way: we must assume nothing between line 446 and 483 reads or writes from the DB.
That's a reason why I don't love forEachSecretWithPendingData: it makes it easy to forget that we're inside an IndexedDB transaction, which is unfortunately a big deal.
Shouldn't have said "kill"; it's more accurate to say that the environment will autocommit the tx when blocking to await this SiloedTag.compute.
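The autocommit concern above has a standard workaround: do all async work before the IndexedDB transaction is opened, so the transaction body only issues DB requests and never awaits an unrelated promise. A minimal sketch of that pattern follows; the names `PendingEntry` and `computeSiloedTag` are illustrative stand-ins, not the actual PXE API.

```typescript
// Sketch of the autocommit-safe pattern: precompute the async values
// (here, siloed tags) BEFORE opening the IndexedDB transaction, so
// nothing inside the transaction ever awaits non-DB work.
// PendingEntry and computeSiloedTag are hypothetical names.
interface PendingEntry {
  lowestIndex: number;
  highestIndex: number;
}

async function precomputeTags(
  entry: PendingEntry,
  computeSiloedTag: (index: number) => Promise<string>,
): Promise<Map<number, string>> {
  const tags = new Map<number, string>();
  for (let index = entry.lowestIndex; index <= entry.highestIndex; index++) {
    // Awaiting here is fine: no IndexedDB transaction is open yet.
    tags.set(index, await computeSiloedTag(index));
  }
  // The caller can now open the transaction and consult `tags`
  // synchronously, issuing only IDB requests inside it.
  return tags;
}
```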
mverzilli left a comment:
Only not approving because we need to bump the PXE_DATA_SCHEMA_VERSION value, since there are structural changes to the store.
…e.ts Co-authored-by: Martin Verzilli <martin@aztec-labs.com>
Force-pushed f2a22b2 to 2ee2922
Force-pushed 351fbc5 to 22646e4
@mverzilli bumped the schema version in f02c92a and addressed your concerns, so this is ready for a re-review.
This reverts commit 22646e4.
BEGIN_COMMIT_OVERRIDE
fix: skip oracle version check for pinned protocol contracts (#21349)
fix: not reusing tags of partially reverted txs (#20817)
feat: move storage_slot from partial commitment to completion hash (#21351)
feat: offchain reception (#20893)
fix: handle workspace members in needsRecompile crate collection (#21284)
fix(aztec-nr): return Option from decode functions and fix event commitment capacity (#21264)
fix: handle bad note lengths on compute_note_hash_and_nullifier (#21271)
fix: address review feedback from PRs #21284 and #21237 (#21369)
fix: claim contract & improve nullif docs (#21234)
feat!: auto-enqueue public init nullifier for contracts with public functions (#20775)
fix: search for all note nonces instead of just the one for the note index (#21438)
fix: set anvilSlotsInAnEpoch in e2e_offchain_payment to prevent finalization race (#21452)
fix: complete legacy oracle mappings for all pinned contracts (#21404)
fix: correct inverted constrained encryption check in message delivery (#21399)
feat!: improve L2ToL1MessageWitness API (#21231)
END_COMMIT_OVERRIDE
✅ Successfully backported to backport-to-v4-staging #21582.
❌ Failed to cherry-pick to |

Warning: This PR became quite involved and I expect it will take a long time to review 😿
Note: Keep in mind that this is all dealing with sender sync only.
Before this PR, if a tx was partially reverted we would discard all of its tags and hence "mark" them as unused. This is problematic because we would then use those tags again, which would effectively link the two separate txs.
For this reason, in this PR I do the following: when a tx is finalized, I check whether it was partially reverted; if so, I obtain the tx effect from the chain and determine which indexes actually made it onchain by recomputing the siloed tags from the indexes and matching them against the tx effect.
This sounds simple, but the problem is that it largely changed how the data is modeled: before, for a tx we cared only about the highest used index, while now we care about the full range of indexes. This is because only some of the indexes might have been used in a partially reverted tx, whereas before this PR txs were treated as atomic, so a tx was either "fully included" or not at all, and for sender tagging index sync only the highest pending index was relevant when choosing an index.
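The recompute-and-match step described above can be sketched roughly as follows. This is an illustrative sketch, not the actual PXE code: `computeSiloedTag` stands in for the real tag derivation, and the tx effect is modeled as a plain set of tag strings.

```typescript
// Illustrative sketch (hypothetical names, not the real PXE API) of
// recovering which tagging indexes survived a partially reverted tx:
// recompute the siloed tag for each index in the pending range and
// keep the indexes whose tags appear in the tx effect fetched from
// the chain.
async function findSurvivingIndexes(
  lowestIndex: number,
  highestIndex: number,
  txEffectTags: Set<string>,
  computeSiloedTag: (index: number) => Promise<string>,
): Promise<{ surviving: number[]; highestSurvivingIndex: number | undefined }> {
  const surviving: number[] = [];
  for (let index = lowestIndex; index <= highestIndex; index++) {
    const tag = await computeSiloedTag(index);
    if (txEffectTags.has(tag)) {
      surviving.push(index); // this index made it onchain, so it stays "used"
    }
  }
  return {
    surviving,
    highestSurvivingIndex: surviving.length > 0 ? surviving[surviving.length - 1] : undefined,
  };
}
```

Indexes whose tags do not appear in the tx effect were reverted and can safely be reused; the surviving ones must stay marked as used so the same tag is never emitted twice.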
Because of this the diff of the PR is large and the changes are dense.
AI was surprisingly bad at tackling this problem and kept getting lost in ugly abstractions, so this is largely handmade, organic, crafted bio code.
Closes https://linear.app/aztec-labs/issue/F-228/handle-revertible-non-revertible-phases-in-tagging