
feat(avm): packed field in bytecode decomposition#12015

Merged
fcarreiro merged 1 commit into master from fc/avm-packed-field on Feb 17, 2025

Conversation

Contributor

@fcarreiro fcarreiro commented Feb 14, 2025

----------------------------------------------------------------------------
Benchmark                                  Time             CPU   Iterations
----------------------------------------------------------------------------
bc_decomposition_acc                    10.9 us         10.8 us        64294
bc_decomposition_interactions_acc       1.12 us         1.12 us       635595

Contributor Author

This stack of pull requests is managed by Graphite.

@fcarreiro fcarreiro force-pushed the fc/avm-packed-field branch 2 times, most recently from 1088652 to 69b7bf8 on February 14, 2025 17:52
{ C::bc_decomposition_sel_pc_plus_34, bytecode_exists_at(i + 34) },
{ C::bc_decomposition_sel_pc_plus_35, bytecode_exists_at(i + 35) },
} });
row++;
Contributor

@fcarreiro Well done. Definitely more readable with this change!

if (bytecode_len - i >= 32) {
as_int = from_buffer<uint256_t>(bytecode, i);
} else {
std::vector<uint8_t> tail(bytecode.begin() + static_cast<ssize_t>(i), bytecode.end());
Contributor

@fcarreiro Is this cast to ssize_t really required?
If it is indeed required, can you please add a comment explaining why?

// for PCs 0, 31, 62, ...
// The "sequencer" can choose which PCs to pack, but the bytecode hashing trace
// will use this selector (as 1) in the lookup tuple. Therefore, if the sequencer
// does not choose at least the minimum number of rows, the lookup will fail.
Contributor
@jeanmon jeanmon Feb 17, 2025


@fcarreiro I think this is too relaxed: a malicious sequencer could inject bytecode and we would not notice, since the sequencer could skip this "injected" bytecode in the hashing operation.
For instance, imagine legitimate bytecode of size 93 into which we inject illegitimate bytecode of size 5 at offset 31.

We toggle sel_packed at offsets 0, 36, and 67, and bytecode hashing would pass. However, for everything else (instruction fetching, etc.) we might execute from the illegitimate bytecode unless we add other safeguards.

Contributor

I think the bytecode hashing will provide the safeguards here in terms of ordering and continuity. E.g. the bytecode hashing table will use a lookup like {pc, packed_field} to index into the decomposition and locally check that its lookup_pc increments by 31 bytes. It won't be able to look up the values at 36 or 67, so the hash will fail.

| hashing_sel | Lookup PC | Packed Field |
|---|---|---|
| 1 | 0 | 0xabc |
| 1 | 31 | 0x123 |

Contributor

Yeah makes sense. Thanks @IlyasRidhuan for the input.

Contributor Author

I thought this was what the comment explained. I'll try to make it a bit clearer. @jeanmon do you still have any concerns after I address your other comments?

Contributor Author

BTW I think such a long explanation is beyond the scope of this file, which is only really concerned with decomposing and repacking. The statement about why this is enough for hashing should, I think, actually live in the hashing file... but I will add some more detail here.

Contributor

I am fine if you move it to the hashing file.
I do not have any other concerns beyond the comments, so I will approve so as not to block this PR.

Contributor

@jeanmon jeanmon left a comment

@fcarreiro Please take an attentive look at the "bytecode injection" scenario. I might have overlooked some safeguards; if so, please explain why this is safe. Otherwise, please add safeguards.

@fcarreiro fcarreiro merged commit af8c7d6 into master Feb 17, 2025
13 checks passed
Contributor Author

Merge activity

  • Feb 17, 7:25 AM EST: A user merged this pull request with Graphite.

@fcarreiro fcarreiro deleted the fc/avm-packed-field branch February 17, 2025 12:25
TomAFrench added a commit that referenced this pull request Feb 17, 2025
* master: (207 commits)
  chore(docs): acir formal verification final report (#12040)
  feat(avm): packed field in bytecode decomposition (#12015)
  feat: Contract updates (#11514)
  feat!: enforce fees (#11480)
  chore: redo typo PR by maximevtush (#12033)
  git subrepo push --branch=master noir-projects/aztec-nr
  git_subrepo.sh: Fix parent in .gitrepo file. [skip ci]
  chore: replace relative paths to noir-protocol-circuits
  git subrepo push --branch=master barretenberg
  fix: unexposing test fr from vkey struct ts (#12028)
  chore: structured polys in Translator (#12003)
  fix(ci3): fix ./bootstrap.sh fast in noir-projects (#12026)
  feat: new op queue logic (#11999)
  git subrepo push --branch=master noir-projects/aztec-nr
  git_subrepo.sh: Fix parent in .gitrepo file. [skip ci]
  chore: replace relative paths to noir-protocol-circuits
  git subrepo push --branch=master barretenberg
  chore: skip flakey p2p (#12021)
  chore(ci3): label handling (#12020)
  feat: Sync from noir (#12002)
  ...
