Implemented future notes support in block-producer #390
Conversation
Thank you! Looks good. This is not a full review, but I left some comments inline.
Looks good! Thank you! I left some more comments inline.
Also, we'll need to:
- Rebase from the latest `next` as protobuf files are now in a different place.
- Make updates needed to account for changes introduced in 0xPolygonMiden/miden-base#772.
- Update the changelog.
Looks good! Thank you. I left some comments inline. The main ones are about not loading all the notes into memory in the store.
Also, I'm still thinking about how the logic for batching notes will work. Will write more thoughts on this later.
```diff
@@ -79,11 +100,18 @@ impl State {
        let nullifier_tree = load_nullifier_tree(&mut db).await?;
        let chain_mmr = load_mmr(&mut db).await?;
        let account_tree = load_accounts(&mut db).await?;
        let notes = load_notes(&mut db).await?;
```
I don't think we need to add notes to the in-memory state. Instead, we can just query the database for relevant notes.
```rust
/// Loads all the note IDs from the DB.
#[instrument(target = "miden-store", skip_all, ret(level = "debug"), err)]
pub async fn select_note_ids(&self) -> Result<BTreeSet<NoteId>> {
    self.pool.get().await?.interact(sql::select_note_ids).await.map_err(|err| {
        DatabaseError::InteractError(format!("Select note IDs task failed: {err}"))
    })?
}
```
Following the above comment, I think this will need to change to something that takes a list of note IDs and returns the note IDs which are present in the database. Or maybe for each found note we would return `(NoteId, NoteInclusionProof)` - this would be useful later on. In that case, we could call the function `select_note_inclusion_proofs()`.
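To illustrate the suggested shape, here is a minimal, self-contained sketch of such a lookup, with the database modeled as a `BTreeMap` and `NoteId`/`NoteInclusionProof` replaced by hypothetical stand-in types (the real types live in `miden-base`):

```rust
use std::collections::BTreeMap;

// Hypothetical simplified stand-ins; the real NoteId and NoteInclusionProof
// are richer types from miden-base.
type NoteId = [u8; 32];
type NoteInclusionProof = String;

/// Given candidate note IDs, return only those present in the "database"
/// (modeled here as a BTreeMap), paired with their inclusion proofs.
fn select_note_inclusion_proofs(
    db: &BTreeMap<NoteId, NoteInclusionProof>,
    candidates: &[NoteId],
) -> Vec<(NoteId, NoteInclusionProof)> {
    candidates
        .iter()
        .filter_map(|id| db.get(id).map(|proof| (*id, proof.clone())))
        .collect()
}

fn main() {
    let mut db = BTreeMap::new();
    db.insert([1u8; 32], "proof-1".to_string());

    // Only the note present in the DB is returned; unknown IDs are dropped.
    let found = select_note_inclusion_proofs(&db, &[[1u8; 32], [2u8; 32]]);
    assert_eq!(found.len(), 1);
    assert_eq!(found[0].1, "proof-1");
}
```

This keeps the store stateless with respect to notes: the caller passes the IDs it cares about and receives only what the database actually holds.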
```rust
/// Select all note IDs from the DB using the given [Connection].
///
/// # Returns
///
/// A set with note IDs, or an error.
pub fn select_note_ids(conn: &mut Connection) -> Result<BTreeSet<NoteId>> {
```
Similar to the previous comment, this would need to be changed.
```rust
#[instrument(target = "miden-store", skip_all)]
async fn load_notes(db: &mut Db) -> Result<BTreeSet<NoteId>, StateInitializationError> {
    Ok(db.select_note_ids().await?)
}

#[instrument(target = "miden-store", skip_all)]
fn filter_found_notes(notes: &[NoteId], in_memory: &BTreeSet<NoteId>) -> Vec<NoteId> {
    notes.iter().filter(|&note_id| in_memory.contains(note_id)).copied().collect()
}
```
Similar to the above comments, these two functions would not be needed here.
```rust
let BlockInputs {
    block_header,
    chain_peaks,
    account_states,
    nullifiers,
    found_unauthenticated_notes,
} = self
```
I would implement `From<BlockInputs> for GetBlockInputsResponse` so that we can avoid this destructuring here.
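A minimal sketch of what that conversion could look like, using hypothetical simplified structs in place of the real `BlockInputs` and `GetBlockInputsResponse` (field names follow the destructuring above; the real types carry richer data):

```rust
// Hypothetical simplified versions of the two types for illustration only.
struct BlockInputs {
    block_header: u64,
    chain_peaks: Vec<u64>,
    account_states: Vec<u64>,
    nullifiers: Vec<u64>,
    found_unauthenticated_notes: Vec<u64>,
}

struct GetBlockInputsResponse {
    block_header: u64,
    chain_peaks: Vec<u64>,
    account_states: Vec<u64>,
    nullifiers: Vec<u64>,
    found_unauthenticated_notes: Vec<u64>,
}

// With a From impl, callers can write `inputs.into()` instead of
// destructuring BlockInputs field by field at the call site.
impl From<BlockInputs> for GetBlockInputsResponse {
    fn from(inputs: BlockInputs) -> Self {
        Self {
            block_header: inputs.block_header,
            chain_peaks: inputs.chain_peaks,
            account_states: inputs.account_states,
            nullifiers: inputs.nullifiers,
            found_unauthenticated_notes: inputs.found_unauthenticated_notes,
        }
    }
}

fn main() {
    let inputs = BlockInputs {
        block_header: 7,
        chain_peaks: vec![1],
        account_states: vec![],
        nullifiers: vec![2, 3],
        found_unauthenticated_notes: vec![],
    };
    let response: GetBlockInputsResponse = inputs.into();
    assert_eq!(response.block_header, 7);
    assert_eq!(response.nullifiers, vec![2, 3]);
}
```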
```rust
// TODO: this can be optimized by first computing dangling notes of the batch itself,
// and only then checking against the other ready batches
let dangling_notes = self.find_dangling_notes(&txs).await;
```
Let's create an issue for this.
```rust
for note in tx.input_notes() {
    produced_nullifiers.push(note.nullifier());
    if let Some(header) = note.header() {
        if !unauthenticated_input_notes.insert(header.id()) {
            return Err(BuildBatchError::DuplicateUnauthenticatedNote(
                header.id(),
                txs,
            ));
        }
    }
}
```
With this logic, even if a note was produced and consumed in the same batch, we would still generate a nullifier for it. This means for all "ephemeral" notes we'll need to update the nullifier tree. It does simplify some things, but also may create quite a bit of an overhead. I'm still thinking through whether we should do something else here.
I'm going to merge this PR to "unbreak" the `next` branch - but let's create a new PR to address the outstanding things. These are:
- Address some minor comments from the previous review (e.g., this one and this one).
- Remove loading note into memory from the store.
- Fix transaction batching logic.
The last point refers to this comment, but to describe things more concretely (and add some new info):
When we combine transactions into a batch, we want to preserve the following properties:
1. If a note was produced by one transaction and consumed by another transaction in the same batch, the note should not be added to the list of output notes for the batch (this already happens), and the nullifier of this note should not be added to the list of nullifiers for the batch.
   a. The matching should be done based on note ID (i.e., we compare note IDs to figure out if two notes are the same), but if input/output notes with the same IDs have different hashes (i.e., the metadata is different), we should return an error.
2. If an unauthenticated note is present in the store at the time when we create the batch, this note should no longer be included in the list of unauthenticated notes.
   a. The assumption here is that we'll verify authentication info for this note in the batch kernel.
3. We should make sure that the list of unauthenticated input notes and output notes across all transactions in a batch is unique. This should be done before we match and eliminate notes which are produced and consumed in the same batch.
   a. We may relax this restriction later, but I think for now it is the easiest to enforce.
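A self-contained sketch of the batch-side matching described above, with hypothetical integer stand-ins for note IDs and hashes and simplified error variants (the real types and errors are richer):

```rust
use std::collections::BTreeMap;

type NoteId = u64;
type NoteHash = u64;

#[derive(Debug, PartialEq)]
enum BatchError {
    DuplicateUnauthenticatedNote(NoteId),
    NoteHashMismatch(NoteId),
}

/// Erase notes produced and consumed in the same batch: match unauthenticated
/// inputs against outputs by note ID, erroring on duplicate input IDs or on
/// matching IDs with different hashes. Returns (remaining_inputs, remaining_outputs).
fn erase_ephemeral_notes(
    unauthenticated_inputs: &[(NoteId, NoteHash)],
    outputs: &[(NoteId, NoteHash)],
) -> Result<(Vec<NoteId>, Vec<NoteId>), BatchError> {
    // Uniqueness check comes first, before matching/elimination.
    let mut input_map = BTreeMap::new();
    for &(id, hash) in unauthenticated_inputs {
        if input_map.insert(id, hash).is_some() {
            return Err(BatchError::DuplicateUnauthenticatedNote(id));
        }
    }
    let mut remaining_outputs = Vec::new();
    for &(id, hash) in outputs {
        if let Some(&input_hash) = input_map.get(&id) {
            // Same ID but different hash (metadata differs): reject the batch.
            if input_hash != hash {
                return Err(BatchError::NoteHashMismatch(id));
            }
            // Matched: the note is ephemeral, so drop it from both lists.
            input_map.remove(&id);
        } else {
            remaining_outputs.push(id);
        }
    }
    Ok((input_map.into_keys().collect(), remaining_outputs))
}

fn main() {
    // Note 1 is produced and consumed within the batch; note 2 survives as an
    // input, note 3 survives as an output.
    let (inputs, outputs) =
        erase_ephemeral_notes(&[(1, 10), (2, 20)], &[(1, 10), (3, 30)]).unwrap();
    assert_eq!(inputs, vec![2]);
    assert_eq!(outputs, vec![3]);

    // Same ID, different hash: the batch is rejected.
    assert_eq!(
        erase_ephemeral_notes(&[(1, 10)], &[(1, 99)]),
        Err(BatchError::NoteHashMismatch(1))
    );
}
```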
When we aggregate batches into a block, the logic should be slightly different. Specifically, notes which are produced and consumed in different batches of the same block do not get "erased". That is, they are still added to the block's note tree and their nullifiers are added to the list of nullifiers produced by the block.
One thing I was thinking is that maybe instead of tracking unauthenticated notes in a separate field of the `TransactionBatch` struct, we could do it more like we do in the `ProvenTransaction` struct. Specifically, `TransactionBatch` could look something like this:
```rust
pub struct TransactionBatch {
    id: BatchId,
    updated_accounts: Vec<(TransactionId, TxAccountUpdate)>,
    input_notes: Vec<InputNoteCommitment>,
    output_notes: Vec<OutputNote>,
    output_notes_smt: BatchNoteTree,
}
```
Where `InputNoteCommitment` is the same as the one we use in `ProvenTransaction`. The list of input notes will include the following:
- All authenticated notes (i.e., the ones missing the `header` field) would be in the list in their original form.
- All unauthenticated notes which could be matched against output notes in the same batch would be removed from the list.
- All unauthenticated notes which were in the store at the time when the batch was created would be transformed into authenticated notes (i.e., their `header` field would be cleared).
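A self-contained sketch of how that input-note list could be built, with a hypothetical simplified `InputNoteCommitment` (an integer nullifier plus an optional integer standing in for the full note header):

```rust
use std::collections::BTreeSet;

// Hypothetical simplified commitment: `header: None` means the note is
// authenticated; `Some(id)` means it is unauthenticated with that note ID.
#[derive(Clone, Debug, PartialEq)]
struct InputNoteCommitment {
    nullifier: u64,
    header: Option<u64>,
}

/// Build the batch's input-note list per the rules above, given the set of
/// output-note IDs in the batch and the unauthenticated-note IDs found in the store.
fn build_input_notes(
    tx_input_notes: &[InputNoteCommitment],
    batch_output_ids: &BTreeSet<u64>,
    found_in_store: &BTreeSet<u64>,
) -> Vec<InputNoteCommitment> {
    tx_input_notes
        .iter()
        .filter_map(|note| match note.header {
            // Unauthenticated note matched against an output in the same batch:
            // remove it from the list entirely.
            Some(id) if batch_output_ids.contains(&id) => None,
            // Unauthenticated note present in the store: keep it, but clear the
            // header so it becomes authenticated.
            Some(id) if found_in_store.contains(&id) => {
                Some(InputNoteCommitment { nullifier: note.nullifier, header: None })
            }
            // Authenticated (and still-unauthenticated) notes pass through unchanged.
            _ => Some(note.clone()),
        })
        .collect()
}

fn main() {
    let inputs = vec![
        InputNoteCommitment { nullifier: 1, header: None },     // authenticated
        InputNoteCommitment { nullifier: 2, header: Some(20) }, // matched in batch
        InputNoteCommitment { nullifier: 3, header: Some(30) }, // found in store
    ];
    let outputs: BTreeSet<u64> = [20].into();
    let store: BTreeSet<u64> = [30].into();

    let result = build_input_notes(&inputs, &outputs, &store);
    assert_eq!(result.len(), 2);
    assert_eq!(result[1], InputNoteCommitment { nullifier: 3, header: None });
}
```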
* fix(WIP): review comments
* fix(WIP): review comments
* docs: document under what circumstances SMT creating can fail
* fix: review comments
* fix: compare note hashes, return error on duplicate unauthenticated note IDs
* refactor: update batch creation logic
* refactor: migrate to the latest `miden-base`
* refactor: store input notes commitment in transaction batch
* fix: review comments
* refactor: address review comments
Resolves #381