[DNM] experimental UTXO commitments #1916
Closed
Playing around with possible approaches/implementation for "UTXO commitments". Not sure if this would be useful in its current state.
This PR allows a parallel UTXO PMMR to be built alongside the existing output (aka TXO) PMMR.
Too expensive to do this in realtime so hooked it up to run on demand.
Also too complex to maintain in realtime to handle rewind, sync etc.
We can trigger a UTXO PMMR rebuild via the `/compact` API endpoint (the easiest place to hook it into). Rebuilding the UTXO MMR on testnet4 from the existing output MMR (which may be pruned/compacted) takes approx. 40ms. This is rebuilding from scratch, with no thought put into optimizing it yet.
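For illustration, here is a minimal sketch of what the on-demand rebuild boils down to (the `Leaf` type and `rebuild_utxo_leaves` function are hypothetical stand-ins, not the txhashset code in this PR): walk the leaves of the existing output MMR and emit either the output hash or a "zero" sentinel for spent leaves (the sentinel is explained below).

```rust
// Hypothetical sketch only - not the txhashset code in this PR.

/// A leaf of the existing output (TXO) PMMR.
enum Leaf {
    /// Hash of an unspent output at this leaf position.
    Unspent([u8; 32]),
    /// A spent output; its data may already have been pruned away.
    Spent,
}

/// Walk the output MMR leaves and build the parallel UTXO MMR leaf set:
/// unspent leaves are copied as-is, spent leaves become a "zero" sentinel
/// so the resulting UTXO root only commits to what is currently unspent.
fn rebuild_utxo_leaves(output_mmr_leaves: &[Leaf]) -> Vec<[u8; 32]> {
    output_mmr_leaves
        .iter()
        .map(|leaf| match leaf {
            Leaf::Unspent(hash) => *hash,
            Leaf::Spent => [0u8; 32],
        })
        .collect()
}

fn main() {
    let leaves = vec![Leaf::Spent, Leaf::Spent, Leaf::Unspent([7u8; 32])];
    let utxo_leaves = rebuild_utxo_leaves(&leaves);
    assert_eq!(utxo_leaves[0], [0u8; 32]);
    assert_eq!(utxo_leaves[2], [7u8; 32]);
}
```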
The existing output MMR is "append only": spending an output does not change the data in the MMR and does not affect its root. Only appending new outputs to the MMR changes the root. This is why we can prune and compact the output MMR, removing spent outputs, while leaving the root unaffected.
To construct the UTXO MMR we simply replace spent outputs with a "zero" sentinel value. Each leaf is then hashed via `hash_with_index("zero", pos)`. The interesting thing is that these roll up to parent nodes in a deterministic way.
For example, if pos 1 and pos 2 are both spent, we can hash both leaves with the zero sentinel, and the parent node at pos 3 can be generated from those two hashes exactly as before. We can recreate the hash at pos 3 without needing to know anything about the underlying data beyond the fact that both leaves have been removed/spent.
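Below is a toy, self-contained version of that example. Grin hashes nodes with blake2b via its `hash_with_index` helper; the std `DefaultHasher` used here is just a stand-in so the snippet runs without extra crates, and the exact encoding of children into the parent hash is an assumption for illustration.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// The "zero" sentinel that replaces spent output data.
const ZERO: [u8; 32] = [0u8; 32];

/// Simplified stand-in for hash_with_index(data, pos): hash the node data
/// together with its MMR position so identical data at different positions
/// yields different hashes.
fn hash_with_index(data: &[u8], pos: u64) -> u64 {
    let mut h = DefaultHasher::new();
    pos.hash(&mut h);
    data.hash(&mut h);
    h.finish()
}

fn main() {
    // Leaves at pos 1 and pos 2 are both spent, so both hash the zero sentinel.
    let hash_1 = hash_with_index(&ZERO, 1);
    let hash_2 = hash_with_index(&ZERO, 2);

    // The parent at pos 3 is built from its two children exactly as before.
    let mut children = Vec::new();
    children.extend_from_slice(&hash_1.to_le_bytes());
    children.extend_from_slice(&hash_2.to_le_bytes());
    let hash_3 = hash_with_index(&children, 3);

    println!("pos 1: {:x}\npos 2: {:x}\npos 3: {:x}", hash_1, hash_2, hash_3);
}
```

The point is that `hash_3` depends only on the positions and on the fact that both children are spent, so spent subtrees can be summarized deterministically without their original data.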
Related - #1883 (@tromp suggested "zeroing out" spent commitments to fix the spent hash "malleability").
Related - #1733 (proof of "unspentness")
Inspiration from - https://petertodd.org/2016/delayed-txo-commitments (Delayed TXO commitments).
Thinking out loud -
Maybe we don't need to do this in realtime - maybe there are ways we can do it in a "delayed" fashion, committing to the UTXO MMR every 60 blocks (hourly) or 1,440 blocks (daily), say.
This would allow us to generate a Merkle proof for any unspent output older than a day, for example.
Nodes could generate proofs for their own outputs by maintaining some subset of the UTXO MMR.
Nodes could verify this without needing to maintain the full UTXO set etc.
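A rough sketch of the scheduling side of that idea (the interval, function names and heights below are purely hypothetical, not anything implemented in this PR):

```rust
// Hypothetical "delayed" commitment schedule: the chain commits to the UTXO
// root only every COMMITMENT_INTERVAL blocks, and Merkle proofs can only be
// generated against the most recent committed root.

const COMMITMENT_INTERVAL: u64 = 1_440; // daily, at one block per minute

/// Height of the most recent block carrying a UTXO commitment.
fn last_committed_height(tip_height: u64) -> u64 {
    tip_height - (tip_height % COMMITMENT_INTERVAL)
}

/// An output can be proven unspent against the committed UTXO root only if
/// it already existed at the height of that commitment.
fn provable_against_commitment(output_height: u64, tip_height: u64) -> bool {
    output_height <= last_committed_height(tip_height)
}

fn main() {
    let tip = 10_250;
    assert_eq!(last_committed_height(tip), 10_080);
    assert!(provable_against_commitment(9_999, tip));
    assert!(!provable_against_commitment(10_100, tip));
}
```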
Thoughts? @tromp @ignopeverell