Thv/#145 reorganization of mempool #153
Conversation
The failing test in CI is the one addressed in #150 and has nothing to do with this PR.
Force-pushed from 2d77009 to f97a4c8.
The code lgtm, with a few nits/suggestions.
I am concerned about clearing users' transactions from the mempool during a re-org, and hoping that is a temporary situation. Either way, I'd like to see a note about it in the readme/docs, as I would put this in the realm of surprising behavior.
Also, I wonder if the wallet needs some extra logic to become aware that a transaction has been removed from the mempool and thus the user's balance and tx history may need updating? I suppose that if the wallet is always obtaining this info by querying the archival_state + mempool, then it should automatically be updated, but I just want to pose the q to ensure this is what happens and that you've considered/tested wallet/dashboard state.
Finally, I'm not up-to-speed on mutator-set stuff yet, so you may want to loop in @aszepieniec to review this pr also.
```rust
pub async fn update_with_block(
    &mut self,
    previous_mutator_set_accumulator: MutatorSetAccumulator,
    block: &Block,
) {
    // If we discover a reorganization, we currently just clear the mempool,
    // as we don't have the ability to roll transaction removal record integrity
```
Is it expected/planned that we will eventually be able to roll back transaction removal record integrity proofs? I.e., is it "just" an engineering issue or a theoretical one?
I think that forgetting users' transactions during a re-org could be a pretty serious drawback for Neptune, so I'm hoping this can be changed by launch, or at least that we'll be able to tell users it is "coming soon".
> Is it expected/planned that we will eventually be able to roll back transaction removal record integrity proofs? I.e., is it "just" an engineering issue or a theoretical one?
This limitation can certainly be solved through "relatively" simple engineering. The easiest way to solve it would be for each node to store multiple copies of the same transaction, with a copy for each of the last few blocks.
This, however, does not solve the problem of adding to the mempool transactions that were mined on the abandoned fork and now would need to be re-added to the mempool.
Consider a scenario where the tip changes from 3a to 3b:

```
                       +-- block 2a <-- block 3a
                       |
Genesis <-- block 1 <--+
                       |
                       +-- block 2b <-- block 3b
```

The transactions in the mempool at the time of 3a can be updated to be valid for 3b, provided the client knows a version of the transaction that was valid at block 1. The transactions that are no longer mined, since block 2a and 3a were abandoned, would also have to be stored by the client in a version valid at block 1. So it's certainly possible to preserve the mempool after a reorganization that's only a few blocks deep, but you pay a quite heavy price in terms of RAM consumption for this.
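To make that trade-off concrete, here is a minimal sketch of what "keep a version of each transaction per recent block" could look like. Everything below (type names, the serialized-bytes representation, the eviction policy) is a hypothetical illustration, not neptune-core code; it only shows why memory grows with the retained depth.

```rust
// Hypothetical sketch: retain, per mempool transaction, the versions of its
// witness data that were valid at each of the last few blocks, so a shallow
// reorg can fall back to the version valid at the common ancestor.
use std::collections::{HashMap, VecDeque};

type BlockDigest = [u8; 32]; // stand-in for the real Digest type
type TxId = [u8; 32];

struct VersionedTx {
    /// Front = oldest retained version, back = version valid at the current tip.
    versions: VecDeque<(BlockDigest, Vec<u8> /* serialized tx incl. proofs */)>,
}

struct ReorgTolerantMempool {
    max_depth: usize,
    txs: HashMap<TxId, VersionedTx>,
}

impl ReorgTolerantMempool {
    /// Record the version of a transaction that is valid under `tip`,
    /// dropping versions older than `max_depth` blocks.
    fn record_version(&mut self, tx_id: TxId, tip: BlockDigest, tx_bytes: Vec<u8>) {
        let entry = self.txs.entry(tx_id).or_insert(VersionedTx {
            versions: VecDeque::new(),
        });
        entry.versions.push_back((tip, tx_bytes));
        while entry.versions.len() > self.max_depth {
            entry.versions.pop_front();
        }
    }

    /// On a reorg, fall back to the version valid at `common_ancestor`;
    /// transactions without such a version still have to be dropped.
    fn roll_back_to(&mut self, common_ancestor: BlockDigest) {
        self.txs.retain(|_, vtx| {
            if let Some(pos) = vtx.versions.iter().position(|(d, _)| *d == common_ancestor) {
                vtx.versions.truncate(pos + 1);
                true
            } else {
                false
            }
        });
    }
}
```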
> This, however, does not solve the problem of adding to the mempool transactions that were mined on the abandoned fork and now would need to be re-added to the mempool.
Good point.
> So it's certainly possible to preserve the mempool after a reorganization that's only a few blocks deep, but you pay a quite heavy price in terms of RAM consumption for this.
My thought would be that whenever we remove utxo(s) from the mempool because they were confirmed in a block, we could write them to disk, indexed by block hash. We could perhaps keep such mempool-blocks to a depth of 100 (or some configurable amount). We would only need to consult the on-disk store during a re-org and would not be using any additional RAM. And only re-orgs deeper than 100 would lose tx, which hopefully never happen anyway.
Currently the mempool only lives in memory and is never persisted on disk. I'm pretty happy with that quality but I guess it's not set in stone. Feel free to open an issue about the limited reorganization capabilities of the mempool.
yeah, so I would look at it as the mempool remains unchanged, with everything in mem, but we add a new thing, which might be called mempool-archive, that would be stored on disk, only for re-org usage.
ok, I will open an issue for it to remind us.
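For reference, a rough sketch of the mempool-archive idea as described above: transactions evicted from the in-memory mempool on confirmation are written to disk keyed by block hash and only read back during a re-org. All names and the flat-file layout are assumptions for illustration; an actual implementation would presumably reuse the node's existing database layer.

```rust
// Hypothetical sketch of an on-disk "mempool-archive" (not neptune-core code).
use std::fs;
use std::io;
use std::path::PathBuf;

const DEPTH: usize = 100; // configurable retention depth

struct MempoolArchive {
    dir: PathBuf,
    /// Block hashes (hex) in insertion order, newest last; used for pruning.
    order: Vec<String>,
}

impl MempoolArchive {
    fn new(dir: PathBuf) -> io::Result<Self> {
        fs::create_dir_all(&dir)?;
        Ok(Self { dir, order: Vec::new() })
    }

    /// Called when `block_hash_hex` confirms transactions that were just
    /// evicted from the in-memory mempool.
    fn store(&mut self, block_hash_hex: &str, serialized_txs: &[u8]) -> io::Result<()> {
        fs::write(self.dir.join(block_hash_hex), serialized_txs)?;
        self.order.push(block_hash_hex.to_string());
        while self.order.len() > DEPTH {
            let old = self.order.remove(0);
            let _ = fs::remove_file(self.dir.join(old)); // best-effort prune
        }
        Ok(())
    }

    /// Called during a re-org for each abandoned block, so its transactions
    /// can be considered for re-insertion into the mempool.
    fn load(&self, block_hash_hex: &str) -> io::Result<Vec<u8>> {
        fs::read(self.dir.join(block_hash_hex))
    }
}
```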
CI is reporting a compile error with latest changes. (doctest, I believe)
```
test src/models/state/mempool.rs - models::state::mempool::Mempool::get_sorted_iter (line 414) ... FAILED
test src/locks/tokio/atomic_rw.rs - locks::tokio::atomic_rw::AtomicRw<T>::lock_mut (line 261) ... ok

failures:

---- src/models/state/mempool.rs - models::state::mempool::Mempool::get_sorted_iter (line 414) stdout ----
error[E0061]: this function takes 2 arguments but 1 argument was supplied
  --> src/models/state/mempool.rs:419:15
   |
 8 | let mempool = Mempool::new(ByteSize::gb(1));
   |               ^^^^^^^^^^^^----------------- an argument of type `neptune_core::prelude::triton_vm::prelude::Digest` is missing
   |
note: associated function defined here
  --> /home/runner/work/neptune-core/neptune-core/src/models/state/mempool.rs:84:12
   |
84 |     pub fn new(max_total_size: ByteSize, tip_digest: Digest) -> Self {
   |            ^^^
help: provide the argument
   |
 8 | let mempool = Mempool::new(ByteSize::gb(1), /* neptune_core::prelude::triton_vm::prelude::Digest */);
   |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
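The fix the compiler suggests is to pass a tip digest as the second argument. A sketch of the corrected call, with module paths inferred from the test name and compiler output, and `Digest::default()` used purely as a placeholder (the real doctest may construct the digest differently, e.g. from the genesis block):

```rust
// Sketch only; import paths and the use of Default are assumptions.
use bytesize::ByteSize;
use neptune_core::models::state::mempool::Mempool;
use neptune_core::prelude::triton_vm::prelude::Digest;

fn main() {
    // Placeholder tip digest for the example.
    let tip_digest = Digest::default();
    // `Mempool::new` now requires the tip digest alongside the size limit.
    let mempool = Mempool::new(ByteSize::gb(1), tip_digest);
    // ... add transactions and iterate as in the original doctest ...
    drop(mempool);
}
```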
Force-pushed from 361209d to 2a907b0.
I added a document about the program's tolerance to reorganizations. Please review.
Apparently my Edit: The reason was that
Fixed the discrepancy between
the new doc looks good. very helpful I think. I left a few suggestions and a q or two.
```markdown
the mempool to include in the next block.

### Wallet

The wallet can handle reorganizations that are up to `n` blocks deep, where `n`
```
What happens if my wallet has a (monitored) utxo that gets dropped from the mempool during re-org? iiuc, it seems the wallet would be out-of-sync with the mempool, but not the wallet's fault.
edit: I guess that prune-abandoned-monitored-utxos can deal with this situation. Perhaps it deserves a mention here.
The wallet currently only deals with and displays transactions that have been mined. It completely ignores anything that's in the mempool.
I'd be happy to have the wallet be able to read from the mempool eventually, but that is not a concern for this PR.
Does that address your question? If not, please elaborate :)
> It completely ignores anything that's in the mempool.
Ok, so that implies that spends are not reflected in the wallet prior to confirmation either.
Eg, when I send a payment P for amount A, and P is not yet confirmed in a block:
- A is not reflected in the wallet balance.
- P is not shown in the wallet's transaction history.
- The wallet would let me attempt to re-spend P's inputs because it is unaware that those utxos are already being spent by a transaction in the mempool. But any such attempt would fail, because spend() actually does check the mempool.
Is the above accurate?
If so, then I think this behavior deserves a mention in the docs somewhere also, until such time as the wallet becomes mempool-aware.
…terval Add a method that returns the highest and lowest (both inclusive) chunk indices represented in the active window.
This method splits up the absolute indices into two buckets: one for the indices that are tracked through the inactive part of the Bloom filter and one for indices that are tracked through the active part of the Bloom filter.
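A simplified sketch of the two operations this commit message describes, using made-up names and a flat model of the Bloom filter's active window (none of these identifiers come from neptune-core's mutator-set code):

```rust
/// Simplified model: the active window covers absolute indices in
/// `[window_start, window_start + window_size)`; everything below
/// `window_start` lives in the inactive part and is tracked through chunks.
struct ActiveWindow {
    window_start: u128,
    window_size: u128, // assumed non-empty
}

impl ActiveWindow {
    /// Split absolute indices into (inactive, active) buckets.
    fn split_indices(&self, absolute_indices: &[u128]) -> (Vec<u128>, Vec<u128>) {
        let mut inactive = Vec::new();
        let mut active = Vec::new();
        for &index in absolute_indices {
            if index < self.window_start {
                inactive.push(index);
            } else {
                active.push(index);
            }
        }
        (inactive, active)
    }

    /// Lowest and highest chunk indices (both inclusive) represented in the
    /// active window, assuming `chunk_size` indices per chunk.
    fn chunk_index_interval(&self, chunk_size: u128) -> (u128, u128) {
        let lowest = self.window_start / chunk_size;
        let highest = (self.window_start + self.window_size - 1) / chunk_size;
        (lowest, highest)
    }
}
```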
Don't just check all included MMR membership proofs, but also verify that the correct MMR membership proofs are included. This closes #151.
Handling reorganizations in the mempool is complicated. Not only does the mutator set data need to be rolled back, but so do the Triton VM proofs. The easiest solution would probably be to keep track of a few proofs for each transaction in the mempool such that rollbacks that are only a few blocks deep could be handled. Care also has to be given to repopulating the mempool with transactions that were included in the blocks that are being rolled back; these might have to go into the mempool again. The current solution, though, is simply to clear the mempool in the case of a reorganization. This commit includes a test that this is handled in a sane way, and that the mempool does not get polluted with incorrect/deprecated/unsynced witness data. This closes #145.
The mempool must now be initialized with a tip digest in order to keep track of the block for which the mempool's transactions are valid. This is introduced to allow the mempool to handle reorganizations without containing invalid transactions.
`nextest` doesn't run doc-tests; they need to be run separately (e.g. with `cargo test --doc`).
Co-authored-by: danda <[email protected]>
Force-pushed from 68a96cb to d76d3a6.
This closes #145.
Simple solution to the inability of the mempool to handle reorganizations: Clear the mempool if that happens.
Context: all transactions need to contain proofs relating to the mutator set of the latest block. Third parties can update this information in the common case (the new tip is a child of the previous tip), but this proof updating is complicated for reorganizations. So for now, we just clear the mempool in the case of a reorganization.
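A minimal sketch of that clearing strategy. The field and method names below are assumptions for illustration; the parts confirmed by this PR are only that the mempool records the tip it is synced to and that a block which does not extend that tip triggers a full clear:

```rust
// Hypothetical illustration of the strategy, not the neptune-core code.
#[derive(Clone, Copy, PartialEq, Eq)]
struct Digest([u8; 32]); // stand-in for the real Digest type

struct Block {
    prev_block_digest: Digest, // assumed field name
    digest: Digest,
}

struct Mempool {
    tip_digest: Digest, // the block the mempool's transactions are synced to
    txs: Vec<Vec<u8>>,  // placeholder for the stored transactions
}

impl Mempool {
    fn update_with_block(&mut self, block: &Block) {
        if block.prev_block_digest != self.tip_digest {
            // Reorganization: the new block does not extend the block we are
            // synced to, and the transactions' removal record integrity
            // proofs cannot currently be rolled back, so drop everything.
            self.txs.clear();
        } else {
            // Common case: the new tip is a child of the previous tip, so the
            // transactions' mutator-set witness data can be updated in place
            // and confirmed transactions removed (elided here).
        }
        self.tip_digest = block.digest;
    }
}
```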