db/store full (forked) blocks cleanup #2585
Where would I start if I were to tackle this?
Right now we traverse back through historical blocks and then go to the db for each one to be removed. But I think a better approach may be to start at the db and simply iterate over the blocks in the db, removing any that are candidates for removal. This might be a good introduction to the lower level db access stuff, so I'd start looking around our lmdb code to see how we might use an iterator over full blocks. We do something similar in the peers db (via an iterator).
I would like to work on this.
My understanding is that instead of using the existing atomic batch:

```rust
/// An atomic batch in which all changes can be committed all at once or
/// discarded on error.
pub struct Batch<'a> {
    db: store::Batch<'a>,
}
```

we create an LMDB iterator to access ALL the blocks, regardless of whether a block is in the chain or not. Then we need an efficient way to check whether a particular block exists on the chain (if not, remove it). To check whether a block is on chain, one naive way is to iterate through the chain (using …).
This checks the header against the header at that height based on our current chain: if they match then we're good (header at height …).
Hi, on #2810 CI reported:
What does this mean? On my local device these timeout errors don't occur. How should I resolve this consensus timeout?
@daogangtang had a similar issue on a resource-constrained device. If you have the ability to increase the memory available on the CI box, that might help. Another option is to refactor the tests to consume fewer resources, which is being partly addressed by #2813 (IIUC).
…#2815:
* fix: try to fix issue #2585 by adding block cleanup from db directly.
* use another effective algorithm to do old block and short-lived block cleanup.
* rename iter_lived_blocks to blocks_iter; comments and iterator calling optimizations.
* Fix locking bug when calling is_on_current_chain() in batch.blocks_iter just by removing it, because "we want to delete blocks older (i.e. lower height) than tail.height".

Signed-off-by: Mike Tang <[email protected]>
Resolved in #2815. 🚀
We compact/prune the blocks db periodically to remove historical full blocks from the db older than 1 week (unless running in archive mode).
But we do this based on the current chain, by traversing back from the head of the chain.
So if we see any short lived forks or reorgs we will never actually clean these forked blocks up and they will remain in the db indefinitely.
i.e. if we have the following blocks in the chain:

B1 <- B2 <- B3

etc., we will never remove any forked blocks, say B2', for example.