perf: cache block data for CreditPool for asset unlock limits calculation #6658

Conversation
Walkthrough: The changes refactor the way credit pool data is retrieved and processed from blockchain blocks. A new function, GetCreditDataFromBlock, combines the previous GetBlockForCreditPool and GetDataFromUnlockTxes steps.
Actionable comments posted: 2
🔭 Outside diff range comments (1)
src/evo/creditpool.cpp (1)
5-21: ⚠️ Potential issue — include missing header for `unordered_lru_cache`

`unordered_lru_cache` is used later in the file (line 68), but there is no include that guarantees the template is visible. This will break compilation in translation units that do not indirectly include the header.

```diff
 #include <stack>
+#include <util/unordered_lru_cache.h> // <-- add the dedicated header
```
🧹 Nitpick comments (2)
src/evo/creditpool.cpp (2)
77-85: Missing early return lets invalid CbTx silently pass

If the block has no (valid) `CCbTx`, execution continues and the function returns `blockData` with `credit_pool == 0`, silently masking corrupted tip blocks. Consider returning `std::nullopt` (or throwing) immediately after logging so that downstream logic can make an explicit decision.
187-198: Hard-coded cache size (576 * 2) drifts from configurable window

The LRU cache size is fixed at 1152 blocks, while the window length `CreditPoolPeriodBlocks()` is obtained from chain parameters. When the parameter changes in a future hard fork, the cache may suddenly become too small (extra I/O) or unnecessarily large (RAM bloat). Bind the capacity to the consensus parameter instead of a magic number.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/evo/creditpool.cpp (3 hunks)
🧰 Additional context used
🪛 GitHub Actions: Check Potential Conflicts
src/evo/creditpool.cpp
[error] 1-1: Merge conflict detected in this file between PR 6658 and PR 6615. Manual resolution required.
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: linux64_nowallet-build / Build source
- GitHub Check: win64-build / Build source
- GitHub Check: mac-build / Build source
- GitHub Check: linux64_multiprocess-build / Build source
- GitHub Check: linux64_tsan-build / Build source
🔇 Additional comments (1)
src/evo/creditpool.cpp (1)
1-342: Resolve merge-conflict artefacts before merging

CI reports a conflict between PR-6658 and PR-6615 for this file. Please double-check that no `<<<<`, `====`, `>>>>` markers or unintended logic duplications remain after the manual merge resolution.

🧰 Tools
🪛 GitHub Actions: Check Potential Conflicts
[error] 1-1: Merge conflict detected in this file between PR 6658 and PR 6615. Manual resolution required.
```diff
 const CBlockIndex* distant_block_index{
     block_index->GetAncestor(block_index->nHeight - Params().CreditPoolPeriodBlocks())};
 CAmount distantUnlocked{0};
 if (distant_block_index) {
-    if (std::optional<CBlock> distant_block = GetBlockForCreditPool(distant_block_index, consensusParams); distant_block) {
-        distantUnlocked = GetDataFromUnlockTxes(distant_block->vtx).unlocked;
+    if (std::optional<CreditPoolDataPerBlock> distant_block{GetCreditDataFromBlock(distant_block_index, consensusParams)};
+        distant_block) {
+        distantUnlocked = distant_block->unlocked;
     }
 }
```
GetAncestor call can underflow for early blocks

`block_index->nHeight - Params().CreditPoolPeriodBlocks()` will be negative for the first N blocks. `GetAncestor()` asserts on heights < 0 in several call-sites, which would crash the node.

```diff
-const CBlockIndex* distant_block_index{
-    block_index->GetAncestor(block_index->nHeight - Params().CreditPoolPeriodBlocks())};
+const int distant_height = block_index->nHeight - Params().CreditPoolPeriodBlocks();
+const CBlockIndex* distant_block_index =
+    distant_height >= 0 ? block_index->GetAncestor(distant_height) : nullptr;
```
no, there are no asserts:

```cpp
if (height > nHeight || height < 0) {
    return nullptr;
}
```
@knst You're absolutely right, and I made a mistake in my review. Looking at the implementation of `GetAncestor`, it indeed handles negative heights gracefully by returning `nullptr`:

```cpp
if (height > nHeight || height < 0) {
    return nullptr;
}
```

And the existing code already properly handles this case with the null check:

```cpp
if (distant_block_index) {
    // ...
}
```

I apologize for the incorrect suggestion. The current implementation is safe as written.
✏️ Learnings added

```
Learnt from: knst
PR: dashpay/dash#6658
File: src/evo/creditpool.cpp:177-185
Timestamp: 2025-05-05T12:45:44.755Z
Learning: The GetAncestor() function in CBlockIndex safely handles negative heights by returning nullptr rather than asserting, making it safe to call with potentially negative values.
```
```cpp
static Mutex cache_mutex;
static unordered_lru_cache<uint256, CreditPoolDataPerBlock, StaticSaltedHasher> block_data_cache GUARDED_BY(
    cache_mutex){576 * 2};
{
    LOCK(cache_mutex);
    if (block_data_cache.get(block_index->GetBlockHash(), blockData)) {
        return blockData;
    }
}
```
🛠️ Refactor suggestion

Avoid race-condition driven double work in the block-data cache

Between the first `LOCK(cache_mutex)` (line 71) and the second lock after expensive I/O (line 104) another thread can compute and insert the very same entry, wasting one full `ReadBlockFromDisk` and parsing pass.

A single critical section around "lookup → compute (if miss) → insert" makes the code both simpler and cheaper:

```diff
-{
-    LOCK(cache_mutex);
-    if (block_data_cache.get(block_index->GetBlockHash(), blockData)) {
-        return blockData;
-    }
-}
-
-// potentially expensive disk read here …
-
-LOCK(cache_mutex);
-block_data_cache.insert(block_index->GetBlockHash(), blockData);
+{
+    LOCK(cache_mutex);
+    if (block_data_cache.get(block_index->GetBlockHash(), blockData)) {
+        return blockData; // fast-path
+    }
+
+    // We still hold the lock, so only one thread falls through and
+    // performs the disk read.
+}
+
+// expensive disk I/O here …
+
+{
+    LOCK(cache_mutex);
+    block_data_cache.insert(block_index->GetBlockHash(), blockData);
+}
```

This pattern keeps the cache mutex held only once and prevents duplicate work while retaining LRU behaviour.

Committable suggestion skipped: line range outside the PR's diff.
Co-authored-by: UdjinM6 <[email protected]>
UdjinM6
left a comment
LGTM, utACK 9c5e451
Actionable comments posted: 0
♻️ Duplicate comments (2)
src/evo/creditpool.cpp (2)
67-69: Consider using a more explicit cache size calculation.

The cache size uses a magic number `* 2` which may not be immediately clear to future maintainers. Consider making the multiplier more explicit or adding a comment explaining the rationale:

```diff
-    static unordered_lru_cache<uint256, CreditPoolDataPerBlock, StaticSaltedHasher> block_data_cache GUARDED_BY(
-        cache_mutex){static_cast<size_t>(Params().CreditPoolPeriodBlocks()) * 2};
+    // Cache size accounts for current period + potential reorgs and RPC calls to historical blocks
+    static unordered_lru_cache<uint256, CreditPoolDataPerBlock, StaticSaltedHasher> block_data_cache GUARDED_BY(
+        cache_mutex){static_cast<size_t>(Params().CreditPoolPeriodBlocks()) * 2};
```
70-106: Address potential race condition in caching pattern.

The current caching implementation has a race condition where multiple threads can perform the same expensive disk I/O operation if they all miss the cache simultaneously. Consider using a single critical section to prevent duplicate work:

```diff
-    {
-        LOCK(cache_mutex);
-        if (block_data_cache.get(block_index->GetBlockHash(), blockData)) {
-            return blockData;
-        }
-    }
-
-    CBlock block;
-    if (!ReadBlockFromDisk(block, block_index, consensusParams)) {
-        throw std::runtime_error("failed-getcbforblock-read");
-    }
-    // ... rest of expensive processing ...
-
-    LOCK(cache_mutex);
-    block_data_cache.insert(block_index->GetBlockHash(), blockData);
+    {
+        LOCK(cache_mutex);
+        if (block_data_cache.get(block_index->GetBlockHash(), blockData)) {
+            return blockData;
+        }
+
+        // Hold lock during expensive I/O to prevent duplicate work
+        CBlock block;
+        if (!ReadBlockFromDisk(block, block_index, consensusParams)) {
+            throw std::runtime_error("failed-getcbforblock-read");
+        }
+        // ... rest of processing ...
+
+        block_data_cache.insert(block_index->GetBlockHash(), blockData);
+    }
```

This ensures only one thread performs the expensive operation per unique block hash.
🧹 Nitpick comments (2)
src/evo/creditpool.cpp (2)
77-84: Improve error handling consistency.

The error handling could be more descriptive and consistent with the rest of the function. Consider improving the error message to be more descriptive:

```diff
-    if (!ReadBlockFromDisk(block, block_index, consensusParams)) {
-        throw std::runtime_error("failed-getcbforblock-read");
-    }
+    if (!ReadBlockFromDisk(block, block_index, consensusParams)) {
+        throw std::runtime_error(strprintf("%s: failed to read block from disk at height %d", __func__, block_index->nHeight));
+    }
```
88-90: Consider adding error handling for coinbase transaction parsing.

The code assumes the coinbase transaction payload parsing will succeed, but doesn't handle the case where it might fail. Consider adding error handling for robustness:

```diff
     if (const auto opt_cbTx = GetTxPayload<CCbTx>(block.vtx[0]->vExtraPayload); opt_cbTx) {
         blockData.credit_pool = opt_cbTx->creditPoolBalance;
+    } else {
+        LogPrintf("%s: WARNING: failed to parse CbTx payload at height=%d\n", __func__, block_index->nHeight);
     }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/evo/creditpool.cpp (3 hunks)
🧰 Additional context used
🧠 Learnings (1)
src/evo/creditpool.cpp (1)

```
Learnt from: knst
PR: dashpay/dash#6658
File: src/evo/creditpool.cpp:177-185
Timestamp: 2025-05-05T12:45:44.781Z
Learning: The GetAncestor() function in CBlockIndex safely handles negative heights by returning nullptr rather than asserting, making it safe to call with potentially negative values.
```
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: predict_conflicts
- GitHub Check: check_merge
- GitHub Check: ClangFormat
- GitHub Check: Build container / Build container
🔇 Additional comments (4)
src/evo/creditpool.cpp (4)
49-53: LGTM! Struct extension is well-designed.

The renaming from `UnlockDataPerBlock` to `CreditPoolDataPerBlock` and the addition of the `credit_pool` field correctly reflect the expanded scope of the data structure.

177-178: GetAncestor usage is safe for negative heights.

The code correctly uses `GetAncestor`, which safely handles negative heights by returning `nullptr`. The subsequent null check properly handles early blocks. Based on the retrieved learning, `GetAncestor()` safely handles negative heights by returning `nullptr` rather than asserting, making this implementation correct.

187-195: LGTM! Correct usage of new data structure.

The code correctly uses `blockData.credit_pool` from the new cached data structure instead of reading it separately, which aligns with the performance optimization goals.

200-206: Logging correctly updated for new data structure.

The logging statements have been properly updated to use the cached credit pool data instead of separate reads, maintaining consistency with the refactoring.
```cpp
CreditPoolDataPerBlock blockData;

static Mutex cache_mutex;
static unordered_lru_cache<uint256, CreditPoolDataPerBlock, StaticSaltedHasher> block_data_cache GUARDED_BY(
```
missing include for unordered_lru_cache
This include is already present in the header (line 14 in 9c5e451):

```cpp
#include <unordered_lru_cache.h>
```
I believe the IWYU approach (or the concept behind it) would be to include it in the cpp too, but nit.
Co-authored-by: PastaPastaPasta <[email protected]>
@UdjinM6 @PastaPastaPasta I fixed the review comments, may I get one more iteration of review?
PastaPastaPasta
left a comment
utACK 72cfb93
UdjinM6
left a comment
utACK 72cfb93
…nlock limits calculation

72cfb93 feat: bail out if GetTxPayload failed (Konstantin Akimov)
095b2d8 refactor: use if statement feature (Konstantin Akimov)
9c5e451 fix: typo in error message for GetDataFromUnlockTx (Konstantin Akimov)
52ae3aa refactor: use CreditPoolPeriodBlocks for block_data_cache (Konstantin Akimov)
34e06ec fmt: apply clang-format suggestions (Konstantin Akimov)
1706270 perf: cache block data for credit pool for calculation asset unlock limits (Konstantin Akimov)
e3d3783 refactor: combine GetDataFromUnlockTxes and GetBlockForCreditPool to GetCreditDataFromBlock (Konstantin Akimov)
afd7f84 perf: use GetAncestor() to jump blocks back for CreditPool (Konstantin Akimov)
f7749db refactor: use helper GetDataFromUnlockTxes to get credit pool amount (Konstantin Akimov)

Pull request description:

## Issue being fixed or feature implemented

For each new block, CreditPool reads blocks twice from disk: once for the last connected block and once for the distant block (576 blocks ago), which is used to calculate the sliding window limit for withdrawals.

## What was done?

Added a mini cache with block data (locked amount, indexes) to avoid reading blocks with ReadBlockFromDisk.

[It is possible](https://github.com/knst/dash/tree/perf-cp-cache-2) to avoid the 2nd block read too (for the tip), but the benchmark doesn't show a clear improvement. It will go into a separate PR later.

## How Has This Been Tested?

develop:

<img width="613" alt="image" src="https://github.com/user-attachments/assets/b29382cf-2de3-4223-a85e-2623982ff86a" />

```
2025-05-04T18:36:29Z [bench] - ProcessSpecialTxsInBlock: 52.94ms [84.66s (14.46ms/blk)]
2025-05-04T18:36:29Z [bench] - CheckCreditPoolDiffForBlock: 0.21ms [2.11s (0.36ms/blk)]
2025-05-04T18:36:29Z [bench] - Connect total: 54.40ms [94.48s (16.14ms/blk)]
```

PR:

<img width="613" alt="image" src="https://github.com/user-attachments/assets/40166b85-cc37-4bf3-a618-2931c3d0fdca" />

```
2025-05-05T10:11:47Z [bench] - ProcessSpecialTxsInBlock: 52.62ms [83.01s (14.18ms/blk)]
2025-05-05T10:11:47Z [bench] - CheckCreditPoolDiffForBlock: 0.21ms [2.09s (0.36ms/blk)]
2025-05-05T10:11:47Z [bench] - Connect total: 53.46ms [90.66s (15.49ms/blk)]
```

## Breaking Changes

N/A

## Checklist:

- [x] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone

ACKs for top commit:
  PastaPastaPasta: utACK 72cfb93
  UdjinM6: utACK 72cfb93

Tree-SHA512: fd67770d42ed5c8b8d9a9fd542ae632d7ef425e8530902fa033310bc0da18b824e1c35b39c73daa99a480d83aef13ff709b19794c7aa7aa9f2f09d88f6b9ca6c