1.9.0-b2 #4127
Merged
Conversation
The Xpring Forum shut down a while back, so it's no longer a relevant link for community discussions.
In order to preserve the Hooks ABI, it is important that field values used for hooks be stable going forward. This commit reserves the required codes so that they will not be repurposed before Hooks can be proposed for inclusion in the codebase.
This commit combines the `apply_mutex` and `read_mutex` into a single `mutex_` variable. The new `mutex_` is a `shared_mutex`, and most operations only need to lock it with a `shared_lock`. The only exception is `applyManifest`, which may need a `unique_lock`. One consequence of removing the `apply_mutex` is that more than one `applyManifest` call can run at the same time. To help reduce the lock contention a `unique_lock` would cause, checks that only require reading data are run under a `shared_lock` (call these the "prewriteChecks"); then that lock is released and a `unique_lock` is acquired. Since a concurrently running `applyManifest` may write data between the time the `shared_lock` is released and the `unique_lock` is acquired, the "prewriteChecks" need to be rerun. Duplicating this work isn't ideal, but the "prewriteChecks" are relatively inexpensive.

A couple of other designs were considered. We could prevent more than one `applyManifest` from running concurrently, either by keeping a dedicated `apply_mutex` or by setting the maximum number of manifest jobs on the job queue to one. The biggest issue with this is that if any other function ever takes a write lock for any reason, `applyManifest` would now be broken: data could be written between the release of the `shared_lock` and the acquisition of the `unique_lock`. Note: it is tempting to solve this problem by not releasing the `shared_lock` and simply upgrading it to a write lock. In the presence of concurrently running `applyManifest` calls, this will deadlock (both calls need to wait for the other to release its read lock before either can acquire a write lock).
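In sketch form, the check/release/recheck pattern looks roughly like this (a simplified, hypothetical cache; the real `ManifestCache` logic is more involved):

```cpp
#include <cstdint>
#include <shared_mutex>
#include <string>
#include <unordered_map>

// Hypothetical sketch of the pattern described above; the key/sequence
// map stands in for the real manifest state.
class ManifestCacheSketch
{
    std::shared_mutex mutex_;
    std::unordered_map<std::string, std::uint32_t> seqByKey_;

    // Read-only validation (the "prewriteChecks"): safe under a shared_lock.
    bool
    prewriteChecks(std::string const& key, std::uint32_t seq) const
    {
        auto it = seqByKey_.find(key);
        return it == seqByKey_.end() || it->second < seq;
    }

public:
    bool
    applyManifest(std::string const& key, std::uint32_t seq)
    {
        {
            std::shared_lock lock{mutex_};
            if (!prewriteChecks(key, seq))
                return false;
        }  // shared_lock released: another applyManifest may write here

        std::unique_lock lock{mutex_};
        if (!prewriteChecks(key, seq))  // rerun: state may have changed
            return false;
        seqByKey_[key] = seq;
        return true;
    }
};
```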
The existing trust line caching code was suboptimal: it stored redundant information, pinned SLEs into memory, and required multiple memory allocations per cached object. This commit eliminates the redundant data, reducing the size of cached objects and unpinning SLEs from memory, and uses value types to avoid the need for `std::shared_ptr`. As a result of these changes, the effective size of a cached object, including the overhead of the memory allocator and the `std::shared_ptr`, should be reduced by at least 64 bytes. This is significant, as there can easily be tens of millions of these objects.
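As a rough illustration of the value-type idea (the fields below are hypothetical, not the actual cached trust line layout):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch: storing cached trust lines by value rather than
// behind std::shared_ptr removes a per-entry heap allocation and control
// block, and the cache no longer pins the source SLE in memory.
struct CachedTrustLine
{
    std::uint64_t balance;  // illustrative fields only
    std::uint64_t limit;
    std::uint32_t flags;
};

// Before (sketch): std::vector<std::shared_ptr<TrustLine>> lines_;
// After: a contiguous vector of values, one allocation for the lot.
std::vector<CachedTrustLine> lines_;
```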
* Abort background path finding when closed or disconnected
* Exit the pathfinding job thread if there are no requests left
* Don't bother creating the path find job if there are no requests
* Refactor to remove the circular dependency between InfoSub and PathRequest
* Txs with the same fee level will sort by TxID XORed with the parent ledger hash (see the sketch below).
* The TxQ is re-sorted after every ledger.
* Attempt to future-proof the TxQ tie breaking test.
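In sketch form, the tie-breaker works like this (a simplified stand-in using 64-bit values; rippled's hashes are 256-bit):

```cpp
#include <cstdint>

// Among transactions at the same fee level, order by TxID XOR the
// parent ledger hash. Salting with the parent hash reshuffles ties
// every ledger, so no transaction ID gets a permanent advantage.
bool
tieBreak(
    std::uint64_t txId1,
    std::uint64_t txId2,
    std::uint64_t parentLedgerHash)
{
    return (txId1 ^ parentLedgerHash) < (txId2 ^ parentLedgerHash);
}
```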
The `SHAMapInnerNode` class had a global mutex to protect the array of node children. Profiling suggested that around 4% of all attempts to lock the global would block. This commit removes that global mutex and replaces it with a new per-node 16-way spinlock (implemented so as not to affect the size of an inner node object), effectively eliminating the lock contention.
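The packed-spinlock idea can be sketched as follows (an illustrative stand-alone version, not the actual `SHAMapInnerNode` code): sixteen lock bits, one per child slot, share a single 16-bit atomic, so the lock costs no more space than a two-byte member.

```cpp
#include <atomic>
#include <cstdint>

// Illustrative 16-way spinlock: one bit per child slot packed into a
// single std::atomic<std::uint16_t>.
class SpinBitlock
{
    std::atomic<std::uint16_t> bits_{0};

public:
    void
    lock(int slot)
    {
        std::uint16_t const mask = 1u << slot;
        // Spin until this slot's bit transitions from 0 to 1.
        while (bits_.fetch_or(mask, std::memory_order_acquire) & mask)
            ;  // busy-wait; production code might pause or yield here
    }

    void
    unlock(int slot)
    {
        bits_.fetch_and(
            static_cast<std::uint16_t>(~(1u << slot)),
            std::memory_order_release);
    }
};
```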
When fetching ledgers, the existing code would isolate the peer that sent the most useful responses and issue follow-up queries only to that peer. This commit increases the query aggressiveness and changes the mechanism used to select which peers receive follow-up queries, so as to spread the load more evenly among the peers that provided useful responses.
- Avoid using std::shared_ptr
- Prefer unordered maps to avoid linear searches
- Increase the interval between full order book updates
manojsdoshi force-pushed the develop-next branch 2 times, most recently from 7da7cb4 to 62e8fe6 on March 30, 2022 20:24
- Add support to TaggedCache for smart replacement (needed to avoid race conditions with negative caching)
- Create a "hotDUMMY" object that represents the absence of an object
- Allow DatabaseNodeImp::asyncFetch to complete immediately if the object is in the cache (positively or negatively)
- Fix a bug in asyncFetch where the object returned may not be the correct canonical version, because the object was stashed in the results array before being canonicalized (illustrated below)
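The canonicalization fix comes down to an ordering change, sketched here with hypothetical stand-ins for the real nodestore types:

```cpp
#include <map>
#include <memory>

// Hypothetical stand-ins; the real types live in rippled's nodestore.
struct NodeObject
{
    int key = 0;
};

std::map<int, std::shared_ptr<NodeObject>> cache;

// Replace obj with the canonical cached copy, inserting it if absent.
void
canonicalize(std::shared_ptr<NodeObject>& obj)
{
    obj = cache.try_emplace(obj->key, obj).first->second;
}

void
fetchOne(std::shared_ptr<NodeObject> obj, std::shared_ptr<NodeObject>& resultSlot)
{
    // The bug: stashing obj into the results array *before* canonicalizing
    // could leave a non-canonical copy in the results. Fixed ordering:
    canonicalize(obj);  // first swap in the canonical instance...
    resultSlot = obj;   // ...then record it in the results array
}
```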
- Adjust default tree cache sizing
- Various micro-optimizations
This commit optimizes the way asynchronous nodestore operations are processed, both by reducing the time locks are held and by minimizing memory allocations and data copying.
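One common shape for that kind of change, sketched under assumption rather than taken from the actual nodestore code: swap the pending batch out while holding the lock, then do the slow work unlocked.

```cpp
#include <mutex>
#include <vector>

std::mutex queueMutex;
std::vector<int> pending;  // stand-in for queued async fetch requests

void
processBatch()
{
    std::vector<int> batch;
    {
        std::lock_guard lock{queueMutex};
        batch.swap(pending);  // O(1) swap: nothing is copied under the lock
    }

    // Perform the (potentially slow) work with the lock released.
    for (int request : batch)
        (void)request;  // ... fetch and deliver each request ...
}
```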
- Eliminate `tune` member function and allow `LedgerHistory` to fully initialize itself.
This commit modernizes the `AcceptedLedger` and `AcceptedLedgerTx` classes, reduces their memory footprint, and eliminates unnecessary dynamic memory allocations.
manojsdoshi force-pushed the develop-next branch from 62e8fe6 to 711608e on March 30, 2022 22:16
scottschurr approved these changes Mar 31, 2022
If merged this PR will:
close #4124
close #4122
close #4121
close #4119
close #4111
close #4077
close #4098
close #4089