1.9.0-b2 #4127

Merged
merged 23 commits into XRPLF:develop from develop-next on Mar 31, 2022

Conversation

@manojsdoshi manojsdoshi commented Mar 30, 2022

If merged this PR will:

close #4124
close #4122
close #4121
close #4119
close #4111
close #4077
close #4098
close #4089

solmsted and others added 11 commits March 23, 2022 10:48
The Xpring Forum shut down a while back, so it's no longer a relevant link for community discussions.
In order to preserve the Hooks ABI, it is important that field
values used for hooks be stable going forward.

This commit reserves the required codes so that they will not
be repurposed before Hooks can be proposed for inclusion in
the codebase.
This commit combines the `apply_mutex` and `read_mutex` into a single `mutex_`
var. This new `mutex_` var is a `shared_mutex`, and most operations only need to
lock it with a `shared_lock`. The only exception is `applyManifest`, which may
need a `unique_lock`.

One consequence of removing the `apply_mutex` is that more than one
`applyManifest` function can run at the same time. To help reduce the lock
contention that a `unique_lock` would cause, checks that only require reading
data are run under a `shared_lock` (call these the "prewriteChecks"), then the
lock is released, then a `unique_lock` is acquired. Since a currently running
`applyManifest` may write data between the time the `shared_lock` is released
and the `unique_lock` is acquired, the "prewriteChecks" need to be rerun.
Duplicating this work isn't ideal, but the "prewriteChecks" are relatively
inexpensive.

A couple of other designs were considered. We could prevent more than one
`applyManifest` function from running concurrently, either with a dedicated
mutex or by setting the maximum number of manifest jobs on the job queue to
one. The biggest issue with this is that if any other function ever takes a
write lock for any reason, `applyManifest` would now be broken: data could be
written between the release of the `shared_lock` and the acquisition of the
`unique_lock`. Note: it is tempting to solve this problem by not releasing the
`shared_mutex` and simply upgrading the lock. In the presence of concurrently
running `applyManifest` functions, this will deadlock (both functions need to
wait for the other to release their read locks before they can acquire a
write lock).
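Below is a minimal sketch of the locking pattern described above, under assumed names (`ManifestApplier`, `prewriteChecks`, and the `data_` member are illustrative stand-ins, not the actual classes): run the read-only checks under a `shared_lock`, drop it, reacquire a `unique_lock`, and rerun the checks before writing.

```cpp
// Sketch only: names and members are illustrative, not the real implementation.
#include <mutex>
#include <shared_mutex>

class ManifestApplier
{
    std::shared_mutex mutex_;
    int data_ = 0;

    // Read-only validation; the caller must hold at least a shared lock.
    bool prewriteChecks() const
    {
        return data_ >= 0;  // placeholder check
    }

public:
    void applyManifest()
    {
        {
            std::shared_lock lock{mutex_};  // cheap, concurrent with other readers
            if (!prewriteChecks())
                return;
        }  // shared lock released; another applyManifest may write here

        std::unique_lock lock{mutex_};

        // Data may have changed between the two locks, so the read-only
        // checks must be rerun before writing.
        if (!prewriteChecks())
            return;

        ++data_;  // the actual write
    }
};
```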
The existing trust line caching code was suboptimal in that it stored
redundant information, pinned SLEs into memory and required multiple
memory allocations per cached object.

This commit eliminates redundant data, reducing the size of cached
objects and unpinning SLEs from memory, and uses value types to avoid the need
for `std::shared_ptr`. As a result of these changes, the effective size of a
cached object, including the overhead of the memory allocator and the
`std::shared_ptr`, should be reduced by at least 64 bytes. This is
significant, as there can easily be tens of millions
of these objects.
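As a toy illustration of the direction described above (the field names are hypothetical, not the actual cached trust line layout), caching a small value type instead of a `std::shared_ptr<SLE>` removes the extra allocation and control-block overhead and no longer pins the full ledger entry in memory:

```cpp
// Sketch only: fields are illustrative, not the real cached trust line.
#include <cstdint>
#include <memory>

struct SLE {};  // placeholder for a full serialized ledger entry

// Before: each cached line pins an SLE and pays shared_ptr/control-block
// overhead plus a separate heap allocation per cached object.
struct CachedLineOld
{
    std::shared_ptr<SLE const> sle;
};

// After: a plain value type holding only what the caller actually needs,
// stored inline with no separate allocation and no pinned SLE.
struct CachedLineNew
{
    std::int64_t balance;
    std::int64_t limit;
    std::uint32_t flags;
};
```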
* Abort background path finding when closed or disconnected
* Exit pathfinding job thread if there are no requests left
* Don't bother creating the path find job if there are no requests
* Refactor to remove circular dependency between InfoSub and PathRequest
* Txs with the same fee level will sort by TxID XORed with the parent
  ledger hash.
* The TxQ is re-sorted after every ledger.
* Attempt to future-proof the TxQ tie breaking test
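A minimal sketch of the tie-breaking rule described in the list above, with hypothetical types (`Candidate`, `Hash256`, and `orderedBefore` are illustrative, not the actual TxQ code): candidates with equal fee levels are ordered by their transaction ID XORed with the parent ledger hash, so the ordering changes every ledger.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

using Hash256 = std::array<std::uint8_t, 32>;

// Byte-wise XOR of two 256-bit hashes.
Hash256 xorHashes(Hash256 const& a, Hash256 const& b)
{
    Hash256 out{};
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = a[i] ^ b[i];
    return out;
}

struct Candidate
{
    std::uint64_t feeLevel;
    Hash256 txID;
};

// Returns true if lhs should come before rhs for the given parent ledger
// hash: compare fee levels first, then the per-ledger XOR tie-breaker.
bool orderedBefore(
    Candidate const& lhs,
    Candidate const& rhs,
    Hash256 const& parentHash)
{
    if (lhs.feeLevel != rhs.feeLevel)
        return lhs.feeLevel > rhs.feeLevel;
    return xorHashes(lhs.txID, parentHash) < xorHashes(rhs.txID, parentHash);
}
```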
The `SHAMapInnerNode` class had a global mutex to protect the
array of node children. Profiling suggested that around 4% of
all attempts to lock the global would block.

This commit removes that global mutex and replaces it with a new per-node
16-way spinlock (implemented so as not to affect the size of an inner node
object), effectively eliminating the lock contention.
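A minimal sketch of the idea, assuming each inner node has at most 16 children (the class and member names are illustrative, not the actual `SHAMapInnerNode` code): pack one lock bit per child slot into a single atomic word, so locking one child does not contend with the other fifteen.

```cpp
#include <atomic>
#include <cstdint>
#include <thread>

class ChildSpinlocks
{
    std::atomic<std::uint16_t> bits_{0};  // one lock bit per child slot

public:
    void lock(int slot)  // slot in [0, 16)
    {
        auto const mask = static_cast<std::uint16_t>(1u << slot);
        // fetch_or returns the previous value; if our bit was already set,
        // another thread holds this slot, so spin until it clears.
        while (bits_.fetch_or(mask, std::memory_order_acquire) & mask)
            std::this_thread::yield();
    }

    void unlock(int slot)
    {
        auto const mask = static_cast<std::uint16_t>(1u << slot);
        bits_.fetch_and(static_cast<std::uint16_t>(~mask), std::memory_order_release);
    }
};
```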
When fetching ledgers, the existing code would isolate the peer
that sent the most useful responses and issue follow up queries
only to that peer.

This commit increases the query aggressiveness and changes the mechanism used
to select which peers receive follow-up queries, so as to spread the load more
evenly across those peers that provided useful responses.
- Avoid using std::shared_ptr
- Prefer using unordered maps to avoid linear searches
- Increase the interval between full order book updates
@manojsdoshi manojsdoshi requested a review from nbougalis March 30, 2022 18:50
@manojsdoshi manojsdoshi force-pushed the develop-next branch 2 times, most recently from 7da7cb4 to 62e8fe6 on March 30, 2022 20:24
JoelKatz and others added 12 commits March 30, 2022 15:16
Add smart replacement support to TaggedCache
(needed to avoid race conditions with negative caching).

Create a "hotDUMMY" object that represents the absence
of an object.

Allow DatabaseNodeImp::asyncFetch to complete immediately
if the object is in the cache (positive or negative).

Fix a bug in asyncFetch where the object returned may not
be the correct canonical version because we stash the
object in the results array before we canonicalize it.
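A minimal sketch of the two ideas above, with hypothetical names (`hotDummy` stands in for the "hotDUMMY" sentinel, and `fetchFromBackend`/`canonicalize` are placeholders for the real nodestore calls): a shared sentinel records that an object is known to be absent, and a fetched object is canonicalized before it is stashed in the results array.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

struct NodeObject {};  // placeholder for a fetched ledger node
using NodePtr = std::shared_ptr<NodeObject>;

// Shared sentinel meaning "looked up before and known to be absent".
NodePtr const hotDummy = std::make_shared<NodeObject>();

NodePtr fetchFromBackend() { return nullptr; }         // placeholder backend read
NodePtr canonicalize(NodePtr const& p) { return p; }   // placeholder: return the cache's canonical copy

// Fill results[i] for one requested object, completing immediately on a
// cache hit (positive or negative).
void asyncFetchOne(std::vector<NodePtr>& results, std::size_t i, NodePtr const& cached)
{
    if (cached == hotDummy)
    {
        results[i] = nullptr;  // negative cache hit: definitely not in the backend
        return;
    }
    if (cached)
    {
        results[i] = cached;   // positive cache hit
        return;
    }

    NodePtr fetched = fetchFromBackend();

    // Canonicalize *before* stashing in the results array; storing the raw
    // fetched pointer first could hand the caller a non-canonical duplicate.
    results[i] = fetched ? canonicalize(fetched) : nullptr;
}
```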
- Adjust default tree cache sizing
- Various micro-optimizations
This commit optimizes the way asynchronous nodestore operations are
processed both by reducing the amount of time locks are held and by
minimizing the number of memory allocations and data copying.
- Eliminate `tune` member function and allow `LedgerHistory`
  to fully initialize itself.
This commit modernizes the `AcceptedLedger` and `AcceptedLedgerTx`
classes, reduces their memory footprint and reduces unnecessary
dynamic memory allocations.
@manojsdoshi manojsdoshi requested review from ximinez and xzhaous March 31, 2022 17:46
@manojsdoshi manojsdoshi merged commit 711608e into XRPLF:develop Mar 31, 2022
ximinez commented Apr 1, 2022

This beta closed one of the wrong PRs. It closed #4111. It should have closed #4099.
