The commit implements a variant of an LRU cache with lazy eviction (added in the submodule `lru::lazy::LruCache`) with amortized O(1) performance. In use cases that can tolerate the cache storing up to 2x its capacity and the amortized nature of its performance, this results in better average performance, as shown by the added benchmarks.
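For illustration, here is a minimal sketch of the lazy-eviction idea (not the actual `lru::lazy::LruCache` code; the type and method names below are hypothetical): each entry carries an ordinal instead of sitting in a linked list, the map is allowed to grow to twice its capacity, and overflow triggers a single O(n) compaction pass, which keeps `put` amortized O(1).

```rust
use std::collections::HashMap;
use std::hash::Hash;

/// Hypothetical sketch of lazy eviction. Entries carry an ordinal
/// (a logical timestamp) instead of living in a doubly linked list.
struct LazyLru<K, V> {
    capacity: usize,
    counter: u64,
    map: HashMap<K, (u64, V)>, // key -> (ordinal, value)
}

impl<K: Eq + Hash, V> LazyLru<K, V> {
    fn new(capacity: usize) -> Self {
        assert!(capacity > 0);
        Self {
            capacity,
            counter: 0,
            map: HashMap::with_capacity(2 * capacity),
        }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        self.counter += 1;
        match self.map.get_mut(key) {
            Some((ordinal, value)) => {
                *ordinal = self.counter; // refresh recency on access
                Some(value)
            }
            None => None,
        }
    }

    fn put(&mut self, key: K, value: V) {
        self.counter += 1;
        self.map.insert(key, (self.counter, value));
        // Lazy eviction: no work on the hot path; compact only when
        // the map overflows twice the nominal capacity.
        if self.map.len() > 2 * self.capacity {
            self.evict_older_half();
        }
    }

    fn evict_older_half(&mut self) {
        // Keep only the `capacity` most recently used entries. The
        // O(n) selection is amortized over the ~capacity inserts it
        // took to overflow, so each operation stays amortized O(1).
        let mut ordinals: Vec<u64> = self.map.values().map(|(o, _)| *o).collect();
        let index = ordinals.len() - self.capacity;
        let (_, cutoff, _) = ordinals.select_nth_unstable(index);
        let cutoff = *cutoff;
        self.map.retain(|_, (ordinal, _)| *ordinal >= cutoff);
    }
}
```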
Additionally, with the existing implementation, `get` requires a mutable reference (`&mut self`). In a multi-threaded setting, this requires an exclusive write lock on the cache even on the read path, which can exacerbate lock contention. With lazy eviction, the ordinal values can be updated using atomic operations, allowing a shared lock for lookups, as the sketch below illustrates.
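To sketch why this unlocks a shared read path (again with hypothetical names, not the PR's actual code): if recency is a per-entry atomic ordinal, refreshing it needs only `&self`, so a concurrent wrapper can take an `RwLock` read guard for `get` and reserve the write guard for inserts and compaction.

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::RwLock;

/// Hypothetical sketch: recency updates go through atomics, so lookups
/// only need a shared (read) lock rather than an exclusive (write) lock.
struct ConcurrentLazyLru<V> {
    counter: AtomicU64,
    map: RwLock<HashMap<u64, (AtomicU64, V)>>, // key -> (ordinal, value)
}

impl<V: Clone> ConcurrentLazyLru<V> {
    fn new() -> Self {
        Self {
            counter: AtomicU64::new(0),
            map: RwLock::new(HashMap::new()),
        }
    }

    fn get(&self, key: u64) -> Option<V> {
        let ordinal = 1 + self.counter.fetch_add(1, Ordering::Relaxed);
        let map = self.map.read().unwrap(); // shared lock on the read path
        map.get(&key).map(|(entry_ordinal, value)| {
            // Refresh recency with a plain atomic store; no `&mut self`
            // and no exclusive lock required.
            entry_ordinal.store(ordinal, Ordering::Relaxed);
            value.clone()
        })
    }

    fn put(&self, key: u64, value: V) {
        let ordinal = 1 + self.counter.fetch_add(1, Ordering::Relaxed);
        let mut map = self.map.write().unwrap(); // exclusive lock for writes
        map.insert(key, (AtomicU64::new(ordinal), value));
        // Capacity tracking and lazy compaction would also run here,
        // under the write lock (elided in this sketch).
    }
}
```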
Results from the added benchmarks on my machine (reproducible with `cargo bench`): 22% improvement on `get` and 38% improvement on `put`.
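A minimal Criterion harness along these lines could reproduce the comparison; the constructor and method names below assume an API mirroring the existing `LruCache` and are only illustrative.

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use lru::lazy::LruCache; // submodule added by this PR

fn bench_get(c: &mut Criterion) {
    // Assumed constructor/API shape, mirroring the existing cache.
    let mut cache = LruCache::new(1024);
    for i in 0..1024u64 {
        cache.put(i, i);
    }
    c.bench_function("lazy_lru_get", |b| {
        b.iter(|| black_box(cache.get(black_box(&42))))
    });
}

criterion_group!(benches, bench_get);
criterion_main!(benches);
```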
I have implemented the primary cache API, but not all of it. I wanted to get feedback on how likely you would be to merge this before writing more code. Please let me know if you are open to merging this, and I will complete the API.