
Conversation

@ReubenBond ReubenBond commented May 29, 2025

This PR replaces the existing LRU implementation with one based on BitFaster.Caching's ConcurrentLru. The code, including tests, was ported and simplified to remove functionality that Orleans does not use.

Here is a description of the ConcurrentLru implementation from the source:

A pseudo LRU based on the TU-Q eviction policy. The LRU list is composed of 3 segments: hot, warm and cold. Cost of maintaining segments is amortized across requests. Items are only cycled when capacity is exceeded. Pure read does not cycle items if all segments are within capacity constraints. There are no global locks. On cache miss, a new item is added. Tail items in each segment are dequeued, examined, and are either enqueued or discarded.

The TU-Q scheme of hot, warm and cold is similar to that used in MemCached (https://memcached.org/blog/modern-lru/) and OpenBSD (https://flak.tedunangst.com/post/2Q-buffer-cache-algorithm), but does not use a background thread to maintain the internal queues.
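To make the policy above concrete, here is a heavily simplified, hypothetical sketch of the hot/warm/cold cycling. The ported ConcurrentLruCache additionally uses striped MPSC buffers, padded counters, and hit/miss metrics; the type and member names below are illustrative, not the actual Orleans code.

```csharp
using System;
using System.Collections.Concurrent;

// Illustrative sketch of TU-Q style hot/warm/cold cycling; not the ported implementation.
internal sealed class TuQLruSketch<TKey, TValue> where TKey : notnull
{
    private sealed class Entry
    {
        public Entry(TKey key, TValue value) { Key = key; Value = value; }
        public TKey Key { get; }
        public TValue Value { get; }
        public bool WasAccessed; // set on read; consulted only when the entry is cycled
    }

    private readonly ConcurrentDictionary<TKey, Entry> _map = new();
    private readonly ConcurrentQueue<Entry> _hot = new(), _warm = new(), _cold = new();
    private readonly int _hotCapacity, _warmCapacity, _coldCapacity;

    public TuQLruSketch(int hotCapacity, int warmCapacity, int coldCapacity) =>
        (_hotCapacity, _warmCapacity, _coldCapacity) = (hotCapacity, warmCapacity, coldCapacity);

    public TValue GetOrAdd(TKey key, Func<TKey, TValue> valueFactory)
    {
        if (_map.TryGetValue(key, out var existing))
        {
            existing.WasAccessed = true; // pure read: no queue movement, no locks
            return existing.Value;
        }

        var entry = new Entry(key, valueFactory(key));
        if (_map.TryAdd(key, entry))
        {
            _hot.Enqueue(entry); // new items always enter the hot segment
            Cycle();             // maintenance is amortized onto the writing request
        }
        return entry.Value;
    }

    private void Cycle()
    {
        // Hot overflow: recently accessed tail entries move to warm, the rest go cold.
        if (_hot.Count > _hotCapacity && _hot.TryDequeue(out var h))
            (h.WasAccessed ? _warm : _cold).Enqueue(h);

        // Warm overflow: accessed entries stay warm (flag reset), others demote to cold.
        if (_warm.Count > _warmCapacity && _warm.TryDequeue(out var w))
        {
            if (w.WasAccessed) { w.WasAccessed = false; _warm.Enqueue(w); }
            else _cold.Enqueue(w);
        }

        // Cold overflow: accessed entries are promoted back to warm, others are evicted.
        if (_cold.Count > _coldCapacity && _cold.TryDequeue(out var c))
        {
            if (c.WasAccessed) { c.WasAccessed = false; _warm.Enqueue(c); }
            else _map.TryRemove(c.Key, out _);
        }
    }
}
```

The key point is that a hit only flips a flag; the queues are touched only when a write pushes a segment past its capacity, which is why pure reads need no global locks.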

Ideally, we would have referenced RCache (whatever it ended up being called) from Microsoft.Extensions.*, but there hasn't been progress there.

This PR also changes the default Directory Cache policy from Adaptive to LRU. We anticipate removing the Adaptive policy in the future.

Fixes #8736


@ReubenBond force-pushed the feature/improved-lru branch from 341ae2d to 1d25967 on May 29, 2025 15:37
@ReubenBond force-pushed the feature/improved-lru branch from 1d25967 to 7c23def on May 29, 2025 15:38
@ReubenBond requested a review from Copilot on May 29, 2025 16:14

Copilot AI left a comment


Pull Request Overview

This PR replaces the existing custom LRU cache with a BitFaster.Caching–based ConcurrentLruCache and updates all consumers accordingly.

  • Removes the legacy LRU<TKey,TValue> implementation and ports/simplifies BitFaster’s ConcurrentLru internals under Orleans.Caching.Internal.
  • Updates GrainDirectory caches (LRU and Adaptive) and messaging codecs to use ConcurrentLruCache.
  • Changes the default directory caching policy from Adaptive to LRU.

Reviewed Changes

Copilot reviewed 24 out of 24 changed files in this pull request and generated 1 comment.

| File | Description |
| --- | --- |
| src/Orleans.Runtime/Utilities/StripedMpscBuffer.cs | Updated attribution comment and made the local Padding class static |
| src/Orleans.Runtime/GrainDirectory/LruGrainDirectoryCache.cs | New LruGrainDirectoryCache using ConcurrentLruCache |
| src/Orleans.Runtime/GrainDirectory/LRUBasedGrainDirectoryCache.cs | Removed legacy LRU-based cache class |
| src/Orleans.Runtime/GrainDirectory/GrainDirectoryCacheFactory.cs | Switched factory to LruGrainDirectoryCache |
| src/Orleans.Runtime/GrainDirectory/AdaptiveGrainDirectoryCache.cs | Migrated to ConcurrentLruCache and updated method calls |
| src/Orleans.Runtime/Configuration/Options/GrainDirectoryOptions.cs | Changed default caching strategy to LRU and formatted default values |
| src/Orleans.Core/Utils/LRU.cs | Removed old LRU implementation |
| src/Orleans.Core/Messaging/CachingSiloAddressCodec.cs | Switched shared cache to ConcurrentLruCache and fixed constructor |
| src/Orleans.Core/Messaging/CachingIdSpanCodec.cs | Switched shared cache and corrected lambda parameter order |
| src/Orleans.Core/Caching/Internal/TypeProps.cs | Added atomic-write helper |
| src/Orleans.Core/Caching/Internal/Striped64.cs | Added internal striped concurrency helper |
| src/Orleans.Core/Caching/Internal/Padding.cs | Added cache-line padding constants |
| src/Orleans.Core/Caching/Internal/PaddedQueueCount.cs | Added padded queue counts |
| src/Orleans.Core/Caching/Internal/PaddedLong.cs | Added padded long wrapper |
| src/Orleans.Core/Caching/Internal/ICacheMetrics.cs | Added cache metrics interface |
| src/Orleans.Core/Caching/Internal/ICache.cs | Added generic cache interface |
| src/Orleans.Core/Caching/Internal/Counter.cs | Added high-throughput counter |
| src/Orleans.Core/Caching/Internal/ConcurrentDictionarySize.cs | Added concurrent dictionary sizing helper |
| src/Orleans.Core/Caching/Internal/CapacityPartition.cs | Added capacity partitioning scheme |
| src/Orleans.Core/Caching/Internal/CacheDebugView.cs | Added debug view for caches |
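Several of the Caching/Internal helpers listed above (Padding, PaddedLong, Striped64, Counter) exist to keep hot counters from contending across cores. A minimal, hypothetical illustration of the cache-line padding idea follows; the sizes, offsets, and names are assumptions, not the actual Orleans types.

```csharp
using System.Runtime.InteropServices;
using System.Threading;

// Illustrative only: padding a frequently written counter onto its own cache line
// prevents false sharing when many threads bump hit/miss counts concurrently.
// Offsets and sizes below are assumptions, not the actual Orleans layout.
[StructLayout(LayoutKind.Explicit, Size = 128)]
internal struct PaddedCounterSketch
{
    // 64 bytes of padding before and after isolate the value on typical 64-byte lines.
    [FieldOffset(64)] private long _value;

    public long Value => Interlocked.Read(ref _value);

    public void Increment() => Interlocked.Increment(ref _value);
}
```

The striped counter presumably builds on the same idea, spreading increments across multiple padded slots that are summed on read, which is why the table above describes Counter as a high-throughput counter.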
Comments suppressed due to low confidence (2)

src/Orleans.Runtime.GrainDirectory/AdaptiveGrainDirectoryCache.cs:45

  • [nitpick] The predicate is named ActivationAddressesMatches here but ActivationAddressesMatch in LruGrainDirectoryCache. Consider unifying the naming for consistency.
private static readonly Func<GrainDirectoryCacheEntry, GrainAddress, bool> ActivationAddressesMatches = (entry, addr) => GrainAddress.MatchesGrainIdAndSilo(addr, entry.Address);

src/Orleans.Core/Messaging/CachingSiloAddressCodec.cs:17

  • The capacity was reduced from 128_000 in the previous LRU to 1_024 here—verify that this smaller cache size is intentional and sufficient for your workload.
internal static ConcurrentLruCache<SiloAddress, (SiloAddress Value, byte[] Encoded)> SharedCache { get; } = new(capacity: 1024);
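For reference, a hypothetical usage sketch of a shared cache like the one above, assuming the ported ConcurrentLruCache keeps a BitFaster-style GetOrAdd surface; the exact Orleans API may differ, and the type is internal to Orleans.

```csharp
// Hypothetical usage sketch; GetOrAdd(key, valueFactory) is assumed from BitFaster's
// ConcurrentLru surface and may not match the ported API exactly.
using System.Text;
using Orleans.Caching.Internal; // namespace per the file list above; internal to Orleans

var cache = new ConcurrentLruCache<string, byte[]>(capacity: 1024);

// Miss: the factory runs and the new entry enters the hot segment.
// Hit: the cached value is returned and the entry is marked as accessed,
// so a later cycle keeps it warm instead of discarding it.
byte[] encoded = cache.GetOrAdd("silo-1", static key => Encoding.UTF8.GetBytes(key));
```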

@ReubenBond force-pushed the feature/improved-lru branch from 175b5d2 to 8c531c2 on May 29, 2025 16:29

@bitfaster left a comment


LGTM

@ReubenBond merged commit e21a2fd into dotnet:main on May 30, 2025
28 checks passed
@ReubenBond deleted the feature/improved-lru branch on May 30, 2025 03:29
The github-actions bot locked and limited conversation to collaborators on Jun 29, 2025


Development

Successfully merging this pull request may close these issues.

Threadpool starvation on reaching the grain directory cache size limit

2 participants