Cache frozen solution snapshots at the solution level (not the SolutionCompilationState level). #72193
Conversation
```csharp
/// Mapping of DocumentId to the frozen solution we produced for it the last time we were queried. This
/// instance should be used as its own lock when reading or writing to it.
/// </summary>
private readonly Dictionary<DocumentId, AsyncLazy<Solution>> _documentIdToFrozenSolution = [];
```
Note:
- I've been burned by ConcurrentDictionary in the past, so I prefer just a normal dictionary with locking. We could revisit this if we see any issues.
- Currently, it's just using normal locking. We could consider cancellable locking with a SemaphoreSlim if we see any issues.
```csharp
        lazySolution = CreateLazyFrozenSolution(this.CompilationState, documentId);
        _documentIdToFrozenSolution.Add(documentId, lazySolution);
    }
}
```
This should be cheap/fast, as we're not doing anything beyond looking up or creating the lazy (not running the actual computation).
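The pattern above can be sketched outside of Roslyn. This is a minimal Python stand-in (all names here are hypothetical, not Roslyn's API): a plain dictionary guarded by a lock, where only the cheap lookup/creation of the lazy holder happens under the lock, and the expensive computation runs later when the lazy is first forced.

```python
import threading

class Lazy:
    """Compute-once holder; safe for concurrent callers."""
    _UNSET = object()

    def __init__(self, factory):
        self._factory = factory
        self._lock = threading.Lock()
        self._value = Lazy._UNSET

    def value(self):
        with self._lock:
            if self._value is Lazy._UNSET:
                self._value = self._factory()
            return self._value

class FrozenSolutionCache:
    """Sketch of the comment's approach: a normal dict plus a lock
    (rather than a concurrent dictionary)."""

    def __init__(self, compute):
        self._compute = compute          # expensive: doc_id -> frozen snapshot
        self._lock = threading.Lock()
        self._map = {}                   # doc_id -> Lazy

    def get_lazy(self, doc_id):
        # Cheap/fast: only look up or create the lazy under the lock;
        # the actual computation is deferred to Lazy.value().
        with self._lock:
            lazy = self._map.get(doc_id)
            if lazy is None:
                lazy = Lazy(lambda: self._compute(doc_id))
                self._map[doc_id] = lazy
            return lazy
```

Repeated lookups for the same id hand back the same lazy, so the expensive computation runs at most once per document.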
```csharp
{
    return new AsyncLazy<Solution>(
        cancellationToken => Task.FromResult(ComputeFrozenSolution(compilationState, documentId, cancellationToken)),
        cancellationToken => ComputeFrozenSolution(compilationState, documentId, cancellationToken));
```
A bit of an odd pattern that's been arising recently. All the actual computation work is async, but we're storing it in an AsyncLazy. This gives us the nice benefit that N callers can be trying to get the value at the same time, but only one will compute it. Also, if all of them cancel, the underlying work cancels (as opposed to the case where N callers are asking, the one computing ends up cancelling, and the next one waiting has to redo all the work).
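The two properties described above (N concurrent callers share one computation; the underlying work cancels only when *every* caller has cancelled) can be sketched with asyncio. This is a hypothetical toy, not Roslyn's `AsyncLazy`:

```python
import asyncio

class AsyncLazyish:
    """Toy analogue of the pattern: waiters share one underlying task.
    One waiter cancelling does not kill the shared work; the work is
    cancelled only when the last waiter goes away before completion."""

    def __init__(self, factory):
        self._factory = factory     # async callable producing the value
        self._task = None
        self._waiters = 0

    async def get(self):
        if self._task is None:
            self._task = asyncio.ensure_future(self._factory())
        self._waiters += 1
        try:
            # shield: cancelling this waiter does not cancel the shared task
            return await asyncio.shield(self._task)
        finally:
            self._waiters -= 1
            if self._waiters == 0 and not self._task.done():
                # last waiter cancelled -> cancel the underlying work
                self._task.cancel()
                self._task = None

async def demo():
    calls = 0
    async def compute():
        nonlocal calls
        calls += 1
        await asyncio.sleep(0)
        return "frozen"

    lazy = AsyncLazyish(compute)
    # three concurrent callers, a single computation
    results = await asyncio.gather(lazy.get(), lazy.get(), lazy.get())

    # all-callers-cancel scenario: the underlying work is cancelled too
    started = asyncio.Event()
    work_cancelled = False
    async def slow():
        nonlocal work_cancelled
        started.set()
        try:
            await asyncio.sleep(60)
        except asyncio.CancelledError:
            work_cancelled = True
            raise

    lazy2 = AsyncLazyish(slow)
    caller = asyncio.create_task(lazy2.get())
    await started.wait()
    caller.cancel()
    try:
        await caller
    except asyncio.CancelledError:
        pass
    await asyncio.sleep(0.05)   # let cancellation reach the inner work
    return results, calls, work_cancelled
```

Without the waiter count, cancelling the first caller would cancel the shared task out from under the others, forcing the next waiter to redo all the work — exactly the failure mode the comment describes.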
```csharp
/// <summary>
/// Result of calling <see cref="WithFrozenPartialCompilationsAsync"/>.
/// </summary>
private AsyncLazy<Solution> CachedFrozenSolution { get; init; }
```
Now all the cached state is always at the Solution level, not the SolutionCompilationState level.
Since every new SolutionCompilationState would always produce a new Solution, this was not useful to have in the inner layer (especially since external consumers only work with the outer Solution/Document types).
```csharp
/// </summary>
private readonly Dictionary<DocumentId, SolutionCompilationState> _cachedFrozenDocumentState = [];

private readonly AsyncLazy<SolutionCompilationState> _cachedFrozenSnapshot;
```
Removed both the document map and the cached frozen solution-level state. Both concepts are now held entirely at the Solution level.
With this change, if you ask a Document instance multiple times for its frozen snapshot, you'll always get back the same instance. Previously, you'd always get a fresh solution instance back.
That meant you would never benefit from things like sharing data cached on a Document instance (like the SemanticModel, for example). So if you had N features waking up, freezing a Doc, and then getting semantic models, they'd all operate on different instances. Now, they'll operate on the same instance, as long as one of them is currently keeping it alive.