[Don't merge] RAR cache file optimization #8901
Closed
Attempts to fix #8635
Context
The per-project RAR cache is always read from disk and deserialized prior to actual RAR execution. This is wasteful in situations where we already have all the required data in memory, i.e. when RAR is hosted in a long-running process such as the MSBuild server or an out-of-proc node. This PR attempts to implement a mechanism by which the load can be avoided, while maintaining the guarantee that the cache file has well-defined contents after RAR is done.
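A minimal sketch of the idea, under stated assumptions (the type and member names below, such as `InMemoryRarCacheStore` and `RarCacheSnapshot`, are hypothetical and not the actual MSBuild API): keep the deserialized cache in process memory keyed by the cache file path, skip the disk read when the in-memory copy is still in sync with the file, and only write the file back when the cache is dirty, so the on-disk contents remain well defined after RAR finishes.

```csharp
// Hypothetical sketch only; illustrates skipping redundant cache loads in a long-lived process.
using System;
using System.Collections.Concurrent;
using System.IO;

internal sealed class RarCacheSnapshot
{
    public object DeserializedCache { get; init; }   // the per-project cache object
    public DateTime FileTimestampUtc { get; init; }  // timestamp of the file it was loaded from
    public bool IsDirty { get; set; }                // set when RAR adds or changes entries
}

internal static class InMemoryRarCacheStore
{
    private static readonly ConcurrentDictionary<string, RarCacheSnapshot> s_caches =
        new(StringComparer.OrdinalIgnoreCase);

    // Returns the in-memory cache when it is still in sync with the file on disk;
    // otherwise falls back to reading and deserializing the file.
    public static object GetOrLoad(string cacheFilePath, Func<string, object> loadFromDisk)
    {
        DateTime onDisk = File.Exists(cacheFilePath)
            ? File.GetLastWriteTimeUtc(cacheFilePath)
            : DateTime.MinValue;

        if (s_caches.TryGetValue(cacheFilePath, out RarCacheSnapshot snapshot) &&
            snapshot.FileTimestampUtc == onDisk)
        {
            return snapshot.DeserializedCache;       // hot path: no disk read, no deserialization
        }

        object cache = loadFromDisk(cacheFilePath);
        s_caches[cacheFilePath] = new RarCacheSnapshot
        {
            DeserializedCache = cache,
            FileTimestampUtc = onDisk,
        };
        return cache;
    }

    // Saves only when the cache has changed, so the file always reflects the last RAR run.
    public static void SaveIfDirty(string cacheFilePath, Action<string, object> saveToDisk)
    {
        if (s_caches.TryGetValue(cacheFilePath, out RarCacheSnapshot snapshot) && snapshot.IsDirty)
        {
            saveToDisk(cacheFilePath, snapshot.DeserializedCache);
            snapshot.FileTimestampUtc = File.GetLastWriteTimeUtc(cacheFilePath);
            snapshot.IsDirty = false;
        }
    }
}
```

In this sketch the timestamp check is what preserves the guarantee from the paragraph above: if anything else touched the file, the stale in-memory copy is discarded and the file is re-read.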
Changes Made
Testing
Under /m:1, in large solutions like OrchardCore, zero loads and saves of cache files occur in incremental builds. Everything is satisfied from memory.
Notes
I am opening the PR for posterity, but the impact is currently not high enough to justify the churn and added complexity. The 5% cited above is achievable only in very special cases like server + /m:1. In real-world parallel builds, due to non-deterministic scheduling of projects to nodes, it takes a very long time for all nodes to get warmed up to the point where most cache loads can actually be avoided. We could do better if we loosened the requirements for saves (see this comment), but I'm afraid that would come back to bite us in the future.