
[spring-index] optimize cache operations #1071

Closed
martinlippert opened this issue Jul 12, 2023 · 1 comment

@martinlippert
Member

At the moment, the index cache that stores symbols, the bean index, and potentially additional information on disc offers room for improvement. The cache structure is kept in memory, updated internally, and written to disc on every change. This causes frequent cache writes to disc and permanently consumes additional memory for the full cache content.

Two thoughts:

  • the cache writes to disc don't need to happen on every event; they could be accumulated in memory and written to disc only on specific events (once an hour, after x number of changes, or something else)

  • the internal cache structure on disc could be changed towards an incremental model, where individual change events (deltas) are written to disc in append mode. At specific moments (initial cache read, after certain events, or something else) those deltas could be merged into the overall cache
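The first idea above can be sketched as a small write-batching layer. This is a minimal illustration, not the spring-tools implementation; the class and method names (`BatchedCache`, `flush`, `flushThreshold`) and the in-memory stand-in for the disc file are all assumptions made for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of batched cache writes: updates accumulate in memory and are only
// "written to disc" once a threshold of changes is reached. The persisted map
// stands in for the serialized cache file. Illustrative names throughout.
public class BatchedCache {
    private final Map<String, String> pending = new HashMap<>();   // not yet on disc
    private final Map<String, String> persisted = new HashMap<>(); // stand-in for the cache file
    private final int flushThreshold;
    private int discWrites = 0;

    public BatchedCache(int flushThreshold) {
        this.flushThreshold = flushThreshold;
    }

    // Record a change; flush to "disc" only after flushThreshold changes.
    public void update(String file, String symbols) {
        pending.put(file, symbols);
        if (pending.size() >= flushThreshold) {
            flush();
        }
    }

    // Stand-in for serializing the accumulated changes to the cache file.
    public void flush() {
        persisted.putAll(pending);
        pending.clear();
        discWrites++;
    }

    // Pending (newer) entries win over persisted ones.
    public String lookup(String file) {
        return pending.getOrDefault(file, persisted.get(file));
    }

    public int discWrites() {
        return discWrites;
    }
}
```

With a threshold of 3, two updates cause no disc write; the third triggers a single flush covering all of them, which is exactly the accumulation behaviour the bullet describes.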

@martinlippert
Member Author

The default cache implementation is switched over to the new delta-based cache implementation, which doesn't keep the cached data (symbols, diagnostics, etc.) in memory anymore. Instead, delta operations are appended to the cache file on disc. The cache does a compacting operation (folding all delta operations into the main cache storage) after every 20 operations.

This makes cache updates on individual file save events a lot faster than before, since the full cache is no longer serialized to JSON and stored on disc. Instead, only the updated content of the saved file is appended as a delta object.

This also means the entire cached data no longer needs to be kept in memory. It is only loaded during compacting operations, where the cache is read, the delta operations are folded in, and the result is stored again. This reduces the overall memory footprint of the language server process significantly, depending of course on the size of the projects in the workspace.
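The delta-based model described above can be sketched as an append-only log folded into a main snapshot. This is an illustration under stated assumptions, not the actual cache code: the names (`DeltaCache`, `store`, `compact`) and the in-memory snapshot/log structures are hypothetical, though the compaction threshold of 20 operations matches the number mentioned in the comment.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the delta-based cache: each file-save event appends a small delta
// record instead of rewriting the full cache; after COMPACT_EVERY deltas the
// log is folded into the main snapshot and truncated. Illustrative names.
public class DeltaCache {
    static final int COMPACT_EVERY = 20; // threshold mentioned in the comment above

    private final Map<String, String> snapshot = new HashMap<>(); // main cache storage
    private final List<String[]> deltaLog = new ArrayList<>();    // append-only deltas
    private int compactions = 0;

    // Append a delta (cheap) rather than serializing the whole cache.
    public void store(String file, String symbols) {
        deltaLog.add(new String[] { file, symbols });
        if (deltaLog.size() >= COMPACT_EVERY) {
            compact();
        }
    }

    // Fold all deltas into the snapshot, then clear the log.
    public void compact() {
        for (String[] delta : deltaLog) {
            snapshot.put(delta[0], delta[1]);
        }
        deltaLog.clear();
        compactions++;
    }

    // Reads start from the snapshot; newer deltas in the log win.
    public String lookup(String file) {
        String result = snapshot.get(file);
        for (String[] delta : deltaLog) {
            if (delta[0].equals(file)) {
                result = delta[1];
            }
        }
        return result;
    }

    public int compactions() {
        return compactions;
    }
}
```

Note that between compactions the full cached data never needs to live in memory: lookups only replay the short delta log on top of the snapshot, which is the memory-footprint win described above.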
