Feature proposal: serve completion items from cache #45
Hi @pappasam,

I still use this language server every day and I really like it. Since I develop with quite large frameworks, autocompletion can be slow (even after an initial completion). I was thinking: it would be relatively* easy to implement a sort of caching layer, so we could serve completions fast and then update them once we get the actual completion items from Jedi.

What do you think of such a feature? I could take a look at how difficult it is to implement and start a pull request if you want.

Thanks,
Hans
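A minimal sketch of the proposed "serve stale, refresh in the background" pattern (a hypothetical helper, not code from this repo; the replies below explain why the cache key and invalidation are the hard parts):

import threading
from typing import Callable, Dict, Hashable, List

_completion_cache: Dict[Hashable, List] = {}

def cached_completions(key: Hashable, compute: Callable[[], List]) -> List:
    """Serve the last known completions immediately; refresh off-thread.

    `key` would have to capture everything that affects the result
    (document, position, imported modules...), which is exactly the
    hard part of cache validation.
    """
    def refresh() -> None:
        _completion_cache[key] = compute()

    stale = _completion_cache.get(key)
    if stale is None:
        refresh()  # first request: nothing cached, compute synchronously
        return _completion_cache[key]
    threading.Thread(target=refresh, daemon=True).start()
    return stale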
While this would be an interesting exercise, my current understanding is that we can't (easily) cache much beyond what Jedi has already cached, because cache validation is extremely complicated for code-completion tools. For example, if we cache completions for code you've imported and that code itself changes, it's somewhat involved to know that we should evict everything associated with that module from the cache and re-analyze it at a future date. If you just want read-only completions, we already support: https://github.com/pappasam/coc-jedi#jedijedisettingsautoimportmodules That said, if you want to take a stab at something, I'd be happy to be proven wrong! If you'd like to read more about Jedi's caching challenges, see this issue: davidhalter/jedi#1059
@hanspinckaers let me know if you notice any performance improvements from the latest release: https://github.com/pappasam/jedi-language-server/blob/master/CHANGELOG.md#0210
Hi! Sounds good! Sorry for not replying to your message; I was on vacation. Your points/concerns sound really valid. For me, I just wanted a bit of a quick hack that first serves cached completions, asks Jedi for new completions in the background, and then updates the completion list (not sure if that's possible with language servers). This way you're never out of date for long. One quick (beginner) question: how do you develop/debug the language server? I want to run it from the command line and be able to use breakpoints, etc.
Developing / debugging is currently not the simplest process. Abstractly, you can temporarily add a server.show_message(...) call inside the handler you're working on to surface values in your editor, for example:
# Handler from jedi_language_server/server.py; imports added for context
# (module paths assume the pygls 0.9-era API in use at the time).
from typing import Optional

from pygls.features import COMPLETION
from pygls.types import CompletionList, CompletionParams


@SERVER.feature(COMPLETION, trigger_characters=[".", "'", '"'])
def completion(
    server: JediLanguageServer, params: CompletionParams
) -> Optional[CompletionList]:
    """Returns completion items."""
    ...  # elided: query Jedi for completions_jedi, char_before_cursor, etc.
    completion_items = [
        jedi_utils.lsp_completion_item(
            name=completion,
            char_before_cursor=char_before_cursor,
            enable_snippets=enable_snippets,
            markup_kind=markup_kind,
        )
        for completion in completions_jedi
    ]
    # Debugging trick: echo the computed items back to the client so you
    # can inspect them in your editor without attaching a debugger.
    if completion_items:
        server.show_message(str(completion_items))
    return (
        CompletionList(is_incomplete=False, items=completion_items)
        if completion_items
        else None
    )
Note: it's somewhat difficult to develop your development tools (like this very language server) while you're using them.
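If you want command-line breakpoints specifically, one approach (a sketch, assuming the third-party debugpy package; not a documented workflow for this project) is to make the server wait for a debugger to attach before serving requests:

import debugpy

debugpy.listen(("localhost", 5678))  # start a debug adapter; port is arbitrary
debugpy.wait_for_client()            # block until your IDE attaches
# ...then start the language server as usual, e.g. SERVER.start_io()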
I'm sorry @pappasam, I'm quite busy at the moment. I did, however, remove all the type-information retrieval and disable snippets, and now autocomplete flies. It's incredibly fast: Numpy completions show up for me in < 200 ms, and instantly the second time. So, for me, that is actually enough; I don't really look at the type information anyway. I do miss the snippets, but the signature help in coc.nvim compensates. Maybe we want to make this an option? It is pretty barebones, though. I was thinking we could also limit type-information retrieval to the first x items in the completion list, but I don't think that will work with the trimming of the list via fuzzy matching in some clients.
I just tested locally, and it seems disabling snippets gives a sufficient performance improvement for Numpy to complete quickly (after the first completion) when including it in autoImportModules. We already have an initialization option for disabling snippets, so maybe we just need to document that disabling snippets + autoImportModules can help with completion performance issues.
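For context, a sketch of why the snippets toggle matters (assumed shape, with pygls 0.9-era types; this is not the project's actual helper): with snippets off, an item can be built from the completion name alone, skipping the signature lookups that dominate the cost.

from pygls.types import CompletionItem, InsertTextFormat

def make_item(completion, enable_snippets: bool) -> CompletionItem:
    if not enable_snippets:
        # Fast path: label and plain-text insert only; no signature lookups.
        return CompletionItem(
            label=completion.name,
            insert_text=completion.name,
            insert_text_format=InsertTextFormat.PlainText,
        )
    # Snippet path: building placeholders requires fetching signatures
    # from Jedi, which is the slow part for large libraries like numpy.
    ...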
Could implementation of completionItem/resolve help here? The idea is that the server returns completion items quickly with only the cheap fields populated, and the client then lazily requests the expensive properties (documentation, detail, and so on) for the item the user actually selects. The lag for large completion lists comes largely from computing that extra information for every item up front. I would be happy to help with implementation.
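For concreteness, a sketch of what that could look like, reusing the names from the handler shown earlier (the method string is the LSP 3.16 one; completions_cache is a hypothetical attribute where the completion handler would stash its jedi.api.classes.Completion objects, keyed by label):

COMPLETION_ITEM_RESOLVE = "completionItem/resolve"  # LSP 3.16 method

@SERVER.feature(COMPLETION_ITEM_RESOLVE)
def completion_item_resolve(
    server: JediLanguageServer, item: CompletionItem
) -> CompletionItem:
    """Fill in the expensive fields for the one item the client selected."""
    completion = server.completions_cache.get(item.label)  # hypothetical cache
    if completion is not None:
        item.detail = completion.description
        item.documentation = completion.docstring(raw=True)
    return item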
@krassowski interesting... yes, I think the new resolution patterns in 3.16 are a great opportunity for us to speed things up a lot (and to add new features, like selecting a sane name for …). Would love your thoughts on supporting LSP 3.16 features given the aforementioned challenges!
I agree 3.16 is fresh. What about implementing completionItem/resolve behind a user-facing option? I think it might be nice to allow the user to configure the server so that it either returns everything eagerly (at the price of lower performance) or uses lazy resolution.
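A sketch of that switch (the option name resolve_eagerly and both helpers are hypothetical, not existing options in this server):

from pygls.types import CompletionItem

def bare_item(completion) -> CompletionItem:
    # Cheap: label only; the rest arrives via completionItem/resolve.
    return CompletionItem(label=completion.name)

def full_item(completion) -> CompletionItem:
    # Expensive: docstring/description lookups for every single item,
    # but works with clients that never call completionItem/resolve.
    item = CompletionItem(label=completion.name)
    item.detail = completion.description
    item.documentation = completion.docstring(raw=True)
    return item

def completion_items(completions_jedi, resolve_eagerly: bool):
    return [
        full_item(c) if resolve_eagerly else bare_item(c)
        for c in completions_jedi
    ]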
Actually, clients already have to opt in to allow lazy resolution of anything other than "documentation" and "detail", so there's no backward-compatibility issue here:

/**
* Indicates which properties a client can resolve lazily on a
* completion item. Before version 3.16.0 only the predefined properties
* `documentation` and `details` could be resolved lazily.
*
* @since 3.16.0
*/
resolveSupport?: {
/**
* The properties that a client can resolve lazily.
*/
properties: string[];
};
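Server-side, honoring that opt-in could look like this (a sketch over a plain dict of the client's capabilities; the nested keys mirror the capability path in the spec excerpt above):

def client_can_lazily_resolve(capabilities: dict, prop: str) -> bool:
    """True if the client listed `prop` in completionItem.resolveSupport."""
    resolve_support = (
        capabilities.get("textDocument", {})
        .get("completion", {})
        .get("completionItem", {})
        .get("resolveSupport", {})
    )
    return prop in resolve_support.get("properties", [])

# e.g. only defer detail when client_can_lazily_resolve(caps, "detail")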
FYI, here is the implementation for pyls: https://github.com/palantir/python-language-server/pull/905/files
@hanspinckaers does #56 solve this issue for you?
Looks good! Will take a look later today.
Yes, this PR with snippets disabled is just as fast as my dirty hack. Awesome work! I'm not sure if the …
I believe this is resolved as of recent releases. @hanspinckaers let me know if you think otherwise.