
Feature proposal: serve completion items from cache #45

Closed
hanspinckaers opened this issue Oct 26, 2020 · 15 comments

Comments

@hanspinckaers (Contributor)

Hi @pappasam,

I still use this language server every day and I really like it. Since I develop with quite large frameworks, autocompletion can be slow (even after an initial completion). I was thinking it would be relatively* easy to implement a sort of caching layer, so we could serve completions fast and then update them once we get the actual completion items from Jedi.

What do you think of such a feature? I could take a look at how difficult it would be to implement and start a pull request if you want.

Thanks,
Hans

  • famous last words
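The idea could be sketched as a stale-while-revalidate cache: serve the last known completions for a key immediately, then refresh the cache in the background so the next request is up to date. This is a hypothetical, plain-Python sketch (the `CompletionCache` class and its `compute` callback are invented names, not jedi-language-server code):

```python
import threading


class CompletionCache:
    """Serve the last known completions instantly; refresh in the background."""

    def __init__(self, compute):
        self._compute = compute  # the expensive call, e.g. a Jedi query
        self._cache = {}
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            cached = self._cache.get(key)
        if cached is None:
            # First request for this key: no choice but to wait.
            return self._refresh(key)
        # Serve the (possibly stale) result now; refresh for next time.
        threading.Thread(target=self._refresh, args=(key,), daemon=True).start()
        return cached

    def _refresh(self, key):
        fresh = self._compute(key)
        with self._lock:
            self._cache[key] = fresh
        return fresh
```

The open question, as discussed below, is invalidation: this sketch happily serves stale results until a refresh lands, which is exactly the trade-off being proposed.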
@hanspinckaers hanspinckaers changed the title Feature proposal: server completion items from cache Feature proposal: serve completion items from cache Oct 26, 2020
@pappasam (Owner)

While this would be an interesting exercise, my current understanding is that we can't (easily) cache much beyond what Jedi has already cached, because cache invalidation is extremely complicated for code-completion tools. For example, if we cache completions for code that you've imported and that code itself changes, it's somewhat involved to know when we can invalidate everything associated with that module and re-analyze it later. If you just want read-only completions, we already support: https://github.com/pappasam/coc-jedi#jedijedisettingsautoimportmodules

That said, if you want to take a stab at something and are able to:

  1. Provide a (relatively) simple, easy-to-work-with/extend jedi-language-server cache implementation
  2. Show that it is demonstrably better in terms of performance
  3. Show that it doesn't introduce any horrible edge cases

I'd be happy to be proven wrong!

If you'd like to read more about Jedi's caching challenges, see this issue: davidhalter/jedi#1059

@pappasam (Owner) commented Nov 6, 2020

@hanspinckaers let me know if you notice any performance improvements from the latest release: https://github.com/pappasam/jedi-language-server/blob/master/CHANGELOG.md#0210

@hanspinckaers (Contributor, Author)

Hi! Sounds good! Sorry for not replying to your message; I was on vacation. Your points/concerns sound really valid.

For me, I just wanted a quick hack that first serves cached completions while asking Jedi for fresh ones in the background, then updates the completion list (not sure whether that's possible with language servers). That way you're never out of date for long.

One quick (beginner) question: how do you develop/debug the language server? I want to run it from the command line and be able to use breakpoints, etc.

@pappasam (Owner) commented Nov 9, 2020

Developing / debugging is currently not the simplest process. Abstractly:

  1. Make sure your language server is running the development version of jedi-language-server (e.g., the one you've pip-installed from the cloned repo)
  2. To see what's happening, insert server.show_message calls to essentially "echo" data in your editor. For example, in jedi_language_server.server.completion, if you want to see the completion items that are returned, add the following lines:
@SERVER.feature(COMPLETION, trigger_characters=[".", "'", '"'])
def completion(
    server: JediLanguageServer, params: CompletionParams
) -> Optional[CompletionList]:
    """Returns completion items."""
    ...
    completion_items = [
        jedi_utils.lsp_completion_item(
            name=completion,
            char_before_cursor=char_before_cursor,
            enable_snippets=enable_snippets,
            markup_kind=markup_kind,
        )
        for completion in completions_jedi
    ]
    if completion_items:
        # Debugging aid: echo the completion items in your editor
        server.show_message(str(completion_items))
    return (
        CompletionList(is_incomplete=False, items=completion_items)
        if completion_items
        else None
    )
  3. If the server doesn't appear to be running, or completion / goto-definition don't work, run pylint and mypy on your code to see if anything is wrong: poetry run pylint jedi_language_server and poetry run mypy jedi_language_server.

Note: it's somewhat difficult to develop your development tools (like jedi_language_server) because, if you break them, your development environment becomes hard to work with. Catch-22...

@hanspinckaers (Contributor, Author) commented Nov 18, 2020

I'm sorry @pappasam, I'm quite busy at the moment. I did, however, remove all the type-information retrieval and disable snippets, and now autocomplete flies. It's incredibly fast: NumPy completions show up for me in under 200 ms, and instantly the second time. For me that is actually enough; I don't really look at the type information anyway. I do miss the snippets, but the signature help in coc.nvim compensates.

Maybe we want to make this an option? It is pretty barebones though:
[Screenshot: completion list without type information, 2020-11-18]

I was thinking we could also limit type-information retrieval to the first x items in the completion list, but I don't think that will work with how some clients trim the list via fuzzy matching.

@pappasam (Owner)

I just tested locally, and it seems disabling snippets gives a sufficient performance improvement for NumPy to complete quickly (after the first completion) when numpy is included in https://github.com/pappasam/coc-jedi#jedijedisettingsautoimportmodules.

We already have an initialization option for disabling snippets, so maybe we just need to document that disabling snippets + autoImportModules can help with completion performance.

@krassowski (Contributor)

Could implementing completionItem/resolve allow for the best user experience?

The idea is that the textDocument/completion response includes only the properties that are cheap to compute (say label, sortText, insertText, etc.), while documentation and detail (and, since LSP 3.16.0, any other property) are only sent once the client requests them with a completionItem/resolve request. This improves the initial speed and still gives the user all the information - no compromise.

The lag for completionItem/resolve can be below the perception threshold (dozens of milliseconds), and the client can pre-fetch a few completionItem/resolve results as soon as it gets the completion list (I implemented this in my client).

I would be happy to help with implementation.
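The split can be illustrated with a plain-Python sketch (hypothetical names, not the actual pygls handlers): the completion pass builds cheap items with no documentation, and resolve fills in the expensive documentation for exactly one item, only when the client asks for it.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CompletionItem:
    label: str
    documentation: Optional[str] = None  # left empty in the cheap pass


def completion(names):
    # textDocument/completion: return cheap properties only
    # (no docstring or type lookup per item).
    return [CompletionItem(label=name) for name in names]


def resolve(item, docstring_lookup):
    # completionItem/resolve: do the expensive work for a single item,
    # requested only when the client highlights it.
    item.documentation = docstring_lookup(item.label)
    return item
```

The key property: the initial response cost scales with the number of labels, while the expensive lookup runs at most once per item the user actually inspects.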

@pappasam (Owner)

@krassowski interesting... yes, I think the new resolution patterns in 3.16 are a great opportunity for us to speed things up a lot (and to add new features, like selecting a sane name for codeAction results, etc). Two issues might make supporting LSP 3.16 challenging in the short term:

  1. I don't think this library's language server framework, pygls, supports LSP 3.16 yet. Its developers are more focused on improving performance with pydantic at the moment, which may also have a positive impact on completionItem speed.
  2. Many language clients don't yet support the latest LSP version. See here for coc's progress toward supporting 3.16.

Would love your thoughts on supporting LSP 3.16 features based on the aforementioned challenges!

@krassowski (Contributor)

I agree 3.16 is fresh. What about implementing completionItem/resolve with just documentation and detail for now, since these have been in the protocol almost forever?

I think it might be nice to let the user configure the server so that it either returns everything eagerly (at the price of lower performance) or uses completionItem/resolve, since some clients may not be optimized for completionItem/resolve. It could be an array of properties, where the user could specify "resolveEagerly": ["detail"] if their client always displays detail, or does not support completionItem/resolve. Then, once support for resolving "type" eagerly is added, the user would be able to add it here as well.
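As a sketch, such an initialization option might look like this in a client's settings (the option name and its location are a proposal here, not a shipped setting):

```json
{
  "jedi.completion.resolveEagerly": ["detail"]
}
```

An empty array would mean everything except the listed baseline properties goes through completionItem/resolve.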

@krassowski (Contributor)

Actually, clients already have to opt in to lazy resolution of anything other than "documentation" and "detail", so there is no backward-compatibility issue here:

	/**
	 * Indicates which properties a client can resolve lazily on a
	 * completion item. Before version 3.16.0 only the predefined properties
	 * `documentation` and `details` could be resolved lazily.
	 *
	 * @since 3.16.0
	 */
	resolveSupport?: {
		/**
		 * The properties that a client can resolve lazily.
		 */
		properties: string[];
	};

@pappasam (Owner) commented Feb 1, 2021

@hanspinckaers does #56 solve this issue for you?

@hanspinckaers (Contributor, Author)

Looks good! Will take a look later today.

@hanspinckaers (Contributor, Author)

Yes, this PR with snippets disabled is just as fast as my dirty hack. Awesome work! I'm not sure whether resolveEagerly should default to false while coc.nvim doesn't yet support resolve?

@pappasam (Owner)

I believe this is resolved as of the recent releases. @hanspinckaers, let me know if you think otherwise.

Labels: none yet
Projects: none yet
3 participants