
Use the faster implementation of the 2020 syntactical classifications #40949

Closed
wants to merge 1 commit into from

Conversation

orta
Contributor

orta commented Oct 5, 2020

Fixes #38435

This builds on the 2020 semantic classifier by moving it to a visitor pattern, based on the implementation in aeschli/typescript-vscode-sh-plugin#10, which means it only looks at nodes that are relevant.

This also makes minor improvements to some of the results (correctly adding the local modifier, and classifying module as a namespace in the protocol).
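For illustration, here is a minimal sketch (not code from this PR) of the visitor-pattern idea described above: walk the tree with ts.forEachChild and only do classification work for node kinds that can actually yield a token. The node kind checked and the collect callback are placeholders.

```ts
import * as ts from "typescript";

// Sketch only: recurse with ts.forEachChild, doing classification work
// only for nodes that could produce a semantic token.
function walkForClassifications(sourceFile: ts.SourceFile, collect: (node: ts.Node) => void): void {
    function visit(node: ts.Node): void {
        // The kind checked here is illustrative, not the PR's actual set.
        if (ts.isIdentifier(node)) {
            collect(node);
        }
        ts.forEachChild(node, visit);
    }
    visit(sourceFile);
}
```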

orta requested a review from rbuckton October 5, 2020 18:18
typescript-bot added Author: Team For Milestone Bug PRs that fix a bug with a specific milestone labels Oct 5, 2020
@orta
Contributor Author

orta commented Oct 20, 2020

Reviewed:
1: Convert to use the ts API for visitNode etc. instead of re-creating it
2: Convert tokenModifiers to act more like the other flags enums in the codebase (see the sketch below)
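For reference, a flags-style enum along the lines of review note 2 might look like the sketch below; the modifier names are illustrative assumptions, not necessarily the ones in src/services/classifier2020.ts.

```ts
// Illustrative names only. A flags enum lets several modifiers be combined
// with bitwise OR and tested with bitwise AND, matching how the other
// *Flags enums in the TypeScript codebase work.
const enum TokenModifierFlags {
    None           = 0,
    Declaration    = 1 << 0,
    Static         = 1 << 1,
    Async          = 1 << 2,
    Readonly       = 1 << 3,
    DefaultLibrary = 1 << 4,
    Local          = 1 << 5,
}

// Combining and testing flags:
const modifiers = TokenModifierFlags.Declaration | TokenModifierFlags.Local;
const isLocal = (modifiers & TokenModifierFlags.Local) !== 0; // true
```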

sandersn requested a review from sheetalkamat October 20, 2020 20:19
sandersn assigned andrewbranch and unassigned orta Feb 24, 2022
typescript-bot added For Backlog Bug PRs that fix a backlog bug and removed For Milestone Bug PRs that fix a bug with a specific milestone labels Feb 24, 2022
@andrewbranch
Member

@alexdima @orta is there any context on this I should be aware of before diving into a review? Is this ready to go as far as you know, or is there a reason we never reviewed it at the time? 🙈

@orta
Contributor Author

orta commented Mar 18, 2022

My guess is that it's mostly just an overwhelmingly big PR; while there's a reasonable amount of tests that aren't breaking, it's probably still a slog to review with confidence.

@alexdima
Member

@andrewbranch I did this as an exploration some time ago in order to investigate if TS could compute semantic tokens faster. Currently, the TS extension in VS Code defines a document range semantic tokens provider, which gets invoked for the range of the viewport together with a viewport above and one below. But if the user scrolls quickly to a different region, then, depending on theme settings and individual sensitivity, some flashing can be observed. The flashing can be eliminated by using a document semantic tokens provider, but that means computing semantic tokens for the entire document.
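To make the two registration paths concrete, here is a minimal VS Code extension sketch (not the TypeScript extension's actual code; the token types, modifiers, and provider bodies are placeholders):

```ts
import * as vscode from "vscode";

const legend = new vscode.SemanticTokensLegend(
    ["class", "function", "variable"],   // illustrative token types
    ["declaration", "readonly", "local"] // illustrative token modifiers
);

// Range provider: tokenizes only the requested range (viewport plus buffer).
// Cheap per request, but scrolling into an untokenized region can flash.
vscode.languages.registerDocumentRangeSemanticTokensProvider("typescript", {
    provideDocumentRangeSemanticTokens(document, range) {
        const builder = new vscode.SemanticTokensBuilder(legend);
        // ...ask the language service to classify only `range`...
        return builder.build();
    }
}, legend);

// Whole-document provider: tokenizes the entire file, so there is nothing
// left to flash, at the cost of more work per request.
vscode.languages.registerDocumentSemanticTokensProvider("typescript", {
    provideDocumentSemanticTokens(document) {
        const builder = new vscode.SemanticTokensBuilder(legend);
        // ...ask the language service to classify the whole document...
        return builder.build();
    }
}, legend);
```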

The exploration implements a custom AST walker which maintains custom state useful for semantic tokens for speedier access (like cached symbols, cached locals). In the end, I could measure 10x improvements for computing semantic tokens for checker.ts with this approach.
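As a rough illustration of that kind of caching (a sketch under assumptions, not the walker in this PR): resolve the checker once per walk and memoize per-symbol work, so repeated references to the same symbol stay cheap.

```ts
import * as ts from "typescript";

type TokenType = "variable" | "function" | "class" | "other";

// Sketch only: one checker per walk, plus a per-symbol classification cache.
function classifyFile(program: ts.Program, sourceFile: ts.SourceFile): Map<ts.Node, TokenType> {
    const checker = program.getTypeChecker();                       // resolved once per walk
    const classificationBySymbol = new Map<ts.Symbol, TokenType>(); // memoized per symbol
    const result = new Map<ts.Node, TokenType>();

    function classifySymbol(symbol: ts.Symbol): TokenType {
        const cached = classificationBySymbol.get(symbol);
        if (cached) return cached;
        const flags = symbol.getFlags();
        const type: TokenType =
            flags & ts.SymbolFlags.Class ? "class" :
            flags & ts.SymbolFlags.Function ? "function" :
            flags & ts.SymbolFlags.Variable ? "variable" : "other";
        classificationBySymbol.set(symbol, type);
        return type;
    }

    function visit(node: ts.Node): void {
        if (ts.isIdentifier(node)) {
            const symbol = checker.getSymbolAtLocation(node);
            if (symbol) result.set(node, classifySymbol(symbol));
        }
        ts.forEachChild(node, visit);
    }

    visit(sourceFile);
    return result;
}
```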

However, I realise that maintaining a separate visitor just for semantic tokens is probably not something anyone would be keen to do, plus I don't think I came across a lot of negative user feedback about the viewport semantic tokens provider. So, personally, I don't mind if you don't want to take this.

@andrewbranch
Member

That context is very helpful, thanks! So this change essentially just ignores the span parameter identifying the viewport + buffer range, right? In other words, as a drop-in replacement, it might actually be slower on large files since it’s tokenizing the whole thing instead of a small span?

@andrewbranch
Member

I think additional updates to the semantic classifier are going to need to be driven by concrete goals and design discussions that take into account what’s changed since this PR was opened.

@alexdima
Member

> So this change essentially just ignores the span parameter identifying the viewport + buffer range, right? In other words, as a drop-in replacement, it might actually be slower on large files since it’s tokenizing the whole thing instead of a small span?

Yes, this implementation is not meant to be a drop-in replacement for the document range semantic tokens provider. This is an implementation suitable for a document semantic tokens provider.
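For concreteness, the distinction maps onto the span argument of the language service's getEncodedSemanticClassifications call. A sketch, assuming a languageService and sourceFile already exist (building a real LanguageServiceHost is omitted, and the 2000-character viewport length is arbitrary):

```ts
import * as ts from "typescript";

// Assumed to exist for this sketch.
declare const languageService: ts.LanguageService;
declare const sourceFile: ts.SourceFile;

// Range-style request: classify only the viewport (plus a buffer), as the
// VS Code range provider does today.
const viewportSpan = ts.createTextSpan(/*start*/ 0, /*length*/ 2000);
const viewportTokens = languageService.getEncodedSemanticClassifications(
    sourceFile.fileName, viewportSpan, ts.SemanticClassificationFormat.TwentyTwenty);

// Document-style request: classify the whole file, which is the case this
// PR's walker is aimed at.
const wholeFileSpan = ts.createTextSpan(0, sourceFile.getFullWidth());
const wholeFileTokens = languageService.getEncodedSemanticClassifications(
    sourceFile.fileName, wholeFileSpan, ts.SemanticClassificationFormat.TwentyTwenty);
```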

Successfully merging this pull request may close these issues.

Supporting Efficient Semantic Highlighting