
[RED-16] Slow intellisense in VSCode #3214

Open
dutzi opened this issue Feb 25, 2023 · 25 comments
@dutzi (Contributor) commented Feb 25, 2023

Hey 👋

I'm trying to figure out what slows down our VSCode intellisense (TS-backed code auto-complete), and I may have found the main cause.

I'm not 100% sure whether it's RTK Query (createApi) or not, but it seems like a solid enough finding that I may as well share it.

Where I work we have a pretty hefty React/RTK app (not OSS 😞), and we've been dealing with a slow (not unbearable, but annoying) response rate from VSCode intellisense (it feels like >1s until the suggestions list shows up, but judging by the TS Server logs it's probably ~800ms).

I tried a few things, eventually landed on this:

If I any-fy the call to createApi, the TS Server logs show the time for completionInfo (the request in charge of computing the list of suggested items that show up in VSCode's autocomplete) dropping from 840ms to 122ms.

Here's a video from before the change (note how long it takes from the time I hit . to when I see the suggestions):

[video: CleanShot.2023-02-25.at.21.15.29.mp4]

Here it is when I make the following change:

export const api = createApi({

To:

export const api = (createApi as any)({
[video: CleanShot.2023-02-25.at.21.14.31.mp4]
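To make the workaround concrete, here's a minimal sketch of the cast in a full createApi call (the reducerPath, baseUrl, and endpoint below are illustrative, not taken from our actual app). Note that casting to any disables type inference for the whole API slice, so the generated hooks lose their result/argument types:

import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react'

// Casting createApi to `any` skips the expensive type inference for the
// whole API slice; intellisense gets fast, but the generated hooks are untyped.
export const api = (createApi as any)({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  endpoints: (build: any) => ({
    getUser: build.query({ query: (id: string) => `users/${id}` }),
  }),
})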


@markerikson (Collaborator)

To be honest, we've never tried to do any kind of perf measurements for how long our TS typings take to analyze.

I've only ever seen a couple of people even try to do that.

It's something we can look at eventually, but I don't think there's anything we can immediately do.

@markerikson markerikson added this to the 2.0 milestone Feb 25, 2023
@markerikson (Collaborator)

@Andarist if you've got any suggestions for analyzing this or improving our types perf, I'm interested :)

@dutzi (Contributor, Author) commented Feb 25, 2023

I started examining this after reading this post.

I think it's a good place to start (check out the comment section, it has some interesting discussion with useful links).

Or, tl;dr:

TS-Wiki on Performance Tracing
A better tool to inspect TSC's trace.json

Anyhow, I'll try to help!

@Andarist

Those performance tracings are quite good; I've used them at least once or twice to get more insight into things. I could take a look at this if you share your trace.

@joshuajung

Hi there & thanks for opening this issue @dutzi. I'm happy to have come across it, as it confirms my own testing.

We're using RTK Query with the OpenAPI code generator, resulting in about 7000 lines of generated endpoint definitions. I can fully reproduce your observations with VSCode IntelliSense population being significantly slow (1-3s). Changing the API type to any as described above immediately 'solves' the issue.

Unfortunately, I'm lacking the knowledge to provide helpful input here, but I'll be monitoring the issue and happy to help with triage.

@bschuedzig

Maybe it's not directly connected, but I also experienced performance degradation in type completion in a medium-sized project (using @rtk-query/codegen-openapi).

In our case we found the culprit to be multiple calls to .enhanceEndpoints().
After refactoring the code to use it only once in the whole application, performance was back to expected levels.
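For illustration, a rough sketch of that refactor (the endpoint and tag names are invented, and `generatedApi` is assumed to come from the OpenAPI codegen). Each enhanceEndpoints call makes TypeScript derive a fresh API type, so collapsing several calls into one reduces the work:

// Before: two separate enhanceEndpoints calls, each re-deriving the API type.
const withUserTags = generatedApi.enhanceEndpoints({
  addTagTypes: ['User'],
  endpoints: { getUser: { providesTags: ['User'] } },
})
export const apiBefore = withUserTags.enhanceEndpoints({
  addTagTypes: ['Post'],
  endpoints: { getPosts: { providesTags: ['Post'] } },
})

// After: one enhanceEndpoints call covering all enhancements.
export const apiAfter = generatedApi.enhanceEndpoints({
  addTagTypes: ['User', 'Post'],
  endpoints: {
    getUser: { providesTags: ['User'] },
    getPosts: { providesTags: ['Post'] },
  },
})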

@ConcernedHobbit commented Jul 17, 2023

createApi seems painfully slow! A cascade starting from EndpointBuilder takes ~652ms by itself.

(percentages relative to EndpointBuilder; the number before ">" is the nesting depth)
EndpointBuilder: 652ms (100%)
> QueryExtraOptions: 575ms (88%)
1> QueryTypes: 552ms (84%)
2> QueryCacheLifecycleApi: 525ms (81%)
3> QueryDefinition: 425ms (65%)
4> QueryLifecycleApi: 399ms (61%)
5> RootState: 225ms (35%)
6> MutationState: 212ms (33%)
7> BaseMutationSubState: 207ms (32%)
8> MutationDefinition: 206ms (32%)
9> MutationExtraOptions: 193ms (30%)
10> MutationTypes: 184ms (28%)

11> MutationCacheLifecycleApi: 92ms (14%)
11> MutationLifecycleApi: 74ms (11%)

I'll do some experimentation to see if I can find a root cause in our usage of createApi, or if it's just that createApi is an inherently expensive operation.
Unfortunately we're not OSS, but I can disclose that the createApi call does not use enhanceEndpoints (directly), and consists of:

  • createApi with a reducerPath, a baseQuery (fetchBaseQuery), and an endpoint builder (two builder.mutation calls, each with a single query defined) and nothing else. Otherwise the createApi call references only simple types (two interfaces defined without recursive steps, e.g. simply export type Dummy = { object: string; variables?: string; }). See the sketch below.

This is on the latest version (1.9.5) with TypeScript 5.1.6.
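Reconstructing that description as a sketch (the endpoint names, URL paths, and reducerPath are invented; only the overall shape matches the report):

import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react'

// Simple, non-recursive types, as described above.
export type Dummy = { object: string; variables?: string }

export const api = createApi({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  endpoints: (builder) => ({
    // Two mutations, each with a single `query` function.
    submitA: builder.mutation<Dummy, Dummy>({
      query: (body) => ({ url: '/a', method: 'POST', body }),
    }),
    submitB: builder.mutation<Dummy, Dummy>({
      query: (body) => ({ url: '/b', method: 'POST', body }),
    }),
  }),
})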

@rrebase commented Jul 29, 2023

Hey, I've also noticed a very significant TS IntelliSense slowdown originating from createApi. As a temporary workaround I've been any-fying it while working on anything non-RTK related.

Here's a minimal repro, https://github.com/rrebase/rtk-query-ts-perf, with a few endpoints generated from an OpenAPI schema that highlight the types perf issue. Hopefully someone with experience in this codebase's complex types can pinpoint the areas for improvement from the traces.

Referencing microsoft/TypeScript#34801, since the investigations in MUI into similar TS perf issues might be useful. Personally, I'd always take the trade-off of faster perf over correctness if we're faced with a choice.

@mpressmar

We'd like to use RTK Query with the generated React hooks, but with our roughly 400 endpoints (using a custom queryFn), TypeScript performance is so dramatically impacted that I'm afraid it's not usable. In IntelliJ, autocompletion on the "api" object will run for minutes without returning a suggestion list.

@joshuajung commented Aug 16, 2023

I think I made some progress on this issue, at least for our specific setup.

In our project we already call .enhanceEndpoints() only once, to add caching behavior (providesTags/invalidatesTags). Still, @bschuedzig's comment got me thinking: couldn't we short-circuit the types, carrying them over from the unenhanced API to the enhanced version? This has no side effects in our case, as our caching enhancements don't modify the API typings in any way.

So here is what I did (only the last line changed):

// The empty API
const emptyApi = createApi({
  reducerPath: "api",
  baseQuery: baseQuery,
  tagTypes: [],
  endpoints: () => ({}),
});

// The API with injected endpoints, as generated by code generation
const generatedApi = emptyApi.injectEndpoints({
  endpoints: (build) => ({
    // (generated endpoints)
  })
});

// Our final API object, as we use it in the project
export const api = generatedApi.enhanceEndpoints({
  // (caching enhancements)
}) as any as typeof generatedApi; // <-- This part did the trick ⚡️

For our project, this improved IntelliSense performance by about 70%, which makes the difference between slow and unusable. The delay until IntelliSense for a regular Query Hook shows up is now down to around 2-3 seconds – still painful, but no longer enough time to grab a coffee. Hope this helps some of you folks as well.

@markerikson (Collaborator) commented Aug 22, 2023

No idea when we'll ever have time to look into this, but slapping a link here for later reference. Tanner Linsley just pointed to some TS perf debugging resources:

And some more here:

@markerikson (Collaborator)

@ConcernedHobbit : how did you generate that step percentage perf information?

@ConcernedHobbit

@ConcernedHobbit : how did you generate that step percentage perf information?

I used tsc with the --generateTrace flag and took a look at the output manually in the Perfetto.dev web app.
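For anyone who wants to reproduce this, a sketch of the commands (the output directory name is arbitrary); tsc writes trace.json and types.json into the target directory, which can then be opened in the Perfetto UI or summarized with TypeScript's analyze-trace package:

# Type-check the project and write trace.json + types.json into ./trace
npx tsc -p tsconfig.json --generateTrace trace

# Open trace/trace.json at https://ui.perfetto.dev, or summarize hot spots:
npx -p @typescript/analyze-trace analyze-trace trace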

@dutzi (Contributor, Author) commented Sep 18, 2023

Hey, I think I made some progress here.

I created a small repo that reproduces the issue: https://github.com/dutzi/rtk-ts-perf.

Check out this video demoing it: https://cln.sh/bHBsDGGm.

I tried playing around with the typings a bit, and noticed that if I edit ./node_modules/@reduxjs/toolkit/dist/query/react/module.d.ts and remove HooksWithUniqueNames on line 23:

} & HooksWithUniqueNames<Definitions>;

Change to:

}

I get instant intellisense.

I didn't have enough time to improve the utility type, but I hope this helps move things forward.

@dutzi (Contributor, Author) commented Sep 18, 2023

If you want to get accurate timings for IntelliSense, open the Command Palette in VSCode and choose "TypeScript: Open TS Server log".

It might ask you to enable TS logging; approve, then copy the path of the tsserver.log file that just opened (Command Palette → "File: Copy Path of Active File").

Now start a terminal and run:

tail -f "<path_to_tsserver.log>" | grep "completionInfo: elapsed time"

@markerikson markerikson added the linear Created by Linear-GitHub Sync label Sep 22, 2023
@markerikson markerikson changed the title Slow intellisense in VSCode [RED-16] Slow intellisense in VSCode Sep 22, 2023
@markerikson markerikson removed the linear Created by Linear-GitHub Sync label Oct 1, 2023
@markerikson (Collaborator) commented Oct 2, 2023

We did some perf analysis last night and confirmed that HooksWithUniqueNames seems to be the biggest time sink. We think it has to do with the way this gets handled as a distributive check, which ends up producing a union of N individual types (later converted using UnionToIntersection):

export type HooksWithUniqueNames<Definitions extends EndpointDefinitions> =
  keyof Definitions extends infer Keys
    ? Keys extends string
      ? Definitions[Keys] extends { type: DefinitionType.query }
        ? {
            [K in Keys as `use${Capitalize<K>}Query`]: UseQuery<
              Extract<Definitions[K], QueryDefinition<any, any, any, any>>
            >
          } &
            {
              [K in Keys as `useLazy${Capitalize<K>}Query`]: UseLazyQuery<
                Extract<Definitions[K], QueryDefinition<any, any, any, any>>
              >
            }
        : Definitions[Keys] extends { type: DefinitionType.mutation }
        ? {
            [K in Keys as `use${Capitalize<K>}Mutation`]: UseMutation<
              Extract<Definitions[K], MutationDefinition<any, any, any, any>>
            >
          }
        : never
      : never
    : never
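For reference, the UnionToIntersection helper mentioned above is usually written like the following (a common community pattern; RTK's exact definition may differ). It distributes the union into contravariant function-parameter positions, which forces TypeScript to infer the intersection:

// Each union member U becomes a function parameter; inferring one `I` from
// all of those signatures yields the intersection of the members.
type UnionToIntersection<U> = (
  U extends any ? (x: U) => void : never
) extends (x: infer I) => void
  ? I
  : never

// Example: resolves to { a: 1 } & { b: 2 }
type Demo = UnionToIntersection<{ a: 1 } | { b: 2 }>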

Here's a flame graph of the perf from dutzi's example with 2.0-beta.2:

[flame graph screenshot]

We've got a PR up in #3767 that rewrites it to do 3 mapped object types - one each for queries, lazy queries, and mutations:

export type HooksWithUniqueNames<Definitions extends EndpointDefinitions> = {
  [K in keyof Definitions as Definitions[K] extends {
    type: DefinitionType.query
  }
    ? `use${Capitalize<K & string>}Query`
    : never]: UseQuery<
    Extract<Definitions[K], QueryDefinition<any, any, any, any>>
  >
} &
  {
    [K in keyof Definitions as Definitions[K] extends {
      type: DefinitionType.query
    }
      ? `useLazy${Capitalize<K & string>}Query`
      : never]: UseLazyQuery<
      Extract<Definitions[K], QueryDefinition<any, any, any, any>>
    >
  } &
  {
    [K in keyof Definitions as Definitions[K] extends {
      type: DefinitionType.mutation
    }
      ? `use${Capitalize<K & string>}Mutation`
      : never]: UseMutation<
      Extract<Definitions[K], MutationDefinition<any, any, any, any>>
    >
  }
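To see why this is cheaper, here is the same key-remapping pattern in miniature (illustrative names only): the filtering happens inside a single mapped type via the `as` clause, so no intermediate union of N object types is ever built:

type Defs = {
  getUser: { type: 'query' }
  addUser: { type: 'mutation' }
}

// Keep only the query-typed keys and rename them in one mapped type.
type QueryHooks<D> = {
  [K in keyof D as D[K] extends { type: 'query' }
    ? `use${Capitalize<K & string>}Query`
    : never]: D[K]
}

// QueryHooks<Defs> is { useGetUserQuery: { type: 'query' } }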

This appears to be a major improvement! Running a perf check against that same example, the main blocking section drops from 2600ms to 1000ms (still a long time, but 60% better!):

[flame graph screenshot after the change]

Could folks try out that PR and let us know how much of an improvement it feels like in practice? You can install it from the CodeSandbox CI build here:

Note that the PR is against our v2.0-integration branch, so it will involve an upgrade, but I'm happy to have us backport that to 1.9.x as well.

@EskiMojo14 (Collaborator)

Here's a version backported to v1.9.x: #3769

@markerikson (Collaborator)

Just published https://github.com/reduxjs/redux-toolkit/releases/tag/v1.9.7 with those changes!

There's more work we can do to investigate this, but we wanted to get that release out, given that it's a significant improvement.

Please let us know how it works out!

@joshuajung

Just published https://github.com/reduxjs/redux-toolkit/releases/tag/v1.9.7 with those changes!
(...)
Please let us know how it works out!

Very neat, thank you! For our codebase, using a naive stopwatch test, this resulted in another ~50% lag reduction.

When I combine this with my hacky workaround described in #3214 (comment), I'm now down to a quarter of the original lag, which is very enjoyable.

@markerikson (Collaborator)

Given that we did just speed up the RTKQ hooks types, I'm going to say this is sufficiently improved for 2.0. We can do more perf testing post-2.0, but in the interest of getting 2.0 wrapped up I'm going to move this out of the 2.0 milestone and not spend any further time on this until 2.0 is out the door.

@markerikson markerikson modified the milestones: 2.0, Post 2.0 Oct 19, 2023
@markerikson markerikson added the bug and rtk-query labels Feb 6, 2024 (via Volta.net)
@markerikson (Collaborator)

Tagging in another report of slow perf for later reference:
