[RED-16] Slow intellisense in VSCode #3214
To be honest, we've never tried to do any kind of perf measurements for how long our TS typings take to analyze. I've only ever seen a couple people even try to do that. It's something we can look at eventually, but I don't think there's anything we can immediately do.
@Andarist if you've got any suggestions for analyzing this or improving our types perf, I'm interested :)
I started examining this after reading this post. I think it's a good place to start (check out the comment section; it has some interesting discussion with useful links). Or, tl;dr: the TS wiki page on Performance Tracing. Anyhow, I'll try helping!
Those performance tracings are quite good - I used them at least once or twice to get more insights into stuff. I could take a look at this if you share your trace.
Hi there & thanks for opening this issue @dutzi. I'm happy to have come across it, as it confirms my own testing. We're using RTK Query with the OpenAPI code generator, resulting in about 7000 lines of generated endpoint definitions. I can fully reproduce your observations, with VSCode IntelliSense population being significantly slow (1-3s). Changing the API type to

Unfortunately, I'm lacking the knowledge to provide helpful input here, but I'll be monitoring the issue and happy to help with triage.
Maybe it is not connected directly, but I also experienced performance degradation (type completion) in a medium-sized project (using @rtk-query/codegen-openapi). In our case we found the culprit to be multiple calls to `.enhanceEndpoints()`.
createApi seems painfully slow! A cascade starting from
I'll do some experimentation to see if I can find a root cause in our usage of
This is on latest (1.9.5) with TypeScript 5.1.6.
Hey, I've also noticed a very significant TS IntelliSense slowdown originating from

Here's a minimal repro, https://github.com/rrebase/rtk-query-ts-perf, with a few endpoints generated from an OpenAPI schema highlighting the types perf issue. Hopefully someone with experience with the complex types of this codebase can pinpoint the improvement areas from the traces.

Referencing microsoft/TypeScript#34801, as the investigations in MUI about similar TS perf issues might be useful. Personally, I'd always take the trade-off of faster perf over correctness if we're faced with a choice.
We'd like to use RTK Query with the generated React hooks, but with our roughly 400 endpoints (using a custom queryFn) the performance of TypeScript is so dramatically impacted that I'm afraid it's not usable. In IntelliJ, the autocompletion on the "api" object will run for minutes without returning a suggestion list.
I think I made some progress on this issue, at least for our specific setup. In our project, we are already using `injectEndpoints`. So here is what I did (only the last line changed):

```ts
// The empty API
const emptyApi = createApi({
  reducerPath: "api",
  baseQuery: baseQuery,
  tagTypes: [],
  endpoints: () => ({}),
});

// The API with injected endpoints, as generated by code generation
const generatedApi = emptyApi.injectEndpoints({
  endpoints: (build) => ({
    // (generated endpoints)
  }),
});

// Our final API object, as we use it in the project
export const api = generatedApi.enhanceEndpoints({
  // (caching enhancements)
}) as any as typeof generatedApi; // <-- This part did the trick ⚡️
```

For our project, this improved IntelliSense performance by about 70%, which makes the difference between slow and unusable. The delay until IntelliSense for a regular query hook shows up is now down to around 2-3 seconds – still hurts, but no longer enough to grab a coffee. Hope this helps some of you folks as well.
No idea when we'll ever have time to look into this, but slapping a link here for later reference. Tanner Linsley just pointed to some TS perf debugging resources:

And some more here:
@ConcernedHobbit: how did you generate that step-percentage perf information?
I used tsc with the `--generateTrace` flag and manually took a look at the result in the Perfetto.dev web app.
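For reference, a sketch of that workflow from the command line (assumes the `typescript` package is fetchable via `npx`; file names are illustrative). The emitted directory contains `trace.json` and `types.json`, which can be loaded into ui.perfetto.dev or fed to the `@typescript/analyze-trace` package:

```shell
# Create a tiny compilation unit to trace (stand-in for a real project).
mkdir -p /tmp/trace-demo && cd /tmp/trace-demo
echo 'const n: number = 1; console.log(n);' > index.ts

# Ask tsc to emit a performance trace alongside normal type checking.
npx -y -p typescript tsc index.ts --generateTrace ./trace

# The trace directory now holds trace.json (events) and types.json (type table).
ls ./trace
```

On a real project you'd run `tsc` against your `tsconfig.json` instead of a single file; the trace then shows which files and types dominate check time.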
Hey, I think I made some progress here. I created a small repo that reproduces the issue: https://github.com/dutzi/rtk-ts-perf. Check out this video that demos it: https://cln.sh/bHBsDGGm.

I tried playing a bit with the typings, and noticed that if I edit

```ts
} & HooksWithUniqueNames<Definitions>;
```

to

```ts
}
```

(i.e. drop the `HooksWithUniqueNames<Definitions>` intersection), I get instant IntelliSense. I didn't have enough time to improve the utility type, but hope this helps move this forward.
If you want to get accurate timings for IntelliSense, open the Command Palette in VSCode and choose "TypeScript: Open TS Server log". It might ask you to enable TS logging; approve, then copy the path to the log file.

Now start a terminal and run:

```sh
tail -f "<path_to_tsserver.log>" | grep "completionInfo: elapsed time"
```
We did some perf analysis last night, and confirmed that the `HooksWithUniqueNames` type is a hot spot. Here's its current definition:

```ts
export type HooksWithUniqueNames<Definitions extends EndpointDefinitions> =
  keyof Definitions extends infer Keys
    ? Keys extends string
      ? Definitions[Keys] extends { type: DefinitionType.query }
        ? {
            [K in Keys as `use${Capitalize<K>}Query`]: UseQuery<
              Extract<Definitions[K], QueryDefinition<any, any, any, any>>
            >
          } & {
            [K in Keys as `useLazy${Capitalize<K>}Query`]: UseLazyQuery<
              Extract<Definitions[K], QueryDefinition<any, any, any, any>>
            >
          }
        : Definitions[Keys] extends { type: DefinitionType.mutation }
          ? {
              [K in Keys as `use${Capitalize<K>}Mutation`]: UseMutation<
                Extract<Definitions[K], MutationDefinition<any, any, any, any>>
              >
            }
          : never
      : never
    : never
```

Here's a flame graph of the perf from dutzi's example with 2.0-beta.2:

We've got a PR up in #3767 that rewrites it to use three mapped object types - one each for queries, lazy queries, and mutations:

```ts
export type HooksWithUniqueNames<Definitions extends EndpointDefinitions> = {
  [K in keyof Definitions as Definitions[K] extends {
    type: DefinitionType.query
  }
    ? `use${Capitalize<K & string>}Query`
    : never]: UseQuery<
    Extract<Definitions[K], QueryDefinition<any, any, any, any>>
  >
} & {
  [K in keyof Definitions as Definitions[K] extends {
    type: DefinitionType.query
  }
    ? `useLazy${Capitalize<K & string>}Query`
    : never]: UseLazyQuery<
    Extract<Definitions[K], QueryDefinition<any, any, any, any>>
  >
} & {
  [K in keyof Definitions as Definitions[K] extends {
    type: DefinitionType.mutation
  }
    ? `use${Capitalize<K & string>}Mutation`
    : never]: UseMutation<
    Extract<Definitions[K], MutationDefinition<any, any, any, any>>
  >
}
```

This appears to be a major improvement! Running a perf check against that same example, the main blocking section drops from 2600ms to 1000ms (still a long time, but about 60% better!):

Could folks try out that PR and let us know how much of an improvement it feels like in practice? You can install it from the CodeSandbox CI build here:

Note that the PR is against our
Here's a version backported to v1.9.x: #3769
Just published https://github.com/reduxjs/redux-toolkit/releases/tag/v1.9.7 with those changes! There's more work we can do to investigate this, but wanted to get that out given that it's a significant improvement. Please let us know how it works out!
Very neat, thank you! For our code base and using a naive stopwatch test, this resulted in another ~50% lag reduction. When I combine this with my hacky workaround described in #3214 (comment), I'm now down to a quarter of the original lag, which is very enjoyable.
Given that we did just speed up the RTKQ hooks types, I'm going to say this is sufficiently improved for 2.0. We can do more perf testing post-2.0, but in the interest of getting 2.0 wrapped up I'm going to move this out of the 2.0 milestone and not spend any further time on this until 2.0 is out the door.
Tagging in another report of slow perf for later reference: |
Hey 👋

I'm trying to figure out what slows our VSCode IntelliSense (TS-backed code auto-complete) down, and may have found the main cause.

Not 100% sure it's RTK Query (`createApi`) or not, but figured it's a decent-enough thing I found that I may as well share it. Where I work we have a pretty hefty React/RTK app (not OSS 😞), and we've been dealing with slow (not unbearable, but annoying) response rates from VSCode IntelliSense (feels like >1s until the suggestions list shows up, but looking at the TS Server logs it's probably ~800ms).

I tried a few things, and eventually landed on this: if I any-fy the call to `createApi`, the TS Server logs report that `completionInfo` (which is in charge of computing the list of suggested items that show up in VSCode's autocomplete) drops from 840ms to 122ms.

Here's a video from before the change (note how long it takes from the time I hit `.` to when I see the suggestions):

CleanShot.2023-02-25.at.21.15.29.mp4

Here it is when I make the following change:

To:

CleanShot.2023-02-25.at.21.14.31.mp4