Hono Type Inference is taking too long during builds #3869

Open
askorupskyy opened this issue Jan 30, 2025 · 15 comments
Labels
enhancement New feature or request.

Comments

@askorupskyy
Contributor

askorupskyy commented Jan 30, 2025

What is the feature you are proposing?

I began using Hono for my side project about a year ago. After a year, it has grown into a pretty chunky product with hundreds, if not thousands, of different modules and endpoints.

The catch is the build time of the project. Since Hono infers the Context type for each route and then extends it with the next router, and so on, my application now takes about 8 minutes to compile in CI with esbuild. This has increased our CI consumption (and the bills) by a lot, deploys take a long time, and I had to spend a lot of engineering hours just to make the DX bearable for other contributors.

I've raised concerns about the type inference mechanism before: you have to compile your entire app (including your ORM, hidden business logic, etc.) just to get RPC functionality for your frontend, which is essentially just the inputs/outputs of your API and shouldn't require more than that. The solution on the docs page helped, but again, the compile times are killing it: it breaks live reload, requires additional setup, and so on.
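To make the mechanism concrete, here is roughly the pattern (a minimal sketch, not my actual code):

```ts
import { Hono } from 'hono'
import { hc } from 'hono/client'

// Every chained route widens the app's generic type a little more.
const app = new Hono()
  .get('/orders', (c) => c.json({ orders: [] }))
  .post('/orders', (c) => c.json({ ok: true }, 201))

// AppType carries the accumulated type of every route in the app...
export type AppType = typeof app

// ...and the RPC client is typed entirely from it, so the whole server
// (ORM types, business logic, etc.) must type-check before the client is usable.
const client = hc<AppType>('http://localhost:8787/')
```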

Some temporary solutions I've implemented definitely helped:

  • Wrap the entire app in the Nx monorepo.
  • Split each router into buildable esbuild library.
  • Have the main router (which is also buildable) connect all of those together (inferring the types).
  • Skip the type check on the main router in the development environment (because joining all the previous types into one huge type took forever).
  • Live reload is now down from ~2 minutes to ~10 seconds.

Having said that, I think this is too much engineering to make a JS framework work for a small startup. It feels like the type inference solution is not going to play out long-term, and I would like to hear your opinions on how this can be improved, or whether there are any plans to move to other models (I previously mentioned the contract model, where the input/output types for each request are known in advance and each handler checks that it satisfies that I/O type).

It looks like a few other problems derive from this mechanism for other people as well, causing some of them to move to other frameworks #3450 (comment). I've mentioned this many times before, and so have others. Thanks!

@askorupskyy askorupskyy added the enhancement New feature or request. label Jan 30, 2025
@EdamAme-x
Contributor

I agree, and I feel that the time spent on type inference has become noticeably more bloated than when I first learned about this project a year ago.

@Rick-Phoenix

Like many others, I am also having this problem. I have tried the suggested solution for the RPC in the docs (i.e. placing the client in a separate package and running tsc --watch on it), but it has only helped to a point.
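If I remember correctly, the docs' suggestion boils down to pre-calculating the client type in that separate package, roughly like this (a sketch from memory; check the guide for the exact code, and the 'server' import path is just illustrative):

```ts
// client package, compiled separately with tsc --watch
import { hc } from 'hono/client'
import type { AppType } from 'server' // wherever your app's type is exported from

// Instantiating the client once forces its type to be computed here,
// so the frontend consumes the pre-compiled d.ts instead of re-inferring it.
const client = hc<AppType>('')
export type Client = typeof client

export const hcWithType = (...args: Parameters<typeof hc>): Client =>
  hc<AppType>(...args)
```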

Part of me kind of thinks that Zod contributes a lot to this problem, because I briefly tested removing it and saw a noticeable difference, although I did not look into it too much. Do you use Zod as your app's validator?

@askorupskyy
Contributor Author

Hi @Rick-Phoenix, yes, I've been using Zod along with the openapi-zod package.

I agree that Zod might be part of the problem, but again, mostly due to type inference rather than anything else, in my opinion. I think it has to do with z.infer, which has to be evaluated every time the type for a specific endpoint is calculated.

Moreover, I've used a framework with a similar type inference model at work before, and we had so many Zod schemas that tsc would run out of RAM.
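For reference, z.infer is the type-level mapping from a schema to a TypeScript type, and that expansion happens wherever it is used (a trivial illustration):

```ts
import { z } from 'zod'

const orderSchema = z.object({
  id: z.string(),
  total: z.number(),
})

// z.infer asks the compiler to expand the schema into a structural type.
// When it is evaluated for every endpoint's input/output, that expansion
// work is repeated across the whole app.
type Order = z.infer<typeof orderSchema> // { id: string; total: number }
```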

@Rick-Phoenix

Right. One of these days I will try to swap zod with typebox and see if/how that makes a difference. I am curious to see how much of the TS performance burden is on Hono vs Zod.

@bompi88

bompi88 commented Feb 2, 2025

I also have the same problem. I'm using openapi-zod as well. I usually also end up in a situation like:

TS7056: The inferred type of this node exceeds the maximum length the compiler will serialize. An explicit type annotation is needed.

In the RPC guide there is a small section about providing type hints, but unfortunately this does not seem to work the same way with openapi-zod?

https://hono.dev/docs/guides/rpc#specify-type-arguments-manually

It would be nice if there were a way to do the same thing using openapi-zod, or clear documentation on how to do so.

EDIT:

So I basically want to be able to "force" the use of interfaces, as these seem to simplify the resulting type (and therefore the compiler could handle inference better?) https://github.com/microsoft/TypeScript/wiki/Performance#preferring-interfaces-over-intersections
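For reference, the wiki's point boils down to roughly this (generic TypeScript, not tied to Hono or openapi-zod):

```ts
// An intersection type is re-expanded and compared structurally at each use site:
type Variables = { userId: string }
type Bindings = { DB: string }
type EnvIntersection = { Variables: Variables } & { Bindings: Bindings }

// The equivalent interface is a single flat, named shape the compiler can cache,
// which is what the wiki recommends for hot paths:
interface EnvA { Variables: Variables }
interface EnvB { Bindings: Bindings }
interface Env extends EnvA, EnvB {}
```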

@flipvh

flipvh commented Feb 3, 2025

Same here for our OS project https://github.com/cellajs/cella. Incremental compilation is perhaps my biggest challenge at the moment. What is your performance on this, @askorupskyy? Our incremental compilation (i.e. when I add a non-existent endpoint param in the frontend) takes 10 seconds, which is too much to use IntelliSense effectively. Could we perhaps team up to find resources to contribute to a better solution for the combination you describe? We also use zod-openapi.

@askorupskyy
Contributor Author

askorupskyy commented Feb 3, 2025

@flipvh Hi, thanks for responding. IntelliSense performance on my project is effectively non-existent, because the builds take forever and I am not even able to test my changes for a solid 5-8 minutes after saving a file.

The following helped:

  • I broke the routers into monorepo libs, each buildable (routers-auth, routers-orders, etc.)
  • I then have a main router lib (routers-main) which effectively does new Hono().route('/auth', authRouter).route('/order', orderRouter) (see the sketch after this list).
  • I have 2 build configurations on routers-main:
    • production: builds and generates d.ts for the entire app, from which I can then create my Hono RPC client.
    • development: builds all of the dependencies (routers-auth, routers-orders, etc.) but skips tsc on routers-main because it takes the longest. This does not generate the d.ts, so live reload only updates the business logic and not the client. In reality it just type-checks all of the routers and bundles them together, without type-checking the final artifact.
  • To update the hc client I just run routers-main:build:production, which generates the updated d.ts. It takes quite some time, but because production builds get cached, the only thing that needs to be rebuilt is routers-main.
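Roughly, the main router lib looks like this (simplified sketch; the package names are just illustrative):

```ts
// routers-main/src/index.ts
import { Hono } from 'hono'
import { authRouter } from '@acme/routers-auth'   // illustrative package names
import { orderRouter } from '@acme/routers-orders'

const app = new Hono()
  .route('/auth', authRouter)
  .route('/order', orderRouter)

// Only the production configuration runs tsc here and emits the d.ts
// that the hc client is generated from; the development configuration
// bundles this file without type-checking it.
export type AppType = typeof app
export default app
```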

I can make a sample repo if you would like me to show the example above in detail.

Each router builds in parallel, taking around 10 seconds, so I brought my live-reload time down from a few minutes to around 10 seconds. As for IntelliSense, it's blazingly fast, although I had to trade off live RPC builds (i.e. the client does not change until I run routers-main:build:production, which takes about a minute).

As for teaming up and finding the best permanent solution: I feel like ts-rest solves this problem the best. The cool thing about it is that it has adapters, meaning pretty much any backend framework can be used with ts-rest. There's already a project for this, but it has not been maintained for a long time. We could contribute there to see if we can make anything work.

Feel free to contact me using the links on my website if you have any other questions or would like to work together on this!

@Rick-Phoenix

@askorupskyy Do you by any chance make heavy use of Zod's omit, extend, and pick methods? I saw a benchmark the other day showing that using these even a few times can increase the language server's workload by 10x.

Btw, would you mind explaining what ts-rest does differently here and how it might improve this issue?

@askorupskyy
Contributor Author

askorupskyy commented Feb 4, 2025

@Rick-Phoenix yes, I do use .omit() and .pick() heavily. I'm not sure the performance difference is really 10x, but either way it should not affect the compile time of my business logic by minutes; type inheritance is at fault here.

As for ts-rest: it is a framework designed to create RPC wrappers for any JS framework. The way it works is that you define contracts, similar to routes in hono/zod-openapi. The main difference is the way the client is created from those contracts.

In Hono, you define a bunch of routes, and each route expands the type of the previous route, leaving you with a huge AppType, which requires your entire app to compile before you can work with the client. Not only does this include a bunch of server-side types in your client code, it also requires tsc to join and inherit all of these types.

ts-rest takes a slightly different approach: a contract is everything the client needs in order to be generated. It contains the input/output types or validations, and that's it. Effectively, it's just an object you create that represents the shape of your backend. This is the same thing that Hono, Elysia, and tRPC take forever to compile, but since you define it yourself, it gives you instant IntelliSense and compile times while providing the same kind of DX.

Now when you define routes with ts-rest, you just import a contract into the handler, and it provides you with type completion, basically checking ReturnType<typeof orderHandler> satisfies contract['order']['output']. Quite simple and effective.
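Roughly, a contract looks like this (a sketch from memory of ts-rest's API, so double-check their docs for the exact shape):

```ts
import { initContract } from '@ts-rest/core'
import { z } from 'zod'

const c = initContract()

// The contract is a plain object describing the shape of the API.
// The client is typed from this object alone, so nothing on the server
// has to compile before the frontend gets full type completion.
export const contract = c.router({
  getOrder: {
    method: 'GET',
    path: '/orders/:id',
    responses: {
      200: z.object({ id: z.string(), total: z.number() }),
    },
  },
})
```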

I am not saying that Hono should be like ts-rest, but from what I've seen so far, this model seems to be the best in terms of DX vs performance. It is definitely worth exploring, and would definitely make Hono an instant winner compared to tRPC and Elysia, which suffer from the same kind of flaw.

@Rick-Phoenix

That is very fascinating, thank you for taking the time to explain it. I wanted to try out tRPC because I am fascinated by the idea of it, but if ts-rest offers more or less the same functionality with better performance, I'll try that instead.

To be honest, I thought tRPC was exactly like that (you directly define the schemas of the endpoints), so how come so many people struggle with TS performance? Does it also have the same kind of type inheritance structure in its routes as Hono?

Btw the benchmark I was referring to is this: https://dev.to/nicklucas/typescript-runtime-validators-and-dx-a-type-checking-performance-analysis-of-zodsuperstructyuptypebox-5416

@askorupskyy
Contributor Author

Yes @Rick-Phoenix, each .procedure() call in tRPC extends the type of the core app. Same thing as Hono.
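Roughly (a v10-style sketch), the router type is inferred from every procedure on the server, just like Hono's AppType:

```ts
import { initTRPC } from '@trpc/server'
import { z } from 'zod'

const t = initTRPC.create()

export const appRouter = t.router({
  orderById: t.procedure
    .input(z.object({ id: z.string() }))
    .query(({ input }) => ({ id: input.id, total: 0 })),
})

// The client is typed from AppRouter, so the whole server still has to
// type-check before the frontend types are available.
export type AppRouter = typeof appRouter
```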

Thanks for sharing the benchmark!

It looks like ts-rest supports fetch adapters, so I will take a look over the weekend.

@Rick-Phoenix

Great! Let us know if you find a way to integrate it with Hono

@MiguelsPizza

I feel like I can speak to this since I wrote a library that made me very aware of this issue: @express-ts-rpc/router and @express-ts-rpc/client. These packages provide a type-safe wrapper around Express 5, similar to Hono's type inference and client generation. I wrote them for my company to improve the DX of our large Express app without a full rewrite when we migrated to TypeScript. It's basically just Hono, and the client has an almost identical API to the Hono client.

Hono's type inference has limitations due to its Express-like route declarations. While ts-rest is great, it doesn't follow the standard Express route config, making it a hard sell for those migrating from Express or hiring devs with Express experience.
To avoid hitting Node's/IDE's memory limits during TypeScript builds, avoid putting too much into one Hono app. Combining Hono apps creates deeply nested generic trees that strain TypeScript's memory. When using the client, export the app type from each controller file and cast the app as any before passing it to the main app. This leaves the main entry point untyped but allows TypeScript to checkpoint inference on each controller file when using the incremental flag.
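Something along these lines (an illustrative sketch; the file and identifier names are made up):

```ts
// orders.controller.ts (illustrative names)
import { Hono } from 'hono'

export const ordersApp = new Hono().get('/', (c) => c.json({ orders: [] }))

// Export the fully inferred type for the RPC client from this file:
export type OrdersApp = typeof ordersApp

// main.ts
import { Hono } from 'hono'
import { ordersApp } from './orders.controller'

// Cast to `any` when mounting so the root app's generics stay flat.
// The entry point loses inference, but with --incremental tsc can
// checkpoint each controller file on its own.
const app = new Hono().route('/orders', ordersApp as any)
```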

At a certain point, you will have to split up your TypeScript project into a monorepo. It's not a Hono issue, just a TypeScript one. Having different sections of your repo reference compiled d.ts files instead of implementation code is the only sustainable solution for large TS codebases I'm aware of at the moment.

The one I wrote for express-ts-rpc is a pretty good example of how to set one up.

@flipvh

flipvh commented Feb 12, 2025

Hi all. I made some progress on some of these challenges in our cella project. As @MiguelsPizza and @askorupskyy pointed out, when you have too many routes, you have to precompile parts, so we now do this in cella: manually via pnpm check, and also when you create cella for the first time, during precommit, etc. It still needs tuning, because the frontend IntelliSense (where I am importing backend hc clients and some schemas) is now near-instant, but my backend IntelliSense has gotten worse than it originally was... I fear this could be because VSCode doesn't understand what to look at, since I now have two "codebases": one in src and one in dist. Maybe during code changes it's compiling or traversing d.ts files when I don't want it to?

Secondly, my sourcemaps aren't working yet: clicking on a backend import in my frontend jumps into backend/dist, whereas I want to see my backend/src files.

So... halfway there? Let me know if you have things to share on this subject or if you are interested in helping out in our cella codebase or somewhere else.

Still, this is again a way of bypassing the challenges pointed out by the OP. I am also still interested in helping, directly or indirectly, to find ways to improve the type inference challenge in and around hono + hono/zod-openapi.

@Rick-Phoenix

Same here. My frontend is super quick since it imports the compiled types, but the backend is slow (although it did improve somehow recently; I have no idea why).

Right now I am exporting the type with all the routes and all of the zod schemas from the backend package to the shared schemas package, where tsc watch is active whenever I am in dev mode.

I am not sure how IntelliSense performance can be improved on the backend package itself... I guess you could define the Zod schemas in an external (precompiled) package, but other than that, not many ideas. You could even split routes into their own isolated packages, but that could get complicated quickly.
