test: does erasing types of components improve compile times? #2905
Conversation
Force-pushed from 3ed6823 to 5bcab29.
It's worth asking whether this should be opt-in or opt-out. (If you look at the commit history here, you can see I waffled.) My thinking is this: ideally, the framework should be able to operate in "no erased types" mode all the time. It produces the best end result for your users, in both binary size (load times, bandwidth use) and runtime performance. It is purely a limitation of compiler performance that this becomes infeasible for some larger apps, with larger UI trees and therefore combinatorially increasing type trees. I would rather the default be the best outcome, and be able to tell people "oh, and it might help your dev-mode compile times to opt into the …
I suppose the question is where the pain point is. I expect developers to notice long compile times when they first try Leptos, not binary size or runtime perf. Then, when they go to release or deploy, you say: oh, you need the release flag, and you can optimize size by adding this. I think people expect some hoops at that point. Making this opt-in, i.e. slow to compile by default, reinforces a preconception I've heard a bunch: that Rust on the web has slow iteration times due to long compile times. I imagine more people trying it out would simply have their preconception validated and quit than would go through the friction of asking the question. I understand the fear of packages escaping into the wild with poor performance/bundle size, but it would be quite easy to add a warning/check to …
Force-pushed from ffc40bd to 926ae84.
Chiming in here. TL;DR: it's hard to argue that a ~50% improvement in compile times will ever not be wanted in dev. It's also easy to see the opposite is true in release, e.g. … How could this be done? I've done some concrete comparisons, to remove any doubt, between main … Notes: …
I hope this helps, and thanks again for your work @gbj, truly impressive.
I've just added 0.6 timings for the codebase under test, pre-port to 0.7. Still slightly better, but it's important to show that it's a minimal increase from 0.6 to 0.7 for dev compiles once erasure is added too.
Force-pushed from 926ae84 to d345416.
Thanks for the numbers @zakstucke. Ben and I were discussing yesterday, and I have a proposal: using a custom flag … Benefits of using …
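As a hedged sketch of how a custom cfg flag could toggle erasure (the flag name `erase_components` and the `maybe_erase` shim here are illustrative, not Leptos's actual API):

```rust
// Illustrative sketch only: `erase_components` and `maybe_erase` are
// assumed names, not Leptos's real API. The erased variant would be
// built with RUSTFLAGS="--cfg erase_components".

#[cfg(erase_components)]
fn maybe_erase<F>(f: F) -> Box<dyn Fn() -> String>
where
    F: Fn() -> String + 'static,
{
    // Erased: every component shares one boxed type, so the view
    // tree's type no longer grows with nesting depth.
    Box::new(f)
}

#[cfg(not(erase_components))]
fn maybe_erase<F>(f: F) -> F
where
    F: Fn() -> String + 'static,
{
    // Not erased: the concrete closure type flows through unchanged,
    // which is best for binary size and runtime, at compile-time cost.
    f
}

fn main() {
    let view = maybe_erase(|| "hello".to_string());
    println!("{}", view());
}
```

Because the flag is a `--cfg`, it can be flipped per-profile by a build tool (e.g. dev vs. release) without any feature-unification surprises.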
I've rebased the PR and updated accordingly.
@gbj sounds great! I think the crux is … Your flag solution, and having it built into cargo-leptos, sounds like it covers more edge cases than my features suggestion.
Hey @gbj, unfortunately this branch now brings back the dreaded infinite compile -> RAM overflow; this happened after my benchmarks. I've tested the current commit history of this PR, and the breaking commit is: …
Interestingly, when I reversed the changes in that commit, I assumed it would be caused by the … What actually causes it is the … Exact changes to get it compiling here: … Hopefully that makes some sense to you…
It seems just replacing the contents of the … Something like: the compiler gets stuck producing code for …
@zakstucke Thanks! Looks like this particular change must have been the issue, then, more than the others. That's quite tricky, as this PR becomes virtually unusable with …
Unusable, yes, but only because of this problem with …
But technically a site does work by just removing the closure entirely and just returning …
@zakstucke Is it possible that the whole issue was that I foolishly named them the same thing? I don't think that's right, because I think I'm correctly accessing it as the struct field function and not the method, but I guess maybe it is. Try d799ede, I guess. I don't think it will fix it, but who knows.
No, I'm quite confident there's a logic issue here; take a look at the …
For example, when I was testing, I tried reworking things to reuse the same …
@gbj To put my thinking more formally: … Where …
I'm not too familiar with compiler internals, so I may be way off, but from my testing and how the logic looks, this or something similar seems to be what's happening.
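A toy model of the kind of compile-time recursion being discussed (a hedged illustration, not the actual view/`AnyView` machinery):

```rust
use std::fmt::Debug;

// A fully generic version like this can never finish compiling,
// because nest::<T> statically references nest::<Option<T>>, which
// references nest::<Option<Option<T>>>, and so on without bound:
//
//     fn nest<T: Debug>(x: T, depth: usize) {
//         if depth > 0 { nest(Some(x), depth - 1); }
//     }
//
// Erasing to a trait object closes the loop: every level is the same
// `Box<dyn Debug>`, so rustc emits exactly one instantiation.
fn nest_erased(x: Box<dyn Debug>, depth: usize) -> Box<dyn Debug> {
    if depth == 0 {
        return x;
    }
    nest_erased(Box::new(Some(x)), depth - 1)
}

fn main() {
    println!("{:?}", nest_erased(Box::new(42u32), 3)); // Some(Some(Some(42)))
}
```

If erasure only happens *after* the typed child has already been embedded in the erasing closure's own type, the same unbounded growth can reappear despite the boxing, which matches the symptom described above.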
@zakstucke Thanks. This is a little tricky, because I don't have an example in front of me that actually causes compiler issues. Do you have one, or does it only occur in a sufficiently large/complex app that it's not really reproducible in small? While investigating, I did find a possible source of the looping: namely, that calling … I have no idea whether this will help with the particular issue, but it seems better to me in any case.
Unfortunately your change doesn't fix it. I've been trying to repro: after deleting half the project, the app starts compiling in ~5m… still using >10GB of RAM whilst compiling, though. (I wasn't deleting components, just the entire body of components, aka all the …) So it seems like it might be a size issue. Shame, my recursive idea would've been much nicer 😢
Does …
I've also realised I can actually get to the end of compilation with the untouched project too; it uses close to 50GB of RAM to get there (I must've been running too much other stuff last time, which caused my PC to crash). The compile then gets to this: …
@gbj see Discord DM.
@zakstucke I've just reverted the … I've realized that when someone runs into the case in which they need prop spreading, but it now isn't supported because their component has been erased, they can just mark that component … If you could confirm for me that the latest commit here is back to normal, I think it's in fine shape to be merged.
Related to #2902. There is a genuine trade-off between compile time and WASM binary size: there seems to be some exponential compile time growth (and occasional recursion limits!) as we build a larger and larger statically-typed view tree, but it does allow the compiler to eliminate unused code much more effectively.
- hackernews: `cargo build --timings --features=ssr` 19.67s (main) => 6.65s (branch); WASM size: 544kb (main) => 583kb (branch)
- todo_app_sqlite_axum: `cargo build --timings --features=ssr` 5.74s (main) => 2.62s (branch); WASM size: 535kb (main) => 566kb (branch)
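The trade-off behind these numbers can be sketched in miniature (assumed names, not Leptos code): the statically-typed path is monomorphized per concrete closure type, so the optimizer can inline it and strip unused code, while the erased path calls through a vtable, which keeps the type tree small for the compiler but is opaque to inlining and dead-code elimination in the final WASM.

```rust
// Toy model of the compile-time vs. binary-size trade-off.
fn render_static<F: Fn() -> String>(f: F) -> String {
    f() // direct call, specialized per F; fully visible to the optimizer
}

fn render_erased(f: &dyn Fn() -> String) -> String {
    f() // indirect call through the vtable; limited inlining/DCE
}

fn main() {
    let f = || "view".to_string();
    // Same behavior either way; the difference is purely in codegen.
    assert_eq!(render_static(&f), render_erased(&f));
}
```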