Expand Perf Suite Codebases, Expand Perf Suite Operations #44033
@rbuckton we can turn on declaration emit, source maps, and declaration maps for most (if not all) perf projects (and remove
@weswigham Perhaps you can do this additionally?
(CC @amcasey) @MartinJohns can you elaborate?
@DanielRosenwasser Continue to run the perf suite as is, but additionally run the perf suite with declaration emit, source maps, and declaration maps enabled. Basically, produce two data sets.
In our current setup, we run benchmarks against a snapshot of a specific project, using the settings specific to that project. This is intended to cover different scenarios, such as:
That way we aren't only focused on everything being a "kitchen sink" build, and we can catch performance regressions in non-"kitchen sink" areas (e.g., we don't want to slow down in code paths where we don't emit source maps). The current approach is somewhat randomized, but it covers most of the bases. I wouldn't want to have to run a benchmark against the same project for every permutation of every feature, as that doesn't scale.
What we can do is add new scenarios that point to the same sources but with different settings. That way old runs wouldn't be invalidated. So in that case we might have a "TFS+SourceMaps" scenario, etc.
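As a concrete illustration (my own sketch, not an actual file from the perf suite; the file name and `extends` target are assumptions), a "TFS+SourceMaps"-style scenario could be a small config overlay on top of the snapshot's existing settings:

```jsonc
// tsconfig.sourcemaps.json — hypothetical scenario overlay.
// Reuses the snapshot's own configuration and layers on the emit
// features discussed above (declaration emit, source maps, declaration maps).
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "declaration": true,
    "declarationMap": true,
    "sourceMap": true
  }
}
```

A scenario like this points at the same sources as the base scenario, so it measures only the cost of the extra emit work.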
Related: #44251
Oh, I didn't realize we were listing things here. Copy/paste from our internal thread and my notes:
Also related: I want to add two new hosts to ts-perf, vscode and bun. The former we can pull via https://www.npmjs.com/package/@vscode/test-electron and then use ELECTRON_RUN_AS_NODE. I want this because Electron's build of V8 has different options, mainly pointer compression, which impact the maximum memory usage of the process as well as the small integer optimization. I also got bun's new "as node" mode working with some tweaks to typescript (which are really just bugs in bun that don't have proper error messages). It's an interesting test as it's not V8, it's JSC.
Maybe Ghost.
I have understood that the snapshots for benchmarks can be quite old, for example, Angular. Would it make sense at some point to refresh those? I would be interested in how a recent Angular code base is handled by different TypeScript releases. I understand that you probably do not want to update the benchmarks you are using in PRs, but maybe when doing a release it would be possible to add more current code. (And if you choose one version now, of course it will be old in a few years :D ) Newer snapshots would be using newer features. Of course, it's also possible that I have misunderstood the case.
Our hope is to have this all in public, so things can be updated where needed by anyone. That and time/fragility are the reasons we haven't updated. The snapshotting is hard to get rid of, because it's required for consistency in graphs without backfilling. However, I'm working on new infra that will do A/B tests, which means we won't always need to be running a specific pinned version.
Drizzle? (see #54939)
As of today, our benchmark suite can now run on Bun. This is particularly interesting as it uses an entirely different engine (JSC) than Node/Deno (V8).
I'm not yet sure if this should be a part of our main test suite; probably not until we increase our capacity a bit more. Next up would be using VS Code's Electron build via ELECTRON_RUN_AS_NODE.
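Since the interesting variable across these hosts is which engine sits underneath, here is a small illustration (my own sketch, not part of ts-perf) of telling them apart at runtime via the Node-style `process.versions` object, which Bun and Deno also populate in their Node-compat modes:

```javascript
// Illustration only: identify the benchmark host and its JS engine from
// process.versions. Keys like "bun", "deno", and "electron" are only
// present when running under the corresponding host.
function detectHost() {
  const v =
    typeof process !== "undefined" && process.versions ? process.versions : {};
  if (v.bun) return { host: "bun", engine: "JavaScriptCore" };
  if (v.deno) return { host: "deno", engine: "V8" };
  if (v.electron) return { host: "electron", engine: "V8" };
  if (v.node) return { host: "node", engine: "V8" };
  return { host: "unknown", engine: "unknown" };
}

console.log(detectHost()); // under plain Node: { host: "node", engine: "V8" }
```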
This is fixed in oven-sh/bun#4378 (currently the canary build), which will become the release build (v0.8.2) tomorrow or the next day. The error was in some code assuming … The last few lines of output of …
Note how it is no longer calling openat() on …
Thanks for the fix! There are other things that are broken (as noted above), but I have yet to go reproduce them. I'll send some bug reports once I get that sorted.
Hello from Prisma 👋🏼 The performance test suite seems to be located at https://github.com/microsoft/typescript-benchmarking, but it is currently private. Do you plan to make it public one day? (Of course you might have reasons not to make it public; I'm just curious here.)
That repo currently contains only build infra, which I plan to make public in short order, as there's nothing sensitive in it. The actual tool that does the benchmarking isn't ready to be made public, and that's where the benchmarks currently live. I'm hoping I can fix it up to pull benchmarks from the public repo, but it's not really made for multiple places to store benchmarks, or for benchmarks that have to be cloned from elsewhere. But those are just details, and I'm working on it. The new hosts were just a lot easier to add!
Next up is adding some more benchmarks and making public what I can make public quickly, e.g. the infra itself (but not the tool, sorry).
I'm happy to say that the new perf infra is now live and public: https://github.com/microsoft/typescript-benchmarking

I've been waiting to make this public as I did not yet have a good mechanism to support public benchmarks, but I got it all working in the past few days and now it's ready to go. The first public benchmarks are …

Note that the public benchmarks are still split off from our old internal benchmarks; we have a lot more perf capacity than we used to have, but still not enough to start adding lots that always run. For now, they can be run via …

This is the first time we've had new, actually error-free benchmarks in years, so I'm very excited.
I'm going to close this issue in favor of microsoft/typescript-benchmarking#32 on the benchmarking repo. This thread was refocused toward benchmark suggestions, but now that the repo that holds them is public, discussion and PRing can happen there. |
The perf suite codebases were typically added for specific scenarios, but if you're trying to get a sense of how source map emit has changed, that's hard given that not all projects use that functionality.
I think we should