
fetch #11

Open
ronag opened this issue Nov 14, 2022 · 13 comments
Labels
benchmark-needed (Add to issues that do not have a benchmark)
help wanted (Extra attention is needed)

Comments

@ronag (Member) commented Nov 14, 2022

Refs: nodejs/undici#1203
Refs: #9

@anonrig (Member) commented Nov 17, 2022

I believe we should be much more descriptive about what we want to discuss, and add a GitHub issue template.

My ideal performance issue would be:

  • Add a benchmark
  • Mention the specific use case (hot path) for this performance issue
  • Provide solutions (if available)

anonrig added the benchmark-needed label Nov 17, 2022
@metcoder95 (Member) commented:

Is this mostly referring to the WHATWG streams, or is there something more underneath?

@mcollina (Member) commented:

I don't think this is done. I haven't seen any evidence of it; I still see a significant gap compared to http.request.

mcollina reopened this Jan 24, 2023
@mcollina (Member) commented:

Here is the latest data I have collected on Node.js v19.14.0:

$ CONNECTIONS=10 PIPELINING=10 PORT=3000 PARALLEL=100 SAMPLES=1000 node benchmarks/benchmark.js
│ Tests               │ Samples │          Result │ Tolerance │ Difference with slowest │
|─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
│ undici - fetch      │     100 │  314.64 req/sec │  ± 1.29 % │                       - │
|─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
│ http - keepalive    │     100 │  600.19 req/sec │  ± 0.92 % │               + 90.75 % │
|─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
│ http - no keepalive │     100 │  602.57 req/sec │  ± 0.89 % │               + 91.51 % │
|─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
│ undici - pipeline   │     100 │ 1653.26 req/sec │  ± 1.81 % │              + 425.44 % │
|─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
│ undici - request    │     100 │ 1905.69 req/sec │  ± 1.99 % │              + 505.67 % │
|─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
│ undici - stream     │     100 │ 2185.38 req/sec │  ± 2.05 % │              + 594.56 % │
|─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
│ undici - dispatch   │     100 │ 2602.14 req/sec │  ± 0.75 % │              + 727.02 % │

We could say that the major blocker is WHATWG streams, but I would not consider this "done".

We still need a 2x throughput increase.

@RafaelGSS (Member) commented:

@mcollina today's release will include a perf improvement on webstream errors. It should improve things a bit.

@mcollina (Member) commented:

There is a massive improvement in main:

│ Tests               │ Samples │          Result │ Tolerance │ Difference with slowest │
|─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
│ undici - fetch      │     100 │  432.22 req/sec │  ± 1.49 % │                       - │
|─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
│ http - no keepalive │     100 │  598.51 req/sec │  ± 0.88 % │               + 38.47 % │
|─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
│ http - keepalive    │     100 │  647.20 req/sec │  ± 0.84 % │               + 49.74 % │
|─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
│ undici - pipeline   │     100 │ 1616.99 req/sec │  ± 1.75 % │              + 274.11 % │
|─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
│ undici - request    │     100 │ 1900.22 req/sec │  ± 1.67 % │              + 339.64 % │
|─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
│ undici - stream     │     100 │ 2170.23 req/sec │  ± 2.06 % │              + 402.11 % │
|─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
│ undici - dispatch   │     100 │ 2457.00 req/sec │  ± 1.79 % │              + 468.45 % │

We still need a rough +50% increase.

@RafaelGSS (Member) commented:

We can do it! I think I'll have time next week to work on this.

@anonrig (Member) commented Jan 30, 2023

According to my benchmarks (beware that they are flaky), the pull request to add Ada doubles the performance of undici - fetch, from 984 req/sec to 1823 req/sec.

(Screenshot: benchmark results, Jan 30, 2023)

@anonrig (Member) commented Jan 30, 2023

> We still need a rough +50% increase.

@mcollina, this is the rough +100% increase you were looking for (I guess).

@RafaelGSS (Member) commented:

That should actually improve not just fetch but all the methods, right? So, in theory, fetch will still be slower than node-fetch.

@anonrig (Member) commented Jan 30, 2023

> That should actually improve not just fetch but all the methods, right? So, in theory, fetch will still be slower than node-fetch.

I'm not quite sure. The number of `new URL` calls in undici might be greater than in node-fetch.
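As a rough way to see why the URL parser matters here: fetch constructs URL objects on its hot path, so `new URL` throughput feeds directly into req/sec, which is what the Ada PR targets. A hypothetical micro-benchmark sketch (`benchUrlParse` is not a real API, just an illustration):

```javascript
const { performance } = require('node:perf_hooks');

// Measure the cost of repeated URL parsing, the kind of work fetch does
// once per request. Returns the elapsed time in milliseconds.
function benchUrlParse(iterations = 100_000) {
  const t0 = performance.now();
  for (let i = 0; i < iterations; i++) {
    new URL(`http://localhost:3000/path?i=${i}`);
  }
  const ms = performance.now() - t0;
  console.log(`${iterations} new URL() calls in ${ms.toFixed(1)} ms`);
  return ms;
}

benchUrlParse();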

@metcoder95 (Member) commented:

What about the WHATWG streams? Has there been any further research on them?

@mcollina (Member) commented:

This keeps being closed, but either all the fixes needed are not released yet or there is some other problem:

│ Tests               │ Samples │           Result │ Tolerance │ Difference with slowest │
|─────────────────────|─────────|──────────────────|───────────|─────────────────────────|
│ undici - fetch      │     100 │  2069.16 req/sec │  ± 1.10 % │                       - │
|─────────────────────|─────────|──────────────────|───────────|─────────────────────────|
│ http - no keepalive │     100 │  3525.25 req/sec │  ± 1.08 % │               + 70.37 % │
|─────────────────────|─────────|──────────────────|───────────|─────────────────────────|
│ http - keepalive    │     100 │  5489.39 req/sec │  ± 2.56 % │              + 165.30 % │
|─────────────────────|─────────|──────────────────|───────────|─────────────────────────|
│ undici - pipeline   │     100 │  6922.14 req/sec │  ± 1.82 % │              + 234.54 % │
|─────────────────────|─────────|──────────────────|───────────|─────────────────────────|
│ undici - request    │     100 │  9877.68 req/sec │  ± 1.47 % │              + 377.38 % │
|─────────────────────|─────────|──────────────────|───────────|─────────────────────────|
│ undici - stream     │     100 │ 11218.29 req/sec │  ± 2.20 % │              + 442.17 % │
|─────────────────────|─────────|──────────────────|───────────|─────────────────────────|
│ undici - dispatch   │     100 │ 12066.94 req/sec │  ± 2.23 % │              + 483.18 % │

As you can see, there is a staggering gap between what is possible (undici `dispatch()`) and the cost of running the fetch API.

This issue should not be closed until fetch reaches the same level as `http.request()` with keepalive enabled.
