feat: Use fixed queue #555
Conversation
@ronag @metcoder95 could you take a look?
It can potentially be due to the optimizations done for Array operations. I haven't checked, but they are often already inlined within V8 (sometimes even in assembly already).
Can you try to use a tool like tinybench to benchmark them?
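For reference, a push/shift comparison with tinybench could look roughly like the sketch below; the import path, the item count, and the assumption that FixedQueue exposes push/shift/isEmpty (like Node's internal fixed_queue) are mine, not the benchmark that was actually committed.

```js
import { Bench } from 'tinybench'
// Hypothetical import path for the FixedQueue port discussed in this PR.
import { FixedQueue } from '../dist/fixed-queue.js'

const N = 1000 // items per iteration, matching the numbers discussed below

const bench = new Bench({ time: 500 })

bench
  .add('Array push/shift', () => {
    const queue = []
    for (let i = 0; i < N; i++) queue.push(i)
    while (queue.length > 0) queue.shift()
  })
  .add('FixedQueue push/shift', () => {
    const queue = new FixedQueue()
    for (let i = 0; i < N; i++) queue.push(i)
    while (!queue.isEmpty()) queue.shift()
  })

await bench.run()
console.table(bench.table())
```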
Let's export the new FixedQueue as a built-in task queue that can be used instead of the current one, so users can opt into it. Let's add some documentation for it as well, and it should be good to go.
We could also make a "smart" mode, where if more than x (128?) items have been queued we switch automatically to the fixed queue.
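For illustration only, such a "smart" queue could be sketched like this; the class name, the 128-item threshold, the import path, and the assumption that FixedQueue exposes size/push/shift/remove (Piscina's custom task-queue shape) are all hypothetical.

```js
import { FixedQueue } from '../dist/fixed-queue.js' // hypothetical path

const kThreshold = 128 // switch point suggested above; not a measured value

class SmartQueue {
  constructor () {
    this._array = []   // small queues: plain array
    this._fixed = null // large queues: FixedQueue, once the threshold is hit
  }

  get size () {
    return this._fixed !== null ? this._fixed.size : this._array.length
  }

  push (task) {
    if (this._fixed === null && this._array.length >= kThreshold) {
      // Migrate the buffered items once; stay on the FixedQueue afterwards.
      this._fixed = new FixedQueue()
      for (const item of this._array) this._fixed.push(item)
      this._array = []
    }
    if (this._fixed !== null) {
      this._fixed.push(task)
    } else {
      this._array.push(task)
    }
  }

  shift () {
    return this._fixed !== null ? this._fixed.shift() : this._array.shift()
  }

  remove (task) {
    if (this._fixed !== null) {
      this._fixed.remove(task)
      return
    }
    const index = this._array.indexOf(task)
    if (index !== -1) this._array.splice(index, 1)
  }
}
```

Whether migrating back below the threshold would be worth the extra complexity is exactly the kind of question the benchmarks discussed below would answer.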
Nice, that would be great! We can add an option for it. Though if the default task queue is already a FixedQueue, there's nothing to do there.
Although the bottleneck with the size getter is already fixed, I still can't find the main one.
TBH, I think we should always use FixedQueue by default; the "smart" mode is just a compromise suggestion.
Here's a benchmark of 1000 items pushed to the queue and then shifted: we can see that FixedQueue is ~13 times faster. But when we do the same benchmark against Piscina with the two different task queues, the difference practically disappears, which is very strange.
Should I push these benchmarks into the branch?
I think the difference is within the margin of error.
And I completely don't understand why; I thought that even at this scale the difference should be measurable. I just raised the number of tasks to be queued to 100_000 and the Piscina benchmark started to show the difference. FixedQueue is ~6 times faster now.
Seems like I have found another caveat. TypeScript target ES2019 compiles private fields into something that kills performance. TL;DR: here is a benchmark comparison of the current code, here is the same benchmark with target ES2022, and here is what we get with target ES2019 after rewriting the modern private field syntax into the old underscore convention (like _head instead of #head).
As we can see, the same FixedQueue implementation shows different results depending on how its private fields are compiled.
So the question is: should we update the target, or rewrite all private fields on the hot path?
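For context, this is roughly what the target change is about; the field name is illustrative, and the ES2019 output quoted in the comments is paraphrased from what tsc emits for targets below ES2022.

```js
// Source, using modern private field syntax:
class FixedQueue {
  #head = 0

  shift () {
    return this.#head
  }
}

// With target ES2022+, tsc keeps `#head` as a native private field, which
// V8 handles about as fast as a normal property.
//
// With target ES2019, tsc downlevels each `#field` to a per-field WeakMap
// plus helper calls, roughly:
//
//   _FixedQueue_head = new WeakMap()
//   shift() {
//     return __classPrivateFieldGet(this, _FixedQueue_head, "f")
//   }
//
// so every access on the hot path becomes a WeakMap lookup. The underscore
// convention sidesteps the helpers entirely, at the cost of real privacy:
class FixedQueueUnderscore {
  constructor () {
    this._head = 0 // plain property, no downleveling regardless of target
  }

  shift () {
    return this._head
  }
}
```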
I'm afraid of breaking implementations if there is custom usage of the current ArrayQueue. I'm ok with keeping it like this within the current version and making it the default in the next major; I'm planning to expedite a major mid June or early July.
It's kind of expected.
Let's rewrite it for now, and we can aim for the ES2022 private syntax on the next major.
So I propose to do the following:
import { resolve } from 'node:path'
import { Piscina, FixedQueue } from 'piscina'
const pool = new Piscina({
  filename: resolve(__dirname, 'fixtures/add.js'),
  taskQueue: new FixedQueue()
})
Until it eventually becomes the default in the following major release.
Also, I have a few little benchmarks (the results of which I posted above).
I also think that
SGTM 👍
Let's commit them; they will serve as a baseline for further optimizations.
Agree, let's mention this while documenting the new FixedQueue.
Added a new test case to cover the issue with FixedQueue where removing elements from the middle of the queue leads to unexpected behavior.
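A minimal sketch of that kind of test, assuming the FixedQueue port exposes push/remove/shift and a size getter and can store arbitrary objects; the import path is hypothetical, and a real Piscina task carries extra bookkeeping this ignores.

```js
import assert from 'node:assert'
import { FixedQueue } from '../dist/fixed-queue.js' // hypothetical path

const queue = new FixedQueue()
const tasks = ['a', 'b', 'c'].map((name) => ({ name }))

for (const task of tasks) queue.push(task)

// Remove the middle element, then check that the remaining items still
// come out in their original order and that the size bookkeeping agrees.
queue.remove(tasks[1])

assert.strictEqual(queue.size, 2)
assert.strictEqual(queue.shift(), tasks[0])
assert.strictEqual(queue.shift(), tasks[2])
assert.strictEqual(queue.size, 0)
```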
For the suggestion of the
I sat and thought for a while and came to the same conclusion. Maybe this is done in order not to check the real number of elements.
test/fixed-queue.ts
Can you add some tests using the task queue with Piscina?
Added a few. Should I test any particular cases?
Let's go with the basics and whichever cases you wanted to cover while implementing the new task queue.
Co-authored-by: Carlos Fuentes <[email protected]>
Seems like it wasn't a bug, and I should've carefully read the comments, which clearly state the following:
Will double-check how it affects performance and then revert my "fixes" if the difference is observable.
Give it a try with max size + 1, and (max * 2) + 1 as a baseline, just to see its impact when expanding the queue.
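As a sketch, that could look like the following; the 2048 block size is an assumption borrowed from Node's fixed_queue (adjust it if the port uses a different capacity), and the import path is hypothetical.

```js
import { Bench } from 'tinybench'
import { FixedQueue } from '../dist/fixed-queue.js' // hypothetical path

const kBlockSize = 2048 // assumed to match the port's internal buffer size

// Push `count` items and drain the queue, so crossing a block boundary
// (and therefore allocating a new internal buffer) is part of the work.
function pushShift (count) {
  const queue = new FixedQueue()
  for (let i = 0; i < count; i++) queue.push(i)
  while (!queue.isEmpty()) queue.shift()
}

const bench = new Bench({ time: 500 })

bench
  .add('max size + 1', () => pushShift(kBlockSize + 1))
  .add('(max * 2) + 1', () => pushShift(kBlockSize * 2 + 1))

await bench.run()
console.table(bench.table())
```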
@metcoder95 sorry. Been busy with work, so haven't added any additional benchmarks.
np! The PR was great to land already; since it is not the default, the benchmarks can go in a separate PR 👍
@metcoder95 Not quite sure where and in which shape it should be present in the piscina repo (if it should be at all?). It shows results for different queue sizes.
As we can see, the first suite shows that the Piscina fix actually has a positive effect, but it's an edge case, and in general Node's implementation is roughly 13-20% faster.
Some time ago @ronag mentioned in #452 that Piscina could adopt Node.js's fixed_queue as the TaskQueue.
This draft will fix #452.
A synthetic benchmark shows a drastic speed difference when comparing fixed_queue against a queue based on a plain JS array, but the actual simple-benchmark.js shows the opposite result.
I passed FixedQueue as a custom queue here. The initial simple-benchmark.js shows around ~82k ops on my machine, but simple-benchmark-2.js with FixedQueue shows half of the original performance, at ~44k ops. Am I missing something?
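For readers unfamiliar with it, Node's fixed_queue is essentially a singly linked list of fixed-size circular buffers, so push and shift never pay the cost of Array#shift re-indexing. The following is a condensed sketch of that idea, paraphrased from Node's internal implementation rather than the exact Piscina port.

```js
const kSize = 2048
const kMask = kSize - 1

class FixedCircularBuffer {
  constructor () {
    this.bottom = 0
    this.top = 0
    this.list = new Array(kSize)
    this.next = null
  }

  isEmpty () { return this.top === this.bottom }

  // One slot is deliberately left unused so a full buffer can be told apart
  // from an empty one without tracking the real number of elements.
  isFull () { return ((this.top + 1) & kMask) === this.bottom }

  push (data) {
    this.list[this.top] = data
    this.top = (this.top + 1) & kMask
  }

  shift () {
    const nextItem = this.list[this.bottom]
    if (nextItem === undefined) return null
    this.list[this.bottom] = undefined
    this.bottom = (this.bottom + 1) & kMask
    return nextItem
  }
}

class FixedQueue {
  constructor () {
    this.head = this.tail = new FixedCircularBuffer()
  }

  isEmpty () { return this.head.isEmpty() }

  push (data) {
    if (this.head.isFull()) {
      // The write buffer is full: chain a fresh one and keep writing there.
      this.head = this.head.next = new FixedCircularBuffer()
    }
    this.head.push(data)
  }

  shift () {
    const tail = this.tail
    const next = tail.shift()
    if (tail.isEmpty() && tail.next !== null) {
      // The read buffer drained: drop it and move on to the next block.
      this.tail = tail.next
      tail.next = null
    }
    return next
  }
}
```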