Revert url perf changes #1602
Conversation
+1 BTW, if someone wants to put this behind a flag, they can do that in a minor release, since it would only be considered a feature addition.
I'm +1 here as well. I know there are more than a handful of places in my own modules, or modules I maintain, where this may result in a subtle breakage (including
LGTM. Let's not hold up the release.
Wearing my npm maintainer's tiara, I'm +1 on this if the goal is to get
+1. Let's get this in behind a flag (or, as @mikeal suggests, published on npm as a
+1. I apologize for not weighing in on #1561 initially.

I'd like to propose a rule that any commit message that mentions performance MUST include either specific data on the improvement of a pre-existing benchmark, or a new benchmark that shows improvement with the patch. If others here agree this is an idea worth pursuing, I'll send a PR for the contribution guide.

Without specific and reproducible benchmark results, it's impossible to even have a discussion about performance. Even if there are improved benchmarks, the benefit has to be weighed against the costs, including the cost of a lack of stability, and must be put in perspective of how big a performance difference it'll make to a real application. What's hurting me much more these days is child_process performance, not url parsing. Even before this change, we could parse 1,000,000 urls in just a few seconds.

If we do revisit #1561, it should also be split into separate pieces for the benefit of good atomic commit hygiene. Switching from object properties to accessors is a significant change; there are other changes to the parsing approach, renamed variables, and so on.

Just to try to put some level of data behind what the performance impact is here, I came up with this very simple benchmark:
There is no performance difference to be had. Even if there were, it's an operation that can already be performed in excess of 150k times per second, and no one has that many urls to parse. Therefore, the only consideration should be whether or not the semantic changes on their own are worth the disruption. I think it's clear that they're not. It would be better if this module were taken to userland, leaving url.js as it was.
The bench results were attached to the original PR and didn't get ported over to the final PR (which was cut because GitHub doesn't allow PRs to be retargeted). Sorry for the information loss (which is due to me switching the dev branch from
+1 I wouldn't mind it flagged or as a preload module, so long as that takes us basically no time to do.
This reverts commit 3fd7fc4.

It was agreed that this change contained too much potential ecosystem breakage, particularly around the inability to `delete` properties off a `Url` object. It may be re-introduced for a later release, along with better work on ecosystem compatibility.

PR-URL: nodejs#1602
Reviewed-By: Mikeal Rogers <[email protected]>
Reviewed-By: Ben Noordhuis <[email protected]>
Reviewed-By: Forrest L Norvell <[email protected]>
Reviewed-By: Chris Dickinson <[email protected]>
Reviewed-By: Isaac Z. Schlueter <[email protected]>
Reviewed-By: Jeremiah Senkpiel <[email protected]>
https://jenkins-iojs.nodesource.com/job/iojs+any-pr+multi/641/ will merge if we get a 👍 from CI
merged f34b105
Really sorry, @petkaantonov. I was looking forward to getting this out there, but I'm also glad that we're taking a safer approach to compatibility with the ecosystem; hopefully we can find a path forward that gets this back in sooner rather than later. Thanks for the excellent work on this so far!
This reverts commit 6687721.

It was agreed that this change contained too much potential ecosystem breakage, particularly around the inability to `delete` properties off a `Url` object. It may be re-introduced for a later release, along with better work on ecosystem compatibility.

PR-URL: #1602
Reviewed-By: Mikeal Rogers <[email protected]>
Reviewed-By: Ben Noordhuis <[email protected]>
Reviewed-By: Forrest L Norvell <[email protected]>
Reviewed-By: Chris Dickinson <[email protected]>
Reviewed-By: Isaac Z. Schlueter <[email protected]>
Reviewed-By: Jeremiah Senkpiel <[email protected]>
This reverts commit dbdd81a.

It was agreed that this change contained too much potential ecosystem breakage, particularly around the inability to `delete` properties off a `Url` object. It may be re-introduced for a later release, along with better work on ecosystem compatibility.

PR-URL: #1602
Reviewed-By: Mikeal Rogers <[email protected]>
Reviewed-By: Ben Noordhuis <[email protected]>
Reviewed-By: Forrest L Norvell <[email protected]>
Reviewed-By: Chris Dickinson <[email protected]>
Reviewed-By: Isaac Z. Schlueter <[email protected]>
Reviewed-By: Jeremiah Senkpiel <[email protected]>
@isaacs The performance improvement is up to 25x, but at least 10x, as shown in many issue/PR threads linked from the real PR. The absolute speed is only 10k parses per second, so it is a major bottleneck for web servers, since express parses the url on each request. It improves the TechEmpower single-query benchmark score by 30% (yes, that benchmark queries a database and renders templates as well; that's how slow url parsing is).

The reason for making everything an accessor is so that

This has been in userland for a long time now (without everything being an accessor, though), but as far as I can tell from anecdotal evidence, many people aren't using io.js simply because there is nothing really substantial over node. 30% faster web servers would surely have been such a thing.
Maybe the current parser could be rewritten in several passes, eliminating bottlenecks one by one?
@isaacs I don't know what you've been testing, but here is what I get:
That is a 12x performance gain. Can you double-check whether you actually tested the real thing?

Edit: that's not even enough iterations for the JIT warm-up to be taken into account; the actual number for the faster url parser is about 2.3 million, making it a 15x improvement for that particular test case.
To be clear here: I'd like to see it land in the future, but we need to do some prep on the ecosystem so that important things like npm actually still work. :)
Please move the discussion to #643.
For discussion purposes at the moment, continuing from #1591, but this is the easiest path that I see to getting a less controversial 2.0.0 out.