
Unrealistic benchmark scenarios: ignoring initialization #9

mindplay-dk opened this issue Jun 8, 2015 · 1 comment

@mindplay-dk

It's great that you went through the work of creating a real benchmark for this 👍

But, as with the original benchmark, I have to wonder if it's a realistic scenario. You're benchmarking the raw performance of simply resolving URLs in a loop. When is that going to happen in real life?

For most real-world scenarios, initialization and resolution are going to happen in lock-step, e.g. resolution is going to happen once per request - which means that initialization is also going to happen once per request.

This isn't NodeJS - the router doesn't sit around waiting, prepared for the next request; it needs to be initialized on every call. (and yes, I know about React, but the ordinary scenario is more likely Apache or FPM, etc.)

I think, for this benchmark to be relevant, initialization needs to happen in the benchmark loop.
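Roughly what I have in mind, as a sketch (the `Router` class and the route here are just placeholders, not the benchmark's actual code):

```php
<?php

// Sketch: time dispatch *including* per-request initialization,
// the way Apache/FPM would pay for it on every request.
$iterations = 1000;
$start = microtime(true);

for ($i = 0; $i < $iterations; $i++) {
    // Initialization happens inside the loop, once per simulated request.
    $router = new Router(); // hypothetical router class
    $router->addRoute('GET', '/users/{id}', 'user_handler');

    // Resolution for this "request".
    $router->match('GET', '/users/123');
}

$elapsed = microtime(true) - $start;
printf("%.6f sec total, %.6f sec per request\n", $elapsed, $elapsed / $iterations);
```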

For that matter, if this were to be totally realistic, you should run the routers in isolated scripts, using actual HTTP requests. I think it's fair to ignore overhead from autoloading etc., as it's likely going to have a marginal impact overall - but I don't think it's realistic to exclude initialization entirely?

Also note that Benchmark::execute() ought to dry-run the test once (execute without measuring) to trigger autoloaders etc. so this doesn't potentially skew the result of the first iteration.
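Something along these lines, for example (names here are illustrative; the real Benchmark::execute() is no doubt structured differently):

```php
<?php

// Sketch: run the callable once untimed, so autoloading and other one-time
// costs don't land in the first measured iteration. The $benchmark closure
// and $iterations parameters are assumed names, not the project's actual API.
function execute(callable $benchmark, int $iterations): array
{
    $benchmark(); // dry run: trigger autoloaders, warm caches, etc.

    $times = [];
    for ($i = 0; $i < $iterations; $i++) {
        $start = microtime(true);
        $benchmark();
        $times[] = microtime(true) - $start;
    }

    return $times;
}
```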

@tyler-sommer

@mindplay-dk Belated thanks for taking a look!

Your points make a lot of sense; however, my original intent was not to benchmark the full solution, only the matching. It was never intended to be realistic :)

I do agree, though, that testing initialization is very important. At that point, however, I think people should be profiling their implementation rather than relying on this silly benchmark.

As for your last point, we already discard any results outside of 3 standard deviations, which effectively negates the need for a dry run. At least, that's my assumption. Perhaps I'll test it out.
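For reference, the filtering is conceptually something like this (a rough sketch, not the benchmark's actual code; names are illustrative):

```php
<?php

// Sketch of the 3-standard-deviation filter described above.
function discardOutliers(array $times): array
{
    $mean = array_sum($times) / count($times);

    $variance = 0.0;
    foreach ($times as $t) {
        $variance += ($t - $mean) ** 2;
    }
    $stdDev = sqrt($variance / count($times));

    // Keep only samples within 3 standard deviations of the mean.
    return array_filter($times, function ($t) use ($mean, $stdDev) {
        return abs($t - $mean) <= 3 * $stdDev;
    });
}
```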

Let me know if you have any further thoughts. Cheers!
