benchmark: maintain no. of parallel requests in throughput benchmark #17
Conversation
Check the changes, especially the … If this is alright, we may have to update the measurements. And bump the version?
Results here: https://gist.github.com/dcharkes/2194c401945a60930f07225c792369a6. DartIO has higher throughput. It would be interesting to figure out why (CPU overhead of Cronet vs. DartIO?). Yes, we have to update the measurements. No, we don't have to bump the version: we don't actually change the package itself, just the benchmarks and markdown files, which are not subject to versioning in a Dart package. It is not the Dart API or the semantics of the API.
I can't conduct the throughput test against a local server. It maxes out my resources and then crashes.
Against https://example.com, though, my results seem weird:
Should I put these in the markdown file? And can you conduct the local-server test? My system crashes every time. Though we could skip this test if it keeps causing unrelated issues like this.
Hm, you could try to figure out what is going on with example.com. Maybe you're over-saturating network bandwidth or CPU? Understanding it will teach us more about the two different systems. If I find some time, I can run the local tests. (But they might be less representative of real-world performance, so I would not prioritize them.)
A closer inspection of … Maybe #3 (comment) was the indication. In our previous throughput-testing script, we noticed:
😕
Tested on another local server (HTTP/1.1 Python simple server) and got this:
Got no response from dart:io after …
@dcharkes Updated the files with the results from your machine, as the results seem kinda weird on mine. I ran it multiple times and got the same response every time. I think we should investigate why this is happening while we continue working on other features/issues, since my machine is more representative of a typical user's machine (config-wise). I need help with that; I've put some of the details in the comments above :)
In the throughput benchmark, spawn a new request whenever any request completes within the stipulated time window, so that the number of parallel in-flight requests stays constant.
Closes #16
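The idea above (refilling the pool of in-flight requests as soon as any one completes, instead of firing a fixed batch and waiting for all of them) can be sketched as follows. This is a minimal illustration in Python, not the package's actual Dart benchmark; the names `send_request`, `parallelism`, and `duration_s` are hypothetical.

```python
import asyncio
import time


async def throughput_benchmark(send_request, parallelism, duration_s):
    """Run `send_request` with a constant number of in-flight requests.

    Whenever a request completes, a new one is spawned immediately, so
    close to `parallelism` requests are always running until the clock
    runs out. Returns the number of requests completed in the window.
    """
    deadline = time.monotonic() + duration_s
    in_flight = {asyncio.ensure_future(send_request())
                 for _ in range(parallelism)}
    completed = 0
    while True:
        timeout = deadline - time.monotonic()
        if timeout <= 0:
            break
        done, in_flight = await asyncio.wait(
            in_flight, timeout=timeout,
            return_when=asyncio.FIRST_COMPLETED)
        completed += len(done)
        # Top the pool back up to `parallelism` in-flight requests.
        while len(in_flight) < parallelism:
            in_flight.add(asyncio.ensure_future(send_request()))
    # Requests still running when time expires do not count.
    for task in in_flight:
        task.cancel()
    await asyncio.gather(*in_flight, return_exceptions=True)
    return completed


async def fake_request():
    # Stand-in for a real HTTP request, for demonstration only.
    await asyncio.sleep(0.01)


if __name__ == "__main__":
    print(asyncio.run(throughput_benchmark(fake_request, 10, 0.3)))
```

The fixed-batch approach under-reports throughput because one slow request leaves the other slots idle; keeping the window full measures the sustained rate instead.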