Panicking in Client call #1429
This issue was first brought up in the rusoto project: rusoto/rusoto#914
As noticed, the panic is in
Thanks! This fixed the issue for me. Things are rock solid from my initial testing.
@seanmonstar when can this be put out in a release?
Should be published soon (today?). I'd like to get in some documentation fixes as well (separate from this issue).
Thank you for the exceptionally quick turnaround time on these fixes, @seanmonstar. 😄
@seanmonstar my tool appears more resilient now. I can run it at full speed without the reactor dying from a panic. I do still see regular occurrences of the new IO error you added in the output:
Should I create a new ticket for this error?
@xrl yea, definitely. Bummer. The connection pooling code in hyper hasn't seen much love, and is complicated and apparently fragile 😢.
I am sending requests to AWS Kinesis. Because the requests are small, I have to issue many of them concurrently, on the order of 1000 at a time. I am using the futures stream::BufferUnordered combinator to put a cap on concurrent requests. Now I have an issue where, after roughly 30 seconds, the tokio core panics and goes away.
Here is the backtrace:
You can reproduce the error by cloning https://github.com/tureus/kinesis-hyper. You will need AWS credentials and a single test stream already created. The code will take it from there and start issuing many concurrent requests.
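The concurrency-capping pattern described in the report can be sketched with std-only Rust. This is a hypothetical stand-in for the futures 0.1 `stream::BufferUnordered` combinator, using plain threads instead of a tokio reactor; the worker count, task count, and the simulated request body are assumptions for illustration, not the actual kinesis-hyper code:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

/// Run `total` simulated requests while never letting more than `cap`
/// be in flight at once, mimicking BufferUnordered's back-pressure.
/// Returns the number of completed requests.
fn run_capped(total: usize, cap: usize) -> usize {
    let in_flight = Arc::new(AtomicUsize::new(0));
    let peak = Arc::new(AtomicUsize::new(0));
    let completed = Arc::new(AtomicUsize::new(0));

    // Exactly `cap` workers exist, so at most `cap` "requests" can
    // ever be active at the same time.
    let mut handles = Vec::new();
    for w in 0..cap {
        let in_flight = Arc::clone(&in_flight);
        let peak = Arc::clone(&peak);
        let completed = Arc::clone(&completed);
        handles.push(thread::spawn(move || {
            // Worker `w` handles indices w, w + cap, w + 2*cap, ...
            let mut i = w;
            while i < total {
                let now = in_flight.fetch_add(1, Ordering::SeqCst) + 1;
                peak.fetch_max(now, Ordering::SeqCst);
                // A real version would send the HTTP request here.
                completed.fetch_add(1, Ordering::SeqCst);
                in_flight.fetch_sub(1, Ordering::SeqCst);
                i += cap;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    // The cap was respected: concurrency never exceeded `cap`.
    assert!(peak.load(Ordering::SeqCst) <= cap);
    completed.load(Ordering::SeqCst)
}

fn main() {
    // 1000 simulated requests with at most 16 in flight at once.
    println!("{}", run_capped(1000, 16));
}
```

The point of the sketch is that the cap bounds memory and socket usage but does nothing about what happens when an individual in-flight request fails, which is where the panic in this issue surfaced.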