
Active connections time out while shipping bulk data #32

Closed
bgilbert opened this issue Nov 12, 2012 · 4 comments

Comments

@bgilbert

The transport send_pack() implementations detach from the session before data transmission is complete. (They either don't use RequestHandler.flush(), or don't request a callback.) When the pack takes a long time to transfer, the SessionContainer can then expire the session while the client is still receiving. The client finishes receiving, tries to open the next connection, fails, and reports "Server lost session".

This occurs reliably with XhrPollingTransport and a multi-megabyte pack. It may also affect streaming transports when they hit the amount_limit, but I haven't tested this.
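The race can be modeled in a few lines. This is a hypothetical sketch, not sockjs-tornado's actual classes: `Session`, `SessionContainer`, and the timings are illustrative. The point is that the expiry clock advances at queue time, so any transfer slower than the session timeout loses the session.

```python
# Hypothetical model of the race: activity is recorded when the write
# is queued, not when it is delivered, so a slow transfer lets the
# session expire while the client is still downloading.
SESSION_TIMEOUT = 5.0  # seconds of "inactivity" before expiry

class Session:
    def __init__(self, sid):
        self.sid = sid
        self.last_activity = 0.0

class SessionContainer:
    def __init__(self):
        self._sessions = {}

    def add(self, session):
        self._sessions[session.sid] = session

    def get(self, sid):
        return self._sessions.get(sid)

    def expire_idle(self, now):
        # Periodic sweep: drop sessions idle longer than the timeout.
        for sid in list(self._sessions):
            if now - self._sessions[sid].last_activity > SESSION_TIMEOUT:
                del self._sessions[sid]

def send_pack_detach_early(session, queued_at):
    # Buggy behaviour: touch the session when the write is queued, then
    # detach immediately -- the multi-second transfer happens afterwards.
    session.last_activity = queued_at

container = SessionContainer()
container.add(Session("abc"))

send_pack_detach_early(container.get("abc"), queued_at=0.0)
container.expire_idle(now=30.0)  # sweep runs while the client still downloads

# The client finishes receiving at t=30s and reconnects: session is gone.
assert container.get("abc") is None  # client reports "Server lost session"
```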


mrjoes commented Nov 13, 2012

Uh, that's not a typical use case.

As I understand it from the description, here's what happens:

  1. The client uses a polling transport;
  2. You send a multi-megabyte message to the client;
  3. Because the message is sent asynchronously, it might take more than 5 seconds for the client to receive it;
  4. When the client finally receives the huge message, it attempts to reconnect to the server, but the session is already gone.

In this case, yes, the connection should be closed only after all data has been successfully sent to the client. I will take a look.
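A sketch of the fix being described here (illustrative names, not the library's actual API): defer the activity update to the flush callback, so the expiry clock only starts once the transfer has completed.

```python
# Hypothetical sketch of the flow-control fix: last_activity is only
# advanced when the flush callback reports the write complete, so a
# slow client cannot be expired mid-transfer.
SESSION_TIMEOUT = 5.0

class Session:
    def __init__(self, sid):
        self.sid = sid
        self.last_activity = 0.0

sessions = {"abc": Session("abc")}

def expire_idle(now):
    for sid in list(sessions):
        if now - sessions[sid].last_activity > SESSION_TIMEOUT:
            del sessions[sid]

def send_pack(session, transfer_seconds, queued_at):
    # Flow control: don't touch last_activity at queue time; wait for
    # the flush callback, which fires when the write actually finishes.
    def on_flush_done(completed_at):
        session.last_activity = completed_at
    on_flush_done(queued_at + transfer_seconds)  # simulated callback

send_pack(sessions["abc"], transfer_seconds=30.0, queued_at=0.0)
expire_idle(now=30.0)      # sweep runs right as the transfer finishes
assert "abc" in sessions   # session survives the slow transfer
```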

mrjoes closed this as completed in 3a9a96c on Nov 13, 2012

mrjoes commented Nov 13, 2012

Try it now: I added simple flow control for the polling and streaming transports.

@bgilbert

Debugging fail: I'm using XhrStreamingTransport, not XhrPollingTransport. And no, this didn't fix it: the flush callback runs long before all of the data has been transmitted. Not sure why.

Regardless, I guess the race condition isn't completely avoidable due to TCP buffering and network delay.


mrjoes commented Nov 14, 2012

I see. It looks like the congestion-avoidance algorithm kicked in and the TCP stack decided to buffer the outgoing data without reporting back to the application layer.
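The gap described here can be seen with plain sockets, independent of sockjs-tornado: a successful send() only means the kernel accepted the bytes into its buffer, not that the peer has read them, which is why a flush callback can fire long before the client has the data. A minimal demonstration:

```python
# A successful send() means the kernel buffered the bytes, not that
# the peer received them -- the same gap that makes a flush callback
# fire before the client has the data.
import socket

a, b = socket.socketpair()
payload = b"x" * 4096

sent = a.send(payload)       # returns as soon as the kernel buffers it
assert sent == len(payload)  # "success" -- but b has read nothing yet

# Much later, the peer drains the kernel buffer.
received = b""
while len(received) < len(payload):
    received += b.recv(4096)
assert received == payload

a.close()
b.close()
```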

This change should improve the situation for mobile/slow connections as well, so I'll keep it.
