fix: use pipeline over stream.pipe #9819
Conversation
`pipeline` ends up destroying the streams it uses if there is an error in one of them. Due to this, there's no chance of a memory leak from errored-out streams. An error handler has also been added to the `StreamableFile` so that stream errors by default return a 400 and can be customized to return an error message however a developer would like. These changes only affect the Express adapter, because Fastify already handles streams internally. fix: nestjs#9759
Sure enough, there was something going on with my local build that was running fine but wasn't in CI. Got it fixed up; should be good once CI resolves now.
Hello @jmcdo29, I have some doubts about the way error handling is implemented in this PR. I think it will work fine if the error happens at the beginning of the stream, for instance if a file is not present, as in your test suite. But an error could also happen anytime mid-stream (imagine your streamable file streams data from an S3 service, but the S3 service crashes or becomes unreachable). In that case you will already have sent 200 OK headers and some content to the client, and it would be too late to send a 400. (Also, IMHO 400 is a client "bad request" error and seems inappropriate as a default.)

To my knowledge, HTTP doesn't have a mechanism for signaling an error after headers and some of the content have already been sent. So I think the "least worst" approach would be to close the client socket from the server, so the client is notified that something is wrong with the streaming (by receiving an IO error) and can retry later. If we just end the response in case of error, the client will not be able to differentiate between a normal stream completion and an error, leaving the client with truncated content without noticing it. An alternative would be for the client to compare the Content-Length with the actual streamed content; however, it is not always possible to have a Content-Length (we may not know the length of the stream, and it could be generated dynamically).

Let me know if you have a better way to handle this. I think adding a test case with a stream failing in a deferred way would be a good way to experiment with possible solutions. Best regards,
As an afterthought to my last comment: in case of an error in the source (here the StreamableFile), pipeline will destroy the destination stream (here the Express Response). So in this case the client socket should be destroyed (because the Express response is an http.OutgoingMessage), meaning in all cases (error at start or mid-stream) the socket will be closed. (Note this may not be optimal from the point of view of keep-alive in the start-error case, because there we should be able to emit the error in headers and end the response properly, without killing the underlying socket. A rough idea to implement that behavior would be to not use pipeline, but pipe plus some manual event handling.) So in the error handler, maybe it is enough to check headersSent before sending the 400, to differentiate a start error from a mid-stream error.
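The `headersSent` branching described above could look roughly like this (a hypothetical handler, not the PR's actual code — `handleStreamError` is a name invented for illustration):

```javascript
// Sketch: choose how to surface a stream error depending on whether
// response headers have already gone out to the client.
function handleStreamError(err, res) {
  if (!res.headersSent) {
    // Error at the start of the stream: we can still send a proper status.
    res.statusCode = 400;
    res.end(err.message);
  } else {
    // Mid-stream error: it is too late for a status code, so destroy the
    // socket; the client gets an IO error instead of silently truncated data.
    res.destroy();
  }
}
```

The trade-off noted in the comment still applies: destroying the socket defeats keep-alive, but it is the only way to signal a mid-stream failure to the client.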
That's actually why in the
Even though the original issue of #9759 is to use
Got an idea of what this would look like?
Taking inspiration from the article: `pipeline` is roughly equivalent to:
So we could do something like this (pseudo-code):
EDIT: I am not sure about the original implementation in the article, and I made some minor modifications based on quick experimentation with local streams, so of course take it as a way to draft the idea and not working code ^^.
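The pseudo-code referenced above did not survive the page extraction, but a minimal sketch of the "pipe plus manual event handling" idea might look like this (`pipeWithCleanup` is a hypothetical name; assumptions: Node streams, and the caller decides how to tear down the destination):

```javascript
// Sketch: roughly what pipeline does, but keeping control over how the
// destination (e.g. an Express response) is torn down on error.
function pipeWithCleanup(source, dest, onError) {
  source.pipe(dest);
  source.once('error', (err) => {
    source.unpipe(dest); // stop feeding the destination
    onError(err);        // caller decides: send a 400, end, or destroy
  });
  dest.once('error', (err) => {
    source.destroy();    // make sure the source cannot leak
    onError(err);
  });
}
```

Unlike `pipeline`, this leaves the destination intact on a source error, which is what would allow the keep-alive-friendly behavior discussed earlier.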
Is there anything left we need to implement here @jmcdo29? 🙌
Let me make the changes @micalevisk suggested and then this should be good to merge. If we need to modify it again later to take other features into account, that's always an option.
LGTM
PR Checklist
Please check if your PR fulfills the following requirements:
PR Type
What kind of change does this PR introduce?
What is the current behavior?
If a request is cancelled during a `StreamableFile` response, the stream is not properly closed, which can lead to memory leaks and possible server failures if enough of them occur.

Issue Number: #9759
What is the new behavior?
`pipeline` ends up destroying the streams used if there is an error in one of the streams. Due to this, there's no chance of a memory leak from errored-out streams. There's also now an error handler added to `StreamableFile` so that stream errors by default return a 400 and can be customized to return an error message however a developer would like. These changes only affect the Express adapter, because Fastify already internally handles streams.
Does this PR introduce a breaking change?
Other information
fix: #9759