Duplex stream pipeline regression in 13 #32955
Comments
I think this is related to #31940 as well.

Equivalent test case using TCP:

const net = require('net')
const pipeline = require('stream').pipeline

net.createServer(function (socket) {
  // echo server
  pipeline(socket, socket, () => {})
  // 13 force destroys the socket before it has a chance to emit 'finish'
  socket.on('finish', function () {
    console.log('finished only printed in node 12')
  })
}).listen(10000, function () {
  const socket = net.connect(10000)
  socket.end()
})
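A minimal variation of the same repro (the port number and log messages here are illustrative, not from the original report): listening for both 'finish' and 'close' on the server-side socket makes the ordering visible, since an early destroy shows up as 'close' without a preceding 'finish'.

```js
const net = require('net')
const { pipeline } = require('stream')

net.createServer(function (socket) {
  // echo server, same shape as the repro above
  pipeline(socket, socket, () => {})
  // Log both events so the ordering is visible: on an affected version the
  // socket is destroyed ('close') without ever emitting 'finish'.
  socket.on('finish', () => console.log('socket finish'))
  socket.on('close', () => console.log('socket close'))
}).listen(10001, function () {
  const client = net.connect(10001)
  client.end()
})
```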
cc @ronag

Seems to work on master. Checking v13-staging.

This is a problem on v13 only as far as I can tell.

Looking into ways to get around this.
This happens on master as well, for all streams that don't have autoDestroy enabled (i.e. any readable-stream stream or any stream that opts out):

const stream = require('stream')

// duplex stream similar to a tcp stream etc
const dup = new stream.Duplex({
  autoDestroy: false,
  write (data, enc, cb) {
    cb()
  },
  read () {
    this.push(null)
  },
  destroy () {
    console.log('ws: am getting destroyed')
  },
  final (cb) {
    console.log('ws: flushing writable...')
    setTimeout(function () {
      console.log('ws: done flushing writable...')
      cb()
    }, 1000)
  }
})

// just some sink
const sink = new stream.Writable({
  write (data, enc, cb) {
    cb()
  }
})

// pipe readable side
stream.pipeline(dup, sink, function () { })

dup.write('test')
dup.end()

The above prints the following:
Yes, I see the problem. Will try to prepare a PR by the end of the day. Thanks a lot @mafintosh!
pipeline was too aggressive with destroying Duplex streams which were the first argument into pipeline. Just because a stream is `!writable` does not mean that it is safe to destroy, unless it has also emitted 'finish'. Fixes: nodejs#32955
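A minimal sketch of that guard, not the actual pipeline internals (the helper name, timings, and log labels are made up for illustration): a stream that can no longer be written to is only destroyed once it has also emitted 'finish', i.e. once its _final flush has completed.

```js
const { Writable } = require('stream')

// Hypothetical helper: defer destroy() until 'finish' has been emitted,
// so a pending _final flush is allowed to complete first.
function destroyOnceFinished (stream) {
  if (stream.writableFinished || stream.destroyed) {
    stream.destroy()
    return
  }
  stream.once('finish', () => stream.destroy())
}

const ws = new Writable({
  write (chunk, enc, cb) { cb() },
  final (cb) { setTimeout(cb, 100) } // simulate a slow flush
})

ws.on('finish', () => console.log('finish'))
ws.on('close', () => console.log('close')) // emitted once destroy() runs

ws.end('hello')
destroyOnceFinished(ws) // waits for 'finish' instead of destroying right away
```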
This PR logically reverts nodejs#31940 which has caused lots of unnecessary breakage in the ecosystem.

This PR also aligns better with the actual documented behavior: `stream.pipeline()` will call `stream.destroy(err)` on all streams except:

* `Readable` streams which have emitted `'end'` or `'close'`.
* `Writable` streams which have emitted `'finish'` or `'close'`.

The behavior introduced in nodejs#31940 was much more aggressive in terms of destroying streams. This was good for avoiding potential resource leaks, but it breaks some common assumptions in legacy streams. Furthermore, this change makes the code simpler and removes some hacks.

Fixes: nodejs#32954
Fixes: nodejs#32955
This PR logically reverts nodejs#31940 which has caused lots of unnecessary breakage in the ecosystem.

This PR also aligns better with the actual documented behavior: `stream.pipeline()` will call `stream.destroy(err)` on all streams except:

* `Readable` streams which have emitted `'end'` or `'close'`.
* `Writable` streams which have emitted `'finish'` or `'close'`.

The behavior introduced in nodejs#31940 was much more aggressive in terms of destroying streams. This was good for avoiding potential resource leaks, but it breaks some common assumptions in legacy streams. Furthermore, this change makes the code simpler and removes some hacks.

Fixes: nodejs#32954
Fixes: nodejs#32955
PR-URL: nodejs#32968
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: Mathias Buus <[email protected]>
Backport-PR-URL: nodejs#32980
This PR logically reverts #31940 which has caused lots of unnecessary breakage in the ecosystem.

This PR also aligns better with the actual documented behavior: `stream.pipeline()` will call `stream.destroy(err)` on all streams except:

* `Readable` streams which have emitted `'end'` or `'close'`.
* `Writable` streams which have emitted `'finish'` or `'close'`.

The behavior introduced in #31940 was much more aggressive in terms of destroying streams. This was good for avoiding potential resource leaks, but it breaks some common assumptions in legacy streams. Furthermore, this change makes the code simpler and removes some hacks.

Fixes: #32954
Fixes: #32955
PR-URL: #32968
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: Mathias Buus <[email protected]>
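To illustrate the documented rule quoted above, a small sketch assuming default stream options (the stream names and error message are arbitrary): when the source errors, the destination has not yet emitted 'finish', so pipeline is expected to destroy it; a stream that had already emitted 'finish' or 'close' would be left alone.

```js
const { pipeline, Readable, Writable } = require('stream')

// Source that pushes one chunk and then errors.
const src = new Readable({
  read () {
    this.push('hello')
    this.destroy(new Error('boom'))
  }
})

// Destination that never gets a chance to finish.
const dst = new Writable({
  write (chunk, enc, cb) { cb() }
})

dst.on('finish', () => console.log('dst finish')) // not expected to fire
dst.on('close', () => console.log('dst close'))   // fires because pipeline destroys it

pipeline(src, dst, (err) => {
  console.log('pipeline callback:', err && err.message)
})
```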
What steps will reproduce the bug?
pipeline in 13 seems to destroy duplex streams before they get a chance to finish their writable flush (ws.end() -> ws._final) lifecycle.
I managed to boil it down to a pretty straightforward test case below
Running this produces:
Notice that the `dup` stream gets destroyed before it has a chance to finish its writable flush in the _final lifecycle, due to pipeline auto-destroying it in 13.
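For reference, a standalone sketch of the unhindered lifecycle mentioned above (the timing and log strings are arbitrary): end() triggers _final, and only after _final's callback does 'finish' fire, which is exactly the window in which the early destroy in 13 cuts the stream off.

```js
const { Writable } = require('stream')

const ws = new Writable({
  write (chunk, enc, cb) { cb() },
  final (cb) {
    console.log('_final: flushing...')
    setTimeout(() => {
      console.log('_final: flush done')
      cb() // 'finish' is only emitted after this callback
    }, 100)
  }
})

ws.on('finish', () => console.log("'finish' emitted"))

ws.write('test')
ws.end() // end() -> _final -> 'finish', provided nothing destroys ws first
```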