test: remove test-http-exit-delay #4786
Closed
Conversation
test-http-exit-delay has a flaky history. Examination of the bug it was written to find suggests that the test should be removed.

* The test is trying to find a delay of up to 1 second, but the delay may also be much smaller than that. So the test will not catch the bug all the time. It is therefore flaky when the bug exists.
* Experience has shown that the test is flaky as well when the bug is absent in the code, because it can sometimes take slower devices (such as the Raspberry Pi devices that we have in continuous integration) more than the allowed one second to run. Increasing the timeout for those devices will make the test pass, but will also mean that the test isn't really testing anything, because the delay it is trying to catch was a delay of up to one second.

I don't think this is an appropriate test to run once on CI. If there is to be a test for the issue in question, it should be a benchmark test that is run a large number of times. We don't really have such tests in CI yet.

I would argue that this test is actively a problem. It does not reliably catch the issue it is supposed to catch, nor can it likely be made to do so. (To do so would likely require converting it to a benchmarking test as previously described. We don't run those in CI, at least not at this time.)

Because this test will have both false positives and false negatives, especially on the slower devices, it contributes to a culture of dismissing failed tests. It does not reliably identify an issue, nor does it reliably pass on a working code base. This test should be removed.

Ref: nodejs#4277

/cc @nodejs/testing
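To make the flakiness argument concrete, here is a minimal sketch of the kind of wall-clock check described above: measure the gap between "all work finished" and process exit, and fail if it exceeds one second. This is not the removed test's actual source; the 1000 ms threshold, the variable names, and the single-request shape are assumptions for illustration only.

```js
'use strict';
// Hedged sketch, NOT the real test-http-exit-delay: it only illustrates the
// style of timing assertion discussed in this PR.
const assert = require('assert');
const http = require('http');

let lastResponseTime;

const server = http.createServer((req, res) => {
  res.end('hello');
});

server.listen(0, () => {
  // agent: false opts out of connection pooling, so the socket closes with
  // the response and nothing keeps the event loop alive after server.close().
  http.get({ port: server.address().port, agent: false }, (res) => {
    res.resume();
    res.on('end', () => {
      lastResponseTime = Date.now();
      // All work is done; a non-buggy process should now exit promptly.
      server.close();
    });
  });
});

process.on('exit', () => {
  const delay = Date.now() - lastResponseTime;
  // The fixed one-second budget is the crux of the flakiness: a slow device
  // (e.g. the Raspberry Pis in CI) can exceed it with no bug present, while a
  // real regression can introduce a delay well under it and go undetected.
  assert.ok(delay < 1000, `process exit was delayed by ${delay} ms`);
});
```

A single pass/fail run against a fixed threshold like this cannot distinguish a slow machine from a real exit-delay regression, which is why the description argues the check belongs in a repeated benchmark rather than a one-shot CI test.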
Trott added the http (Issues or PRs related to the http subsystem.) and test (Issues and PRs related to the tests.) labels on Jan 21, 2016
LGTM
1 similar comment
LGTM
hmm... I hate removing tests :-(
Landed in 58d999e
Trott added a commit that referenced this pull request on Jan 22, 2016
test: remove test-http-exit-delay
PR-URL: #4786
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Brian White <[email protected]>
rvagg pushed a commit that referenced this pull request on Jan 25, 2016
MylesBorins pushed a commit that referenced this pull request on Feb 17, 2016
MylesBorins pushed a commit that referenced this pull request on Feb 18, 2016
MylesBorins pushed a commit that referenced this pull request on Mar 2, 2016
scovetta pushed a commit to scovetta/node that referenced this pull request on Apr 2, 2016