This repository has been archived by the owner on May 13, 2019. It is now read-only.

Timers and tests #27

Closed
Trott opened this issue Mar 18, 2016 · 10 comments

Comments

@Trott
Member

Trott commented Mar 18, 2016

What should be the general things we want to happen with timers in tests?

  • Recommend against using timers if at all possible. (Obviously, if the test is specifically testing timers, then using them is fine.)
  • Do we distinguish between setTimeout(), setImmediate(), and process.nextTick()? I think only setTimeout() is generally problematic. The other two are usually fine.
  • Tests that rely on timers should go in sequential and not in parallel.

What else?

@Trott
Member Author

Trott commented Mar 18, 2016

Idea for allowing timers from CLI but not in CI: Maybe a common.testTimeOut() in common that can detect if it is being run on CI or not. If on the CLI, it sets a timer that calls common.fail() with a message supplied by the test. If run from CI, it is a no-op.
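A minimal sketch of what such a helper could look like. Everything here is an assumption for illustration: the name, the signature, and detecting CI via the `CI` environment variable (the real check would live in test/common.js).

```javascript
'use strict';

// Hypothetical sketch of common.testTimeOut(): a watchdog timer that is a
// no-op on CI (where the harness already enforces a timeout) but fails
// loudly with a test-supplied message when run from the CLI.
// Detecting CI via the `CI` environment variable is an assumption.
function testTimeOut(failFn, message, ms) {
  if (process.env.CI) {
    return null; // no-op on CI; test.py's own timeout applies there
  }
  const timer = setTimeout(() => failFn(message), ms);
  timer.unref(); // don't keep the process alive just for the watchdog
  return timer;
}
```

A test would clear the returned timer once the awaited event fires, so the watchdog only goes off when the test genuinely hangs.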

@orangemocha
Contributor

> If run from CI, it is a no-op.

What about specifying test timeouts based on a "reference system" and applying different multipliers on each platform? CI need not be an exception.

@Trott
Member Author

Trott commented Mar 22, 2016

> What about specifying test timeouts based on a "reference system" and applying different multipliers on each platform? CI need not be an exception.

The problem is that these things are bugs/anti-patterns, and not features. Getting them out of CI is what we're aiming for. It's not a grudging compromise to give up a desirable feature.

  • There are already timeouts on CI. Code-based timeouts are unnecessary on CI.
  • Running the code-based timeouts in CI as we are currently doing is easily the top cause of flakiness. Making it platform-dependent (via common.platformTimeout()) sometimes helps, but it does not always eliminate the flakiness.
  • The arbitrariness of the timeout values is a code smell. (If the test starts failing at 500ms, is that the sign of a problem or does it just mean that we need to bump it up to 1000ms?)
  • As far as I am aware, the one compelling reason to include the code-based timeouts in CI is to improve error messages. If the error messages are still there in the code but not used by CI, I think that's a good compromise.
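For context, the platform-dependent approach mentioned above works roughly like this. This is a simplified sketch, not the actual implementation in Node's test/common, and the multiplier values are illustrative only.

```javascript
'use strict';

// Simplified sketch of a platformTimeout()-style helper: scale a base
// timeout value by a per-platform multiplier for slower platforms.
// The multipliers below are illustrative, not Node's actual values.
function platformTimeout(ms) {
  const multipliers = {
    aix: 2,
    win32: 2,
  };
  return ms * (multipliers[process.platform] || 1);
}
```

The scaling helps slower platforms, but as noted above it only stretches the arbitrary value; it does not remove the arbitrariness.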

@joaocgreis
Member

@orangemocha
Contributor

I should have stated that I am in full agreement with this statement:

> Recommend against using timers if at all possible. (Obviously, if the test is specifically testing timers, then using them is fine.)

So I was thinking of the 60 second timeout that is now used in CI to determine that a test is hanging. If I understood correctly, devs would like that timeout to be shorter when running tests on their machine. What is the right value of that timeout? Maybe 10 seconds? But for tests that take a bit longer to complete, what's the way to declare an exception to the default?

@Trott
Member Author

Trott commented Mar 23, 2016

I think it's more like this: There's a bunch of tests where there is code like this:

setTimeout(() => { common.fail('This test took too long! Event foo must not be firing on object bar!'); }, X);

...where X is some arbitrary number. Might be 5000, might be 500.

Sometimes, we go to remove these things, and it sails through. Sometimes we go to remove them and someone states the opinion that if you run the test at the command line (without the test.py wrapper), you should get a decent error message when it fails rather than having the test hang and hang until you end it.

In cases like that, it would be nice to have some pre-baked function loaded from common.js that sets the arbitrary timeout if and only if you are not on CI (or maybe if and only if you are not running via test.py, I'd be fine with that too). It might look like this:

common.testTimeoutForCLIOnly(someFunction, someArbitraryNumberOfMilliseconds);

@santigimeno
Member

> What should be the general things we want to happen with timers in tests?
>
>   • Recommend against using timers if at all possible. (Obviously, if the test is specifically testing timers, then using them is fine.)

I totally agree.

> The problem is that these things are bugs/anti-patterns, and not features. Getting them out of CI is what we're aiming for. It's not a grudging compromise to give up a desirable feature.
>
>   • There are already timeouts on CI. Code-based timeouts are unnecessary on CI.
>   • Running the code-based timeouts in CI as we are currently doing is easily the top cause of flakiness. Making it platform-dependent (via common.platformTimeout()) sometimes helps, but it does not always eliminate the flakiness.
>   • The arbitrariness of the timeout values is a code smell. (If the test starts failing at 500ms, is that the sign of a problem or does it just mean that we need to bump it up to 1000ms?)

I wouldn't have said it better.

> Sometimes we go to remove them and someone states the opinion that if you run the test at the command line (without the test.py wrapper), you should get a decent error message when it fails rather than having the test hang and hang until you end it.

Has this response been that common? I don't remember anyone complaining about it (though perhaps I have missed those comments). It seems to me that lately the PRs that remove the timers have been accepted with little (or no) resistance.

I think that, for the reasons @Trott enumerated, we should recommend against the use of timers (and probably remove them too), and not implement any special mechanism to force a special timeout behavior. If somebody wants to run the tests with a specific timeout, there's test.py. If somebody is running a specific test from the command line and the test takes a long time, it should be clear what the problem is (even more so if it's a test in parallel).

Just my 2 cents.

@Trott
Member Author

Trott commented Mar 27, 2016

> Has this response been that common?

Honestly, no. I've perhaps provided too much weight to a single instance.

@Fishrock123

> I think only setTimeout() is generally problematic. The other two are usually fine.

Correct, the others are guaranteed to fire in pretty much any case.

@Trott
Member Author

Trott commented Dec 24, 2017

Closing as this WG is defunct and this issue has been dormant for a year. Feel free to comment (or, if GitHub allows, re-open) if you think it should stay open.

@Trott Trott closed this as completed Dec 24, 2017