fix(api): Less magical async tests #9990
amitlissack merged 7 commits into less_magical_async_tests from
Conversation
Codecov Report
@@            Coverage Diff             @@
##    less_magical_async_tests   #9990   +/- ##
===============================================
  Coverage     74.93%    74.93%
===============================================
  Files          2077      2077
  Lines         54807     54807
  Branches       5527      5527
===============================================
  Hits          41069     41069
  Misses        12642     12642
  Partials       1096      1096

Flags with carried forward coverage won't be shown.
Looks like we got the same test hang: https://github.com/Opentrons/opentrons/runs/6066130935?check_suite_focus=true
Found a few more un-clean module tests. I don't know what the root cause is, but it's good to get rid of all those warnings.
SyntaxColoring
left a comment
I think this fixed the CI failures? CI on this PR is green, except for one thing, which is "skipped." I don't know why it's skipped.
That aside, here are a couple of minor comments, but this looks good to me to merge. Feel free to merge into either my less_magical_async_tests branch or edge. I'm not sure how we usually handle these things.
Thanks again!
    # return v1 if sim_model is not passed
    assert status["model"] == "temp_deck_v1.1"
    assert status["version"] == "dummyVersionTD"
    await subject.cleanup()
Is this redundant with the await temp.cleanup() line in the subject fixture?
It sure is! Thanks.
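For readers following along, here's a minimal sketch of why the in-test cleanup call was redundant. The TempDeck stand-in and fixture shape below are illustrative, not the real Opentrons code: an async-generator fixture runs its post-yield teardown once per test, so an extra cleanup call in the test body runs the same cleanup twice.

```python
import asyncio


class TempDeck:
    """Illustrative stand-in for the real module object."""

    def __init__(self) -> None:
        self.cleanup_calls = 0

    async def cleanup(self) -> None:
        self.cleanup_calls += 1


# With pytest-asyncio this would be decorated with @pytest.fixture;
# the async-generator shape is the same either way.
async def subject():
    temp = TempDeck()
    yield temp
    # Teardown after the yield: the fixture owns cleanup.
    await temp.cleanup()


async def demo():
    fixture = subject()
    temp = await fixture.__anext__()   # setup: the test receives `temp`
    await temp.cleanup()               # redundant cleanup in the test body
    try:
        await fixture.__anext__()      # teardown: fixture cleans up again
    except StopAsyncIteration:
        pass
    return temp.cleanup_calls


print(asyncio.run(demo()))  # → 2: cleanup ran twice
```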
    cancellable_task.cancel()
    other_task.cancel()
- Should we cancel cancellable_task if we've already asserted that it's been canceled by ExecutionManager? It should be harmless either way, so I guess this is just a test style/readability question.
- Cancellation isn't instant, so even after we've called .cancel() on a task, we should still await it, as a general rule. (And we'd expect that await to raise asyncio.CancelledError.) I think we should at least await other_task, because this test function is in full control over it. What to do about cancellable_task is less clear to me. I sort of think awaiting it should be the job of ExecutionManager, so that await execution_manager.cancel() wouldn't return until all the tasks have actually stopped. But I'm out of my depth in this part of the codebase, so take that with a grain of salt.
I'm certainly fine leaving this test as you've written it for this PR, if we're not sure about either of these points. This is definitely better than it was.
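The "cancel then await" rule from the second point can be sketched with plain asyncio (no Opentrons code involved): .cancel() only requests cancellation, the task actually stops the next time the event loop runs it, and awaiting it raises asyncio.CancelledError.

```python
import asyncio


async def run_forever():
    await asyncio.Event().wait()  # blocks until cancelled


async def demo():
    task = asyncio.create_task(run_forever())
    await asyncio.sleep(0)   # let the task start running
    task.cancel()
    assert not task.done()   # cancellation has not taken effect yet
    try:
        await task           # drive the task to actual completion
    except asyncio.CancelledError:
        pass
    assert task.cancelled()


asyncio.run(demo())
```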
You are right on point 1.
The ExecutionManager doesn't await the tasks that it cancels. I'm shooting for no warnings/errors in tests so I'll take the safe path.
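A sketch of that safe path (names are illustrative; the real ExecutionManager API may differ): after requesting cancellation, await every task, e.g. via asyncio.gather with return_exceptions=True, so nothing is garbage-collected while still pending.

```python
import asyncio


async def worker():
    await asyncio.Event().wait()  # runs until cancelled


async def cancel_all_and_wait(tasks):
    # Request cancellation, then await every task so none is left pending
    # at shutdown. A cancelled-but-unawaited task is what produces
    # asyncio's "Task was destroyed but it is pending!" warning.
    for t in tasks:
        t.cancel()
    # return_exceptions=True collects each task's CancelledError
    # instead of re-raising it here.
    await asyncio.gather(*tasks, return_exceptions=True)


async def demo():
    tasks = [asyncio.create_task(worker()) for _ in range(3)]
    await asyncio.sleep(0)  # let the workers start
    await cancel_all_and_wait(tasks)
    return all(t.cancelled() for t in tasks)


print(asyncio.run(demo()))  # → True
```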
Overview
An attempt to fix hanging tests. test_modules generates lots of errors saying that tasks are destroyed while still pending. In the past I could correlate this to hanging tests.
Changelog
Review requests
The hardware controller doesn't clean up its owned modules. Rather than rocking the boat, I went with the ugly solution that only affects tests.
Risk assessment
None