Adding functional tests for container ordering #1889
Conversation
Overall LGTM. Tests are failing; make sure they pass, or point out failed tests if they are flaky.
--- FAIL: TestContainerOrderingTimedout (35.58s)
--- FAIL: TestContainerOrdering (0.35s)
Some tests use "0" CPU, and this value makes it possible for us to test them.
Rebased and fixed a typo in the JSON file.
once all the tests pass!
Checked the failing ARM test: it's working when I validate it locally. We will increase the timeout in a separate PR.
Linux integ test failed because of the flaky test TestDockerStateToContainerState. Reference: #1899
Windows integ tests are failing because of the flaky test TestStatsEngineWithNewContainersWithPolling. Reference: #1912
@@ -0,0 +1,77 @@
{
So, I understand that this task def uses many of the fields necessary for ordering containers. However, I'm unable to understand how running this test ensures that the feature is working correctly. That is, how are we validating that "success" only runs when "success-dependency" has reached the "SUCCESS" state? For all we know, the ECS agent could just be ignoring those fields. Shouldn't that functionality testing be part of a functional test?
If you're testing that somewhere and I'm missing it or misreading this test, please point me to the place where such validation is happening.
We are validating that the feature is working fine as part of integ tests in ordering-integ-tests.go. Here we are just validating the happy case.
Right, I get that. But functional tests by definition should test the functionality of the feature that they were written for. I think the task definition should do more than just set a field and ensure that the task runs. For example, you could have a "producer" container, which creates a file in a shared task volume. The "consumer" container, which is ordered to start after the "producer", would check for the existence of the file; that would be a better test than doing nothing in the task def. If we're not actually testing the functionality of the feature, then there's very little value in adding these tests.
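A producer/consumer task definition along those lines might look like the following sketch. Container names, images, commands, and the file path are illustrative, not taken from this PR; the dependsOn/condition fields follow the ECS task definition schema:

```json
{
  "family": "container-ordering-producer-consumer",
  "volumes": [{ "name": "shared" }],
  "containerDefinitions": [
    {
      "name": "producer",
      "image": "busybox",
      "essential": false,
      "command": ["sh", "-c", "touch /data/ready"],
      "mountPoints": [{ "sourceVolume": "shared", "containerPath": "/data" }]
    },
    {
      "name": "consumer",
      "image": "busybox",
      "essential": true,
      "command": ["sh", "-c", "test -f /data/ready"],
      "mountPoints": [{ "sourceVolume": "shared", "containerPath": "/data" }],
      "dependsOn": [{ "containerName": "producer", "condition": "SUCCESS" }]
    }
  ]
}
```

With this shape, the consumer exits non-zero if it starts before the producer has finished, so a plain "the task ran to completion" check would actually exercise the ordering behavior.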
We used similar hard-coded overrides while testing to ensure that tests wouldn't get stuck in an infinite startup cycle.
Explicitly checking order is something we have been doing in our integration tests.
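An explicit order check of this kind ultimately reduces to comparing the containers' StartedAt timestamps as reported by docker inspect (.State.StartedAt). A minimal, self-contained sketch of that comparison (the helper name and timestamps are illustrative, not from the agent's test code):

```go
package main

import (
	"fmt"
	"time"
)

// startedBefore reports whether container a started strictly before
// container b, given their RFC3339Nano StartedAt timestamps as
// reported by `docker inspect` (.State.StartedAt).
func startedBefore(aStartedAt, bStartedAt string) (bool, error) {
	a, err := time.Parse(time.RFC3339Nano, aStartedAt)
	if err != nil {
		return false, fmt.Errorf("parsing %q: %w", aStartedAt, err)
	}
	b, err := time.Parse(time.RFC3339Nano, bStartedAt)
	if err != nil {
		return false, fmt.Errorf("parsing %q: %w", bStartedAt, err)
	}
	return a.Before(b), nil
}

func main() {
	// Illustrative timestamps: the dependency started five seconds earlier.
	ok, err := startedBefore(
		"2019-02-01T10:00:00.000000000Z",
		"2019-02-01T10:00:05.000000000Z",
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(ok) // prints true
}
```

An integration test would feed this helper the inspected timestamps of the dependency and dependent containers and fail if the order is violated.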
I get what you are saying. We'll make an action to add additional checks in our functional workflow before the next release.
We will be validating this behavior manually over the next few days, too.
We used similar hard-coded overrides while testing to ensure that tests wouldn't get stuck in an infinite startup cycle.
Explicitly checking order is something we have been doing in our integration tests.
I'm glad that we have integration test coverage for this. But I'm questioning functional test coverage here. Given that these functional tests are considered black-box tests used for testing the acceptance criteria, we can surely strive for a higher bar by having more meaningful task defs? For example, there's a reason this "awslogs" test verifies that a log entry is available at the CloudWatch Logs endpoint:
func TestAWSLogsDriver(t *testing.T) {
I get what you are saying. We'll make an action to add additional checks in our functional workflow before the next release.
Why not this release? Why is it ok for the bar to be lower in this release?
We will be validating this behavior manually over the next few days, too.
Again, glad that you're manually testing this. But, I think it'd be equally good/better use of resources/time to automate the same.
Blocking this until we have a resolution for #1889 (review)
The following checks failed, and this was still merged:
Considering the conversation in #1889 (review) and the insistence that this is covered in integration tests, the failure of the integration tests is pretty concerning.
Summary
Implementation details
Testing
New tests cover the changes:
Description for the changelog
Licensing
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.