[wip] many exercises: use subtests for table driven tests #1254
Conversation
update minimum go version to 1.7
Thanks for contributing here @martint17r.
Sorry about the delay in responding on this.
I have experimented a bit with a couple of exercises using t.Run to get a feel for what changes.
Using the t.Run approach for table driven tests that don't currently use Logf("PASS: ... ", ...) does provide more information in the verbose case (go test -v). But for tests that already use Logf for passing test cases, the verbose output is a bit noisy with the extra t.Run subtest output, so it seems only one of the two should be used. I also have a slight preference for the test .description (or .name) shown as-is in the Logf verbose output, rather than with spaces changed to underscores by the t.Run subtest labels.
See my review comment about testing the t.Run return value as a means to break out of the test case loop: without t.Run, a Fatalf aborts the testing function, stopping at the first failed case; with t.Run, a Fatalf lets the case loop continue unless the return value is tested and used to break. At present some exercises use Errorf and continue on a failure (about 24), while most (about 104) use Fatalf and stop on the first failure.
I'd like to hear any comments from @bitfield, @ferhatelmas, @hilary, @sebito91, and @kytrinyx. I'm undecided as to whether it is worth it to move exercises to use t.Run. I welcome some discussion.
```go
			t.Fatalf("Build for test case %q returned %s but was expected to return %s.",
				tt.name, actual, tt.expected)
		}
		t.Run(tt.name, func(t *testing.T) {
```
Just something to consider: the bool return value from t.Run could be used to break out of the loop. Previously (without t.Run), a call to t.Fatalf did stop the looping over the test cases.
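A rough sketch of what I mean (the testCases slice, Build, and the field names here are placeholders, not the exact code of this exercise):

```go
for _, tt := range testCases {
	// t.Run reports whether the subtest passed, so a failed case
	// can stop the loop, restoring the old Fatalf behavior.
	ok := t.Run(tt.name, func(t *testing.T) {
		actual := Build(tt.input)
		if actual != tt.expected {
			t.Fatalf("Build for test case %q returned %s but was expected to return %s.",
				tt.name, actual, tt.expected)
		}
	})
	if !ok {
		break // first failure: skip the remaining cases
	}
}
```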
I think there's value in continuing to try to run all tests. I will change that if it's needed to get merged.
No worries about the delay. I agree that the Logf output looks nicer, but it puts the focus on passed tests. I observed that students had problems identifying the failed table driven tests. The blog post introducing subtests lists many reasons why using them is beneficial. In addition, subtests are understood by IDEs such as GoLand and Visual Studio Code, so they can display helpful context. The single most useful feature of subtests for me is that I can rerun exactly one subtest using the -run flag while developing, so that the output is not cluttered with any other test. Another solution would be to ensure that the test name or description is included in every t.Error or t.Fatal call, so students can identify it as such.
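For example (the test and case names are invented for illustration; the testing package shows spaces in subtest names as underscores, and -run matches that form):

```
go test -v -run 'TestScore/lowercase_letter'
```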
I'm a big fan of t.Run. The semantics of […]
I never output any other information during tests (except for debugging), so that failures are very obvious and easy to identify. I strongly encourage people to use subtests and […]
I would go ahead and refactor all the tests if we agree on a path forward; that's just a way for me of giving back some of the benefit exercism provides. I am going to update this PR to include a changed generator as well later this week, so we can discuss that, too.
@bitfield Regarding your semantics of […]
Thanks for the helpful discussion. I'm now convinced that using t.Run is valuable. Approach to consider: […]
PTAL
Do you really want me to add the test name in front of every error? When a test fails, the test name is always printed, so it looks kind of weird.
The layout of the test cases is fine. I don't mind the loss of compactness for gaining a description. The test name is not necessary in the error output, since the underscored subtest title shows enough. The most useful thing is showing the inputs given and the actual and desired output. So it's good. For future reference, prefer this style on the commit subject line: […]. LGTM
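For instance, something like this inside a subtest is enough (identifiers hypothetical):

```go
// The subtest label already names the case, so show only the
// inputs, the actual result, and the expected result.
t.Fatalf("Score(%q) = %d, want %d", tt.word, actual, tt.expected)
```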
Stumbled over this issue by chance. Here's one more reason to use t.Run: […]
Thanks @tehsphinx for the comment. I think that bumps the incentive to move forward with integrating the use of t.Run. @martint17r, do you want to continue adapting the exercise tests to use t.Run? We can phase them in by exercise as time permits. I can help as well.
Yes, I do want to help, but life got in the way ;) There's no reason why we can't work in parallel - I think we agreed on creating a separate PR for each exercise, so there is very little conflict potential.
I am closing all PRs that are over a year old. Please reopen if you are still working on this.
This is a work in progress to use subtests for all table driven tests in all exercises. I use exercism for teaching/coaching, and a lot of the students have problems identifying the failing test case. Subtests make this much easier.
There is a downside though: using t.Run raises the minimum Go version to 1.7. Go 1.7 was released back in August '16, so in my opinion it is reasonable to require a version that old.
If the maintainers agree, all table driven tests in all exercises will be converted.
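Roughly, the conversion would look like this; the Score exercise below is only a sketch of the pattern, not verbatim test code from the track:

```go
package scrabble

import "testing"

var testCases = []struct {
	description string
	word        string
	expected    int
}{
	{"lowercase letter", "a", 1},
	{"whole word", "quirky", 22},
}

// Before: a passing case is only visible via Logf in verbose mode,
// and Fatalf aborts the whole test function at the first failure.
func TestScoreOld(t *testing.T) {
	for _, tc := range testCases {
		if actual := Score(tc.word); actual != tc.expected {
			t.Fatalf("Score(%q) = %d, want %d", tc.word, actual, tc.expected)
		}
		t.Logf("PASS: %s", tc.description)
	}
}

// After: every case is a named subtest, so a failure is labeled with
// its description, the remaining cases still run, and a single case
// can be rerun with go test -run.
func TestScore(t *testing.T) {
	for _, tc := range testCases {
		t.Run(tc.description, func(t *testing.T) {
			if actual := Score(tc.word); actual != tc.expected {
				t.Fatalf("Score(%q) = %d, want %d", tc.word, actual, tc.expected)
			}
		})
	}
}
```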