Migrate data-driven tests to subtests #2098
Comments
If anyone wants to work on this issue, it is OK (and even recommended) to make PRs for individual exercises - working on this issue doesn't necessarily mean fixing all exercises yourself at once. On the contrary, several people are more than welcome to make PRs for individual exercises where this is a problem. We'll use this issue as a tracking issue for all the related PRs.
@andrerfcsantos Did you already check whether this issue might also be related to code that is produced via the generator? I have a hunch this is not just a matter of making individual PRs; it might be a generator issue that needs to be addressed.
@junedev Some exercises suffering from this problem do have generators, but others don't. So I don't think it's a generator-specific problem, although for the exercises that do have a generator, we must adjust it as well. Your suggestion would be to add this capability to the generator first and then work on individual exercises? (Although exercises without a generator currently don't have this dependency.)
Hey, to get an overview I used the following script:
Seems like the following exercises still need subtests (though some might turn out not to):
Might not be needed:
@junedev or @andrerfcsantos
Left my analysis below. It was a quick pass over each exercise, so if there's an exercise where I wrote that we could maybe add subtests, but when tackling it you conclude that's not viable, feel free to say so.

I also noticed some exercises are missing generators. Adding generators is of course optional for this task, but some of the refactoring into subtests could be done in a way that makes writing a future generator easier. Since you are our expert at writing generators, feel free to also add generators and we'll size the PRs accordingly ;) It's totally ok to do some of the refactoring into subtests now and leave the generator for later, especially because I feel the generators for some exercises will require quite some work.

- robot_simulator_test.go might not need it, but robot_simulator_step2_test.go and robot_simulator_step3_test.go seem to, as each iteration of the loop appears to be a test.
- grade_school_test.go might not need it right now, but with a little refactoring maybe. For instance, it seems that
- I think it's safe to skip this one - it only has one test and I don't think it'll ever change, so it's fine as a regular test.
- The tests here were tailor-made for the track and there's only a single test per function, so I think we can also skip this one.
- A similar situation to Grade School - there's some duplicated code between functions which could be grouped into single functions, and maybe we could add a generator. I expect the creation of the generator and the refactoring to be significantly harder and require more thought than Grade School, though. These could also be 2 separate tasks - one to refactor into subtests and another to add the generator.
- There also seems to be some duplicated code between functions in this one, so adding subtests at least for some tests could be a good idea. The tests seem to be a list of instructions; maybe the subtest definition itself can contain a list of strings with the operation to perform at each stage.
- Only has one test currently and the canonical data also only specifies a single test, so it's fine to leave as is. We could add a subtest, but it seems overkill for this case.
- Similar to Simple Linked List - the tests can be refactored into subtests in the same way.
- Looks like some functions are looping over tests, so maybe they are good candidates for subtests.
- Also some duplication in the code that could maybe be refactored into subtests.
- Some duplication in code, the function
- Safe to skip this one; the tests are custom for this exercise and there isn't really code duplication. I also don't see an obvious way to refactor it into subtests.
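The "list of instructions" idea above can be sketched as a table where each case carries its operation strings. Everything here is hypothetical (the `opCase` type, the `run` helper, and the push/pop mini-language stand in for whatever the exercise actually does); in the real test each case would run under `t.Run(tc.name, ...)`:

```go
package main

import "fmt"

// opCase is a hypothetical table entry: a named case whose body is a
// sequence of operation strings, which a future generator could emit
// directly from canonical data.
type opCase struct {
	name string
	ops  []string // "push N" or "pop"
	want []int    // values returned by the pops, in order
}

// run interprets the operation strings against a simple stack and
// returns the popped values.
func run(ops []string) []int {
	var stack, popped []int
	for _, op := range ops {
		var n int
		if _, err := fmt.Sscanf(op, "push %d", &n); err == nil {
			stack = append(stack, n)
			continue
		}
		if op == "pop" && len(stack) > 0 {
			n = stack[len(stack)-1]
			stack = stack[:len(stack)-1]
			popped = append(popped, n)
		}
	}
	return popped
}

func main() {
	cases := []opCase{
		{"push then pop", []string{"push 1", "push 2", "pop"}, []int{2}},
		{"pop everything", []string{"push 3", "pop", "push 4", "pop"}, []int{3, 4}},
	}
	for _, tc := range cases {
		// In a real test file this loop body would be wrapped in
		// t.Run(tc.name, func(t *testing.T) { ... }).
		got := run(tc.ops)
		fmt.Printf("%s: got %v, want %v\n", tc.name, got, tc.want)
	}
}
```

The point of the shape is that adding a case never requires new test code, only new data, which is exactly what makes a generator feasible later.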
Following this Slack thread and this PR discussion
In the track, there are still some data-driven tests like:
Such tests could benefit from rewriting them with subtests:
Pros:
Cons:
`t.Run`
A (possibly incomplete) list of exercises without subtests:
... (around 43 exercises in total - according to a search on `t.Logf("PASS`)