Redundant & different test cases for different algorithms solving the same problem #462
Comments
Totally agree.
Ah - I was about to open this issue/suggestion myself :-)
Note that in #485 I added quickcheck as a dependency for testing properties in probabilistic data structures. For sorting algorithms, property-based testing (PBT) could greatly reduce the number of tests to write, since we know exactly what we want to test: the result is sorted. So, either we keep the tests for best-case, worst-case, and edge cases where they help readability, or we replace all of them with something like:

// requires the quickcheck_macros crate for the attribute form
use quickcheck_macros::quickcheck;

#[quickcheck]
fn must_be_the_same_as_rust_sort(original: Vec<i32>) {
    let mut expected = original.clone();
    expected.sort();
    // or pass &mut original instead, depending on how each sort is implemented
    let hopefully_sorted = my_sort_implementation(original);
    assert_eq!(expected, hopefully_sorted);
}

In addition to reducing the amount of tests to write, PBT often catches bugs humans might have forgotten to write tests for (empty vectors and other edge cases), and it also brings shrinking into the party, which may be useful. Let me know if this is something you're interested in.
I think it's worth having both: a fixed set of test cases as a basis that is always checked, and some randomized tests as an exploratory tool that may at some point find new failure scenarios, which can then be added to the fixed test cases.
This alone does not suffice: you also need to verify that the array after sorting is a permutation of the array before sorting. It is simplest to check against the built-in sort, or against other (usually less efficient) sorting algorithms that can easily be seen to be correct (e.g. check a complex quicksort against a simple selection sort). I definitely like randomizing test cases, though I've only ever done it manually; I don't like relying on the magic of a powerful tool like QuickCheck too much.
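For illustration, here is a minimal sketch of a property that checks both conditions directly (sortedness plus "is a permutation of the input", via element counts) without comparing against the built-in sort; the helper names are made up for this example:

```rust
use std::collections::HashMap;

// Sketch: a sort result is correct iff it is sorted AND a permutation of the input.
fn is_sorted(v: &[i32]) -> bool {
    v.windows(2).all(|w| w[0] <= w[1])
}

fn element_counts(v: &[i32]) -> HashMap<i32, usize> {
    let mut counts = HashMap::new();
    for &x in v {
        *counts.entry(x).or_insert(0) += 1;
    }
    counts
}

fn sort_is_correct(original: &[i32], sorted: &[i32]) -> bool {
    is_sorted(sorted) && element_counts(original) == element_counts(sorted)
}
```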
In my opinion, it's either relying "on the magic of a powerful tool" that includes shrinking and seeding for both understanding and reproducibility, or nothing. I don't really see the point of randomizing manually, unless we actually implement shrinking and seeding ourselves.
Shrinking is like delta debugging, right? We could implement it ourselves and see how convenient it is to work with in the tests.
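If we did roll it ourselves, a hand-written shrinker in the delta-debugging spirit might look roughly like the sketch below; this is purely illustrative, and `fails` stands in for whatever property the failing input violated:

```rust
// Sketch of delta-debugging-style shrinking for a failing Vec<i32> input:
// repeatedly try removing chunks of elements, keep any smaller input that still
// reproduces the failure, and halve the chunk size when nothing more can be removed.
fn shrink_failing_input(mut input: Vec<i32>, fails: impl Fn(&[i32]) -> bool) -> Vec<i32> {
    let mut chunk = (input.len() / 2).max(1);
    while chunk > 0 {
        let mut removed_something = false;
        let mut start = 0;
        while start < input.len() {
            let mut candidate = input.clone();
            let end = (start + chunk).min(candidate.len());
            candidate.drain(start..end);
            if fails(&candidate) {
                input = candidate; // keep the smaller failing input, retry at the same position
                removed_something = true;
            } else {
                start += chunk; // this chunk is needed to reproduce the failure, move past it
            }
        }
        if !removed_something {
            chunk /= 2; // nothing removable at this granularity, try a finer one
        }
    }
    input
}
```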
I added some utility functions to help generate test cases and evaluate the correctness and performance of the sorting functions in #540. My idea is to apply every sorting algorithm to the same test cases and then, possibly, generate reports. I can send a PR for you to review.
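Just to illustrate the direction (the real helpers are in #540; the names here are only placeholders): such a utility could take any sorting function, run it over a shared set of inputs, check the result against the standard library sort, and record timing for a report.

```rust
use std::time::{Duration, Instant};

// Placeholder sketch: apply one sorting function to shared test cases, assert
// correctness against the standard library sort, and accumulate the elapsed time.
fn evaluate_sort(name: &str, sort: impl Fn(&mut Vec<i32>), cases: &[Vec<i32>]) -> Duration {
    let mut total = Duration::ZERO;
    for case in cases {
        let mut expected = case.clone();
        expected.sort();

        let mut actual = case.clone();
        let start = Instant::now();
        sort(&mut actual);
        total += start.elapsed();

        assert_eq!(expected, actual, "{} failed on input {:?}", name, case);
    }
    total
}
```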
Consider e.g. the sorting algorithms: They all solve the same problem, yet each has its own test cases. This is only warranted if the tests are to test implementation details.
However, the better approach here is to write one comprehensive test suite - perhaps even using fuzzing - and test all implementations against it. This will both increase coverage and reduce test duplication.
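As a rough sketch of what a single shared suite could look like (the case list and names are illustrative, not a proposed final design):

```rust
// Sketch: one comprehensive suite applied to every sorting implementation.
fn run_shared_suite(sort: impl Fn(&mut Vec<i32>)) {
    let cases: Vec<Vec<i32>> = vec![
        vec![],                      // empty input
        vec![1],                     // single element
        vec![1, 2, 3, 4],            // already sorted (best case for some algorithms)
        vec![4, 3, 2, 1],            // reverse sorted (worst case for some algorithms)
        vec![3, 1, 3, 2, 1],         // duplicates
        vec![i32::MIN, 0, i32::MAX], // extreme values
    ];
    for case in cases {
        let mut expected = case.clone();
        expected.sort();
        let mut actual = case.clone();
        sort(&mut actual);
        assert_eq!(expected, actual);
    }
}
```

Each algorithm's test module would then boil down to a single call such as `run_shared_suite(|v| my_sort(v))` (with `my_sort` standing in for the concrete implementation), and randomized or fuzzed inputs could be fed through the same function.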