Fetching exercises overwrites existing submission files #1039
Comments
Hm. That's not the intended behavior. Thanks!
What's the intention with stub files, by the way? A quick search through the issues list suggests they're about providing ready-made boilerplate, but there doesn't actually seem to be much boilerplate in most of the languages. The exceptions are Scala, where the boilerplate is mostly the directory structure (which can be created without creating a solution file), and Objective-C, where you might arguably want to provide a ready-made header file. I ask because I think creating source files in the right format for the provided tests is a useful learning experience in itself: it gets you used to the conventions of the language. Having done a few of the Ruby exercises as training sessions recently, running the tests and working out from the output what file needs to be created has proved a useful and consistent first step in attacking a problem. Akin to clearing your throat :-)
The initial idea for stub files (or supporting files) was to provide datatypes in Haskell or similar languages. Personally, I find the process of working from scratch quite soothing, and when teaching I find it a very helpful routine. Since I haven't done any of the Elixir exercises I can't speak to their purpose there.
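For illustration, a datatype-providing stub might look something like the following. This is a hypothetical sketch in OCaml rather than Haskell (OCaml comes up later in the thread); the type and function names are made up and not taken from any actual track.

```ocaml
(* Hypothetical stub: the exercise ships the datatype so every solution
   shares the same shape, while the functions are left for the student. *)
type 'a tree =
  | Leaf
  | Node of 'a tree * 'a * 'a tree

(* The student replaces the placeholders with real implementations. *)
let insert (_x : 'a) (_tree : 'a tree) : 'a tree = failwith "TODO"
let member (_x : 'a) (_tree : 'a tree) : bool = failwith "TODO"
```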
I'm using stub files for two purposes:
That said, it depends on the language. While learning OCaml I've been working (very slowly; I don't know if I'll ever finish it) on an OCaml track. With those exercises I define the interface in a .mli file and don't provide a stub .ml file. How about this as a solution both for this issue and for letting the user choose whether to look at the desired interface or figure it out themselves: the stubs go into a file
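The .mli-only approach described above might look roughly like this, using a made-up exercise as an example; the actual signatures on any real track may differ.

```ocaml
(* Hypothetical hamming.mli: the interface file states what the tests
   expect, but no hamming.ml stub ships with the exercise, so the student
   still creates the implementation file from scratch. *)
val distance : string -> string -> int option
```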
An OCaml track - exciting! If you want any help with that, give me a shout (I don't know OCaml but it's one I want to learn) :-) How easy do you think it would be to make the tests drive out the required data structures unambiguously? If the student doesn't have to guess but can be informed by test failures, that's a useful extra step in learning, I think. Or is it possible to prevent the tests from relying on certain data structures at all? Even if it leads to slightly more verbose, less pretty test code, if we could achieve a higher level of decoupling of the tests from the implementation code (and test against a much smaller interface), we'd open the exercise to a wider variety of solutions. Having said that, I liked the way the Elixir triangle test made you return
Help would certainly be welcome. The biggest issue right now is simply coverage of the standard batch of tests, so I'd really appreciate any contributed translations. Just let me know if you plan to work on a test so I don't do the same work. I've opened a PR at #1048 to keep track of things.

Driving out the required data structures is possible, but as we develop more complicated tests, forcing the user to figure out the specs becomes more of a distraction. The ability to help the user by supplying data structures and functions becomes more useful the more complicated a test gets. For the standard batch it isn't really all that useful; I've found myself mainly using the stub files to let the user know the names of the functions and which arguments are needed. A convenience more than anything.

I agree that decoupling is good, but whether it makes sense depends on the test. For example, the zipper test in Elixir and Haskell is all about making a zipper that works with a predefined data structure; the format of that structure is an important part of the test, so I really do want the user to work with that particular format. In other cases, such as grade-school, the tests rely specifically on a HashDict while the relevant interface is Dict. The tests thus force the user to adopt a specific data structure. I suspect it's because you can't compare dictionaries for equality unless they're of the same type (and even then it's potentially wrong,
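On the decoupling point, here is a rough sketch of the idea in OCaml terms (the thread's actual examples are Elixir and Haskell, so this is only an illustration, and all names are made up): the tests compare a canonical view of the data, such as a sorted association list exposed by a small interface, instead of comparing a concrete dictionary type for equality.

```ocaml
(* Hypothetical grade-school-style module: which dictionary it uses
   internally is its own business. *)
module School = struct
  module IntMap = Map.Make (Int)

  type t = string list IntMap.t

  let empty = IntMap.empty

  let add name grade school =
    let current = Option.value (IntMap.find_opt grade school) ~default:[] in
    IntMap.add grade (List.sort compare (name :: current)) school

  (* Small interface the tests rely on: a plain, sorted list of pairs. *)
  let roster school = IntMap.bindings school
end

(* The test only compares plain lists, so the internal dictionary type
   could change without breaking it. *)
let () =
  let school = School.(empty |> add "Aimee" 2 |> add "Blair" 2) in
  assert (School.roster school = [ (2, [ "Aimee"; "Blair" ]) ])
```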
Great! I'll let you know when I pick up an OCaml exercise, then. Regarding your two examples: yes, it sounds like talking about your stubs has uncovered an existing coupling in some of the exercises that we might want to get rid of. As for the exercises where the data structures are an integral part of the problem, that sounds to me more like a case where the tests should drive you to the correct implementation. I haven't done the zipper exercise yet, so I can't comment on that one, though.
I intend to fix the CLI so that it doesn't overwrite existing files. I've been doing a lot of traveling lately (and am off on another trip today), but it's at the top of my list.
Fetching new exercises will overwrite submission files if a stub submission is defined for the exercise (currently all of the Elixir exercises have one) and the user has a partial submission file.
So if the user is working on exercises in two languages at the same time, fetching new exercises after finishing exercise 1 in language A might overwrite the partial submission the user is still working on for exercise 2 in language B.
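The fix discussed above presumably amounts to a guard along these lines: skip writing a fetched stub whenever a file already exists at the target path. The exercism CLI is not written in OCaml, so this is only a sketch of the check, not the CLI's actual code, and the names are invented.

```ocaml
(* Sketch of the "don't clobber" guard: only create the stub when the
   user has no file at that path yet. *)
let write_stub ~path ~contents =
  if Sys.file_exists path then
    Printf.printf "skipping %s: file already exists\n" path
  else begin
    let oc = open_out path in
    output_string oc contents;
    close_out oc
  end
```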