
Fetching exercises overwrites existing submission files #1039

Closed
pminten opened this issue Nov 26, 2013 · 9 comments

Comments

@pminten

pminten commented Nov 26, 2013

Fetching new exercises will overwrite submission files if there is a stub submission defined for the exercise (currently all of the Elixir exercises have one) and the user has a partial submission file.

So if the user is doing exercises in two languages at the same time, fetching new exercises because they're done with exercise 1 in language A might overwrite the submission they're still working on for exercise 2 in language B.
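
To be clear about the behaviour I'd expect: fetch should only write a file when nothing is already on disk at that path. A minimal sketch of that guard, in Elixir purely for illustration (the CLI itself isn't written in Elixir, and the module and function names here are made up):

```elixir
defmodule FetchSketch do
  # Write a fetched file only if nothing already exists at that path,
  # so a partial submission is never clobbered by a stub.
  def write_unless_exists(path, contents) do
    if File.exists?(path) do
      {:skipped, path}
    else
      File.write!(path, contents)
      {:written, path}
    end
  end
end
```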

@kytrinyx
Member

Hm. That's not the intended behavior. Thanks!

@rsslldnphy

What's the intention with stub files, by the way? I've had a quick search through the issues list, which seems to suggest they're about providing ready-made boilerplate - but there doesn't actually seem to be a lot of boilerplate in most of the languages. The exceptions are Scala, where the boilerplate is mostly the directory structure (so it can be created without creating a solution file), and Objective-C, where arguably you might want to provide a ready-made header file.

I ask because I think the creation of source files in the right format for the provided tests is actually a useful learning experience in itself. It gets you used to the conventions of the language, and so on. Having done a few of the Ruby exercises as training sessions recently, running the tests and working out from the output what file needs to be created has proved a useful and consistent first step in attacking a problem. Akin to clearing your throat :-)

@kytrinyx
Member

kytrinyx commented Dec 1, 2013

The initial idea for stub files (or supporting files) was to provide datatypes in Haskell or similar things. I, personally, find the process of working from scratch to be quite soothing, and when teaching I find it to be a very helpful routine.

Since I haven't done any of the elixir exercises I can't speak to their purpose there.

@pminten
Author

pminten commented Dec 1, 2013

I'm using stub files for two purposes:

  1. Making the expected interface explicit. The exercises I build tend to be a bit more complex than most of the original exercises and I don't want the user to have to guess the interface.
  2. Providing some predefined data structures for use by the user's code and the tests. Because the tests include the submission file (in Elixir at least) I can't define them in the test file. I don't want the user to have to guess how the data structure should look if that's not part of the test, and including it in a comment in the test file with a note saying "copy this to your code" feels hackish as well. (A sketch of both purposes follows below.)
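
Here's roughly what such a stub might look like, using a made-up exercise (all of these names are invented for illustration):

```elixir
defmodule Plot do
  # Purpose 2: a predefined data structure, shared by the tests and the
  # submission. A plot is represented as {:plot, width, height, points}.
  def new(width, height), do: {:plot, width, height, []}

  # Purpose 1: the expected interface, made explicit so the user doesn't
  # have to guess the function names and arities.
  def add_point(_plot, _x, _y) do
    raise "not implemented"
  end

  def points(_plot) do
    raise "not implemented"
  end
end
```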

That said it depends on the language. While learning OCaml I've been working (very slowly, don't know if I'll ever finish it) on an OCaml track. With those exercises I define the interface in a .mli file and don't provide a stub .ml file.

How about this as a solution, both for this issue and for letting the user choose whether to look at the desired interface or figure it out for themselves: the stubs go into a file named anagram_stub.exs or something like that. The user can simply copy that to anagram.exs if desired. Because the stub file isn't meant to be edited by the user, overwriting it won't be a problem.

@rsslldnphy

An OCaml track - exciting! If you want any help with that, give me a shout (I don't know OCaml but it's one I want to learn) :-)

How easy do you think it would be to make the tests drive out the required data structures etc. unambiguously? If the student doesn't have to guess but can be informed by test failures, that's a useful extra step in learning, I think.

Or, is it possible to prevent the tests from relying on certain data structures at all? Even if it leads to slightly more verbose, less pretty test code, if we could achieve a higher level of decoupling of the tests from the implementation code (and test against a much smaller interface) then we open the exercise to a wider variety of solutions.

Having said that, I liked the way the Elixir triangle test made you return ok or error tuples. It felt like it was encouraging you to be more Erlang-y.
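
A rough sketch of the shape that test pushes you towards (the function name and error messages here are my guesses, not copied from the track):

```elixir
defmodule Triangle do
  # Invalid inputs come back as {:error, reason} rather than raising.
  def kind(a, b, c) when a <= 0 or b <= 0 or c <= 0 do
    {:error, "all side lengths must be positive"}
  end

  def kind(a, b, c) when a + b <= c or a + c <= b or b + c <= a do
    {:error, "side lengths violate the triangle inequality"}
  end

  # Valid triangles come back as {:ok, kind}.
  def kind(a, a, a), do: {:ok, :equilateral}
  def kind(a, a, _), do: {:ok, :isosceles}
  def kind(a, _, a), do: {:ok, :isosceles}
  def kind(_, a, a), do: {:ok, :isosceles}
  def kind(_, _, _), do: {:ok, :scalene}
end
```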

@pminten
Author

pminten commented Dec 1, 2013

Help would certainly be welcome. The biggest issue right now is simply coverage of the standard batch of tests, so I'd really appreciate any contributed translations. Just let me know if you plan to work on a test so I don't do the same work. I've opened a PR at #1048 to keep track of things.

Driving out the required data structures is possible, but as we develop more complicated tests, forcing the user to figure out the specs becomes more of a distraction. The ability to help the user by supplying data structures and functions becomes more useful the more complicated a test becomes. For the standard batch it isn't really all that useful; I've found myself mainly using the stub files to let the user know what the functions are called and which arguments they take. A convenience more than anything.

I agree that decoupling is good. Whether it makes sense depends on the test, though. For example, the zipper test in Elixir and Haskell is all about making a zipper that works with a predefined data structure; the format of that data structure is an important part of the test, so there I do really want the user to work with that particular format.
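
For context, the predefined structure is along these lines (the representation here is invented for illustration; the real exercise defines its own):

```elixir
defmodule BinTree do
  # A tree is either nil (empty) or {:node, value, left, right}.
  # The zipper the user writes must navigate this exact shape.
  def leaf(value), do: {:node, value, nil, nil}
  def node(value, left, right), do: {:node, value, left, right}

  def value({:node, v, _, _}), do: v
  def left({:node, _, l, _}), do: l
  def right({:node, _, _, r}), do: r
end
```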

In other cases, such as grade-school, the tests rely specifically on a HashDict while the relevant interface is Dict. The tests thus force the user to adopt a specific data structure. I suspect it's because you can't compare dictionaries for equality unless they're of the same type (and even then it's potentially wrong: == uses structural equality, with all the problems that entails). So for these tests more decoupling would improve things. This is an issue that existed before I created the stubs, though.
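
One way to decouple there would be to normalize both sides of the assertion to a sorted list of pairs instead of comparing dictionaries with ==. A sketch (School.db/1 is a made-up accessor, not the real exercise's interface):

```elixir
defmodule DictHelper do
  # Turn any Dict implementation into a sorted list of {key, value} pairs,
  # so the assertion no longer cares whether the student used HashDict.
  def normalize(dict) do
    dict |> Dict.to_list |> Enum.sort
  end
end

# Hypothetical use in a test:
#   assert DictHelper.normalize(School.db(school)) ==
#            Enum.sort([{1, ["Anna"]}, {2, ["Bert"]}])
```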

@rsslldnphy

Great! Will let you know when I pick up an OCaml exercise then.

WRT your two examples: yes, it sounds like talking about your stubs has uncovered a pre-existing coupling in some of the exercises that we might want to get rid of. As for the exercises where the data structures are an integral part of the problem, that sounds to me more like a case where the tests should drive you to the correct implementation. I haven't done the zipper exercise yet, so I can't comment on that one, though.

@kytrinyx
Member

kytrinyx commented Dec 1, 2013

I intend to fix the CLI so that it doesn't overwrite existing files. I've been doing a lot of traveling lately (and am off on another trip today), but it's at the top of my list.

kytrinyx added a commit to exercism/cli that referenced this issue Dec 1, 2013
@kytrinyx kytrinyx closed this as completed Dec 1, 2013
simonjefford pushed a commit to simonjefford/go-exercism that referenced this issue May 15, 2014
lcowell pushed a commit to lcowell/cli that referenced this issue Jan 25, 2015