
Integrate coverage testing somehow #85

Open
abingham opened this issue Apr 18, 2015 · 7 comments


@abingham
Contributor

It would be nice to be able to constrain which tests get run based on coverage analysis. That is, when a change is made, determine what changed and apply mutations only to that section. Moreover, use coverage information to run only the tests that exercise the mutated parts.
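
A minimal sketch of that selection step, assuming we already have a line-to-tests mapping from a coverage run (all names here are hypothetical, not Cosmic Ray's actual API):

```python
# Hypothetical sketch: given the lines touched by a diff and a
# line -> tests mapping from coverage, pick the mutation targets
# and the tests worth running. Not Cosmic Ray's actual API.

def plan_run(changed_lines, line_to_tests):
    """changed_lines: {filename: set of linenos} from a diff.
    line_to_tests: {(filename, lineno): set of test ids} from coverage.
    Returns (lines to mutate, tests to run).
    """
    mutation_targets = {}
    tests_to_run = set()
    for filename, linenos in changed_lines.items():
        # Only mutate changed lines that some test actually executes.
        covered = {n for n in linenos if (filename, n) in line_to_tests}
        if covered:
            mutation_targets[filename] = covered
        for n in covered:
            tests_to_run |= line_to_tests[(filename, n)]
    return mutation_targets, tests_to_run
```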


@drorasaf
Contributor

In order to do so, don't we need to serialize the nodes between every run?

@abingham
Contributor Author

> don't we need to serialize the nodes between every run?

I don't think so. We'll rely on a coverage tool to tell us the correspondence between tests and code-under-test. Then, when there are changes, we can determine which tests to run (i.e. those that correspond to changed lines) and which parts of the code to mutate (i.e. those that were changed).

This leaves a lot of unanswered questions, of course. Can coverage tools even give us this information? How will we need to rework the test runners (and maybe other parts of CR)? Do the underlying test systems allow us to be that selective about which tests are run? A lot of questions.

But for now, I don't think we need to do anything as heavy-weight as serializing AST nodes. The approach I'm describing here is heuristic; it's designed to be "pretty good" and provide easy performance gains.
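
On the "can coverage tools even give us this information?" question: coverage.py's dynamic contexts record which test executed each line, which is one plausible source for the mapping. A sketch, assuming coverage.py 5.0+ with `dynamic_context = test_function` set in `.coveragerc`:

```python
# Sketch: build a (filename, lineno) -> tests mapping from a coverage.py
# data file recorded with dynamic contexts enabled. Assumes coverage.py
# 5.0+; not something Cosmic Ray does today.
from collections import defaultdict

import coverage


def line_to_tests_map(data_file=".coverage"):
    data = coverage.CoverageData(basename=data_file)
    data.read()
    mapping = defaultdict(set)
    for filename in data.measured_files():
        for lineno, contexts in data.contexts_by_lineno(filename).items():
            for ctx in contexts:
                if ctx:  # "" is the empty (non-test) context
                    mapping[(filename, lineno)].add(ctx)
    return mapping
```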

@rob-smallshire
Contributor

I just had the same idea while I was waiting for Cosmic Ray to do its thing. It's sort-of obvious, so I'm not surprised it has already been suggested.

It seems to me this could provide a huge performance win.

@blueyed
Contributor

blueyed commented Nov 9, 2017

For pytest there is testmon, which might be useful to hook into.

There's an issue where the author of mutmut shared some insights about a slowdown when using it, though (tarpas/pytest-testmon#76).
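
For reference, testmon hooks into pytest via a flag: the first run records per-test dependencies, and later runs deselect tests that no change affects:

```
pytest --testmon
```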

@rob-smallshire
Contributor

There's an interesting thread here on extracting this information from coverage: https://bitbucket.org/ned/coveragepy/issues/170/show-who-tests-what

@abingham
Contributor Author

abingham commented Nov 9, 2017

> It seems to me this could provide a huge performance win.

In my thinking, this is one of the only ways to get mutation testing working in daily continuous integration processes on projects of any real size.

@rob-smallshire
Contributor

Bingo?! https://github.com/chrisbeaumont/smother
