
Question on code coverage support for manual UI tests #520

Closed
quincycs opened this issue Feb 10, 2017 · 4 comments


quincycs commented Feb 10, 2017

Hi

My app has a big suite of manual UI tests that we run by hand. I would find it interesting to know how much code coverage we achieve with our current process. It takes 3 engineers about 3.5 days to run our 700+ tests. The app shuts down in between test cases.

Curious if you all have any suggestions.

Member

JaKXz commented Feb 11, 2017

This sounds well beyond the scope of nyc and istanbuljs as a whole. While I appreciate the huge complexity that your team has to deal with, as a fellow dev I am honestly cringing at the sound of

3.5 days of 3 engineers to run our 700+ tests [manually]

Theoretically, with that many manual tests, you're beyond the acceptance level and are exercising your entire app. My guess is that any code not hit by those tests is dead code that isn't being used in your app.

FWIW, a naive suggestion is to look into some sort of automated acceptance testing. nightmarejs looks pretty cool; however, we are actively dealing with a possible bug in our instrumentation: #514

I'm going to close this for now, since like I said it sounds well out of scope of this project and org (unless I'm missing something). Good luck!

@JaKXz JaKXz closed this as completed Feb 11, 2017
@JaKXz JaKXz added the invalid label Feb 11, 2017
@quincycs
Author

Thanks @JaKXz. Even with an automated UI test suite, it sounds like it's beyond the scope of this project as well.

Member

bcoe commented Feb 11, 2017

@quincycs worth noting that nyc can stitch together multiple test run outputs into a single report; while I don't think we'll add this specific use case to the project, it would be worth looking at:

  1. babel-plugin-istanbul, which would allow you to instrument your running code for coverage (you could then dump the global coverage object at some point during the testing process).
  2. istanbul-middleware might also be worth looking at; this project lets you get test coverage for functional tests.

@quincycs
Author

Thanks so much. That sounds great :)
