Measure code coverage #200
We should find a way to measure the code coverage of our unit tests (e.g., by running our tests under coverage.py or using Twisted Trial's built-in `--coverage` flag).
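For concreteness, the two approaches look roughly like this (illustrative invocations run from the repository root; the `tests` module path is an assumption):

```sh
# Option 1: Trial's built-in coverage; writes per-module coverage
# files under _trial_temp/coverage/.
trial --coverage tests

# Option 2: run Trial under coverage.py, then print a summary.
coverage run -m twisted.trial tests
coverage report
```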
I updated tox.ini in my fork to run the tests under coverage.py.
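Roughly along these lines (a simplified sketch; the environment name and the `--source=sygnal` package are illustrative, not the exact contents of my fork):

```ini
# Hypothetical tox environment for coverage runs.
[testenv:coverage]
deps =
    {[testenv]deps}
    coverage
commands =
    coverage run --source=sygnal -m twisted.trial tests
    coverage report
```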
@callahad I'm a bit concerned about where the coverage summaries end up. It would be nice if they were placed somewhere other than the end of the unit test output, as I often find myself needing to scroll up past them to get to the more immediate need: the failing test output. This probably isn't the right issue to discuss making the output more useful, but I wonder if we want to hold back on adding coverage output to other projects (i.e. Sygnal) until we've got a meaningful setup for it on Synapse.
This is exactly where I want to end up. It seems like Trial's built-in reporting doesn't do much for us in that regard, but coverage.py has a nice HTML report which you can then click around in to see line-by-line information.
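Generating that report is a single extra step once coverage data has been collected (illustrative commands):

```sh
coverage run -m twisted.trial tests   # collect coverage data into .coverage
coverage html                         # render it as browsable HTML
# Then open htmlcov/index.html in a browser for line-by-line highlighting.
```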
I think it might be nice to figure things out in a lower-stakes repo (Sygnal) before pushing that rock all the way up the hill that is Synapse 🙂
As a specific end goal, I'd love to see us upload coverage data to CodeCov or Coveralls, which would give us the pretty longitudinal graphs, the ability to click around into files, etc. But for now, I think the first milestone is to set up coverage.py and get it working in a Tox environment. It seems like running the tests under coverage.py there should be straightforward. Then it's on to getting those reports added to Pull Requests -- so selecting between CodeCov and Coveralls, adding report generation to our GitHub Actions workflow, making sure it runs on the right triggers, etc. I think CodeCov is likely to be easier as it supports uploading coverage information on public repositories without requiring a secret GitHub access token, but I'm open to being convinced that Coveralls is a better fit.
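A sketch of what that upload step might look like in a GitHub Actions workflow, assuming we go with CodeCov via the codecov/codecov-action action (the tox environment name and action version here are assumptions, not settled choices):

```yaml
# Hypothetical workflow excerpt; "coverage" as the tox env name is an assumption.
- name: Run tests under coverage
  run: tox -e coverage

- name: Produce an XML report for upload
  run: coverage xml

- name: Upload coverage to CodeCov
  uses: codecov/codecov-action@v3
  with:
    files: coverage.xml
```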
I can take a look at making this happen. |