Report code coverage back to GitHub #4574
Conversation
Force-pushed aef5f3f to 7dacf50
Force-pushed b004dbd to 891f17d
This isn't a helpful metric. For pull request reviews, there are two valuable pieces of information:
Other metrics, including any metric that does not separate the lines of code changed by a PR from other lines, are actively harmful because they are misleading. I would recommend not posting such metrics anywhere for review, and especially not inside a PR.
@sharwell Are your recommendations different if the policy for the repo is that all code should have 100% test coverage?
Really? The high-level metrics seem helpful to me; it's good to know whether coverage is improving or regressing before digging into a long report. I would also want easy access to the full report for details (e.g. linked from the same comment).
Force-pushed 891f17d to 9d40c5c
It is valuable in the context of this repo, which has established guidelines and procedures around code coverage. One of them is that coverage can't go down; if it does, it's up to the author to figure out what went wrong.
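The "coverage can't go down" policy amounts to a simple CI gate. A minimal sketch of such a check, assuming hypothetical JSON files that each carry a `line_coverage_percent` field (this repo's actual pipeline and report format may differ):

```python
# Illustrative sketch of a "coverage can't go down" gate.
# File paths and the JSON schema are assumptions, not this repo's real setup.
import json
import sys


def check_coverage(baseline_path: str, current_path: str) -> bool:
    """Return True if line coverage did not drop below the baseline."""
    with open(baseline_path) as f:
        baseline = json.load(f)["line_coverage_percent"]
    with open(current_path) as f:
        current = json.load(f)["line_coverage_percent"]
    if current < baseline:
        print(f"Coverage regressed: {current:.2f}% < baseline {baseline:.2f}%")
        return False
    print(f"Coverage OK: {current:.2f}% (baseline {baseline:.2f}%)")
    return True


if __name__ == "__main__" and len(sys.argv) >= 3:
    # Fail the build (non-zero exit) when coverage regressed.
    sys.exit(0 if check_coverage(sys.argv[1], sys.argv[2]) else 1)
```

The non-zero exit code is what makes the policy enforceable: the CI job fails, and the author has to investigate.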
Good idea. I was too tired last night to implement it.
Force-pushed 9d40c5c to d864de5
Table format needs work 😆
Indeed 😉
Force-pushed d864de5 to 81aa7e4
Force-pushed 81aa7e4 to 9e6d44f
🎉 Good job! The coverage increased 🎉
Full code coverage report: https://dev.azure.com/dnceng-public/public/_build/results?buildId=455807&view=codecoverage-tab
Force-pushed 9e6d44f to 7fd52e1
Ready now. Rebased on top of the release/8.0 branch.
🎉 Good job! The coverage increased 🎉
Full code coverage report: https://dev.azure.com/dnceng-public/public/_build/results?buildId=455823&view=codecoverage-tab
"100% test coverage" can have many definitions. Regardless, the question seems more academic than practical. My suggestion is derived from successful outcomes observed at scale, particularly the fact that many people have limited historical exposure to code coverage tools and have almost always been incorrectly taught that aggregate metrics are a relevant indicator of [something], when in reality the differential code coverage is the only report you generally need to see.
If the report isn't roughly the same size as the pull request, then either you've been pointed to an unnecessarily long report or the pull request significantly changed the coverage of code not touched by the pull request. With filtering correctly applied, a report is only long if it's important. As the repository grows in size, the coverage changes trend towards 0%, and eventually "improving" or "regressing" is an irrelevantly small number. It's best to just ignore it from the start, which both improves focus on items that actually matter and avoids misleading new users into thinking aggregate metrics are useful numbers. A great example of an incorrectly filtered code coverage report can be viewed by clicking the link in the bot comment.
This is part of the point of my comments. When it's clear that a repository has problematic code coverage guidelines, it would be better to revisit those guidelines by teaching better ways to produce and consume code coverage information. Maintaining these policies doesn't just hurt the repository where they are used; it also suggests to the broader community that the guidelines are helpful, and other teams are likely to adopt similar processes without realizing they tend to hurt developer productivity. While the guidelines as seen here will not necessarily result in a buggy product, they are likely to increase product cost and timelines while reducing the ability to implement features past the MVP deliverable. In other words, the problems relate more to practical engineering efficiency than to theoretical correctness on an unbounded timeline.
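The differential coverage idea argued for above can be sketched in a few lines: intersect the lines a PR changed (from its diff) with the lines the tests actually executed, and ignore everything else. The input shapes here are assumptions for illustration; real tools derive them from `git diff` and a coverage report such as Cobertura XML.

```python
# Sketch of differential code coverage: only lines changed by the PR count.
# `changed` and `covered` map file paths to sets of line numbers; producing
# those inputs from a real diff and coverage report is out of scope here.

def diff_coverage(changed: dict, covered: dict) -> float:
    """Percentage of changed lines that were executed by the tests."""
    total = sum(len(lines) for lines in changed.values())
    if total == 0:
        return 100.0  # nothing executable changed, nothing to cover
    hit = sum(len(lines & covered.get(path, set()))
              for path, lines in changed.items())
    return 100.0 * hit / total
```

For example, if a PR touches lines 10-13 of one file and the tests execute lines 10-12, `diff_coverage({"src/Foo.cs": {10, 11, 12, 13}}, {"src/Foo.cs": {10, 11, 12}})` returns 75.0, and the report a reviewer needs is only the one uncovered line.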
The report is very long, though I do like the project-level summary shown by the bot now. I agree that some granularity is more meaningful than an overall metric, and project level seems like a good compromise: it tells me where in the report to look for details.
Report code coverage increase/decrease back to the PRs.
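A hedged sketch of how a bot comment like the ones in this thread could be assembled. The "increased" headline mirrors the bot output shown above; the decreased/unchanged messages and the function inputs are assumptions for illustration, not this PR's actual implementation.

```python
# Illustrative assembly of a PR coverage comment. Only the "increased"
# message is taken from the bot output in this thread; the other two
# branches are hypothetical.

def build_comment(base: float, head: float, report_url: str) -> str:
    """Format a coverage summary comment from base/head percentages."""
    delta = head - base
    if delta > 0:
        headline = "🎉 Good job! The coverage increased 🎉"
    elif delta < 0:
        headline = f"⚠️ The coverage decreased by {abs(delta):.2f}% ⚠️"
    else:
        headline = "Coverage did not change."
    return f"{headline}\nFull code coverage report: {report_url}"
```

Posting the resulting string back to the PR would typically go through the GitHub issue-comments API, with the build's report URL (as seen in the bot comments above) linked for details.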