Diagnosing slow coverage reports #111
Alright, the coverage completed after about 50 minutes. The coverage file contains 126 anonymous type references. Can I tell AltCover not to instrument anonymous types somehow? And is there anything else I should look for in the file to see why it is slow?
Emphasis: By "slow" I don't mean AltCover is slow. 10,000 methods tracked per second is quite impressive, considering what it's doing. I am simply referring to the total time, which is down to the nature of my particular project.
The .acv files are binary versions of the visit data, deflate compressed. Given that the compression is achieving more than 25 visits per byte, it looks like there is a great deal of repetitive looping going on. That, unfortunately, is the absolute worst sort of behaviour for instrumented code, as it magnifies the instrumentation overhead -- see also issues #51 and #57 for previous examples. The "x visits in y seconds" message comes from the post-test collection phase, and only covers the unpacking of the recorded data; time spent executing the tests is shown by the test runner itself.
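If you want to peek inside one yourself, here is a minimal sketch, assuming the payload is a bare deflate stream as described above (the inflated records are still binary, so this only recovers the raw bytes; `coverage.acv` is a placeholder name):

```csharp
// Inflate an .acv payload to raw bytes.
// Assumes a bare DeflateStream payload; the record layout inside is still binary.
using System.IO;
using System.IO.Compression;

using var input = File.OpenRead("coverage.acv");           // placeholder path
using var inflate = new DeflateStream(input, CompressionMode.Decompress);
using var output = File.Create("coverage.acv.raw");
inflate.CopyTo(output);                                    // raw, uncompressed visit records
```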
"Anonymous types" are just types, that happen to have names containing angle brackets, dollar signs or commercial ats, all of which are unutterable as identifier names at the user level. At the IL level, they are treated no differently to any user-named class, as in this example from one of the AltCover self-tests
Interesting. Will something like `--single` do it? I'll try that, because I'm not really interested in knowing the counts.
Ah, I can't do that and still have the call context. That also makes sense.
Is there any way that I can easily spot the biggest "sinners" in terms of call counts, to quickly see whether there's a fundamental problem in my code? The reported count seems incredibly high, even for this test run.
In the completed .xml report, look at the sequence point visit counts.
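A heavily-visited line shows up as something like this (illustrative values, attribute set abridged; this assumes the OpenCover-style schema that call-context tracking uses, where `vc` is the visit count and `sl` the source line):

```xml
<SequencePoint vc="2147251" offset="12" sl="42" sc="13" el="42" ec="41" />
```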
In 2 lines of powershell
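— something like this sketch (`coverage.xml` is a placeholder for the actual report path, and the XPath assumes the same OpenCover-style layout as above) —

```powershell
$x = [xml](Get-Content .\coverage.xml)
$x.SelectNodes("//SequencePoint") | Sort-Object -Property vc -Descending |
  ForEach-Object { "{0}`t{1}" -f $_.vc, $_.ParentNode.ParentNode.SelectSingleNode("Name").InnerText }
```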
will list the methods owning each line in descending order of visit count.
Great idea! I modified the command slightly, since it sorted naturally (alphabetically) and not numerically:
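Roughly this (again a sketch — the `[long]` cast is the relevant change, so Sort-Object compares numbers rather than strings):

```powershell
$x = [xml](Get-Content .\coverage.xml)
$x.SelectNodes("//SequencePoint") | Sort-Object -Property { [long]$_.vc } -Descending |
  ForEach-Object { "{0}`t{1}" -f $_.vc, $_.ParentNode.ParentNode.SelectSingleNode("Name").InnerText }
```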
That identified some obvious flaws in my tests that I can definitely solve. That being said, is there anything that can be done to further improve performance? For instance, does every hit cause something to be written to disk, and if so, can we move this into memory, perhaps with some IPC?
Running under […]. As for offloading to a separate process via some IPC mechanism: having, three years ago now, hit the VSTest abrupt-finish limitation, but not having discovered what it was that was cutting the time available to the […]
I see. Thank you so much for keeping up the good work and the detailed explanations. I just reduced the number of calls - it was way too extensive. That helped on the test duration, but it still seems like it's the parsing of the temporary `.acv` files that takes most of the time. Could it be possible to make the recorder keep only the distinct visits (location plus context) rather than every single one? That way, the size would be way smaller. It is currently around 27 megabytes per project.
At the inevitable cost of more memory (keeping a set of location + caller visits, rather than just locations), it would be possible; the same goes for location + timestamp. The slow processing is purely the expansion of the few megabytes on disk into the likely gigabyte-level amount of uncompressed data they represent -- a price also paid on the way in -- so perhaps the extra processing associated with the extra memory would be offset by removing all that.
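A hypothetical sketch of that tradeoff (invented names, not AltCover's actual recorder): appending every visit grows with loop iterations, while a set of distinct (location, caller) pairs grows only with distinct call sites.

```csharp
using System.Collections.Generic;

static class VisitRecorder
{
    // One entry per distinct (location, caller) pair, however many times it is hit;
    // memory grows with coverage, not with iteration count.
    static readonly HashSet<(int Location, int Caller)> Seen = new();

    public static void Visit(int location, int caller) => Seen.Add((location, caller));
}
```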
That sounds like a reasonable tradeoff to make. Dev computers and build servers these days have plenty of RAM to spare, but time is much more important IMO.
Have an experimental build which should allow `--single` and `--callContext` together.
This works, and it does seem a lot faster! I can barely notice the collection time anymore! |
Alright @SteveGilham, I have been thoroughly using this for about two working days, and there is no problem with it. I think it is shippable! Very nice! |
Released as v7.4.807 |
This is great stuff - thank you! |
I'm having a test run that seems to go on forever.
Here's my log so far, although the tests are still running:
I tried opening these `.acv` files, but I can't seem to read them. Is there any way that I can get more information on why such a large number of calls is being recorded, and why it never ends?
Also, it's super nice that it produces the `X visits recorded in Y` message once in a while, chunking things up. However, as you can see, the message didn't appear until after 10 minutes (so I thought my build was frozen). Can it perhaps chunk up based on size AND time, so that feedback is given once in a while, no matter what?