Provide a way for users to turn on logging for formatting, to help resolve bugs #12304
davidwengier merged 17 commits into dotnet:main from
Conversation
    }
    catch (IOException)
    {
        // Swallow IO exceptions, logging is best effort
While it's a best effort, it feels like we could probably do a bit better than this. As written, every LogObject and LogSourceText call for the same name will throw, because FileMode.CreateNew is used. Would it be better to just stomp on old files?
Thank you for this comment. Doing it this way was a deliberate choice, but I completely failed to remember that and missed adding something to the test logger. My thinking was that it's trivial for us to ensure that names are unique, and our test logger can validate that, so we don't need to worry about doing file-exists checks, or logic to find unique file names, etc. If we come up with a scenario later where we want to stomp on old files, I would think that should be an explicit choice via an API of some kind.
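To illustrate the trade-off being discussed: .NET's FileMode.CreateNew throws an IOException if the target file already exists, rather than overwriting it, and the catch block above swallows that failure because logging is best effort. A minimal Python analogy (the helper name and paths are hypothetical, not from the PR) uses open mode "x", which has the same create-only semantics:

```python
import os
import tempfile

def log_best_effort(path: str, text: str) -> bool:
    """Write a log file only if the name is not already taken.

    Mirrors FileMode.CreateNew semantics: mode "x" fails when the file
    already exists, and the error is swallowed because logging is best
    effort. Returns whether the write actually happened.
    """
    try:
        # "x" = create-only: raises FileExistsError on a name collision,
        # just as FileMode.CreateNew raises IOException in .NET.
        with open(path, "x", encoding="utf-8") as f:
            f.write(text)
        return True
    except OSError:  # FileExistsError is a subclass of OSError
        return False

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "FormattingPass1.log")
print(log_best_effort(path, "first"))   # first write succeeds
print(log_best_effort(path, "second"))  # same name: silently dropped
```

The alternative raised in the review (stomping on old files) would correspond to FileMode.Create, i.e. Python mode "w", which silently truncates an existing file instead of failing, so unique names would no longer matter but earlier logs could be lost.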
src/Razor/src/Microsoft.CodeAnalysis.Razor.Workspaces/Formatting/FormattingLoggerFactory.cs (outdated, resolved)
src/Razor/src/Microsoft.CodeAnalysis.Razor.Workspaces/Formatting/Passes/HtmlFormattingPass.cs (resolved)
src/Razor/src/Microsoft.CodeAnalysis.Razor.Workspaces/Formatting/RazorFormattingService.cs (outdated, resolved)
Thanks for all the feedback, especially around the documentation. This is ready for a second look (and should build now!)
We've gotten a couple of reports recently of formatting issues that have shown bugs in the formatting engine, but that haven't reproduced when trying the same scenario locally. Taking some inspiration from the Edit and Continue logging infrastructure, and from tools like complog and binlog, I wanted to create something that lets us ask users to turn on "formatting logging" and get enough data out of the system to essentially replay what happened on their machine.
We already had most of this logging for tests, and it has proven invaluable in the past for fixing these issues, assuming a failing test can be created. Hopefully, now that we can get the same logging from users, we will always be able to get a failing test. But sadly, only time will tell, because I can't repro the issues in order to test that the logging is enough to repro them 😂