[Breaking Change] Add a mechanism for permanent suppressions (baselining) #7259
I don't know much about SARIF, but from what little I do know, it seems like a reasonable approach to consolidate error/warning output into a common format. That would make a lot of things easier, including posting to GitHub (probably converting to markdown tables or something, or even pinning to specific PR lines like some bots do), comparing against a suppressions file, etc. Hashing the surrounding context is a good idea and would probably work for most cases. But when the target is JSON, what about using a JSONPath? We do that for autorest in many cases, for example, and it rarely changes. I also imagine most suppressions would work fine with a JSONPath, e.g.:

suppressions:
- from: foo.json
  where: $.definitions.Bar.prop_name
  code: 1234 # or maybe a friendly code name, e.g., UseCamelCase
  reason: need to match external schema |
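To make the JSONPath idea concrete, here is a minimal sketch of how a suppression entry like the one above could be matched against a spec document. It assumes a JSONPath library such as jsonpath-plus; the library choice and the `Suppression` shape are illustrative, not part of the proposal:

```typescript
import { JSONPath } from "jsonpath-plus";

// Illustrative shape of one suppression entry (field names mirror the YAML above).
interface Suppression {
  from: string;    // spec file the suppression applies to, e.g. "foo.json"
  where: string;   // JSONPath to the offending element, e.g. "$.definitions.Bar.prop_name"
  code: string;    // rule code or friendly name, e.g. "UseCamelCase"
  reason: string;  // human-readable justification
}

// Returns true if the suppression's JSONPath resolves to at least one node
// in the given spec document, i.e. the suppressed element still exists.
function suppressionApplies(spec: object, suppression: Suppression): boolean {
  const matches = JSONPath({ path: suppression.where, json: spec });
  return Array.isArray(matches) && matches.length > 0;
}
```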
@heaths while it is by no means a rejected approach or anything, the main issue with the Autorest way is that so far it has been a pain to get right, causing a lot of rework. You can observe our troubleshooting section for it: and the most recent addition, which should show up in a few minutes: |
I would also be careful with hashing line text, as that makes things more fragile in a number of cases and makes it harder for people to get the suppressions correct. Instead of a hash, we should try to ensure the error message itself has enough context to suppress the correct instance. |
Just jumping in here at Konrad's encouragement on some SARIF things. For sure, your output looks very SARIF friendly: you have conceptual quality concerns, locations where these standards aren't met, user-facing reports, descriptive text for the standards enforced, etc. Using SARIF will unlock the ecosystem of consumers, which includes Visual Studio, VS Code, multiple React controls for rendering, etc. GitHub Actions speaks SARIF natively for its GHAS, as does Microsoft's equivalent offering.

In re: JSON paths, SARIF has a construct called a logical location (as opposed to a physical location, such as a file on disk) that can be used to express this information. There are multiple SARIF-exporting tools at MS that use this data to record JSON paths for scan tools.

In re: suppressions, I think your current approach, to produce a JSON path that's specific to a finding, is reasonable and reasonably resilient. We have had some other JSON scenarios I'm aware of where a tool will elide some path information in an attempt to make the suppression less fragile, for example, a precise index into an array (assuming that this data isn't required to uniquely identify a problem).

Finally, it sounds like you're already concerned about the fragility of a per-line hash. The SARIF SDK has a rolling hash algorithm built into it that's intended to provide more stability across code churn, but this hash will be even more fragile than a single-line hash, so it is probably not of interest. It also has the undesirable characteristic of being opaque to the user, which @weshaggard warned against. I have authored some tools historically that emitted a working suppression in the user-facing output, and this could be an option for you. A typical approach is to emit your finding to the user in a well-formed sentence, and then add another sentence that describes the precise suppression data that can be cut-and-pasted elsewhere to suppress. Per SARIF, consumers can always choose to show only your first sentence of output (when UX real estate is limited) and provide an expander or something else to reveal the additional content. The VS SARIF viewer uses this approach a lot. In console output, of course, the entire finding is sent to output. |
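For illustration only, a minimal sketch of what one breaking-change finding might look like as a SARIF 2.1.0 result, with the JSON path carried in a logical location next to the physical file location. The rule id, message text, file path, and JSON path below are made-up example values:

```typescript
// Hypothetical example of one SARIF 2.1.0 result carrying a JSON path
// as a logical location alongside the physical file location.
const exampleResult = {
  ruleId: "ParamRemoval", // made-up rule id for the example
  message: { text: "Parameter 'prop_name' was removed from definition 'Bar'." },
  locations: [
    {
      physicalLocation: {
        artifactLocation: { uri: "specification/example/stable/2023-01-01/example.json" }, // made-up path
        region: { startLine: 1234 },
      },
      logicalLocations: [
        {
          kind: "property",
          fullyQualifiedName: "$.definitions.Bar.properties.prop_name", // made-up JSON path
        },
      ],
    },
  ],
};
```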
Bigger picture discussion about suppressions here: |
I had a chat with @weshaggard about how to approach the first prototype of this work. Consider this example breaking change log
mentioning this spec element.
For full context of this example data, see this comment. For the first prototype, I will do the following:
The value will be a JSON path derived from the available information. We could probably pull it out of openapi-diff somehow. You can verify the JSON path at https://jsonpath.com/. The ADO build log will also point to an aka.ms link explaining how to apply these snippets, as described below.
Design considerations
|
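As a sketch of the "snippets in the build log" idea mentioned above (not the actual prototype code; the `Violation` shape and field names are assumptions), the tool could format each unsuppressed violation into a YAML fragment the spec author can copy into the suppressions file:

```typescript
// Hypothetical shape of one breaking-change violation reported by the tool.
interface Violation {
  tool: string;      // e.g. "Breaking Change"
  specFile: string;  // spec file the violation was found in
  jsonPath: string;  // JSON path to the offending element
  code: string;      // e.g. "ParamRemoval"
}

// Formats a violation into a suppressions-file snippet that the build log
// could print for the spec author to copy, fill in a reason, and commit.
function toSuppressionSnippet(v: Violation): string {
  return [
    `- tool: ${v.tool}`,
    `  filePattern: ${v.specFile}`,
    `  jsonPath: ${v.jsonPath}`,
    `  code: ${v.code}`,
    `  reason: <explain why this breaking change is acceptable>`,
  ].join("\n");
}
```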
Let's not abuse md files for these suppressions; let's make it a proper yml file so |
Per our chat with @mikeharder @weshaggard: example of the currently proposed format in the

- tool: Breaking Change
  filePattern: ./specs/arm/KeyVault/**/
  jsonPath: ...
  code: ParamRemoval
  reason: matching actual impl.
- tool: TypeSpec requirement
  filePattern: ./specs/arm/KeyVault/**/
  jsonPath: ""
  code: ""
  reason: brownfield

The suppressions file to read is determined as follows: for a given API spec being evaluated, walk the directory tree until you find the first |
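A minimal sketch of that "walk the directory tree" resolution rule, assuming Node-style tooling and an upward walk from the spec's directory toward the repo root; the suppressions file name is passed as a parameter because it is not settled in this thread:

```typescript
import * as fs from "fs";
import * as path from "path";

// Starting from the directory containing the spec file, walk up the directory
// tree and return the path of the first suppressions file found, or undefined
// if the repo root (or the filesystem root) is reached without finding one.
function findSuppressionsFile(
  specFilePath: string,
  repoRoot: string,
  suppressionsFileName: string, // e.g. "suppressions.yaml" (name not final)
): string | undefined {
  let dir = path.dirname(path.resolve(specFilePath));
  const root = path.resolve(repoRoot);
  while (true) {
    const candidate = path.join(dir, suppressionsFileName);
    if (fs.existsSync(candidate)) {
      return candidate;
    }
    const parent = path.dirname(dir);
    if (dir === root || parent === dir) {
      return undefined;
    }
    dir = parent;
  }
}
```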
Just a thought: should we consider JMESPath instead? This is what |
@heaths We use Still, we could consider supporting |
Eh, I wouldn't bother. 😄 Thanks for noting you already use JSONPath. Certainly in this case that is more consistent. The two are close, and I think supporting both would just lead to confusion. I didn't realize you already used JSONPath, so I only suggested considering it since |
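For context on "the two are close": the suppression path from the earlier example would look nearly identical in both query languages (illustrative only, since the thread settles on JSONPath):

```typescript
// The same selection expressed in both query languages (illustrative).
const jsonPathExpr = "$.definitions.Bar.prop_name"; // JSONPath (already in use here)
const jmesPathExpr = "definitions.Bar.prop_name";   // JMESPath equivalent
```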
This work will build on: |
Currently, there is no way to permanently suppress breaking change violations, as can be seen at:
aka.ms/azsdk/pr-suppressions
This is a major gap causing a lot of rework, and we want to fix it by providing such a mechanism. One idea is to compare the tool's error output against a baseline text file checked into the repository: if every output error line has a match in the baseline file, the tool reports success.
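A minimal sketch of that baselining idea, assuming the error output and the baseline are both plain text with one finding per line; the function name and exact-match rule are assumptions, not a settled design:

```typescript
// Returns the error lines that are NOT covered by the baseline.
// The check succeeds (no new violations) when the returned array is empty.
function findUnbaselinedErrors(toolOutput: string, baselineText: string): string[] {
  const baseline = new Set(
    baselineText
      .split("\n")
      .map((line) => line.trim())
      .filter((line) => line.length > 0),
  );
  return toolOutput
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0 && !baseline.has(line));
}
```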
Preliminary design considerations
There are several design decisions to make before we proceed with implementation. Some of the aspects we have identified so far:
Related work, docs and context
RE: [backlog discussion] Breaking Changes for Microsoft.Sql 2023-02-01-preview Version Release
RE: Breaking Change Review - MobileNetwork ARM API
@weshaggard @mikeharder @heaths FYI