[Azure Stream Analytics] Add Power BI and Azure Data Lake Store output support #1546
olydis merged 2 commits into Azure:current from atpham256:current
Conversation
- Updated swagger spec to include support for Power BI and Azure Data Lake Store outputs - Added/updated x-ms-examples for PUT, PATCH, and GET requests with regards to outputs - Updated some descriptions to be more accurate
|
@atpham256, |
|
Ran validation tools and there were no new errors compared to the last time I ran them for the previous PR. The OBJECT_ADDITIONAL_PROPERTIES errors from oav validate-example seem to be a bug in the tool, where it cannot handle response bodies with polymorphic response object models; this was also present in my last PR. The examples are definitely valid in this case. |
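The false positive described above can be illustrated with a minimal sketch (hypothetical schema and property names; this is not the actual oav implementation): a validator that checks a response only against the base model's declared properties, instead of resolving the discriminator to the derived model first, will flag every subtype-specific property as additional.

```python
# Minimal sketch of why additional-properties validation can misfire on
# polymorphic models (hypothetical names; not the actual oav logic).

BASE_OUTPUT_PROPS = {"name", "type"}  # properties declared on the base Output model
POWER_BI_PROPS = BASE_OUTPUT_PROPS | {"dataset", "table", "groupId"}

def find_additional_properties(response_body, known_props):
    """Return response properties not declared in the schema being validated."""
    return set(response_body) - known_props

response = {"name": "output1", "type": "PowerBI", "dataset": "ds1", "table": "t1"}

# Validating against only the base schema flags subtype-specific properties
# as OBJECT_ADDITIONAL_PROPERTIES errors:
print(find_additional_properties(response, BASE_OUTPUT_PROPS))  # -> {'dataset', 'table'}

# Resolving the discriminator to the derived schema first reports no errors:
print(find_additional_properties(response, POWER_BI_PROPS))     # -> set()
```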
|
Hi There, I am the AutoRest Linter Azure bot. I am here to help. My task is to analyze the situation from the AutoRest linter perspective. Please review the below analysis result: File: AutoRest Linter Guidelines | AutoRest Linter Issues Send feedback and make AutoRest Linter Azure Bot smarter day by day! Thanks for your co-operation. |
|
@ravbhatnagar Breaking change in existing API version (addition of new type in polymorphic hierarchy that can be returned from server => deserialization can fail in previously generated artifacts) |
|
@ravbhatnagar ping |
|
@atpham256 - The service seems to be supporting a new model type (PowerBIOutput). Please update the api-version for this change since this is a breaking change. |
|
@ravbhatnagar Hi Gaurav, this is an additive change that we do not consider a breaking change for our customers. From our talks with other teams such as ADF, they seem to follow a similar model of not treating additive changes as breaking changes. We have been using this model for a while and have not had any complaints from our customers about additive changes. Also, this has been exposed in our service for a while (if you look at the Portal, you can create a Power BI or ADLS output for ASA jobs). Having to add a new API version for each small change does not seem scalable, since we will end up with tons of API versions to keep track of. |
|
Hi Gaurav - this is how we have shipped the SDK for the last 3 years. We talked internally about potentially changing this. We also talked to a few other teams and decided to continue with this model, as others are doing the same and we have not heard any feedback on it so far. Given the cost of changing the API version for every change, we only change the API version for breaking changes to existing behaviors/APIs. All additive changes are currently treated as non-breaking. |
|
@venkatcmsft @atpham256 - I think we discussed this at length in our call, and we had agreement that this was the right thing to do for the customers. Otherwise, how will customers know when you start supporting new capabilities in your service (e.g., by adding new properties)? This is an Azure-wide standard we want to follow and push teams towards. The fact that some other team is also not following the recommendation should not be the benchmark here. |
|
Gaurav - Yes, we had a meeting about it. We then talked internally about how to execute on it. We concluded that incrementing the API version for non-breaking additive changes is not required, for these reasons: unless users do strict schema validation of the response, they will not fail on deserialization, and that is not common (we have 15k unique customers and no one has ever complained, as this is how APIs have been developed in this service team for the last 3 years). That is why we went around talking to various teams about how they manage these kinds of changes, and we found that it has been a problem for the other services we talked to as well; they decided to do what is working for their customers. Let us take this offline and close on it, as it is hard to reach closure on this thread. |
|
Please note that if you are using AutoRest to generate clients, strong client-side validation will indeed happen (at least in Python and Node/JavaScript). The client will fail deserialization when it encounters discriminator values it does not know about. |
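As a sketch of that failure mode (hypothetical class and discriminator names, not the actual generated SDK code): a client generated before the new subtype existed maps discriminator values to the classes it knows about and raises when the server returns an unknown one.

```python
# Sketch of strict discriminator-based deserialization, as AutoRest-generated
# clients do in Python/Node (hypothetical classes; not actual SDK code).

class Output: ...
class BlobOutput(Output): ...
class EventHubOutput(Output): ...

# Discriminator map baked into a client generated BEFORE PowerBIOutput existed:
OUTPUT_TYPES = {"Blob": BlobOutput, "EventHub": EventHubOutput}

def deserialize_output(body):
    kind = body["type"]
    try:
        cls = OUTPUT_TYPES[kind]
    except KeyError:
        raise ValueError(f"Unknown output discriminator: {kind!r}")
    return cls()

deserialize_output({"type": "Blob"})       # known subtype: deserializes fine

try:
    deserialize_output({"type": "PowerBI"})  # new server response, old client
except ValueError as e:
    print(e)  # old client fails deserialization here
```

This is why adding a new type to a polymorphic hierarchy returned by the server can break previously generated artifacts, even though the change is purely additive on the wire.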
|
@ravbhatnagar Hi Gaurav, as discussed on Friday, can we approve this PR as an exception to unblock work for a dependent team? These 2 new polymorphic types have also already been exposed in this API version for a long time so we would like them to be in the Swagger spec to accurately document the API version. |
|
@olydis - It was agreed over a Skype call that we will let this through, since this change has been live on the service side for a long time now. This has been the model Stream Analytics has followed when adding new properties to existing APIs, etc. And so, although not ideal, we are approving this PR. The Stream Analytics team will take a work item on their side to start supporting api-versioning for such scenarios. |
|
@olydis, I think the ball is in your court now :) |
|
@olydis Ping! :) |
|
reviewing 😉 |
|
No modification for AutorestCI/azure-sdk-for-node |
|
No modification for AutorestCI/azure-sdk-for-python |
This checklist is used to make sure that common issues in a pull request are addressed. This will expedite the process of getting your pull request merged and avoid extra work on your part to fix issues discovered during the review process.
PR information
(the api-version in the path should match the api-version in the spec).
Quality of Swagger