Generate types for new MFD design #2455
Conversation
createFileFromStream,
  type CreateFileOptions,
  type CreateFileFromStreamOptions,
} from "@azure/core-rest-pipeline";
Is it because of the filename and content type attributes that we could remove these helpers?
Yeah, that, and the fact that we now accept streams as part bodies, so no need for createFileFromStream.
Thanks for making this! The approach looks good to me :)
const file = createFileFromStream(() => content, "hello.jpg", {
  type: "image/jpg"
});
it("allows specifying MIME type and filename", async () => {
Just curious: without the helper createFileFromStream, how do we feed a big file in a streaming way?
You can just pass the stream directly:
const result = await client
.path("/multipart/form-data/check-filename-and-content-type")
.post({
contentType: "multipart/form-data",
body: [
{ name: "profileImage", body: fs.createReadStream("/path/to/file"), filename, contentType }
]
});
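A minimal, self-contained sketch of that parts-array shape, using Node's `Readable.from` in place of a real `fs.createReadStream` (the `PartDescriptor` interface here is illustrative, not the actual generated type):

```typescript
import { Readable } from "stream";

// Illustrative part descriptor; field names (name, body, filename,
// contentType) match the snippet above, but the type itself is an assumption.
interface PartDescriptor {
  name: string;
  body: Readable;
  filename?: string;
  contentType?: string;
}

// Readable.from stands in for fs.createReadStream("/path/to/file").
const parts: PartDescriptor[] = [
  {
    name: "profileImage",
    body: Readable.from(["hello ", "world"]),
    filename: "hello.jpg",
    contentType: "image/jpg",
  },
];

// Drain the stream to show the part body is consumed lazily, chunk by chunk,
// rather than buffered up front.
async function readAll(stream: Readable): Promise<string> {
  let out = "";
  for await (const chunk of stream) {
    out += chunk;
  }
  return out;
}

readAll(parts[0].body).then((content) => {
  console.log(content); // "hello world"
});
```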
This reverts commit adaa2bc.
"@azure-rest/core-client": specSource === "Swagger" ? "^1.4.0" : "^2.0.0",
// Swagger and modular libraries currently only support the old multipart/form-data implementation.
"@azure-rest/core-client":
  specSource === "Swagger" || isModularLibrary ? "^1.4.0" : "^2.0.0",
Why does modular depend on v1? I was wondering what our plan is for v2 adoption:
- HLC -> v2?
- Modular -> v2
- RLC from swagger -> v1 or v2? // my preference is v2 if the upgrading effort is not too much
- RLC from typespec -> v2
For modular to depend on v2, we need to update the modular API models as well, and probably also figure out how we want to handle serialization, which is a whole other piece of work and, I think, out of scope for this PR. So I kept modular on the old version for now. Obviously we aren't going to keep it like that.
For modular to depend on v2, we need to update the modular API models as well, and probably also figure out how we want to handle serialization
Personally, I prefer to always use the latest version in modular. We didn't support the mfd feature in core rest v1 either, so it's fine considering there is no real need for modular mfd yet except openai (though I heard we plan to deprecate azure openai). Another reason is that modular is still at an early stage; the version mismatch may introduce unnecessary maintenance effort.
We could either generate regular models or throw errors for modular mfd.
@joheredi what do you think?
I think it depends: if making modular take a dependency on v2 means additional effort in this PR, I would favor keeping it on v1. If it is just a matter of switching the string, go for it. However, I think it is most likely the former; if so, I'm fine keeping the two versions in the meantime.
Ah, I think I see what you mean. I was under the impression that MFD was somewhat working in modular v1, given OpenAI is using it. But that is because they are doing customization, right? In that case I will change this to use 2.0, since there won't be a regression (it was already broken).
At the moment I have modular just generating normal models for MFD so as not to break the compilation.
Yes, openai has customization here.
Looks good! Great improvement :)
This reverts commit f0d9cc1.
LGTM, just curious: what would happen if a customer passes an inconsistent content type and body in a multipart/form-data operation? How would the core serialize the body and send the request?
*/
VerifyImage:
export interface LivenessSessionWithVerifyImageCreationContentParametersPartDescriptor {
  name: "Parameters";
Glad to see this happened.
LGTM
@@ -40,6 +48,19 @@ export function buildObjectInterfaces(
) {
  continue;
}

// FIXME: disabling new multipart generation for modular while we figure out the story
if (objectSchema.isMultipartBody && !model.options?.isModularLibrary) {
nit: what RLC version will be generated in the modular SDK? I imagine that the new design would align with v2 and the old design with v1.
We just generate a normal RLC model in the modular SDK (as if it were a plain old JSON operation) because of this check. This shape is more in line with v1 of the Core package. We can't generate the v2 models here yet, as that results in a compile error: the serializers don't serialize into the array shape.
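To make the contrast concrete, here is a hedged sketch of the two model shapes being discussed; all names are illustrative, not the actual generated code:

```typescript
// v1-style shape: a plain JSON-like object model, one property per part.
// (Illustrative; real generated names differ.)
interface ProfileFormDataV1 {
  profileImage: string;
}

// v2-style shape: the body is an array of part descriptors, each carrying
// its own name, body, and optional filename/contentType.
interface ProfileImagePartDescriptor {
  name: "profileImage";
  body: string;
  filename?: string;
  contentType?: string;
}
type ProfileFormDataV2 = Array<ProfileImagePartDescriptor>;

const v1Body: ProfileFormDataV1 = { profileImage: "..." };
const v2Body: ProfileFormDataV2 = [
  { name: "profileImage", body: "...", filename: "me.jpg", contentType: "image/jpg" },
];

// The key structural difference: v2 bodies are arrays, v1 bodies are objects.
console.log(Array.isArray(v1Body), Array.isArray(v2Body)); // false true
```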
}
];

function buildMultipartPartDefinitions(
Do we consider the case of A extends B?
Not currently, but we can revisit if this becomes a requirement.
If you set the content type to multipart/form-data and pass something unexpected, it currently just JSON.stringifies it, which is what we were also doing in the old MFD implementation. I opened an issue for Core tracking this earlier: Azure/azure-sdk-for-js#29424
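A rough illustration of that fallback (this is not the actual @azure/core-rest-pipeline code, just a sketch of the described behavior):

```typescript
// Sketch: string and binary bodies pass through untouched; anything else
// (e.g. an accidental object in a multipart part) falls back to JSON.stringify,
// matching the behavior described in the comment above.
function serializePartBody(body: unknown): string | Uint8Array {
  if (typeof body === "string" || body instanceof Uint8Array) {
    return body;
  }
  // Fallback for inconsistent content-type/body combinations.
  return JSON.stringify(body);
}

console.log(serializePartBody("plain text"));        // "plain text"
console.log(serializePartBody({ accidental: true })); // '{"accidental":true}'
```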
See #2447 for context. This version generates without a type helper. Hopefully this will improve understanding.
The scope of this first round of work is to create the API surface correctly. In follow-up PRs, we should look at
These follow-up items will improve the user experience.