
openai, update to typespec-java 0.11.1 #37663

Merged
weidongxu-microsoft merged 6 commits into Azure:main from weidongxu-microsoft:openai_update-client.tsp on Nov 15, 2023

Conversation

Member

@weidongxu-microsoft weidongxu-microsoft commented Nov 14, 2023

Target spec PR Azure/azure-rest-api-specs#26699

Follow up of #37629

Live test pass on testChatFunctionAutoPreset.

Description

Please add an informative description that covers the changes made by the pull request and links all relevant issues.

If an SDK is being regenerated based on a new swagger spec, a link to the pull request containing these swagger spec changes has been included above.

All SDK Contribution checklist:

  • The pull request does not introduce breaking changes.
  • CHANGELOG is updated for new features, bug fixes or other significant changes.
  • I have read the contribution guidelines.

General Guidelines and Best Practices

  • Title of the pull request is clear and informative.
  • There are a small number of commits, each of which has an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page.

Testing Guidelines

  • Pull request includes test coverage for the included changes.

@azure-sdk
Collaborator

API change check

API changes are not detected in this pull request.

Member

@jpalvarezl jpalvarezl left a comment


Thank you @weidongxu-microsoft for going through the effort of adjusting our customizations to be able to take the autorest update. It looks good to me, but I would like @mssfang to have a look at this too, and I will defer the approval to him.

RequestOptions requestOptions = new RequestOptions();
return serviceClient.beginBeginAzureBatchImageGenerationWithModelAsync(
BinaryData.fromObject(imageGenerationOptions), requestOptions);
}
Member

There are a couple of new methods here that weren't generated before. Is it normal that an update in autorest should result in more methods?

  • Mono<ChatCompletions> getChatCompletionsWithAzureExtensions(String deploymentOrModelName, ChatCompletionsOptions chatCompletionsOptions)
  • PollerFlux<BatchImageGenerationOperationResponse, BatchImageGenerationOperationResponse> beginBeginAzureBatchImageGeneration(ImageGenerationOptions imageGenerationOptions)

Member

Both methods are private anyway, but it would be nice to understand where this is coming from.

Contributor

Yeah. Since they are not public APIs, it doesn't block us. We need to replace the LRO ImageGeneration APIs with the non-LRO/blocking ImageGeneration APIs. Let's clean this up later if needed.

Member Author

@weidongxu-microsoft weidongxu-microsoft Nov 15, 2023

They are the methods marked as @@access(, internal) in client.tsp, so they are generated as "package private". I.e.:

@@access(Azure.OpenAI.beginAzureBatchImageGeneration, Access.internal);
@@access(Azure.OpenAI.getChatCompletionsWithAzureExtensions, Access.internal);

I'd actually like to add @@convenientApi(false) to them (so these added ones won't appear). But sadly the @convenientApi decorator does not have a scope parameter, so I cannot do it for Java alone.
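For illustration only, the unscoped augment being described would presumably look like the sketch below (decorator spelling taken from the comment above; since there is no scope argument, it would suppress the convenience methods for every language, not just Java):

```
@@convenientApi(Azure.OpenAI.beginAzureBatchImageGeneration, false);
@@convenientApi(Azure.OpenAI.getChatCompletionsWithAzureExtensions, false);
```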

// Generated convenience method for beginBeginAzureBatchImageGenerationWithModel
RequestOptions requestOptions = new RequestOptions();
return serviceClient.beginBeginAzureBatchImageGenerationWithModel(BinaryData.fromObject(imageGenerationOptions),
requestOptions);
Member

Same as the comment for the async version of these methods. Just leaving a comment for visibility's sake.

@@ -1290,7 +1056,7 @@ public Response<BinaryData> getCompletionsWithResponse(
* parameters: Object (Optional)
* }
* ]
* function_call: FunctionCallModelBase (Optional)
* function_call: BinaryData (Optional)
Member

I think this will have ramifications on our code customizations. We should pay attention to what happens at ChatCompletionsOption.setFunctionCallInternal and ChatCompletionsOption.setFunctionCall. The latter should still receive the custom type FunctionCallConfig. Currently we pass a subtype of FunctionCallModelBase (either preset or name); we should adjust the logic to operate on the BinaryData directly. Wdyt, @mssfang?

Contributor

@mssfang mssfang Nov 14, 2023

Yeah. We can adjust our implementation logic regarding these changes internally. As it doesn't affect the public surface APIs now, we are fine to apply these changes. We can add a task to the 2023-12-01 milestone to address this concern.
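As a rough illustration of the kind of adjustment being discussed, here is a plain-Java sketch: the FunctionCallConfig shape, the preset names, and the JSON forms are all assumptions, and a String stands in for azure-core's BinaryData.

```java
import java.util.Set;

// Hypothetical stand-in for the SDK's custom FunctionCallConfig type.
final class FunctionCallConfig {
    private final String name;
    FunctionCallConfig(String name) { this.name = name; }
    String getName() { return name; }
}

final class ChatCompletionsOptionsSketch {
    // Presets such as "auto"/"none" serialize differently from a function name.
    private static final Set<String> PRESETS = Set.of("auto", "none");
    // Stands in for the BinaryData field holding the serialized union.
    private String functionCallJson;

    ChatCompletionsOptionsSketch setFunctionCall(FunctionCallConfig config) {
        if (PRESETS.contains(config.getName())) {
            // A preset serializes as a bare JSON string, e.g. "auto".
            functionCallJson = "\"" + config.getName() + "\"";
        } else {
            // A function name serializes as an object, e.g. {"name":"getWeather"}.
            functionCallJson = "{\"name\":\"" + config.getName() + "\"}";
        }
        return this;
    }

    String getFunctionCallJson() { return functionCallJson; }
}
```

The point is only that the public setter keeps accepting FunctionCallConfig while the stored representation becomes the serialized union payload, which is what operating on BinaryData directly would look like.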

Member Author

@weidongxu-microsoft weidongxu-microsoft Nov 15, 2023

Yeah, we will make it easier to spot the Union in generated code.

The plan for the future is:

  1. in the protocol API javadoc, we should add more info, e.g. "FunctionName | FunctionPreset"
  2. in the ChatCompletionsOptions model, we should add a comment on the property and its setter/getter, e.g. "Union of FunctionName | FunctionPreset, advising developer customization"

* <pre>{@code
* {
* id: String (Required)
* created: long (Required)
* expires: Long (Optional)
* result (Optional): {
* created: long (Required)
* data: DataModelBase (Required)
* data: BinaryData (Required)
Member

Another union type that we will need to adjust. Just leaving a comment for our visibility and to coordinate the work around migrating this appropriately.

* <pre>{@code
* {
* id: String (Required)
* created: long (Required)
* expires: Long (Optional)
* result (Optional): {
* created: long (Required)
* data: DataModelBase (Required)
* data: BinaryData (Required)
Member

Bookmarking union usage. @mssfang 😁

.setContext(requestOptions != null && requestOptions.getContext() != null ? requestOptions.getContext()
: Context.NONE)
.setServiceVersion(this.getServiceVersion().getVersion())),
TypeReference.createInstance(BinaryData.class), TypeReference.createInstance(BinaryData.class));
Member

Just like in the async and sync clients, we have the appearance of 2 new methods. We probably don't need to replicate this in the non-Azure OpenAI case, as the async/sync client methods are private. Let's sync on what we need to do with these @mssfang

Member

@jpalvarezl jpalvarezl Nov 14, 2023

see this comment

Contributor

This is inside the /implementation folder, which doesn't affect the public API. We can leave it as it is and do a clean-up when the blocking ImageGeneration is introduced.

.setServiceVersion(this.getServiceVersion().getVersion())),
TypeReference.createInstance(BinaryData.class),
TypeReference.createInstance(BinaryData.class));
public PollerFlux<BatchImageGenerationOperationResponse, BatchImageGenerationOperationResponse>
Member

The return type of the PollerFlux changed from <BinaryData, BinaryData> to <BatchImageGenerationOperationResponse, BatchImageGenerationOperationResponse>. We don't generally have types other than BinaryData as return types in the HTTP client implementation. Is this an expected change in the emitter?

Member Author

@weidongxu-microsoft weidongxu-microsoft Nov 15, 2023

This seems to be a diff tool artifact.

The beginBeginAzureBatchImageGenerationAsync method is still here:

@ServiceMethod(returns = ReturnType.LONG_RUNNING_OPERATION)
public PollerFlux<BinaryData, BinaryData>
beginBeginAzureBatchImageGenerationAsync(BinaryData imageGenerationOptions, RequestOptions requestOptions) {
return PollerFlux.create(Duration.ofSeconds(1),
() -> this.beginAzureBatchImageGenerationWithResponseAsync(imageGenerationOptions, requestOptions),
new DefaultPollingStrategy<>(new PollingStrategyOptions(this.getHttpPipeline())
.setEndpoint("{endpoint}/openai".replace("{endpoint}", this.getEndpoint()))
.setContext(requestOptions != null && requestOptions.getContext() != null ? requestOptions.getContext()
: Context.NONE)
.setServiceVersion(this.getServiceVersion().getVersion())),
TypeReference.createInstance(BinaryData.class), TypeReference.createInstance(BinaryData.class));
}

This one is beginBeginAzureBatchImageGenerationWithModelAsync (note "WithModel"). The reason is that it is very hard to convert a SyncPoller<BinaryData, BinaryData> into a poller over models, SyncPoller<BatchImageGenerationOperationResponse, BatchImageGenerationOperationResponse>. Hence codegen had to generate another LRO method with the "WithModel" suffix (this name only exists in the Impl).
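A plain-Java analogy (not the actual azure-core types; the Poller class, names, and signatures below are invented for illustration) of why the decoder is baked in at creation time, forcing a separate "WithModel" factory rather than a conversion of the existing BinaryData poller:

```java
import java.util.function.Function;
import java.util.function.Supplier;

// Minimal stand-in for a long-running-operation poller: the decoder for the
// final result is fixed when the poller is created.
final class Poller<T> {
    private final Supplier<String> rawPoll;     // fetches the wire payload
    private final Function<String, T> decoder;  // chosen at creation time

    Poller(Supplier<String> rawPoll, Function<String, T> decoder) {
        this.rawPoll = rawPoll;
        this.decoder = decoder;
    }

    T getFinalResult() {
        return decoder.apply(rawPoll.get());
    }
}

final class ImplSketch {
    // Protocol-style method: results stay as raw payloads.
    static Poller<String> beginOperation(Supplier<String> rawPoll) {
        return new Poller<>(rawPoll, raw -> raw);
    }

    // "WithModel"-style method: builds a second poller with the typed decoder,
    // because the raw poller above cannot be re-typed after construction.
    static <T> Poller<T> beginOperationWithModel(Supplier<String> rawPoll,
                                                 Function<String, T> decode) {
        return new Poller<>(rawPoll, decode);
    }
}
```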

.setServiceVersion(this.getServiceVersion().getVersion())),
TypeReference.createInstance(BinaryData.class),
TypeReference.createInstance(BinaryData.class));
public SyncPoller<BatchImageGenerationOperationResponse, BatchImageGenerationOperationResponse>
Member

Same here as above. The types for the SyncPoller changed from <BinaryData, BinaryData> to <BatchImageGenerationOperationResponse, BatchImageGenerationOperationResponse>

this.functionCall = BinaryData.fromObject(FunctionCallPreset.fromString(this.functionCallConfig.getName()));
} else {
this.functionCall = new FunctionNameFunctionCallModel(new FunctionName(this.functionCallConfig.getName()));
this.functionCall = BinaryData.fromObject(new FunctionName(this.functionCallConfig.getName()));
Member

Oh, I see, you have already adjusted the customization to account for the types used in the union.

additionalDirectories:
- specification/cognitiveservices/OpenAI.Authoring
commit: 817861452040bf29d14b57ac7418560e4680e06e
commit: 0d48109775c587aff0ce83873daa016713f2ff98
Member

I guess we need to merge the spec PR into main first, so we can update this hash to one from main before proceeding with this PR.

Member Author

Yeah, the spec PR would go first.

I've answered most of the questions (they are good!). Same as you, I regret that "getChatCompletionsWithAzureExtensions" and "beginBeginAzureBatchImageGeneration" got added. But though we can mark them as internal in client.tsp, we cannot mark them as convenientApi(false) for Java only.

Contributor

mssfang commented Nov 14, 2023

This PR doesn't have public API changes (in particular, no breaking changes) and passes all tests in CI. I can address the internal implementation changes after this PR is merged. Approving this PR.

@weidongxu-microsoft weidongxu-microsoft merged commit 0cb00d4 into Azure:main Nov 15, 2023
@weidongxu-microsoft weidongxu-microsoft deleted the openai_update-client.tsp branch November 15, 2023 05:47
4 participants