[OpenAI] Added sample and updated READMEs #36806
Conversation
mssfang commented on Sep 18, 2023
- Added Samples
- Updated READMEs
sdk/openai/azure-ai-openai/README.md (outdated)
> For a complete sample example, see sample [Image Generation][sample_image_generation].
>
> ### Audio Transcription
> The OpenAI service starts supporting `audio transcription` since model `Whisper`.
I would word this a little bit differently. How about "with the introduction of Whisper models"?
>     byte[] file = BinaryData.fromFile(filePath).toBytes();
>     AudioTranscriptionOptions transcriptionOptions = new AudioTranscriptionOptions(file)
>         .setResponseFormat(AudioTranscriptionFormat.JSON);
I was wondering if we should actually have a convenience method encapsulating this bit of logic so that users don't have to repeat the boilerplate each time. WDYT? I think it is good to have the documentation as is, though. Files might not be the only source of byte[] that a user may want to use.
Yes. We should have a public API that takes byte[] data, which would be convenient for customers.
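For context, a minimal end-to-end sketch of how the README snippet above is used today with the synchronous client. The endpoint/key environment variables, the deployment placeholder, and the sample file name are illustrative assumptions, not part of this PR:

```java
import com.azure.ai.openai.OpenAIClient;
import com.azure.ai.openai.OpenAIClientBuilder;
import com.azure.ai.openai.models.AudioTranscription;
import com.azure.ai.openai.models.AudioTranscriptionFormat;
import com.azure.ai.openai.models.AudioTranscriptionOptions;
import com.azure.core.credential.AzureKeyCredential;
import com.azure.core.util.BinaryData;

import java.nio.file.Path;

public class AudioTranscriptionSketch {
    public static void main(String[] args) {
        // Endpoint, key, and deployment name are placeholders; adjust to your resource.
        OpenAIClient client = new OpenAIClientBuilder()
            .endpoint(System.getenv("AZURE_OPENAI_ENDPOINT"))
            .credential(new AzureKeyCredential(System.getenv("AZURE_OPENAI_KEY")))
            .buildClient();

        // Today the caller turns the audio file into a byte[] themselves; the thread above
        // suggests a convenience overload so this boilerplate is not repeated.
        String fileName = "batman.wav"; // hypothetical sample file
        byte[] file = BinaryData.fromFile(Path.of(fileName)).toBytes();
        AudioTranscriptionOptions transcriptionOptions = new AudioTranscriptionOptions(file)
            .setResponseFormat(AudioTranscriptionFormat.JSON);

        AudioTranscription transcription =
            client.getAudioTranscription("{deploymentOrModelName}", fileName, transcriptionOptions);
        System.out.println("Transcription: " + transcription.getText());
    }
}
```

A convenience overload that reads the file internally would collapse the `BinaryData.fromFile(...).toBytes()` step, which is the boilerplate the comments above are pointing at.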
jpalvarezl left a comment
I have a couple of suggestions that are not blocking, but I think they could improve this PR. Thanks for doing this!
* Early code generation from topic branch for whisper
* Added simplest test
* Regened with correct paths
* Fixed name of method in the test
* Added test file for translations
* [OpenAI] BYO Multipart form request support (#36621)
* Still error, but almost hack for multipart data request
* Replaced whitespace with line breaks as necessary
* Somehow still failing. A bit out of ideas
* Changed the encoding to ASCII
* Using CRLF instead
* Test pass. Renamed variables to be more selfexplanatory
* Code regen and adjustments to new methods
* Using latest commit
* plain text works
* Code gen works
* Code regen with looser types, no hooks for content-type nor length
* Migrated multiform implementation over from the strongly typed branch
* Added headers
* Added classes
* reran code gen
* Compiles with modded tsp defintion, including content-type
* Corrected wrong value passed for content-length
* It works!
* Removed pattern instanceof for older compatibility version
* Refactored the MultipartHelper to be testable
* Added test definition for MultipartDataHelper class
* Added happy path test and model to the list to be serialized
* Added tests for the MultipartDataHelper class
* Refactored audio translation tests to use testRunners
* Added tests for miused formats
* Added more negative tests for wrong formats
* Renamed tests
* Finished Azure OAI sync test suite
* Added support for nonAzure translations
* Added Async translation methods
* Added tests and async functionality for translations
* Async translation tests for non-Azure
* Extracted audioTranscription assertion statements to method
* Added sync transcription functionality and AOAI tests
* Added license to source files
* Added todo markers where docs are missing
* Added async implementation and minimal testing for transcription
* Added tests for nonAzure OAI
* Code regen
* Corrected content type for bodyParam nonAzure
* Added remaing transcription tests for AOAI sync case
* Added tests for async AOAI
* Added transcription tests for nonAzure OAI sync API
* Added tests for nonAzure OAI async API
* Commited whisper session-record changes
* Inlined methods
* Added documentation to sync/async client for translation and transcription methods
* Added documentation to multipart helper classes
* Replaced start imports with single class imports
* Simplified tests and added logger to async client
* Added missing asset
* Added recordings for nonAzure tests
* Style checks
* Style check
* Style check
* Style check done
* Changelog update and static bug analysis issues addressed
* Last 2 replacement of monoError
* [OpenAI] Added sample and updated READMEs (#36806)
* suppression spotbugs for allowing external mutation on the bytep[ (#36826)
* fixed unknown cspell error, 'mpga'
* fixed sample broken links
* regenerated, no changes but only indents alignment
* Hardcoded boundary value for multipart requests
* Updated test records for nonAzure
* Most test passing with latest service version
* Rolled back test records for regressed tests
* Removed unused import

---------

Co-authored-by: Shawn Fang <[email protected]>
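Several commits above deal with assembling the multipart/form-data request body by hand (CRLF separators, a hardcoded boundary value). The sketch below only illustrates those mechanics under stated assumptions; it is not the SDK's actual MultipartDataHelper, and the boundary string, field name, and file name are hypothetical:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public final class MultipartSketch {
    // multipart/form-data requires CRLF between boundary lines, part headers, and content.
    private static final String CRLF = "\r\n";

    static byte[] filePart(String boundary, String fieldName, String fileName, byte[] fileBytes)
            throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        String partHeader = "--" + boundary + CRLF
            + "Content-Disposition: form-data; name=\"" + fieldName + "\"; filename=\"" + fileName + "\"" + CRLF
            + "Content-Type: application/octet-stream" + CRLF
            + CRLF;
        out.write(partHeader.getBytes(StandardCharsets.US_ASCII));
        out.write(fileBytes);
        // The closing boundary ("--<boundary>--") terminates the body.
        out.write((CRLF + "--" + boundary + "--" + CRLF).getBytes(StandardCharsets.US_ASCII));
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        String boundary = "----JavaSdkBoundary1234"; // fixed boundary, as the commits describe
        byte[] audio = Files.readAllBytes(Path.of("batman.wav")); // hypothetical audio file
        byte[] body = filePart(boundary, "file", "batman.wav", audio);
        // The request would then carry: Content-Type: multipart/form-data; boundary=<boundary>
        System.out.println("Body length: " + body.length);
    }
}
```

Hardcoding the boundary (rather than generating one per request) makes the request bodies deterministic, which matches the test-recording work mentioned in the commit list.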