[Frontend] Add /v1/audio/translations OpenAI API endpoint #19615
DarkLight1337 merged 18 commits into vllm-project:main
Conversation
Summary of Changes
Hello @NickLucche, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request implements the OpenAI-compatible /v1/audio/translations API endpoint. It builds on previous work and includes a significant refactor to create a shared base class for audio processing, making the transcription and translation code more maintainable. Documentation and a client example are also included.
Highlights
- New Translation API Endpoint: I've added the `/v1/audio/translations` endpoint to the OpenAI-compatible API server, enabling audio translation using supported models like Whisper.
- Code Refactoring for Audio Handling: I've introduced a new base class, `OpenAISpeechToText`, to consolidate common audio processing and streaming logic, reducing duplication between the transcription and translation handlers.
- Documentation and Examples: I've updated the documentation to include the new translation API endpoint and added a dedicated example client script to demonstrate its usage.
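For readers skimming the diff, the shape of that refactor can be sketched roughly as follows. This is illustrative only; the method names and request layout here are assumptions, not vLLM's exact code.

```python
# Illustrative sketch of the refactor described above -- NOT vLLM's actual
# implementation. A shared speech-to-text base owns the common audio
# preprocessing, while the transcription and translation handlers differ
# mainly in the task they ask the model to perform.

class OpenAISpeechToText:
    """Common audio handling shared by both endpoints (hypothetical)."""
    task: str  # set by subclasses: "transcribe" or "translate"

    def build_request(self, audio: bytes) -> dict:
        # Common work (decode/chunk audio, assemble the model request)
        # lives once here; only the task differs per endpoint.
        return {"audio": audio, "task": self.task}

class ServingTranscription(OpenAISpeechToText):
    task = "transcribe"

class ServingTranslation(OpenAISpeechToText):
    task = "translate"
```

The point is simply that everything except `task` lives once in the base class, which is what reduces the duplication between the two handlers.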
Code Review
This PR adds the /v1/audio/translations OpenAI API endpoint. The changes include documentation updates, a new example client, and modifications to the API server and protocol definitions. The refactoring of the serving logic for speech-to-text tasks into a new base class OpenAISpeechToText is a good improvement. Ensure documentation accurately reflects supported sampling parameters and verify the model used in the example client.
cc @ywang96 can you help with the S3 upload?
This pull request has merge conflicts that must be resolved before it can be merged.
Force-pushed d757296 to 34b8a92
@DarkLight1337 I've now added test cases along with a new sample to translate (thanks @simon-mo!). I also caught a bug in the chunk splitting: since we were not sending the preamble with every chunk, the first chunk would be translated while the second chunk onward would be transcribed (lol). Something to keep in mind: the repeatability of translations of longer audio appears to be more brittle (i.e. sometimes using different words). Not sure whether the uncertainty is input- or model-dependent. Let me know what you think.
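The chunk-splitting bug mentioned above can be illustrated with a toy sketch. Token strings and helper names here are made up; the real code builds Whisper decoder prompts.

```python
# Toy illustration of the chunking bug (names and tokens are illustrative,
# not vLLM's real prompt-building code). Long audio is split into chunks;
# if the task "preamble" (e.g. Whisper's <|translate|> task token) is only
# sent with the first chunk, later chunks fall back to the default task and
# get transcribed instead of translated.

PREAMBLE = "<|startoftranscript|><|translate|>"  # hypothetical task tokens

def prompts_buggy(chunks):
    # Bug: only the first chunk carries the task preamble.
    return [(PREAMBLE if i == 0 else "") + c for i, c in enumerate(chunks)]

def prompts_fixed(chunks):
    # Fix: every chunk repeats the preamble, so every chunk is translated.
    return [PREAMBLE + c for c in chunks]

chunks = ["<audio-chunk-0>", "<audio-chunk-1>"]
```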
Signed-off-by: NickLucche <nlucches@redhat.com>
Force-pushed ac87daa to 29fbaf8
vllm/commit_id.py (Outdated)
@@ -0,0 +1,3 @@
+ # SPDX-License-Identifier: Apache-2.0
+ # SPDX-FileCopyrightText: Copyright contributors to the vLLM project
+ __commit__ = "933dc175653650d405b1e344822a57dad241c075"
This file seems to be added by mistake
sorry about that..
DarkLight1337 left a comment
LGTM now, sorry for the delay!
# TODO investigate higher model uncertainty for longer translations.
assert out.count("nor will i ever") == 2
@NickLucche @DarkLight1337 @ywang96
We're testing vLLM with PyTorch 2.8, and this assertion gets triggered -- this model generates different text from PyTorch 2.7 to PyTorch 2.8. I see a todo for higher model uncertainty: is this behavior expected?
PyTorch 2.7:
nor will i ever touch the sacred places where my body is made of jacquero, my treasure, which mirrors you in the shadow of the greek sea, from which the virgines are born to come, and faithfully to that island he confuses them with his first smile, he waves his naked and your foreheads, the incline towards him that the water sings of fatal fate, and the different exile, for which, beautiful of fame and disdain, nor will i ever touch i will touch the sacred places where my body makes the water drop, my zacinto, which mirrors you in the wave of the greek sea, from which the virgin water comes, and faithfully to that island it flutters with its first smile. the waves are not a tack, your clean clouds and your fronts, the incline towards him that the water sings of fatal, and the different exile, for which, beautiful of fame and of adventure,
PyTorch 2.8:
nor do i ever touch the sacred places where my body is made of jacquero, my treasure, which mirrors you in the shadow of the greek sea, from which the virgines come into water, and faithfully to that island he confuses them with his first smile, he waves his naked and naked limbs, the incline towards him that the water sings of fatal fate, and the different exile, for which, beautiful of fame and disdain, nor do i ever touch i will touch the sacred places where my body makes the water drop, my zacinto, which mirrors you in the wave of the greek sea, from which the virgin water comes, and faithfully to that island it flutters with its first smile. the waves are not a tack, your clean clouds and your fronts, the incline towards him that the water sings of fatal, and the different exile, for which, beautiful of fame and of adventure
Hey, I think we can safely change the test here.
My conclusion was that (a) translation wasn't a primary task (the latest Whisper `-turbo` model doesn't support it), so the model isn't as resilient, and (b) this particular sample may just be hard: the scores for the second token end up being quite similar.
Either way, I will put up a PR for this, as I've also just witnessed this behavior on Blackwell.
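The "scores end up quite similar" point is enough to explain the PyTorch 2.7 vs 2.8 divergence: with greedy decoding, a numeric difference well within normal cross-backend kernel variation can flip the argmax between two near-tied tokens. A toy illustration (the numbers are made up):

```python
# Toy numeric demo (made-up numbers): two candidate tokens with near-tied
# scores. A perturbation of 3e-5 -- smaller than typical cross-backend
# floating-point kernel variation -- flips the greedy (argmax) choice,
# so the decoded text changes even though nothing is "wrong".

def argmax(xs):
    return max(range(len(xs)), key=lambda i: xs[i])

logits = [2.30000, 2.29999, 0.10]            # token 0 wins by 1e-5
perturbed = [2.30000, 2.29999 + 3e-5, 0.10]  # tiny kernel-level difference

first_choice = argmax(logits)
second_choice = argmax(perturbed)
```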
Carrying @ywang96's work in https://github.com/vllm-project/vllm/pull/15910/files across the finish line.
This PR adds the `/v1/audio/translations` OpenAI API endpoint.

Before this PR is ready to land, I need to add (at least) a sample audio for testing purposes to the S3 assets bucket. @DarkLight1337 may I ask for your help here?
Other than that, usage should be fairly identical to the transcription endpoints, except for `language` being an extra argument, as OpenAI only supports auto-detection of the language.

Work for next PRs: