[BugFix] Fixed a precision issue with one-word answers. #3385
Conversation
Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>
```diff
  similarity = cosine_similarity_text(audio_content.lower(), text_content.lower())
  print(f"similarity is: {similarity}")
- assert similarity > 0.9, "The audio content is not same as the text"
+ assert similarity > 0.8, "The audio content is not same as the text"
```
why relax to 0.8?
After communicating with @yenuo26, and in order to eliminate the influence of Whisper, we relaxed the similarity threshold to 0.8.
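For context, the `cosine_similarity_text` helper used by the test is not shown in the diff; a minimal bag-of-words sketch of such a helper (a hypothetical implementation, the real one in the test suite may tokenize differently) could look like:

```python
import math
from collections import Counter

def cosine_similarity_text(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.split()), Counter(b.split())
    # Dot product over the words both strings share.
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Identical text scores 1.0; disjoint text scores 0.0.
print(cosine_similarity_text("the audio content", "the audio content"))
```

With a metric like this, small Whisper transcription differences (a dropped or substituted word) lower the score, which is the motivation given for relaxing the assertion from 0.9 to 0.8.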
I closed the temperature fix; I think this is a better approach. However, the PR seems a bit unstructured.

Thank you very much for your suggestion. I will add a description of the changes to the PR.
hsliuustc0106
left a comment
BLOCKING:
Scope - This PR mixes the core bugfix with unrelated changes: commented-out Bagel tests, relaxed test thresholds, and retry increases. The key-collision fix looks reasonable, but please address the structural issues flagged by oglok:
- Remove commented Bagel code and test changes not related to this issue
- Explain why the meta key change avoids collision with stage_input_processors pipeline in the PR description
- If the EOS fix is correct, the retry increase from 3 to 10 should not be needed
- Justify the similarity threshold changes from 0.9 to 0.8
Co-authored-by: Canlin Guo <961750412@qq.com> Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
It's already merged, but the suggestions were not addressed.
The third and fourth points concern modifications to the test cases, which were agreed upon with the test-case owner @yenuo26.
These test cases were not related to the purpose of the fix; now another developer has had to write a PR to clean them up. See #3407.
Again, if the fix works, we should not need 10 retries. 3 should be enough.
Thank you very much for your suggestion. However, this test case was originally supposed to be deleted. I simply commented it out and am waiting for @fake0fan to remove it completely. You can check the comments on this PR: #2396.
…#3385)
Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Canlin Guo <961750412@qq.com>


Purpose
Fix #3341
When the thinker finishes, the talker needs an EOS token to mark that the thinker output is complete. Incorrectly using "finished" as the marker key caused the request to fail to append the EOS token, so the talker emitted extra tokens.
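The failure mode described above can be sketched roughly as follows; the key name `thinker_finished`, the function, and its signature are illustrative assumptions for this PR description, not the actual vllm-omni code:

```python
EOS_APPEND_KEY = "thinker_finished"  # hypothetical unique meta key

def maybe_append_eos(token_ids, meta, eos_token_id):
    """Append EOS once the thinker has signalled completion.

    The buggy version keyed off the generic "finished" flag, which
    another stage in the stage_input_processors pipeline also wrote,
    so the EOS append could be skipped or mis-triggered for a request.
    Using a key dedicated to the thinker avoids that collision.
    """
    if meta.get(EOS_APPEND_KEY):
        return token_ids + [eos_token_id]
    return token_ids
```

Without the appended EOS, the talker never sees the end-of-thinker marker and keeps generating, which matches the extra-token symptom reported in #3341.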
Test Plan
pytest -sv tests/e2e/online_serving/test_qwen3_omni_expansion.py::test_one_word_prompt_001 -m "full_model" --run-level "full_model" --count=30

Test Result
Essential Elements of an Effective PR Description Checklist
Update supported_models.md and examples for a new model. Please run mkdocs serve to sync the documentation editions to ./docs.