fix: print output offline_inference/basic/chat.py example #25744
vllm-bot merged 1 commit into vllm-project:main from
Conversation
Signed-off-by: Iceber Gu <caiwei95@hotmail.com>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a small subset of tests runs automatically. You can ask your reviewers to trigger select CI tests on top of that subset. Once the PR is approved and ready to go, your PR reviewer(s) can run the full CI to test the changes comprehensively before merging. If you have any questions, please reach out to us on Slack at https://slack.vllm.ai. 🚀
Code Review
This pull request fixes a bug in the examples/offline_inference/basic/chat.py example script where the output was not being printed when a custom chat template was used. The change adds the missing print_outputs(outputs) call, which correctly resolves the issue. The fix is simple, direct, and effectively addresses the problem described.
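To illustrate the shape of the fix, here is a minimal, self-contained sketch of the pattern. The class and function names below are stand-ins for illustration (the real example script calls `vllm.LLM.chat()` and defines its own `print_outputs` helper); the point is that the custom-chat-template branch must also call `print_outputs(outputs)` after generating, which is exactly the line this PR adds.

```python
# Illustrative sketch only: FakeOutput stands in for a vLLM RequestOutput
# so the control flow can be shown without loading a model.

class FakeOutput:
    """Stand-in for a vLLM RequestOutput (hypothetical, for illustration)."""
    def __init__(self, prompt: str, text: str):
        self.prompt = prompt
        self.outputs = [type("Completion", (), {"text": text})()]

def print_outputs(outputs):
    """Print each prompt and its generated text, as the example script does."""
    for output in outputs:
        print(f"Prompt: {output.prompt!r}")
        print(f"Generated text: {output.outputs[0].text!r}")
        print("-" * 60)

def chat_with_custom_template(conversation):
    # In the real script this is: outputs = llm.chat(conversation,
    # sampling_params, chat_template=chat_template).
    outputs = [FakeOutput(conversation, "Hi there!")]
    # The bug: this call was missing in the custom-template branch, so
    # nothing was printed. The fix adds it.
    print_outputs(outputs)
    return outputs

chat_with_custom_template("Hello")
```

Running the sketch prints the prompt/response pair, mirroring the behavior the other branches of `chat.py` already had.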
…ct#25744) Signed-off-by: Iceber Gu <caiwei95@hotmail.com>
Signed-off-by: Iceber Gu <caiwei95@hotmail.com> Signed-off-by: yewentao256 <zhyanwentao@126.com>
Purpose

When the chat-template-path parameter is used, the related output is not printed.
Test Plan
Test Result
Essential Elements of an Effective PR Description Checklist
Update supported_models.md and examples for a new model.