
[Frontend] Avoid startup error log for models without chat template#37040

Merged
njhill merged 1 commit into vllm-project:main from DarkLight1337:fix-chat-log
Mar 14, 2026

Conversation

@DarkLight1337 (Member) commented Mar 14, 2026

Purpose

During chat template warmup, models without a valid chat template raise ChatTemplateResolutionError. This was logged at ERROR level with a full stack trace, which confused users into thinking something had gone wrong. This PR catches the specific error and downgrades the log to INFO level.
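The fix follows a common pattern: catch the one expected exception type and log a quiet informational line instead of letting it surface as an error with a traceback. A minimal, self-contained sketch of that pattern (the function names and messages below are hypothetical stand-ins, not the actual vLLM call sites):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("chat_warmup")


class ChatTemplateResolutionError(Exception):
    """Raised when a model has no usable chat template."""


def resolve_chat_template(model_name: str) -> str:
    # Stand-in for the real resolution logic; assume this model has no template.
    raise ChatTemplateResolutionError(f"{model_name} has no chat template")


def warmup(model_name: str) -> bool:
    try:
        resolve_chat_template(model_name)
    except ChatTemplateResolutionError as e:
        # Downgraded: a single INFO line instead of an ERROR with a stack trace.
        logger.info("Skipping chat template warmup: %s", e)
        return False
    return True


print(warmup("my-embedding-model"))  # -> False
```

Because only ChatTemplateResolutionError is caught, genuinely unexpected failures during warmup still propagate and get logged loudly.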

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as pasting a before/after comparison or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
@DarkLight1337 DarkLight1337 requested a review from noooop March 14, 2026 04:28
@DarkLight1337 DarkLight1337 requested a review from njhill as a code owner March 14, 2026 04:28
@DarkLight1337 DarkLight1337 added the ready ONLY add when PR is ready to merge/full CI is needed label Mar 14, 2026
@gemini-code-assist gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request correctly handles ChatTemplateResolutionError during chat template warmup by catching the specific exception and logging an informational message instead of an error with a stack trace. This improves the user experience by avoiding confusing error logs for models that intentionally lack a chat template. My review includes one suggestion regarding a local import, which points to a potential circular dependency that could be addressed to improve long-term code maintainability.

Diff context:

    For multi-modal requests:
    - Importing libraries such as librosa triggers JIT compilation.
    """
    from vllm.entrypoints.chat_utils import ChatTemplateResolutionError
Severity: high

This local import suggests a potential circular dependency between vllm.renderers and vllm.entrypoints. While this works, circular dependencies can make the code harder to maintain and refactor. A better long-term solution is to break the cycle, for example, by moving ChatTemplateResolutionError to a common exceptions file. If refactoring is out of scope for this PR, adding a comment to explain the necessity of this local import would be helpful for future maintainability.

Suggested change:

    -from vllm.entrypoints.chat_utils import ChatTemplateResolutionError
    +# Local import to avoid a circular dependency.
    +from vllm.entrypoints.chat_utils import ChatTemplateResolutionError
References
  1. According to PEP 8, imports should usually be at the top of the file. Local imports are acceptable to resolve circular dependencies, but this often points to a need for refactoring. Adding a comment clarifies the reason for deviating from the standard. (link)
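The deferred-import pattern the bot describes can be demonstrated with a self-contained sketch. The module names pkg_a and pkg_b here are hypothetical stand-ins for the real cycle between vllm.renderers and vllm.entrypoints; the demo writes two small modules to a temp directory so the example runs as a single script:

```python
import os
import sys
import tempfile
import textwrap

tmp = tempfile.mkdtemp()

# pkg_a imports pkg_b at module level; pkg_b needs a name from pkg_a,
# but defers that import into a function body to break the cycle.
with open(os.path.join(tmp, "pkg_a.py"), "w") as f:
    f.write(textwrap.dedent("""
        import pkg_b  # safe: pkg_b has no top-level import of pkg_a

        class ChatTemplateResolutionError(Exception):
            pass

        def warmup():
            return pkg_b.render()
    """))

with open(os.path.join(tmp, "pkg_b.py"), "w") as f:
    f.write(textwrap.dedent("""
        def render():
            # Local import to avoid a circular dependency with pkg_a.
            from pkg_a import ChatTemplateResolutionError
            return ChatTemplateResolutionError.__name__
    """))

sys.path.insert(0, tmp)
import pkg_a

print(pkg_a.warmup())  # -> ChatTemplateResolutionError
```

If pkg_b instead imported pkg_a at the top of the file, importing pkg_a first would fail with an ImportError about a partially initialized module; that is exactly the failure mode the local import avoids.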

@njhill njhill (Member) left a comment

Thanks @DarkLight1337! Would be nice to make the other suggested change at the same time. It's TMI IMO otherwise

Diff context:

    from vllm.entrypoints.chat_utils import ChatTemplateResolutionError

    try:
        logger.info("Warming up chat template processing...")
njhill (Member):
I think we should also change this one to debug.

@njhill njhill merged commit 5467d13 into vllm-project:main Mar 14, 2026
43 checks passed
njhill (Member) commented Mar 14, 2026:

OK I just merged because it was already green, can open another PR for the other thing...

njhill (Member) commented Mar 14, 2026:

@DarkLight1337 I opened #37062 for this

@DarkLight1337 DarkLight1337 deleted the fix-chat-log branch March 15, 2026 03:03
athrael-soju pushed a commit to athrael-soju/vllm that referenced this pull request Mar 15, 2026
athrael-soju pushed a commit to athrael-soju/vllm that referenced this pull request Mar 16, 2026
Lucaskabela pushed a commit to Lucaskabela/vllm that referenced this pull request Mar 17, 2026
wendyliu235 pushed a commit to wendyliu235/vllm-public that referenced this pull request Mar 18, 2026
fxdawnn pushed a commit to fxdawnn/vllm that referenced this pull request Mar 19, 2026