
[Log] Reduce duplicate log#37313

Merged
MatthewBonanni merged 1 commit into main from wentao-reduce-duplicate-log
Mar 18, 2026

Conversation

Member

@yewentao256 yewentao256 commented Mar 17, 2026

Purpose

Deduplicate log messages that every rank emits identically, so a message meant to appear once is no longer repeated per rank. For example:

INFO 03-17 15:12:19 [dp_utils.py:30] Using CPU all reduce to synchronize DP padding between ranks.
INFO 03-17 15:12:19 [dp_utils.py:30] Using CPU all reduce to synchronize DP padding between ranks.
INFO 03-17 15:12:19 [dp_utils.py:30] Using CPU all reduce to synchronize DP padding between ranks.
INFO 03-17 15:12:19 [dp_utils.py:30] Using CPU all reduce to synchronize DP padding between ranks.

->

INFO 03-17 15:12:19 [dp_utils.py:30] Using CPU all reduce to synchronize DP padding between ranks.

Signed-off-by: yewentao256 <zhyanwentao@126.com>
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses an issue of duplicate log messages, particularly in distributed setups. The changes systematically replace standard logging calls (logger.info, logger.debug, logger.warning) with their *_once counterparts and add the scope="local" parameter to existing *_once calls. This ensures that log messages intended to be displayed only once per worker do not get duplicated across multiple workers or ranks, significantly improving log clarity. The modifications are applied consistently across various components of the vLLM codebase, including compilation, scheduling, and model execution layers. The implementation is correct and effectively resolves the described problem.

@yewentao256 yewentao256 added the ready ONLY add when PR is ready to merge/full CI is needed label Mar 17, 2026
@mergify mergify bot added qwen Related to Qwen models nvidia v1 labels Mar 17, 2026
Collaborator

@MatthewBonanni MatthewBonanni left a comment


LGTM

@github-project-automation github-project-automation bot moved this to Ready in NVIDIA Mar 18, 2026
@MatthewBonanni MatthewBonanni merged commit c373b5c into main Mar 18, 2026
79 checks passed
@MatthewBonanni MatthewBonanni deleted the wentao-reduce-duplicate-log branch March 18, 2026 14:57
@github-project-automation github-project-automation bot moved this from Ready to Done in NVIDIA Mar 18, 2026
fxdawnn pushed a commit to fxdawnn/vllm that referenced this pull request Mar 19, 2026
Signed-off-by: yewentao256 <zhyanwentao@126.com>
SouthWest7 pushed a commit to SouthWest7/vllm that referenced this pull request Mar 27, 2026
Signed-off-by: yewentao256 <zhyanwentao@126.com>
khairulkabir1661 pushed a commit to khairulkabir1661/vllm that referenced this pull request Mar 27, 2026
Signed-off-by: yewentao256 <zhyanwentao@126.com>
Monishver11 pushed a commit to Monishver11/vllm that referenced this pull request Mar 27, 2026
Signed-off-by: yewentao256 <zhyanwentao@126.com>
Signed-off-by: Monishver Chandrasekaran <monishverchandrasekaran@gmail.com>
JiantaoXu pushed a commit to JiantaoXu/vllm that referenced this pull request Mar 28, 2026
Signed-off-by: yewentao256 <zhyanwentao@126.com>
vrdn-23 pushed a commit to vrdn-23/vllm that referenced this pull request Mar 30, 2026
Signed-off-by: yewentao256 <zhyanwentao@126.com>
Signed-off-by: Vinay Damodaran <vrdn@hey.com>
EricccYang pushed a commit to EricccYang/vllm that referenced this pull request Apr 1, 2026
Signed-off-by: yewentao256 <zhyanwentao@126.com>
Signed-off-by: EricccYang <yangyang4991@gmail.com>

Labels

nvidia
qwen (Related to Qwen models)
ready (ONLY add when PR is ready to merge/full CI is needed)
v1

Projects

Status: Done


2 participants