
[release] Add GLM5 known issue for 2-node PD mixed deployment#7436

Merged
MengqingCao merged 2 commits into vllm-project:main from MrZ20:release_update
Mar 18, 2026

Conversation

@MrZ20 (Contributor) commented Mar 18, 2026

What this PR does / why we need it?

Documented a known issue in the 2-node PD mixed deployment scenario where GLM5 inference may hang when concurrency exceeds 8.

Noted that the issue has been fixed in PRs #7235 and #7290.

Does this PR introduce any user-facing change?

How was this patch tested?
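Not part of this PR, but for context: the reported hang could in principle be probed with a small concurrency harness against the OpenAI-compatible endpoint. This is a minimal sketch under assumptions; `API_URL` and the model name `glm5` are placeholders for your own deployment, and the request shape follows the standard `/v1/completions` payload.

```python
import concurrent.futures
import json
import urllib.request

# Placeholders: point these at your own 2-node PD mixed deployment.
API_URL = "http://127.0.0.1:8000/v1/completions"
MODEL = "glm5"

def send_request(prompt: str, timeout: float = 120.0) -> str:
    """POST one completion request; a hang surfaces as a timeout error."""
    payload = json.dumps(
        {"model": MODEL, "prompt": prompt, "max_tokens": 32}
    ).encode()
    req = urllib.request.Request(
        API_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode()

def probe_concurrency(n: int, send=send_request) -> list:
    """Submit n requests at once and collect results in submission order.

    The reported issue appears when n > 8, so e.g. probe_concurrency(9)
    would be expected to time out on an affected build.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(send, f"prompt {i}") for i in range(n)]
        return [f.result() for f in futures]
```

The `send` parameter is injectable so the harness itself can be exercised without a live server.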

Signed-off-by: MrZ20 <2609716663@qq.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates the release notes to include a critical known issue related to GLM5's behavior in a specific deployment configuration. The update provides clarity on a past problem where inference could hang under certain conditions, while also reassuring users by referencing the pull requests where the fix was implemented.



Comment thread on docs/source/user_guide/release_notes.md (outdated)
Signed-off-by: Mengqing Cao <cmq0113@163.com>
@MengqingCao MengqingCao enabled auto-merge (squash) March 18, 2026 10:02
@MengqingCao MengqingCao merged commit adc57c5 into vllm-project:main Mar 18, 2026
18 checks passed
@gemini-code-assist (bot) left a comment

Code Review

This pull request successfully adds a known issue regarding GLM5's 2-node PD mixed deployment to the v0.17.0rc1 release notes. The documentation clearly states the issue and references the PRs where the fix was implemented. However, the pull request title and summary do not fully adhere to the repository's specified style guide (lines 12-88). Please update them to match the required format for better clarity and consistency across pull requests.

Suggested PR Title:

[release][Doc][BugFix] Document GLM5 2-node PD mixed deployment hang issue

Suggested PR Summary:

### What this PR does / why we need it?
This PR documents a known issue in the v0.17.0rc1 release notes regarding GLM5 in a 2-node PD mixed deployment scenario. Specifically, inference may hang when concurrency exceeds 8. This documentation is necessary to inform users about this specific behavior. The issue has been fixed in PRs #7235 and #7290.

Fixes #

### Does this PR introduce _any_ user-facing change?
No, this PR only updates the release notes documentation.

### How was this patch tested?
This is a documentation-only change, so no specific code testing was performed. The change was reviewed for accuracy and adherence to documentation standards.

@github-actions github-actions bot added the documentation Improvements or additions to documentation label Mar 18, 2026
@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@MrZ20 MrZ20 deleted the release_update branch March 18, 2026 11:21
starmountain1997 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Mar 25, 2026
[release] Add GLM5 known issue for 2-node PD mixed deployment (vllm-project#7436)

### What this PR does / why we need it?
Documented a known issue in the 2-node PD mixed deployment scenario where GLM5 inference may hang when concurrency exceeds 8.

Noted that the issue has been fixed in PRs:
- vllm-project#7235
- vllm-project#7290
---------
Signed-off-by: MrZ20 <2609716663@qq.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
lihaokun-2026 pushed a commit to lihaokun-2026/vllm-ascend that referenced this pull request Mar 29, 2026
chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Apr 1, 2026

Labels

documentation Improvements or additions to documentation
