[release] Add GLM5 known issue for 2-node PD mixed deployment #7436
MengqingCao merged 2 commits into vllm-project:main
Conversation
Signed-off-by: MrZ20 <2609716663@qq.com>
Summary of Changes (Gemini Code Assist): This pull request updates the release notes to document a known issue with GLM5 in a specific deployment configuration. It clarifies a past problem where inference could hang under certain conditions and references the pull requests where the fix was implemented.
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Code Review
This pull request adds a known issue regarding GLM5's 2-node PD mixed deployment to the v0.17.0rc1 release notes. The documentation clearly states the issue and references the PRs where the fix was implemented. However, the pull request title and summary do not fully follow the repository's style guide (lines 12-88). Please update them to match the required format for clarity and consistency across pull requests.
Suggested PR Title:
[release][Doc][BugFix] Document GLM5 2-node PD mixed deployment hang issue

Suggested PR Summary:
### What this PR does / why we need it?
This PR documents a known issue in the v0.17.0rc1 release notes regarding GLM5 in a 2-node PD mixed deployment scenario. Specifically, inference may hang when concurrency exceeds 8. This documentation is necessary to inform users about this specific behavior. The issue has been fixed in PRs #7235 and #7290.
Fixes #
### Does this PR introduce _any_ user-facing change?
No, this PR only updates the release notes documentation.
### How was this patch tested?
This is a documentation-only change, so no specific code testing was performed. The change was reviewed for accuracy and adherence to documentation standards.
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to the Contributing and Testing guides.
…roject#7436)

### What this PR does / why we need it?
Documented an issue in the 2-node PD mixed deployment scenario where inference may hang when concurrency exceeds 8 (GLM5). Noted that the issue has been fixed in:
- vllm-project#7235
- vllm-project#7290

Signed-off-by: MrZ20 <2609716663@qq.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
What this PR does / why we need it?
Documented an issue in the 2-node PD mixed deployment scenario where inference may hang when concurrency exceeds 8 (GLM5).
Noted that the issue has been fixed in:
- vllm-project#7235
- vllm-project#7290
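For context, the reported trigger (more than 8 concurrent requests against the 2-node PD mixed deployment) can be sketched with a minimal load driver. This is a hypothetical illustration, not code from this PR: `send_request` is a stub standing in for an HTTP call to a vLLM OpenAI-compatible endpoint, and the concurrency value is only an assumption based on the threshold described above.

```python
# Hypothetical sketch of the load pattern that reportedly triggered the
# hang: more than 8 in-flight requests in the 2-node PD mixed deployment.
from concurrent.futures import ThreadPoolExecutor

CONCURRENCY = 16  # above the reported threshold of 8


def send_request(i: int) -> str:
    # Stub for an HTTP POST to a vLLM OpenAI-compatible endpoint
    # (e.g. http://<host>:8000/v1/completions); stubbed out here so
    # the sketch runs offline.
    return f"response-{i}"


def run_load(n: int = CONCURRENCY) -> list[str]:
    # Dispatch n requests concurrently and collect their results.
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(send_request, range(n)))


results = run_load()
print(len(results))  # 16
```

In the affected setup, a driver like this would stall once in-flight requests exceeded 8; after the fixes in the referenced PRs, all requests are expected to complete.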
Does this PR introduce any user-facing change?
How was this patch tested?