
[Docs] Fix GLM-5 deploy command #6711

Merged
wangxiyuan merged 1 commit into vllm-project:main from gcanlin:glm-5
Feb 12, 2026

Conversation

Collaborator

@gcanlin gcanlin commented Feb 11, 2026

This pull request refines the GLM-5 deployment documentation by updating the Docker run command to include a more comprehensive set of device mappings and by removing an extraneous quantization flag from the vllm serve commands. These changes aim to correct and clarify the deployment instructions, ensuring users can successfully set up and run the GLM-5 model as intended.

Signed-off-by: Canlin Guo <961750412@qq.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello @gcanlin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the GLM-5 deployment documentation by updating the Docker run command to include a more comprehensive set of device mappings and by removing an extraneous quantization flag from the vllm serve commands. These changes aim to correct and clarify the deployment instructions, ensuring users can successfully set up and run the GLM-5 model as intended.

Highlights

  • Updated Docker device mappings: Expanded the list of --device mappings in the Docker run command for GLM-5 deployment, adding /dev/davinci8 through /dev/davinci15 so that all necessary devices are exposed (a sketch of the expanded command follows this list).
  • Removed quantization flag from serve commands: Eliminated the --quantization ascend flag from the vllm serve commands in the GLM-5 deployment instructions, streamlining the command and correcting an unnecessary parameter.
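
For orientation, here is a minimal sketch of how the expanded device list looks in a docker run invocation. The image name is a placeholder, and the tutorial's other mounts, auxiliary devices, and flags are omitted; only the davinci0 through davinci15 mappings reflect the documented change.

```bash
# Sketch only: expose all 16 Ascend NPUs (davinci0 through davinci15) to the container.
# <vllm-ascend-image> is a placeholder; the tutorial's other mounts and options are omitted here.
docker run -it \
  --device /dev/davinci0  --device /dev/davinci1 \
  --device /dev/davinci2  --device /dev/davinci3 \
  --device /dev/davinci4  --device /dev/davinci5 \
  --device /dev/davinci6  --device /dev/davinci7 \
  --device /dev/davinci8  --device /dev/davinci9 \
  --device /dev/davinci10 --device /dev/davinci11 \
  --device /dev/davinci12 --device /dev/davinci13 \
  --device /dev/davinci14 --device /dev/davinci15 \
  <vllm-ascend-image>
```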


Changelog
  • docs/source/tutorials/models/GLM5.md
    • Added davinci device mappings (davinci8-davinci15) to the docker run command.
    • Removed the --quantization ascend flag from the first vllm serve command.
    • Removed the --quantization ascend flag from the second vllm serve command.

@github-actions github-actions bot added the documentation (Improvements or additions to documentation) label on Feb 11, 2026
@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message and fill in the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request updates the documentation for deploying the GLM-5 model. The changes correctly expand the docker run command to include all 16 davinci devices for an Atlas A3 setup and remove the --quantization ascend flag from the multi-node deployment commands for the non-quantized GLM5-bf16 model. The changes are accurate and improve the deployment instructions. I have no issues to report.
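
To make the flag removal concrete, here is a hedged sketch of the corrected serve invocation. The model path, parallelism value, and served model name are illustrative placeholders, not the exact values from GLM5.md.

```bash
# Before (incorrect for the non-quantized bf16 checkpoint):
#   vllm serve /path/to/GLM5-bf16 --quantization ascend ...
# After: drop --quantization ascend entirely; the bf16 weights are not an
# Ascend-quantized model, so the flag should not be passed.
# The remaining flags below are illustrative placeholders.
vllm serve /path/to/GLM5-bf16 \
  --tensor-parallel-size 16 \
  --served-model-name glm5
```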

@wangxiyuan wangxiyuan merged commit 052cc4e into vllm-project:main Feb 12, 2026
8 checks passed
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Feb 12, 2026
…to qwen3next_rebase

* 'main' of https://github.com/vllm-project/vllm-ascend:
  [Docs] Fix GLM-5 deploy command (vllm-project#6711)
  [npugraph_ex]enable npugraph_ex by default (vllm-project#6664)
  [doc]add GLM5.md (vllm-project#6709)
  [Model] GLM5 adaptation (vllm-project#6642)
  [Bugfix] Update target probs to target logits in rejection sample (vllm-project#6685)
  [Main][Ops] Make triton rope support index_selecting from cos_sin_cache (vllm-project#5450)
  [CI]fix nightly multi node test error for wait for pod ready (vllm-project#6675)
  [main  to main] upgrade main 0210 (vllm-project#6673)
  [main][Quant] Remove unused rotation functions and parameters from W4A4 LAOS quantization (vllm-project#6648)
  [Test][BugFix] Fix torch.rand usage in triton penalty test (vllm-project#6680)
  Add Worker Interface:check_health (vllm-project#6681)
chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Feb 12, 2026
This pull request refines the GLM-5 deployment documentation by updating
the Docker run command to include a more comprehensive set of device
mappings and by removing an extraneous quantization flag from the `vllm
serve` commands. These changes aim to correct and clarify the deployment
instructions, ensuring users can successfully set up and run the GLM-5
model as intended.

- vLLM version: v0.15.0
- vLLM main:
vllm-project/vllm@9562912

Signed-off-by: Canlin Guo <961750412@qq.com>
Signed-off-by: momochenchuw <chenchuw@huawei.com>
@wangxiyuan wangxiyuan mentioned this pull request Feb 24, 2026
banxiaduhuo pushed a commit to banxiaduhuo/vllm-ascend that referenced this pull request Feb 26, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
maoxx241 pushed a commit to maoxx241/vllm-ascend that referenced this pull request Mar 2, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
LCAIZJ pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Mar 7, 2026
yangzhe-2026 pushed a commit to yangzhe-2026/vllm-ascend that referenced this pull request May 6, 2026

Labels

documentation (Improvements or additions to documentation)
