[Doc][ReleaseNote] Add release notes for v0.16.0rc1 #7067
MengqingCao merged 11 commits into vllm-project:main
Conversation
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces the comprehensive release notes for vLLM Ascend v0.16.0rc1. It documents significant advancements in model support, quantization capabilities, performance optimizations, and hardware compatibility, providing a clear overview of the changes and improvements in this release candidate.
Activity
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request adds the release notes for version v0.16.0rc1. The changes are limited to documentation. I have no specific comments on the file changes as per the review criteria, which require issues to be of high or critical severity, and documentation issues are typically of lower severity. However, I noticed that the pull request description does not follow the format specified in the repository's style guide (lines 12-39). Please consider updating the description to match the provided template for consistency.
Note: Security Review has been skipped due to the limited scope of the PR.
- DeepSeek V3.2 now supports graph mode (piecewise and full_decode_only) with PCP and DCP context parallel. Additionally, PCP now supports MTP and chunked prefill for DeepSeek V3.2. [#6940](https://github.com/vllm-project/vllm-ascend/pull/6940) [#6917](https://github.com/vllm-project/vllm-ascend/pull/6917)
- Qwen3-Next now supports PCP and DCP context parallel. [#6091](https://github.com/vllm-project/vllm-ascend/pull/6091)
- MXFP8 MoE quantization is now supported for Qwen MoE models. [#6670](https://github.com/vllm-project/vllm-ascend/pull/6670)
This can be removed; it's only for the 950.
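To make the context-parallel bullets above a little more concrete, here is a minimal sketch of enabling DCP when constructing an engine. The argument name follows upstream vLLM's `decode_context_parallel_size`; the PCP-specific options for Ascend are not shown because their exact names are not confirmed here, so treat everything below as an assumption to verify against the vllm-ascend documentation.

```python
# Hedged sketch only: the model name and parallel sizes are illustrative, and
# decode_context_parallel_size is assumed to be the engine argument that enables
# DCP (it exists in recent upstream vLLM; verify the PCP/DCP knobs for vllm-ascend).
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-V3.2-Exp",  # illustrative model
    tensor_parallel_size=8,                 # shard weights across 8 devices
    decode_context_parallel_size=2,         # DCP: shard the decode-time KV cache (assumed kwarg)
)

outputs = llm.generate(["Hello, world"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```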
### Deprecation & Breaking Changes
- `enable_flash_comm_v1` config option has been renamed back to `enable_sp`. [#6883](https://github.com/vllm-project/vllm-ascend/pull/6883)
- Reverted the feature that auto-detects the quantization format from model files. [#6873](https://github.com/vllm-project/vllm-ascend/pull/6873)
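For the rename above, a minimal sketch of where the option might be set, assuming `enable_sp` is a top-level key of vllm-ascend's `additional_config` (an assumption to verify against the vllm-ascend docs):

```python
# Hedged sketch: assumes enable_sp sits at the top level of additional_config.
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen3-32B",  # illustrative model
    tensor_parallel_size=4,
    additional_config={
        # Before this release the key was "enable_flash_comm_v1";
        # it has been renamed back to "enable_sp" (#6883).
        "enable_sp": True,
    },
)
```

Existing deployments that still pass `enable_flash_comm_v1` should switch to the new key when upgrading.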
- DeepSeek V3.2 now supports graph mode (piecewise and full_decode_only) with PCP and DCP context parallel. Additionally, PCP now supports MTP and chunked prefill for DeepSeek V3.2. [#6940](https://github.com/vllm-project/vllm-ascend/pull/6940) [#6917](https://github.com/vllm-project/vllm-ascend/pull/6917)
- Qwen3-Next now supports PCP and DCP context parallel. [#6091](https://github.com/vllm-project/vllm-ascend/pull/6091)
- 310P now supports W8A8S quantization and saving W8A8SC state. [#6878](https://github.com/vllm-project/vllm-ascend/pull/6878)
- fused_qkvzba_split_reshape now supports token counts greater than 65536, removing the previous limitation. [#6740](https://github.com/vllm-project/vllm-ascend/pull/6740)
- NPUWorker Profiler now supports profile_prefix for a better profiling experience. [#6968](https://github.com/vllm-project/vllm-ascend/pull/6968)
- EPLB profiling now displays the expert hotness comparison and the time required for EPLB adjustment. [#6877](https://github.com/vllm-project/vllm-ascend/pull/6877) [#7001](https://github.com/vllm-project/vllm-ascend/pull/7001)
- Adapt to RecomputeScheduler in vLLM 0.16.0. [#6898](https://github.com/vllm-project/vllm-ascend/pull/6898)
### Hardware and Operator Support
- 310P now supports the w8a8sc quantization method. [#7075](https://github.com/vllm-project/vllm-ascend/pull/7075)
I think we'd better not mention 310P progress here.
### Deprecation & Breaking Changes
- `enable_flash_comm_v1` config option has been renamed back to `enable_sp`. [#6883](https://github.com/vllm-project/vllm-ascend/pull/6883)
- Reverted the feature that auto-detects the quantization format from model files. [#6873](https://github.com/vllm-project/vllm-ascend/pull/6873)
It seems #7111 has some issues with CI; I think we'd better include it back in the next version to make sure of the quality.
Co-authored-by: Mengqing Cao <cmq0113@163.com> Signed-off-by: Canlin Guo <961750412@qq.com>
MengqingCao left a comment
LGTM, thanks for this!
…to qwen3next_graph

* 'main' of https://github.com/vllm-project/vllm-ascend: (88 commits)
  [main][bugfix] Fixed the problem of speculative decoding in FULL mode (vllm-project#7148)
  fixed fia pad logic in graph mode. (vllm-project#7144)
  [Doc] fix DSV3.1 PD configs (vllm-project#7187)
  refactor: add a check before layer_sharding logging (vllm-project#7186)
  [Build] Add support for Ascend950 chip (vllm-project#7151)
  Revert "[CI] fix skiped e2e test when upgrade vllm version (vllm-project#6654)" (vllm-project#7166)
  [MODELRUNNERV2]fix penality ops (vllm-project#7013)
  [Bugfix][LoRA] Fix the issue when enable LoRA + tp + fully_sharded_loras (vllm-project#6650)
  [KV Pool]get_num_new_matched_tokens return 0 if token length < block_size (vllm-project#7146)
  [CI] Build Image for v0.16.0rc1 (vllm-project#7155)
  [CI] Skip `test_mooncake_layerwise_connector.py` in `ut` (vllm-project#7147)
  [BugFix]Fix recomputed scheduler bug (vllm-project#7137)
  [Model] Support Minimax-m2.5 on NPU (vllm-project#7105)
  [P/D]Mooncake Layerwise Connector supports hybrid attention manager with multiple kvcache groups (vllm-project#7022)
  Add patch_qwen3_5 for triton ops fused_recurrent_gated_delta_rule (vllm-project#7109)
  [Doc][ReleaseNote] Add release notes for v0.16.0rc1 (vllm-project#7067)
  [Misc] Download on both hk and guiyang region (vllm-project#7129)
  [bugdix] The problem that the w4a8 weight fails to be loaded when the EP is not enabled is resolved. (vllm-project#7090)
  [eagle][cp] fix eagle_cp enable bug2 (vllm-project#7079)
  [CI]Upgrade niglty multi-node-tests max-parallel to 2 (vllm-project#7035)
  ...
Add release notes for v0.16.0rc1

- vLLM version: v0.16.0
- vLLM main: vllm-project/vllm@4034c3d

---------

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Signed-off-by: Canlin Guo <961750412@qq.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
Add release notes for v0.16.0rc1