[Community] Nominate whx-sjtu as maintainer #6268
wangxiyuan merged 1 commit into vllm-project:main
Conversation
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request nominates whx-sjtu as a new maintainer, reflecting their significant contributions to the project. The changes update the .github/CODEOWNERS file to grant ownership over the attention, ops, and sample modules, and add whx-sjtu to the list of committers in docs/source/community/contributors.md. The modifications are consistent with the provided rationale and appear to be correctly implemented. After a thorough review, I have not identified any issues of high or critical severity.
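For context, CODEOWNERS entries of the rough shape below are what grant per-module ownership on GitHub; the module paths here are an assumption for illustration, not copied from the PR diff.

```
# .github/CODEOWNERS — hypothetical sketch of ownership entries
# (paths assumed; see the actual PR diff for the exact lines)
/vllm_ascend/attention/ @whx-sjtu
/vllm_ascend/ops/       @whx-sjtu
/vllm_ascend/sample/    @whx-sjtu
```

GitHub then automatically requests review from the listed owner when a PR touches files under those paths.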
LGTM!
Since the first releases of 2026, v0.13.0rc2 and v0.14.0rc1, have shipped, we are refreshing the maintainer team. I nominate whx-sjtu as a new maintainer.

- vLLM version: v0.14.1
- vLLM main: vllm-project/vllm@d682094

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: momochenchuw <chenchuw@huawei.com>

Nomination reasons:
✅ Review Quality:
He has completed 80+ high-quality reviews since Oct. 2025, including #pullrequestreview-3459750305, #discussion_r2453770390, #discussion_r2472268022, and #discussion_r2480554911.
✅ Sustained Contributions:
He has a deep understanding of the vLLM and vLLM Ascend codebases, with more than 60 merged PRs across both projects. His work on performance optimization, disaggregated prefill, the scheduler, accuracy bug fixes, and MLA is the main reason for this nomination:
✅ Quality Contribution:
He introduced SharedFusedMoE by contributing to the vLLM community in Pull Request #24849. This not only solves the accuracy issue of GLM4.5, but also helps with the refactoring of our MoE backend.
✅ Community Involvement:
In 2025 Q2/Q3, he led "[RFC]: Performance optimation of decode in DeepSeek Large EP situation" (Issue #2905, vllm-project/vllm-ascend), which optimizes performance in large-EP scenarios.
He led the refactor of the DeepSeek models and successfully deleted all LLM model definitions in vLLM-Ascend: