[Model] Support TP/PP/mamba2 kernel for PLaMo2#19674

Merged
tlrmchlsmth merged 18 commits into vllm-project:main from pfnet:plamo2-follow-up
Jul 28, 2025

Conversation


@Alnusjaponica Alnusjaponica commented Jun 16, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

Follow-up #14323 to support

  • pipeline parallel
  • tensor parallel
  • mamba2 kernel usage
  • chunked prefill

Test Plan

Manually modify the tests to use only pfnet/plamo-2-1b, then run the following tests.

python -m pytest /home/team-prj-mfm-platform/pfnet-vllm/tests/quantization/test_experts_int8.py
pytest /home/team-prj-mfm-platform/pfnet-vllm/tests/distributed/test_pipeline_parallel.py
pytest /home/team-prj-mfm-platform/pfnet-vllm/tests/models/language/generation/test_hybrid.py # without test_distributed_correctness
pytest /home/team-prj-mfm-platform/pfnet-vllm/tests/models/language/generation/test_hybrid.py::test_distributed_correctness

Test Result

> python -m pytest /home/team-prj-mfm-platform/pfnet-vllm/tests/distributed/test_pipeline_parallel.py
======================== test session starts ========================
platform linux -- Python 3.12.3, pytest-8.4.1, pluggy-1.6.0
rootdir: /home/team-prj-mfm-platform/pfnet-vllm
configfile: pyproject.toml
plugins: anyio-4.9.0
collected 3 items                                                   

tests/distributed/test_pipeline_parallel.py .ss                                                          [100%]

> python -m pytest /home/team-prj-mfm-platform/pfnet-vllm/tests/quantization/test_experts_int8.py
======================== test session starts ========================
platform linux -- Python 3.12.3, pytest-8.4.1, pluggy-1.6.0
rootdir: /home/team-prj-mfm-platform/pfnet-vllm
configfile: pyproject.toml
plugins: anyio-4.9.0
collected 2 items                                                   

tests/quantization/test_experts_int8.py ..                                                                 [100%]

>  python -m pytest /home/team-prj-mfm-platform/pfnet-vllm/tests/models/language/generation/test_hybrid.py
============================================================ test session starts =============================================================
platform linux -- Python 3.12.3, pytest-8.4.1, pluggy-1.6.0
rootdir: /home/team-prj-mfm-platform/pfnet-vllm
configfile: pyproject.toml
plugins: anyio-4.9.0
collected 11 items                                                                                                                           

pfnet-vllm/tests/models/language/generation/test_hybrid.py ...........                    [100%]

> python -m pytest /home/team-prj-mfm-platform/pfnet-vllm/tests/models/language/generation/test_hybrid.py::test_distributed_correctness
========================================== test session starts ===========================================
platform linux -- Python 3.12.3, pytest-8.4.1, pluggy-1.6.0
rootdir: /home/team-prj-mfm-platform/pfnet-vllm
configfile: pyproject.toml
plugins: anyio-4.9.0
collected 1 item                                                                                         

tests/models/language/generation/test_hybrid.py .                                                  [100%]

(Optional) Documentation Update

Updated docs/models/supported_models.md in 89247de

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@gemini-code-assist gemini-code-assist bot left a comment

Summary of Changes

Hello @Alnusjaponica, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the PLaMo2 model implementation by adding support for Pipeline Parallelism and improving Tensor Parallelism handling. It integrates the Mamba2 kernel for more efficient state-space model computations, including support for chunked prefill. The changes also involve refactoring the model's internal structure to use standard vLLM components and updating the weight loading logic accordingly.

Highlights

  • Pipeline Parallelism Support: Implemented support for Pipeline Parallelism (PP) by modifying the Plamo2Decoder and Plamo2PreTrainedModel forward passes to handle intermediate tensors and using the make_layers utility.
  • Tensor Parallelism Support: Updated linear layers (ColumnParallelLinear, MergedColumnParallelLinear, RowParallelLinear) in the Mamba and Attention mixers to correctly handle tensor parallelism, including weight loading and parameter shapes.
  • Mamba2 Kernel Integration: Refactored the Plamo2MambaMixer to align with the Mamba2 implementation, integrating the mamba_chunk_scan_combined kernel for prefill and selective_state_update for decode, enabling chunked prefill and continuous batching for Mamba layers.
  • Unified Mamba/Attention Mixer Initialization: Modified the Plamo2MambaMixer and Plamo2AttentionMixer constructors to accept VllmConfig directly, simplifying initialization.
  • Standard Component Usage: Replaced custom RMSNorm and activation functions (_rms_norm, _swiglu) with vLLM's standard RMSNorm and SiluAndMul layers, and integrated the standard Sampler.
  • Weight Loading Updates: Adjusted the load_weights method to handle the new RMSNorm weights in the Attention mixer and reshape the in_proj weights in the Mamba mixer to match the expected format for MergedColumnParallelLinear, supporting both unquantized and quantized weights.
  • Documentation Update: Updated the supported_models.md file to indicate PLaMo2 now supports Tensor Parallelism.
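The per-head Q/K RMSNorm mentioned in the highlights can be illustrated with a minimal numpy sketch. This is a hypothetical illustration with made-up sizes, not the PR's actual code, which uses vLLM's RMSNorm layer on torch tensors:

```python
import numpy as np

np.random.seed(0)

def rms_norm(x, weight, eps=1e-6):
    # Normalize each head vector by its root-mean-square over the last
    # (head_dim) axis, then apply the learned per-head scale.
    var = np.mean(x ** 2, axis=-1, keepdims=True)
    return x / np.sqrt(var + eps) * weight

seq_len, num_heads, head_dim = 3, 4, 8            # hypothetical sizes
q = np.random.randn(seq_len, num_heads, head_dim)
q_norm_weight = np.ones((num_heads, head_dim))    # one scale vector per head

q_normed = rms_norm(q, q_norm_weight)             # each head normalized independently
print(q_normed.shape)                             # (3, 4, 8)
```

Because the weight has shape (num_heads, head_dim) rather than a single (head_dim,) vector, each attention head carries its own normalization scale.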

@mergify mergify bot added the documentation Improvements or additions to documentation label Jun 16, 2025
@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request significantly enhances the PLaMo2 model by adding support for Tensor Parallelism (TP), Pipeline Parallelism (PP), and integrating the Mamba2 kernel for its Mamba layers. Key changes include:

  • Parallelism: Standard vLLM utilities like make_layers, is_pp_missing_parameter, and get_pp_group are used to enable TP and PP. Model components (Plamo2Model, Plamo2Decoder, Plamo2ForCausalLM) are updated to handle distributed execution and intermediate tensor passing.
  • Mamba Mixer Refactor: Plamo2MambaMixer is updated to use Mamba2-style parameters (e.g., A, D shapes related to num_heads) and leverages the mamba_chunk_scan_combined kernel for prefill, while retaining selective_state_update for decode. It now accepts Mamba2Metadata.
  • Attention Mixer Update: Plamo2AttentionMixer now applies RMSNorm per head to Q and K projections, a common feature in newer LLMs. The old custom RMSNorm and SwiGLU implementations are correctly replaced by standard vLLM layers (RMSNorm, SiluAndMul).
  • Configuration: Model components now consistently use VllmConfig for initialization.
  • Weight Loading: The load_weights method is substantially updated to handle new parameter names (e.g., for per-head Q/K norms) and complex reshaping for Mamba's in_proj layer to align with MergedColumnParallelLinear expectations. It also correctly skips loading weights for layers not on the current PP rank.
  • Documentation: The supported_models.md file is updated to reflect the new TP and PP capabilities of PLaMo2.

The changes appear well-structured and aim to integrate PLaMo2 more deeply into the vLLM ecosystem. The most critical areas for review by the author would be the correctness of the Mamba parameter shape changes (if upgrading to Mamba2), the new per-head Q/K normalization in attention, and the intricate weight reshaping logic for in_proj in load_weights, especially concerning various quantization methods. Comprehensive testing across different parallelism configurations and quantization schemes will be essential.

Comment on lines +472 to +488
self.q_norm = RMSNorm(config.hidden_size_per_head,
                      eps=config.rms_norm_eps)
self.q_norm.weight = torch.nn.Parameter(
    torch.ones((self.num_heads, config.hidden_size_per_head)))
set_weight_attrs(self.q_norm.weight,
                 {"weight_loader": sharded_weight_loader(0)})
self.k_norm = RMSNorm(config.hidden_size_per_head,
                      eps=config.rms_norm_eps)
self.k_norm.weight = torch.nn.Parameter(
    torch.ones((self.num_kv_heads, config.hidden_size_per_head)))
# Tensor-parallelism shards the K norm weights to the tp ranks
# in a head-wise manner. This approach does not work if there is only
# a single KV head, as is the case for PLaMo 2-1B.
if self.total_num_kv_heads != 1:
    set_weight_attrs(self.k_norm.weight,
                     {"weight_loader": sharded_weight_loader(0)})
Severity: high

The q_weight and k_weight parameters have been replaced with q_norm and k_norm respectively, which are RMSNorm layers applied per-head. This is a significant architectural change, similar to what's seen in models like Llama 3.

  1. Please confirm if this per-head normalization for Q and K is an intended upgrade/modification for the PLaMo2 architecture or if PLaMo2 originally had this feature.
  2. The sharded_weight_loader(0) is used for q_norm.weight and k_norm.weight. This implies these weights are sharded along the head dimension. The conditional sharding for k_norm.weight when self.total_num_kv_heads == 1 is a good detail to handle GQA/MQA correctly with TP.

Contributor Author

  1. Per-head RMSNorm is expected. We had our own implementation for this, but this PR switches to vLLM's native implementation.
  2. Thanks. As noted in the comment, some models have only one KV head, which is why we have this conditional sharding.
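The head-wise sharding being discussed can be sketched in numpy (hypothetical sizes; the real code uses vLLM's sharded_weight_loader on torch parameters): splitting the (num_heads, head_dim) norm weight along dim 0 across TP ranks, and falling back to replication when there is only a single KV head:

```python
import numpy as np

tp_size = 2
num_heads, head_dim = 4, 8                        # hypothetical sizes
q_norm_weight = np.ones((num_heads, head_dim))

# Rough sketch of sharded_weight_loader(0): split dim 0 (heads) across TP ranks.
q_shards = np.split(q_norm_weight, tp_size, axis=0)
print(q_shards[0].shape)                          # (2, 8) -- each rank keeps 2 heads

# With a single KV head (as in PLaMo 2-1B) an equal head-wise split is
# impossible, so the full K norm weight is kept on every rank instead.
num_kv_heads = 1
k_norm_weight = np.ones((num_kv_heads, head_dim))
try:
    np.split(k_norm_weight, tp_size, axis=0)
except ValueError:
    k_shards = [k_norm_weight] * tp_size          # replicate instead of shard
```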

Comment on lines 163 to +171
-        self.A = nn.Parameter(
-            torch.empty(
-                self.intermediate_size // tp_size,
-                self.ssm_state_size,
-                dtype=torch.float32,
-            ))
-        self.D = nn.Parameter(torch.ones(self.intermediate_size // tp_size))
+        self.A = nn.Parameter(
+            torch.empty(
+                divide(self.num_heads, self.tp_size),
+                dtype=torch.float32,
+            ))
+        self.D = nn.Parameter(torch.ones(divide(self.num_heads, self.tp_size)))
+        self.dt_bias = nn.Parameter(
+            torch.ones(divide(self.num_heads, self.tp_size)))
Severity: high

The shapes and initialization of self.A, self.D, and self.dt_bias have changed significantly.

Previously:

  • self.A was (self.intermediate_size // tp_size, self.ssm_state_size).
  • self.D was torch.ones(self.intermediate_size // tp_size).
  • self.dt_bias was initialized using get_initial_dt_bias(self.num_heads).

Now:

  • self.A is now torch.empty(divide(self.num_heads, self.tp_size), dtype=torch.float32), a 1D parameter per TP shard. Note that the d_state dimension seems to be missing from the torch.empty call, whereas the weight loader for A implies it corresponds to A_log, which is typically (d_inner, d_state) or (num_heads, d_state) in Mamba variants. Please double-check the intended shape and initialization for self.A against Mamba2's architecture. The a_weight_loader applies lambda x: -torch.exp(x.float()), as is common for the A_log parameter in Mamba.
  • self.D is torch.ones(divide(self.num_heads, self.tp_size)).
  • self.dt_bias is torch.ones(divide(self.num_heads, self.tp_size)).

These changes suggest a shift from intermediate_size based parameters to num_heads based parameters, which is characteristic of Mamba2's per-head independent SSMs. Ensure these new shapes and initializations correctly reflect the PLaMo2 model's intended Mamba variant (original or Mamba2 upgrade). If A is meant to be A_log, its typical shape is (d_inner, d_state) or (n_heads, d_state). The current definition for self.A seems to be (n_heads_per_tp,) which might be missing the d_state dimension.
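The a_weight_loader transform mentioned above (lambda x: -torch.exp(x.float())) converts the stored A_log into the negative-definite A used by the SSM. A numpy sketch with hypothetical values:

```python
import numpy as np

num_heads_per_rank = 4                  # hypothetical: divide(num_heads, tp_size)
A_log = np.zeros(num_heads_per_rank)    # checkpoint stores A_log, one scalar per head

A = -np.exp(A_log)                      # loader applies -exp(.), so A is strictly negative
print(A)                                # [-1. -1. -1. -1.]
```

Keeping A negative guarantees the SSM's recurrent state decays rather than diverges, which is why the loader applies the transform once at load time instead of on every forward pass.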

Contributor Author

This change is entirely expected. We had been manually broadcasting those parameters in load_weights to fit the mamba1 kernel API, even though PLaMo2's SSM layer architecture is effectively mamba2. That duplication step in load_weights is also removed in this PR:

if ".A" in name:
    loaded_weight = loaded_weight[:, None, None].expand(
        -1, self.config.hidden_size_per_head,
        self.config.mamba_d_state)
    loaded_weight = loaded_weight.reshape(
        -1, self.config.mamba_d_state)
elif ".D" in name:
    loaded_weight = loaded_weight[:, None].expand(
        -1, self.config.hidden_size_per_head)
    loaded_weight = loaded_weight.reshape(-1)
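What that now-removed broadcast did can be sketched in numpy (hypothetical sizes standing in for hidden_size_per_head and mamba_d_state): it expanded per-head scalars into the flattened shapes the mamba1 kernel expects:

```python
import numpy as np

num_heads, head_dim, d_state = 2, 3, 4      # hypothetical config values
A = np.arange(num_heads, dtype=np.float64)  # per-head scalar, Mamba2 style

# Old mamba1-style broadcast: (num_heads,) -> (num_heads * head_dim, d_state),
# duplicating each head's scalar across its channels and state dimension.
A_mamba1 = np.broadcast_to(
    A[:, None, None], (num_heads, head_dim, d_state)).reshape(-1, d_state)
print(A_mamba1.shape)                       # (6, 4)

D = np.ones(num_heads)
D_mamba1 = np.broadcast_to(D[:, None], (num_heads, head_dim)).reshape(-1)
print(D_mamba1.shape)                       # (6,)
```

With the mamba2 kernel the per-head (num_heads,) parameters are consumed directly, so this duplication (and the memory it wasted) disappears.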

@Alnusjaponica Alnusjaponica marked this pull request as ready for review June 25, 2025 07:05
Alnusjaponica and others added 14 commits June 26, 2025 10:30
Signed-off-by: Shinichi Hemmi <shemmi@preferred.jp>
Signed-off-by: Shinichi Hemmi <50256998+Alnusjaponica@users.noreply.github.com>
Co-authored-by: Calvin Metzger <metzger@preferred.jp>
Co-authored-by: Sixue Wang <cecilwang@preferred.jp>
@tlrmchlsmth tlrmchlsmth left a comment

Eval results look good to me

main (TP=1, PP=1)

|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.202|±  |0.0127|
|     |       |strict-match    |     5|exact_match|↑  |0.520|±  |0.0158|

This PR (TP=2, PP=2)

|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.205|±  |0.0128|
|     |       |strict-match    |     5|exact_match|↑  |0.517|±  |0.0158|

@tlrmchlsmth tlrmchlsmth left a comment

Could you please merge latest main? Once that's done I'll mark it ready and turn on automerge - thank you!

     composed_weight_loader, default_weight_loader, sharded_weight_loader)
 from vllm.model_executor.models.interfaces import (HasInnerState, IsHybrid,
-                                                   SupportsV0Only)
+                                                   SupportsPP, SupportsV0Only)
Member

@Alnusjaponica do you have plans to add support for V1?

Contributor Author

@nopperl is currently working on V1 support at https://github.com/pfnet/vllm/tree/plamo2-follow-up-v1 but has not yet caught up with the latest main branch. We are going to submit it as a separate PR.

Signed-off-by: Shinichi Hemmi <50256998+Alnusjaponica@users.noreply.github.com>
@Alnusjaponica
Copy link
Contributor Author

@tlrmchlsmth Thanks for your review! I've merged the latest main and added an assertion for the tp_size check. Could you take another look?

@tlrmchlsmth tlrmchlsmth added the ready ONLY add when PR is ready to merge/full CI is needed label Jul 28, 2025
@tlrmchlsmth tlrmchlsmth enabled auto-merge (squash) July 28, 2025 02:45
@tlrmchlsmth tlrmchlsmth merged commit c7ffe93 into vllm-project:main Jul 28, 2025
78 checks passed
liuyumoye pushed a commit to liuyumoye/vllm that referenced this pull request Jul 31, 2025
Signed-off-by: Shinichi Hemmi <shemmi@preferred.jp>
Signed-off-by: Shinichi Hemmi <50256998+Alnusjaponica@users.noreply.github.com>
Co-authored-by: Calvin Metzger <metzger@preferred.jp>
Co-authored-by: Sixue Wang <cecilwang@preferred.jp>
HsChen-sys pushed a commit to HsChen-sys/vllm that referenced this pull request Aug 1, 2025
x22x22 pushed a commit to x22x22/vllm that referenced this pull request Aug 5, 2025
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025

Labels

documentation Improvements or additions to documentation ready ONLY add when PR is ready to merge/full CI is needed
