9 changes: 2 additions & 7 deletions docs/source/tutorials/Qwen3-Dense.md
@@ -171,9 +171,6 @@ export TASK_QUEUE_ENABLE=1
# Enable the AIVector core to directly schedule ROCE communication
export HCCL_OP_EXPANSION_MODE="AIV"

# Enable MLP prefetch for better performance.
export VLLM_ASCEND_ENABLE_PREFETCH_MLP=1

# Enable FlashComm_v1 optimization when tensor parallel is enabled.
export VLLM_ASCEND_ENABLE_FLASHCOMM1=1

@@ -187,7 +184,7 @@ vllm serve /model/Qwen3-32B-W8A8 \
--max-model-len 5500 \
--max-num-batched-tokens 40960 \
--compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY"}' \
--additional-config '{"pa_shape_list":[48,64,72,80]}' \
--additional-config '{"pa_shape_list":[48,64,72,80], "weight_prefetch_config":{"enabled":true}}' \
--port 8113 \
--block-size 128 \
--gpu-memory-utilization 0.9
@@ -348,9 +345,7 @@ Weight prefetching optimizes memory usage by preloading weights into the cache b

In dense model scenarios, the MLP's gate_up_proj and down_proj linear layers often exhibit relatively high MTE utilization. To address this, we create a separate pipeline specifically for weight prefetching, which runs in parallel with the original vector computation pipeline, such as RMSNorm and SiLU, before the MLP. This approach allows the weights to be preloaded to L2 cache ahead of time, reducing MTE utilization during the MLP computations and indirectly improving Cube computation efficiency by minimizing resource contention and optimizing data flow.

It is important to emphasize that, since we use vector computations to hide the weight prefetching pipeline, the setting of the prefetch buffer size is crucial. If the buffer size is too small, the optimization benefits will not be fully realized, while a larger buffer size may lead to resource contention, resulting in performance degradation. To accommodate different scenarios, we have exposed two environment variables `VLLM_ASCEND_MLP_GATE_UP_PREFETCH_SIZE` and `VLLM_ASCEND_MLP_DOWN_PREFETCH_SIZE` to allow for flexible buffer size configuration based on the specific workload.

This optimization requires setting the environment variable `VLLM_ASCEND_ENABLE_PREFETCH_MLP = 1` to be enabled.
Previously, the environment variable VLLM_ASCEND_ENABLE_PREFETCH_MLP (used to enable MLP weight prefetch) and the environment variables VLLM_ASCEND_MLP_GATE_UP_PREFETCH_SIZE and VLLM_ASCEND_MLP_DOWN_PREFETCH_SIZE (used to set the weight prefetch sizes for the MLP gate_up_proj and down_proj layers) have been deprecated. Please use the following configuration instead: `"weight_prefetch_config": {"enabled": true, "prefetch_ratio": {"mlp": {"gate_up": 1.0, "down": 1.0}}}`. See User Guide -> Feature Guide -> Weight Prefetch Guide for details.
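For reference, a sketch of the migration (the export values shown for the deprecated variables are their old defaults, and the serve flags follow the example above; adapt both to your deployment):

```shell
# Before (deprecated):
export VLLM_ASCEND_ENABLE_PREFETCH_MLP=1
export VLLM_ASCEND_MLP_GATE_UP_PREFETCH_SIZE=18874368   # 18 * 1024 * 1024 bytes
export VLLM_ASCEND_MLP_DOWN_PREFETCH_SIZE=18874368

# After:
vllm serve /model/Qwen3-32B-W8A8 \
  --additional-config '{"weight_prefetch_config": {"enabled": true, "prefetch_ratio": {"mlp": {"gate_up": 1.0, "down": 1.0}}}}'
```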
Collaborator:

The sentence "See User Guide->Feature Guide->Weight Prefetch Guide for details." could be set as a link instead.


### 6. Zerolike Elimination

6 changes: 5 additions & 1 deletion docs/source/user_guide/configuration/additional_config.md
@@ -60,7 +60,7 @@ The details of each configuration option are as follows:
| Name | Type | Default | Description |
|------------------|------|-------------------------------------------------------------|------------------------------------|
| `enabled` | bool | `False` | Whether to enable weight prefetch. |
| `prefetch_ratio` | dict | `{"attn": {"qkv": 1.0, "o": 1.0}, "moe": {"gate_up": 0.8}}` | Prefetch ratio of each weight. |
| `prefetch_ratio` | dict | `{"attn": {"qkv": 1.0, "o": 1.0}, "moe": {"gate_up": 0.8}, "mlp": { "gate_up": 1.0, "down": 1.0}}` | Prefetch ratio of each weight. |

**finegrained_tp_config**

@@ -115,6 +115,10 @@ An example of additional configuration is as follows:
},
"moe": {
"gate_up": 0.8
},
"mlp": {
"gate_up": 1.0,
"down": 1.0
}
},
},
1 change: 1 addition & 0 deletions docs/source/user_guide/feature_guide/index.md
@@ -23,4 +23,5 @@ layer_sharding
speculative_decoding
context_parallel
npugraph_ex
weight_prefetch
:::
73 changes: 73 additions & 0 deletions docs/source/user_guide/feature_guide/weight_prefetch.md
@@ -0,0 +1,73 @@
# Weight Prefetch Guide

Weight prefetching optimizes memory usage by preloading weights into the cache before they are needed, minimizing delays caused by memory access during model execution. Linear layers sometimes exhibit relatively high MTE utilization. To address this, we create a separate pipeline specifically for weight prefetching, which runs in parallel with the original vector computation pipeline (for example quantization, MoE gating top-k, RMSNorm, and SwiGLU). This approach allows the weights to be preloaded into the L2 cache ahead of time, reducing MTE utilization during the linear layer computations and indirectly improving Cube computation efficiency by minimizing resource contention and optimizing data flow.

Since vector computations are used to hide the weight prefetching pipeline, prefetching has some effect on computation. If you prioritize low latency over high throughput, it is best not to enable prefetching.

## Quick Start

Pass `--additional-config '{"weight_prefetch_config": {"enabled": true}}'` to enable weight prefetch.
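For example, a serve command with prefetching enabled might look like the following sketch (the model path and the remaining flags are illustrative, taken from the Qwen3 dense tutorial; adapt them to your deployment):

```shell
vllm serve /model/Qwen3-32B-W8A8 \
  --additional-config '{"weight_prefetch_config": {"enabled": true}}' \
  --max-model-len 5500 \
  --block-size 128 \
  --gpu-memory-utilization 0.9
```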

## Fine-tune Prefetch Ratio

Since weight prefetching uses vector computations to hide the prefetch pipeline, the prefetch size setting is crucial. If the size is too small, the optimization benefits will not be fully realized; if it is too large, resource contention may cause performance degradation. To accommodate different scenarios, we added `prefetch_ratio` to allow flexible size configuration based on the specific workload, as described below.

Use `prefetch_ratio` in `weight_prefetch_config` to customize the weight prefetch ratio for specific linear layers.

The `attn` and `moe` configuration options are used for MoE models:

`"attn": { "qkv": 1.0, "o": 1.0}, "moe": {"gate_up": 0.8}`

The `mlp` configuration option is used to optimize the performance of dense models:

`"mlp": {"gate_up": 1.0, "down": 1.0}`

The values above are the defaults. They give good performance for Qwen3-235B-A22B-W8A8 when `--max-num-seqs` is 144 and for Qwen3-32B-W8A8 when `--max-num-seqs` is 72.

However, this may not be the optimal configuration for your scenario. For higher concurrency, try increasing the prefetch size; for lower concurrency, prefetching may not offer any advantage, so decrease the size or disable prefetching. Determine whether the prefetch size is appropriate by collecting profiling data. Specifically, check whether the time required for a prefetch operation (e.g., MLP down_proj weight prefetching) overlaps with the parallel vector computation operators (e.g., the SwiGLU computation), and whether the prefetch finishes no later than the vector computation operator. In the profiling timeline, a prefetch operation appears as a CMO operation on a dedicated stream.
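As a sketch, one way to capture such a timeline is through vLLM's torch-profiler endpoints, assuming they are available in your build (the model path, port, and output directory below are placeholders):

```shell
export VLLM_TORCH_PROFILER_DIR=/tmp/vllm_profile    # traces are written here
vllm serve /model/Qwen3-32B-W8A8 \
  --additional-config '{"weight_prefetch_config": {"enabled": true}}' \
  --port 8113 &

curl -X POST http://localhost:8113/start_profile    # begin capturing
# ... send a few inference requests ...
curl -X POST http://localhost:8113/stop_profile     # stop and flush the trace
```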

Notes:

1) Prefetching of the MLP `down` projection weight depends on sequence parallelism; if you want to enable prefetching for the MLP `down` projection, also enable sequence parallelism.
2) Due to the current L2 cache size, a single prefetch cannot exceed 18 MB. If `prefetch_ratio * linear_layer_weight_size >= 18 * 1024 * 1024` bytes, the backend only prefetches 18 MB (see the sketch below).
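As a quick sanity check of the cap, here is a sketch with purely hypothetical sizes:

```shell
# Hypothetical numbers: a 64 MiB linear-layer weight with prefetch_ratio 0.5
# requests 32 MiB, which still exceeds the 18 MB cap, so the backend clamps it.
python3 - <<'EOF'
MAX_PREFETCH = 18 * 1024 * 1024          # backend cap in bytes
weight_size = 64 * 1024 * 1024           # hypothetical weight size
prefetch_ratio = 0.5
requested = int(prefetch_ratio * weight_size)
print(min(requested, MAX_PREFETCH))      # -> 18874368 bytes (the 18 MB cap)
EOF
```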

## Example

1) For MoE models:

```shell
--additional-config \
'{
"weight_prefetch_config": {
"enabled": true,
"prefetch_ratio": {
"attn": {
"qkv": 1.0,
"o": 1.0
},
"moe": {
"gate_up": 0.8
}
}
}
}'
```

2) For dense models:

The following is the default configuration, which gives good performance for Qwen3-32B-W8A8 with `--max-num-seqs` set to 72:

```shell
--additional-config \
'{
"weight_prefetch_config": {
"enabled": true,
"prefetch_ratio": {
"mlp": {
"gate_up": 1.0,
"down": 1.0
}
}
}
}'
```
@@ -222,7 +222,7 @@ def test_qwen3_dense_fc1_tp2(model):


@pytest.mark.parametrize("model", QWEN_DENSE_MODELS)
@patch.dict(os.environ, {"VLLM_ASCEND_ENABLE_PREFETCH_MLP": "1"})
@patch.dict(os.environ, {"VLLM_ASCEND_ENABLE_FLASHCOMM1": "1"})
Contributor (severity: high):

The test test_qwen3_dense_prefetch_mlp_weight_tp2 is intended to test MLP weight prefetching. However, it is patching the environment variable VLLM_ASCEND_ENABLE_FLASHCOMM1, which is related to the FlashComm optimization, not MLP prefetching. Since MLP prefetching is now configured via additional_config (as correctly done in line 240 of this test), this environment-variable patch is misleading and potentially incorrect: it might not enable the intended feature for this specific test, or it patches an unrelated feature. This could lead to false positives or incorrect test coverage.

Suggested change
@patch.dict(os.environ, {"VLLM_ASCEND_ENABLE_FLASHCOMM1": "1"})
@pytest.mark.parametrize("model", QWEN_DENSE_MODELS)

def test_qwen3_dense_prefetch_mlp_weight_tp2(model):
example_prompts = [
"Hello, my name is",
@@ -236,6 +236,7 @@ def test_qwen3_dense_prefetch_mlp_weight_tp2(model):
tensor_parallel_size=2,
cudagraph_capture_sizes=[1, 2, 4, 8],
quantization="ascend",
additional_config={"weight_prefetch_config": {"enabled": True}},
) as vllm_model:
vllm_model.generate_greedy(example_prompts, max_tokens)

3 changes: 1 addition & 2 deletions tests/e2e/multicard/2-cards/test_qwen3_performance.py
@@ -57,7 +57,6 @@ async def test_models(model: str) -> None:
env_dict = {
"TASK_QUEUE_ENABLE": "1",
"HCCL_OP_EXPANSION_MODE": "AIV",
"VLLM_ASCEND_ENABLE_PREFETCH_MLP": "1",
}
server_args = [
"--async-scheduling",
@@ -74,7 +73,7 @@ async def test_models(model: str) -> None:
"--compilation-config",
'{"cudagraph_mode": "FULL_DECODE_ONLY"}',
"--additional-config",
'{"pa_shape_list":[48,64,72,80]}',
'{"pa_shape_list":[48,64,72,80],"weight_prefetch_config":{"enabled":true}}',
"--block-size",
"128",
"--trust-remote-code",
4 changes: 2 additions & 2 deletions tests/e2e/nightly/single_node/models/test_qwen3_32b_int8.py
@@ -83,7 +83,6 @@ async def test_models(model: str, mode: str, tp_size: int) -> None:
"TASK_QUEUE_ENABLE": "1",
"HCCL_OP_EXPANSION_MODE": "AIV",
"VLLM_ASCEND_ENABLE_FLASHCOMM": "1",
"VLLM_ASCEND_ENABLE_PREFETCH_MLP": "1"
}
compilation_config = {
"cudagraph_mode":
@@ -98,7 +97,8 @@ async def test_models(model: str, mode: str, tp_size: int) -> None:
str(port), "--max-model-len", "40960", "--max-num-batched-tokens",
"40960", "--block-size", "128", "--trust-remote-code",
"--reasoning-parser", "qwen3", "--gpu-memory-utilization", "0.9",
"--async-scheduling"
"--async-scheduling", "--additional-config",
'{"weight_prefetch_config":{"enabled":true}}',
]
if mode == "single":
server_args.append("--enforce-eager")
@@ -72,7 +72,6 @@ async def test_models(model: str, tp_size: int) -> None:
"OMP_PROC_BIND": "false",
"VLLM_ASCEND_ENABLE_TOPK_OPTIMIZE": "1",
"VLLM_ASCEND_ENABLE_FLASHCOMM": "1",
"VLLM_ASCEND_ENABLE_PREFETCH_MLP": "1"
}
server_args = [
"--quantization", "ascend", "--tensor-parallel-size",
@@ -82,7 +81,8 @@ async def test_models(model: str, tp_size: int) -> None:
"0.9", "--block-size", "128", "--max-num-seqs", "256",
"--enforce-eager", "--max-model-len", "35840",
"--max-num-batched-tokens", "35840", "--additional-config",
'{"enable_weight_nz_layout":true}', "--compilation-config",
'{"enable_weight_nz_layout":true, "weight_prefetch_config":{"enabled": true}}',
"--compilation-config",
'{"cudagraph_mode":"FULL_DECODE_ONLY", "cudagraph_capture_sizes":[1,8,24,48,60]}'
]
with RemoteOpenAIServer(model,
5 changes: 2 additions & 3 deletions tests/e2e/nightly/single_node/models/test_qwq_32b.py
@@ -75,8 +75,7 @@ async def test_models(model: str, mode: str, tp_size: int) -> None:
"OMP_PROC_BIND": "false",
"HCCL_OP_EXPANSION_MODE": "AIV",
"VLLM_ASCEND_ENABLE_FLASHCOMM": "1",
"VLLM_ASCEND_ENABLE_DEBSE_OPTIMIZE": "1",
"VLLM_ASCEND_ENABLE_PREFETCH_MLP": "1"
"VLLM_ASCEND_ENABLE_DEBSE_OPTIMIZE": "1"
}
server_args = [
"--tensor-parallel-size",
@@ -86,7 +85,7 @@ async def test_models(model: str, mode: str, tp_size: int) -> None:
"--gpu-memory-utilization", "0.9", "--compilation_config",
'{"cudagraph_mode":"FULL_DECODE_ONLY", "cudagraph_capture_sizes": [1, 8, 24, 48, 60]}',
"--reasoning-parser", "deepseek_r1", "--distributed_executor_backend",
"mp"
"mp", "--additional-config", '{"weight_prefetch_config":{"enabled":true}}'
]
if mode == "single":
server_args.remove("--compilation_config")
20 changes: 0 additions & 20 deletions tests/ut/ops/test_activation.py
@@ -54,11 +54,7 @@ def test_QuickGELU_forward(mock_gelu, dummy_tensor, default_vllm_config):

@pytest.mark.skipif(is_310p_hw(), reason="non_310P device unittest case.")
@patch("torch_npu.npu_swiglu", side_effect=lambda x: x + 1)
@patch("torch.ops.vllm.maybe_wait_prefetch_done", side_effect=lambda x: None)
@patch("torch.ops.vllm.maybe_prefetch_mlp_down_proj", side_effect=lambda x: None)
def test_SiluAndMul_forward(
mock_maybe_prefetch_mlp_down_proj,
mock_maybe_wait_prefetch_done,
mock_swiglu,
dummy_tensor,
default_vllm_config,
@@ -67,15 +63,9 @@ def test_SiluAndMul_forward(
out = layer.forward(dummy_tensor)
expected_arg = dummy_tensor

# assert mock_maybe_prefetch_mlp_down_proj.call_count == 1
mock_maybe_prefetch_mlp_down_proj.assert_called_once()

# assert mock_swiglu.call_count == 1
mock_swiglu.assert_called_once()

# assert mock_maybe_wait_prefetch_done.call_count == 1
mock_maybe_wait_prefetch_done.assert_called_once()

actual_arg = mock_swiglu.call_args[0][0]
assert torch.allclose(actual_arg, expected_arg), "npu_swiglu called with unexpected input"

@@ -85,11 +75,7 @@ def test_SiluAndMul_forward(

@pytest.mark.skipif(not is_310p_hw(), reason="310P device unittest case.")
@patch("torch.nn.functional.silu", side_effect=lambda x: x + 1)
@patch("torch.ops.vllm.maybe_wait_prefetch_done", side_effect=lambda x: None)
@patch("torch.ops.vllm.maybe_prefetch_mlp_down_proj", side_effect=lambda x: None)
def test_SiluAndMul_forward_310p(
mock_maybe_prefetch_mlp_down_proj,
mock_maybe_wait_prefetch_done,
mock_silu,
dummy_tensor,
default_vllm_config,
@@ -99,15 +85,9 @@ def test_SiluAndMul_forward_310p(
h = dummy_tensor.shape[-1] // 2
expected_arg = dummy_tensor[..., :h]

# assert mock_maybe_prefetch_mlp_down_proj.call_count == 1
mock_maybe_prefetch_mlp_down_proj.assert_called_once()

# assert mock_silu.call_count == 1
mock_silu.assert_called_once()

# assert mock_maybe_wait_prefetch_done.call_count == 1
mock_maybe_wait_prefetch_done.assert_called_once()

actual_arg = mock_silu.call_args[0][0]
assert torch.allclose(actual_arg, expected_arg), "swiglu called with unexpected input"

10 changes: 7 additions & 3 deletions vllm_ascend/_310p/ops/activation.py
@@ -19,12 +19,16 @@
import torch.nn.functional as F

from vllm_ascend.ops.activation import AscendSiluAndMul
from vllm_ascend.utils import get_weight_prefetch_method


class AscendSiluAndMul310(AscendSiluAndMul):
def forward(self, x: torch.Tensor) -> torch.Tensor:
torch.ops.vllm.maybe_prefetch_mlp_down_proj(x)
weight_prefetch_method = get_weight_prefetch_method()
Collaborator:

Maybe we should drop support for 310P first.

if weight_prefetch_method:
weight_prefetch_method.maybe_prefetch_mlp_weight_preprocess(weight_prefetch_method.MLP_DOWN, x)
h = x.shape[-1] // 2
out = F.silu(x[..., :h]) * x[..., h:]
torch.ops.vllm.maybe_wait_prefetch_done(out)
out = (F.silu(x[..., :h].to(torch.float32)) * x[..., h:].to(torch.float32)).to(torch.float16)
if weight_prefetch_method:
weight_prefetch_method.maybe_prefetch_mlp_weight_postprocess(out)
return out
38 changes: 35 additions & 3 deletions vllm_ascend/ascend_config.py
@@ -14,6 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import warnings
from typing import TYPE_CHECKING

from vllm.logger import logger
@@ -48,9 +49,7 @@ def __init__(self, vllm_config: "VllmConfig"):

# Dump / PrecisionDebugger configuration
self.dump_config_path = additional_config.get("dump_config_path", None)

weight_prefetch_config = additional_config.get("weight_prefetch_config", {})
self.weight_prefetch_config = WeightPrefetchConfig(weight_prefetch_config)
self._construct_weight_prefetch_config(additional_config)
self.layer_sharding = additional_config.get("layer_sharding", None)
logger.info_once(
f"Linear layer sharding enabled with config: {self.layer_sharding}. "
@@ -138,6 +137,29 @@ def __init__(self, vllm_config: "VllmConfig"):
"enable_kv_nz is only supported in pd scenario and can only be used in D node."
)

def _construct_weight_prefetch_config(self, additional_config):
weight_prefetch_config = additional_config.get("weight_prefetch_config", {})
self.weight_prefetch_config = WeightPrefetchConfig(weight_prefetch_config)
# Deprecated env var handling for backward compatibility
if os.getenv("VLLM_ASCEND_ENABLE_PREFETCH_MLP", "0") == "1":
MAX_PREFETCH_WEIGHT_SIZE: int = 18 * 1024 * 1024
gate_up_prefetch_size = int(os.getenv("VLLM_ASCEND_MLP_GATE_UP_PREFETCH_SIZE", MAX_PREFETCH_WEIGHT_SIZE))
down_prefetch_szie = int(os.getenv("VLLM_ASCEND_MLP_DOWN_PREFETCH_SIZE", MAX_PREFETCH_WEIGHT_SIZE))
self.weight_prefetch_config.set_mlp_pre_version_compatibale_config(
gate_up_prefetch_size, down_prefetch_szie
)
logger.info_once(
f"MLP weight prefetch enabled from env variable VLLM_ASCEND_ENABLE_PREFETCH_MLP."
f"gate_up_prefetch_size={gate_up_prefetch_size}, "
f"down_prefetch_szie={down_prefetch_szie}."
)
warnings.warn(
"VLLM_ASCEND_ENABLE_PREFETCH_MLP is deprecated and will be removed in a v0.16.0 version. "
"Please use weight_prefetch_config in additional-config for now instead.",
DeprecationWarning,
stacklevel=2,
)


class FinegrainedTPConfig:
"""
@@ -305,18 +327,28 @@ class WeightPrefetchConfig:
Configuration Object for weight_prefetch_config from additional_config
"""

mlp_pre_version_compatibale_config: dict = {}

prefetch_ratio: dict = {
"attn": {
"qkv": 1.0,
"o": 1.0,
},
"moe": {"gate_up": 0.8},
"mlp": {"gate_up": 1, "down": 1.0},
}

def __init__(self, weight_prefetch_config: dict):
self.enabled = weight_prefetch_config.get("enabled", False)
self.prefetch_ratio = weight_prefetch_config.get("prefetch_ratio", self.prefetch_ratio)

def set_mlp_pre_version_compatibale_config(self, gate_up_prefetch_size: int, down_prefetch_size: int):
config = {
"gate_up": gate_up_prefetch_size,
"down": down_prefetch_size,
}
self.mlp_pre_version_compatibale_config = config


class EplbConfig:
"""
14 changes: 2 additions & 12 deletions vllm_ascend/ascend_forward_context.py
@@ -117,18 +117,8 @@ def set_ascend_forward_context(
if has_layer_idx(model_instance):
forward_context.layer_idx = model_instance.model.start_layer

# TODO(rjg-lyh): refactor mlp weight prefetch method
# set for mlp weight prefetch
prefetch_mlp_enabled = (
envs_ascend.VLLM_ASCEND_ENABLE_PREFETCH_MLP
and forward_context.layer_idx is not None
and num_tokens is not None
and num_tokens < 500
)
if prefetch_mlp_enabled:
forward_context.prefetch_mlp_gate_up_proj = False
forward_context.prefetch_mlp_down_proj = False
forward_context.prefetch_mlp_enabled = prefetch_mlp_enabled
forward_context.prefetch_mlp_gate_up_proj = False
forward_context.prefetch_mlp_down_proj = False
forward_context.model_instance = model_instance
forward_context.is_draft_model = is_draft_model
