[MM][Perf] Merge Q/K split to simplify AscendApplyRotaryEmb for better performance#5799
wangxiyuan merged 2 commits into vllm-project:main
Conversation
Signed-off-by: shen-shanshan <467638484@qq.com>
Code Review
This pull request refactors the AscendApplyRotaryEmb operator to improve performance. It achieves this by leveraging the base class's _pre_process and _post_process methods from upstream vLLM, which reduces code duplication. More importantly, it merges the Q/K split before applying rotary embeddings, allowing for a single call to torch_npu.npu_rotary_mul instead of two. This simplifies the logic and, as shown by the benchmarks, yields a significant performance improvement. The changes are well-contained and the removal of the einops import is correct as its direct usage has been eliminated. The code looks solid.
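The merged call can be illustrated with a small NumPy sketch (names and the rotate-half rotary formulation are illustrative assumptions, not the actual `torch_npu.npu_rotary_mul` API): because the rotation is applied element-wise per position, stacking Q and K along the head axis and rotating once is mathematically identical to two separate kernel calls.

```python
import numpy as np

def rotary_mul(x, cos, sin):
    # Rotate-half rotary formulation: out = x * cos + rotate_half(x) * sin
    x1, x2 = np.split(x, 2, axis=-1)
    rotated = np.concatenate([-x2, x1], axis=-1)
    return x * cos + rotated * sin

rng = np.random.default_rng(0)
seq, heads, dim = 4, 2, 8
q = rng.standard_normal((seq, heads, dim))
k = rng.standard_normal((seq, heads, dim))
theta = rng.standard_normal((seq, 1, dim // 2))
cos = np.cos(np.concatenate([theta, theta], axis=-1))
sin = np.sin(np.concatenate([theta, theta], axis=-1))

# Old path: two separate rotary calls, one for Q and one for K.
q_old, k_old = rotary_mul(q, cos, sin), rotary_mul(k, cos, sin)

# New path: stack Q/K along the head axis, rotate once, split back.
qk = np.concatenate([q, k], axis=1)
q_new, k_new = np.split(rotary_mul(qk, cos, sin), 2, axis=1)

assert np.allclose(q_old, q_new) and np.allclose(k_old, k_new)
```

On an NPU the saving comes from launching one fused kernel instead of two, not from the math itself.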
CC @wangxiyuan Full CI passed.
…to eplb_refactor * 'main' of https://github.com/vllm-project/vllm-ascend:
- [CI] Fix lint CI (vllm-project#5880)
- [Feature] implement eagle spec decoding for model runner v2 (vllm-project#5840)
- [Quantization] Support compressed tensors moe w8a8 int8 dynamic weight (vllm-project#5718)
- [EPLB][Bugfix] Get expert map from layers (vllm-project#5817)
- [Bugfix] Fixed an accuracy problem of sp with eagle3 (vllm-project#5816)
- [P/D] bugfix for p node force free requset (vllm-project#5431)
- [Lint]Style: Convert `example` to `ruff format` (vllm-project#5863)
- [Main2Main] Upgrade vllm commit to 0109 (vllm-project#5752)
- [Bugfix][P/D] fix layerwise connector for decoder tp size > num kv heads (vllm-project#5846)
- [Test][e2e][LoRA] Add more e2e tests to cover scenarios of LoRA (vllm-project#4075)
- [CustomOp][Perf] Merge Q/K split to simplify AscendApplyRotaryEmb for better performance (vllm-project#5799)
- [Lint]Style: Convert `root`, `benchmarks`, `tools` and `docs` to `ruff format` (vllm-project#5843)
- enable ep32 for dispatch_ffn_combine (vllm-project#5787)
… better performance (vllm-project#5799)

### What this PR does / why we need it?

- Use the upstream util functions (`_pre_process()` and `_post_process()`) to reduce redundant code. (Find more details at https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/layers/rotary_embedding/common.py#L184-L213)
- Merge the Q/K split to simplify the logic of calling `torch_npu.npu_rotary_mul()` for better performance (TPOT has been reduced by **6.22%**).

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

#### ✅ Functional test

Launch the server:

```bash
export VLLM_USE_MODELSCOPE=True
vllm serve /root/.cache/modelscope/hub/models/Qwen/Qwen2.5-VL-7B-Instruct \
  --dtype bfloat16 \
  --limit-mm-per-prompt '{"image": 1}' \
  --max-model-len 16384 \
  --max-num-batched-tokens 16384
```

Query the server:

```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "/root/.cache/modelscope/hub/models/Qwen/Qwen2.5-VL-7B-Instruct",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": "https://modelscope.oss-cn-beijing.aliyuncs.com/resource/qwen.png"}},
        {"type": "text", "text": "What is the text in the illustrate? How does it look?"}
      ]}
    ],
    "max_tokens": 100
  }'
```

Output:

```
{"id":"chatcmpl-b2911ab6989ef098","object":"chat.completion","created":1768202780,"model":"/root/.cache/modelscope/hub/models/Qwen/Qwen2.5-VL-7B-Instruct","choices":[{"index":0,"message":{"role":"assistant","content":"The text in the illustration is \"TONGYI Qwen.\" The word \"TONGYI\" is written in blue, and \"Qwen\" is written in gray. The text appears to be part of a logo or branding design, with \"TONGYI\" being more prominent and \"Qwen\" being slightly smaller and positioned below it. The font style is modern and clean, with \"TONGYI\" having a slightly bolder appearance compared to \"Qwen.\"","refusal":null,"annotations":null,"audio":null,"function_call":null,"tool_calls":[],"reasoning":null,"reasoning_content":null},"logprobs":null,"finish_reason":"length","stop_reason":null,"token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":78,"total_tokens":178,"completion_tokens":100,"prompt_tokens_details":null},"prompt_logprobs":null,"prompt_token_ids":null,"kv_transfer_params":null}
```

#### ✅ Benchmark

Run:

```bash
export VLLM_USE_MODELSCOPE=False
export HF_ENDPOINT="https://hf-mirror.com"
vllm bench serve \
  --model /root/.cache/modelscope/hub/models/Qwen/Qwen2.5-VL-7B-Instruct \
  --backend openai-chat \
  --endpoint /v1/chat/completions \
  --dataset-name hf \
  --hf-split train \
  --dataset-path lmarena-ai/vision-arena-bench-v0.1 \
  --num-prompts 10 \
  --no-stream
```

Before this PR:

```
============ Serving Benchmark Result ============
Successful requests:                     10
Failed requests:                         0
Benchmark duration (s):                  5.96
Total input tokens:                      7191
Total generated tokens:                  996
Request throughput (req/s):              1.68
Output token throughput (tok/s):         167.05
Peak output token throughput (tok/s):    261.00
Peak concurrent requests:                10.00
Total token throughput (tok/s):          1373.16
---------------Time to First Token----------------
Mean TTFT (ms):                          964.43
Median TTFT (ms):                        858.48
P99 TTFT (ms):                           1691.45
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          63.08
Median TPOT (ms):                        40.86
P99 TPOT (ms):                           241.30
---------------Inter-token Latency----------------
Mean ITL (ms):                           40.16
Median ITL (ms):                         33.61
P99 ITL (ms):                            250.30
==================================================
```

After this PR:

```
============ Serving Benchmark Result ============
Successful requests:                     10
Failed requests:                         0
Benchmark duration (s):                  5.71
Total input tokens:                      7191
Total generated tokens:                  996
Request throughput (req/s):              1.75
Output token throughput (tok/s):         174.45
Peak output token throughput (tok/s):    279.00
Peak concurrent requests:                10.00
Total token throughput (tok/s):          1433.95
---------------Time to First Token----------------
Mean TTFT (ms):                          992.14
Median TTFT (ms):                        938.30
P99 TTFT (ms):                           1728.71
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          59.16
Median TPOT (ms):                        37.65
P99 TPOT (ms):                           234.89
---------------Inter-token Latency----------------
Mean ITL (ms):                           36.55
Median ITL (ms):                         30.73
P99 ITL (ms):                            170.72
==================================================
```

- vLLM version: v0.13.0
- vLLM main: vllm-project/vllm@2f4e654

---------

Signed-off-by: shen-shanshan <467638484@qq.com>
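As a sanity check on the headline number, the reduction follows directly from the mean TPOT values in the two benchmark runs (a back-of-the-envelope sketch; the quoted 6.22% was presumably computed from the author's own runs, so a small rounding difference is expected):

```python
# Mean TPOT (ms) taken from the before/after benchmark output above.
tpot_before = 63.08
tpot_after = 59.16

reduction = (tpot_before - tpot_after) / tpot_before * 100
print(f"TPOT reduced by {reduction:.2f}%")  # ~6.2%, consistent with the quoted 6.22%
```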