[Perf] Improve MLA multistream performance #1353
Changes from all commits
Commit 891bd87
Check warnings:
- vllm_ascend/attention/mla_v1.py#L589-L590
- vllm_ascend/attention/mla_v1.py#L881
- vllm_ascend/attention/mla_v1.py#L892
- vllm_ascend/attention/mla_v1.py#L1025
- vllm_ascend/attention/mla_v1.py#L1098
- vllm_ascend/attention/mla_v1.py#L1101
- vllm_ascend/attention/mla_v1.py#L1104
- vllm_ascend/attention/mla_v1.py#L1116
- vllm_ascend/models/deepseek_v2.py#L569
- vllm_ascend/models/deepseek_v2.py#L573-L574
- vllm_ascend/utils.py#L425-L430