Revert "skip HPU graphs for long prefills" (#850)#888

Merged
mgawarkiewicz-intel merged 1 commit into releases/v0.14.1 from adobrzyn/revert_780_for_release0141 on Jan 28, 2026
Conversation

@adobrzyn
Collaborator

Reverts #780

---------

Signed-off-by: Agata Dobrzyniewicz <adobrzyniewicz@habana.ai>
Co-authored-by: Chendi.Xue <chendi.xue@intel.com>
Copilot AI review requested due to automatic review settings on January 27, 2026 at 08:46
Contributor

Copilot AI left a comment


Pull request overview

This PR reverts a previous change that skipped HPU graphs for long prefills (#780). The revert simplifies the graph capture decision logic and modifies test configurations.

Changes:

  • Reverted the logic for determining when to bypass HPU graphs, replacing a complex condition involving sequence length and batched tokens with a simpler check based on max_cudagraph_capture_size
  • Updated test configurations by reducing max-model-len in performance tests and adding it to GSM8K tests
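To illustrate the shape of the change, here is a hedged sketch of the decision logic before and after the revert. Function and variable names (other than `max_cudagraph_capture_size` and `max_graph_capture_tokens`, which the PR description mentions) are illustrative and are not the actual `vllm_gaudi` implementation:

```python
# Illustrative sketch only; not the real hpu_model_runner.py code.

def should_skip_hpu_graph_pre_revert(seq_len: int,
                                     num_batched_tokens: int,
                                     max_graph_capture_tokens: int) -> bool:
    # Before the revert (#780): long prefills bypassed HPU graphs based on
    # a compound condition over sequence length and batched token count.
    return (seq_len > max_graph_capture_tokens
            or num_batched_tokens > max_graph_capture_tokens)


def should_skip_hpu_graph_post_revert(num_tokens: int,
                                      max_cudagraph_capture_size: int) -> bool:
    # After the revert: a single threshold check against the largest
    # captured graph size.
    return num_tokens > max_cudagraph_capture_size
```

The post-revert form drops the separate `max_graph_capture_tokens` bookkeeping entirely, which is why the variable could be removed from the runner.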

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

| File | Description |
| --- | --- |
| vllm_gaudi/v1/worker/hpu_model_runner.py | Simplified graph capture logic and removed the max_graph_capture_tokens variable |
| tests/full_tests/ci_perf_tests.sh | Reduced max-model-len from 32768 to 16384 |
| tests/full_tests/ci_gsm8k_tests.sh | Added a max-model-len parameter (131072) to the Qwen3 MOE test |


Comment thread vllm_gaudi/v1/worker/hpu_model_runner.py
Comment thread vllm_gaudi/v1/worker/hpu_model_runner.py
@github-actions

✅ CI Passed

All checks passed successfully against the following vllm commit:
d7de043d55d1dd629554467e23874097e1c48993

@mgawarkiewicz-intel mgawarkiewicz-intel merged commit c66a038 into releases/v0.14.1 Jan 28, 2026
53 checks passed
slokesha pushed a commit to libinta/vllm-gaudi that referenced this pull request Jan 29, 2026
…roject#888)

Reverts vllm-project#780

---------

Signed-off-by: Agata Dobrzyniewicz <adobrzyniewicz@habana.ai>
Co-authored-by: Chendi.Xue <chendi.xue@intel.com>
Signed-off-by: slokesha <slokeshappa@habana.ai>
czhu15 added a commit that referenced this pull request Feb 10, 2026
yangulei added a commit that referenced this pull request Feb 24, 2026

3 participants