
[XPU] Automatically detect target platform as XPU in build.#37634

Merged
jikunshang merged 1 commit into vllm-project:main from ccrhx4:enable_xpu_autodetection_build
Mar 20, 2026

Conversation

Contributor

@ccrhx4 ccrhx4 commented Mar 20, 2026

Purpose

Previously, VLLM_TARGET_DEVICE=xpu had to be set explicitly to build vLLM for XPU. This patch automatically detects XPU as the target platform during the build.

Test Plan

pip install -r requirements/xpu.txt &&
pip install --no-build-isolation .

Test Result

@mergify mergify bot added the ci/build label Mar 20, 2026

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds automatic detection for the XPU platform during the build process. The implementation correctly adds a check for torch.version.xpu. However, the order of platform detection might lead to unexpected behavior. Currently, it prioritizes ROCm, then XPU, and finally CUDA. On a system with both NVIDIA and Intel GPUs, this would default to building for XPU, which might not be the user's intention, especially since CUDA is the primary supported platform for vLLM. I've suggested reordering the checks to prioritize CUDA, which seems more intuitive for most users.

Comment on lines 54 to 62
if torch.version.hip is not None:
    VLLM_TARGET_DEVICE = "rocm"
    logger.info("Auto-detected ROCm")
elif torch.version.xpu is not None:
    VLLM_TARGET_DEVICE = "xpu"
    logger.info("Auto-detected XPU")
elif torch.version.cuda is not None:
    VLLM_TARGET_DEVICE = "cuda"
    logger.info("Auto-detected CUDA")
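The detection order above can be sketched as a small standalone function. This is only an illustration: detect_target_device and the versions dict are hypothetical names, and the "cpu" fallback is an assumption; the real setup.py reads torch.version.hip / .xpu / .cuda directly.

```python
def detect_target_device(versions: dict) -> str:
    """Mirror the PR's auto-detection order: ROCm, then XPU, then CUDA.

    Each value in `versions` stands in for a torch.version field: a
    version string when that backend is compiled into the installed
    torch build, or None when it is absent.
    """
    if versions.get("hip") is not None:
        return "rocm"
    if versions.get("xpu") is not None:
        return "xpu"
    if versions.get("cuda") is not None:
        return "cuda"
    return "cpu"  # assumed fallback when no accelerator build is found

# An XPU-only torch build resolves to "xpu":
print(detect_target_device({"hip": None, "xpu": "2.1", "cuda": None}))  # -> xpu
```

Note that with this ordering, a torch build reporting both XPU and CUDA would resolve to "xpu", which is exactly the behavior the review comment below flags.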

Severity: high

The order of device detection prioritizes ROCm and XPU over CUDA. On a system with multiple GPU types (e.g., NVIDIA and Intel), this will cause vLLM to build for XPU by default, even if a more powerful NVIDIA GPU is available. Given that CUDA is the primary and most mature backend for vLLM, it should likely be prioritized in the auto-detection logic to provide a better out-of-the-box experience for users. I suggest reordering to check for CUDA first.

Suggested change
- if torch.version.hip is not None:
-     VLLM_TARGET_DEVICE = "rocm"
-     logger.info("Auto-detected ROCm")
- elif torch.version.xpu is not None:
-     VLLM_TARGET_DEVICE = "xpu"
-     logger.info("Auto-detected XPU")
- elif torch.version.cuda is not None:
-     VLLM_TARGET_DEVICE = "cuda"
-     logger.info("Auto-detected CUDA")
+ if torch.version.cuda is not None:
+     VLLM_TARGET_DEVICE = "cuda"
+     logger.info("Auto-detected CUDA")
+ elif torch.version.hip is not None:
+     VLLM_TARGET_DEVICE = "rocm"
+     logger.info("Auto-detected ROCm")
+ elif torch.version.xpu is not None:
+     VLLM_TARGET_DEVICE = "xpu"
+     logger.info("Auto-detected XPU")
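The effect of the suggested reordering can be shown with a small standalone sketch (function name, versions dict, and the "cpu" fallback are illustrative, not part of the actual setup.py): on a torch build that reports both CUDA and XPU, CUDA-first detection selects "cuda", whereas the merged ROCm/XPU-first order would select "xpu".

```python
def detect_target_device_cuda_first(versions: dict) -> str:
    """Reviewer's suggested priority: CUDA, then ROCm, then XPU."""
    if versions.get("cuda") is not None:
        return "cuda"
    if versions.get("hip") is not None:
        return "rocm"
    if versions.get("xpu") is not None:
        return "xpu"
    return "cpu"  # assumed fallback when no accelerator build is found

# With both CUDA and XPU present, CUDA now wins:
both = {"hip": None, "xpu": "2.1", "cuda": "12.4"}
print(detect_target_device_cuda_first(both))  # -> cuda
```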

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

Collaborator

@jikunshang jikunshang left a comment


LGTM. thanks for fixing!

@jikunshang jikunshang added the ready ONLY add when PR is ready to merge/full CI is needed label Mar 20, 2026
@jikunshang jikunshang merged commit 6951fcd into vllm-project:main Mar 20, 2026
137 of 138 checks passed
Signed-off-by: huanxing <huanxing.shen@intel.com>
chooper26 pushed a commit to intellistream/vllm-hust that referenced this pull request Mar 21, 2026

Labels

ci/build ready ONLY add when PR is ready to merge/full CI is needed


2 participants