[XPU] Automatically detect target platform as XPU in build.#37634
jikunshang merged 1 commit into vllm-project:main
Conversation
Code Review
This pull request adds automatic detection for the XPU platform during the build process. The implementation correctly adds a check for torch.version.xpu. However, the order of platform detection might lead to unexpected behavior. Currently, it prioritizes ROCm, then XPU, and finally CUDA. On a system with both NVIDIA and Intel GPUs, this would default to building for XPU, which might not be the user's intention, especially since CUDA is the primary supported platform for vLLM. I've suggested reordering the checks to prioritize CUDA, which seems more intuitive for most users.
```python
if torch.version.hip is not None:
    VLLM_TARGET_DEVICE = "rocm"
    logger.info("Auto-detected ROCm")
elif torch.version.xpu is not None:
    VLLM_TARGET_DEVICE = "xpu"
    logger.info("Auto-detected XPU")
elif torch.version.cuda is not None:
    VLLM_TARGET_DEVICE = "cuda"
    logger.info("Auto-detected CUDA")
```
The order of device detection prioritizes ROCm and XPU over CUDA. On a system with multiple GPU types (e.g., NVIDIA and Intel), this will cause vLLM to build for XPU by default, even if a more powerful NVIDIA GPU is available. Given that CUDA is the primary and most mature backend for vLLM, it should likely be prioritized in the auto-detection logic to provide a better out-of-the-box experience for users. I suggest reordering to check for CUDA first.
Current:

```python
if torch.version.hip is not None:
    VLLM_TARGET_DEVICE = "rocm"
    logger.info("Auto-detected ROCm")
elif torch.version.xpu is not None:
    VLLM_TARGET_DEVICE = "xpu"
    logger.info("Auto-detected XPU")
elif torch.version.cuda is not None:
    VLLM_TARGET_DEVICE = "cuda"
    logger.info("Auto-detected CUDA")
```

Suggested:

```python
if torch.version.cuda is not None:
    VLLM_TARGET_DEVICE = "cuda"
    logger.info("Auto-detected CUDA")
elif torch.version.hip is not None:
    VLLM_TARGET_DEVICE = "rocm"
    logger.info("Auto-detected ROCm")
elif torch.version.xpu is not None:
    VLLM_TARGET_DEVICE = "xpu"
    logger.info("Auto-detected XPU")
```
👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs would not trigger full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai. 🚀
jikunshang left a comment
LGTM. thanks for fixing!
Signed-off-by: huanxing <huanxing.shen@intel.com>
Purpose
Previously, `VLLM_TARGET_DEVICE=xpu` had to be set explicitly to build vLLM for XPU. This patch automatically detects XPU as the target platform during the build.
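A hedged sketch of how auto-detection can coexist with the explicit override: when `VLLM_TARGET_DEVICE` is set, it wins; otherwise detection runs. The `resolve_target_device` helper and the `SimpleNamespace` stand-in for `torch.version` are assumptions made so the example runs without torch installed; the env-var precedence is also an assumption based on the pre-existing override behavior, not confirmed by this PR.

```python
import os
from types import SimpleNamespace

# Stand-in for torch.version so this sketch runs without torch installed;
# in the real build these attributes come from `import torch`.
fake_torch_version = SimpleNamespace(cuda=None, hip=None, xpu="2025.0")

def resolve_target_device(version=fake_torch_version) -> str:
    # Assumption: an explicit VLLM_TARGET_DEVICE still takes precedence,
    # as it did before this PR.
    explicit = os.environ.get("VLLM_TARGET_DEVICE")
    if explicit:
        return explicit
    # Otherwise fall back to auto-detection in the merged order.
    if version.hip is not None:
        return "rocm"
    if version.xpu is not None:
        return "xpu"
    if version.cuda is not None:
        return "cuda"
    return "cuda"  # default when nothing is detected (assumption)

os.environ.pop("VLLM_TARGET_DEVICE", None)
print(resolve_target_device())  # xpu (auto-detected from the fake version info)

os.environ["VLLM_TARGET_DEVICE"] = "cpu"
print(resolve_target_device())  # cpu (explicit override wins)
```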
Test Plan
```shell
pip install -r requirements/xpu.txt && \
pip install --no-build-isolation .
```
Test Result