
ci: Use stable Torch Release for cu130 #2174

Merged
yzh119 merged 2 commits into flashinfer-ai:main from bkryu:container_build_trigger
Dec 4, 2025

Conversation

@bkryu (Collaborator) commented Dec 4, 2025

📌 Description

Previous PR #2167's container build installs torch 2.10.0.dev20251203 because the cu130 Dockerfile pulls unstable nightlies. We are seeing unit-test failures on the torch 2.10 dev build, where `torch.einsum` triggers a cuBLAS error:

```
(py312) root@cc6b2de90050:/flashinfer# pip list | grep torch
pytorch-triton         3.5.1+gitbfeb0668
torch                  2.10.0.dev20251203+cu130
(py312) root@cc6b2de90050:/flashinfer# python3
>>> import torch
>>> query = torch.randn(1, 4, 128, device="cuda", dtype=torch.float16)
>>> key = torch.randn(1, 4, 128, device="cuda", dtype=torch.float16)
>>> scores = torch.einsum("qhd,khd->qkh", query.float(), key.float())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/envs/py312/lib/python3.12/site-packages/torch/functional.py", line 373, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasSgemmStridedBatched( handle, opa, opb, m, n, k, &alpha, a, lda, stridea, b, ldb, strideb, &beta, c, ldc, stridec, num_batches)`
```
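For context, the failing contraction `qhd,khd->qkh` is just a dot product over the hidden dimension `d` for every (query, key, head) triple — nothing exotic, which is why a cuBLAS failure here points at the nightly build rather than the call site. A pure-Python sketch of the same semantics (shapes matching the snippet above, written for illustration only):

```python
import random

# einsum("qhd,khd->qkh"): scores[q][k][h] = sum over d of query[q][h][d] * key[k][h][d]
Q, K, H, D = 1, 1, 4, 128  # matches the (1, 4, 128) tensors above
query = [[[random.random() for _ in range(D)] for _ in range(H)] for _ in range(Q)]
key = [[[random.random() for _ in range(D)] for _ in range(H)] for _ in range(K)]

scores = [[[sum(query[q][h][d] * key[k][h][d] for d in range(D))
            for h in range(H)]
           for k in range(K)]
          for q in range(Q)]

print(len(scores), len(scores[0]), len(scores[0][0]))  # 1 1 4
```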

As PyTorch has started releasing stable 2.9.1 wheels for cu130, there is no longer a need to use nightlies. Local testing confirms that 2.9.1 resolves the `einsum` failures:

```
(py312) root@fd986dc62859:/flashinfer# pip3 install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu130 --no-deps
Looking in indexes: https://download.pytorch.org/whl/cu130
Collecting torch
  Downloading https://download.pytorch.org/whl/cu130/torch-2.9.1%2Bcu130-cp312-cp312-manylinux_2_28_x86_64.whl.metadata (30 kB)
Downloading https://download.pytorch.org/whl/cu130/torch-2.9.1%2Bcu130-cp312-cp312-manylinux_2_28_x86_64.whl (612.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 612.6/612.6 MB 215.8 MB/s  0:00:01
Installing collected packages: torch
  Attempting uninstall: torch
    Found existing installation: torch 2.10.0.dev20251203+cu130
    Uninstalling torch-2.10.0.dev20251203+cu130:
      Successfully uninstalled torch-2.10.0.dev20251203+cu130
Successfully installed torch-2.9.1+cu130
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager, possibly rendering your system unusable. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv. Use the --root-user-action option if you know what you are doing and want to suppress this warning.
(py312) root@fd986dc62859:/flashinfer# python3
Python 3.12.11 | packaged by conda-forge | (main, Jun  4 2025, 14:45:31) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> query = torch.randn(1, 4, 128, device="cuda", dtype=torch.float16)
>>> key = torch.randn(1, 4, 128, device="cuda", dtype=torch.float16)
>>> scores = torch.einsum("qhd,khd->qkh", query.float(), key.float())
```

See [this line](https://github.com/flashinfer-ai/flashinfer/actions/runs/19940898985/job/57178116127?pr=2174#step:6:352) in the release container build job for cu130, where the stable 2.9.1 release is fetched and installed.
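A quick way to sanity-check which flavor got installed is to look at the version string: stable wheels look like `2.9.1+cu130`, nightlies like `2.10.0.dev20251203+cu130`. A minimal stdlib-only sketch (the helper name is illustrative, not part of the repo):

```python
import re

def is_stable_release(version: str) -> bool:
    """True for stable wheels like '2.9.1+cu130',
    False for nightly dev builds like '2.10.0.dev20251203+cu130'."""
    base = version.split("+", 1)[0]  # drop the '+cu130' local build tag
    # stable releases are plain dotted integers; dev builds carry a '.devYYYYMMDD' segment
    return re.fullmatch(r"\d+(\.\d+)*", base) is not None

print(is_stable_release("2.9.1+cu130"))               # True
print(is_stable_release("2.10.0.dev20251203+cu130"))  # False
```

In practice `torch.__version__` would be passed in; the regex follows the PEP 440 shape of the two versions shown in the transcripts above.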

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
  • I have installed the hooks with `pre-commit install`.
  • I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Chores
    • Improved the package installation process for containerized deployments to enhance reliability and consistency.


@coderabbitai bot (Contributor) commented Dec 4, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

These changes update two CUDA 13.0-specific Dockerfiles to use the stable cu130 package channel instead of the nightly build variant. The installation script argument is modified in both the production and development Dockerfile configurations, affecting PyTorch/CUDA stack selection during image build.

Changes

| Cohort / File(s) | Change Summary |
| --- | --- |
| Docker CUDA 13.0 build configuration: `docker/Dockerfile.cu130`, `docker/Dockerfile.cu130.dev` | Updated PyTorch/CUDA installation from the `nightly/cu130` to the `cu130` package channel |
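Concretely, the switch amounts to pointing pip at the stable wheel index instead of the nightly one (the `CUDA_VERSION` variable matches the install script quoted in the review below; the `echo` form is only to show the resulting URLs):

```shell
CUDA_VERSION=cu130
# previous behaviour: nightly wheel index
echo "https://download.pytorch.org/whl/nightly/${CUDA_VERSION}"
# this PR: stable wheel index
echo "https://download.pytorch.org/whl/${CUDA_VERSION}"
```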

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

  • Verify that switching from nightly/cu130 to cu130 reflects the intended packaging strategy (stable vs. experimental builds)
  • Confirm both development and production variants should use the same stable channel

Poem

🐰 From nightly dreams to stable days,
Our Docker builds now take the right ways!
cu130 strong, no more chase,
Building CUDA with steadfast grace. ✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |
| Title check | ✅ Passed | The PR title 'ci: Use stable Torch Release for cu130' describes switching from nightly to stable cu130 packages, which aligns with the actual changes in both Dockerfiles that replace 'nightly/cu130' with 'cu130'. |
| Description check | ✅ Passed | The pull request description follows the template structure with detailed context about the PyTorch version issue, root cause analysis, and verification of the fix. |


@gemini-code-assist bot (Contributor) commented

Summary of Changes

Hello @bkryu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request serves as a temporary measure to re-trigger the CI container build process. Its main purpose is to debug an observed anomaly in a prior build, where an incorrect development version of PyTorch was installed. The changes introduce a diagnostic --dry-run command to the PyTorch installation script within the Docker build process, aiming to pinpoint the root cause of the unexpected package installation.

Highlights

  • CI Trigger: This pull request is a dummy PR created specifically to trigger a new CI container build.
  • Debugging PyTorch Installation: The primary goal is to investigate and debug an issue from a previous PR (ci: Install CUDA version specified torch first during container building. #2167) where an unexpected development version of PyTorch (torch 2.10.0.dev20251203) was being installed in the container build.
  • Installation Script Modification: The `install_python_packages.sh` script was modified to include a `pip3 install torch --dry-run` command, intended to help diagnose the exact behavior of the PyTorch installation process without making permanent changes.

@gemini-code-assist bot left a comment


Code Review

This pull request appears to be for debugging a CI issue related to torch installation. The changes in docker/install/install_python_packages.sh add a --dry-run command and remove --force-reinstall. While the --dry-run is useful for debugging, it makes the installation process inefficient by running dependency resolution twice. More importantly, removing --force-reinstall is risky as it may fail to replace a pre-existing, incorrect torch version. I've suggested restoring the single-line installation with --force-reinstall for robustness. As a general note, the underlying issue might also be related to torch being present in requirements.txt, which could cause version conflicts during the subsequent installation step.

Comment on lines +27 to +28:

```shell
pip3 install torch --index-url https://download.pytorch.org/whl/${CUDA_VERSION} --dry-run
pip3 install torch --index-url https://download.pytorch.org/whl/${CUDA_VERSION}
```

Severity: high

The use of --dry-run is helpful for debugging, but this change introduces an inefficiency and a potential correctness issue. The pip command is executed twice, leading to redundant dependency resolution. More critically, removing the --force-reinstall flag may prevent the correct CUDA-specific version of torch from being installed if another version is already present in the environment. To ensure the installation is both correct and efficient, I recommend using a single command that includes --force-reinstall.

Suggested change:

```diff
-pip3 install torch --index-url https://download.pytorch.org/whl/${CUDA_VERSION} --dry-run
-pip3 install torch --index-url https://download.pytorch.org/whl/${CUDA_VERSION}
+pip3 install --force-reinstall torch --index-url https://download.pytorch.org/whl/${CUDA_VERSION}
```

@bkryu bkryu changed the title from "Do not merge. Dummy PR to trigger CI container build" to "ci: Use stable Torch Release for cu130" Dec 4, 2025
@yzh119 yzh119 enabled auto-merge (squash) December 4, 2025 19:23
@yzh119 yzh119 merged commit cdc5fb7 into flashinfer-ai:main Dec 4, 2025
15 checks passed
@bkryu bkryu deleted the container_build_trigger branch December 5, 2025 18:34
BingooYang pushed a commit to BingooYang/flashinfer that referenced this pull request Mar 13, 2026