
ensure 'torch' CUDA wheels are installed in CI#2279

Merged
rapids-bot[bot] merged 9 commits into rapidsai:release/26.04 from jameslamb:torch-testing
Mar 13, 2026

Conversation

@jameslamb
Member

Description

Contributes to rapidsai/build-planning#256

Broken out from #2270

Proposes a stricter pattern for installing torch wheels, to prevent bugs of the form "accidentally used a CPU-only torch from pypi.org". This should help us to catch compatibility issues, improving release confidence.

Other small changes:

  • splits torch wheel testing into "oldest" (PyTorch 2.9) and "latest" (PyTorch 2.10)
  • introduces a require_gpu_pytorch matrix filter so conda jobs can explicitly request pytorch-gpu (to similarly ensure solvers don't fall back to the CPU-only variant)
  • appends rapids-generate-pip-constraint output to the file that PIP_CONSTRAINT points to
    • (to reduce duplication and the risk of failing to apply constraints)
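The "stricter pattern" above can be illustrated with a small, hypothetical post-install guard (the variable and wheel name here are examples, not the PR's actual code): CUDA torch wheels carry a `+cuXYZ` local version, CPU-only wheels from pypi.org do not, so a check like this fails loudly if the wrong variant slipped in.

```shell
# Hypothetical guard sketch: CUDA torch wheels carry a '+cu' local version tag,
# CPU-only wheels from pypi.org do not.
installed_torch="torch-2.10.0+cu130"   # illustrative; e.g. parsed from 'pip list' output
case "${installed_torch}" in
  *+cu*) echo "OK: CUDA variant installed (${installed_torch})" ;;
  *)     echo "ERROR: CPU-only torch detected (${installed_torch})" >&2; exit 1 ;;
esac
```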

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

@jameslamb jameslamb added the non-breaking (Non-breaking change) and improvement (Improvement / enhancement to an existing function) labels Mar 6, 2026
@copy-pr-bot

copy-pr-bot bot commented Mar 6, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.


@jameslamb
Member Author

/ok to test

Comment on lines +423 to +426
# avoid pulling in 'torch' in places like DLFW builds that prefer to install it other ways
- matrix:
no_pytorch: "true"
packages:
Member Author


This follows the pattern @trxcllnt has been introducing across RAPIDS: rapidsai/cugraph-gnn#421

I think rmm never needed patches for DLFW, and so was missed in that round of PRs, because its depends_on_pytorch group doesn't end up in test_python or similar commonly-used lists.

@jameslamb
Member Author

/ok to test

@coderabbitai

coderabbitai bot commented Mar 6, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Summary by CodeRabbit

  • Chores
    • Added an automated CI tool to download CUDA-specific PyTorch wheels.
    • Centralized pip constraint handling to use an environment-driven constraint source.
    • Reworked dependency declarations to support multiple PyTorch/CUDA matrices and added a new torch-only dependency group; removed the previous test-wheels grouping.
  • Tests
    • Updated GPU test flows to install downloaded CUDA-specific wheels, added a GPU requirement flag in test matrices, and adjusted skip logic for newer CUDA versions.

Walkthrough

Adds a new CI script to download CUDA-specific PyTorch wheels, updates CI test scripts to use an environment-driven PIP constraint and to download/use CUDA wheels for PyTorch tests, and restructures dependencies.yaml to replace a simple PyTorch entry with a multi-matrix depends_on_pytorch and a new torch_only group.
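Based on the matrix string visible in the script's diff later in this review, the selector the downloader passes to rapids-dependency-file-generator can be sketched like this (the environment variable values below are illustrative, not what CI actually sets):

```shell
# Illustrative values; in CI these environment variables are provided by the job.
RAPIDS_CUDA_VERSION="13.0.2"
RAPIDS_PY_VERSION="3.12"
RAPIDS_DEPENDENCIES="latest"

# '%.*' drops the patch component, so 13.0.2 -> 13.0
matrix="cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION};dependencies=${RAPIDS_DEPENDENCIES};require_gpu_pytorch=true"
echo "${matrix}"
```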

Changes

Cohort / File(s) Summary
PyTorch wheel downloader
ci/download-torch-wheels.sh
New executable script that generates torch-specific constraints via rapids-dependency-file-generator and downloads CUDA-variant PyTorch wheels with rapids-pip-retry into a specified directory.
CI test scripts
ci/test_python_integrations.sh, ci/test_wheel.sh, ci/test_wheel_integrations.sh
Switch constraint generation/usage from a fixed ./constraints.txt to environment-driven ${PIP_CONSTRAINT}, add ;require_gpu=true to the PyTorch GPU matrix entry, and refactor the GPU integration flow to download/use CUDA-specific PyTorch wheels for tests.
Dependency configuration
dependencies.yaml
Remove test_wheels_pytorch file-group, add new torch_only group, and replace the simple depends_on_pytorch common block with a detailed specific multi-matrix declaration covering CUDA versions, GPU/non-GPU variants, and multiple output types (requirements, pyproject, conda).
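A hedged sketch of what a multi-matrix depends_on_pytorch declaration of this shape can look like in dependencies.yaml (keys and version pins are illustrative, pieced together from the CI logs later in this thread, not the PR's exact contents):

```yaml
depends_on_pytorch:
  specific:
    - output_types: [requirements, pyproject]
      matrices:
        - matrix: {cuda: "12.9", dependencies: "oldest", require_gpu: "true"}
          packages:
            - --extra-index-url=https://download.pytorch.org/whl/cu129
            - torch==2.9.0+cu129
        - matrix: {cuda: "12.9", require_gpu: "true"}
          packages:
            - --extra-index-url=https://download.pytorch.org/whl/cu129
            - torch==2.10.0+cu129
    - output_types: [conda]
      matrices:
        - matrix: {require_gpu_pytorch: "true"}
          packages:
            - pytorch-gpu>=2.9
        - matrix:
          packages:
            - pytorch>=2.9
```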

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: 3 passed

  • Title check: Passed. The title directly summarizes the main change (ensuring CUDA wheels for torch are installed in CI), which aligns with the primary objective across all modified files.
  • Description check: Passed. The description is directly related to the changeset, explaining the motivation (preventing CPU-only torch), the specific improvements (matrix splits, require_gpu_pytorch filter, constraint handling), and linking to relevant issues and PRs.
  • Docstring Coverage: Passed. No functions found in the changed files to evaluate docstring coverage; skipping the check.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
ci/download-torch-wheels.sh (1)

27-40: Keep the generated constraint file inside TORCH_WHEEL_DIR.

Writing torch-constraints.txt to ./ leaves shared state in the working tree even though the caller already gives this helper a per-run temp directory. Keeping the file under ${TORCH_WHEEL_DIR} makes the whole download step self-contained and avoids cross-run collisions.

♻️ Suggested refactor
+TORCH_CONSTRAINTS="${TORCH_WHEEL_DIR}/torch-constraints.txt"
+
 rapids-dependency-file-generator \
     --output requirements \
     --file-key "torch_only" \
     --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION};dependencies=${RAPIDS_DEPENDENCIES};require_gpu_pytorch=true" \
-| tee ./torch-constraints.txt
+| tee "${TORCH_CONSTRAINTS}"
 
 rapids-pip-retry download \
   --isolated \
   --prefer-binary \
   --no-deps \
   -d "${TORCH_WHEEL_DIR}" \
   --constraint "${PIP_CONSTRAINT}" \
-  --constraint ./torch-constraints.txt \
+  --constraint "${TORCH_CONSTRAINTS}" \
   'torch'
dependencies.yaml (1)

401-409: Mirror the oldest/latest split on the conda path or explain why conda can use relaxed version constraints.

The caller passes dependencies=${RAPIDS_DEPENDENCIES} to the generator for PyTorch conda, but the conda matrices ignore this selector. The requirements (wheel) path pins specific PyTorch versions for dependencies=oldest (e.g., torch==2.9.0+cu129), whereas the conda path always uses relaxed constraints (pytorch-gpu>=2.9). Either add the same oldest/latest branching to the conda matrices, or document why conda can rely on the solver to handle any PyTorch >=2.9 version safely while wheels cannot.

Actionable comment:

In ci/test_wheel_integrations.sh (around lines 40-45): the skip message is out of sync with the gate that checks CUDA_MAJOR/CUDA_MINOR. Update the printed message to say "CUDA 12.9+ (for 12.x) or 13.0" instead of "12.6-12.9 or 13.0" so it accurately reflects the condition in the if-block.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 0ef2a43d-0a13-4967-a6ef-94f870a18ed8

📥 Commits

Reviewing files that changed from the base of the PR and between d1563fc and 5d328ff.

📒 Files selected for processing (5)
  • ci/download-torch-wheels.sh
  • ci/test_python_integrations.sh
  • ci/test_wheel.sh
  • ci/test_wheel_integrations.sh
  • dependencies.yaml


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Actionable comment:

In ci/test_wheel_integrations.sh (around lines 40-45): the comment stating "requires CUDA 12.8+" is now inaccurate relative to the conditional, which permits CUDA 12.9+ for 12.x and only 13.0 for 13.x. Update the comment above the gating if-block (the { [ "${CUDA_MAJOR}" -eq 12 ] && [ "${CUDA_MINOR}" -ge 9 ]; } || { [ "${CUDA_MAJOR}" -eq 13 ] && [ "${CUDA_MINOR}" -le 0 ]; } check) so future triage matches the condition.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 23206a42-586c-4680-b0ff-0c16836fa1f9

📥 Commits

Reviewing files that changed from the base of the PR and between 5d328ff and 9eefdea.

📒 Files selected for processing (1)
  • ci/test_wheel_integrations.sh

-v \
"${PIP_INSTALL_SHARED_ARGS[@]}" \
-r test-pytorch-requirements.txt \
"${TORCH_WHEEL_DIR}"/torch-*.whl
Member Author


It looks to me like this is working and pulling in what we want!

CUDA 12.2.2, Python 3.11, arm64, ubuntu22.04, a100, latest-driver, latest-deps

(build link)

  RAPIDS logger » [03/06/26 19:41:00]
  ┌──────────────────────────────────────────────────────────────────────────┐
  |    Skipping PyTorch tests (requires CUDA 12.9+ or 13.0, found 12.2.2)    |
  └──────────────────────────────────────────────────────────────────────────┘

CUDA 12.9.1, Python 3.11, amd64, ubuntu22.04, l4, latest-driver, oldest-deps

(build link)

  Successfully installed ... torch-2.9.0+cu129 ...

CUDA 12.9.1, Python 3.14, amd64, ubuntu24.04, h100, latest-driver, latest-deps

(build link)

  Successfully installed ... torch-2.10.0+cu129 ...

CUDA 13.0.2, Python 3.12, amd64, ubuntu24.04, l4, latest-driver, latest-deps

(build link)

  Successfully installed ... torch-2.10.0+cu130 ...

CUDA 13.0.2, Python 3.12, arm64, rockylinux8, l4, latest-driver, latest-deps

(build link)

  Successfully installed ... torch-2.10.0+cu130 ...

CUDA 13.1.1, Python 3.13, amd64, rockylinux8, rtxpro6000, latest-driver, latest-deps

(build link)

  RAPIDS logger » [03/06/26 19:35:46]
  ┌──────────────────────────────────────────────────────────────────────────┐
  |    Skipping PyTorch tests (requires CUDA 12.9+ or 13.0, found 13.1.1)    |
  └──────────────────────────────────────────────────────────────────────────┘

CUDA 13.1.1, Python 3.14, amd64, ubuntu24.04, rtxpro6000, latest-driver, latest-deps

(build link)

  RAPIDS logger » [03/06/26 19:34:37]
  ┌──────────────────────────────────────────────────────────────────────────┐
  |    Skipping PyTorch tests (requires CUDA 12.9+ or 13.0, found 13.1.1)    |
  └──────────────────────────────────────────────────────────────────────────┘

CUDA 13.1.1, Python 3.14, arm64, ubuntu24.04, l4, latest-driver, latest-deps

(build link)

  RAPIDS logger » [03/06/26 19:36:06]
  ┌──────────────────────────────────────────────────────────────────────────┐
  |    Skipping PyTorch tests (requires CUDA 12.9+ or 13.0, found 13.1.1)    |
  └──────────────────────────────────────────────────────────────────────────┘

Collaborator


Why can we support 12.9+ (with the "plus") but not 13.0+? Apologies if this has been covered elsewhere in the discussions leading up to this point.

I see PyTorch indices for cu126, cu128, and cu130 so I'm confused by why we have 12.9 running tests but not 13.1.

Member Author


This log message wording and condition were wrong / imprecise.

If a hypothetical CTK 12.10 came out tomorrow, none of this configuration would automatically work for it. The message should more precisely read:

requires CUDA 12.9 or 13.0

And the condition guarding it should be

if \
    { [ "${CUDA_MAJOR}" -eq 12 ] && [ "${CUDA_MINOR}" -eq 9 ]; } \
    || { [ "${CUDA_MAJOR}" -eq 13 ] && [ "${CUDA_MINOR}" -eq 0 ]; }; \

I'll fix that here.
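Putting the corrected condition together with version parsing, a self-contained sketch of the gate (deriving CUDA_MAJOR/CUDA_MINOR from RAPIDS_CUDA_VERSION is an assumption about how the script works, and the example version is illustrative):

```shell
RAPIDS_CUDA_VERSION="13.1.1"   # illustrative; CI provides this
CUDA_MAJOR="${RAPIDS_CUDA_VERSION%%.*}"
CUDA_MINOR="$(echo "${RAPIDS_CUDA_VERSION}" | cut -d. -f2)"

# only exact CTK releases with published torch CUDA wheels run the tests
if { [ "${CUDA_MAJOR}" -eq 12 ] && [ "${CUDA_MINOR}" -eq 9 ]; } \
    || { [ "${CUDA_MAJOR}" -eq 13 ] && [ "${CUDA_MINOR}" -eq 0 ]; }; then
  echo "running PyTorch tests"
else
  echo "Skipping PyTorch tests (requires CUDA 12.9 or 13.0, found ${RAPIDS_CUDA_VERSION})"
fi
```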

I see PyTorch indices for cu126, cu128, and cu130 so I'm confused by why we have 12.9 running tests but not 13.1.

There is a cu129 index too, and it's being used in this PR.

--extra-index-url=https://download.pytorch.org/whl/cu129

I didn't go back further than 12.9 because

We can't test CUDA 13.1 because:

  • torch CUDA wheels are ==-pinned to a {major}.{minor}.{patch} CTK release
  • there are not any published torch wheels for CUDA 13.1

Member Author


I just pushed a3ca93d

Hopefully that clarifies it.

Collaborator


Thanks. Approved.

@jameslamb jameslamb changed the title WIP: ensure 'torch' CUDA wheels are installed in CI ensure 'torch' CUDA wheels are installed in CI Mar 6, 2026
@jameslamb jameslamb requested a review from bdice March 6, 2026 19:58
@jameslamb jameslamb marked this pull request as ready for review March 6, 2026 19:58
@jameslamb jameslamb requested review from a team as code owners March 6, 2026 19:58
@jameslamb jameslamb changed the base branch from main to release/26.04 March 12, 2026 19:09

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
dependencies.yaml (1)

421-457: Consider consistent YAML anchor naming.

Minor style nit: The anchor names use slightly different patterns:

  • Line 433: &torch_cu129_index
  • Line 446: &torch_index_cu13

Consider aligning them for consistency (e.g., &torch_cu129_index / &torch_cu130_index or &torch_index_cu129 / &torch_index_cu130).

♻️ Suggested naming alignment
          - matrix:
              cuda: "13.0"
              dependencies: "oldest"
              require_gpu: "true"
            packages:
-              - &torch_index_cu13 --extra-index-url=https://download.pytorch.org/whl/cu130
+              - &torch_cu130_index --extra-index-url=https://download.pytorch.org/whl/cu130
              - torch==2.9.0+cu130
          - matrix:
              cuda: "13.0"
              require_gpu: "true"
            packages:
-              - *torch_index_cu13
+              - *torch_cu130_index
              - torch==2.10.0+cu130

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 952553cf-86ed-4b6d-ba44-6b60716a2aa7

📥 Commits

Reviewing files that changed from the base of the PR and between 18f60eb and bd67d6b.

📒 Files selected for processing (3)
  • ci/download-torch-wheels.sh
  • ci/test_python_integrations.sh
  • dependencies.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
  • ci/test_python_integrations.sh
  • ci/download-torch-wheels.sh

@jameslamb
Member Author

Thanks so much for looking @bdice . I'm gonna merge this, please @ me or revert it if this causes any issues.

@jameslamb
Member Author

/merge

@rapids-bot rapids-bot bot merged commit d8294bb into rapidsai:release/26.04 Mar 13, 2026
82 checks passed
@bdice bdice mentioned this pull request Mar 17, 2026
3 tasks
rapids-bot bot pushed a commit to rapidsai/cugraph-gnn that referenced this pull request Mar 18, 2026
…an optional dependency (#425)

Contributes to rapidsai/build-planning#256 and #410

## Ensures that `torch` CUDA wheels are always installed in CI

*"unintentionally installed a CPU-only `torch` from PyPI"* is a failure mode I've seen a few times for CI in this project, and each time it's happened it has taken a bit of time to figure out that that was the root cause.

This PR tries to fix that by:

* using a stricter install pattern to guarantee a CUDA `torch` wheel is installed if a compatible one exists
* using local versions like `+cu130` to help prevent pulling in packages from PyPI
* adding `dependencies.yaml` items for "oldest" dependencies so CI covers a range of supported versions

I tested similar patterns in rapidsai/rmm#2279 and saw them work well there.

## Makes `torch` truly optional

We want these packages to be installable and importable without `torch`, for use in RAPIDS DLFW builds (where we don't install `torch` alongside RAPIDS because it's built in other processes).

I'd started relying on the assumption that they worked that way in this PR, but quickly realized that isn't true... `torch` is used unconditionally in many places in these libraries.

This PR fixes that. It makes `torch` optional and adds testing to ensure it stays that way:

* copying the `import_optional` machinery from `cugraph-pyg` into `pylibwholegraph`
* using `import_optional()` for all `torch` imports in the libraries
* using `pytest.importorskip("torch")` in tests
* deferring some `torch` imports from import time to run time
* adding the `flake8-tidy-imports:banned-api` check from `ruff` to enforce that `import torch` isn't used anywhere in library code or test code
* explicitly testing that the libraries are still installable and at least 1 unit test can run successfully after a `pip uninstall torch`
* adding a check in `ci/validate_wheel.sh` confirming that `torch` doesn't make it into any wheel metadata *(which could happen due to mistakes in `dependencies.yaml` / `pyproject.toml`)*
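The `import_optional` pattern described above can be sketched roughly like this (an illustrative reimplementation, not the actual cugraph-pyg helper; the class and function names here are assumptions):

```python
import importlib


class MissingModule:
    """Placeholder that raises only when the missing module is actually used."""

    def __init__(self, name: str):
        self._name = name

    def __getattr__(self, attr: str):
        # any attribute access on the placeholder fails loudly
        raise ModuleNotFoundError(f"{self._name} is required for this feature")


def import_optional(name: str):
    """Return the module if installed, otherwise a lazy-failing placeholder."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return MissingModule(name)


# a nonexistent module name stands in for 'torch' being absent
torch = import_optional("torch_does_not_exist")
print(type(torch).__name__)  # import succeeds even without the package
```

With this pattern, the library imports cleanly without `torch`, and only code paths that actually touch `torch` raise.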

## Notes for Reviewers

Pulling these changes out of #413 , so CI in this repo can immediately benefit from them and so #419 can be reverted.

When this is merged, #413 will have a smaller diff and just be focused on testing against a range of CTKs.

Authors:
  - James Lamb (https://github.com/jameslamb)
  - Alex Barghi (https://github.com/alexbarghi-nv)

Approvers:
  - Alex Barghi (https://github.com/alexbarghi-nv)
  - Kyle Edwards (https://github.com/KyleFromNVIDIA)

URL: #425
@coderabbitai coderabbitai bot mentioned this pull request Mar 31, 2026
3 tasks
