
pin oldest numpy in dask-cudf tests, update dependency floors (cuda-python 12.9.2, cupy 13.6.0, numba 0.60.0)#19806

Merged
rapids-bot[bot] merged 9 commits into rapidsai:branch-25.10 from jameslamb:more-packaging-changes
Aug 28, 2025

Conversation

@jameslamb jameslamb (Member) commented Aug 27, 2025

Description

Contributes to rapidsai/build-planning#208

  • updates dependency floors: cuda-python >=12.9.2, cupy >=13.6.0, numba >=0.60.0
  • ensures that the "oldest" numpy pin is used in dask-cudf tests
    • the "oldest" pin for numpy was previously not applied in dask-cudf wheel tests, allowing an incompatible mix of packages (pandas 2.0.3, numpy 2.0.2) to be installed together

Notes for Reviewers

Why a separate PR?

In #19768 (comment), we saw that this set of dependency changes caused failures like the following in CUDA 12 and CUDA 13 environments:

...
ERROR io/tests/test_csv.py - ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
ERROR io/tests/test_json.py - ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
ERROR io/tests/test_orc.py - ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
ERROR io/tests/test_parquet.py - ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
ERROR io/tests/test_s3.py - ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
... many more ...

(wheel-test-dask-cudf link)

These "numpy.dtype size changed" errors are the classic symptom of importing a package's compiled extension modules under a NumPy release whose C ABI differs from the one they were built against.

Opening this more narrowly-scoped PR to investigate that.

How I tested this

The first commit here contained some of the dependency changes from #19768, and those were enough to reproduce the test failures!

https://github.com/rapidsai/cudf/actions/runs/17271893124/job/49021534507?pr=19806#step:11:11928

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

@jameslamb jameslamb added the DO NOT MERGE (Hold off on merging; see PR for details) label Aug 27, 2025
copy-pr-bot bot commented Aug 27, 2025

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@jameslamb (Member Author)

/ok to test

@jameslamb (Member Author)

/ok to test

@jameslamb (Member Author)

/ok to test

              # with cudf_kafka's dependencies.
              - pyarrow==15.*
          - matrix:
            packages:
@jameslamb (Member Author)

Moving this here, into test_python_common, ensures that these also affect the oldest-deps environments for other CI jobs like wheel-tests-dask-cudf.

via this:

cudf/dependencies.yaml

Lines 357 to 366 in 9fa50b9

  py_test_dask_cudf:
    output: pyproject
    pyproject_dir: python/dask_cudf
    extras:
      table: project.optional-dependencies
      key: test
    includes:
      - depends_on_dask_cuda
      - test_python_common
      - test_python_cudf_common
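
For illustration, this is roughly how an "oldest" pin hangs off a shared dependency list. The entries below are only a sketch following the matrix conventions visible in the fragment above; the floors and package list are assumptions, not the actual contents of dependencies.yaml:

  test_python_common:
    specific:
      - output_types: [conda, requirements]
        matrices:
          # selected by CI jobs that request the oldest supported dependencies
          - matrix: {dependencies: "oldest"}
            packages:
              - numpy==1.23.*
          # fallback used by every other environment
          - matrix:
            packages:
              - numpy>=1.23,<3.0a0

Because py_test_dask_cudf includes test_python_common (see the excerpt above), an "oldest" environment generated for wheel-tests-dask-cudf would pick up the same pin.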

@jameslamb (Member Author)

Blegh, we'll have to rethink this. The dask==2025.7.0 conda package requires numpy>=1.24.

    ├─ dask-cudf =25.10,>=0.0.0a0 * is installable with the potential options
    │  ├─ dask-cudf 25.10.00a293 would require
    │  │  └─ rapids-dask-dependency =25.10 *, which requires
    │  │     └─ dask ==2025.7.0 *, which requires
    │  │        └─ numpy >=1.24 * with the potential options

(build link)

So dask-cudf cannot test against numpy==1.23.*.

I guess it'll need its own "oldest-numpy" pinning here.
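
A minimal sketch of what a dask-cudf-specific "oldest" numpy pin could look like, assuming the same matrix conventions as above. The list name test_python_dask_cudf and the 1.24.* floor are illustrative (chosen only to satisfy dask's numpy >=1.24 requirement), not the actual change:

  test_python_dask_cudf:
    specific:
      - output_types: [conda, requirements]
        matrices:
          # dask-cudf cannot go as low as numpy 1.23, because dask==2025.7.0 needs numpy>=1.24
          - matrix: {dependencies: "oldest"}
            packages:
              - numpy==1.24.*
          - matrix:
            packages:
              - numpy>=1.23,<3.0a0

The py_test_dask_cudf file entry would then include a list like this instead of relying only on the shared numpy pin.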

@jameslamb jameslamb added the improvement (Improvement / enhancement to an existing function) and non-breaking (Non-breaking change) labels and removed the DO NOT MERGE (Hold off on merging; see PR for details) label Aug 27, 2025
@jameslamb jameslamb changed the title from 'WIP: update dependency floors (cuda-python 12.9.2, cupy 13.6.0, numba 0.60.0)' to 'WIP: apply "oldest" deps group everywhere, update dependency floors (cuda-python 12.9.2, cupy 13.6.0, numba 0.60.0)' Aug 27, 2025
@jameslamb jameslamb changed the title from 'WIP: apply "oldest" deps group everywhere, update dependency floors (cuda-python 12.9.2, cupy 13.6.0, numba 0.60.0)' to 'apply "oldest" deps group everywhere, update dependency floors (cuda-python 12.9.2, cupy 13.6.0, numba 0.60.0)' Aug 27, 2025
@jameslamb (Member Author)

/ok to test

@jameslamb jameslamb changed the title from 'apply "oldest" deps group everywhere, update dependency floors (cuda-python 12.9.2, cupy 13.6.0, numba 0.60.0)' to 'pin oldest numpy in dask-cudf tests, update dependency floors (cuda-python 12.9.2, cupy 13.6.0, numba 0.60.0)' Aug 27, 2025
@jameslamb (Member Author)

Ok this is ready for review! There's one failing cudf-polars test:

FAILED py-polars/tests/unit/lazyframe/test_collect_schema.py::test_collect_schema_parametric - AssertionError
Falsifying example: test_collect_schema_parametric(
    lf=<LazyFrame at 0x71B06B019750>,
)
Explanation:
    These lines were always and only run by failing examples:
        /pyenv/versions/3.10.18/lib/python3.10/site-packages/polars/schema.py:141

You can reproduce this example by temporarily adding @reproduce_failure('6.138.6', b'AXicc2RxZGVwJAUKOzIi8xkAGJAIPg==') as a decorator on your test case

(build link)

But I saw @davidwendt mention elsewhere that that might be related to merging #19491

@jameslamb jameslamb marked this pull request as ready for review August 27, 2025 20:06
@jameslamb jameslamb requested a review from a team as a code owner August 27, 2025 20:06
@Matt711 Matt711 (Contributor) left a comment

numpy pinning LGTM

@Matt711 Matt711 (Contributor) commented Aug 27, 2025

> Ok this is ready for review! There's one failing cudf-polars test ... But I saw @davidwendt mention elsewhere that that might be related to merging #19491

Unblocked by #19821

@jameslamb jameslamb mentioned this pull request Aug 27, 2025
@jameslamb (Member Author)

Merged in latest branch-25.10 to get the changes from #19821 ... hopefully that'll be the last thing we need for this 😁

@gforsyth gforsyth removed the request for review from AyodeAwe August 28, 2025 12:46
@jameslamb (Member Author)

/merge

@rapids-bot rapids-bot bot merged commit a449958 into rapidsai:branch-25.10 Aug 28, 2025
91 checks passed
@jameslamb jameslamb deleted the more-packaging-changes branch August 28, 2025 13:31
rapids-bot bot pushed a commit that referenced this pull request Aug 28, 2025
Contributes to rapidsai/build-planning#208

* uses CUDA 13.0.0 to build and test
* adds CUDA 13 devcontainers
* adds `cuda-nvvm-tools` as a runtime dependency of `cudf` conda packages
  - temporary workaround for NVIDIA/numba-cuda#430, from @brandon-b-miller 

Contributes to rapidsai/build-planning#68

* updates to CUDA 13 dependencies in fallback entries in `dependencies.yaml` matrices (i.e., the ones that get written to `pyproject.toml` in source control); see the sketch below
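
As a hedged illustration of what such a fallback entry looks like (the package names and floors below are assumptions chosen to echo floors mentioned elsewhere on this page, not the actual diff):

      - output_types: [requirements, pyproject]
        matrices:
          - matrix: {cuda: "12.*"}
            packages:
              - cupy-cuda12x>=13.6.0
          # fallback entry: nothing to match on, so this is what gets written
          # into pyproject.toml in source control
          - matrix:
            packages:
              - cupy-cuda13x>=13.6.0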

## Notes for Reviewers

This switches GitHub Actions workflows to the `cuda13.0` branch from here: rapidsai/shared-workflows#413

A future round of PRs will revert that back to `branch-25.10`, once all of RAPIDS supports CUDA 13.

### This has dependencies

Need these to be merged first:

* [x] #19821
* [x] #19806

Authors:
  - James Lamb (https://github.com/jameslamb)
  - David Wendt (https://github.com/davidwendt)

Approvers:
  - Gil Forsyth (https://github.com/gforsyth)

URL: #19768
rapids-bot bot pushed a commit to rapidsai/cuml that referenced this pull request Sep 3, 2025
…cy pins (#7164)

Contributes to rapidsai/build-planning#208 (breaking some changes off of #7128 to help with review and debugging there)

* switches to using `dask-cuda[cu12]` extra for wheels (added in rapidsai/dask-cuda#1536)
* bumps pins on some dependencies to match the rest of RAPIDS
  - `cuda-python`: >=12.9.2 (CUDA 12)
  - `cupy`: >=13.6.0
  - `numba`: >=0.60.0
* adds explicit runtime dependency on `numba-cuda`
  - *`cuml` uses this unconditionally but does not declare a runtime dependency on it today*

Contributes to rapidsai/build-infra#293

* replaces dependency on `pynvml` package with `nvidia-ml-py` package (see that issue for details)

## Notes for Reviewers

### These dependency pin changes should be low-risk

All of these pins and requirements are already coming through `cuml`'s dependencies, e.g. `cudf` carries most of them via rapidsai/cudf#19806

So they shouldn't change much about the test environments in CI.

Authors:
  - James Lamb (https://github.com/jameslamb)
  - Simon Adorf (https://github.com/csadorf)

Approvers:
  - Simon Adorf (https://github.com/csadorf)
  - Gil Forsyth (https://github.com/gforsyth)

URL: #7164

Labels

improvement (Improvement / enhancement to an existing function), non-breaking (Non-breaking change)
