Conversation

@rparolin (Contributor) commented Jan 16, 2026

Summary

  • Add pixi run test-par command to run the test suite in parallel using pytest-xdist

Changes

Parallel Testing

  • Added a new test-par task that uses pytest -n auto to automatically detect and use all available CPU cores (a rough sketch of the task definition follows this list)
  • Maintains the same environment variables and build dependencies as the serial test task
  • Provides a significant speedup when running the full test suite (2m45s -> 36s on my machine)
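
For readers unfamiliar with pixi tasks, the following is a minimal sketch of what such a task definition could look like in pixi.toml. The key names, dependency names, and environment variables shown are illustrative assumptions, not copied from this PR; the --dist loadscope flag reflects the final revision discussed in the review below.

```toml
# Illustrative sketch only: task keys, dependencies, and environment
# variables are placeholders, not the actual contents of this PR's pixi.toml.
[tasks.test-par]
# pytest-xdist: -n auto spawns one worker per CPU core; --dist loadscope
# keeps tests from the same class/module on the same worker.
cmd = "pytest -n auto --dist loadscope"
# Reuse whatever the serial `test` task depends on and exports, e.g.:
# depends-on = ["build"]
# env = { EXAMPLE_VAR = "value" }
```

Defining this as a separate task keeps the existing serial pixi run test behavior unchanged while making the parallel variant a one-line opt-in.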

Usage

Run tests in parallel:

pixi run test-par

Run tests serially (existing):

pixi run test

@copy-pr-bot (bot) commented Jan 16, 2026

Auto-sync is disabled for ready for review pull requests in this repository. Workflows must be run manually.


@NVIDIA deleted two comments from greptile-apps bot Jan 16, 2026
@rparolin requested review from cpcloud and gmarkall January 16, 2026 16:06
@rparolin (Contributor, Author) commented:

/ok to test 5c9581b

@greptile-apps (bot) commented Jan 16, 2026

Greptile Summary

This PR adds parallel test execution support via pixi run test-par. The implementation correctly includes the --dist loadscope flag, which is essential for this test suite, as noted in testing/pytest.ini:7: the flag ensures that tests are grouped so that CUDATestCase classes, which share state across their setUp/tearDown methods, run on a single worker. The PR also adds a default environment configuration to streamline development.
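
To illustrate why that grouping matters, here is a hypothetical example (not taken from the numba-cuda test suite) of a test class with shared per-class state; under pytest-xdist's default load distribution such a class can be split across workers, while --dist loadscope keeps it on one worker.

```python
# Hypothetical illustration, not from the numba-cuda test suite: shows why
# --dist loadscope matters for classes with shared setUp/tearDown state.
import unittest


class TestWithSharedState(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Expensive per-class state (imagine a CUDA context or device buffer).
        cls.resource = {"ready": True}

    def test_first_use(self):
        self.assertTrue(self.resource["ready"])

    def test_second_use(self):
        self.assertTrue(self.resource["ready"])


# With plain `pytest -n auto` (default load distribution), the two tests can
# be scheduled onto different worker processes, so the per-class setup runs
# more than once and any state shared between them is lost.
# `pytest -n auto --dist loadscope` sends all tests in the class to a single
# worker, preserving the setUp/tearDown assumptions.
```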

Key Changes

  • Added test-par task with pytest -n auto --dist loadscope for parallel test execution
  • Added default environment with CUDA 13.1, Python 3.14, and test dependencies
  • Updated lockfile with all necessary dependencies for the new environment

The parallel testing configuration is properly implemented and addresses the previous review comment about the missing --dist loadscope flag.

Confidence Score: 5/5

  • This PR is safe to merge with no issues
  • The implementation is correct and complete. The --dist loadscope flag is properly included, which is critical for the CUDA test suite's shared state management. The changes are minimal, focused, and the lockfile update is consistent with the configuration changes.
  • No files require special attention

Important Files Changed

  • pixi.toml: Added the test-par task with the correct --dist loadscope flag for parallel testing, plus a default environment definition
  • pixi.lock: Lockfile updated with the dependencies for the new default environment, including test dependencies

@greptile-apps (bot) left a comment

1 file reviewed, 1 comment

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
@rparolin (Contributor, Author) commented:

/ok to test c57fd64

@rparolin (Contributor, Author) commented:

/ok to test 04b3f18

@cpcloud (Contributor) left a comment

LGTM.

@rparolin enabled auto-merge (squash) January 16, 2026 17:55
@rparolin merged commit 33f4be7 into main Jan 16, 2026
90 checks passed
gmarkall added a commit to gmarkall/numba-cuda that referenced this pull request Jan 27, 2026
- Add Python 3.14 to the wheel publishing matrix (NVIDIA#750)
- feat: swap out internal device array usage with `StridedMemoryView` (NVIDIA#703)
- Fix max block size computation in `forall` (NVIDIA#744)
- Fix prologue debug line info pointing to decorator instead of def line (NVIDIA#746)
- Fix kernel return type in DISubroutineType debug metadata (NVIDIA#745)
- Fix missing line info in Jupyter notebooks (NVIDIA#742)
- Fix: Pass correct flags to linker when debugging in the presence of LTOIR code (NVIDIA#698)
- chore(deps): add cuda-pathfinder to pixi deps (NVIDIA#741)
- fix: enable flake8-bugbear lints and fix found problems (NVIDIA#708)
- fix: Fix race condition in CUDA Simulator (NVIDIA#690)
- ci: run tests in parallel (NVIDIA#740)
- feat: users can pass `shared_memory_carveout` to @cuda.jit (NVIDIA#642)
- Fix compatibility with NumPy 2.4: np.trapz and np.in1d removed (NVIDIA#739)
- Pass the -numba-debug flag to libnvvm (NVIDIA#681)
- ci: remove rapids containers from conda ci (NVIDIA#737)
- Use `pathfinder` for dynamic libraries (NVIDIA#308)
- CI: Add CUDA 13.1 testing support (NVIDIA#705)
- Adding `pixi run test` and `pixi run test-par` support (NVIDIA#724)
- Disable per-PR nvmath tests + follow same test practice (NVIDIA#723)
- chore(deps): regenerate pixi lockfile (NVIDIA#722)
- Fix DISubprogram line number to point to function definition line (NVIDIA#695)
- revert: chore(dev): build pixi using rattler (NVIDIA#713) (NVIDIA#719)
- [feat] Initial version of the Numba CUDA GDB pretty-printer (NVIDIA#692)
- chore(dev): build pixi using rattler (NVIDIA#713)
- build(deps): bump the actions-monthly group across 1 directory with 8 updates (NVIDIA#704)
@gmarkall mentioned this pull request Jan 27, 2026
kkraus14 pushed a commit that referenced this pull request Jan 28, 2026
- Add Python 3.14 to the wheel publishing matrix (#750)
- feat: swap out internal device array usage with `StridedMemoryView` (#703)
- Fix max block size computation in `forall` (#744)
- Fix prologue debug line info pointing to decorator instead of def line (#746)
- Fix kernel return type in DISubroutineType debug metadata (#745)
- Fix missing line info in Jupyter notebooks (#742)
- Fix: Pass correct flags to linker when debugging in the presence of LTOIR code (#698)
- chore(deps): add cuda-pathfinder to pixi deps (#741)
- fix: enable flake8-bugbear lints and fix found problems (#708)
- fix: Fix race condition in CUDA Simulator (#690)
- ci: run tests in parallel (#740)
- feat: users can pass `shared_memory_carveout` to @cuda.jit (#642)
- Fix compatibility with NumPy 2.4: np.trapz and np.in1d removed (#739)
- Pass the -numba-debug flag to libnvvm (#681)
- ci: remove rapids containers from conda ci (#737)
- Use `pathfinder` for dynamic libraries (#308)
- CI: Add CUDA 13.1 testing support (#705)
- Adding `pixi run test` and `pixi run test-par` support (#724)
- Disable per-PR nvmath tests + follow same test practice (#723)
- chore(deps): regenerate pixi lockfile (#722)
- Fix DISubprogram line number to point to function definition line (#695)
- revert: chore(dev): build pixi using rattler (#713) (#719)
- [feat] Initial version of the Numba CUDA GDB pretty-printer (#692)
- chore(dev): build pixi using rattler (#713)
- build(deps): bump the actions-monthly group across 1 directory with 8 updates (#704)
