feat: swap out internal device array usage with StridedMemoryView
#703
Conversation
Auto-sync is disabled for ready-for-review pull requests in this repository. Workflows must be run manually.
f5a1c5c to c62d013 (force-push)
/ok to test
Greptile Overview
Greptile Summary
This PR refactors internal kernel argument handling to use `StridedMemoryView`.
Key improvements:
Trade-offs:
Implementation quality:
Confidence Score: 5/5
Important Files Changed
This PR can't be merged until the next release of cuda-core, because I depend on some unreleased features there. However, it's still worth reviewing.
I managed to recover a good amount of the device array performance by avoiding the SMV conversion entirely and spoofing the interface.
However, there is still a slowdown of ~60%, but only in the many-args case (it's about 15% in the single-argument case). This is much better than the previous commit, which was upwards of 2.5x.
Greptile Overview
Greptile Summary
Refactored kernel argument handling to use StridedMemoryView internally, enabling direct __dlpack__ protocol support and improving CuPy interoperability (~3x speedup).
Key Changes
- Replaced `auto_device()` calls with `_to_strided_memory_view()` for unified array handling
- Added LRU caching to type inference functions (`typeof`, `from_dtype`, `strides_from_shape`) to reduce overhead
- Converted several properties to `@functools.cached_property` for performance
- Refactored `Out`/`InOut` classes to use an inheritance pattern with a `copy_input` class variable (see the sketch after this list)
- Changed the `strides_from_shape()` API from `order="C"`/`"F"` to boolean flags `c_contiguous`/`f_contiguous`
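A minimal sketch of that inheritance pattern (hypothetical simplification, not the actual numba-cuda classes, which also carry the copy/launch machinery):

```python
class Out:
    """Wrap an argument whose device contents are copied back to the host
    after the kernel launch, but not copied in beforehand."""

    copy_input = False

    def __init__(self, array):
        self.array = array


class InOut(Out):
    """Same behaviour as Out, except the host data is also copied to the
    device before the launch."""

    copy_input = True
```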
Issues Found
- Logic bug in `strides_from_shape()`: when both `c_contiguous` and `f_contiguous` are False, the function produces incorrect strides (it computes F-contiguous strides and then reverses them, which is neither a C nor an F layout)
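For reference, a minimal sketch of what a flag-based strides helper can look like (hypothetical code, not the implementation under review; the reported bug is that the real default path computes the F-contiguous result and then reverses it):

```python
def strides_from_shape(shape, itemsize, *, c_contiguous=False, f_contiguous=False):
    """Compute strides in bytes for a C- or F-contiguous layout."""
    strides, stride = [], itemsize
    if f_contiguous:
        for dim in shape:            # first axis varies fastest
            strides.append(stride)
            stride *= dim
        return tuple(strides)
    # Everything else is treated as C-contiguous in this sketch; the review's
    # point is that the "neither flag set" case needs a deliberate code path.
    for dim in reversed(shape):      # last axis varies fastest
        strides.append(stride)
        stride *= dim
    return tuple(reversed(strides))


# shape (2, 3) with 8-byte items: C order -> (24, 8), F order -> (8, 16)
assert strides_from_shape((2, 3), 8, c_contiguous=True) == (24, 8)
assert strides_from_shape((2, 3), 8, f_contiguous=True) == (8, 16)
```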
Performance Trade-offs
The PR documents a ~2.5x regression for legacy device_array() in exchange for ~3x improvement for CuPy arrays. This aligns with the project's strategic direction toward ecosystem integration.
Confidence Score: 4/5
- This PR is safe to merge with one logic issue that needs fixing
- Score reflects a well-structured refactoring with proper caching optimizations, but one critical logic bug in `strides_from_shape()` when both contiguity flags are False needs resolution before merge
- numba_cuda/numba/cuda/np/numpy_support.py: fix the `strides_from_shape()` logic for handling non-contiguous arrays
Important Files Changed
File Analysis
| Filename | Score | Overview |
|---|---|---|
| numba_cuda/numba/cuda/np/numpy_support.py | 3/5 | Added LRU caching to strides_from_shape and from_dtype; changed API from order parameter to c_contiguous/f_contiguous flags. Logic issue: when both flags are False, function computes F-contiguous strides then reverses them unexpectedly. |
| numba_cuda/numba/cuda/cudadrv/devicearray.py | 4/5 | Added _to_strided_memory_view and _make_strided_memory_view helper functions to support DLPack protocol; converted nbytes and added _strided_memory_view_shim to cached properties. Implementation looks solid. |
| numba_cuda/numba/cuda/args.py | 4/5 | Refactored Out and InOut classes to use StridedMemoryView; changed _numba_type_ to cached property. Clean refactor with proper class inheritance. |
| numba_cuda/numba/cuda/dispatcher.py | 4/5 | Updated kernel argument marshaling to work with StridedMemoryView objects instead of DeviceNDArray. Uses fallback to strides_from_shape when strides not available. |
| numba_cuda/numba/cuda/typing/typeof.py | 5/5 | Added LRU caching to _typeof_cuda_array_interface by extracting logic into cached helper functions. All parameters are hashable, caching is safe and should improve performance. |
| numba_cuda/numba/cuda/np/arrayobj.py | 5/5 | Updated call to strides_from_shape to use new keyword-only argument API with c_contiguous=True. Minimal, straightforward change. |
6 files reviewed, 1 comment
9ff51b9 to 1032275 (force-push)
rparolin left a comment
Generally looks good to me. I'm a bit on the fence about shipping a known performance regression to a deprecated type. I'd feel better if we removed it first instead of regressing on performance. All that being said, the regression has improved from the initially reported 2.5x.
I'd still wait to merge until @gmarkall gives the final 👍
So where we are now is that using CuPy arrays gives ~3x less latency, but using device arrays or torch tensors gives ~60% more latency? On the torch front, NVIDIA/cuda-python#1439 may help in bypassing the slow
Almost. Will post new numbers in a bit.
Passing …
1032275 to 739cb5b (force-push)
Greptile Overview
Greptile Summary
Refactors internal kernel argument handling to use StridedMemoryView from cuda-python, enabling direct __dlpack__ protocol support for external arrays like CuPy. Replaces __cuda_array_interface__ handling with the unified StridedMemoryView API and adds LRU caching to type inference paths to reduce overhead. Performance measurements show ~3x improvement for CuPy arrays but ~2.5x regression for legacy device_array() objects, which the PR justifies as an acceptable trade-off favoring ecosystem integration over deprecated APIs.
Confidence Score: 1/5
- Critical logic bug in the strides fallback for 0-dimensional arrays will cause incorrect behavior
- The PR contains a critical logic error in dispatcher.py line 558, where the strides fallback uses the `or` operator with potentially empty tuples. For 0-dimensional arrays, `strides_in_bytes` is legitimately `()`, but empty tuples are falsy in Python, triggering unnecessary fallback computation. While the fallback should also return `()`, this indicates a misunderstanding of the truthiness semantics that could mask other issues. Additionally, there are multiple stream handling edge cases around `stream=0` that should be verified for correctness.
- numba_cuda/numba/cuda/dispatcher.py requires immediate attention for the strides fallback bug; numba_cuda/numba/cuda/args.py needs verification of `stream_ptr=0` handling semantics
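An illustrative sketch of the truthiness pitfall described above (hypothetical names, not the actual dispatcher.py code):

```python
def fallback_strides(shape, itemsize):
    # Hypothetical C-contiguous fallback used when strides are "missing".
    strides, stride = [], itemsize
    for dim in reversed(shape):
        strides.append(stride)
        stride *= dim
    return tuple(reversed(strides))


def pick_strides(strides_in_bytes, shape, itemsize):
    # Problematic pattern: for a 0-d array strides_in_bytes == (), which is
    # falsy, so `or` triggers the fallback even though () was already correct:
    #     return strides_in_bytes or fallback_strides(shape, itemsize)
    #
    # An explicit None check distinguishes "absent" from "legitimately empty".
    if strides_in_bytes is not None:
        return strides_in_bytes
    return fallback_strides(shape, itemsize)
```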
Important Files Changed
File Analysis
| Filename | Score | Overview |
|---|---|---|
| numba_cuda/numba/cuda/dispatcher.py | 1/5 | Refactored kernel argument marshaling to use StridedMemoryView; critical bug in strides fallback logic |
| numba_cuda/numba/cuda/cudadrv/devicearray.py | 2/5 | Added _to_strided_memory_view and _make_strided_memory_view functions for dlpack/CAI conversion; changed nbytes to cached_property |
| numba_cuda/numba/cuda/args.py | 3/5 | Refactored Out and InOut classes to use _to_strided_memory_view; InOut now inherits from Out with copy_input=True |
I also added a benchmark demonstrating that the additional overhead with … Maybe there's some way that we can reduce that further, but I haven't looked into it.
The devicearray regressions are somewhat concerning, but given we are actively working towards deprecating it, I think it would still be worth it. Do we have a sense of the follow-up work from here that helps to ameliorate the performance overheads related to torch?
At least some of the remaining overhead is related to stream synchronization, but that may be justified/useful in some cases, I'm guessing. After that, I'm not sure; it will require more investigation. Just to make sure we're on the same page: our expectation is that if an array is already on device, the kernel launch overhead should amount to a collection of relatively cheap attribute accesses. Is that correct?
My 2c: numba-cuda shouldn't be in the business of handling stream synchronization; if someone is passing an array on a different stream through DLPack / CAI, it becomes their responsibility to launch the kernel on a stream that is synchronized with respect to the passed stream. This is likely a breaking change that would need to be clearly and loudly deprecated and subsequently removed.
Yes. Kernel launch latency is quite important; we should aim for less than 1µs of overhead.
Got it, yeah, I don't really know enough about how this functionality is used or assumed to be used to have an informed opinion (yet!), but simply removing sync (in the torch case by passing -1 as the stream pointer to
Roger that, I think we can get there if not very close. |
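For context on the stream semantics under discussion: with DLPack, the consumer passes a stream hint to the producer's `__dlpack__`, and (as referenced above) a value of -1 is the conventional signal that no synchronization should be performed. A hypothetical producer-side sketch, with `_synchronize_with` and `_to_capsule` standing in for real logic:

```python
class MyDeviceArray:
    """Hypothetical DLPack producer illustrating the stream negotiation."""

    def __dlpack__(self, stream=None):
        # stream=None  -> no explicit stream given; fall back to a default
        # stream=-1    -> consumer explicitly requests no synchronization
        # other values -> a CUDA stream the producer should make the data safe on
        if stream is not None and stream != -1:
            self._synchronize_with(stream)  # e.g. record event / stream wait
        return self._to_capsule()           # export the DLPack capsule
```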
739cb5b to 891ccb7 (force-push)
/ok to test
e2a664c to 4632643 (force-push)
/ok to test
No files reviewed, no comments
There is no external behavioural change expected from switching to use SMV, but using SMV from the simulator causes issues. To avoid this, we copy the original args code into the simulator and use it there.
4632643 to c7743b5 (force-push)
/ok to test
No files reviewed, no comments
/ok to test
1 file reviewed, 1 comment
/ok to test
| "numba>=0.60.0", | ||
| "cuda-bindings>=12.9.1,<14.0.0", | ||
| "cuda-core>=0.3.2,<1.0.0", | ||
| "cuda-core>=0.5.1,<1.0.0", |
The lower bound increased because the various APIs that this PR uses are only available in cuda-core >= 0.5.1.
The cu12 and cu13 constraints are removed, since the new cuda-core >=0.5.1 constraint invalidates the previously declared constraints.
/ok to test
No files reviewed, no comments
/ok to test
No files reviewed, no comments
- Add Python 3.14 to the wheel publishing matrix (NVIDIA#750)
- feat: swap out internal device array usage with `StridedMemoryView` (NVIDIA#703)
- Fix max block size computation in `forall` (NVIDIA#744)
- Fix prologue debug line info pointing to decorator instead of def line (NVIDIA#746)
- Fix kernel return type in DISubroutineType debug metadata (NVIDIA#745)
- Fix missing line info in Jupyter notebooks (NVIDIA#742)
- Fix: Pass correct flags to linker when debugging in the presence of LTOIR code (NVIDIA#698)
- chore(deps): add cuda-pathfinder to pixi deps (NVIDIA#741)
- fix: enable flake8-bugbear lints and fix found problems (NVIDIA#708)
- fix: Fix race condition in CUDA Simulator (NVIDIA#690)
- ci: run tests in parallel (NVIDIA#740)
- feat: users can pass `shared_memory_carveout` to @cuda.jit (NVIDIA#642)
- Fix compatibility with NumPy 2.4: np.trapz and np.in1d removed (NVIDIA#739)
- Pass the -numba-debug flag to libnvvm (NVIDIA#681)
- ci: remove rapids containers from conda ci (NVIDIA#737)
- Use `pathfinder` for dynamic libraries (NVIDIA#308)
- CI: Add CUDA 13.1 testing support (NVIDIA#705)
- Adding `pixi run test` and `pixi run test-par` support (NVIDIA#724)
- Disable per-PR nvmath tests + follow same test practice (NVIDIA#723)
- chore(deps): regenerate pixi lockfile (NVIDIA#722)
- Fix DISubprogram line number to point to function definition line (NVIDIA#695)
- revert: chore(dev): build pixi using rattler (NVIDIA#713) (NVIDIA#719)
- [feat] Initial version of the Numba CUDA GDB pretty-printer (NVIDIA#692)
- chore(dev): build pixi using rattler (NVIDIA#713)
- build(deps): bump the actions-monthly group across 1 directory with 8 updates (NVIDIA#704)


Summary
Refactor kernel argument handling to use `StridedMemoryView` internally, enabling direct support for `__dlpack__` objects and improving interoperability with libraries like CuPy.
Closes: #152
Tracking issue: #128
Key Changes
New capability: Kernel arguments now accept objects implementing the `__dlpack__` protocol directly (e.g., CuPy arrays); see the usage sketch below.
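A minimal usage sketch (illustrative; assumes CuPy is installed and a CUDA device is available):

```python
import cupy as cp
from numba import cuda


@cuda.jit
def add_one(x):
    i = cuda.grid(1)
    if i < x.size:
        x[i] += 1


a = cp.zeros(1024, dtype=cp.float32)

# The CuPy array is passed straight to the kernel; with this change the
# argument is consumed through __dlpack__/StridedMemoryView rather than the
# __cuda_array_interface__ path.
add_one[4, 256](a)
```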
Internals: Replaced array interface handling with `cuda.core.utils.StridedMemoryView` for:
- `__dlpack__` objects (new)
- `__cuda_array_interface__` objects
- `DeviceNDArray`
Performance:
- `device_array()` arrays: ~2.5x regression (initial measurements)
- Torch tensors: slow. Previously it was going through CAI, but its CAI version isn't supported by `StridedMemoryView`
Performance Trade-off Discussion
The 2.5x slowdown for `device_array()` is worth discussing (and perhaps the torch regression is as well):
Arguments for accepting this regression:
- `__dlpack__` libraries represent the primary ecosystem (or at least the end goal) for GPU computing in Python that we are prioritizing
- `device_array()` is primarily used in legacy code and tests and is deprecated
Why this might be worth merging despite the regression:
- … the project's direction
- … it proves important
Implementation Details
- Added `_to_strided_memory_view()` and `_make_strided_memory_view()` helper functions (numba_cuda/numba/cuda/cudadrv/devicearray.py:247-359)
- LRU-cached `typeof` for CAI objects to reduce type inference overhead (typing/typeof.py:315-365); a sketch of the caching pattern follows
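A rough sketch of that caching pattern (hypothetical helper names; the point is that type inference is keyed only on hashable pieces such as dtype string, ndim, and layout):

```python
import functools

import numpy as np
from numba import from_dtype, types


@functools.lru_cache(maxsize=None)
def _cached_array_type(dtype_str, ndim, layout):
    # Hashable key -> repeated launches with same-shaped arguments reuse the
    # previously inferred Numba type instead of rebuilding it each time.
    return types.Array(from_dtype(np.dtype(dtype_str)), ndim, layout)


def typeof_array_like(view):
    # `view` is a hypothetical object exposing dtype/ndim/contiguity info
    # (e.g. something wrapping a StridedMemoryView).
    layout = "C" if view.is_c_contiguous else ("F" if view.is_f_contiguous else "A")
    return _cached_array_type(str(view.dtype), view.ndim, layout)
```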
Testing
Existing test suite passes.
TL;DR: Adds `__dlpack__` support (~3x faster for CuPy), with a ~2.5x regression on legacy `device_array()`. The trade-off favors ecosystem integration, but open to discussion.