Deprecate the DeviceNDArray class and public APIs that return instances
#546
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Conversation
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.
```python
@functools.wraps(func)
def wrapper(*args, **kwargs):
    warnings.warn(
        f"{func.__name__} api is deprecated. Please prefer cupy for array functions",
```
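The deprecation wrapper excerpted above is truncated; the sketch below rounds it out into a complete, runnable decorator. It is a minimal reconstruction for illustration only: the `deprecated` helper name and the base class of `DeprecatedNDArrayAPIWarning` are assumptions, and the PR's actual implementation may differ.

```python
import functools
import warnings


class DeprecatedNDArrayAPIWarning(FutureWarning):
    """Warning category named in the PR description (base class assumed)."""


def deprecated(func):
    """Wrap *func* so every call emits a deprecation warning."""

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        warnings.warn(
            f"{func.__name__} api is deprecated. "
            "Please prefer cupy for array functions",
            category=DeprecatedNDArrayAPIWarning,
            stacklevel=2,
        )
        return func(*args, **kwargs)

    return wrapper


@deprecated
def device_array(n):
    return list(range(n))  # stand-in for the real allocation


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = device_array(3)

print(result)  # [0, 1, 2]
print(caught[0].category.__name__)  # DeprecatedNDArrayAPIWarning
```

`stacklevel=2` makes the warning point at the caller of the deprecated function rather than at the wrapper itself, which is what users need to find the offending call site.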
cupy arrays are much slower than DeviceNDArray because they require creating an external (i.e., non-numba-cuda-created) stream, so I'm not sure a recommendation for that is what we should do right now.
I was thinking that we can keep the top-level APIs (device_array etc.) and replace their internals with StridedMemoryView or something similar, in an effort to allow folks to as-cheaply-as-possible construct arrays.
I concur that a lightweight device-array-like container should exist; I'm just not sure that numba-cuda should necessarily be the library providing it publicly. I think we should nudge users away from using numba-cuda for things like moving data from host to device. That said, I'm open to suggestions on what we should recommend.
/ok to test
Auto-sync is disabled for ready-for-review pull requests in this repository. Workflows must be run manually.
Greptile Summary

This PR deprecates the `DeviceNDArray` class and the public APIs that return instances of it.

Key Changes:
Issues Found:
Confidence Score: 4/5
Important Files Changed
Additional Comments (15)

- `numba_cuda/numba/cuda/kernels/transpose.py`, lines 24-25 (link) — style: The deprecation message mentions "transpose method", but this is a standalone function, not a method. Consider rewording to "the transpose function and the DeviceNDArray class are deprecated."
  Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
- `numba_cuda/numba/cuda/tests/cudadrv/test_profiler.py`, line 19 (link) — style: Array size changed from 100 to 10. Was this intentional, or should it remain 100 to preserve the original test's different allocation sizes?
- `numba_cuda/numba/cuda/api.py`, lines 69-70 (link) — logic: Dead code: line 70 is unreachable after the return statement on line 69.
- `numba_cuda/numba/cuda/vectorizers.py`, line 121 (link) — logic: This line still uses the deprecated `cuda.as_cuda_array` instead of `_api._as_cuda_array`, unlike line 186.
- `numba_cuda/numba/cuda/tests/cudapy/test_random.py`, line 22 (link) — style: CuPy is imported but never used. Either remove the unused import or complete the migration to CuPy arrays instead of the deprecated DeviceNDArray APIs. Is this import intended for future work, or should the migration be completed in this PR?
- `numba_cuda/numba/cuda/testing.py`, line 197 (link) — logic: `warnings.resetwarnings()` clears all warning filters, not just the ones added in setUp, which could affect other tests running in the same process. Should this use a more targeted approach that only resets the specific filter added in setUp?
- `numba_cuda/numba/cuda/tests/doc_examples/test_reduction.py`, line 73 (link) — style: This line accesses `a[0]` directly on the GPU array, which works with CuPy, but the assertion on line 77 uses `a.get()[0]`. Consider using `a[0].get()` here for consistency.
- `numba_cuda/numba/cuda/_api.py`, lines 324-330 (link) — logic: This function calls the public `device_array` instead of the internal `_device_array`, which will emit deprecation warnings when used internally. Should it call `_device_array` to avoid that?
- `numba_cuda/numba/cuda/_api.py`, lines 340-348 (link) — logic: This function calls the public `mapped_array` instead of the internal `_mapped_array`, which will emit deprecation warnings when used internally. Should it call `_mapped_array` to avoid that?
- `numba_cuda/numba/cuda/_api.py`, lines 358-360 (link) — logic: This function calls the public `pinned_array` instead of the internal `_pinned_array`, which will emit deprecation warnings when used internally. Should it call `_pinned_array` to avoid that?
- `numba_cuda/numba/cuda/tests/cudapy/test_multithreads.py`, line 66 (link) — style: Inconsistent with the migration goals: still uses the deprecated `cuda.to_device()`. Is this method intentionally testing the deprecated API, or should it also be migrated to CuPy?
- `numba_cuda/numba/cuda/tests/cudapy/test_multithreads.py`, lines 75-76 (link) — style: Inconsistent with the migration goals: still uses the deprecated `cuda.to_device()`. Should these test methods also migrate to CuPy arrays for consistency?
- `numba_cuda/numba/cuda/tests/doc_examples/test_globals.py`, line 47 (link) — logic: `cp.asarray(5, dtype=np.float64)` creates a zero-dimensional array holding the scalar 5, not a 5-element array. It should be `cp.zeros(5, dtype=np.float64)` to match the original behavior.
- `numba_cuda/numba/cuda/tests/cudapy/test_vectorize.py`, line 89 (link) — logic: Line 89 creates a CUDA stream that is never used; it is immediately overridden on line 94.
- `numba_cuda/numba/cuda/tests/cudapy/test_vectorize.py`, lines 94-96 (link) — logic: Stream creation was moved outside the loop, but it needs to be inside for proper isolation between test iterations. Should each iteration use a separate stream?
74 files reviewed, 15 comments
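On the `warnings.resetwarnings()` comment: a more targeted approach is to scope the filter change with `warnings.catch_warnings`, which snapshots the global filter list on entry and restores it on exit. The sketch below is illustrative (the class is a stand-in, not the actual `numba_cuda` test code):

```python
import warnings


class ScopedFilterExample:
    """Stand-in for a test case's setUp/tearDown pair."""

    def setUp(self):
        # Snapshot the current filter state instead of mutating it globally.
        self._ctx = warnings.catch_warnings()
        self._ctx.__enter__()
        warnings.simplefilter("ignore", DeprecationWarning)

    def tearDown(self):
        # Restores exactly the filters that existed before setUp ran,
        # unlike warnings.resetwarnings(), which wipes everything.
        self._ctx.__exit__(None, None, None)


before = list(warnings.filters)
case = ScopedFilterExample()
case.setUp()
case.tearDown()
after = list(warnings.filters)
print(before == after)  # True
```

Because the snapshot/restore is per-instance, concurrent or subsequent tests in the same process keep whatever filters they had configured.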
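The `cp.asarray(5, ...)` issue flagged for `test_globals.py` follows NumPy's semantics, which CuPy mirrors; the difference is easy to demonstrate on the CPU with NumPy (used here so the example runs without a GPU):

```python
import numpy as np

# asarray(5) wraps the scalar 5 in a zero-dimensional array...
scalar_arr = np.asarray(5, dtype=np.float64)
print(scalar_arr.shape, scalar_arr.ndim)  # () 0

# ...while zeros(5) allocates a 5-element array, which is what the
# original test allocation presumably intended.
vector_arr = np.zeros(5, dtype=np.float64)
print(vector_arr.shape, vector_arr.ndim)  # (5,) 1
```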
/ok to test
Additional Comments (3)

- `numba_cuda/numba/cuda/cudadrv/devicearray.py`, lines 773-779 (link) — logic: `type(self)(...)` will trigger a deprecation warning during internal operations.
- `numba_cuda/numba/cuda/_api.py`, lines 205-208 (link) — logic: The internal `_pinned_array` should not emit deprecation warnings. Remove this warning, since this is the internal implementation.
- `numba_cuda/numba/cuda/_api.py`, lines 241-244 (link) — logic: The internal `_mapped_array` should not emit deprecation warnings. Remove this warning, since this is the internal implementation.
74 files reviewed, 3 comments
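The `type(self)(...)` comment points at a common pattern: give the class an alternate constructor that skips the warning so internal code paths stay quiet while the public constructor still warns. The sketch below is a generic illustration of that pattern, not the PR's actual `DeviceNDArray._create_nowarn` implementation (the `_warn` keyword is an assumed mechanism):

```python
import warnings


class Array:
    """Toy class whose public constructor is deprecated."""

    def __init__(self, data, _warn=True):
        if _warn:
            warnings.warn(
                "Array() is deprecated", FutureWarning, stacklevel=2
            )
        self.data = data

    @classmethod
    def _create_nowarn(cls, data):
        # Internal entry point: builds an instance without warning,
        # so internal operations (e.g. __setitem__) stay silent.
        return cls(data, _warn=False)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    public = Array([1, 2])                   # warns
    internal = Array._create_nowarn([3, 4])  # silent

print(len(caught))  # 1
```

Using `cls` in the classmethod means subclasses inherit the quiet constructor without re-implementing it.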
cpcloud left a comment
Couple of minor changes requested, but overall LGTM.
```python
    xoroshiro128p_normal_float64,
)

import cupy as cp
```
Why is this import needed here? It doesn't appear to be used and nothing else was changed.
```python
setattr(driver, "cuMemcpyHtoD", raising_transfer)
setattr(driver, "cuMemcpyDtoH", raising_transfer)
```
Why are we using setattr here? We know the attribute name, so there's no obvious reason to set this attribute dynamically.
```diff
-setattr(driver, "cuMemcpyHtoD", raising_transfer)
-setattr(driver, "cuMemcpyDtoH", raising_transfer)
+driver.cuMemcpyHtoD = driver.cuMemcpyDtoH = raising_transfer
```
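For a literal attribute name, `setattr(obj, "name", value)` and `obj.name = value` perform the same attribute store, so the suggested rewrite is purely a readability change; `setattr` only earns its keep when the name is computed at runtime. A quick demonstration on a stand-in object (the real test patches the driver module, not a `SimpleNamespace`):

```python
import types

# Stand-in for the driver module patched in the test.
driver = types.SimpleNamespace()


def raising_transfer(*args, **kwargs):
    raise RuntimeError("unexpected host<->device transfer")


# Dynamic form and plain assignment are equivalent for a fixed name:
setattr(driver, "cuMemcpyHtoD", raising_transfer)
driver.cuMemcpyDtoH = raising_transfer

print(driver.cuMemcpyHtoD is driver.cuMemcpyDtoH)  # True
```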
```python
else:
    del driver.cuMemcpyHtoD
if self.old_DtoH is not None:
    setattr(driver, "cuMemcpyDtoH", self.old_DtoH)
```
```diff
-setattr(driver, "cuMemcpyDtoH", self.old_DtoH)
+driver.cuMemcpyDtoH = self.old_DtoH
```
```python
old_DtoH = getattr(driver, "cuMemcpyDtoH", None)

def tearDown(self):
    if self.old_HtoD is not None:
        setattr(driver, "cuMemcpyHtoD", self.old_HtoD)
```
```diff
-setattr(driver, "cuMemcpyHtoD", self.old_HtoD)
+driver.cuMemcpyHtoD = self.old_HtoD
```
Co-authored-by: Phillip Cloud <[email protected]>
Additional Comments (10)

- `numba_cuda/numba/cuda/tests/cudapy/test_random.py`, line 22 (link) — style: The CuPy import is unused in this file. Is it intended for future use, or should it be removed until needed?
- `numba_cuda/numba/cuda/tests/doc_examples/test_laplace.py`, line 67 (link) — logic: This still calls `copy_to_host()` on a CuPy array, but CuPy arrays don't have this method; use `.get()` instead.
- `numba_cuda/numba/cuda/_api.py`, lines 205-208 (link) — logic: The internal function `_pinned_array` should not emit deprecation warnings; this defeats the purpose of having separate internal implementations.
- `numba_cuda/numba/cuda/_api.py`, lines 241-244 (link) — logic: The internal function `_mapped_array` should not emit deprecation warnings; this defeats the purpose of having separate internal implementations.
- `numba_cuda/numba/cuda/tests/cudapy/test_gufunc.py`, line 130 (link) — logic: This `cuda.device_array` call should also be wrapped in the warning context, since it's a deprecated API.
- `numba_cuda/numba/cuda/tests/cudapy/test_device_array_capture.py`, line 23 (link) — style: The function name and docstring are now misleading; it creates a CuPy array, not a Numba device array.
- `numba_cuda/numba/cuda/tests/cudapy/test_device_array_capture.py`, line 28 (link) — style: The docstring refers to a "Numba device array" but the function now wraps a CuPy array.
- `numba_cuda/numba/cuda/vectorizers.py`, line 121 (link) — style: Inconsistency: this uses the public `cuda.as_cuda_array()` while line 186 uses the internal `_api._as_cuda_array()` for the same operation.
- `numba_cuda/numba/cuda/tests/cudapy/test_gufunc_scalar.py`, line 126 (link) — syntax: Missing import of the `pytest` module, which is required for `pytest.warns()`.
- `numba_cuda/numba/cuda/cudadrv/devicearray.py`, lines 773-779 (link) — logic: Using the deprecated constructor will trigger warnings during internal `__setitem__` operations. Should use `type(self)._create_nowarn(...)` instead.
74 files reviewed, 10 comments
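On the missing `pytest` import: `pytest.warns(FutureWarning)` is the idiomatic assertion once the module is imported, but where adding the dependency to a file is undesirable, the standard library can make the same check. A sketch, using a hypothetical deprecated function as a stand-in for the real API:

```python
import warnings


def to_device(x):
    # Hypothetical deprecated function standing in for the real API.
    warnings.warn("to_device is deprecated", FutureWarning, stacklevel=2)
    return x


# Stdlib equivalent of `with pytest.warns(FutureWarning): ...`:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    value = to_device(42)

assert any(issubclass(w.category, FutureWarning) for w in caught)
print(value)  # 42
```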
/ok to test

/ok to test
74 files reviewed, no comments
Greptile's behavior is changing! From now on, if a review finishes with no comments, we will not post an additional "statistics" comment to confirm that our review found nothing to comment on. However, you can confirm that we reviewed your changes in the status check section. This feature can be toggled off in your Code Review Settings by deselecting "Create a status check for each PR".
/ok to test
Additional Comments (5)

- `numba_cuda/numba/cuda/tests/cudapy/test_transpose.py`, line 9 (link) — style: The CuPy import is unused in this file.
- `numba_cuda/numba/cuda/tests/cudadrv/test_cuda_devicerecord.py`, line 116 (link) — style: Should use `super().setUp()` for consistency with line 42.
- `numba_cuda/numba/cuda/tests/cudapy/test_gufunc_scalar.py`, line 58 (link) — logic: Inconsistent migration: `dev_out2` still uses the deprecated `copy_to_host()` method while `dev_out1` was migrated to CuPy's `.get()`. Should be `out2 = dev_out2.get()`.
- `numba_cuda/numba/cuda/vectorizers.py`, line 119 (link) — style: This line uses the public `cuda._api._is_cuda_ndarray` but should use the internal `_api._is_cuda_ndarray` for consistency.
- `numba_cuda/numba/cuda/vectorizers.py`, line 184 (link) — style: Still uses the deprecated public `cuda.cudadrv.devicearray.is_cuda_ndarray`; should use `_api._is_cuda_ndarray`.
74 files reviewed, 5 comments
Additional Comments (2)

- `numba_cuda/numba/cuda/api.py`, lines 148-154 (link) — style: Duplicates `_api._device_array()`; consider importing from `_api` instead to reduce code duplication.
- `numba_cuda/numba/cuda/api.py`, lines 293-320 (link) — style: Duplicates `_api._contiguous_strides_like_array()`; consider importing from the `_api` module.
74 files reviewed, 2 comments
/ok to test
Additional Comments (1)

- `numba_cuda/numba/cuda/kernels/reduction.py`, lines 262-264 (link) — logic: Unnecessary conversion here; `res` is already a device array with slicing support. The original `res[:1].copy_to_device(partials[:1], stream=stream)` was simpler and more efficient.
74 files reviewed, 1 comment
cpcloud left a comment
LGTM.
```python
class TestPinned(CUDATestCase):
    # TODO
```
non-blocking: is there a specific todo here?

Part of #471
- `DeprecatedNDArrayAPIWarning` emitted from all user-facing functions for moving data around (`cuda.to_device`, `driver.host_to_device`, `device_to_host`, also `as_cuda_array`, `is_cuda_array`, etc.)
- `DeviceNDArray` ctor
- `DeviceNDArray._create_nowarn`