Conversation

@ZzEeKkAa ZzEeKkAa commented Apr 9, 2025

Vendor numba's make_attribute_wrapper until it supports the CUDA target. The vendored function was changed to use a CUDA-specific data model manager chained with the default data model manager. This makes it possible to wrap attributes of types that are not supported by numba but are supported by numba-cuda (such as fp16 arrays).
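The "chained manager" idea described above can be sketched in plain Python: look up a type's data model in the CUDA-specific manager first, and fall back to the default manager otherwise. This is a conceptual illustration only, not the actual numba/numba-cuda implementation; the class and model names here are hypothetical.

```python
# Conceptual sketch of the chained data-model-manager lookup described
# in the PR. NOT the real numba-cuda code; all names are hypothetical.

class ChainedModelManager:
    """Resolve a type's data model from a target-specific manager,
    falling back to a default manager when the target has no entry."""

    def __init__(self, target_manager, default_manager):
        self._target = target_manager
        self._default = default_manager

    def lookup(self, typ):
        # Prefer the CUDA-specific model (e.g. for fp16 arrays) ...
        if typ in self._target:
            return self._target[typ]
        # ... and fall back to the default model otherwise.
        return self._default[typ]

# Models registered only in the CUDA manager (like fp16 arrays) resolve
# there; everything else comes from the default manager.
cuda_models = {"float16[:]": "CUDAFp16ArrayModel"}
default_models = {"float64[:]": "DefaultArrayModel"}

mgr = ChainedModelManager(cuda_models, default_models)
print(mgr.lookup("float16[:]"))  # CUDAFp16ArrayModel
print(mgr.lookup("float64[:]"))  # DefaultArrayModel
```

An attribute wrapper built on such a chained lookup can then emit attribute access code for any type either manager knows about, which is what lets the vendored make_attribute_wrapper handle fp16 arrays.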

@gmarkall gmarkall added the 2 - In Progress Currently a work in progress label Apr 10, 2025
@gmarkall gmarkall left a comment

I think the code looks good (a couple of small comments on the diff). I think we also need:

  • A merge of main to fix CI.
  • A test of the functionality it does enable - I can see it doesn't break any of the existing use cases in the test suite, but I assume those all used models that are present in the default data model manager. Can we have a test using a model that's only present in the CUDA data model manager?
    • If this is not the appropriate test, then I have misunderstood the purpose of the PR, and a clarification and appropriate test would also be appreciated 🙂

@gmarkall gmarkall added 4 - Waiting on author Waiting for author to respond to review and removed 2 - In Progress Currently a work in progress labels Apr 14, 2025
ZzEeKkAa and others added 3 commits April 16, 2025 09:37
Co-authored-by: Graham Markall <535640+gmarkall@users.noreply.github.com>
@ZzEeKkAa ZzEeKkAa requested a review from gmarkall April 16, 2025 13:48
@gmarkall gmarkall added 4 - Waiting on reviewer Waiting for reviewer to respond to author and removed 4 - Waiting on author Waiting for author to respond to review labels Apr 17, 2025
@gmarkall gmarkall added 5 - Ready to merge Testing and reviews complete, ready to merge and removed 4 - Waiting on reviewer Waiting for reviewer to respond to author labels Apr 30, 2025
@gmarkall gmarkall merged commit b0059f0 into NVIDIA:main Apr 30, 2025
35 checks passed
gmarkall added a commit to gmarkall/numba-cuda that referenced this pull request May 2, 2025
- Fix Invalid NVVM IR emitted when lowering shfl_sync APIs (NVIDIA#231)
- Add Bfloat16 Low++ Bindings (NVIDIA#166)
- Fix cuda.jit decorator inline (NVIDIA#181)
- Feature: cuda specific make_attribute_wrapper (NVIDIA#193)
- return a none tuple if no libdevice path is found (NVIDIA#234)
@gmarkall gmarkall mentioned this pull request May 2, 2025
gmarkall added a commit to gmarkall/numba-cuda that referenced this pull request May 3, 2025
- Local variable debug info deduplication (NVIDIA#222)
- Fix package installation for wheels CI  (NVIDIA#238)
- Fix Invalid NVVM IR emitted when lowering shfl_sync APIs (NVIDIA#231)
- Add Bfloat16 Low++ Bindings (NVIDIA#166)
- Fix cuda.jit decorator inline (NVIDIA#181)
- Feature: cuda specific make_attribute_wrapper (NVIDIA#193)
- return a none tuple if no libdevice path is found (NVIDIA#234)
@gmarkall gmarkall mentioned this pull request May 3, 2025
gmarkall added a commit that referenced this pull request May 3, 2025
- Local variable debug info deduplication (#222)
- Fix package installation for wheels CI  (#238)
- Fix Invalid NVVM IR emitted when lowering shfl_sync APIs (#231)
- Add Bfloat16 Low++ Bindings (#166)
- Fix cuda.jit decorator inline (#181)
- Feature: cuda specific make_attribute_wrapper (#193)
- return a none tuple if no libdevice path is found (#234)