Hi @jinsolp! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

Hi @jinsolp, can you complete the CLA? Then I can import it and run internal tests.

@mnorris11 Sure! : ) Who should I be writing as the "Point of Contact"? What about "Schedule A" (list of designated employees)? Should I be writing myself in those sections?

Hmm, @cjnolet @tarang-jain do you remember: did you fill out the Individual or Company one for NVIDIA? If Company, did you email cla@meta.com to update it with additional folks, or do you usually just direct folks to the Individual option? I think there is no preference on our side. If there is no NVIDIA "Company" CLA yet, feel free to start it @jinsolp and add yourself as Point of Contact and under the Schedule A list of employees (along with Corey, Tarang, Tamas, and any others you deem should be added).
@mnorris11 we were told to sign the individual one.

@mnorris11 Signed!

@mnorris11 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Seems like the ROCm build fails; are the logs visible to you, @jinsolp?

@mnorris11 yes, I can see the logs, but I can't tell why it failed from them. Do you know how I can reproduce the results?

Weird; seems like ROCm hipification is having trouble with

The cmake command to repro looks like this:

Meanwhile @ItsPitt, do you have ideas on the AMD side of what to include for

Error logs:
…into cuvs-cagra-fp16-pub
I've added

@mnorris11 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@mnorris11 Looks like build & tests failed with a bunch of warnings, but I don't think I have access to the log 👀

It looks like just warnings on the internal end, so no worries; it is now just in review.
@mnorris11 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@mnorris11 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@mnorris11 merged this pull request in 752b687.
    idx_t n,
    const void* x,
    NumericType numeric_type,
    const idx_t* xids) override;
Maybe we should have taken the opportunity to also make the id sizes parameterizable. There are many use cases where int32 is more appropriate than int64
We can change that in a subsequent diff, but I would like to avoid having 3 different add_with_ids implementations.
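To illustrate the trade-off behind parameterizable id sizes, here is a back-of-the-envelope sketch (plain numpy arithmetic, not faiss code): for large indexes, int32 ids halve the memory spent on ids compared to int64.

```python
import numpy as np

# Memory spent on ids alone for 100M vectors, per id dtype.
n = 100_000_000
int64_id_bytes = n * np.dtype(np.int64).itemsize  # 8 bytes per id
int32_id_bytes = n * np.dtype(np.int32).itemsize  # 4 bytes per id

print(int64_id_bytes // 2**20, "MiB (int64) vs", int32_id_bytes // 2**20, "MiB (int32)")
```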
    n, d = x.shape
    assert d == self.d
    x = np.ascontiguousarray(x, dtype='float32')
    if numeric_type == faiss.Float32:
It is a bit clumsy that the Python interface is not able to directly accept np.float16 arguments, because there is no way to tell if this will raise an error.
@mdouze is x expected to be a numpy array? The docs here say "array-like", but since it calls x.shape, can I assume that this is a numpy array and access the dtype instead of getting numeric_type as an argument?
It should be a numpy array, see https://github.com/facebookresearch/faiss/blob/main/faiss/python/swigfaiss.swig#L1163
Support for torch arrays is via another mechanism.
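As a sketch of the dtype-based alternative discussed above (the helper name is hypothetical, not the actual faiss wrapper), the wrapper could infer the numeric type from the numpy array itself instead of taking a separate numeric_type argument:

```python
import numpy as np

def infer_numeric_type(x):
    """Infer a numeric-type tag from a numpy array's dtype
    (hypothetical helper, for illustration only).

    float16 arrays are kept as float16; everything else is
    coerced to a contiguous float32 array, mirroring the
    wrapper code above."""
    x = np.asarray(x)
    if x.dtype == np.float16:
        return "float16", np.ascontiguousarray(x, dtype="float16")
    return "float32", np.ascontiguousarray(x, dtype="float32")
```

One caveat from the thread: this only works if x is guaranteed to be a numpy array, which the SWIG typemap linked above ensures; torch tensors go through a separate mechanism.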
        x = np.ascontiguousarray(x, dtype='float32')
    else:
        x = np.ascontiguousarray(x, dtype='float16')
    self.add_c(n, swig_ptr(x))
This will not work in the fp16 case, because it will go through the regular add method, not the one with numeric_type.
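A minimal sketch of the dispatch issue being pointed out (the class and method names here are stand-ins, not faiss's real API): the fp16 branch has to route through a numeric_type-aware entry point rather than the plain add path.

```python
import numpy as np

class IndexSketch:
    """Toy stand-in for illustration; 'add_c' and
    'add_with_numeric_type_c' are hypothetical entry points."""

    def __init__(self):
        self.calls = []

    def add_c(self, n, x):
        # plain float32 entry point
        self.calls.append(("add_c", n))

    def add_with_numeric_type_c(self, n, x, numeric_type):
        # typed entry point that knows about fp16
        self.calls.append(("add_with_numeric_type_c", n, numeric_type))

    def add(self, x):
        x = np.asarray(x)
        n = x.shape[0]
        if x.dtype == np.float16:
            x = np.ascontiguousarray(x, dtype="float16")
            # fp16 must NOT go through the regular add path
            self.add_with_numeric_type_c(n, x, "Float16")
        else:
            x = np.ascontiguousarray(x, dtype="float32")
            self.add_c(n, x)
```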
|
Thanks for the feedback @mdouze! It looks like the PR is merged already. I'll open up a new PR with follow-ups.
Summary: This PR does 2 things:
- Enable support for `IndexIDMap` with Cagra fp16 (original support introduced in #4188)
- Added tests in `test_cagra.py`
- Reflecting feedback about the Python API from #4384 (comment)

Pull Request resolved: #4411
Reviewed By: junjieqi
Differential Revision: D78695771
Pulled By: mnorris11
fbshipit-source-id: 4b3a0869bed5d33165354f415c748812b0d4b253
Summary: Supporting fp16 for cuVS cagra, and introducing new extended APIs for this.

Discussions related to this issue: facebookresearch/faiss#4324

Added tests in `faiss/gpu/test/TestGpuIndexCagra.cu` and `faiss/gpu/test/test_cagra.py` for example usage.

Pull Request resolved: facebookresearch/faiss#4384
Reviewed By: junjieqi
Differential Revision: D76480612
Pulled By: mnorris11
fbshipit-source-id: 863d8671eab461733110f74550ffc56650f77407