Enable Faiss-based vector format to index larger number of vectors in a single segment#14847
Conversation
… a single segment
- Moves away from a ByteBuffer (with a 2 GB limit) to direct copying of vectors to native memory
- Also simplifies some other off-heap memory IO instances
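For context on the 2 GB limit mentioned above, here's a small illustrative snippet (not from the PR itself): `ByteBuffer` capacities are `int`-typed, so a single buffer can never address more than `Integer.MAX_VALUE` bytes, regardless of available native memory.

```java
import java.nio.ByteBuffer;

public class ByteBufferLimit {
    public static void main(String[] args) {
        // ByteBuffer capacities are ints, so a single (direct) buffer cannot
        // exceed Integer.MAX_VALUE bytes (~2 GB) -- the limit this PR works around.
        long needed = 600_000_000L * Float.BYTES; // e.g. 600M floats ≈ 2.4 GB
        System.out.println(needed > Integer.MAX_VALUE); // true: won't fit in one buffer

        // ByteBuffer.allocateDirect only accepts an int capacity:
        ByteBuffer small = ByteBuffer.allocateDirect(1024);
        System.out.println(small.capacity()); // 1024
    }
}
```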
This PR does not have an entry in lucene/CHANGES.txt. Consider adding one. If the PR doesn't need a changelog entry, then add the skip-changelog label to it and you will stop receiving this reminder on future updates to the PR.
msokolov
left a comment
This LGTM - one question: do we have unit tests covering this?
@msokolov I wasn't sure about attempting to index a large amount of vector data, given that it'll take up a few GB of RAM. I've added a test for now; please let me know if I should keep it (or how to test it better). Perhaps having the test is fine, because we run Faiss tests (and only those) in a separate GH action? The test fails deterministically when added to …
Sorry, I was too vague - I didn't mean we should be testing the > 2 GB case! I just wanted to make sure we had unit test coverage for these classes at all, because I'm not familiar with this part of the codebase.
- Also modify the test to make backporting easier
Yes, we have a test class that runs all tests in the … We had to modify / disable a few because the format only supports float vectors and a few similarity functions. We run these tests on each PR / commit via GH actions; see a sample run from this PR, which ran: …
I kind of like that we have this test; can we just mark it as "monster" so that we don't run it locally / from GH actions? I was able to run it using: `./gradlew -p lucene/sandbox -Dtests.faiss.run=true test --tests "org.apache.lucene.sandbox.codecs.faiss.*" -Dtests.monster=true -Dtests.heapsize=16g`, where it took a (relatively) long time to run. Also, running it on …
+1, this is exactly why we have the monster annotation!
mikemccand
left a comment
Thanks @kaivalnp -- this looks like a rote cutover from the legacy ByteBuffer to MemorySegment. Thank you for adding the new monster test and confirming it passes!
Do we have any tests that check for memory leaks? E.g. a test that creates a Faiss HNSW graph, and then opens/closes it thousands of times? I don't think we should block this awesome change for these tests ... we can pursue them separately.
Thanks @mikemccand!
I don't think we have tests today, so I opened #14875 to track it -- plus the broader question of how to safely use the new format!
@mikemccand I stumbled upon a way to allocate a … With this, I think we've moved away from all …

Edit: Also posting a benchmark run to check that we didn't change any behavior:
This PR: … There is no tangible difference in performance (seems to be within the range of noise).
Thanks @kaivalnp -- I'll merge this one soon. Let's remember to also backport this to 10.x? |
Could you also add an entry in …
Thanks @mikemccand, I thought it was a follow-up to the original PR adding the codec, and may not need a separate entry -- but I've added one under "Bug Fixes" now. I'll update the backport PR once this is merged!
Description
I was trying to index a large number of vectors in a single segment, and ran into an error because of the way we copy vectors to native memory, before calling Faiss to create an index:
This limitation was hit because we use a `ByteBuffer` (backed by native memory) to copy vectors from the heap, which has a 2 GB size limit.

As a fix, I've changed it to use `MemorySegment`-specific functions to copy vectors (also moving away from these byte buffers in other places, and using more appropriate IO methods).

With these changes, we no longer see the above error, and are able to build and search an index. Also ran benchmarks for a case where this limit was not hit, to check for performance impact:
Baseline (on `main`): …

Candidate (on this PR): …
…and indexing / search performance is largely unchanged.
Edit: Related to #14178
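To illustrate the kind of change described above, here's a minimal sketch (not the PR's actual code) of copying heap float vectors into native memory via `MemorySegment`, whose offsets are `long`-typed and so can exceed 2 GB. It requires Java 22+ (the finalized Foreign Function & Memory API); all class and method names below are illustrative.

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

public class CopyVectorsDemo {
    // Copy a batch of float vectors from heap arrays to one native allocation.
    // A ByteBuffer-based copy is capped at Integer.MAX_VALUE bytes (~2 GB),
    // while a MemorySegment can address a long-sized byte count.
    static MemorySegment copyToNative(Arena arena, float[][] vectors, int dim) {
        long totalBytes = (long) vectors.length * dim * Float.BYTES; // may exceed 2 GB
        MemorySegment dest = arena.allocate(totalBytes, Float.BYTES);
        long offset = 0;
        for (float[] vector : vectors) {
            // Bulk-copy one heap array into the native segment at a long offset
            MemorySegment.copy(vector, 0, dest, ValueLayout.JAVA_FLOAT, offset, dim);
            offset += (long) dim * Float.BYTES;
        }
        return dest;
    }

    public static void main(String[] args) {
        try (Arena arena = Arena.ofConfined()) { // segment freed when arena closes
            float[][] vectors = {{1f, 2f}, {3f, 4f}};
            MemorySegment seg = copyToNative(arena, vectors, 2);
            System.out.println(seg.byteSize()); // 16
            System.out.println(seg.getAtIndex(ValueLayout.JAVA_FLOAT, 3)); // 4.0
        }
    }
}
```

Using a confined `Arena` also gives deterministic deallocation of the native copy, instead of relying on GC of a direct buffer.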