ARROW-1943: [JAVA] handle setInitialCapacity for deeply nested lists #1439
Conversation
I thought more about this, and the implemented solution is debatable even though it fixes the over-allocation exception. For example, consider the newly added unit test for the schema LIST(LIST(INT)). Since each position in the top-level vector holds one or more lists, the number of offsets in the inner list will always be greater than the number of offsets in its parent. This implies that capacity should grow by some factor as we go down the tree.

In the context of the unit test: the value capacity of the outer list is 2 and of the inner list is 4, because each position of the outer list holds 2 inner lists, and we then have an int vector with value capacity 10 holding the data across all inner lists. So doing list.setInitialCapacity(2) -> innerList.setInitialCapacity(2) -> intVector.setInitialCapacity(2 * 5) will require expanding the offset (and validity) buffers of the inner list.

The real question is whether there is a reasonable way to increase the multiplier as we go down the nested lists. The current patch keeps it the same until we reach the scalars, where it directly uses a multiplier of 5. As a result, this may still force re-allocation of the internal buffers of each inner list vector as the user app writes data into deeply nested lists (see the sketch below).
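For concreteness, a minimal sketch of the unit-test scenario using the Arrow Java API (the class name, vector names, and the capacities quoted in comments are my own illustration of the behavior described above, not part of the patch):

```java
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.complex.ListVector;
import org.apache.arrow.vector.types.pojo.ArrowType;
import org.apache.arrow.vector.types.pojo.FieldType;

public class NestedListCapacitySketch {
  public static void main(String[] args) {
    try (BufferAllocator allocator = new RootAllocator(Long.MAX_VALUE);
         ListVector outer = ListVector.empty("outer", allocator)) {
      // Build the LIST(LIST(INT)) schema from the unit test.
      outer.addOrGetVector(FieldType.nullable(ArrowType.List.INSTANCE));
      ListVector inner = (ListVector) outer.getDataVector();
      inner.addOrGetVector(FieldType.nullable(new ArrowType.Int(32, true)));

      // With this patch the multiplier is applied only at the leaf:
      // outer gets 2, inner gets 2, and the int vector gets 2 * 5 = 10.
      outer.setInitialCapacity(2);
      outer.allocateNew();

      // If each outer position actually holds 2 inner lists, the inner list
      // needs value capacity 4, so its offset/validity buffers get
      // reallocated while writing -- the trade-off discussed above.
      System.out.println(outer.getValueCapacity());
      System.out.println(inner.getValueCapacity());
      System.out.println(inner.getDataVector().getValueCapacity());
    }
  }
}
```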
Ping. |
I'm +1 on this approach. It may not be perfect, but it is definitely far better than the old one.
The current implementation of setInitialCapacity() applies a factor of 5 at every level of list nesting.
So if the schema is LIST(LIST(LIST(LIST(LIST(LIST(LIST(BIGINT))))))) and we start with an initial capacity of 128, we end up throwing OversizedAllocationException from the BigIntVector: the capacity is multiplied by 5 at every level, so by the time we reach the inner scalar vector that actually stores the data, we are well over the maximum size limit per vector (1MB). The arithmetic sketch below makes this concrete.
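A small illustration of the blow-up under the old behavior (the factor of 5 and the 128 starting capacity are from the description above; the loop itself is just an illustration):

```java
// Old behavior: the factor of 5 is applied at every one of the 7 list levels.
long valueCapacity = 128;
for (int level = 0; level < 7; level++) {
  valueCapacity *= 5;
}
// valueCapacity = 128 * 5^7 = 10,000,000 values; at 8 bytes per BIGINT the
// leaf data buffer alone would need ~80 MB, far beyond the per-vector limit.
System.out.println(valueCapacity);
```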
We saw this problem downstream when we failed to read deeply nested JSON data.
The potential fix is to apply the factor of 5 only when we are down to the leaf vector. As long as we are still descending through complex/list vectors, we pass the capacity down unchanged. A sketch of the idea follows.
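A minimal sketch of that idea as it could look in a repeated vector's setInitialCapacity() (my paraphrase of the approach, not the literal diff; the field and constant names are assumptions based on BaseRepeatedValueVector):

```java
// Sketch: apply the per-record repeat factor (5) only when the child is a
// leaf (scalar) vector; nested complex children get the capacity unchanged.
@Override
public void setInitialCapacity(int numRecords) {
  // Offset buffer always needs numRecords + 1 entries.
  offsetAllocationSizeInBytes = (numRecords + 1) * OFFSET_WIDTH;
  if (vector instanceof BaseFixedWidthVector
      || vector instanceof BaseVariableWidthVector) {
    // Leaf vector that actually stores data: assume ~5 values per list.
    vector.setInitialCapacity(numRecords * 5);
  } else {
    // Still inside a complex/list vector: do not multiply.
    vector.setInitialCapacity(numRecords);
  }
}
```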
cc @jacques-n, @BryanCutler, @icexelloss

Author: siddharth <[email protected]>

Closes apache#1439 from siddharthteotia/ARROW-1943 and squashes the following commits:

d0adbad [siddharth] unit tests
e2f21a8 [siddharth] fix imports
d103436 [siddharth] ARROW-1943: handle setInitialCapacity for deeply nested lists