perf: iterate over generators when writing datafiles to reduce memory pressure #2671
Rationale for this change
When writing to partitioned tables, there is a large memory spike when the partitions are computed, because we call `.combine_chunks()` on the new partitioned Arrow tables and materialize the entire list of partitions before writing data files. This PR switches the partition computation to a generator, so partitions are produced and written one at a time instead of all being held in memory at once, reducing the memory overhead of writing to partitioned tables. A rough sketch of the idea follows.
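A minimal, hypothetical sketch of the before/after shape of the change; the names (`iter_partitions`, `write_datafile`) and the partitioning logic are illustrative placeholders, not the actual PyIceberg internals touched by this PR:

```python
from typing import Iterator

import pyarrow as pa
import pyarrow.compute as pc


def iter_partitions(table: pa.Table, partition_column: str) -> Iterator[pa.Table]:
    """Yield one Arrow table per distinct partition value (illustrative only)."""
    for value in table[partition_column].unique().to_pylist():
        mask = pc.equal(table[partition_column], value)
        # Each partition slice is yielded, written, and then released,
        # instead of being held in memory alongside every other partition.
        yield table.filter(mask).combine_chunks()


# Before (roughly): all partitions materialized up front
#   partitions = [t for t in iter_partitions(table, "day")]
# After: consume the generator and write one data file at a time
#   for part in iter_partitions(table, "day"):
#       write_datafile(part)  # `write_datafile` is a stand-in for the real write path
```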
Are these changes tested?
No new tests. The tests using this method were updated to consume the generator as a list.
However, in my personal use case, I am using `pa.total_allocated_bytes()` to determine memory allocation before and after the write, and see the following across 5 writes of ~128 MB:

This scales with the size of the write: to write a 3 GB Arrow table to a partitioned table, I need at least 6 GB of RAM.
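For reference, a rough sketch of how such a measurement can be taken with PyArrow's default memory pool; the Iceberg table handle and the `append` call are assumptions about the write path being measured, not part of this PR:

```python
import pyarrow as pa


def measure_write(iceberg_table, arrow_table: pa.Table) -> int:
    """Return the change in bytes held by PyArrow's default memory pool across a write."""
    before = pa.total_allocated_bytes()
    iceberg_table.append(arrow_table)  # partitioned write under test (assumed API)
    after = pa.total_allocated_bytes()
    return after - before


# Usage idea: repeat for several ~128 MB batches and compare before/after this PR.
# for i, batch in enumerate(batches):
#     print(f"write {i}: delta = {measure_write(tbl, batch) / 2**20:.1f} MiB")
```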
Are there any user-facing changes?
No.