@hamilton-earthscope (Contributor) commented Oct 28, 2025

Rationale for this change

When writing to partitioned tables, there is a large memory spike while the partitions are computed, because we call `.combine_chunks()` on each new partitioned Arrow table and materialize the entire list of partitions before writing any data files.

This PR switches the partition computation to a generator to avoid materializing all the partitions in memory at once, reducing the memory overhead of writing to partitioned tables.
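To illustrate the shape of the change, here is a minimal sketch of the list-vs-generator pattern; the function names and the `group_indices` argument are simplified stand-ins for illustration, not the actual PyIceberg internals:

```python
from typing import Iterator, List

import pyarrow as pa


def partitions_as_list(table: pa.Table, group_indices: List[List[int]]) -> List[pa.Table]:
    # Materializes every partition slice up front; combine_chunks()
    # copies each slice into fresh contiguous buffers, so the whole
    # input is effectively duplicated before any data file is written.
    return [table.take(indices).combine_chunks() for indices in group_indices]


def partitions_as_generator(table: pa.Table, group_indices: List[List[int]]) -> Iterator[pa.Table]:
    # Yields one partition slice at a time, so the caller can write
    # and release each slice before the next one is produced.
    for indices in group_indices:
        yield table.take(indices).combine_chunks()
```

With the generator form, peak memory is bounded by roughly one partition slice at a time rather than the full set of slices.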

Are these changes tested?

No new tests were added; the existing tests that exercise this method were updated to consume the generator as a list.

However, in my personal use case, I am using `pa.total_allocated_bytes()` to measure memory allocation before and after the write, and I see the following across 5 writes of ~128 MB each (a sketch of the measurement harness follows the table):

| Run | Original Impl (Before Write) | Original Impl (After Write) | Iterator Impl (Before Write) | Iterator Impl (After Write) |
|-----|------------------------------|-----------------------------|------------------------------|-----------------------------|
| 1   | 29.31 MB                     | 151.62 MB                   | 28.38 MB                     | 30.40 MB                    |
| 2   | 27.74 MB                     | 151.62 MB                   | 28.85 MB                     | 30.36 MB                    |
| 3   | 28.81 MB                     | 151.62 MB                   | 28.52 MB                     | 31.29 MB                    |
| 4   | 28.71 MB                     | 151.62 MB                   | 29.27 MB                     | 30.64 MB                    |
| 5   | 28.60 MB                     | 151.61 MB                   | 28.29 MB                     | 31.11 MB                    |

This scales with the size of the write: to write a 3 GB Arrow table to a partitioned table, I need at least 6 GB of RAM.
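For context, the measurements above can be reproduced with a harness along these lines (a sketch; the commented-out write call is a placeholder for the real partitioned-write entry point, not code from this PR):

```python
import pyarrow as pa


def report_allocation(label: str) -> None:
    # total_allocated_bytes() reports bytes currently held by PyArrow's
    # default memory pool; the before/after delta shows what the write
    # path leaves allocated, not the transient peak during the write.
    print(f"{label}: {pa.total_allocated_bytes() / 2**20:.2f} MB")


# Hypothetical usage around a partitioned write, where `tbl` is a
# PyIceberg table handle and `arrow_table` is the data to append:
# report_allocation("Before write")
# tbl.append(arrow_table)
# report_allocation("After write")
```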

Are there any user-facing changes?

No.

@kevinjqliu (Contributor) left a comment:

LGTM! thank you

@kevinjqliu merged commit 5773b7f into apache:main Nov 3, 2025
8 checks passed