Conversation

@jhorstmann
Contributor

Which issue does this PR close?

Prerequisite for investigating parquet writing performance (#7822).

Rationale for this change

The benchmark should measure the CPU overhead of parquet writing, not the OS or filesystem parts of it. Running the benchmark showed that writing to the unbuffered file accounted for nearly 50% of the measured time, which makes profiling more difficult by hiding the bottlenecks inside the parquet code itself.

What changes are included in this PR?

Use a `Vec` instead of an unbuffered `File` as the sink.
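
For reference, a minimal sketch of the idea, not the actual benchmark code: a `Vec<u8>` implements `std::io::Write`, so it can serve as the `ArrowWriter` sink directly. The schema and the `write_batch_to_vec` helper below are made up for illustration.

```rust
use std::sync::Arc;

use arrow_array::{Int64Array, RecordBatch};
use arrow_schema::{DataType, Field, Schema};
use parquet::arrow::ArrowWriter;

// Hypothetical benchmark body: write into an in-memory Vec<u8> so the
// measurement captures parquet CPU work, not OS or filesystem overhead.
fn write_batch_to_vec(batch: &RecordBatch) -> parquet::errors::Result<Vec<u8>> {
    // Vec<u8> implements std::io::Write, so it can be the sink directly.
    let mut buffer = Vec::new();
    let mut writer = ArrowWriter::try_new(&mut buffer, batch.schema(), None)?;
    writer.write(batch)?;
    writer.close()?;
    Ok(buffer)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let schema = Arc::new(Schema::new(vec![Field::new("v", DataType::Int64, true)]));
    let array = Int64Array::from(vec![Some(1), None, Some(3)]);
    let batch = RecordBatch::try_new(schema, vec![Arc::new(array)])?;
    let bytes = write_batch_to_vec(&batch)?;
    println!("wrote {} bytes", bytes.len());
    Ok(())
}
```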

Are these changes tested?

Tested by running the benchmark.

Are there any user-facing changes?

No

The github-actions bot added the parquet label (Changes to the parquet crate) on Jun 29, 2025.
Dandandan merged commit a9f316b into apache:main on Jun 29, 2025 (16 checks passed).
@Dandandan
Contributor

Thanks @jhorstmann

Dandandan pushed a commit that referenced this pull request Jul 4, 2025
# Which issue does this PR close?

- Closes #7822. The benchmark update in #7823 should be merged first to
get a fair baseline.

# Rationale for this change

The changes in this PR improve parquet writing performance for
primitives by up to 45%.

# What changes are included in this PR?

There was not a single bottleneck to fix; instead, several small
improvements contributed to the performance increase:

- Optimize counting of values and nulls by replacing a loop with code
that can be vectorized by the compiler. The number of nulls can also be
derived from the length of the array and the number of values to write,
instead of being counted separately (see the first sketch after this
list).
- Change asserts in `BitWriter::put_value` to `debug_assert`, since
these should never be triggered by users of the code and are not
required for soundness.
- Use slice iteration instead of indexing in `flush_bit_packed_run` to
avoid a bounds check (see the second sketch).
- Separate the iteration for `def_levels` and `non_null_indices` using
specialized iterators: range iteration is `TrustedLen` and so avoids
multiple capacity checks, and `BitIndexIterator` is more optimized for
collecting non-null indices (see the third sketch).
- Cache the logical nulls of the array to avoid clones or repeated
recomputation. This avoids a pathological case when writing lists of
arrays that need logical nulls (see the fourth sketch).
- Optimize bloom filter initialization down to a single `memset`, and
write all blocks as a single slice on little-endian targets (see the
fifth sketch).
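
First, a hypothetical sketch of the counting technique; the function and its names are invented for illustration, not taken from the patch. The branch-free map/sum can be auto-vectorized, and the null count falls out of the slice length instead of a second counter.

```rust
// Hypothetical illustration of the counting change, not the actual code.
fn count_values_and_nulls(def_levels: &[i16], max_def_level: i16) -> (usize, usize) {
    // Each comparison contributes 0 or 1; LLVM can vectorize this loop.
    let values: usize = def_levels
        .iter()
        .map(|&level| (level == max_def_level) as usize)
        .sum();
    // Nulls need no separate loop: everything that is not a value is null.
    let nulls = def_levels.len() - values;
    (values, nulls)
}
```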
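
Second, a generic illustration of the bounds-check pattern; this is hypothetical code, not the body of `flush_bit_packed_run`. When the loop bound is not provably tied to the slice length, indexed access pays a bounds check per element, while taking a subslice once and iterating it moves the check out of the loop.

```rust
// Hypothetical before/after for the bounds-check fix.
fn pack_indexed(buffer: &[u64], count: usize, out: &mut Vec<u8>) {
    for i in 0..count {
        // `count` is not provably <= buffer.len(), so every access here
        // carries its own bounds check.
        out.extend_from_slice(&buffer[i].to_le_bytes());
    }
}

fn pack_iterated(buffer: &[u64], count: usize, out: &mut Vec<u8>) {
    // Slicing once checks the bound a single time; iteration needs none.
    for value in &buffer[..count] {
        out.extend_from_slice(&value.to_le_bytes());
    }
}
```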
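
Third, a hypothetical sketch of the split iteration; the helper function is invented, while `NullBuffer::valid_indices` is the arrow-buffer API that returns a `BitIndexIterator`. The no-nulls path collects a `Range`, which is `TrustedLen` and so allocates exactly once; the nullable path walks only the set validity bits.

```rust
use arrow_buffer::NullBuffer;

// Hypothetical helper showing the two specialized iteration paths.
fn non_null_indices(len: usize, nulls: Option<&NullBuffer>) -> Vec<usize> {
    match nulls {
        // Range<usize> is TrustedLen: collect() reserves capacity once and
        // performs no per-element capacity checks.
        None => (0..len).collect(),
        // valid_indices() yields a BitIndexIterator over set validity bits,
        // which is optimized for gathering non-null positions.
        Some(validity) => validity.valid_indices().collect(),
    }
}
```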
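
Fourth, a hypothetical sketch of the caching idea; the struct is invented for illustration. `Array::logical_nulls` can allocate (for example, for dictionary or run-end encoded arrays), so it is computed once up front rather than cloned or recomputed on every step of the write loop.

```rust
use arrow_array::Array;
use arrow_buffer::NullBuffer;

// Hypothetical cache: compute logical nulls once, reuse for every lookup.
struct CachedNulls {
    nulls: Option<NullBuffer>,
}

impl CachedNulls {
    fn new(array: &dyn Array) -> Self {
        // The one potentially expensive call happens here, exactly once.
        Self { nulls: array.logical_nulls() }
    }

    fn is_valid(&self, index: usize) -> bool {
        // No buffer means no nulls: every position is valid.
        self.nulls.as_ref().map_or(true, |n| n.is_valid(index))
    }
}
```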
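
Fifth, a hypothetical sketch of the bloom-filter I/O idea; the function is invented, and it assumes the filter stores its blocks as `u32` words. A `vec!` of zeros compiles to one zeroed allocation, effectively a single `memset`, and on little-endian targets the in-memory words already match parquet's on-disk layout, so the whole filter can be written as one byte slice.

```rust
use std::io::{self, Write};

// Hypothetical illustration of the single-slice write on little-endian.
fn write_blocks<W: Write>(blocks: &[u32], mut sink: W) -> io::Result<()> {
    if cfg!(target_endian = "little") {
        // u32 has no padding and any byte pattern is a valid u8, so
        // reinterpreting the words as bytes is sound; the resulting slice
        // matches parquet's little-endian on-disk layout.
        let bytes: &[u8] = unsafe {
            std::slice::from_raw_parts(blocks.as_ptr() as *const u8, blocks.len() * 4)
        };
        sink.write_all(bytes)
    } else {
        // Big-endian targets must still byte-swap word by word.
        for word in blocks {
            sink.write_all(&word.to_le_bytes())?;
        }
        Ok(())
    }
}

fn main() -> io::Result<()> {
    // One zeroed allocation, i.e. a single memset, instead of per-block init.
    let blocks = vec![0u32; 32 * 8];
    write_blocks(&blocks, io::sink())
}
```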

# Are these changes tested?

Logic should be covered by existing tests.

# Are there any user-facing changes?

No, all changes are to implementation details and do not affect public APIs.