Conversation

@eric-maynard (Contributor) commented Jul 30, 2025

This adds support for the DELTA_LENGTH_BYTE_ARRAY Parquet encoding.

The logic is taken from Spark's VectorizedDeltaLengthByteArrayReader with adjustments made for compatibility with our existing Parquet reader.
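
For reviewers unfamiliar with the encoding, the wire layout (per the Parquet format spec) is: all value lengths up front as a DELTA_BINARY_PACKED block, followed by the values' bytes concatenated back to back. The sketch below is not this PR's code, just a minimal illustration of the slicing half of that layout, assuming the length block has already been decoded by an existing delta-binary-packed reader:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Minimal illustration of a DELTA_LENGTH_BYTE_ARRAY page, assuming the length
// block has already been decoded elsewhere (e.g. by a delta-binary-packed reader).
// Page layout: [lengths as DELTA_BINARY_PACKED] [all value bytes concatenated].
public class DeltaLengthByteArraySketch {

  // Slices each value out of the concatenated data block using the decoded lengths.
  static ByteBuffer[] sliceValues(int[] lengths, ByteBuffer data) {
    ByteBuffer[] values = new ByteBuffer[lengths.length];
    int offset = data.position();
    for (int i = 0; i < lengths.length; i++) {
      ByteBuffer slice = data.duplicate();
      slice.position(offset);
      slice.limit(offset + lengths[i]);
      values[i] = slice.slice();
      offset += lengths[i];
    }
    return values;
  }

  public static void main(String[] args) {
    // "ab", "cde", "" would be stored as lengths [2, 3, 0] plus the bytes of "abcde".
    int[] lengths = {2, 3, 0};
    ByteBuffer data = ByteBuffer.wrap("abcde".getBytes(StandardCharsets.UTF_8));
    for (ByteBuffer value : sliceValues(lengths, data)) {
      byte[] bytes = new byte[value.remaining()];
      value.get(bytes);
      System.out.println("\"" + new String(bytes, StandardCharsets.UTF_8) + "\"");
    }
  }
}
```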

@eric-maynard (Contributor, Author) commented:

Thanks @danielcweeks -- does this also include DELTA_BYTE_ARRAY? What about BYTE_STREAM_SPLIT? I had been operating under the assumption that everything in the Parquet docs is officially adopted within Parquet and that we should endeavor to support all the encodings Spark does.

@amogh-jahagirdar (Contributor) left a comment

While I generally agree with @danielcweeks that Iceberg as a project shouldn't support functionality that hasn't been clearly voted on and adopted in dependencies like Parquet, the project is already in that position: these encodings are supported in the non-vectorized Parquet reader.

At the moment, for encodings like DELTA_BYTE_ARRAY (and previously DELTA_BINARY_PACKED, before #13391), the vectorized reader would simply fail and we'd point users to the non-vectorized reader to unblock their reads, so from a user perspective the non-vectorized reader already supports these encodings.

So in my mind, users probably already expect to be able to read these encodings as long as they disable vectorized reads (the current workaround is sketched below). At that point, we're really just discussing extending that support to vectorized reads.
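
For concreteness, this is roughly the workaround we point users to today. A minimal sketch, assuming the Iceberg Spark read option `vectorization-enabled` (the table property `read.parquet.vectorization.enabled` should be the table-level equivalent) and a made-up table name:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class NonVectorizedReadSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("non-vectorized-read").getOrCreate();

    // Per-read override: fall back to the non-vectorized Parquet reader so pages
    // using encodings the vectorized reader doesn't handle can still be read.
    Dataset<Row> df =
        spark
            .read()
            .option("vectorization-enabled", "false")
            .table("db.events"); // hypothetical Iceberg table

    df.show();
  }
}
```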

I think the right path is to add support for these encodings to the vectorized reader while working with the Parquet community to make their formal adoption clearer.

@huaxingao (Contributor) commented:

+1 to @amogh-jahagirdar's comment. Since users already expect to be able to read these encodings with non-vectorized reads, adding them to the vectorized reader would make the two paths consistent, and Spark already supports these encodings in its own vectorized reader. We should still pursue formal adoption in Parquet, though. Shall we start a discussion/voting thread in the Parquet community?

@github-actions github-actions bot added the spark label Aug 12, 2025
@eric-maynard eric-maynard marked this pull request as ready for review August 12, 2025 19:27
@eric-maynard (Contributor, Author) commented:

In addition to the fact that we already support this in non-vectorized mode, I believe DELTA_LENGTH_BYTE_ARRAY is also a prerequisite for supporting DELTA_BYTE_ARRAY (see the sketch below) -- I'm not sure whether that one is also considered unsupported per the above discussion.
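
To spell out the dependency: per the Parquet format spec, a DELTA_BYTE_ARRAY page stores prefix lengths as DELTA_BINARY_PACKED and the value suffixes as DELTA_LENGTH_BYTE_ARRAY, and each value is rebuilt as the previous value's prefix plus its own suffix. A rough illustration of that reconstruction step (not this PR's code), with made-up inputs:

```java
import java.nio.charset.StandardCharsets;

public class DeltaByteArraySketch {

  // Rebuilds each value from its prefix length (shared with the previous value)
  // and its suffix bytes, which is where a DELTA_LENGTH_BYTE_ARRAY reader comes in.
  static byte[][] reconstruct(int[] prefixLengths, byte[][] suffixes) {
    byte[][] values = new byte[suffixes.length][];
    byte[] previous = new byte[0];
    for (int i = 0; i < suffixes.length; i++) {
      byte[] value = new byte[prefixLengths[i] + suffixes[i].length];
      System.arraycopy(previous, 0, value, 0, prefixLengths[i]);
      System.arraycopy(suffixes[i], 0, value, prefixLengths[i], suffixes[i].length);
      values[i] = value;
      previous = value;
    }
    return values;
  }

  public static void main(String[] args) {
    // "apple", "applied", "apply" -> prefix lengths [0, 4, 4], suffixes ["apple", "ied", "y"].
    int[] prefixLengths = {0, 4, 4};
    byte[][] suffixes = {
      "apple".getBytes(StandardCharsets.UTF_8),
      "ied".getBytes(StandardCharsets.UTF_8),
      "y".getBytes(StandardCharsets.UTF_8)
    };
    for (byte[] value : reconstruct(prefixLengths, suffixes)) {
      System.out.println(new String(value, StandardCharsets.UTF_8));
    }
  }
}
```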

@eric-maynard eric-maynard requested a review from huaxingao August 14, 2025 21:34
@wgtmac (Member) commented Aug 15, 2025

@danielcweeks There are already many Parquet implementations in the wild supporting these V2 encodings: https://parquet.apache.org/docs/file-format/implementationstatus/#encodings. Readers of that page may well assume that all of these encodings are formally supported, so I'm not sure a vote is still needed to finalize anything.

@eric-maynard eric-maynard marked this pull request as draft August 26, 2025 18:55
@github-actions bot commented:

This pull request has been marked as stale due to 30 days of inactivity. It will be closed in 1 week if no further activity occurs. If you think that’s incorrect or this pull request requires a review, please simply write any comment. If closed, you can revive the PR at any time and @mention a reviewer or discuss it on the [email protected] list. Thank you for your contributions.

@github-actions github-actions bot added the stale label Sep 26, 2025
@github-actions bot commented Oct 4, 2025

This pull request has been closed due to lack of activity. This is not a judgement on the merit of the PR in any way. It is just a way of keeping the PR queue manageable. If you think that is incorrect, or the pull request requires review, you can revive the PR at any time.
