
Add ParquetMetadataWriter allow ad-hoc encoding of ParquetMetadata #6000

Closed
wants to merge 6 commits

Conversation

@adriangb (Contributor) commented Jul 3, 2024

A step towards #5988, #6002

@github-actions bot added the parquet label (Changes to the parquet crate) on Jul 3, 2024
@adriangb changed the title from "Add function to mirror" to "Add encode_metadata function to mirror decode_metadata and allow ad-hoc encoding of ParquetMetadata" on Jul 3, 2024
@alamb (Contributor) left a comment:

Thanks @adriangb -- this PR looks good to me and I think we could proceed with this design.

I did file #6002 to track a potentially more flexible API that I think is worth considering. However, I think adding this API to mirror decode_metadata would also be fine (and we could add a more complex API later).


let encoded = encode_metadata(&metadata).unwrap();
let decoded = decode_metadata(&encoded).unwrap();
assert_eq!(
Review comment (Contributor):

Can you simply assert that encoded == decoded?
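
For reference, a minimal sketch of what that single assertion could look like, assuming the encode_metadata/decode_metadata signatures from the diff above and the PartialEq derive added later in this PR (an illustration, not code from the PR):

    let encoded = encode_metadata(&metadata).unwrap();
    let roundtripped = decode_metadata(&encoded).unwrap();
    // One structural comparison instead of field-by-field asserts.
    assert_eq!(metadata, roundtripped);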

{
assert_eq!(a, b);
}
// TODO: add encoding and decoding of column and offset indexes (aka page indexes)
Review comment (Contributor):

I agree that encoding/decoding of these structures doesn't have to be present in the initial PR. However, given that they are stored out of line / slightly differently from the other structures, I think it would be good to ensure we could encode them using this same API.

/// specified by the [Parquet Spec].
///
/// [Parquet Spec]: https://github.com/apache/parquet-format#metadata
pub fn encode_metadata(metadata: &ParquetMetaData) -> Result<Vec<u8>> {
Review comment (Contributor):

Is it possible to switch the existing writers to use this API as well? Not only would that avoid code duplication, it would also ensure the API is general enough.

For example, I wonder if it would make sense for this function signature to be more like

/// Write the metadata to the target `std::io::Write`, returning the number of bytes written
pub fn encode_metadata<W: Write>(metadata: &ParquetMetaData, writer: &mut W) -> Result<usize> {
    ...
}

That would allow writing into a Vec, but also into various other targets, and perhaps avoid buffering.
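
For illustration, a sketch of how such a writer-based signature could be used; the extra writer parameter and the returned byte count are assumptions based on the comment above, not an existing API:

    use std::fs::File;

    // Write into an in-memory buffer...
    let mut buf: Vec<u8> = Vec::new();
    let written = encode_metadata(&metadata, &mut buf)?;
    assert_eq!(written, buf.len());

    // ...or stream straight into any other `Write` target, such as a file,
    // without an intermediate allocation.
    let mut file = File::create("metadata.thrift")?;
    encode_metadata(&metadata, &mut file)?;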

@adriangb (Contributor, Author) commented Jul 4, 2024

@alamb I pushed a fluent-ish API version of this.

I got bogged down implementing the page index writing because there doesn't seem to be a clean path from a ParquetMetaData's PageLocation and Index to the thrift OffsetIndex and ColumnIndex. I think the issue is that the current writers never materialize a ParquetMetaData, so forcing them to do so might introduce unnecessary overhead. Maybe the path from a ParquetMetaData to bytes shouldn't be merged with the writers? But it may also be that I just couldn't come up with a good implementation, and with more trial and error, or with your help, we can get there.

I do think the readers could be merged.

For this encoder to make sense I think it should have an option to handle page indexes and have it enabled and working by default (like the writers do).

@adriangb (Contributor, Author) commented Jul 6, 2024

One thing I can do to avoid blocking on my lack of knowledge of page index encoding is to design the API first and implement it later. E.g. we could add .with_page_index(bool) and return an error if you set it to true, or if you don't set it at all, so that callers are forced to acknowledge that the future default will be true.
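
A hypothetical sketch of that shape (ParquetMetadataWriter, with_page_index, and finish are assumed names following the direction of this PR, not a settled API):

    // Callers must opt out explicitly; passing `true`, or never calling
    // `with_page_index`, would return an error until page index encoding is
    // implemented, so the future default of `true` is acknowledged up front.
    let bytes = ParquetMetadataWriter::new(&metadata)
        .with_page_index(false)
        .finish()?;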

@alamb (Contributor) commented Jul 8, 2024

Thanks @adriangb -- I will try and review this PR today

@alamb (Contributor) commented Jul 11, 2024

Working through the list of PRs in arrow-rs is on my list of things to do tomorrow

@alamb (Contributor) left a comment:

Thanks @adriangb -- this is looking like a good start

I think we should try to structure the code so that the existing writer uses this new MetadataEncoder, which would keep metadata writing consistent as well as enable use cases like encoding bloom filters, etc.

Let me know what you think.

cc @sunchao @tustvold @Jefffrey @liukun4515 @nevi-me for any thoughts you might have on this API / approach

@@ -86,7 +86,7 @@ pub type ParquetOffsetIndex = Vec<Vec<Vec<PageLocation>>>;
///
/// [`parquet.thrift`]: https://github.com/apache/parquet-format/blob/master/src/main/thrift/parquet.thrift
/// [`parse_metadata`]: crate::file::footer::parse_metadata
- #[derive(Debug, Clone)]
+ #[derive(Debug, Clone, PartialEq)]
Review comment (Contributor):

👍

let column_orders = encode_column_orders(metadata.file_metadata().column_orders());
let schema = types::to_thrift(&metadata.file_metadata().schema().clone())?;

let t_file_metadata = TFileMetaData {
Review comment (Contributor):

I noticed that this is not quite the same code as used in the actual writer (specifically, the way column order is handled is not the same), so I worry it would be inconsistent with, or drift over time from, the actual writer:

// We only include ColumnOrder for leaf nodes.
// Currently only supported ColumnOrder is TypeDefinedOrder so we set this
// for all leaf nodes.
// Even if the column has an undefined sort order, such as INTERVAL, this
// is still technically the defined TYPEORDER so it should still be set.
let column_orders = (0..self.schema_descr().num_columns())
.map(|_| parquet::ColumnOrder::TYPEORDER(parquet::TypeDefinedOrder {}))
.collect();
// This field is optional, perhaps in cases where no min/max fields are set
// in any Statistics or ColumnIndex object in the whole file.
// But for simplicity we always set this field.
let column_orders = Some(column_orders);
let file_metadata = parquet::FileMetaData {
num_rows,
row_groups,
key_value_metadata,
version: self.props.writer_version().as_num(),
schema: types::to_thrift(self.schema.as_ref())?,
created_by: Some(self.props.created_by().to_owned()),
column_orders,
encryption_algorithm: None,
footer_signing_key_metadata: None,
};

Thus, what I suggest we do here is change writer.rs to use the ParquetMetadataEncoder and refactor the code from there into this function. That would be a bit more involved, but I think it would set us up nicely so that metadata encoding remains consistent.

@adriangb (Contributor, Author) commented:

> I think we should try to structure the code so that the existing writer uses this new MetadataEncoder, which would keep metadata writing consistent as well as enable use cases like encoding bloom filters, etc.

I completely agree. That's just a much bigger chunk to bite off; I can give it a shot, but I may need support to get there.

@adriangb (Contributor, Author) commented:

I've made some progress. I made a (very rough) metadata writer that is used internally by SerializedFileWriter and can encode from a ParquetMetadata. My plan of attack from here:

  1. Implement reading of metadata without needing the entire file available. There's already MetadataLoader, as pointed out in #6002 (comment) (API for encoding/decoding ParquetMetadata with more control), but it wants to read metadata from an entire file, and I think it needs to be refactored so it can load metadata when that's all you have.
  2. Get feedback here on the APIs (they really aren't pretty).
  3. Add roundtrip tests.

@alamb (Contributor) left a comment:

I think this is looking quite nice @adriangb and I think we should try and proceed with this approach.

I think it would be easier to make progress if we can work on the approach incrementally as multiple smaller PRs rather than one large one (it will be easier for me to give you timely feedback)

Also, it is probably good to know of #5486 from @etseidl which could conflict as we change the metadata.

Also #5933 from @progval

Given we are now being careful about breaking changes (see https://github.com/apache/arrow-rs/blob/master/CONTRIBUTING.md#breaking-changes) I am worried that these PRs will interact / cause conflicts with each other

What do you think of this idea: #6050 ?

Some(self.props.created_by().to_string()),
self.props.writer_version().as_num(),
);
encoder.finish()
Review comment (Contributor):

👍


let mut row_groups = self
.row_groups
.as_slice()
.iter()
.map(|v| v.to_thrift())
.collect::<Vec<_>>();

self.write_bloom_filters(&mut row_groups)?;
Review comment (Contributor):

FWIW #5933 also contains changes for bloom filter writing

@@ -791,23 +710,274 @@ impl<'a, W: Write + Send> PageWriter for SerializedPageWriter<'a, W> {
}
}

struct ThriftMetadataWriter<'a, W: Write> {
Review comment (Contributor):

When reading the parquet code, I always get confused about which structures are the generated Thrift ones and which are the structures in https://docs.rs/parquet/latest/parquet/file/metadata/index.html

I like how you have split out writing of the thrift structures here from the writing of the parquet::file structures
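
Roughly, the split being described could be pictured like this (a sketch only: ThriftMetadataWriter is the struct from the diff, while the wrapper name and the exact fields are illustrative assumptions):

    use std::io::Write;
    use parquet::file::metadata::ParquetMetaData;
    use parquet::format::FileMetaData as TFileMetaData;

    // Lower layer: holds only thrift-generated types and knows how to
    // serialize them into the output stream.
    struct ThriftMetadataWriter<'a, W: Write> {
        sink: &'a mut W,
        file_metadata: TFileMetaData,
    }

    // Upper layer: converts the `parquet::file::metadata` types (row groups,
    // page indexes, ...) into thrift structures, then delegates the encoding.
    struct ParquetMetadataWriter<'a, W: Write> {
        sink: &'a mut W,
        metadata: &'a ParquetMetaData,
    }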

Ok(())
}

fn convert_column_indexes(&self) -> Vec<Vec<Option<ColumnIndex>>> {
Review comment (Contributor):

I was looking around for another copy of this code and I now see that this is the first time we are going from Index --> ColumnIndex

Makes sense to me. I think this type of structure could really help clean up some of the tests too (but I am getting ahead of myself)


let file_metadata = parquet::FileMetaData {
num_rows,
let encoder = ThriftMetadataWriter::new(
Review comment (Contributor):

This might read nicer like this:

let encoder = ThriftMetadataWriter::new()
  .with_schema(&self.schema)
  .with_descr(&self.descr)
  .with_row_groups(row_groups)
  ...;

// encode the data to buf
encoder.encode(&mut buf)

Though I realize many of these fields are required

Maybe something like

        let encoder = ThriftMetadataWriter::new(
            &self.schema,
            &self.descr,
             ...
        )
        .with_column_indexes(&self.column_indexes)
        .with_offset_indexes(&self.offset_indexes);
        encoder.encode(&mut buf)

if let Some(row_group_offset_indexes) = self.metadata.offset_index() {
(0..self.metadata.row_groups().len())
.map(|rg_idx| {
let column_indexes = &row_group_offset_indexes[rg_idx];
Review comment (Contributor):

Minor nit: could this be named offset_indexes?

Comment on lines 187 to 192
let null_counts = self
.indexes
.iter()
.map(|x| x.null_count())
.collect::<Option<Vec<_>>>()
.unwrap_or_else(|| vec![0; self.indexes.len()]);
Review comment (Contributor):

While merging with #5486, I noticed this. IIUC, if on read the optional thrift ColumnIndex::null_counts is not present, then PageIndex::null_count will be None. When converting back to a thrift ColumnIndex, it appears that this will convert the missing null_counts into a vector of num_pages zeros. I don't know if this is the correct behavior, mostly because the spec is (AFAICT) silent on the interpretation of a non-present null_counts. Is it not present as an optimization when there are no nulls, or is it not present due to a lack of information (say, a V1 encoder that doesn't keep null counts since the V1 page header doesn't require them)? Due to that ambiguity, I think null_counts here should be None if any or all of the PageIndex::null_count fields are None. Perhaps stop after the collect() and pass null_counts directly below.
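
A minimal sketch of that suggestion, reusing the names from the snippet above (the change itself is hypothetical, not code from the PR):

    // Stop after the `collect()`: `None` means at least one page had no null
    // count recorded, so the optional thrift `ColumnIndex::null_counts` field
    // is left unset instead of being filled with zeros.
    let null_counts = self
        .indexes
        .iter()
        .map(|x| x.null_count())
        .collect::<Option<Vec<_>>>();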

@alamb (Contributor) commented Jul 16, 2024

Update: I plan to make a 53-dev branch today so we can start getting this code merged and iterate on the API.

@alamb alamb changed the base branch from master to 53.0.0-dev July 16, 2024 22:56
@alamb (Contributor) commented Jul 16, 2024

Hi @adriangb -- I changed this PR to point at the 53.0.0-dev branch. I plan to give it a careful review tomorrow and then I am thinking we can merge it and iterate over the course of a few PRs

Again, I am really sorry for the delay in reviewing. I think this is a really important feature but I have been overwhelmed with reviews for the last week or two

@adriangb (Contributor, Author) commented:

Thank you @alamb! No need to apologize; you make such diverse and impactful contributions to open source that your time management is really quite inspiring. If anything, I need to apologize for lagging on applying feedback. I will go over this PR and incorporate feedback (hopefully before your review tomorrow).

@alamb (Contributor) left a comment:

Here is how I suggest we proceed with this PR:

  1. Let's create an example with the use case described in #6002 (comment) (API for encoding/decoding ParquetMetadata with more control); I will try to do this later today. I think this will motivate what the API should look like.
  2. In parallel, we could pull out some of the simple usability changes (like adding PartialEq and the pub use of the thrift types) into their own PR so we can merge that.

@alamb (Contributor) commented Jul 17, 2024

I started on a basic example here: #6081 -- tomorrow I'll try to find time to rebase it on this PR and see if I can do what is needed.

adriangb added a commit to adriangb/arrow-rs that referenced this pull request Jul 18, 2024
@adriangb changed the title from "Add encode_metadata function to mirror decode_metadata and allow ad-hoc encoding of ParquetMetadata" to "Add ParquetMetadataWriter allow ad-hoc encoding of ParquetMetadata" on Jul 18, 2024
@adriangb (Contributor, Author) commented:

I'm not sure why the test is failing (it was failing before; I don't think it's from a merge). Need to investigate.

@etseidl (Contributor) commented Jul 24, 2024

> I'm not sure why the test is failing (it was failing before; I don't think it's from a merge). Need to investigate.

I think you'll need to merge 53.0.0-dev again to pick up the latest changes to the offset index, and then reformat (some new names are longer and changed how the linter wants lines wrapped).

@adriangb adriangb force-pushed the add-encode_metadata branch 2 times, most recently from b38ccf7 to b41173f Compare July 24, 2024 19:28
@adriangb (Contributor, Author) commented:

I've updated the branch and cleaned up; the test is still failing. It seems the reading part is trying to access byte 0 of the file, which doesn't make sense and makes me think there's a bug somewhere (it could be in the test, since there's a lot of shim code in there): https://github.com/apache/arrow-rs/actions/runs/10082832978/job/27878006690?pr=6000#step:6:761


let data = buf.into_inner().freeze();

let decoded_metadata = load_metadata_from_bytes(metadata.file_size, data).await;
Review comment (Contributor):

Suggested change:
- let decoded_metadata = load_metadata_from_bytes(metadata.file_size, data).await;
+ let decoded_metadata = load_metadata_from_bytes(data.len(), data).await;

This will load the page indexes, but then the assert below fails because the offset_index_offset and column_index_offset fields of the column chunk are different. Might have to write an equals that accounts for that.

Review comment (Contributor, Author):

Yep, thank you.

I could write a custom eq but... that is going to be a pain.

I also need to think about the implications of these things not matching up. If I understand correctly, those are the offsets to the page index data from the start of the file (or from where?), and because we loaded the metadata from only a portion of the file they got re-calculated differently?

Review comment (Contributor):

> I could write a custom eq but... that is going to be a pain.

True. Perhaps just compare parts of the metadata rather than the whole thing. Let me see if I can whip something up quickly and submit a PR to your branch.

> I also need to think about the implications of these things not matching up. If I understand correctly, those are the offsets to the page index data from the start of the file (or from where?), and because we loaded the metadata from only a portion of the file they got re-calculated differently?

Well, you've written the page indexes to a new file (well, buffer), so those offsets point to their location in the new file as opposed to the old one. Looking at the test output, the column index is at offset 0 with length 23, and the offset index is at offset 23 with length 10. Assuming there's no 'PAR1' header on the new file, those offsets seem correct.
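
One possible shape for the "compare parts of the metadata" idea (the helper name and the chosen fields are illustrative assumptions, not the test that was eventually added): compare the pieces that should round-trip exactly and skip the file-relative index offsets.

    use parquet::file::metadata::ParquetMetaData;

    fn assert_metadata_equivalent(original: &ParquetMetaData, decoded: &ParquetMetaData) {
        let (a, b) = (original.file_metadata(), decoded.file_metadata());
        assert_eq!(a.schema(), b.schema());
        assert_eq!(a.num_rows(), b.num_rows());
        assert_eq!(original.num_row_groups(), decoded.num_row_groups());
        for (rg_a, rg_b) in original.row_groups().iter().zip(decoded.row_groups()) {
            // Row group shape should match even though `column_index_offset` /
            // `offset_index_offset` in each column chunk point into different files.
            assert_eq!(rg_a.num_rows(), rg_b.num_rows());
            assert_eq!(rg_a.num_columns(), rg_b.num_columns());
        }
    }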

@alamb alamb deleted the branch apache:53.0.0-dev July 26, 2024 10:11
@alamb alamb closed this Jul 26, 2024
@alamb (Contributor) commented Jul 26, 2024

I merged the 53 dev branch and that seems to have closed this PR -- any chance you can retarget main?

Update: I restored the branch

@alamb (Contributor) commented Jul 26, 2024

I wrote up some thoughts that were floating in my head in #6129

I am hoping to spend some more time today looking at this PR in detail.

Thank you again for your patience

@alamb (Contributor) commented Jul 29, 2024

I know I keep leading you along with comments on this PR. I really believe this is an important PR / API to work on, but at the moment I don't have enough bandwidth to help drive this forward, break it down into smaller pieces, and make sure they all fit together. Maybe @etseidl can figure out how to make it happen.

In any event I hope to have more time to devote here by the end of August

@etseidl (Contributor) commented Jul 31, 2024

@adriangb I submitted a PR to fix the failing parquet and clippy tests.

The "check compilation" test failure is beyond me. The "--no-default-features" flag causes some dependencies to not be loaded. Getting that one fixed requires some cargo mojo that I lack :(

Add test for metadata equivalence
@etseidl (Contributor) commented Aug 1, 2024

@adriangb just wanted to warn you not to try merging with master just yet. A PR that merged to 53.0.0-dev after 53.0.0-dev was merged to master (but before 53.0.0-dev was merged to this branch) is present here. If you merge with master now, that PR will be lost. Hopefully this will be corrected soon 😅.

Ok, master was fixed, but then 53.0.0-dev was closed again, which closed this PR. I've submitted a new PR to your branch that merges with master and fixes the conflicts. If you approve, then this PR can be reopened with master as its base branch. Sorry for all the confusion!

@alamb alamb deleted the branch apache:53.0.0-dev August 1, 2024 10:57
@alamb alamb closed this Aug 1, 2024
@adriangb (Contributor, Author) commented Aug 5, 2024

@etseidl looks like this was closed because the target branch was deleted. My first thought is that I should retarget this at master/main and rebase it on that but I guess that's what you're warning me not to do above? Not sure what the next steps are here then.

@etseidl (Contributor) commented Aug 5, 2024

> My first thought is that I should retarget this at master/main and rebase it on that but I guess that's what you're warning me not to do above? Not sure what the next steps are here then.

If you retarget master, then there will be merge conflicts around the new histogram stuff I added. Those are pretty easy to clear (you can look at what I did in adriangb#3). Then I think this will be ready for a final review! Thanks!

This pull request was closed.
Labels: parquet (Changes to the parquet crate)
3 participants