# ensure null-counts are written for all-null columns #307
```diff
@@ -607,9 +607,11 @@ impl<T: DataType> ColumnWriterImpl<T> {
         let max_def_level = self.descr.max_def_level();
         let max_rep_level = self.descr.max_rep_level();
 
+        // always update column NULL count, no matter if page stats are used
+        self.num_column_nulls += self.num_page_nulls;
+
```
**Member:** hmm, just curious why we don't need to update:

```rust
for &level in levels {
    if level == self.descr.max_def_level() {
        values_to_write += 1;
    } else if calculate_page_stats {
        self.num_page_nulls += 1
    }
}
```
**Contributor (Author):** There are two variables here: `num_page_nulls`, which counts the nulls within the current page, and `num_column_nulls`, which accumulates the per-page counts for the whole column chunk.
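To make that distinction concrete, here is a minimal sketch of the two-counter pattern (illustrative only: the struct and method names are invented, while the counter names and the definition-level check mirror the code under review):

```rust
/// Illustrative sketch, not the real ColumnWriterImpl: shows how per-page
/// null counts roll up into a per-column total.
struct NullCounts {
    num_page_nulls: u64,   // nulls seen in the page currently being built
    num_column_nulls: u64, // running total for the whole column chunk
}

impl NullCounts {
    /// Count nulls while consuming a batch of definition levels.
    fn observe_levels(&mut self, levels: &[i16], max_def_level: i16) {
        for &level in levels {
            if level < max_def_level {
                self.num_page_nulls += 1;
            }
        }
    }

    /// On page flush, fold the page count into the column count and reset
    /// the per-page counter. The fix in this PR makes the fold happen
    /// unconditionally instead of only when page statistics are enabled.
    fn flush_page(&mut self) {
        self.num_column_nulls += self.num_page_nulls;
        self.num_page_nulls = 0;
    }
}
```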
**Member:** (Sorry for the slow response.) This is the part I don't quite understand.
**Contributor (Author):** I'm not sure I follow:
**Member:** I meant the second:
**Contributor (Author):** I've just checked:

```rust
#[test]
fn statistics_null_counts_big_mixed() {
    // check that null-count statistics work for larger data sizes
    let len = 1_000_000u64;
    let data: Vec<_> = (0..len)
        .map(|x| if x % 2 == 0 { Some(x) } else { None })
        .collect();
    let values = Arc::new(UInt64Array::from(data));
    let file = one_column_roundtrip("null_counts_big_mixed", values, true);

    // check statistics are valid
    let reader = SerializedFileReader::new(file).unwrap();
    let metadata = reader.metadata();
    assert_eq!(metadata.num_row_groups(), 1);

    let row_group = metadata.row_group(0);
    assert_eq!(row_group.num_columns(), 1);

    let column = row_group.column(0);
    let stats = column.statistics().unwrap();
    assert_eq!(stats.null_count(), len / 2);
}
```

This triggers the
**Member:** Oops, you're right. I misread
```diff
         let page_statistics = if calculate_page_stat {
             self.update_column_min_max();
-            self.num_column_nulls += self.num_page_nulls;
             Some(self.make_page_statistics())
         } else {
             None
```
**Contributor (Author):** I am not entirely happy with this test, since it requires an Arrow data structure to actually test a parquet part. It would probably be nice to have a more "pure" test as well; however, I wasn't able to formulate an easy one.
**Member:** I would say this looks more like an "integration" test than a "unit" test (in the sense that you connect some higher-level APIs and ensure the output is reasonable). I personally don't see any problems with this approach.

To remove the dependency on Arrow you would probably have to use the `SerializedFileWriter` API directly: https://docs.rs/parquet/4.0.0/parquet/file/writer/trait.FileWriter.html. But I think that would end up being quite a bit more code.
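For reference, a rough sketch of what such a pure-parquet setup might look like against the parquet 4.x `FileWriter` API (a hedged illustration, not code from this PR: the schema string, file name, and values are invented; the writer calls follow the documented `SerializedFileWriter`/`RowGroupWriter` trait methods):

```rust
use std::{fs::File, sync::Arc};

use parquet::{
    column::writer::ColumnWriter,
    file::{
        properties::WriterProperties,
        writer::{FileWriter, RowGroupWriter, SerializedFileWriter},
    },
    schema::parser::parse_message_type,
};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // One OPTIONAL column, so definition levels (and hence null counts) apply.
    let schema = Arc::new(parse_message_type(
        "message schema { OPTIONAL INT64 value; }",
    )?);
    let props = Arc::new(WriterProperties::builder().build());
    let file = File::create("null_counts.parquet")?;

    let mut writer = SerializedFileWriter::new(file, schema, props)?;
    let mut row_group = writer.next_row_group()?;
    while let Some(mut col_writer) = row_group.next_column()? {
        if let ColumnWriter::Int64ColumnWriter(ref mut typed) = col_writer {
            // Two non-null values and two nulls:
            // definition level 1 = value present, 0 = null.
            let def_levels: Vec<i16> = vec![1, 0, 1, 0];
            typed.write_batch(&[1, 2], Some(&def_levels), None)?;
        }
        row_group.close_column(col_writer)?;
    }
    writer.close_row_group(row_group)?;
    writer.close()?;
    Ok(())
}
```

Here the definition levels are supplied explicitly to `write_batch`, which is exactly where the null counts under discussion originate; the cost is that schema construction and column iteration must all be spelled out by hand.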