Add Parquet Row Group Bloom Filter Support #4831
Changes from all commits
e814287
3b985f4
f0e6aa1
9c6c64f
0520fb4
893e7b0
7613b9c
1fabb2d
@@ -49,7 +49,9 @@ Iceberg tables support table properties to configure table behavior, like the de
 | write.parquet.page-size-bytes | 1048576 (1 MB) | Parquet page size |
 | write.parquet.dict-size-bytes | 2097152 (2 MB) | Parquet dictionary page size |
 | write.parquet.compression-codec | gzip | Parquet compression codec: zstd, brotli, lz4, gzip, snappy, uncompressed |
 | write.parquet.compression-level | null | Parquet compression level |
+| write.parquet.bloom-filter-enabled.column.col1 | (not set) | Enables writing a bloom filter for the column |
+| write.parquet.bloom-filter-max-bytes | 1048576 (1 MB) | The maximum number of bytes for a bloom filter bitset |

Contributor
What is the behavior of this? If the NDV requires a size that is too large, does it skip writing the bloom filter?

Contributor
Author
If the NDV requires a size that is too large, Parquet still writes the bloom filter, using the max bytes set by this property rather than the bitset size calculated from the NDV.

Contributor
I guess there probably isn't much we can do about this, although that behavior makes no sense to me. Is it possible to set the expected false positive probability anywhere? Or is that hard-coded in the Parquet library?

Contributor
Author
There isn't a property to set the fpp in Parquet.

Contributor
What fpp is used by Parquet?

Contributor
Author
Parquet uses 0.01 for the fpp; see the sizing sketch after the table below.

 | write.avro.compression-codec | gzip | Avro compression codec: gzip(deflate with 9 level), zstd, snappy, uncompressed |
 | write.avro.compression-level | null | Avro compression level |
 | write.orc.stripe-size-bytes | 67108864 (64 MB) | Define the default ORC stripe size, in bytes |
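
Regarding the fpp thread above: a minimal sketch of the standard bloom filter sizing formula, for intuition only. This is not Parquet's exact implementation (Parquet's BlockSplitBloomFilter rounds sizes and hard-codes fpp = 0.01); the class and method names here are illustrative:

public class BloomSizingSketch {
  // Parquet hard-codes a false positive probability of 0.01.
  static final double FPP = 0.01;

  // Standard sizing formula: bits = -ndv * ln(fpp) / (ln 2)^2, capped at maxBytes.
  static long bloomFilterBytes(long ndv, double fpp, long maxBytes) {
    double bits = -ndv * Math.log(fpp) / (Math.log(2) * Math.log(2));
    long bytes = (long) Math.ceil(bits / 8.0);
    // Per the thread above: when the NDV-derived size exceeds the cap, the
    // filter is still written, just truncated to maxBytes.
    return Math.min(bytes, maxBytes);
  }

  public static void main(String[] args) {
    // ~1M distinct values at fpp 0.01 needs roughly 1.2 MB of bitset, so the
    // 1 MB default cap kicks in and this prints 1048576.
    System.out.println(bloomFilterBytes(1_000_000, FPP, 1_048_576));
  }
}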
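
As a usage illustration, the new properties are set like any other Iceberg table property, for example through the UpdateProperties API (the column name device_id below is hypothetical):

import org.apache.iceberg.Table;

class EnableBloomFilterExample {
  // Assumes an already-loaded Table handle.
  static void enableBloomFilter(Table table) {
    table.updateProperties()
        .set("write.parquet.bloom-filter-enabled.column.device_id", "true")
        .set("write.parquet.bloom-filter-max-bytes", "1048576") // 1 MB bitset cap
        .commit();
  }
}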
@@ -85,6 +85,9 @@
 import static org.apache.iceberg.TableProperties.DELETE_PARQUET_ROW_GROUP_CHECK_MAX_RECORD_COUNT;
 import static org.apache.iceberg.TableProperties.DELETE_PARQUET_ROW_GROUP_CHECK_MIN_RECORD_COUNT;
 import static org.apache.iceberg.TableProperties.DELETE_PARQUET_ROW_GROUP_SIZE_BYTES;
+import static org.apache.iceberg.TableProperties.PARQUET_BLOOM_FILTER_COLUMN_ENABLED_PREFIX;
+import static org.apache.iceberg.TableProperties.PARQUET_BLOOM_FILTER_MAX_BYTES;
+import static org.apache.iceberg.TableProperties.PARQUET_BLOOM_FILTER_MAX_BYTES_DEFAULT;
 import static org.apache.iceberg.TableProperties.PARQUET_COMPRESSION;
 import static org.apache.iceberg.TableProperties.PARQUET_COMPRESSION_DEFAULT;
 import static org.apache.iceberg.TableProperties.PARQUET_COMPRESSION_LEVEL;
@@ -239,6 +242,8 @@ public <D> FileAppender<D> build() throws IOException {
   CompressionCodecName codec = context.codec();
   int rowGroupCheckMinRecordCount = context.rowGroupCheckMinRecordCount();
   int rowGroupCheckMaxRecordCount = context.rowGroupCheckMaxRecordCount();
+  int bloomFilterMaxBytes = context.bloomFilterMaxBytes();
+  Map<String, String> columnBloomFilterEnabled = context.columnBloomFilterEnabled();

   if (compressionLevel != null) {
     switch (codec) {
@@ -269,19 +274,27 @@ public <D> FileAppender<D> build() throws IOException {
     conf.set(entry.getKey(), entry.getValue());
   }

-  ParquetProperties parquetProperties = ParquetProperties.builder()
+  ParquetProperties.Builder propsBuilder = ParquetProperties.builder()
       .withWriterVersion(writerVersion)
       .withPageSize(pageSize)
       .withDictionaryPageSize(dictionaryPageSize)
       .withMinRowCountForPageSizeCheck(rowGroupCheckMinRecordCount)
       .withMaxRowCountForPageSizeCheck(rowGroupCheckMaxRecordCount)
-      .build();
+      .withMaxBloomFilterBytes(bloomFilterMaxBytes);
+
+  for (Map.Entry<String, String> entry : columnBloomFilterEnabled.entrySet()) {
+    String colPath = entry.getKey();
+    String bloomEnabled = entry.getValue();
+    propsBuilder.withBloomFilterEnabled(colPath, Boolean.valueOf(bloomEnabled));
+  }
+
+  ParquetProperties parquetProperties = propsBuilder.build();

   return new org.apache.iceberg.parquet.ParquetWriter<>(
       conf, file, schema, rowGroupSize, metadata, createWriterFunc, codec,
       parquetProperties, metricsConfig, writeMode);
 } else {
-  return new ParquetWriteAdapter<>(new ParquetWriteBuilder<D>(ParquetIO.file(file))
+  ParquetWriteBuilder<D> parquetWriteBuilder = new ParquetWriteBuilder<D>(ParquetIO.file(file))
       .withWriterVersion(writerVersion)
       .setType(type)
       .setConfig(config)
@@ -291,12 +304,32 @@ public <D> FileAppender<D> build() throws IOException {
       .withWriteMode(writeMode)
       .withRowGroupSize(rowGroupSize)
       .withPageSize(pageSize)
-      .withDictionaryPageSize(dictionaryPageSize)
-      .build(),
+      .withDictionaryPageSize(dictionaryPageSize);
+
+  for (Map.Entry<String, String> entry : columnBloomFilterEnabled.entrySet()) {
+    String colPath = entry.getKey();
+    String bloomEnabled = entry.getValue();
+    parquetWriteBuilder.withBloomFilterEnabled(colPath, Boolean.valueOf(bloomEnabled));
+  }
+
+  return new ParquetWriteAdapter<>(

Contributor
@aokolnychyi, what do you think about removing the old

+      parquetWriteBuilder.build(),
       metricsConfig);
   }
 }

+  private static Map<String, String> bloomColumnConfigMap(String prefix, Map<String, String> config) {
+    Map<String, String> columnBloomFilterConfig = Maps.newHashMap();
+    config.keySet().stream()
+        .filter(key -> key.startsWith(prefix))
+        .forEach(key -> {
+          String columnPath = key.replaceFirst(prefix, "");

Contributor
Since this uses the column name in the config, is there any logic to update these configs when columns are renamed?

Contributor
Author
Good question. I don't have logic to update the configs when columns are renamed, but I think we are OK. On the write path, these configs are only used to write bloom filters at file creation time; they are not used for reads. On the read path, the bloom filters are loaded by field id instead of column name, so even if columns are renamed after the bloom filters have been written, the filters can still be loaded as long as the ids stay the same.

Contributor
We're okay for adding read support, but we should consider how to configure write support then.

+          String bloomFilterMode = config.get(key);
+          columnBloomFilterConfig.put(columnPath, bloomFilterMode);
+        });
+    return columnBloomFilterConfig;
+  }

   private static class Context {
     private final int rowGroupSize;
     private final int pageSize;
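
For illustration, a self-contained sketch of what the bloomColumnConfigMap helper above produces, using java.util.HashMap in place of Iceberg's Maps helper and hypothetical property values (substring stands in for the replaceFirst call):

import java.util.HashMap;
import java.util.Map;

class BloomConfigMapDemo {
  static Map<String, String> bloomColumnConfigMap(String prefix, Map<String, String> config) {
    Map<String, String> result = new HashMap<>();
    config.forEach((key, value) -> {
      if (key.startsWith(prefix)) {
        // Strip the property prefix, leaving just the column path.
        result.put(key.substring(prefix.length()), value);
      }
    });
    return result;
  }

  public static void main(String[] args) {
    Map<String, String> config = new HashMap<>();
    config.put("write.parquet.bloom-filter-enabled.column.col1", "true");
    config.put("write.parquet.compression-codec", "zstd");
    // Prints {col1=true}; properties without the prefix are ignored.
    System.out.println(bloomColumnConfigMap("write.parquet.bloom-filter-enabled.column.", config));
  }
}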
@@ -305,17 +338,23 @@ private static class Context {
     private final String compressionLevel;
     private final int rowGroupCheckMinRecordCount;
     private final int rowGroupCheckMaxRecordCount;
+    private final int bloomFilterMaxBytes;
+    private final Map<String, String> columnBloomFilterEnabled;

     private Context(int rowGroupSize, int pageSize, int dictionaryPageSize,
                     CompressionCodecName codec, String compressionLevel,
-                    int rowGroupCheckMinRecordCount, int rowGroupCheckMaxRecordCount) {
+                    int rowGroupCheckMinRecordCount, int rowGroupCheckMaxRecordCount,
+                    int bloomFilterMaxBytes,
+                    Map<String, String> columnBloomFilterEnabled) {
       this.rowGroupSize = rowGroupSize;
       this.pageSize = pageSize;
       this.dictionaryPageSize = dictionaryPageSize;
       this.codec = codec;
       this.compressionLevel = compressionLevel;
       this.rowGroupCheckMinRecordCount = rowGroupCheckMinRecordCount;
       this.rowGroupCheckMaxRecordCount = rowGroupCheckMaxRecordCount;
+      this.bloomFilterMaxBytes = bloomFilterMaxBytes;
+      this.columnBloomFilterEnabled = columnBloomFilterEnabled;
     }

     static Context dataContext(Map<String, String> config) {
@@ -348,8 +387,16 @@ static Context dataContext(Map<String, String> config) {
       Preconditions.checkArgument(rowGroupCheckMaxRecordCount >= rowGroupCheckMinRecordCount,
           "Row group check maximum record count must be >= minimal record count");

+      int bloomFilterMaxBytes = PropertyUtil.propertyAsInt(config, PARQUET_BLOOM_FILTER_MAX_BYTES,
+          PARQUET_BLOOM_FILTER_MAX_BYTES_DEFAULT);
+      Preconditions.checkArgument(bloomFilterMaxBytes > 0, "Bloom filter max bytes must be > 0");
+
+      Map<String, String> columnBloomFilterEnabled =
+          bloomColumnConfigMap(PARQUET_BLOOM_FILTER_COLUMN_ENABLED_PREFIX, config);
+
       return new Context(rowGroupSize, pageSize, dictionaryPageSize, codec, compressionLevel,
-          rowGroupCheckMinRecordCount, rowGroupCheckMaxRecordCount);
+          rowGroupCheckMinRecordCount, rowGroupCheckMaxRecordCount, bloomFilterMaxBytes,
+          columnBloomFilterEnabled);
     }

     static Context deleteContext(Map<String, String> config) {
@@ -385,8 +432,16 @@ static Context deleteContext(Map<String, String> config) {
       Preconditions.checkArgument(rowGroupCheckMaxRecordCount >= rowGroupCheckMinRecordCount,
           "Row group check maximum record count must be >= minimal record count");

+      int bloomFilterMaxBytes = PropertyUtil.propertyAsInt(config, PARQUET_BLOOM_FILTER_MAX_BYTES,
+          PARQUET_BLOOM_FILTER_MAX_BYTES_DEFAULT);
+      Preconditions.checkArgument(bloomFilterMaxBytes > 0, "Bloom filter max bytes must be > 0");
+
+      Map<String, String> columnBloomFilterEnabled =
+          bloomColumnConfigMap(PARQUET_BLOOM_FILTER_COLUMN_ENABLED_PREFIX, config);
+
       return new Context(rowGroupSize, pageSize, dictionaryPageSize, codec, compressionLevel,
-          rowGroupCheckMinRecordCount, rowGroupCheckMaxRecordCount);
+          rowGroupCheckMinRecordCount, rowGroupCheckMaxRecordCount, bloomFilterMaxBytes,
+          columnBloomFilterEnabled);
     }

     private static CompressionCodecName toCodec(String codecAsString) {
@@ -424,6 +479,14 @@ int rowGroupCheckMinRecordCount() {
     int rowGroupCheckMaxRecordCount() {
       return rowGroupCheckMaxRecordCount;
     }

+    int bloomFilterMaxBytes() {
+      return bloomFilterMaxBytes;
+    }
+
+    Map<String, String> columnBloomFilterEnabled() {
+      return columnBloomFilterEnabled;
+    }
   }
 }
@@ -903,12 +966,14 @@ public <D> CloseableIterable<D> build() {
     Schema fileSchema = ParquetSchemaUtil.convert(type);
     builder.useStatsFilter()
         .useDictionaryFilter()
+        .useBloomFilter()
         .useRecordFilter(filterRecords)
         .withFilter(ParquetFilters.convert(fileSchema, filter, caseSensitive));
   } else {
     // turn off filtering
     builder.useStatsFilter(false)
         .useDictionaryFilter(false)
+        .useBloomFilter(false)
         .useRecordFilter(false);
   }
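
For context, a hedged sketch of a read that exercises this path. Supplying a filter expression to the read builder is what enables the stats, dictionary, and now bloom filter row-group filters above; the column name, value, and generic reader function here are illustrative, not part of this PR:

import org.apache.iceberg.Schema;
import org.apache.iceberg.data.Record;
import org.apache.iceberg.data.parquet.GenericParquetReaders;
import org.apache.iceberg.expressions.Expressions;
import org.apache.iceberg.io.CloseableIterable;
import org.apache.iceberg.io.InputFile;
import org.apache.iceberg.parquet.Parquet;

class BloomFilteredReadSketch {
  static CloseableIterable<Record> read(InputFile file, Schema schema) {
    return Parquet.read(file)
        .project(schema)
        // An equality predicate on a bloom-filtered column lets whole row
        // groups be skipped when the filter proves the value is absent.
        .filter(Expressions.equal("device_id", "abc-123"))
        .createReaderFunc(fileSchema -> GenericParquetReaders.buildReader(schema, fileSchema))
        .build();
  }
}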

Contributor
I don't think it is a good idea to add a second boolean argument to this method. It is confusing enough with just one. How about using a different method name for this, and then renaming this one to be a private internal implementation?

Contributor
Author
I changed this method name to exprReferences (please let me know if you have a better name), but I still need to keep it public because I need to access this method from ParquetBloomRowGroupFilter, which is in a different package.

Contributor
Maybe just references, since it doesn't bind?

Contributor
Author
Sounds good. Will change.
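
To make the outcome concrete, a hypothetical sketch of the kind of public, non-binding references helper discussed here; the class name and visitor body are illustrative assumptions, not the PR's actual implementation:

import java.util.HashSet;
import java.util.Set;
import org.apache.iceberg.expressions.Expression;
import org.apache.iceberg.expressions.ExpressionVisitors;
import org.apache.iceberg.expressions.UnboundPredicate;

class ReferencesSketch {
  // Collects the column names referenced by an unbound expression without
  // binding it to a schema, so callers like ParquetBloomRowGroupFilter can
  // ask which columns a filter touches.
  static Set<String> references(Expression expr) {
    Set<String> names = new HashSet<>();
    ExpressionVisitors.visit(expr, new ExpressionVisitors.ExpressionVisitor<Void>() {
      @Override
      public <T> Void predicate(UnboundPredicate<T> pred) {
        names.add(pred.ref().name());
        return null;
      }
    });
    return names;
  }
}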