PARQUET-1879 MapKeyValue is not a valid Logical Type #798
Conversation
Force-pushed from 6a0bcf2 to e3d075a
gszadovszky left a comment
Thanks for working on this.
You have changed every name from "map" to "key_value" in the tests. That is fine for the expected data, but we should keep testing the legacy "map" name on the read path as well; based on the spec it is still acceptable.
I am not an expert in this topic, so I would be happy if someone else could review this as well.
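For context, a hedged sketch of the two shapes being discussed, written in the parquet schema language (field names are illustrative; per the spec's backward-compatibility rules, readers should not rely on the name of the repeated group):

```
// Modern form: the repeated group is conventionally named key_value
// and carries no annotation of its own.
optional group my_map (MAP) {
  repeated group key_value {
    required binary key (UTF8);
    optional int32 value;
  }
}

// Legacy form found in older files: the repeated group may be named
// map and annotated with MAP_KEY_VALUE.
optional group my_map (MAP) {
  repeated group map (MAP_KEY_VALUE) {
    required binary key (UTF8);
    optional int32 value;
  }
}
```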
Resolved review threads (outdated):
- parquet-column/src/main/java/org/apache/parquet/schema/Types.java
- ...t-hadoop/src/test/java/org/apache/parquet/format/converter/TestParquetMetadataConverter.java
* Writing the UNKNOWN logical type into the schema causes a breakage when parsing the file with Apache Arrow
* Instead, use the default of falling back to null when that backwards-compatibility-only logical type is present, but still write the original type
Force-pushed from e3d075a to cc9cc5f
@gszadovszky no problem. I have tried to add a test to verify the backwards-compatible reading, and added TestReadWriteMapKeyValue to the commit. I am not sure if this is the correct way, but I parse one schema with key_value and no MAP_KEY_VALUE type to exercise the logical-type path, then another with map and the MAP_KEY_VALUE type. From what I can tell, the name is not actually verified anywhere (I tried with a random name value too :) ), but both test paths are successful. Hopefully it's OK, but let me know if I might need to go a bit deeper somewhere else.
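Roughly what that test exercises, as a hedged sketch (the schema strings and class name are illustrative, not the exact test code):

```java
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class TestReadWriteMapKeyValueSketch {
  public static void main(String[] args) {
    // New-style schema: repeated group named key_value, no MAP_KEY_VALUE.
    MessageType newStyle = MessageTypeParser.parseMessageType(
        "message doc {\n"
            + "  optional group my_map (MAP) {\n"
            + "    repeated group key_value {\n"
            + "      required binary key (UTF8);\n"
            + "      optional int32 value;\n"
            + "    }\n"
            + "  }\n"
            + "}");

    // Old-style schema: repeated group named map, annotated MAP_KEY_VALUE.
    MessageType oldStyle = MessageTypeParser.parseMessageType(
        "message doc {\n"
            + "  optional group my_map (MAP) {\n"
            + "    repeated group map (MAP_KEY_VALUE) {\n"
            + "      required binary key (UTF8);\n"
            + "      optional int32 value;\n"
            + "    }\n"
            + "  }\n"
            + "}");

    // Both parse successfully; the name of the repeated group is not
    // validated, matching the observation above.
    System.out.println(newStyle);
    System.out.println(oldStyle);
  }
}
```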
gszadovszky left a comment
Thank you for creating the backward compatibility test for Map. It should have existed already.
Unfortunately, this way you do not properly test backward compatibility. The problem is that you cannot generate an "old" file with the "new" library. To be more precise, the message parser is more for convenience and is not used while reading or writing a parquet file. When you say you are testing the converted type, it is not really true, because the parser tries to read logical types in the first place. Also, the parquet writer writes both logical types and converted types, so you cannot validate old files that have only converted types.
I would suggest adding tests that cover the examples in the spec by creating the thrift-generated format objects and converting them with ParquetMetadataConverter, just like you did in TestParquetMetadataConverter.testMapLogicalType. These tests might fit better in that class as well.
I should have described this before you implemented this test. I am sorry about that.
Please don't force push your changes, because it makes it harder to track the review. The committer will squash the PR before merging it anyway.
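For illustration, a hedged sketch of the approach suggested above, using the thrift-generated classes from org.apache.parquet.format (names are illustrative, and method signatures, in particular fromParquetSchema, may differ between parquet-mr versions):

```java
import java.util.Arrays;
import java.util.List;

import org.apache.parquet.format.ConvertedType;
import org.apache.parquet.format.FieldRepetitionType;
import org.apache.parquet.format.SchemaElement;
import org.apache.parquet.format.Type;
import org.apache.parquet.format.converter.ParquetMetadataConverter;
import org.apache.parquet.schema.MessageType;

public class OldFileMapSchemaSketch {

  // Build the flat, depth-first SchemaElement list that an "old" writer
  // would have put into the footer: converted types only, no logical types.
  static List<SchemaElement> oldStyleMapSchema() {
    SchemaElement root = new SchemaElement("schema");
    root.setNum_children(1);

    SchemaElement map = new SchemaElement("my_map"); // illustrative name
    map.setRepetition_type(FieldRepetitionType.OPTIONAL);
    map.setConverted_type(ConvertedType.MAP);
    map.setNum_children(1);

    SchemaElement keyValue = new SchemaElement("map"); // legacy group name
    keyValue.setRepetition_type(FieldRepetitionType.REPEATED);
    keyValue.setConverted_type(ConvertedType.MAP_KEY_VALUE); // legacy annotation
    keyValue.setNum_children(2);

    SchemaElement key = new SchemaElement("key");
    key.setRepetition_type(FieldRepetitionType.REQUIRED);
    key.setType(Type.BYTE_ARRAY);
    key.setConverted_type(ConvertedType.UTF8);

    SchemaElement value = new SchemaElement("value");
    value.setRepetition_type(FieldRepetitionType.OPTIONAL);
    value.setType(Type.INT32);

    return Arrays.asList(root, map, keyValue, key, value);
  }

  public static void main(String[] args) {
    // Converting the raw thrift elements exercises the real read path,
    // independently of the message parser.
    MessageType schema =
        new ParquetMetadataConverter().fromParquetSchema(oldStyleMapSchema(), null);
    System.out.println(schema);
  }
}
```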
Force-pushed from da1d657 to bbd7d65
Apologies for the force push. Good to know that it gets squashed on commit. And thanks for the detailed reply. I think I got it this time :) The tests were moved into TestParquetMetadataConverter, and for the old-format test the metadata is built through Thrift SchemaElements. Regards,
Force-pushed from bbd7d65 to 7eddaec
gszadovszky left a comment
Thanks a lot. It looks good to me.
Let me wait a couple of days in case anyone else would like to review. (I hope so.)
@gszadovszky before this gets merged, I just wanted to clarify something myself after looking more into the format spec, which might tidy this issue up further.
From what I could see in some older issues, such as PARQUET-335, and in the backwards-compatibility rules, MAP_KEY_VALUE seems to have always been an optional type, and it was also used incorrectly in the past. It appears that older versions of Parquet were able to read the Map type in the schema without MAP_KEY_VALUE. If that is true, I would probably suggest pushing an additional commit that I tested onto this PR. It would mean that any unexpected uses of LogicalType.MAP_KEY_VALUE would result in UNKNOWN being written to the file, but it would be removed from the ConversionPatterns path, meaning that my case of this occurring when converting an Avro schema is still fixed, and tested. Let me know if you believe this might be the preferred fix, or if what I have already done is better. From what I can see, it all depends on whether the MAP_KEY_VALUE type is required as an Original Type or is OK being null for older readers? Thanks
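For reference, a hedged sketch of the Avro-conversion case being described, using parquet-column's ConversionPatterns (the exact mapType signature may differ between versions, and the field names are illustrative):

```java
import org.apache.parquet.schema.ConversionPatterns;
import org.apache.parquet.schema.GroupType;
import org.apache.parquet.schema.OriginalType;
import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName;
import org.apache.parquet.schema.Type.Repetition;
import org.apache.parquet.schema.Types;

public class ConversionPatternsSketch {
  public static void main(String[] args) {
    // Before the fix, a map built through ConversionPatterns carried
    // MAP_KEY_VALUE on the repeated group, which the metadata converter
    // then surfaced as LogicalType.UNKNOWN in the footer, breaking Arrow.
    GroupType map = ConversionPatterns.mapType(
        Repetition.OPTIONAL,
        "my_map", // illustrative field name
        Types.required(PrimitiveTypeName.BINARY)
            .as(OriginalType.UTF8)
            .named("key"),
        Types.optional(PrimitiveTypeName.INT32)
            .named("value"));
    System.out.println(map);
  }
}
```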
The main problem, I think, is that the spec does not say anything about how the thrift objects shall be used. The specification is about the semantics of the schema, and it is described using the parquet schema language. But in the file there is no such language; we only have thrift objects. Cheers,
Sounds good @gszadovszky. Thanks for the clarification. So, depending on any further comments from other reviewers, it seems this PR is still ready to merge as-is :)
* Writing the UNKNOWN logical type into the schema causes a breakage when parsing the file with Apache Arrow
* Instead, use the default of falling back to null when that backwards-compatibility-only logical type is present, but still write the original type

(cherry picked from commit 2589cc8)
Is this something that could be included in a 1.11.1 release?
Make sure you have checked all steps below.
Jira
PARQUET-1879
Tests
Added a new test in TestParquetMetadataConverter#testMapLogicalType that verifies that when a Map schema is created, the logical type is left as null when writing out the MapKeyValue converted type.
This is because the LogicalType is required elsewhere for backwards compatibility, but when writing the schema to thrift it needs to be left out: writing UNKNOWN, as happens now, causes breakages when later trying to read the file using Apache Arrow in the Snowflake Cloud Database.
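As a hedged illustration of the kind of check this test performs (names are illustrative; toParquetSchema and the thrift accessors are assumed from the converter and generated APIs and may not match the actual test code):

```java
import java.util.List;

import org.apache.parquet.format.ConvertedType;
import org.apache.parquet.format.SchemaElement;
import org.apache.parquet.format.converter.ParquetMetadataConverter;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class MapKeyValueFooterSketch {
  public static void main(String[] args) {
    // A schema whose repeated group still carries the legacy annotation.
    MessageType schema = MessageTypeParser.parseMessageType(
        "message doc {\n"
            + "  optional group my_map (MAP) {\n"
            + "    repeated group key_value (MAP_KEY_VALUE) {\n"
            + "      required binary key (UTF8);\n"
            + "      optional int32 value;\n"
            + "    }\n"
            + "  }\n"
            + "}");

    List<SchemaElement> footer = new ParquetMetadataConverter().toParquetSchema(schema);

    for (SchemaElement element : footer) {
      if (element.isSetConverted_type()
          && element.getConverted_type() == ConvertedType.MAP_KEY_VALUE) {
        // After the fix: the converted type is still written for old
        // readers, but no logical type (in particular not UNKNOWN)
        // accompanies it, so Arrow can read the file.
        System.out.println("logicalType set? " + element.isSetLogicalType()); // expect false
      }
    }
  }
}
```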
Commits
Documentation
N/A