Flink: Fix compatibility issue between 1.12.x and 1.13.x #3354
```diff
@@ -1,7 +1,7 @@
 org.slf4j:* = 1.7.25
 org.apache.avro:avro = 1.10.1
 org.apache.calcite:* = 1.10.0
-org.apache.flink:* = 1.13.2
+org.apache.flink:* = 1.12.5
```
Member (Author)

As the comment from @pnowojski says (the comment from this email), we'd better use Flink 1.12.x to build the iceberg-flink-runtime jar so that it works against a Flink 1.13.x runtime. Currently, the latest 1.12 release is 1.12.5.
Contributor

@openinx @pnowojski there is a breaking change here: this revert to 1.12 breaks compilation of the FLIP-27 Iceberg source dev branch, which has been updated to the 1.13 `SplitEnumerator` API.

Member (Author)

@stevenzwu, currently 1.12.5 is used to compile the flink common module (let's say …)
```diff
 org.apache.hadoop:* = 2.7.3
 org.apache.hive:hive-metastore = 2.3.8
 org.apache.hive:hive-serde = 2.3.8
```
Won't this fail if there is no `getCatalogTable` method? And if the method exists, then it wouldn't need to be called dynamically. You may need an `orNoop()` call here.
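(For readers unfamiliar with Iceberg's `DynMethods`: `orNoop()` is a builder option that makes `build()` return a no-op method instead of throwing when no listed implementation is found. A minimal sketch of the lookup being discussed, not the PR's actual code:)

```java
import org.apache.flink.table.factories.DynamicTableFactory;
import org.apache.iceberg.common.DynMethods;

class GetCatalogTableMethod {
  // With orNoop(), build() falls back to a no-op method when
  // getCatalogTable is not found on the listed class; invoking
  // the no-op returns null rather than failing.
  static final DynMethods.UnboundMethod GET_CATALOG_TABLE =
      DynMethods.builder("getCatalogTable")
          .impl(DynamicTableFactory.Context.class)
          .orNoop()
          .build();
}
```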
The reason why I use the dynamic approach is to avoid adding Flink 1.13's `ResolvedCatalogTable` into the runtime jar. For example, if we disassemble `FlinkDynamicTableFactory.class` from the iceberg-flink-runtime.jar (compiled against Flink 1.13.2) using `javap -c ./org/apache/iceberg/flink/FlinkDynamicTableFactory.class`, the JVM instructions include a call site that references `ResolvedCatalogTable` explicitly. That bakes `ResolvedCatalogTable` into the iceberg-flink-runtime.jar, which is not what we expect, because the iceberg-flink-runtime jar should run correctly in both Flink 1.12 and Flink 1.13 clusters.
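(To make the dynamic approach concrete, here is a minimal sketch, assuming Iceberg's `DynMethods` utility; the helper class name is hypothetical and this is not the PR's exact code. The point is that only `CatalogTable`, which exists in both Flink versions, appears in the compiled class:)

```java
import org.apache.flink.table.catalog.CatalogTable;
import org.apache.flink.table.factories.DynamicTableFactory;
import org.apache.iceberg.common.DynMethods;

class CatalogTableHelper {
  private static final DynMethods.UnboundMethod GET_CATALOG_TABLE =
      DynMethods.builder("getCatalogTable")
          .impl(DynamicTableFactory.Context.class)
          .build();

  // Because the call goes through reflection, the class file's constant
  // pool never records ResolvedCatalogTable (the Flink 1.13 return type);
  // javap -c on this class shows only CatalogTable.
  static CatalogTable catalogTable(DynamicTableFactory.Context context) {
    return GET_CATALOG_TABLE.invoke(context);
  }
}
```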
This looks good to me for handling the extreme case.
Okay, so the method exists in both cases, but returns a more specific object in Flink 1.13?
Yes, that's correct.
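(An illustrative summary of the exchange above: Flink 1.12's `DynamicTableFactory.Context#getCatalogTable()` returns `CatalogTable`, while Flink 1.13 narrows the return type to `ResolvedCatalogTable`, a subtype of `CatalogTable`. A compiled call site records the exact return descriptor, so code compiled against 1.13 fails to link on a 1.12 cluster; a reflective lookup matches by name and parameter types only, so it tolerates both. A sketch with plain reflection, hypothetical class name aside:)

```java
import java.lang.reflect.Method;
import org.apache.flink.table.catalog.CatalogTable;
import org.apache.flink.table.factories.DynamicTableFactory;

class ReturnTypeDemo {
  static CatalogTable load(DynamicTableFactory.Context context)
      throws ReflectiveOperationException {
    // getMethod matches by name and parameter types; the return type is
    // not part of the lookup, so this resolves the 1.12 variant
    // (returning CatalogTable) and the 1.13 variant (returning
    // ResolvedCatalogTable) alike.
    Method method = DynamicTableFactory.Context.class.getMethod("getCatalogTable");
    return (CatalogTable) method.invoke(context);
  }
}
```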