support create table like in flink catalog and watermark in windows #12116
Conversation
@swapna267: Started the test runs so we can see the status of the PR.

@pvary reverted 1.18/1.19 changes.
Background:
Currently (without the changes in this PR), creating a table in a Flink catalog works by configuring the Flink connector as described in the docs, but that requires the user to provide the schema for the table. A way to tackle that is CREATE TABLE LIKE, using a DDL like the one sketched below. Options such as connector, catalog-name, catalog-database, and catalog-table need to be duplicated because the Iceberg FlinkCatalog doesn't return any catalog-related properties from getTable. This PR addresses the issue by including these properties when getTable is called, so Flink can use them when creating the table in the Flink catalog.
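A sketch of that DDL, assuming an Iceberg catalog registered in Flink as iceberg_catalog; the database, table, and option values are illustrative placeholders:

-- Without this PR, the catalog-related options must be duplicated by hand,
-- even though the LIKE source already identifies the Iceberg table.
CREATE TABLE default_catalog.default_database.target_table
WITH (
  'connector'='iceberg',
  'catalog-name'='iceberg_catalog',
  'catalog-database'='db',
  'catalog-table'='source_table'
) LIKE iceberg_catalog.db.source_table;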
Flink lets users push down watermarks to the source via the SupportsSourceWatermark interface. Here we fall back to the read options implemented in #9346 to configure the watermark column on the IcebergSource, as sketched after this paragraph.
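For reference, a minimal sketch of configuring the watermark column through those read options, using Flink's dynamic table option hints; the column name ts is an assumed placeholder:

-- 'watermark-column' is the read option implemented in #9346; 'ts' is a
-- hypothetical event-time column in the source table.
SELECT *
FROM iceberg_catalog.db.source_table /*+ OPTIONS('watermark-column'='ts') */;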
if (Objects.equals(
    table.getOptions().get("connector"), FlinkDynamicTableFactory.FACTORY_IDENTIFIER)) {
  throw new IllegalArgumentException(
      "Cannot create the table with 'connector'='iceberg' table property in "
          + "an iceberg catalog, Please create table with 'connector'='iceberg' property in a non-iceberg catalog or "
          + "create table without 'connector'='iceberg' related properties in an iceberg table.");
}
Why do we remove this check?
Tables can be created using LIKE from:
- a table in a Flink catalog - not currently supported.
- another table in the Iceberg catalog itself, as detailed in the docs.
This check fails if we try to create a table using LIKE in the Iceberg catalog (case #2) when connector=iceberg is in the options. For example, a DDL like the one below:
CREATE TABLE `hive_catalog`.`default`.`sample_like`
WITH ('connector'='iceberg')
LIKE `hive_catalog`.`default`.`sample`;
In order to support case #1 without the user setting any extra options in the WITH clause, we need to add the connector option in getTable:
iceberg/flink/v1.20/flink/src/main/java/org/apache/iceberg/flink/FlinkCatalog.java, line 344 in 52bfbdc:

catalogAndTableProps.put("connector", FlinkDynamicTableFactory.FACTORY_IDENTIFIER);
This check was added in a very old PR, #2666 (see #2666 (comment)), when Flink SQL didn't support CREATE TABLE A LIKE B where A and B are in different catalogs.
So, by removing this check, we ignore the connector option being passed, and the following DDL can create table table_like in a Flink catalog backed by iceberg_catalog.db.table. Since we know the source table is an Iceberg table, adding connector=iceberg would be redundant.
CREATE TABLE table_like (
  eventTS AS CAST(t1 AS TIMESTAMP(3))
) LIKE iceberg_catalog.db.table;
What happens when the source table is not an Iceberg table?
I'm trying to understand where we get the missing information in this case, and whether we have a way to check that we actually got it. If we can create such a check, then we can still throw an exception when we don't get this information from any source.
createTable in this catalog is called only when the source table is an Iceberg table. Currently the Catalog information comes when Catalog is created.
The following are the scenarios in which the getTable / createTable methods of this catalog are used:
- Create an Iceberg table in an Iceberg catalog -> only createTable is called, and the Catalog instance has all catalog-related info.
- Create a table in Iceberg catalog catalog1 LIKE a table in Iceberg catalog catalog1 -> getTable() sets the schema/partitioning info, which is used to create the table with the same schema/partitioning as the source in catalog1.
- Create a table in Iceberg catalog catalog2 LIKE a table in Iceberg catalog catalog1 -> getTable() sets the schema/partitioning info, which is used to create the table with the same schema/partitioning as the source in catalog2 (see the sketch after this list).
- Create a table in a Flink catalog LIKE a table in an Iceberg catalog -> getTable() is only called to get the source table info, and createTable is called on the Flink catalog, where the connector/Iceberg catalog properties are used to instantiate FlinkDynamicTableFactory.
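A sketch of case 3, with catalog1, catalog2, and the database/table names as illustrative placeholders:

-- getTable() on catalog1 supplies the schema/partitioning, which
-- createTable() then uses to build the new table in catalog2.
CREATE TABLE catalog2.db.target_table LIKE catalog1.db.source_table;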
When createTable is invoked, there is currently no easy way to differentiate between case 2 / case 3 / case 4, or the user doing:
CREATE TABLE `hive_catalog`.`default`.`sample_like`
WITH ('connector'='iceberg', 'catalog-name'='')
With the current changes in the PR, there is one side-effect. In cases 2 & 3 mentioned above, the extra properties added in getTable (shown below) are also carried over to the table created in the Iceberg catalog:
iceberg/flink/v1.20/flink/src/main/java/org/apache/iceberg/flink/FlinkCatalog.java, lines 339 to 344 in 52bfbdc:

catalogAndTableProps.put(FlinkCreateTableOptions.CATALOG_NAME.key(), getName());
catalogAndTableProps.put(
    FlinkCreateTableOptions.CATALOG_DATABASE.key(), tablePath.getDatabaseName());
catalogAndTableProps.put(
    FlinkCreateTableOptions.CATALOG_TABLE.key(), tablePath.getObjectName());
catalogAndTableProps.put("connector", FlinkDynamicTableFactory.FACTORY_IDENTIFIER);
I don't get this:
"Currently the Catalog information comes when Catalog is created."
Let's talk offline.
  SupportsFilterPushDown,
- SupportsLimitPushDown {
+ SupportsLimitPushDown,
+ SupportsSourceWatermark {
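For context, once the source implements SupportsSourceWatermark, Flink's built-in SOURCE_WATERMARK() function can be used to defer watermark generation to the source. A sketch reusing the eventTS/t1 placeholder columns from the earlier example:

-- SOURCE_WATERMARK() is only valid when the connector implements
-- SupportsSourceWatermark; column and table names are placeholders.
CREATE TABLE table_like (
  eventTS AS CAST(t1 AS TIMESTAMP(3)),
  WATERMARK FOR eventTS AS SOURCE_WATERMARK()
) LIKE iceberg_catalog.db.table;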
I think we have 2 features in a single PR:
- CREATE TABLE LIKE
- Watermark support
Could we separate these features out into different PRs?
Could we write tests for both features?
These features were driven mainly by a use case where an Iceberg table needs to be used in Flink window functions. This requires the incoming table to have a MILLISECOND-precision timestamp column and a watermark defined on the source table.
As Iceberg only supports MICROSECOND timestamp columns, we need a table with computed columns, and we can create those only in a Flink catalog; the Iceberg catalog doesn't support creating tables with computed columns. (See the sketch below.)
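A sketch of that use case, assuming a MICROSECOND timestamp column t1 as in the earlier example; the table name and watermark interval are placeholders:

-- The computed column downcasts to MILLISECOND precision so it can serve as
-- the window time attribute, and the watermark is declared on it.
CREATE TABLE windowed_source (
  eventTS AS CAST(t1 AS TIMESTAMP(3)),
  WATERMARK FOR eventTS AS eventTS - INTERVAL '5' SECOND
) LIKE iceberg_catalog.db.table;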
I am happy to split them into 2 separate PRs.
I have tests for CREATE TABLE LIKE.
As watermark support just makes the source implement the interface and fall back to #9346 for the core logic, I didn't have a test case. I can add a validation of whether the watermark column is configured, so it can fail fast, and a test case around that.
Please separate the features out into 2 PRs.
@TestTemplate
public void testConnectorTableInIcebergCatalog() {
  // Create the catalog properties
  Map<String, String> catalogProps = Maps.newHashMap();
Why is this test removed?
This tests the check mentioned in #12116 (comment): fail creating a table in the Iceberg catalog if connector=iceberg is specified in the options. As that check has been deleted, I removed this test case.
I think this is still a valid check in most cases. It is only invalid when the table is created with CREATE TABLE .. LIKE, and only if the source table is an Iceberg table.
Am I missing something?
This pull request has been marked as stale due to 30 days of inactivity. It will be closed in 1 week if no further activity occurs. If you think that's incorrect or this pull request requires a review, please simply write any comment. If closed, you can revive the PR at any time and @mention a reviewer or discuss it on the [email protected] list. Thank you for your contributions.

This pull request has been closed due to lack of activity. This is not a judgement on the merit of the PR in any way. It is just a way of keeping the PR queue manageable. If you think that is incorrect, or the pull request requires review, you can revive the PR at any time.
This PR addresses the problem described in the background above.
Reference:
- #10219
- #9346