
Conversation

@swapna267 (Contributor) commented Jan 27, 2025

This PR addresses:

  1. Creation of a dynamic Iceberg table in the Flink catalog from the underlying physical Iceberg table, using the LIKE clause.
  2. Iceberg Source support for source watermarks, so it can be used in Flink WINDOW functions. https://github.com/apache/flink/blob/release-1.18/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsSourceWatermark.java enables Flink to rely on the watermark strategy provided by the ScanTableSource itself.
CREATE TABLE table_wm (
      eventTS AS CAST(t1 AS TIMESTAMP(3)),
      WATERMARK FOR eventTS AS SOURCE_WATERMARK()
) WITH (
  'watermark-column'='t1'
) LIKE iceberg_catalog.db.table;

Reference:
#10219
#9346

github-actions bot added the flink label Jan 27, 2025
@pvary (Contributor) commented Jan 28, 2025

@swapna267: Started the test runs so we can see the status of the PR.
Could you please remove the Flink 1.18/1.19 version parts of the changes?
It makes the review easier if we concentrate on only a single Flink version, and do the backports later once the changes to the main version are merged.

@swapna267 (Contributor, Author) commented:

@pvary reverted the 1.18/1.19 changes.

@swapna267 (Contributor, Author) commented Jan 29, 2025

Background:

  1. Creation of a dynamic Iceberg table in the Flink catalog from the underlying physical Iceberg table, using the LIKE clause.

Currently (without the changes in this PR), creating a table in the Flink catalog works by configuring the Flink connector as described in the flink-connector docs.

But that requires the user to provide the schema for the table. A way to avoid that is CREATE TABLE ... LIKE, using the DDL below.

CREATE TABLE table_wm (
      eventTS AS CAST(t1 AS TIMESTAMP(3)),
      WATERMARK FOR eventTS AS SOURCE_WATERMARK()
) WITH (
  'connector'='iceberg',
  'catalog-name'='iceberg_catalog',
  'catalog-database'='testdb',
  'catalog-table'='t'
) LIKE iceberg_catalog.testdb.t;

Options like connector, catalog-name, catalog-database, and catalog-table need to be duplicated because the Iceberg FlinkCatalog doesn't return any catalog-related properties from getTable. This PR addresses the issue by including these properties when getTable is called; Flink then uses them when creating the table in the Flink catalog.
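
For illustration, here is a minimal sketch of that idea (the helper class CreateTableLikeOptions is hypothetical, not the PR's actual code; the property keys are the ones discussed in this thread):

import java.util.HashMap;
import java.util.Map;

import org.apache.flink.table.catalog.ObjectPath;

// Hypothetical helper, not the PR's exact diff: builds the connector options
// that getTable() would merge into the returned CatalogTable, so that
// CREATE TABLE ... LIKE can recreate them without a WITH clause.
final class CreateTableLikeOptions {
  private CreateTableLikeOptions() {}

  static Map<String, String> connectorOptions(String catalogName, ObjectPath tablePath) {
    Map<String, String> props = new HashMap<>();
    props.put("connector", "iceberg");
    props.put("catalog-name", catalogName);
    props.put("catalog-database", tablePath.getDatabaseName());
    props.put("catalog-table", tablePath.getObjectName());
    return props;
  }
}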

  2. Iceberg Source to support Source Watermark
    As raised in #10219 ("The 'Emitting watermarks' feature can't be used in Flink SQL?"), the source watermark implemented as part of https://iceberg.apache.org/docs/nightly/flink-queries/#emitting-watermarks cannot be used in Flink window functions.

Flink lets the user push the watermark down to the source using the SupportsSourceWatermark interface.

Here we fall back to the read options implemented in #9346 to configure the watermark column on the Iceberg source, as sketched below.
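
A hedged sketch of that wiring, assuming illustrative names (IcebergTableSourceSketch is not the PR's actual class):

import java.util.Map;

import org.apache.flink.table.connector.source.abilities.SupportsSourceWatermark;

// Illustrative sketch: the table source declares the SupportsSourceWatermark
// ability and fails fast when the 'watermark-column' read option (#9346) is
// missing, since that option supplies the actual watermark strategy when the
// underlying IcebergSource is built.
abstract class IcebergTableSourceSketch implements SupportsSourceWatermark {
  private final Map<String, String> readOptions;

  IcebergTableSourceSketch(Map<String, String> readOptions) {
    this.readOptions = readOptions;
  }

  @Override
  public void applySourceWatermark() {
    if (!readOptions.containsKey("watermark-column")) {
      throw new IllegalArgumentException(
          "SOURCE_WATERMARK() requires the 'watermark-column' read option");
    }
    // Nothing else to do here: the watermark strategy itself is derived from
    // the read options when the IcebergSource is constructed.
  }
}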

Comment on lines -387 to -393
if (Objects.equals(
    table.getOptions().get("connector"), FlinkDynamicTableFactory.FACTORY_IDENTIFIER)) {
  throw new IllegalArgumentException(
      "Cannot create the table with 'connector'='iceberg' table property in "
          + "an iceberg catalog, Please create table with 'connector'='iceberg' property in a non-iceberg catalog or "
          + "create table without 'connector'='iceberg' related properties in an iceberg table.");
}
Contributor:

Why do we remove this check?

@swapna267 (Contributor, Author) commented Jan 29, 2025:

Tables can be created using LIKE in:

  1. A Flink catalog (not currently supported).
  2. The Iceberg catalog itself, from another Iceberg table, as detailed in the docs.

This check fails if we try to create a table using LIKE in the Iceberg catalog (case #2) when connector=iceberg is present in the options. For example, with DDL like the following:

CREATE TABLE `hive_catalog`.`default`.`sample_like`
WITH ('connector'='iceberg')
LIKE `hive_catalog`.`default`.`sample`;

To support case #1 without the user setting any extra options via the WITH clause, we need to add the connector option in getTable:

catalogAndTableProps.put("connector", FlinkDynamicTableFactory.FACTORY_IDENTIFIER);

This check was added in a very old PR, #2666 (see #2666 (comment)), when Flink SQL didn't support CREATE TABLE A LIKE B where A and B are in different catalogs.

So, by removing this check we ignore the connector option being passed, and the following DDL can create table table_like in the Flink catalog, backed by iceberg_catalog.db.table. Since we know the source table is an Iceberg table, adding connector=iceberg would be redundant.

CREATE TABLE table_like (
      eventTS AS CAST(t1 AS TIMESTAMP(3))
) LIKE iceberg_catalog.db.table;

Contributor:

What happens when the source table is not an Iceberg table?
I'm trying to understand where we get the missing information in that case, and whether we have a way to check that we actually receive it. If we can create such a check, then we can still throw an exception when we don't get this information from any source.

@swapna267 (Contributor, Author) commented:

createTable in this catalog is called only when the source table is an Iceberg table. Currently, the catalog information comes in when the Catalog is created.

The following are the scenarios in which the getTable / createTable methods of this catalog are used:

  1. Create an Iceberg table in an Iceberg catalog -> only createTable is called; the Catalog instance has all catalog-related info.
  2. Create a table in Iceberg catalog catalog1 LIKE a table in Iceberg catalog catalog1 -> getTable() sets the schema/partitioning info, which is used to create the table with the same schema/partitioning as the source in catalog1.
  3. Create a table in Iceberg catalog catalog2 LIKE a table in Iceberg catalog catalog1 -> getTable() sets the schema/partitioning info, which is used to create the table with the same schema/partitioning as the source in catalog2.
  4. Create a table in the Flink catalog LIKE a table in an Iceberg catalog -> getTable() is only called to get the source table info, and createTable is called on the Flink catalog, where the connector/Iceberg catalog properties are used to instantiate the FlinkDynamicTableFactory.

When createTable is invoked, there is currently no easy way to differentiate between cases 2, 3, and 4, or a user doing:

CREATE TABLE  `hive_catalog`.`default`.`sample_like` 
WITH ('connector'='iceberg', 'catalog-name'='')

@swapna267 (Contributor, Author) commented Feb 8, 2025:

With the current changes in the PR, there is one side effect: in cases 2 and 3 mentioned above, the extra properties added in getTable,

catalogAndTableProps.put(FlinkCreateTableOptions.CATALOG_NAME.key(), getName());
catalogAndTableProps.put(
    FlinkCreateTableOptions.CATALOG_DATABASE.key(), tablePath.getDatabaseName());
catalogAndTableProps.put(
    FlinkCreateTableOptions.CATALOG_TABLE.key(), tablePath.getObjectName());
catalogAndTableProps.put("connector", FlinkDynamicTableFactory.FACTORY_IDENTIFIER);
will be added as table properties on the destination table. I'm looking into nicer ways to avoid that. One option is to add an extra property, say createtabletype -> like, in getTable, which can be used to differentiate how the table is being created; the extra props can then be dropped in cases 2 and 3, as sketched below.
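
A rough sketch of that workaround (InjectedPropsFilter and its key set are hypothetical, not part of the PR):

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the workaround: before persisting table properties in
// createTable(), drop the keys that getTable() injected purely for Flink's
// benefit, so they do not leak into the destination Iceberg table.
final class InjectedPropsFilter {
  private static final Set<String> INJECTED_KEYS =
      Set.of("connector", "catalog-name", "catalog-database", "catalog-table");

  private InjectedPropsFilter() {}

  static Map<String, String> strip(Map<String, String> tableProps) {
    Map<String, String> cleaned = new HashMap<>(tableProps);
    cleaned.keySet().removeAll(INJECTED_KEYS);
    return cleaned;
  }
}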

Contributor:

I don't get this:

Currently, the catalog information comes in when the Catalog is created.

Let's talk offline

  SupportsFilterPushDown,
- SupportsLimitPushDown {
+ SupportsLimitPushDown,
+ SupportsSourceWatermark {
Contributor:

I think we have two features in a single PR:

  • CREATE TABLE LIKE
  • Watermark support

Could we separate out these features into different PRs?
Could we write tests for both features?

@swapna267 (Contributor, Author) commented Jan 29, 2025:

These features were driven mainly by a use case where an Iceberg table needs to be used in Flink window functions. This requires the incoming table to have a MILLISECOND-precision timestamp column and a watermark defined on the source table.

As Iceberg only supports MICROSECOND timestamp columns, we need a table with computed columns, and those can be created only in the Flink catalog; the Iceberg catalog doesn't support creating tables with computed columns.

I am happy to split them into two separate PRs.
I have tests for CREATE TABLE LIKE.

As watermark support just makes the source implement the interface and falls back to #9346 for the core logic, I didn't have a test case. I can add a validation for whether watermark-column is configured, so it can fail fast, and a test case around that, along the lines of the sketch below.
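
For example, a minimal sketch of such a fail-fast test, reusing the hypothetical IcebergTableSourceSketch class from the earlier snippet (both illustrative, not the PR's actual tests):

import static org.junit.jupiter.api.Assertions.assertThrows;

import java.util.Map;

import org.junit.jupiter.api.Test;

// Illustrative JUnit 5 test for the fail-fast validation sketched earlier.
class SourceWatermarkValidationTest {
  @Test
  void applySourceWatermarkFailsWithoutWatermarkColumn() {
    // No 'watermark-column' read option configured, so the ability must fail fast.
    IcebergTableSourceSketch source = new IcebergTableSourceSketch(Map.of()) {};
    assertThrows(IllegalArgumentException.class, source::applySourceWatermark);
  }
}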

Contributor:

Please separate out the features into two PRs.

Comment on lines -259 to -262
@TestTemplate
public void testConnectorTableInIcebergCatalog() {
  // Create the catalog properties
  Map<String, String> catalogProps = Maps.newHashMap();
Contributor:

Why is this test removed?

@swapna267 (Contributor, Author) commented:

This tests the check mentioned in #12116 (comment): fail creating a table in the Iceberg catalog if connector=iceberg is specified in the options. As the check has been deleted, I removed this test case.

Contributor:

I think this is still a valid check in most cases. It is only invalid when the table is created with CREATE TABLE ... LIKE, and only if the source table is an Iceberg table.
Am I missing something?

github-actions bot commented:

This pull request has been marked as stale due to 30 days of inactivity. It will be closed in 1 week if no further activity occurs. If you think that’s incorrect or this pull request requires a review, please simply write any comment. If closed, you can revive the PR at any time and @mention a reviewer or discuss it on the [email protected] list. Thank you for your contributions.

github-actions bot added the stale label Mar 13, 2025
github-actions bot commented:

This pull request has been closed due to lack of activity. This is not a judgement on the merit of the PR in any way. It is just a way of keeping the PR queue manageable. If you think that is incorrect, or the pull request requires review, you can revive the PR at any time.

github-actions bot closed this Mar 21, 2025