
Conversation

@swapna267
Contributor

Creates a dynamic Iceberg table in a Flink catalog from the underlying physical Iceberg table using the LIKE clause.

Currently (without the changes in this PR), creating a table in a Flink catalog works by configuring the Flink connector as described in
flink-connector.

But that requires the user to provide the schema for the table. A way to avoid that is CREATE TABLE ... LIKE, using the DDL below.

```sql
CREATE TABLE table_wm (
      eventTS AS CAST(t1 AS TIMESTAMP(3)),
      WATERMARK FOR eventTS AS SOURCE_WATERMARK()
) WITH (
  'connector'='iceberg',
  'catalog-name'='iceberg_catalog',
  'catalog-database'='testdb',
  'catalog-table'='t'
) LIKE iceberg_catalog.testdb.t;
```

Options like connector, catalog-name, catalog-database, and catalog-table need to be duplicated because the Iceberg FlinkCatalog doesn't return any catalog-related properties from getTable. This PR addresses the issue by including these properties when getTable is called, so Flink can use them when creating the table in the Flink catalog.
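
A minimal sketch of the idea (the helpers `loadIcebergTable` and `toFlinkSchema` are hypothetical, and the actual PR code differs in detail):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.catalog.CatalogTable;
import org.apache.flink.table.catalog.ObjectPath;

// Sketch only: fold the connector and catalog identification into the options
// returned from getTable, so CREATE TABLE ... LIKE copies them automatically.
public CatalogTable getTableSketch(ObjectPath tablePath) {
  org.apache.iceberg.Table table = loadIcebergTable(tablePath); // hypothetical helper
  Map<String, String> options = new HashMap<>(table.properties());
  options.put("connector", "iceberg");
  options.put("catalog-name", getName());
  options.put("catalog-database", tablePath.getDatabaseName());
  options.put("catalog-table", tablePath.getObjectName());
  Schema schema = toFlinkSchema(table.schema()); // hypothetical conversion
  return CatalogTable.of(schema, null, Collections.emptyList(), options);
}
```

With the identifying options present in the returned table, the WITH clause in the DDL above becomes unnecessary duplication.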

Previous discussion related to this feature is in PR #12116.

@github-actions github-actions bot added the flink label Feb 7, 2025
```java
FlinkCreateTableOptions.CATALOG_TABLE.key(), tablePath.getObjectName());
catalogAndTableProps.put("connector", FlinkDynamicTableFactory.FACTORY_IDENTIFIER);
catalogAndTableProps.putAll(table.properties());
return toCatalogTableWithProps(table, catalogAndTableProps);
```
Contributor


Could you help me understand why the table properties need to be added here?
We also send the table as a parameter. Wouldn't that be enough?

Contributor


Just thinking out loud:

  • Maybe the code would be easier to read if we send only the catalogProps to the toCatalogTableWithProps and create a merged map when calling the Flink method
  • This is somewhat suboptimal as we create an extra map

Even if we decide to follow your approach, the parameter name should reflect that at the declaration of toCatalogTableWithProps, and maybe some comments or Javadoc would be nice there for future generations 😉

WDYT?
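
For illustration, a rough sketch of the two shapes under discussion (`Table` is org.apache.iceberg.Table; the method and builder names here are hypothetical, not the PR's actual signatures):

```java
// (a) Suggested shape: the helper receives only the catalog props and merges
//     internally, keeping call sites simple at the cost of one extra map.
CatalogTable toCatalogTable(Table table, Map<String, String> catalogProps) {
  Map<String, String> merged = new HashMap<>(table.properties());
  merged.putAll(catalogProps);
  return buildFlinkCatalogTable(table, merged); // hypothetical builder
}

// (b) PR's shape: the caller merges once up front and passes the finished map,
//     avoiding the extra allocation; the parameter name must say "merged".
CatalogTable toCatalogTableWithCustomProps(Table table, Map<String, String> mergedProps) {
  return buildFlinkCatalogTable(table, mergedProps);
}
```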

Contributor Author


Yeah, merged it here, as merging later on needs an extra map.

Renamed the method to toCatalogTableWithCustomProps and modified parameter names. Hope it's more readable now.

Comment on lines 425 to 426
```java
} else if (!("connector".equalsIgnoreCase(entry.getKey())
    || FlinkCreateTableOptions.SRC_CATALOG_PROPS_KEY.equalsIgnoreCase(entry.getKey()))) {
```
Contributor


It took me some time to decipher this 😄
Could we please break up the negation so it doesn't span multiple lines?
Maybe:

```java
} else if (!"connector".equalsIgnoreCase(entry.getKey())
    && !FlinkCreateTableOptions.SRC_CATALOG_PROPS_KEY.equalsIgnoreCase(entry.getKey())) {
```

Or is it just me?
Also, maybe a comment would be nice.

Comment on lines 339 to 346
```java
String srcCatalogProps =
    FlinkCreateTableOptions.toJson(
        getName(), tablePath.getDatabaseName(), tablePath.getObjectName(), catalogProps);

ImmutableMap.Builder<String, String> mergedProps = ImmutableMap.builder();
mergedProps.put("connector", FlinkDynamicTableFactory.FACTORY_IDENTIFIER);
mergedProps.put(FlinkCreateTableOptions.SRC_CATALOG_PROPS_KEY, srcCatalogProps);
mergedProps.putAll(table.properties());
```
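
For intuition, the created table's options might end up carrying something like the following (hypothetical illustration; the real JSON layout is whatever FlinkCreateTableOptions.toJson produces):

```java
import java.util.Map;

// Hypothetical illustration of the merged options stored on the created table:
// the whole source-catalog configuration rides along under one property key.
class SrcCatalogOptionsExample {
  static final Map<String, String> OPTIONS =
      Map.of(
          "connector", "iceberg",
          "src-catalog",
          "{\"catalog-name\":\"iceberg_catalog\","
              + "\"catalog-database\":\"testdb\","
              + "\"catalog-table\":\"t\","
              + "\"catalog-props\":{\"type\":\"iceberg\"}}");
}
```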
Contributor


Could we please add a comment here, that we store the catalog properties in the merged property list to work around the Flink API limitations?

```java
tableProps.forEach(flinkConf::setString);

String catalogName = flinkConf.getString(CATALOG_NAME);
Map<String, String> mergedProps = mergeSrcCatalogProps(tableProps);
```
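
A hedged sketch of what a mergeSrcCatalogProps-style helper presumably does, mirroring the toJson call quoted earlier (FlinkCreateTableOptions.fromJson and the accessor names are assumptions, not necessarily the PR's actual API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: unpack the JSON stored under "src-catalog" back into the
// flat catalog option keys the table factory expects, keeping all other props.
static Map<String, String> mergeSrcCatalogProps(Map<String, String> tableProps) {
  Map<String, String> merged = new HashMap<>(tableProps);
  String json = merged.remove("src-catalog"); // SRC_CATALOG_PROPS_KEY
  if (json != null) {
    FlinkCreateTableOptions src = FlinkCreateTableOptions.fromJson(json); // assumed
    merged.put("catalog-name", src.catalogName());           // assumed accessors
    merged.put("catalog-database", src.catalogDatabase());
    merged.put("catalog-table", src.catalogTable());
    merged.putAll(src.catalogProps());
  }
  return merged;
}
```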
Contributor


Either here or in the Javadoc for mergeSrcCatalogProps, please describe what we are doing, and why.

Contributor

@pvary pvary left a comment


I generally like this approach.
Please add some comments to the code for the future developers.
Otherwise looks good to me.

@swapna267
Contributor Author

Added comments to the code.

```java
.noDefaultValue()
.withDescription("Properties for the underlying catalog for iceberg table.");

public static final String SRC_CATALOG_PROPS_KEY = "src-catalog";
```
Contributor


Also, does Flink CREATE TABLE ... LIKE work for Hive tables as the source? I assume not; otherwise, we could refer to how that is implemented.

It feels tacky to use a special property key to carry over the catalog props and the source table identifier. Does it make sense to modify the Flink API to support this more elegantly?

Contributor Author


Right, this is not supported in Hive:
https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/dev/table/hive-compatibility/hive-dialect/create/

It is supported in Kafka, but there is no concept of a catalog there; the Kafka connector mostly uses flat cluster properties.
Sure, I can look into Flink API changes for the long term, but since not all connectors have a concept of a catalog or a hierarchy like Iceberg, I'm not sure how that would work out.

Contributor


Yeah, Kafka doesn't have a catalog concept. We can take this discussion separately; it won't be a blocker for this PR.

Contributor

@stevenzwu stevenzwu left a comment


waiting for CI to complete

@stevenzwu stevenzwu merged commit ffe9ad5 into apache:main Mar 5, 2025
20 checks passed
@stevenzwu
Contributor

Thanks @swapna267 for the contribution and @pvary for the review.

@swapna267
Contributor Author

Thanks @stevenzwu and @pvary for the review.
