Kafka Connect: Add table to topics mapping property #10422
Conversation
I believe when we add #11313 you should be able to accomplish mapping topics to tables. Also, I think this PR isn't complete: the new config isn't being used.
I believe they aren't related, since that PR covers dynamic routing.
This pull request has been marked as stale due to 30 days of inactivity. It will be closed in 1 week if no further activity occurs. If you think that's incorrect or this pull request requires a review, please simply write any comment. If closed, you can revive the PR at any time and @mention a reviewer or discuss it on the dev@iceberg.apache.org list. Thank you for your contributions.
@bryanck ping! 😄
Ping
This solution seems best to me; it is the most explicit and least restrictive. I'll be using this in my fork. Thanks @igorvoltaic.
I took a look at that one too. From a configurability perspective, it seems better, and the pattern aligns more with the connector's other configs of specifying the
Thanks, I'll take a deeper look next week.
Ping
This pull request has been closed due to lack of activity. This is not a judgement on the merit of the PR in any way. It is just a way of keeping the PR queue manageable. If you think that is incorrect, or the pull request requires review, you can revive the PR at any time. |
This PR adds a property that allows mapping Kafka topics to Iceberg tables. An example config would look like:

iceberg.tables.topic-to-table-mapping=some_topic0:table_name0,some_topic1:table_name1

A similar approach is implemented in SnowflakeSink, ClickhouseSink, and the Aiven JdbcSink, so it can probably be called the standard way of statically routing data in sink connectors at the moment.

The reason I started thinking about implementing this is that, in the original version, it isn't obvious from the config (or the README) how one should map topics to tables, because there is no clear indication of where the .route-regex is applied. The same could be done for the .partition-by and .id-columns configs as well, but in that case tables would be used as the map keys. I am making the same PR as the one in databricks/iceberg-kafka-connect#223 because I was told the connector is being moved to this core repository.

It seems that the code hasn't been fully migrated to this core repository yet, and I am aware that there are further tasks, such as adding the rest of the functionality from the above PR into IcebergSinkTask (as I see it), but I would like to share the idea and get initial feedback. Thanks!
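As a rough illustration of the idea, here is a minimal sketch of how a mapping string in the proposed format could be parsed into a topic-to-table lookup map. The class and method names are hypothetical and do not reflect the PR's actual code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TopicTableMapping {

    // Parses a config value like
    // "some_topic0:table_name0,some_topic1:table_name1"
    // into a map from topic name to table name.
    public static Map<String, String> parse(String config) {
        Map<String, String> mapping = new LinkedHashMap<>();
        if (config == null || config.trim().isEmpty()) {
            return mapping;
        }
        for (String pair : config.split(",")) {
            // Limit the split to 2 so table names containing ':' still work.
            String[] parts = pair.trim().split(":", 2);
            if (parts.length != 2
                    || parts[0].trim().isEmpty()
                    || parts[1].trim().isEmpty()) {
                throw new IllegalArgumentException(
                        "Invalid topic-to-table mapping entry: " + pair);
            }
            mapping.put(parts[0].trim(), parts[1].trim());
        }
        return mapping;
    }

    public static void main(String[] args) {
        Map<String, String> mapping =
                parse("some_topic0:table_name0,some_topic1:table_name1");
        System.out.println(mapping.get("some_topic0")); // table_name0
        System.out.println(mapping.get("some_topic1")); // table_name1
    }
}
```

A sink task could then look up the target table for each incoming record by its topic, falling back to an error (or a default route) for unmapped topics, depending on the connector's desired semantics.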