# Exclude rocksdbjni from assembled Flink / cloud gcp jars #334
## Walkthrough

A new exclusion for the artifact "org.rocksdb:rocksdbjni" has been added to the excluded_artifacts list within the spark_repository function. This prevents version conflicts with the flink and cloud_gcp assemblies and avoids a NoSuchMethodError for WriteBatch.remove. No other modifications were made to the file.
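For concreteness, here is a minimal sketch of what the change amounts to. Only the `"org.rocksdb:rocksdbjni"` entry is what this PR actually adds; the surrounding `spark_repository` body, the `maven_install` call, and the `SPARK_ARTIFACTS` name are assumptions standing in for the repo's real Bazel setup:

```
# Hypothetical sketch -- the real spark_repository function and its other
# arguments live in the repo's Bazel config; only the exclusion line for
# rocksdbjni is the change described in this PR.
def spark_repository():
    maven_install(
        name = "spark",
        artifacts = SPARK_ARTIFACTS,  # assumed name for the existing Spark dependency list
        excluded_artifacts = [
            # ... existing exclusions ...
            # Spark transitively pulls in rocksdbjni 8.3.2, which shadows
            # Flink's pinned 6.20.3-ververica-2.0 under child-first
            # (user-class-first) classloading.
            "org.rocksdb:rocksdbjni",
        ],
    )
```

With rules_jvm_external-style `excluded_artifacts`, the artifact is dropped from the resolved dependency graph entirely, so the assembled deploy jars never contain the newer RocksDB classes in the first place.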
## Summary

Hit some errors because our Spark deps pull in rocksdbjni 8.3.2, whereas Flink expects an older version (6.20.3-ververica-2.0). Since we rely on user-class-first (child-first) classloading, the newer version takes priority, and when Flink closes tiles we hit this error:

```
2025-02-05 21:14:53,614 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Tiling for etsy.listing_canary.actions_v1 -> (Tiling Side Output Late Data for etsy.listing_canary.actions_v1, Avro conversion for etsy.listing_canary.actions_v1 -> async kvstore writes for etsy.listing_canary.actions_v1 -> Sink: Metrics Sink for etsy.listing_canary.actions_v1) (2/3) (a107444db4dad3eb79d9d02631d8696e_5627cd3c4e8c9c02fa4f114c4b3607f4_1_56) switched from RUNNING to FAILED on container_1738197659103_0039_01_000004 @ zipline-canary-cluster-w-1.us-central1-c.c.canary-443022.internal (dataPort=33465).
java.lang.NoSuchMethodError: 'void org.rocksdb.WriteBatch.remove(org.rocksdb.ColumnFamilyHandle, byte[])'
	at org.apache.flink.contrib.streaming.state.RocksDBWriteBatchWrapper.remove(RocksDBWriteBatchWrapper.java:105)
```

Yanked rocksdbjni out of the two assembled jars and confirmed that the Flink job runs fine and crosses over hour boundaries (and hence exercises tile closures).

## Checklist

- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [x] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit

- Chores
  - Made internal adjustments to dependency management to improve compatibility between libraries and enhance overall application stability.
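As a quick sanity check on an assembled jar (the jar name here is purely illustrative), listing its contents should now come up empty for RocksDB classes, e.g. `jar tf flink_assembly_deploy.jar | grep org/rocksdb`, leaving Flink free to load its own pinned 6.20.3-ververica-2.0 classes at runtime.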