Add Iceberg partition stats extraction #1012
Conversation
- Add IcebergPartitionStatsExtractor for extracting partition statistics from Iceberg tables
- Add comprehensive test suite for Iceberg partition stats functionality
- Add IcebergSparkSPJTest for testing Storage Partitioned Join optimization
- Add Iceberg Spark runtime dependency to build configuration

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
Walkthrough
This change adds the Iceberg Spark runtime 3.5 dependency to the Spark module's build and test configurations, updates the batch library's source file glob to be recursive, removes a duplicate Iceberg dependency from the Maven repository, updates the Maven hash, and introduces two comprehensive test suites for Iceberg integration and SPJ optimization.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant TestSuite as IcebergSparkSPJTest
    participant SparkSession
    participant IcebergTable
    participant DataFrame
    participant PlanAnalyzer
    TestSuite->>SparkSession: Initialize with Iceberg extensions
    TestSuite->>IcebergTable: Create test tables (customers, orders)
    TestSuite->>IcebergTable: Insert synthetic data
    TestSuite->>DataFrame: Perform join query
    DataFrame->>PlanAnalyzer: Analyze physical plan
    PlanAnalyzer-->>TestSuite: Report Exchange/shuffle presence
    TestSuite->>TestSuite: Assert plan and result correctness
```
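The last two steps in the diagram (inspecting the physical plan and reporting whether a shuffle Exchange is present) can be checked by walking the executed plan. A minimal sketch in Scala; the helper name is an assumption, not the PR's exact code:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec

// True if the executed plan still contains a shuffle Exchange, i.e. Storage
// Partitioned Join did not eliminate the shuffle for this query.
// Note: with AQE enabled, the final plan is only known after the query runs.
def hasShuffleExchange(df: DataFrame): Boolean =
  df.queryExecution.executedPlan.collect { case e: ShuffleExchangeExec => e }.nonEmpty
```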
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~40 minutes
| "org.json4s:json4s-core_2.12:3.7.0-M11", | ||
| "org.json4s:json4s-ast_2.12:3.7.0-M11", | ||
| "io.delta:delta-spark_2.12:3.2.0", | ||
| "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.6.1", |
this was duplicated.
Actionable comments posted: 4
🧹 Nitpick comments (4)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergPartitionStatsExtractorTest.scala (1)
122-122: Fix misleading comment
The comment says "(Charlie, NULL)" but Charlie has a non-null name. Row 4 has the NULL name.
- southStats.colToNullCount("name") should be(1) // One null name (Charlie, NULL)
+ southStats.colToNullCount("name") should be(1) // One null name (row 4)

spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (3)
173-173: Remove df.explain() call to reduce test output noise. This prints to console unnecessarily.
- df.explain()
199-199: Add parentheses for clarity.
- resultCount should be > 0L
+ resultCount should be > (0L)
286-286: Clarify column index comment. Index 1 corresponds to the second column selected.
- row.isNullAt(1) shouldBe true // order_id should be null
+ row.isNullAt(1) shouldBe true // order_id (index 1) should be null
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- spark/BUILD.bazel (2 hunks)
- spark/src/main/scala/ai/chronon/spark/iceberg/IcebergPartitionStatsExtractor.scala (1 hunks)
- spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergPartitionStatsExtractorTest.scala (1 hunks)
- spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (1 hunks)
- tools/build_rules/dependencies/maven_repository.bzl (0 hunks)
💤 Files with no reviewable changes (1)
- tools/build_rules/dependencies/maven_repository.bzl
🧰 Additional context used
🧠 Learnings (4)
spark/BUILD.bazel (2)
Learnt from: tchow-zlai
PR: #393
File: cloud_gcp/BUILD.bazel:99-99
Timestamp: 2025-02-22T20:30:28.381Z
Learning: The jar file "iceberg-bigquery-catalog-1.5.2-1.0.1-beta.jar" in cloud_gcp/BUILD.bazel is a local dependency and should not be replaced with maven_artifact.
Learnt from: chewy-zlai
PR: #47
File: docker-init/Dockerfile:36-38
Timestamp: 2024-10-17T01:09:24.653Z
Learning: The JAR files spark-assembly-0.1.0-SNAPSHOT.jar and cloud_aws-assembly-0.1.0-SNAPSHOT.jar are generated by sbt and located in the target directory after the build.
spark/src/main/scala/ai/chronon/spark/iceberg/IcebergPartitionStatsExtractor.scala (1)
Learnt from: piyush-zlai
PR: #33
File: cloud_aws/src/main/scala/ai/chronon/integrations/aws/DynamoDBKVStoreImpl.scala:245-260
Timestamp: 2024-10-08T16:18:45.669Z
Learning: In DynamoDBKVStoreImpl.scala, refactoring methods like extractTimedValues and extractListValues to eliminate code duplication is discouraged if it would make the code more convoluted.
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergPartitionStatsExtractorTest.scala (2)
Learnt from: piyush-zlai
PR: #33
File: cloud_aws/src/test/scala/ai/chronon/integrations/aws/DynamoDBKVStoreTest.scala:175-175
Timestamp: 2024-10-07T15:09:51.567Z
Learning: Hardcoding future timestamps in tests within DynamoDBKVStoreTest.scala is acceptable when data is generated and queried within the same time range, ensuring the tests remain valid over time.
Learnt from: piyush-zlai
PR: #53
File: hub/app/controllers/TimeSeriesController.scala:224-224
Timestamp: 2024-10-29T15:21:58.102Z
Learning: In the mocked data implementation in hub/app/controllers/TimeSeriesController.scala, potential NumberFormatException exceptions due to parsing errors (e.g., when using val featureId = name.split("_").last.toInt) are acceptable and will be addressed when adding the concrete backend.
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (3)
Learnt from: piyush-zlai
PR: #44
File: hub/app/controllers/ModelController.scala:15-18
Timestamp: 2024-10-17T19:46:42.629Z
Learning: References to MockDataService in hub/test/controllers/SearchControllerSpec.scala and hub/test/controllers/ModelControllerSpec.scala are needed for tests and should not be removed.
Learnt from: chewy-zlai
PR: #50
File: spark/src/test/scala/ai/chronon/spark/test/MockKVStore.scala:19-28
Timestamp: 2024-10-31T18:29:45.027Z
Learning: In MockKVStore located at spark/src/test/scala/ai/chronon/spark/test/MockKVStore.scala, the multiPut method is intended to be a simple implementation without dataset existence validation, duplicate validation logic elimination, or actual storage of key-value pairs for verification.
Learnt from: piyush-zlai
PR: #33
File: cloud_aws/src/test/scala/ai/chronon/integrations/aws/DynamoDBKVStoreTest.scala:175-175
Timestamp: 2024-10-07T15:09:51.567Z
Learning: Hardcoding future timestamps in tests within DynamoDBKVStoreTest.scala is acceptable when data is generated and queried within the same time range, ensuring the tests remain valid over time.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: fetcher_tests
🔇 Additional comments (1)
spark/BUILD.bazel (1)
46-46: Verify Spark version compatibility for Iceberg
I couldn't locate a defined Spark version in the repo. You've added:
• spark/BUILD.bazel:46
• spark/BUILD.bazel:136
Ensure your project is using Spark 3.5 (or newer) to match iceberg-spark-runtime-3.5 and that there are no version conflicts.
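If the Spark version is not pinned anywhere obvious, a cheap guard is to assert it inside the Iceberg test suites themselves. A hedged sketch, not code from this PR:

```scala
// Fail fast if the session's Spark version does not match the
// iceberg-spark-runtime-3.5 artifact declared in spark/BUILD.bazel.
assert(
  spark.version.startsWith("3.5"),
  s"iceberg-spark-runtime-3.5 expects Spark 3.5.x but the session reports ${spark.version}"
)
```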
val partitionValue = file.partition().get(index, classOf[String])

sourceField.name() -> String.valueOf(partitionValue)
🛠️ Refactor suggestion
Handle null partition values explicitly
String.valueOf converts null to the string "null". Consider explicit null handling.
- val partitionValue = file.partition().get(index, classOf[String])
-
- sourceField.name() -> String.valueOf(partitionValue)
+ val partitionValue = file.partition().get(index, classOf[String])
+ sourceField.name() -> Option(partitionValue).getOrElse("")

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
val partitionValue = file.partition().get(index, classOf[String])
sourceField.name() -> Option(partitionValue).getOrElse("")
🤖 Prompt for AI Agents
In
spark/src/main/scala/ai/chronon/spark/iceberg/IcebergPartitionStatsExtractor.scala
around lines 67 to 69, the code uses String.valueOf on a partition value that
may be null, which converts null to the string "null". To fix this, explicitly
check if partitionValue is null and handle it accordingly, for example by
returning an empty string or a specific placeholder instead of the string
"null".
private def disableSPJ(): Unit = {
  spark.conf.set("spark.sql.sources.v2.bucketing.enabled", "false")
  spark.conf.set("spark.sql.iceberg.planning.preserve-data-grouping", "false")
}
Reset all SPJ-related configs in disableSPJ.
Only 2 configs are reset but enableSPJ sets 7 configs.
private def disableSPJ(): Unit = {
spark.conf.set("spark.sql.sources.v2.bucketing.enabled", "false")
spark.conf.set("spark.sql.iceberg.planning.preserve-data-grouping", "false")
+ spark.conf.set("spark.sql.sources.v2.bucketing.pushPartValues.enabled", "false")
+ spark.conf.set("spark.sql.requireAllClusterKeysForCoPartition", "true")
+ spark.conf.set("spark.sql.sources.v2.bucketing.partiallyClusteredDistribution.enabled", "false")
+ spark.conf.set("spark.sql.adaptive.enabled", "true")
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
private def disableSPJ(): Unit = {
  spark.conf.set("spark.sql.sources.v2.bucketing.enabled", "false")
  spark.conf.set("spark.sql.iceberg.planning.preserve-data-grouping", "false")
  spark.conf.set("spark.sql.sources.v2.bucketing.pushPartValues.enabled", "false")
  spark.conf.set("spark.sql.requireAllClusterKeysForCoPartition", "true")
  spark.conf.set("spark.sql.sources.v2.bucketing.partiallyClusteredDistribution.enabled", "false")
  spark.conf.set("spark.sql.adaptive.enabled", "true")
}
🤖 Prompt for AI Agents
In
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala
around lines 67 to 70, the disableSPJ method only resets 2 SPJ-related
configurations while enableSPJ sets 7. Update disableSPJ to reset all 7
configurations that enableSPJ sets, ensuring the test environment is fully
reverted to its original state after disabling SPJ.
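An alternative to hard-coding reset values is to capture the original settings before enabling SPJ and restore them afterwards, which keeps enable/disable symmetric even if Spark's defaults change. A sketch listing the six keys visible in this review thread (the review notes enableSPJ touches seven; the remaining key is not shown here), with illustrative helper names:

```scala
private val spjConfKeys = Seq(
  "spark.sql.sources.v2.bucketing.enabled",
  "spark.sql.sources.v2.bucketing.pushPartValues.enabled",
  "spark.sql.iceberg.planning.preserve-data-grouping",
  "spark.sql.requireAllClusterKeysForCoPartition",
  "spark.sql.sources.v2.bucketing.partiallyClusteredDistribution.enabled",
  "spark.sql.adaptive.enabled"
)

private var savedSpjConfs: Map[String, Option[String]] = Map.empty

// Call before enableSPJ(): remember whatever the session currently has.
private def saveSpjConfs(): Unit =
  savedSpjConfs = spjConfKeys.map(k => k -> spark.conf.getOption(k)).toMap

// Call instead of disableSPJ(): put every key back exactly as it was.
private def restoreSpjConfs(): Unit =
  savedSpjConfs.foreach {
    case (key, Some(value)) => spark.conf.set(key, value)
    case (key, None)        => spark.conf.unset(key)
  }
```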
namespace: String,
tableName: String): Seq[PartitionStats] = {

  val catalog = spark
take this from tableUtils
Actionable comments posted: 0
♻️ Duplicate comments (2)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (2)
20-43: Clean up temp directories in afterAll. The temp directories created for warehouse paths should be cleaned up (a sketch follows below).
66-69: Reset all SPJ-related configs in disableSPJ. Only 2 configs are reset but enableSPJ sets 7 configs.
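A minimal sketch of the afterAll cleanup suggested above, assuming the suite keeps its warehouse location in a `warehouseDir: java.nio.file.Path` field (the field name is an assumption):

```scala
import java.nio.file.{Files, Path}
import java.util.Comparator

override def afterAll(): Unit = {
  try {
    if (Files.exists(warehouseDir)) {
      // Delete children before parents by visiting paths in reverse order.
      Files.walk(warehouseDir)
        .sorted(Comparator.reverseOrder[Path]())
        .forEach((p: Path) => Files.delete(p))
    }
  } finally {
    super.afterAll()
  }
}
```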
🧹 Nitpick comments (1)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (1)
103-103: Add column names to DataFrame creation. Use explicit column names for better clarity.
- spark.createDataFrame(customerData).write.mode("append").insertInto("customers")
+ import spark.implicits._
+ customerData.toDF("customer_id", "customer_name", "region", "email", "created_date")
+   .write.mode("append").insertInto("customers")

- spark.createDataFrame(orderData).write.mode("append").insertInto("orders")
+ import spark.implicits._
+ orderData.toDF("order_id", "customer_id", "region", "amount", "order_date")
+   .write.mode("append").insertInto("orders")

Also applies to: 112-112
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- maven_install.json (1 hunks)
- spark/src/main/scala/ai/chronon/spark/iceberg/IcebergPartitionStatsExtractor.scala (1 hunks)
- spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergPartitionStatsExtractorTest.scala (1 hunks)
- spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- maven_install.json
🚧 Files skipped from review as they are similar to previous changes (2)
- spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergPartitionStatsExtractorTest.scala
- spark/src/main/scala/ai/chronon/spark/iceberg/IcebergPartitionStatsExtractor.scala
🧰 Additional context used
🧠 Learnings (1)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (6)
Learnt from: piyush-zlai
PR: #44
File: hub/app/controllers/ModelController.scala:15-18
Timestamp: 2024-10-17T19:46:42.629Z
Learning: References to MockDataService in hub/test/controllers/SearchControllerSpec.scala and hub/test/controllers/ModelControllerSpec.scala are needed for tests and should not be removed.
Learnt from: chewy-zlai
PR: #50
File: spark/src/test/scala/ai/chronon/spark/test/MockKVStore.scala:19-28
Timestamp: 2024-10-31T18:29:45.027Z
Learning: In MockKVStore located at spark/src/test/scala/ai/chronon/spark/test/MockKVStore.scala, the multiPut method is intended to be a simple implementation without dataset existence validation, duplicate validation logic elimination, or actual storage of key-value pairs for verification.
Learnt from: piyush-zlai
PR: #33
File: cloud_aws/src/test/scala/ai/chronon/integrations/aws/DynamoDBKVStoreTest.scala:175-175
Timestamp: 2024-10-07T15:09:51.567Z
Learning: Hardcoding future timestamps in tests within DynamoDBKVStoreTest.scala is acceptable when data is generated and queried within the same time range, ensuring the tests remain valid over time.
Learnt from: david-zlai
PR: #222
File: cloud_gcp/src/main/resources/additional-confs.yaml:3-3
Timestamp: 2025-01-15T21:00:35.574Z
Learning: The GCS bucket configuration spark.chronon.table.gcs.temporary_gcs_bucket: "zl-warehouse" should remain in the main additional-confs.yaml file, not in dev-specific configs.
Learnt from: piyush-zlai
PR: #33
File: cloud_aws/src/main/scala/ai/chronon/integrations/aws/DynamoDBKVStoreImpl.scala:245-260
Timestamp: 2024-10-08T16:18:45.669Z
Learning: In DynamoDBKVStoreImpl.scala, refactoring methods like extractTimedValues and extractListValues to eliminate code duplication is discouraged if it would make the code more convoluted.
Learnt from: chewy-zlai
PR: #50
File: spark/src/test/scala/ai/chronon/spark/test/MockKVStore.scala:13-16
Timestamp: 2024-10-31T18:27:44.973Z
Learning: In MockKVStore.scala, the create method should reset the dataset even if the dataset already exists.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (17)
- GitHub Check: service_commons_tests
- GitHub Check: service_tests
- GitHub Check: aggregator_tests
- GitHub Check: api_tests
- GitHub Check: cloud_aws_tests
- GitHub Check: cloud_gcp_tests
- GitHub Check: online_tests
- GitHub Check: flink_tests
- GitHub Check: streaming_tests
- GitHub Check: analyzer_tests
- GitHub Check: groupby_tests
- GitHub Check: spark_tests
- GitHub Check: join_tests
- GitHub Check: fetcher_tests
- GitHub Check: batch_tests
- GitHub Check: scala_compile_fmt_fix
- GitHub Check: enforce_triggered_workflows
🔇 Additional comments (2)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (2)
115-171: LGTM! Well-structured utility methods for plan analysis.
173-344: LGTM! Comprehensive test coverage for SPJ optimization scenarios.
| maven_artifact("org.apache.thrift:libthrift"), | ||
| maven_artifact("org.apache.hadoop:hadoop-common"), | ||
| maven_artifact("org.apache.hadoop:hadoop-client-api"), | ||
| maven_artifact_with_suffix("org.apache.iceberg:iceberg-spark-runtime-3.5"), |
create a separate build target for this.
Actionable comments posted: 1
♻️ Duplicate comments (1)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (1)
66-70: Reset all SPJ configs in disableSPJ. Only 2 configs reset but enableSPJ sets 7.
  private def disableSPJ(): Unit = {
    spark.conf.set("spark.sql.sources.v2.bucketing.enabled", "false")
    spark.conf.set("spark.sql.iceberg.planning.preserve-data-grouping", "false")
+   spark.conf.set("spark.sql.sources.v2.bucketing.pushPartValues.enabled", "false")
+   spark.conf.set("spark.sql.requireAllClusterKeysForCoPartition", "true")
+   spark.conf.set("spark.sql.sources.v2.bucketing.partiallyClusteredDistribution.enabled", "false")
+   spark.conf.set("spark.sql.adaptive.enabled", "true")
  }
🧹 Nitpick comments (1)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (1)
103-103: Use toDF for type safety. Replace deprecated createDataFrame.
- spark.createDataFrame(customerData).write.mode("append").insertInto("customers")
+ import spark.implicits._
+ customerData.toDF("customer_id", "customer_name", "region", "email", "created_date")
+   .write.mode("append").insertInto("customers")
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- spark/BUILD.bazel (4 hunks)
- spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergPartitionStatsExtractorTest.scala (1 hunks)
- spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- spark/BUILD.bazel
- spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergPartitionStatsExtractorTest.scala
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: tchow-zlai
PR: zipline-ai/chronon#393
File: cloud_gcp/BUILD.bazel:99-99
Timestamp: 2025-02-22T20:30:28.381Z
Learning: The jar file "iceberg-bigquery-catalog-1.5.2-1.0.1-beta.jar" in cloud_gcp/BUILD.bazel is a local dependency and should not be replaced with maven_artifact.
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (6)
Learnt from: piyush-zlai
PR: #44
File: hub/app/controllers/ModelController.scala:15-18
Timestamp: 2024-10-17T19:46:42.629Z
Learning: References to MockDataService in hub/test/controllers/SearchControllerSpec.scala and hub/test/controllers/ModelControllerSpec.scala are needed for tests and should not be removed.
Learnt from: piyush-zlai
PR: #33
File: cloud_aws/src/test/scala/ai/chronon/integrations/aws/DynamoDBKVStoreTest.scala:175-175
Timestamp: 2024-10-07T15:09:51.567Z
Learning: Hardcoding future timestamps in tests within DynamoDBKVStoreTest.scala is acceptable when data is generated and queried within the same time range, ensuring the tests remain valid over time.
Learnt from: chewy-zlai
PR: #50
File: spark/src/test/scala/ai/chronon/spark/test/MockKVStore.scala:19-28
Timestamp: 2024-10-31T18:29:45.027Z
Learning: In MockKVStore located at spark/src/test/scala/ai/chronon/spark/test/MockKVStore.scala, the multiPut method is intended to be a simple implementation without dataset existence validation, duplicate validation logic elimination, or actual storage of key-value pairs for verification.
Learnt from: david-zlai
PR: #222
File: cloud_gcp/src/main/resources/additional-confs.yaml:3-3
Timestamp: 2025-01-15T21:00:35.574Z
Learning: The GCS bucket configuration spark.chronon.table.gcs.temporary_gcs_bucket: "zl-warehouse" should remain in the main additional-confs.yaml file, not in dev-specific configs.
Learnt from: piyush-zlai
PR: #33
File: cloud_aws/src/main/scala/ai/chronon/integrations/aws/DynamoDBKVStoreImpl.scala:245-260
Timestamp: 2024-10-08T16:18:45.669Z
Learning: In DynamoDBKVStoreImpl.scala, refactoring methods like extractTimedValues and extractListValues to eliminate code duplication is discouraged if it would make the code more convoluted.
Learnt from: chewy-zlai
PR: #50
File: spark/src/test/scala/ai/chronon/spark/test/MockKVStore.scala:13-16
Timestamp: 2024-10-31T18:27:44.973Z
Learning: In MockKVStore.scala, the create method should reset the dataset even if the dataset already exists.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (17)
- GitHub Check: aggregator_tests
- GitHub Check: online_tests
- GitHub Check: cloud_gcp_tests
- GitHub Check: cloud_aws_tests
- GitHub Check: service_commons_tests
- GitHub Check: api_tests
- GitHub Check: flink_tests
- GitHub Check: service_tests
- GitHub Check: batch_tests
- GitHub Check: join_tests
- GitHub Check: fetcher_tests
- GitHub Check: streaming_tests
- GitHub Check: spark_tests
- GitHub Check: analyzer_tests
- GitHub Check: groupby_tests
- GitHub Check: scala_compile_fmt_fix
- GitHub Check: enforce_triggered_workflows
🔇 Additional comments (1)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (1)
173-344: Excellent test coverage! Comprehensive SPJ validation.
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- spark/src/main/scala/ai/chronon/spark/batch/iceberg/IcebergPartitionStatsExtractor.scala (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: tchow-zlai
PR: zipline-ai/chronon#393
File: cloud_gcp/BUILD.bazel:99-99
Timestamp: 2025-02-22T20:30:28.381Z
Learning: The jar file "iceberg-bigquery-catalog-1.5.2-1.0.1-beta.jar" in cloud_gcp/BUILD.bazel is a local dependency and should not be replaced with maven_artifact.
spark/src/main/scala/ai/chronon/spark/batch/iceberg/IcebergPartitionStatsExtractor.scala (1)
Learnt from: piyush-zlai
PR: #33
File: cloud_aws/src/main/scala/ai/chronon/integrations/aws/DynamoDBKVStoreImpl.scala:245-260
Timestamp: 2024-10-08T16:18:45.669Z
Learning: In DynamoDBKVStoreImpl.scala, refactoring methods like extractTimedValues and extractListValues to eliminate code duplication is discouraged if it would make the code more convoluted.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (17)
- GitHub Check: join_tests
- GitHub Check: fetcher_tests
- GitHub Check: cloud_aws_tests
- GitHub Check: groupby_tests
- GitHub Check: service_commons_tests
- GitHub Check: streaming_tests
- GitHub Check: analyzer_tests
- GitHub Check: service_tests
- GitHub Check: batch_tests
- GitHub Check: scala_compile_fmt_fix
- GitHub Check: cloud_gcp_tests
- GitHub Check: spark_tests
- GitHub Check: online_tests
- GitHub Check: flink_tests
- GitHub Check: api_tests
- GitHub Check: aggregator_tests
- GitHub Check: enforce_triggered_workflows
🔇 Additional comments (5)
spark/src/main/scala/ai/chronon/spark/batch/iceberg/IcebergPartitionStatsExtractor.scala (5)
1-11: Clean imports and package structure. Appropriate imports for Iceberg-Spark integration with proper Scala-Java interop.
12-12: Well-designed case class. Clear field names and appropriate types for partition statistics.
14-14: Good constructor design. SparkSession dependency injection enables testing and modularity.
37-75: Excellent resource management. Proper try-finally ensures manifest readers are closed even on exceptions.
79-80: Clean return pattern. Returning immutable sequence from mutable buffer accumulation.
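For readers without the diff open, the pattern praised in 37-75 and 79-80 (close each manifest reader in a finally block, accumulate into a mutable buffer, return an immutable Seq) looks roughly like the sketch below. This is an illustration built on the public Iceberg API, not the PR's exact code; the method and variable names are assumptions:

```scala
import org.apache.iceberg.{ManifestFiles, Table}
import scala.collection.JavaConverters._
import scala.collection.mutable.ArrayBuffer

// Walk every data manifest of the current snapshot and collect one entry per
// data file, closing each manifest reader even if iteration fails.
def partitionTuples(table: Table): Seq[String] = {
  val buffer = ArrayBuffer.empty[String]
  val snapshot = table.currentSnapshot()
  if (snapshot != null) {
    snapshot.dataManifests(table.io()).asScala.foreach { manifest =>
      val reader = ManifestFiles.read(manifest, table.io())
      try {
        reader.iterator().asScala.foreach(dataFile => buffer += dataFile.partition().toString)
      } finally {
        reader.close() // release the manifest reader even on exceptions
      }
    }
  }
  buffer.toSeq
}
```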
Add IcebergPartitionStatsExtractor for extracting partition statistics from Iceberg tables
Add comprehensive test suite for Iceberg partition stats functionality
Add IcebergSparkSPJTest for testing Storage Partitioned Join optimization
Add Iceberg Spark runtime dependency to build configuration
Summary
Checklist
Summary by CodeRabbit
New Features
Chores