
Conversation


@nikhil-zlai nikhil-zlai commented Jul 29, 2025

  • Add IcebergPartitionStatsExtractor for extracting partition statistics from Iceberg tables

  • Add comprehensive test suite for Iceberg partition stats functionality

  • Add IcebergSparkSPJTest for testing Storage Partitioned Join optimization

  • Add Iceberg Spark runtime dependency to build configuration

Summary

Checklist

  • Added Unit Tests
  • Covered by existing CI
  • Integration tested
  • Documentation update

Summary by CodeRabbit

  • New Features

    • Added new test suites to validate Iceberg partition statistics extraction and Storage Partitioned Join (SPJ) optimization, including coverage for error handling, join types, and execution plan analysis.
    • Introduced functionality to extract detailed partition-level statistics from Iceberg tables, including row counts and null value metrics.
  • Chores

    • Updated build configuration to include Iceberg Spark runtime dependency and improved source file pattern matching.
    • Removed duplicate Iceberg Spark runtime dependency from Maven repository configuration.
    • Updated artifact hash in dependency lock file.


🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>

coderabbitai bot commented Jul 29, 2025

Walkthrough

This change adds the Iceberg Spark runtime 3.5 dependency to the Spark module's build and test configurations, updates the batch library's source file glob to be recursive, removes a duplicate Iceberg dependency from the Maven repository, updates the Maven hash, and introduces an IcebergPartitionStatsExtractor class plus two test suites covering Iceberg partition stats extraction and SPJ optimization.

Changes

Cohort / File(s) — Change Summary

  • Spark Build Updates — spark/BUILD.bazel: Added the Iceberg Spark runtime 3.5 dependency to the main, batch, and test targets; made the batch source glob recursive.
  • Iceberg Partition Stats Test — spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergPartitionStatsExtractorTest.scala: Added a test suite for Iceberg partition stats extraction, covering error, empty, and data scenarios.
  • Iceberg SPJ Optimization Test — spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala: Added a test suite validating Iceberg SPJ optimization, including plan analysis and multiple join scenarios.
  • Maven Repository Cleanup — tools/build_rules/dependencies/maven_repository.bzl: Removed a duplicate Iceberg Spark runtime 3.5 dependency entry.
  • Maven Hash Update — maven_install.json: Updated the Maven artifacts hash to reflect the dependency changes.
  • Iceberg Partition Stats Extractor — spark/src/main/scala/ai/chronon/spark/batch/iceberg/IcebergPartitionStatsExtractor.scala: Added a class to extract partition-level stats from Iceberg tables using Spark.
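
For orientation, here is a minimal sketch of how the new extractor appears to be used, pieced together from signatures visible in the review threads below (constructor takes a SparkSession; the method takes namespace and tableName and returns Seq[PartitionStats] with a colToNullCount map). The method name extractPartitionStats is a guess; the real API may differ.

    import ai.chronon.spark.batch.iceberg.IcebergPartitionStatsExtractor
    import org.apache.spark.sql.SparkSession

    // Assumes a SparkSession already configured with the Iceberg catalog/extensions.
    val spark: SparkSession = SparkSession.builder()
      .appName("iceberg-partition-stats")
      .getOrCreate()

    // The extractor is constructed with the SparkSession (per the review notes)
    // and returns one PartitionStats per partition of the given table.
    val extractor = new IcebergPartitionStatsExtractor(spark)
    val stats = extractor.extractPartitionStats(namespace = "test_db", tableName = "customers")

    stats.foreach { s =>
      // PartitionStats exposes per-column null counts (colToNullCount in the tests).
      println(s.colToNullCount.getOrElse("name", 0L))
    }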

Sequence Diagram(s)

sequenceDiagram
    participant TestSuite as IcebergSparkSPJTest
    participant SparkSession
    participant IcebergTable
    participant DataFrame
    participant PlanAnalyzer

    TestSuite->>SparkSession: Initialize with Iceberg extensions
    TestSuite->>IcebergTable: Create test tables (customers, orders)
    TestSuite->>IcebergTable: Insert synthetic data
    TestSuite->>DataFrame: Perform join query
    DataFrame->>PlanAnalyzer: Analyze physical plan
    PlanAnalyzer-->>TestSuite: Report Exchange/shuffle presence
    TestSuite->>TestSuite: Assert plan and result correctness
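
The PlanAnalyzer step above amounts to scanning the executed physical plan for shuffle exchanges. A minimal sketch of that check, not the test's actual helper, could look like this (with AQE enabled the plan is wrapped in an adaptive node, so the check is most reliable after an action has materialized the plan):

    import org.apache.spark.sql.DataFrame
    import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec

    // True when the executed plan contains no shuffle Exchange nodes,
    // the usual signal that storage-partitioned join (SPJ) kicked in.
    def isShuffleFree(df: DataFrame): Boolean =
      df.queryExecution.executedPlan.collect { case e: ShuffleExchangeExec => e }.isEmpty

    // Usage: join the two Iceberg tables, then assert no shuffle was planned.
    // val joined = spark.table("customers").join(spark.table("orders"), Seq("customer_id", "region"))
    // assert(isShuffleFree(joined))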

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~40 minutes

Suggested reviewers

  • varant-zlai
  • piyush-zlai

Poem

In Spark's bright forge, new Iceberg flows,
With tests and builds, the coverage grows.
Dependencies trimmed, the globs expand,
SPJ plans checked by a careful hand.
Code and queries, both refined—
Reviewers, your wisdom is enshrined! 🚀



"org.json4s:json4s-core_2.12:3.7.0-M11",
"org.json4s:json4s-ast_2.12:3.7.0-M11",
"io.delta:delta-spark_2.12:3.2.0",
"org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.6.1",
nikhil-zlai (author) commented on this hunk in tools/build_rules/dependencies/maven_repository.bzl:
this was duplicated.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (4)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergPartitionStatsExtractorTest.scala (1)

122-122: Fix misleading comment

The comment says "(Charlie, NULL)" but Charlie has a non-null name. Row 4 has the NULL name.

-    southStats.colToNullCount("name") should be(1)   // One null name (Charlie, NULL)
+    southStats.colToNullCount("name") should be(1)   // One null name (row 4)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (3)

173-173: Remove df.explain() call to reduce test output noise.

This prints to console unnecessarily.

-    df.explain()

199-199: Add parentheses for clarity.

-    resultCount should be > 0L
+    resultCount should be > (0L)

286-286: Clarify column index comment.

Index 1 corresponds to the second column selected.

-      row.isNullAt(1) shouldBe true // order_id should be null
+      row.isNullAt(1) shouldBe true // order_id (index 1) should be null
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between e49cd0a and 28a50a0.

📒 Files selected for processing (5)
  • spark/BUILD.bazel (2 hunks)
  • spark/src/main/scala/ai/chronon/spark/iceberg/IcebergPartitionStatsExtractor.scala (1 hunks)
  • spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergPartitionStatsExtractorTest.scala (1 hunks)
  • spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (1 hunks)
  • tools/build_rules/dependencies/maven_repository.bzl (0 hunks)
💤 Files with no reviewable changes (1)
  • tools/build_rules/dependencies/maven_repository.bzl
🔇 Additional comments (1)
spark/BUILD.bazel (1)

46-46: Verify Spark version compatibility for Iceberg

I couldn’t locate a defined Spark version in the repo. You’ve added:
• spark/BUILD.bazel:46
• spark/BUILD.bazel:136

Ensure your project is using Spark 3.5 (or newer) to match iceberg-spark-runtime-3.5 and that there are no version conflicts.

Comment on lines 67 to 69
val partitionValue = file.partition().get(index, classOf[String])

sourceField.name() -> String.valueOf(partitionValue)
coderabbitai bot commented:
🛠️ Refactor suggestion

Handle null partition values explicitly

String.valueOf converts null to the string "null". Consider explicit null handling.

-                val partitionValue = file.partition().get(index, classOf[String])
-
-                sourceField.name() -> String.valueOf(partitionValue)
+                val partitionValue = file.partition().get(index, classOf[String])
+                sourceField.name() -> Option(partitionValue).getOrElse("")

Comment on lines +67 to +70
private def disableSPJ(): Unit = {
spark.conf.set("spark.sql.sources.v2.bucketing.enabled", "false")
spark.conf.set("spark.sql.iceberg.planning.preserve-data-grouping", "false")
}
coderabbitai bot commented:
⚠️ Potential issue

Reset all SPJ-related configs in disableSPJ.

Only 2 configs are reset but enableSPJ sets 7 configs.

   private def disableSPJ(): Unit = {
     spark.conf.set("spark.sql.sources.v2.bucketing.enabled", "false")
     spark.conf.set("spark.sql.iceberg.planning.preserve-data-grouping", "false")
+    spark.conf.set("spark.sql.sources.v2.bucketing.pushPartValues.enabled", "false")
+    spark.conf.set("spark.sql.requireAllClusterKeysForCoPartition", "true")
+    spark.conf.set("spark.sql.sources.v2.bucketing.partiallyClusteredDistribution.enabled", "false")
+    spark.conf.set("spark.sql.adaptive.enabled", "true")
   }
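
For reference, the enableSPJ counterpart presumably flips the same switches. A sketch assuming the six configs named in this thread (values inverted) plus a seventh that SPJ test setups commonly need; that last line is an assumption, not taken from this PR:

    private def enableSPJ(): Unit = {
      spark.conf.set("spark.sql.sources.v2.bucketing.enabled", "true")
      spark.conf.set("spark.sql.iceberg.planning.preserve-data-grouping", "true")
      spark.conf.set("spark.sql.sources.v2.bucketing.pushPartValues.enabled", "true")
      spark.conf.set("spark.sql.requireAllClusterKeysForCoPartition", "false")
      spark.conf.set("spark.sql.sources.v2.bucketing.partiallyClusteredDistribution.enabled", "true")
      spark.conf.set("spark.sql.adaptive.enabled", "false") // AQE can mask shuffle-free plans in assertions
      spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1") // assumption: prevent broadcast joins short-circuiting SPJ
    }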

namespace: String,
tableName: String): Seq[PartitionStats] = {

val catalog = spark
A collaborator commented:
take this from tableUtils

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (2)

20-43: Clean up temp directories in afterAll.

The temp directories created for warehouse paths should be cleaned up.


66-69: Reset all SPJ-related configs in disableSPJ.

Only 2 configs are reset but enableSPJ sets 7 configs.

🧹 Nitpick comments (1)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (1)

103-103: Add column names to DataFrame creation.

Use explicit column names for better clarity.

-    spark.createDataFrame(customerData).write.mode("append").insertInto("customers")
+    import spark.implicits._
+    customerData.toDF("customer_id", "customer_name", "region", "email", "created_date")
+      .write.mode("append").insertInto("customers")
-    spark.createDataFrame(orderData).write.mode("append").insertInto("orders")
+    import spark.implicits._
+    orderData.toDF("order_id", "customer_id", "region", "amount", "order_date")
+      .write.mode("append").insertInto("orders")

Also applies to: 112-112

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 28a50a0 and 91e7903.

📒 Files selected for processing (4)
  • maven_install.json (1 hunks)
  • spark/src/main/scala/ai/chronon/spark/iceberg/IcebergPartitionStatsExtractor.scala (1 hunks)
  • spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergPartitionStatsExtractorTest.scala (1 hunks)
  • spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • maven_install.json
🚧 Files skipped from review as they are similar to previous changes (2)
  • spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergPartitionStatsExtractorTest.scala
  • spark/src/main/scala/ai/chronon/spark/iceberg/IcebergPartitionStatsExtractor.scala
🔇 Additional comments (2)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (2)

115-171: LGTM!

Well-structured utility methods for plan analysis.


173-344: LGTM!

Comprehensive test coverage for SPJ optimization scenarios.
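
SPJ only applies when both sides of the join report compatible partitioning, which is why the test creates both tables with a shared partition column. A hypothetical DDL mirroring the test's setup follows; the column names are taken from the toDF suggestion above, while the types and the PARTITIONED BY clause are assumptions:

    // Both tables partitioned identically so Iceberg can report compatible
    // data grouping and Spark can plan the join without an Exchange.
    spark.sql("""
      CREATE TABLE customers (
        customer_id BIGINT, customer_name STRING, region STRING,
        email STRING, created_date STRING
      ) USING iceberg PARTITIONED BY (region)
    """)
    spark.sql("""
      CREATE TABLE orders (
        order_id BIGINT, customer_id BIGINT, region STRING,
        amount DOUBLE, order_date STRING
      ) USING iceberg PARTITIONED BY (region)
    """)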

maven_artifact("org.apache.thrift:libthrift"),
maven_artifact("org.apache.hadoop:hadoop-common"),
maven_artifact("org.apache.hadoop:hadoop-client-api"),
maven_artifact_with_suffix("org.apache.iceberg:iceberg-spark-runtime-3.5"),
A collaborator commented on this spark/BUILD.bazel hunk:
create a separate build target for this.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (1)

66-70: Reset all SPJ configs in disableSPJ.

Only 2 configs reset but enableSPJ sets 7.

 private def disableSPJ(): Unit = {
   spark.conf.set("spark.sql.sources.v2.bucketing.enabled", "false")
   spark.conf.set("spark.sql.iceberg.planning.preserve-data-grouping", "false")
+  spark.conf.set("spark.sql.sources.v2.bucketing.pushPartValues.enabled", "false")
+  spark.conf.set("spark.sql.requireAllClusterKeysForCoPartition", "true")
+  spark.conf.set("spark.sql.sources.v2.bucketing.partiallyClusteredDistribution.enabled", "false")
+  spark.conf.set("spark.sql.adaptive.enabled", "true")
 }
🧹 Nitpick comments (1)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (1)

103-103: Use toDF for type safety.

Prefer toDF with explicit column names over createDataFrame.

-spark.createDataFrame(customerData).write.mode("append").insertInto("customers")
+import spark.implicits._
+customerData.toDF("customer_id", "customer_name", "region", "email", "created_date")
+  .write.mode("append").insertInto("customers")
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 91e7903 and 055ab9e.

📒 Files selected for processing (3)
  • spark/BUILD.bazel (4 hunks)
  • spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergPartitionStatsExtractorTest.scala (1 hunks)
  • spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • spark/BUILD.bazel
  • spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergPartitionStatsExtractorTest.scala
🔇 Additional comments (1)
spark/src/test/scala/ai/chronon/spark/test/batch/iceberg/IcebergSparkSPJTest.scala (1)

173-344: Excellent test coverage!

Comprehensive SPJ validation.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 055ab9e and ab3c093.

📒 Files selected for processing (1)
  • spark/src/main/scala/ai/chronon/spark/batch/iceberg/IcebergPartitionStatsExtractor.scala (1 hunks)
🔇 Additional comments (5)
spark/src/main/scala/ai/chronon/spark/batch/iceberg/IcebergPartitionStatsExtractor.scala (5)

1-11: Clean imports and package structure.

Appropriate imports for Iceberg-Spark integration with proper Scala-Java interop.


12-12: Well-designed case class.

Clear field names and appropriate types for partition statistics.


14-14: Good constructor design.

SparkSession dependency injection enables testing and modularity.


37-75: Excellent resource management.

Proper try-finally ensures manifest readers are closed even on exceptions.


79-80: Clean return pattern.

Returning immutable sequence from mutable buffer accumulation.
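
The try-finally pattern praised above typically wraps Iceberg's manifest readers, which are Closeable. A sketch of that scanning loop under stated assumptions — this shows the general Iceberg 1.x API shape (dataManifests, ManifestFiles.read, recordCount), not the file's actual code, and the helper name partitionRowCounts is hypothetical:

    import org.apache.iceberg.{ManifestFiles, Table}
    import scala.collection.JavaConverters._ // Scala 2.12, matching the _2.12 artifacts

    // Accumulate row counts per partition by walking the current snapshot's
    // data manifests; each reader is closed even if iteration throws.
    def partitionRowCounts(table: Table): Map[String, Long] = {
      Option(table.currentSnapshot()).toSeq
        .flatMap(_.dataManifests(table.io()).asScala)
        .foldLeft(Map.empty[String, Long]) { (acc, manifest) =>
          val reader = ManifestFiles.read(manifest, table.io())
          try {
            reader.asScala.foldLeft(acc) { (m, dataFile) =>
              val key = dataFile.partition().toString // partition tuple rendered as a key
              m.updated(key, m.getOrElse(key, 0L) + dataFile.recordCount())
            }
          } finally reader.close()
        }
    }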

@nikhil-zlai nikhil-zlai merged commit 9bceeb0 into main Jul 30, 2025
21 checks passed
@nikhil-zlai nikhil-zlai deleted the nikhil/iceberg-stats branch July 30, 2025 17:36