
Conversation

@tchow-zlai tchow-zlai (Collaborator) commented Jul 23, 2025

Summary

Checklist

  • Added Unit Tests
  • Covered by existing CI
  • Integration tested
  • Documentation update

Summary by CodeRabbit

  • Bug Fixes
    • Improved handling of missing or incomplete metadata to prevent errors when retrieving output table information.
  • Refactor
    • Enhanced internal logic for constructing and layering metadata, ensuring more robust assignment of output table details across various planning components.
    • Updated metadata layering to explicitly include output table information for certain planning nodes.
    • Modified output table references to consistently include namespace prefixes.
    • Changed upload table suffix from "_upload" to "__upload" for consistency.
  • Chores
    • Updated imports to streamline access to extension methods and maintain consistency across modules.
  • Tests
    • Adjusted test validations to include namespace prefixes in output table names for more accurate verification.
    • Improved test setup to ensure clean state by dropping relevant tables before each run.

@coderabbitai coderabbitai bot (Contributor) commented Jul 23, 2025

Walkthrough

This change makes outputTable retrieval null-safe with fallback, updates the upload table suffix, and modifies planner metadata layering calls to explicitly pass or omit output table info. It also updates test assertions and refactors test metadata creation to use the layering utility.
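To make the null-safe retrieval concrete, here is a minimal sketch in plain Scala. The case classes, field names, and the exact fallback rule are illustrative assumptions, not the repository's actual Extensions.scala:

```scala
// Sketch only: simplified stand-ins for the real metadata types.
case class TableInfo(table: String)
case class ExecutionInfo(outputTableInfo: TableInfo)
case class MetaData(name: String,
                    outputNamespace: String,
                    executionInfo: ExecutionInfo) // may be null when deserialized

// Null-safe retrieval with a fallback to the namespace-qualified name.
def outputTable(md: MetaData): String =
  Option(md.executionInfo)                      // guard: executionInfo may be null
    .flatMap(ei => Option(ei.outputTableInfo))  // guard: table info may be missing
    .map(_.table)
    .getOrElse(s"${md.outputNamespace}.${md.name}")
```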

Changes

| File(s) | Change Summary |
| --- | --- |
| api/src/main/scala/ai/chronon/api/Extensions.scala | Made outputTable retrieval null-safe; changed upload suffix from "_upload" to "__upload". |
| api/src/main/scala/ai/chronon/api/planner/GroupByPlanner.scala | Adjusted MetaDataUtils.layer calls to add or remove the explicit output table argument; updated table name usage in nodes. |
| api/src/main/scala/ai/chronon/api/planner/MetaDataUtils.scala | Changed the layer method to assign the execution info's output table from the copied metadata when no override is given. |
| api/src/main/scala/ai/chronon/api/planner/MonolithJoinPlanner.scala | Added wildcard import; expanded MetaDataUtils.layer calls with an explicit output table; changed table name construction in dependencies. |
| api/src/main/scala/ai/chronon/api/planner/StagingQueryPlanner.scala | Added import; MetaDataUtils.layer now receives an explicit output table argument in the build plan. |
| api/src/test/scala/ai/chronon/api/test/planner/GroupByPlannerTest.scala | Updated assertions to expect outputTable-based fully qualified names instead of name. |
| api/src/test/scala/ai/chronon/api/test/planner/MonolithJoinPlannerTest.scala | Updated assertions to expect output table names instead of metadata names in dependencies. |
| spark/src/test/scala/ai/chronon/spark/test/batch/BatchNodeRunnerTest.scala | Refactored test metadata creation to use MetaDataUtils.layer instead of manual object setup. |
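The Extensions.scala suffix change is small but easy to picture; a hedged before/after sketch (the helper name is hypothetical, and the real code may derive the name differently):

```scala
// Hypothetical helper illustrating the suffix change described above.
def uploadTable(outputTable: String): String =
  s"${outputTable}__upload" // previously: s"${outputTable}_upload"
```

The double underscore matches the other node suffixes seen later in this PR, such as "__streaming" and "__uploadToKV".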

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Planner
  participant Extensions
  participant MetaDataUtils
  participant Metadata

  Planner->>Extensions: Retrieve outputTable (null-safe)
  Extensions-->>Planner: Return outputTable or fallback
  Planner->>MetaDataUtils: Call layer(..., Some(outputTable))
  MetaDataUtils->>Metadata: Copy metadata, set executionInfo.outputTable
  MetaDataUtils-->>Planner: Return layered metadata
```
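The layering step in the diagram might look roughly like the following simplified Scala, using plain case classes instead of the project's actual metadata types (all names, defaults, and the copy semantics are assumptions):

```scala
case class TableInfo(table: String)
case class ExecutionInfo(outputTableInfo: Option[TableInfo] = None)
case class MetaData(name: String,
                    outputNamespace: String,
                    executionInfo: ExecutionInfo = ExecutionInfo()) {
  def outputTable: String = s"$outputNamespace.$name"
}

// Layer node-specific fields onto a copy of the base metadata.
// With no override, fall back to the copied metadata's own output table.
def layer(base: MetaData,
          nodeName: String,
          outputTableOverride: Option[String] = None): MetaData = {
  val copied = base.copy(name = nodeName)
  val table  = outputTableOverride.getOrElse(copied.outputTable)
  copied.copy(executionInfo = ExecutionInfo(Some(TableInfo(table))))
}
```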

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~15 minutes

Poem

Output tables now safe and sound,
Through planners, their names rebound.
With nulls no longer causing fright,
Metadata flows just right.
A tidy tweak, a safer quest—
For code that passes every test!
🦾✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 838dd8c and 351c248.

📒 Files selected for processing (1)
  • spark/src/test/scala/ai/chronon/spark/test/batch/BatchNodeRunnerTest.scala (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • spark/src/test/scala/ai/chronon/spark/test/batch/BatchNodeRunnerTest.scala


@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
api/src/main/scala/ai/chronon/api/planner/MetaDataUtils.scala (1)

43-46: Comment-code mismatch on output table source.

The comment states "use the base metadata's output table" but the code now uses copy.outputTable. This appears inconsistent unless the copy and base metadata have identical output tables.
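A purely illustrative reduction of the mismatch (hypothetical values, not the real types):

```scala
// The comment promises the base metadata's table; the code reads the copy's.
case class Meta(outputTable: String)
val base = Meta("ns.base_table")
val copy = base.copy(outputTable = "ns.node_table")

val assigned = copy.outputTable // "ns.node_table" — only equals base.outputTable
                                // when the copy was not renamed
```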

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 068dd8f and 4f2c907.

📒 Files selected for processing (5)
  • api/src/main/scala/ai/chronon/api/Extensions.scala (1 hunks)
  • api/src/main/scala/ai/chronon/api/planner/GroupByPlanner.scala (3 hunks)
  • api/src/main/scala/ai/chronon/api/planner/MetaDataUtils.scala (1 hunks)
  • api/src/main/scala/ai/chronon/api/planner/MonolithJoinPlanner.scala (2 hunks)
  • api/src/main/scala/ai/chronon/api/planner/StagingQueryPlanner.scala (2 hunks)
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: tchow-zlai
PR: zipline-ai/chronon#263
File: cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala:56-57
Timestamp: 2025-01-24T23:55:40.650Z
Learning: For BigQuery table creation operations in BigQueryFormat.scala, allow exceptions to propagate directly without wrapping them in try-catch blocks, as the original BigQuery exceptions provide sufficient context.
api/src/main/scala/ai/chronon/api/planner/MonolithJoinPlanner.scala (2)

Learnt from: nikhil-zlai
PR: #70
File: service/src/main/java/ai/chronon/service/ApiProvider.java:6-6
Timestamp: 2024-12-03T04:04:33.809Z
Learning: The import scala.util.ScalaVersionSpecificCollectionsConverter in service/src/main/java/ai/chronon/service/ApiProvider.java is correct and should not be flagged in future reviews.

Learnt from: nikhil-zlai
PR: #50
File: spark/src/main/scala/ai/chronon/spark/stats/drift/SummaryUploader.scala:19-47
Timestamp: 2024-11-03T14:51:40.825Z
Learning: In Scala, the grouped method on collections returns an iterator, allowing for efficient batch processing without accumulating all records in memory.

api/src/main/scala/ai/chronon/api/planner/StagingQueryPlanner.scala (1)

Learnt from: nikhil-zlai
PR: #70
File: service/src/main/java/ai/chronon/service/ApiProvider.java:6-6
Timestamp: 2024-12-03T04:04:33.809Z
Learning: The import scala.util.ScalaVersionSpecificCollectionsConverter in service/src/main/java/ai/chronon/service/ApiProvider.java is correct and should not be flagged in future reviews.

api/src/main/scala/ai/chronon/api/planner/GroupByPlanner.scala (1)

Learnt from: nikhil-zlai
PR: #50
File: spark/src/main/scala/ai/chronon/spark/stats/drift/SummaryUploader.scala:19-47
Timestamp: 2024-11-03T14:51:40.825Z
Learning: In Scala, the grouped method on collections returns an iterator, allowing for efficient batch processing without accumulating all records in memory.

api/src/main/scala/ai/chronon/api/Extensions.scala (1)

Learnt from: tchow-zlai
PR: #263
File: cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala:56-57
Timestamp: 2025-01-24T23:55:40.650Z
Learning: For BigQuery table creation operations in BigQueryFormat.scala, allow exceptions to propagate directly without wrapping them in try-catch blocks, as the original BigQuery exceptions provide sufficient context.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
  • GitHub Check: online_tests
  • GitHub Check: service_commons_tests
  • GitHub Check: cloud_aws_tests
  • GitHub Check: cloud_gcp_tests
  • GitHub Check: service_tests
  • GitHub Check: api_tests
  • GitHub Check: flink_tests
  • GitHub Check: aggregator_tests
  • GitHub Check: join_tests
  • GitHub Check: spark_tests
  • GitHub Check: streaming_tests
  • GitHub Check: analyzer_tests
  • GitHub Check: fetcher_tests
  • GitHub Check: batch_tests
  • GitHub Check: scala_compile_fmt_fix
  • GitHub Check: enforce_triggered_workflows
🔇 Additional comments (7)
api/src/main/scala/ai/chronon/api/planner/MonolithJoinPlanner.scala (2)

3-4: Import cleanup looks good.

Wildcard import simplifies the imports while maintaining functionality.


35-40: Consistent output table propagation.

The explicit passing of output table metadata to MetaDataUtils.layer aligns with the coordinated updates across planners.

api/src/main/scala/ai/chronon/api/planner/StagingQueryPlanner.scala (2)

4-4: Import addition supports enhanced metadata handling.

Wildcard import enables access to the updated Extensions functionality.


25-26: Consistent parameter passing to metadata layer.

Adding an explicit output table parameter maintains consistency with the other planner updates.

api/src/main/scala/ai/chronon/api/planner/GroupByPlanner.scala (3)

36-37: Explicit output table for backfill node.

Correctly passes output table metadata for backfill operations.


83-83: Simplified parameter list for uploadToKV.

Removing the output table parameter suggests this node type has different metadata requirements.


108-108: Consistent with uploadToKV approach.

The streaming node follows the same pattern as uploadToKV for metadata layering.
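Read together, these three notes describe two call styles: the backfill node passes the output table explicitly, while uploadToKV and streaming omit it and rely on the fallback inside layer. Reusing the simplified layer sketch from the walkthrough above, the contrast might look like this (node names are illustrative):

```scala
val base = MetaData(name = "my_group_by", outputNamespace = "ns")

// Backfill: explicit override, so the node targets the group-by's own table.
val backfillMeta = layer(base, "my_group_by__backfill",
                         outputTableOverride = Some(base.outputTable))

// UploadToKV / streaming: no override, so the fallback derives the table
// from the copied, node-named metadata instead.
val uploadMeta = layer(base, "my_group_by__uploadToKV")
```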

@tchow-zlai tchow-zlai force-pushed the tchow/fix-output-table-names branch from bebc4e5 to c1b1658 on July 23, 2025 at 02:13
```diff
  val streamingOutputTableInfo = streamingNode.metaData.executionInfo.outputTableInfo
  streamingOutputTableInfo should not be null
- streamingOutputTableInfo.table should equal(gb.metaData.name + "__streaming")
+ streamingOutputTableInfo.table should equal(gb.metaData.outputNamespace + "." + gb.metaData.name + "__streaming")
```

Review comment (Contributor) — suggested change:

```suggestion
streamingOutputTableInfo.table should equal(gb.metaData.outputTable + "__streaming")
```

```diff
  val outputTableInfo = uploadToKVNode.metaData.executionInfo.outputTableInfo
  outputTableInfo should not be null
- outputTableInfo.table should equal(gb.metaData.name + "__uploadToKV")
+ outputTableInfo.table should equal(gb.metaData.outputNamespace + "." + gb.metaData.name + "__uploadToKV")
```

Review comment (Contributor) — suggested change:

```suggestion
outputTableInfo.table should equal(gb.metaData.outputTable + "__uploadToKV")
```
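Both suggestions replace the hand-built outputNamespace + "." + name concatenation with the outputTable accessor. Assuming outputTable is defined as the namespace-qualified name (as in the earlier sketch), the two spellings produce the same string, and the accessor keeps the tests aligned if the naming rule ever changes:

```scala
// Assumed definition: outputTable == s"$outputNamespace.$name"
case class Meta(name: String, outputNamespace: String) {
  def outputTable: String = s"$outputNamespace.$name"
}
val md = Meta("my_group_by", "ns")

val handBuilt   = md.outputNamespace + "." + md.name + "__streaming"
val viaAccessor = md.outputTable + "__streaming"
assert(handBuilt == viaAccessor) // both yield "ns.my_group_by__streaming"
```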

@tchow-zlai tchow-zlai force-pushed the tchow/fix-output-table-names branch from 2048539 to a2d365b on July 24, 2025 at 23:28
@tchow-zlai tchow-zlai merged commit df6e2fd into main Jul 25, 2025
20 checks passed
@tchow-zlai tchow-zlai deleted the tchow/fix-output-table-names branch July 25, 2025 17:37