Conversation

@david-zlai (Contributor) commented on Jul 25, 2025

Summary

^^^ (see PR title)

Want to be able to set this to a user-namespaced table or a dev table for testing.

Checklist

  • Added Unit Tests
  • Covered by existing CI
  • Integration tested
  • Documentation update

Summary by CodeRabbit

  • New Features

    • Added a new command-line option to specify the name of the table used for partition tracking in the key-value store, allowing for dynamic configuration.
    • Introduced a new required option for partition tracking table name in batch and upload runners, with a default value provided.
  • Bug Fixes

    • Updated tests to align with the new configurable partition tracking table name.
  • Chores

    • Added a new constant for the command-line argument keyword related to partition tracking table name.

@coderabbitai bot (Contributor) commented on Jul 25, 2025

Walkthrough

This change renames a constant in the NodeRunner trait and propagates this update throughout the codebase. It introduces a configurable command-line option for specifying the table partitions dataset name in batch and KV upload runners, updating method signatures and tests accordingly. No control flow or core logic is altered.

Changes

File(s) | Change Summary
api/src/main/scala/ai/chronon/api/planner/NodeRunner.scala | Renamed constant TablePartitionsDataset to DefaultTablePartitionsDataset.
spark/src/main/scala/ai/chronon/spark/batch/BatchNodeRunner.scala | Added required CLI option for dataset name; updated method signatures to accept and use this parameter.
spark/src/main/scala/ai/chronon/spark/kv_store/KVUploadNodeRunner.scala | Added required CLI option for dataset name with default; minor related code comments and imports.
spark/src/main/scala/ai/chronon/spark/submission/JobSubmitter.scala | Added constant for CLI argument keyword for the dataset name.
spark/src/test/scala/ai/chronon/spark/test/batch/BatchNodeRunnerTest.scala | Updated tests to explicitly use the new dataset constant and parameter.
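
For illustration, a minimal sketch of the rename is shown below; only the two constant names come from the table above, while the literal value and the enclosing object layout are assumptions:

// Minimal sketch, assuming the constant sits on a NodeRunner object;
// the literal value "TABLE_PARTITIONS" is illustrative only.
object NodeRunner {
  // Formerly TablePartitionsDataset; the "Default" prefix signals a
  // fallback that the new CLI flag can override at runtime.
  val DefaultTablePartitionsDataset: String = "TABLE_PARTITIONS"
}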

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant CLI
    participant BatchNodeRunner
    participant KVStore

    User->>CLI: Provide --table-partitions-dataset argument
    CLI->>BatchNodeRunner: Parse and pass dataset name
    BatchNodeRunner->>KVStore: Use specified dataset name for partition tracking
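As a rough sketch of how such a flag could be wired up, assuming a Scallop-based argument parser (the actual runners may use a different CLI library, and the default literal below is illustrative):

import org.rogach.scallop.{ScallopConf, ScallopOption}

// Stand-in for NodeRunner.DefaultTablePartitionsDataset; literal assumed.
object Defaults { val DefaultTablePartitionsDataset = "TABLE_PARTITIONS" }

// Scallop derives the flag name --table-partitions-dataset from the val name.
class RunnerArgs(arguments: Seq[String]) extends ScallopConf(arguments) {
  val tablePartitionsDataset: ScallopOption[String] =
    opt[String](required = true,
                default = Some(Defaults.DefaultTablePartitionsDataset),
                descr = "KV-store dataset name used for partition tracking")
  verify()
}

object RunnerArgsDemo extends App {
  // Point partition tracking at a user-namespaced dev table for testing:
  val parsed = new RunnerArgs(Seq("--table-partitions-dataset", "dev_ns.table_partitions"))
  println(parsed.tablePartitionsDataset()) // dev_ns.table_partitions
}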

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes


Suggested reviewers

  • piyush-zlai
  • tchow-zlai

Poem

A constant renamed, a flag in the breeze,
Configurable datasets, passed with ease.
Command lines now smarter, tests kept in line,
No logic upended—just naming refined.
🎉 Code marches onward, as clean as can be!



@david-zlai requested a review from tchow-zlai on July 25, 2025 at 22:31
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
spark/src/main/scala/ai/chronon/spark/kv_store/KVUploadNodeRunner.scala (1)

85-85: Remove unused commented code.

This commented override method serves no purpose and should be cleaned up.

-  //  override def tablePartitionsDataset(): String = tablePartitionsDataset
spark/src/main/scala/ai/chronon/spark/batch/BatchNodeRunner.scala (1)

301-301: Remove unused commented code.

This commented override method serves no purpose and should be cleaned up.

-  //  override def tablePartitionsDataset(): String = tablePartitionsDataset
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR, between commits 5f6fef0 and a4aea2d.

📒 Files selected for processing (5)
  • api/src/main/scala/ai/chronon/api/planner/NodeRunner.scala (1 hunks)
  • spark/src/main/scala/ai/chronon/spark/batch/BatchNodeRunner.scala (7 hunks)
  • spark/src/main/scala/ai/chronon/spark/kv_store/KVUploadNodeRunner.scala (3 hunks)
  • spark/src/main/scala/ai/chronon/spark/submission/JobSubmitter.scala (1 hunks)
  • spark/src/test/scala/ai/chronon/spark/test/batch/BatchNodeRunnerTest.scala (8 hunks)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (17)
  • GitHub Check: cloud_aws_tests
  • GitHub Check: scala_compile_fmt_fix
  • GitHub Check: cloud_gcp_tests
  • GitHub Check: api_tests
  • GitHub Check: service_commons_tests
  • GitHub Check: service_tests
  • GitHub Check: online_tests
  • GitHub Check: flink_tests
  • GitHub Check: aggregator_tests
  • GitHub Check: batch_tests
  • GitHub Check: streaming_tests
  • GitHub Check: join_tests
  • GitHub Check: fetcher_tests
  • GitHub Check: analyzer_tests
  • GitHub Check: groupby_tests
  • GitHub Check: spark_tests
  • GitHub Check: enforce_triggered_workflows
🔇 Additional comments (18)
api/src/main/scala/ai/chronon/api/planner/NodeRunner.scala (1)

8-8: LGTM! Clean rename to indicate default value.

The "Default" prefix clarifies this is a fallback value for the configurable dataset name.

spark/src/main/scala/ai/chronon/spark/submission/JobSubmitter.scala (1)

223-223: LGTM! Consistent naming for the new command-line argument.

Follows existing patterns in the constants object.
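
For reference, a sketch of what the new keyword constant might look like; the object and constant names below are assumptions, only the flag itself appears in this PR:

// Hypothetical entry in the submitter's constants object; names assumed.
object JobSubmitterConstants {
  val TablePartitionsDatasetArgKeyword: String = "--table-partitions-dataset"
}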

spark/src/test/scala/ai/chronon/spark/test/batch/BatchNodeRunnerTest.scala (8)

25-25: LGTM! Proper import for the renamed constant.


214-214: LGTM! Test updated for new parameter.

Correctly passes the dataset name to match the updated API.


231-231: LGTM! Assertion uses constant instead of magic string.

More maintainable than hardcoded values.


242-242: LGTM! Consistent parameter passing in test.


275-275: LGTM! Test coverage maintained with new parameter.


303-304: LGTM! Multi-line parameter passing handled correctly.


338-338: LGTM! All test scenarios updated consistently.


420-420: LGTM! Complex test scenario properly updated.

The partition format translation test correctly uses the new parameter.

spark/src/main/scala/ai/chronon/spark/kv_store/KVUploadNodeRunner.scala (2)

10-10: LGTM! Proper import for the renamed constant.


97-99: LGTM! Command-line option properly configured.

Good default value and clear description.

spark/src/main/scala/ai/chronon/spark/batch/BatchNodeRunner.scala (6)

9-9: LGTM! Proper import for the renamed constant.


43-45: LGTM! Command-line option properly configured.

Good default value and clear description for the new parameter.


185-189: LGTM! Method signature updated correctly.

The new parameter is properly added to enable configurable dataset names.


230-230: LGTM! Parameter used instead of hardcoded constant.

Correctly uses the passed parameter for input table partitions.


262-262: LGTM! Consistent parameter usage.

Output table partitions also use the configurable dataset name.


282-286: LGTM! Main method properly updated.

Correctly passes the new parameter from parsed arguments.
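
Putting the pieces together, a hypothetical wiring of main is sketched below; everything except the flag name and the idea of threading the dataset name through is illustrative:

// Hypothetical wiring only; the real BatchNodeRunner.main is more involved.
object BatchNodeRunnerSketch {
  val DefaultTablePartitionsDataset = "TABLE_PARTITIONS" // assumed literal

  // Illustrative stand-in for the updated runFromArgs described above.
  def runFromArgs(confPath: String, tablePartitionsDataset: String): Unit =
    println(s"tracking partitions for $confPath in dataset $tablePartitionsDataset")

  def main(args: Array[String]): Unit = {
    // Naive flag extraction, standing in for the real CLI parser:
    val dataset = args.sliding(2).collectFirst {
      case Array("--table-partitions-dataset", v) => v
    }.getOrElse(DefaultTablePartitionsDataset)

    runFromArgs(confPath = "path/to/conf", tablePartitionsDataset = dataset)
  }
}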

@david-zlai merged commit 527e421 into main on Jul 25, 2025
20 checks passed
@david-zlai deleted the david/ZIP-800 branch on July 25, 2025 at 23:08
3 participants