Conversation

tchow-zlai (Collaborator) commented on Jan 23, 2025

Summary

Checklist

  • Added Unit Tests
  • Covered by existing CI
  • Integration tested
  • Documentation update

Summary by CodeRabbit

  • New Features

    • Enhanced BigQuery table creation functionality with improved schema and partitioning support.
    • Streamlined table creation process in Spark's TableUtils.
  • Refactor

    • Simplified table existence checking logic.
    • Consolidated import statements for better readability.
    • Removed unused import in BigQuery catalog test.
    • Updated import statement in GcpFormatProviderTest for better integration with Spark BigQuery connector.
  • Bug Fixes

    • Improved error handling for table creation scenarios.

coderabbitai bot (Contributor) commented on Jan 23, 2025

Walkthrough

This pull request enhances the BigQuery table creation functionality across multiple files. The BigQueryFormat class now implements a robust createTable method that supports schema conversion, partitioning, and table creation in BigQuery. Simultaneously, the TableUtils class streamlines its table creation logic by simplifying the existence check and creation process. A minor test file modification removes an unused import from the BigQuery catalog test.
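As a rough illustration of the createTable flow the walkthrough describes, here is a minimal sketch using the BigQuery Java client; names and signatures are illustrative, not the PR's actual code:

```scala
import com.google.cloud.bigquery.{
  BigQuery,
  BigQueryOptions,
  Schema,
  StandardTableDefinition,
  TableId,
  TableInfo,
  TimePartitioning
}

// Illustrative sketch: build a day-partitioned table definition from an
// already-converted BigQuery schema and issue the create through the client.
// The PR's real method lives in BigQueryFormat.scala and may differ.
def createBigQueryTable(tableId: TableId, schema: Schema, partitionColumn: String): TableInfo = {
  val bigQuery: BigQuery = BigQueryOptions.getDefaultInstance.getService
  val partitioning = TimePartitioning
    .newBuilder(TimePartitioning.Type.DAY)
    .setField(partitionColumn) // assumes a single partition column
    .build()
  val definition = StandardTableDefinition
    .newBuilder()
    .setSchema(schema)
    .setTimePartitioning(partitioning)
    .build()
  bigQuery.create(TableInfo.newBuilder(tableId, definition).build())
}
```

Per the walkthrough, the real method also converts the Spark schema via the connector before building the definition.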

Changes

| File | Change Summary |
| --- | --- |
| cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala | Implemented createTable method with schema conversion, partitioning logic, and BigQuery table creation support |
| cloud_gcp/src/test/scala/ai/chronon/integrations/cloud_gcp/BigQueryCatalogTest.scala | Removed GoogleCloudStorageFileSystem import |
| spark/src/main/scala/ai/chronon/spark/TableUtils.scala | Simplified table existence check and creation logic |
| cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala | Streamlined import statements and refined optional value handling |
| cloud_gcp/src/test/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProviderTest.scala | Modified import statement for BigQuery library |

Possibly related PRs

Suggested reviewers

  • nikhil-zlai
  • piyush-zlai
  • david-zlai

Poem

🚀 In clouds of data, tables take flight,
BigQuery's schema, now burning bright,
Partitions dance, imports take wing,
Code transforms with each new string!
Scala's magic makes data sing! 🌟

Warning

Review ran into problems

🔥 Problems

GitHub Actions: Resource not accessible by integration - https://docs.github.com/rest/actions/workflow-runs#list-workflow-runs-for-a-repository.

Please grant the required permissions to the CodeRabbit GitHub App under the organization or repository settings.



tchow-zlai force-pushed the tchow/bq-createtable branch 2 times, most recently from 0e1fa49 to 0e284cc, on January 23, 2025 18:07
tchow-zlai force-pushed the tchow/bq-createtable branch from 0e284cc to 44476e7 on January 23, 2025 20:10
tchow-zlai force-pushed the tchow/format-ctas branch 2 times, most recently from 8db6f82 to b8da826, on January 23, 2025 20:18
tchow-zlai force-pushed the tchow/bq-createtable branch from 44476e7 to ff2c7b7 on January 23, 2025 20:18
tchow-zlai force-pushed the tchow/bq-createtable branch 2 times, most recently from 7cdc7c1 to bc7933b, on January 23, 2025 21:57
```scala
import ai.chronon.spark.format.Format
import com.google.cloud.bigquery.BigQuery
import com.google.cloud.bigquery.connector.common.BigQueryUtil
import com.google.cloud.spark.bigquery.SchemaConverters
```
tchow-zlai (Collaborator, Author) commented:

Had to pull in all the shaded classes from the BigQuery connector in order to leverage some of its utils.
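For illustration, this is the kind of utility being leveraged; a minimal sketch assuming the connector's BigQueryUtil.parseTableId behaves as in the open-source spark-bigquery-connector (the table name is a made-up example):

```scala
import com.google.cloud.bigquery.TableId
import com.google.cloud.bigquery.connector.common.BigQueryUtil

// Parse a fully qualified "project.dataset.table" name into its parts.
// Project/dataset may be absent if the caller passes a short name, so
// downstream code should validate them.
val tableId: TableId = BigQueryUtil.parseTableId("my-project.my_dataset.my_table")
println(s"${tableId.getProject} / ${tableId.getDataset} / ${tableId.getTable}")
```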

Base automatically changed from tchow/format-ctas to main January 24, 2025 23:05
tchow-zlai and others added 6 commits January 24, 2025 15:06
Co-authored-by: Thomas Chow <[email protected]>
tchow-zlai force-pushed the tchow/bq-createtable branch from 76466ad to b3d765b on January 24, 2025 23:06
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1)

59-59: Inner function usage.
This call is straightforward. Keep it if you see reusability; otherwise consider inlining.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between 378e1eb and b3d765b.

📒 Files selected for processing (3)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (2 hunks)
  • cloud_gcp/src/test/scala/ai/chronon/integrations/cloud_gcp/BigQueryCatalogTest.scala (0 hunks)
  • spark/src/main/scala/ai/chronon/spark/TableUtils.scala (1 hunks)
💤 Files with no reviewable changes (1)
  • cloud_gcp/src/test/scala/ai/chronon/integrations/cloud_gcp/BigQueryCatalogTest.scala
⏰ Context from checks skipped due to timeout of 90000ms (6)
  • GitHub Check: table_utils_delta_format_spark_tests
  • GitHub Check: other_spark_tests
  • GitHub Check: join_spark_tests
  • GitHub Check: fetcher_spark_tests
  • GitHub Check: mutation_spark_tests
  • GitHub Check: scala_compile_fmt_fix
🔇 Additional comments (3)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (2)

5-10: Imports appear correct.


30-57: Handle multi-col partitions with a typed exception.
If multiple partition columns are detected, consider throwing a more descriptive error rather than using assert to avoid abrupt termination.

Validate tableId parse.
Ensure shadedTableId.getProject or .getDataset isn't empty if the user doesn't specify them.
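A sketch of what a more descriptive, typed failure could look like (hypothetical names, not the committed code):

```scala
// Sketch: replace a bare assert with a typed, descriptive failure when more
// than one partition column is supplied.
case class UnsupportedPartitionSpecException(columns: Seq[String])
    extends IllegalArgumentException(
      s"BigQuery time partitioning supports a single column; got ${columns.size}: ${columns.mkString(", ")}")

def singlePartitionColumn(columns: Seq[String]): String = columns match {
  case Seq(only) => only
  case other     => throw UnsupportedPartitionSpecException(other)
}
```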

spark/src/main/scala/ai/chronon/spark/TableUtils.scala (1)

288-295: Race condition possible.
If another process creates the table after tableReachable but before the createTable call, a conflict might occur.
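A common mitigation, sketched under the assumption that the BigQuery client surfaces the conflict as a BigQueryException carrying HTTP code 409:

```scala
import com.google.cloud.bigquery.{BigQuery, BigQueryException, TableInfo}

// Sketch: tolerate a concurrent creator by treating HTTP 409 (already exists)
// as success; any other failure still propagates.
def createTableIdempotently(bigQuery: BigQuery, info: TableInfo): Unit =
  try { bigQuery.create(info); () }
  catch {
    case e: BigQueryException if e.getCode == 409 => () // lost the race; table now exists
  }
```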

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1)

5-10: Consider documenting repackaged imports usage.

Document why repackaged BigQuery classes are used to help future maintainers.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between 6e09aec and d2f56f0.

📒 Files selected for processing (1)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (2 hunks)
🧰 Additional context used
📓 Learnings (1)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (2)
Learnt from: tchow-zlai
PR: zipline-ai/chronon#263
File: cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala:29-60
Timestamp: 2025-01-24T23:55:30.256Z
Learning: In BigQuery integration, table existence check is performed outside the BigQueryFormat.createTable method, at a higher level in TableUtils.createTable.
Learnt from: tchow-zlai
PR: zipline-ai/chronon#263
File: cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala:56-57
Timestamp: 2025-01-24T23:55:40.650Z
Learning: For BigQuery table creation operations in BigQueryFormat.scala, allow exceptions to propagate directly without wrapping them in try-catch blocks, as the original BigQuery exceptions provide sufficient context.
⏰ Context from checks skipped due to timeout of 90000ms (6)
  • GitHub Check: join_spark_tests
  • GitHub Check: fetcher_spark_tests
  • GitHub Check: other_spark_tests
  • GitHub Check: mutation_spark_tests
  • GitHub Check: table_utils_delta_format_spark_tests
  • GitHub Check: scala_compile_fmt_fix
🔇 Additional comments (2)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (2)

38-39: Consider customizing schema conversion.

Default schema conversion config might not handle all edge cases.
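For reference, the default conversion path looks roughly like this; a sketch only, since the SchemaConverters entry point varies across connector versions:

```scala
import com.google.cloud.spark.bigquery.{SchemaConverters, SchemaConvertersConfiguration}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Sketch: the default configuration handles common Spark types; decimals,
// nested nullability, or map types may warrant a tuned configuration.
val sparkSchema = StructType(Seq(StructField("ds", StringType, nullable = false)))
val bqSchema = SchemaConverters
  .from(SchemaConvertersConfiguration.createDefault())
  .toBigQuerySchema(sparkSchema)
```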


29-59: Implementation looks solid.

Clean implementation with proper partition handling and schema conversion.

tchow-zlai merged commit c0f1645 into main on Jan 25, 2025
9 checks passed
tchow-zlai deleted the tchow/bq-createtable branch on January 25, 2025 07:42
tchow-zlai added a commit that referenced this pull request Jan 27, 2025
## Summary
- With #263 we control table creation ourselves. We don't need to rely on indirect writes to do the table creation (and partitioning) for us; we simply use the storage API to write directly into the table we created. This should be much more performant than indirect writes, since we don't need to stage data and then load it as a temp BQ table, and it uses the BigQuery Storage API directly.
- Remove configs that are used only for indirect writes

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update
## Summary by CodeRabbit

## Release Notes

- **Improvements**
  - Enhanced BigQuery data writing process with more precise configuration options.
  - Simplified table creation and partition insertion logic.
  - Improved handling of DataFrame column arrangements during data operations.

- **Changes**
  - Updated BigQuery write method to use a direct writing approach.
  - Introduced a new option to prevent table creation if it does not exist.
  - Modified table creation process to be more format-aware.
  - Streamlined partition insertion mechanism.

These updates improve data management and writing efficiency in cloud data processing workflows.

---------

Co-authored-by: Thomas Chow <[email protected]>
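To illustrate the direct-write setup this commit describes, a hedged sketch using the open-source Spark BigQuery connector's writeMethod option (assumed from the connector's public docs, not taken from this diff; the table name is a made-up example):

```scala
import org.apache.spark.sql.DataFrame

// Sketch: write straight into the pre-created table over the Storage Write API
// instead of staging files in GCS and issuing a load job (indirect writes).
def writeDirect(df: DataFrame): Unit =
  df.write
    .format("bigquery")
    .option("writeMethod", "direct") // use the BigQuery Storage Write API
    .mode("append")
    .save("my-project.my_dataset.my_table")
```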
nikhil-zlai pushed a commit that referenced this pull request Feb 4, 2025
## Summary


- https://app.asana.com/0/1208949807589885/1209206040434612/f 
- Support explicit BigQuery table creation.
## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update
---
- To see the specific tasks where the Asana app for GitHub is being used, see below:
  - https://app.asana.com/0/0/1209206040434612
## Summary by CodeRabbit

- **New Features**
  - Enhanced BigQuery table creation functionality with improved schema and partitioning support.
  - Streamlined table creation process in Spark's TableUtils.

- **Refactor**
  - Simplified table existence checking logic.
  - Consolidated import statements for better readability.
  - Removed unused import in BigQuery catalog test.
  - Updated import statement in GcpFormatProviderTest for better integration with Spark BigQuery connector.

- **Bug Fixes**
  - Improved error handling for table creation scenarios.

---------

Co-authored-by: Thomas Chow <[email protected]>
nikhil-zlai pushed a commit that referenced this pull request Feb 4, 2025
coderabbitai bot mentioned this pull request Apr 18, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 25, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 25, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 29, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 29, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 16, 2025
chewy-zlai pushed a commit that referenced this pull request May 16, 2025