[BEAM-9650] Adding support for ReadAll from BigQuery transform #13170
Conversation
Run Python 3.8 PostCommit
Run Python_PVR_Flink PreCommit
Force-pushed from 0fc4fd6 to 814e241
Run Portable_Python PreCommit
Run Python PreCommit
Force-pushed from db8d9e7 to 0842eaa
| 'PeriodicImpulse' >> PeriodicImpulse(
    first_timestamp, last_timestamp, interval, True)
| 'MapToReadRequest' >> beam.Map(
    lambda x: BigQueryReadRequest(table='dataset.table'))
I see a few different names here: ReadAllFromBigQuery, ReadFromBigQueryRequest, and BigQueryReadRequest. I'm a bit confused by the differences and interaction between these classes.
If ReadFromBigQueryRequest is something users interact with, it should not be in an internal file (e.g. bigquery_read_internal.py). Is there a need to expose it at all? Instead, could it just be:
side_input = (
p
| 'PeriodicImpulse' >> PeriodicImpulse(...)
| beam.io.ReadAllFromBigQuery(table=...))
Though this would make the initial example, where several requests are included in a single ReadAll, impossible. Is that something that needs to be special-cased, as opposed to, say, using a Flatten?
Regarding names: yes, that's a little confusing. The only names should be:
- ReadFromBigQueryRequest - an input element for ReadAllFromBigQuery; it represents a query or a table to be read (with a few other parameters).
- ReadAllFromBigQuery - the transform that issues the BQ reads.
All other names are misnamings in the configuration.
Regarding your example - that's interesting. I recognize that what you show would be the most common use case (the same query/table always, rather than varying), with the one exception that some queries could be slightly updated over time (e.g. reading only partitions from the last few days).
On the other hand, this would create two ways of using the transform and complicate the constructor (all of the parameters in ReadFromBigQueryRequest would need to be available in the constructor).
Users could build this functionality themselves, though. My feeling is that it's better to build a transform that is more composable, and to provide an example for users who want the functionality you propose. WDYT?
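The composable pattern being discussed can be sketched with plain Python stand-ins. The dataclass below is a hypothetical, simplified version of ReadFromBigQueryRequest (the real class takes more parameters), and impulse_to_request is the kind of function a user would pass to beam.Map after a PeriodicImpulse:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ReadRequest:
    """Hypothetical, simplified stand-in for ReadFromBigQueryRequest."""
    query: Optional[str] = None
    table: Optional[str] = None

    def __post_init__(self):
        # Exactly one of `query` or `table` must be provided.
        if (self.query is None) == (self.table is None):
            raise ValueError('Specify exactly one of query or table.')


def impulse_to_request(timestamp):
    # Each periodic impulse becomes a fresh read request; the query can
    # vary with time, e.g. to target only the most recent partitions.
    return ReadRequest(
        query='SELECT * FROM dataset.table WHERE ts > %d' % timestamp)
```

Because the per-element function owns the request construction, no extra constructor parameters on the transform are needed to support time-varying queries.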
My feeling is that it's better to build a transform that is more composable, and provide an example for users trying to build the functionality you propose. WDYT?
+1.
sources_to_read, cleanup_locations = (
    pcoll
    | beam.ParDo(
        # TODO(pabloem): Make sure we have all necessary args.
This should be resolved or attributed to a Jira issue.
Removed. Thanks Tyson!
    define project and dataset (ex.: ``'PROJECT:DATASET.TABLE'``).
:param flatten_results:
    Flattens all nested and repeated fields in the query results.
    The default value is :data:`True`.
The default here is False.
Oops, good catch. Thanks Tyson!
if element.query is not None:
    bq.clean_up_temporary_dataset(self._get_project())
Can this be moved up to the other if element.query condition? That may allow putting the yield into the for loop above, getting rid of the intermediate split_result and avoiding the additional iteration.
That's not possible in this case. We issue a table export in export_files, and only after that finishes can we delete the dataset. But I've moved the yield above.
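The ordering constraint being described can be sketched with a plain generator. Names here are illustrative: export_fn and cleanup_fn stand in for the real export and clean_up_temporary_dataset calls, and the element is anything with a query attribute:

```python
def process(element, export_fn, cleanup_fn):
    # Sketch of the DoFn body discussed above. Exported files can be
    # yielded as soon as the export produces them, but the temporary
    # dataset created for a query may only be deleted after the export
    # has finished, so the cleanup call must stay after the loop.
    for source in export_fn(element):
        yield source
    if element.query is not None:
        cleanup_fn()
```

This keeps the yield inside the loop (no intermediate list, no second iteration) while still guaranteeing cleanup runs last, and only for query-based reads.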
self.table = table
self.validate()

# We use this internal object ID to generate BigQuery export directories.
May be worth noting that there is also a UUID involved. I was worried about collisions until I read on a bit further.
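The collision-avoidance scheme being discussed could look roughly like the sketch below. This is an assumption based on the comment above, not the actual implementation: the directory layout, names, and parameters are all hypothetical.

```python
import uuid


def export_directory(gcs_base, internal_object_id):
    # The internal object ID scopes export directories to one transform
    # instance; the random UUID component is what prevents collisions
    # across retries or concurrently running pipelines.
    return '%s/%s/%s' % (gcs_base, internal_object_id, uuid.uuid4().hex)
```

Two calls with the same base path and object ID still produce distinct directories, which is the property the reviewer was checking for.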
I've added this to the Pydoc of the transform.
Force-pushed from da42aba to 51d4243
Force-pushed from d470c4f to 06c774c
This adds a DoFn to perform BigQuery exports and pass them downstream to be consumed. Eventually, we'll move ReadFromBigQuery to use this as part of its implementation, but for now, we'll leave them separate.

r: @chamikaramj
r: @tysonjh
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
- Choose reviewer(s) and mention them in a comment (R: @username).
- Format the pull request title like [BEAM-XXX] Fixes bug in ApproximateQuantiles, where you replace BEAM-XXX with the appropriate JIRA issue, if applicable. This will automatically link the pull request to the issue.
- Update CHANGES.md with noteworthy changes.

See the Contributor Guide for more tips on how to make the review process smoother.