What happened?
The feature was introduced in #25392. When the --setEnableBundling=true pipeline option is set, BigQueryIO reads only a small fraction of rows from a large table. Reproduced by reading the tpcds_1T.web_sales table.
Number of rows: 720,000,376
--setEnableBundling=true: 44,550,489 rows read
--setEnableBundling=false: 720,000,376 rows read
Reading from the tpcds_1G.web_sales table does not trigger the issue, as 18,000 rows are read.
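A minimal repro sketch of the comparison above (this is not the reporter's exact pipeline): read the table through BigQueryIO's direct read path and count the rows, running once with the bundling option enabled and once with it disabled. The table name comes from the report; the class name `BundlingRepro` and the exact option wiring are assumptions, and the pipeline requires GCP credentials and the Beam GCP IO dependencies to actually run.

```java
// Hypothetical repro sketch -- compares row counts read by BigQueryIO with
// and without the bundling option. Assumes Beam's Java SDK and the
// beam-sdks-java-io-google-cloud-platform dependency are on the classpath.
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

public class BundlingRepro {
  public static void main(String[] args) {
    // Run once with the bundling pipeline option enabled and once with it
    // disabled (as in the report), then compare the logged counts.
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    p.apply("Read", BigQueryIO.readTableRows()
            .from("tpcds_1T.web_sales")
            // DIRECT_READ uses the BigQuery Storage Read API, the code path
            // affected by the bundling feature.
            .withMethod(TypedRead.Method.DIRECT_READ))
     .apply("CountRows", Count.globally())
     .apply("LogCount", MapElements.into(TypeDescriptors.longs())
            .via(n -> { System.out.println("rows read: " + n); return n; }));

    p.run().waitUntilFinish();
  }
}
```

With a healthy read path, both runs should log 720,000,376 for this table; the bug manifests as a much smaller count in the bundling-enabled run.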
Issue Priority
Priority: 1 (data loss / total loss of function)
Issue Components
Component: Python SDK
Component: Java SDK
Component: Go SDK
Component: Typescript SDK
Component: IO connector
Component: Beam examples
Component: Beam playground
Component: Beam katas
Component: Website
Component: Spark Runner
Component: Flink Runner
Component: Samza Runner
Component: Twister2 Runner
Component: Hazelcast Jet Runner
Component: Google Cloud Dataflow Runner
The feature was introduced in Beam v2.46.0 and is activated only when this currently undocumented pipeline option is set. No production user is using it. @vachan-shetty is working on a fix. Feel free to assign it to yourself.