MIGRATION: add bytes_scanned to querylog #3360

Merged: volokluev merged 2 commits into master from volo/bytes_scanned_querylog on Nov 14, 2022

Conversation

volokluev (Member)

The attribution log contains attribution for internal customers but not for sentry customers (no project_id in attribution log). We want to be able to analyze bytes scanned by sentry customers as well. Hence we need to add this column

volokluev requested a review from a team as a code owner on November 8, 2022 22:44
github-actions bot commented Nov 8, 2022

This PR has a migration; here is the generated SQL

-- start migrations

-- forward migration querylog : 0004_add_bytes_scanned
Local operations:
ALTER TABLE querylog_local ADD COLUMN IF NOT EXISTS clickhouse_queries.bytes_scanned Array(UInt64) AFTER clickhouse_queries.array_join_columns;


Dist operations:
ALTER TABLE querylog_dist ADD COLUMN IF NOT EXISTS clickhouse_queries.bytes_scanned Array(UInt64) AFTER clickhouse_queries.array_join_columns;
-- end forward migration querylog : 0004_add_bytes_scanned




-- backward migration querylog : 0004_add_bytes_scanned
Local operations:
ALTER TABLE querylog_local DROP COLUMN IF EXISTS clickhouse_queries.bytes_scanned;


Dist operations:
ALTER TABLE querylog_dist DROP COLUMN IF EXISTS clickhouse_queries.bytes_scanned;
-- end backward migration querylog : 0004_add_bytes_scanned
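
For context, a minimal sketch of the migration file that would generate the SQL above, based on the column definition quoted in the review below. The imports, base class, and operation signatures follow Snuba's migration framework of the time, but treat them as assumptions rather than the exact diff; the dist operations, which mirror the local ones against querylog_dist, are elided.

from typing import Sequence

from snuba.clickhouse.columns import Array, Column, UInt
from snuba.clusters.storage_sets import StorageSetKey
from snuba.migrations import migration, operations


class Migration(migration.ClickhouseNodeMigration):
    blocking = False

    def forwards_local(self) -> Sequence[operations.SqlOperation]:
        # Generates: ALTER TABLE querylog_local ADD COLUMN IF NOT EXISTS ...
        return [
            operations.AddColumn(
                storage_set=StorageSetKey.QUERYLOG,
                table_name="querylog_local",
                column=Column(
                    "clickhouse_queries.bytes_scanned", Array(UInt(64))
                ),
                after="clickhouse_queries.array_join_columns",
            )
        ]

    def backwards_local(self) -> Sequence[operations.SqlOperation]:
        # Generates: ALTER TABLE querylog_local DROP COLUMN IF EXISTS ...
        return [
            operations.DropColumn(
                storage_set=StorageSetKey.QUERYLOG,
                table_name="querylog_local",
                column_name="clickhouse_queries.bytes_scanned",
            )
        ]

    # forwards_dist/backwards_dist mirror these against querylog_dist (omitted).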

codecov-commenter commented Nov 8, 2022

Codecov Report

Base: 90.75% // Head: 21.87% // Decreases project coverage by 68.88 percentage points ⚠️

Coverage data is based on head (bd965f9) compared to base (8bb2a87).
Patch coverage: 11.50% of modified lines in pull request are covered.

❗ Current head bd965f9 differs from the pull request's most recent head bb7b41a. Consider uploading reports for commit bb7b41a to get more accurate results.

Additional details and impacted files
@@             Coverage Diff             @@
##           master    #3360       +/-   ##
===========================================
- Coverage   90.75%   21.87%   -68.88%     
===========================================
  Files         704      663       -41     
  Lines       32335    31209     -1126     
===========================================
- Hits        29345     6827    -22518     
- Misses       2990    24382    +21392     
Impacted Files Coverage Δ
snuba/admin/clickhouse/querylog.py 0.00% <0.00%> (-100.00%) ⬇️
snuba/admin/migrations_policies.py 0.00% <0.00%> (-100.00%) ⬇️
snuba/cli/__init__.py 0.00% <0.00%> (-68.89%) ⬇️
snuba/cli/optimize.py 0.00% <0.00%> (ø)
snuba/clickhouse/optimize/optimize.py 0.00% <0.00%> (ø)
snuba/clickhouse/optimize/optimize_scheduler.py 0.00% <ø> (ø)
snuba/clickhouse/optimize/optimize_tracker.py 0.00% <ø> (ø)
snuba/clickhouse/optimize/util.py 0.00% <0.00%> (ø)
snuba/core/initialize.py 0.00% <0.00%> (-100.00%) ⬇️
snuba/replacer.py 0.00% <0.00%> (-92.65%) ⬇️
... and 633 more

☔ View full report at Codecov.

Comment on lines 23 to 30
column=Column(
"clickhouse_queries.bytes_scanned",
Array(
UInt(64),
Modifiers(
default="arrayResize([0], length(clickhouse_queries.sql))"
),
),
Contributor

This will not be this simple.
If I applied this migration the way it is, the consumer would break.
The consumer would try to write rows where the arrays in the clickhouse_queries nested column are not all the same size, because bytes_scanned would not be passed. As such, ClickHouse would consider this array as having been passed empty.

The change has to happen in the consumer first: it has to start passing the column before the column exists. You will have to set the input_format_skip_unknown_fields ClickHouse setting in your write so ClickHouse will ignore the unknown column. Then we can apply the migration.

This was the example: https://github.com/getsentry/snuba/pull/2232/files
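
For illustration, a minimal sketch of what the reviewer describes, using ClickHouse's HTTP interface directly. input_format_skip_unknown_fields is the actual ClickHouse setting being referenced; the table name, payload shape, and use of requests are simplifying assumptions, and Snuba's real writer differs.

import json

import requests  # stand-in HTTP client; Snuba's actual write path differs

# One querylog row that already includes the new nested sub-column, even
# though the table does not have it yet.
row = {
    "clickhouse_queries.sql": ["SELECT 1"],
    "clickhouse_queries.bytes_scanned": [0],
}

# With the setting enabled, ClickHouse silently drops the unknown
# bytes_scanned field instead of rejecting the insert, so the consumer can
# start writing the column before the migration runs.
requests.post(
    "http://localhost:8123/",
    params={
        "query": "INSERT INTO querylog_local FORMAT JSONEachRow",
        "input_format_skip_unknown_fields": 1,
    },
    data=json.dumps(row).encode("utf-8"),
)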

evanh (Member) left a comment:

Based on the previous PR to change how data is written, I think this should be OK.

Comment on lines 27 to 29
Modifiers(
default="arrayResize([0], length(clickhouse_queries.sql))"
),
Contributor

You should be able to remove this as you are now writing the actual empty array.
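
In other words, the suggestion is to keep the column but drop the default, roughly as follows (a sketch, not the actual diff):

from snuba.clickhouse.columns import Array, Column, UInt

# Same column as in the quoted snippet, minus the arrayResize default: the
# consumer now writes an explicit value (an empty array or zeros) for every
# row, so no server-side default is needed.
column = Column("clickhouse_queries.bytes_scanned", Array(UInt(64)))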

volokluev (Member, Author)

But not all rows have this column written. Doesn't it need to exist until that's no longer the case?

Contributor

No, because the column does not exist in the DB.
The critical part is that you deploy the change that writes the column for all rows before we apply the migration.

The default here proved to be useless right after we added it back in 2020.
It was a desperate move to make a migration work without having to stop the consumer. We did not know about input_format_skip_unknown_fields back then.
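
A hypothetical sketch of the ordering being described: the consumer-side change below ships and is fully deployed first, so every new row already carries one bytes_scanned entry per query, and only then does the ALTER TABLE run. The function name and row shape are illustrative, not Snuba's actual consumer code.

from typing import Any, Mapping, Sequence


def build_querylog_row(queries: Sequence[Mapping[str, Any]]) -> Mapping[str, Any]:
    # Step 1 (deploy first): always emit bytes_scanned, one entry per query,
    # defaulting to 0, so the nested arrays keep matching lengths.
    return {
        "clickhouse_queries.sql": [q["sql"] for q in queries],
        "clickhouse_queries.bytes_scanned": [
            int(q.get("bytes_scanned", 0)) for q in queries
        ],
    }


# Step 2: apply the migration (ALTER TABLE ... ADD COLUMN) only after step 1
# is live everywhere; no DEFAULT modifier is required at that point.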

volokluev merged commit e430ea7 into master on Nov 14, 2022
volokluev deleted the volo/bytes_scanned_querylog branch on November 14, 2022 20:47