fix: Enable pre-commit CI check #2612

Merged 5 commits on Apr 19, 2022
Conversation

@asottile-sentry (Member) commented Apr 14, 2022

Commits are split into atomic tasks -- it's recommended to view the reformat commits with ?w=1 (which hides whitespace-only changes).


Previously, the pre-commit check in CI was always silently passing because master is not
available with the default actions/checkout@v2 fetch:

fatal: ambiguous argument 'master': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
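The failure can be reproduced outside of CI. The sketch below is hypothetical (made-up repo layout, not the actual workflow): it simulates the single-branch fetch that actions/checkout@v2 performs by default, shows that `master` is unresolvable, then fetches it explicitly -- which is the kind of fix this PR applies so the diff against master has a ref to resolve.

```python
# Hypothetical reproduction (not the real workflow): a single-branch clone,
# like actions/checkout@v2's default fetch, leaves `master` unresolvable.
# Assumes git >= 2.28 for `git init -b`.
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run git with throwaway identity config; never raises on failure."""
    return subprocess.run(
        ["git", "-c", "user.email=ci@example.com", "-c", "user.name=ci", *args],
        cwd=cwd, capture_output=True, text=True,
    )

root = tempfile.mkdtemp()

# Upstream repo: a master branch plus a feature branch, as in a PR.
upstream = os.path.join(root, "upstream")
os.makedirs(upstream)
git("init", "-q", "-b", "master", cwd=upstream)
git("commit", "-q", "--allow-empty", "-m", "base", cwd=upstream)
git("checkout", "-q", "-b", "feature", cwd=upstream)
git("commit", "-q", "--allow-empty", "-m", "change", cwd=upstream)

# CI-style checkout: only the PR branch is fetched, so `master` is unknown
# and `git diff master` would die with "ambiguous argument 'master'".
clone = os.path.join(root, "clone")
git("clone", "-q", "--single-branch", "--branch", "feature", upstream, clone, cwd=root)
master_resolvable_before = git("rev-parse", "--verify", "master", cwd=clone).returncode == 0
print("master resolvable before fetch:", master_resolvable_before)

# The fix: fetch master explicitly so diffs against it can resolve the ref.
git("fetch", "-q", "origin", "master:refs/remotes/origin/master", cwd=clone)
master_resolvable_after = git("rev-parse", "--verify", "origin/master", cwd=clone).returncode == 0
print("master resolvable after fetch:", master_resolvable_after)
```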

@asottile-sentry (Member, Author) left a comment:

I read through the automated patches and commented on the surprises

Comment on lines 9 to +17
"""\b
o
O
O
o
.oOo 'OoOo. O o OoOo. .oOoO'
`Ooo. o O o O O o O o
O O o O o o O o O
`OoO' o O `OoO'o `OoO' `OoO'o
"""
o
O
O
o
.oOo 'OoOo. O o OoOo. .oOoO'
`Ooo. o O o O O o O o
O O o O o o O o O
`OoO' o O `OoO'o `OoO' `OoO'o"""
@asottile-sentry (Member, Author):

black actually changed the string contents here subtly -- I verified with snuba --help that click still renders this properly

an alternative here is to change the string to:

   """\b\

                     o
                    O
                    O
                    o
.oOo  'OoOo. O   o  OoOo. .oOoO'
`Ooo.  o   O o   O  O   o O   o
    O  O   o O   o  o   O o   O
`OoO'  o   O `OoO'o `OoO' `OoO'o
"""

(start the triple-quoted string with a backslash and a blank line) -- then black will leave it alone and the string ending looks better imo -- this is super minor though, let me know which is preferred (leave it as is vs. adding the backslash vs. adding some # fmt: off) -- I'm leaning towards leaving it as is though
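For context on the \b escape under discussion: click treats a paragraph preceded by a line containing only \b as no-rewrap, which is what keeps the ASCII art in the help text intact. A minimal sketch of the idea (the group name is made up, not snuba's actual CLI):

```python
# Minimal sketch of click's "\b" no-rewrap marker (hypothetical CLI, not
# snuba's): the paragraph following a \b line is printed verbatim instead
# of being rewrapped to the terminal width.
import click
from click.testing import CliRunner

@click.group(help="""\b\

                     o
                    O
                    O
                    o
.oOo  'OoOo. O   o  OoOo. .oOoO'
`Ooo.  o   O o   O  O   o O   o
    O  O   o O   o  o   O o   O
`OoO'  o   O `OoO'o `OoO' `OoO'o
""")
def cli():
    pass

# Render --help the way a terminal would; the art survives unwrapped.
result = CliRunner().invoke(cli, ["--help"])
print(result.output)
```

Without the \b line, click would rewrap the paragraph and collapse the runs of spaces that make up the art.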

@asottile-sentry force-pushed the asottile-fix-pre-commit branch 2 times, most recently from 6af0196 to e1a49f3 on April 14, 2022 14:22
@github-actions

This PR has a migration; here is the generated SQL

-- start migrations

-- migration discover : 0001_discover_merge_table
Local operations:
CREATE TABLE IF NOT EXISTS discover_local (event_id UUID, project_id UInt64, type LowCardinality(String), timestamp DateTime, platform LowCardinality(String), environment LowCardinality(Nullable(String)), release LowCardinality(Nullable(String)), dist LowCardinality(Nullable(String)), transaction_name LowCardinality(Nullable(String)), message Nullable(String), title Nullable(String), user LowCardinality(String), user_hash UInt64, user_id Nullable(String), user_name Nullable(String), user_email Nullable(String), ip_address_v4 Nullable(IPv4), ip_address_v6 Nullable(IPv6), sdk_name LowCardinality(Nullable(String)), sdk_version LowCardinality(Nullable(String)), http_method LowCardinality(Nullable(String)), http_referer Nullable(String), tags Nested(key String, value String), contexts Nested(key String, value String)) ENGINE Merge(currentDatabase(), '^errors_local$|^transactions_local$');


Dist operations:
CREATE TABLE IF NOT EXISTS discover_dist (event_id UUID, project_id UInt64, type LowCardinality(String), timestamp DateTime, platform LowCardinality(String), environment LowCardinality(Nullable(String)), release LowCardinality(Nullable(String)), dist LowCardinality(Nullable(String)), transaction_name LowCardinality(Nullable(String)), message Nullable(String), title Nullable(String), user LowCardinality(String), user_hash UInt64, user_id Nullable(String), user_name Nullable(String), user_email Nullable(String), ip_address_v4 Nullable(IPv4), ip_address_v6 Nullable(IPv6), sdk_name LowCardinality(Nullable(String)), sdk_version LowCardinality(Nullable(String)), http_method LowCardinality(Nullable(String)), http_referer Nullable(String), tags Nested(key String, value String), contexts Nested(key String, value String)) ENGINE Distributed(cluster_one_sh, default, discover_local);
-- end migration discover : 0001_discover_merge_table
-- migration discover : 0003_discover_fix_user_column
Local operations:
ALTER TABLE discover_local MODIFY COLUMN user String;


Dist operations:
ALTER TABLE discover_dist MODIFY COLUMN user String;
-- end migration discover : 0003_discover_fix_user_column
-- migration discover : 0006_discover_add_trace_id
Local operations:
ALTER TABLE discover_local ADD COLUMN IF NOT EXISTS trace_id Nullable(UUID) AFTER contexts;


Dist operations:
ALTER TABLE discover_dist ADD COLUMN IF NOT EXISTS trace_id Nullable(UUID) AFTER contexts;
-- end migration discover : 0006_discover_add_trace_id
-- migration events : 0003_errors
Local operations:
CREATE TABLE IF NOT EXISTS errors_local (org_id UInt64, project_id UInt64, timestamp DateTime, event_id UUID CODEC (NONE), event_hash UInt64 MATERIALIZED cityHash64(toString(event_id)) CODEC (NONE), platform LowCardinality(String), environment LowCardinality(Nullable(String)), release LowCardinality(Nullable(String)), dist LowCardinality(Nullable(String)), ip_address_v4 Nullable(IPv4), ip_address_v6 Nullable(IPv6), user String DEFAULT '', user_hash UInt64 MATERIALIZED cityHash64(user), user_id Nullable(String), user_name Nullable(String), user_email Nullable(String), sdk_name LowCardinality(Nullable(String)), sdk_version LowCardinality(Nullable(String)), tags Nested(key String, value String), _tags_flattened String, contexts Nested(key String, value String), _contexts_flattened String, transaction_name LowCardinality(String) DEFAULT '', transaction_hash UInt64 MATERIALIZED cityHash64(transaction_name), span_id Nullable(UInt64), trace_id Nullable(UUID), partition UInt16, offset UInt64 CODEC (DoubleDelta, LZ4), message_timestamp DateTime, retention_days UInt16, deleted UInt8, group_id UInt64, primary_hash FixedString(32), primary_hash_hex UInt64 MATERIALIZED hex(primary_hash), event_string String CODEC (NONE), received DateTime, message String, title String, culprit String, level LowCardinality(String), location Nullable(String), version LowCardinality(Nullable(String)), type LowCardinality(String), exception_stacks Nested(type Nullable(String), value Nullable(String), mechanism_type Nullable(String), mechanism_handled Nullable(UInt8)), exception_frames Nested(abs_path Nullable(String), colno Nullable(UInt32), filename Nullable(String), function Nullable(String), lineno Nullable(UInt32), in_app Nullable(UInt8), package Nullable(String), module Nullable(String), stack_level Nullable(UInt16)), sdk_integrations Array(String), modules Nested(name String, version String)) ENGINE ReplicatedReplacingMergeTree('/clickhouse/tables/events/{shard}/default/errors_local', 
'{replica}', deleted) ORDER BY (org_id, project_id, toStartOfDay(timestamp), primary_hash_hex, event_hash) PARTITION BY (toMonday(timestamp), if(retention_days = 30, 30, 90)) SAMPLE BY event_hash TTL timestamp + toIntervalDay(retention_days) SETTINGS index_granularity=8192;


Dist operations:
CREATE TABLE IF NOT EXISTS errors_dist (org_id UInt64, project_id UInt64, timestamp DateTime, event_id UUID CODEC (NONE), event_hash UInt64 MATERIALIZED cityHash64(toString(event_id)) CODEC (NONE), platform LowCardinality(String), environment LowCardinality(Nullable(String)), release LowCardinality(Nullable(String)), dist LowCardinality(Nullable(String)), ip_address_v4 Nullable(IPv4), ip_address_v6 Nullable(IPv6), user String DEFAULT '', user_hash UInt64 MATERIALIZED cityHash64(user), user_id Nullable(String), user_name Nullable(String), user_email Nullable(String), sdk_name LowCardinality(Nullable(String)), sdk_version LowCardinality(Nullable(String)), tags Nested(key String, value String), _tags_flattened String, contexts Nested(key String, value String), _contexts_flattened String, transaction_name LowCardinality(String) DEFAULT '', transaction_hash UInt64 MATERIALIZED cityHash64(transaction_name), span_id Nullable(UInt64), trace_id Nullable(UUID), partition UInt16, offset UInt64 CODEC (DoubleDelta, LZ4), message_timestamp DateTime, retention_days UInt16, deleted UInt8, group_id UInt64, primary_hash FixedString(32), primary_hash_hex UInt64 MATERIALIZED hex(primary_hash), event_string String CODEC (NONE), received DateTime, message String, title String, culprit String, level LowCardinality(String), location Nullable(String), version LowCardinality(Nullable(String)), type LowCardinality(String), exception_stacks Nested(type Nullable(String), value Nullable(String), mechanism_type Nullable(String), mechanism_handled Nullable(UInt8)), exception_frames Nested(abs_path Nullable(String), colno Nullable(UInt32), filename Nullable(String), function Nullable(String), lineno Nullable(UInt32), in_app Nullable(UInt8), package Nullable(String), module Nullable(String), stack_level Nullable(UInt16)), sdk_integrations Array(String), modules Nested(name String, version String)) ENGINE Distributed(cluster_one_sh, default, errors_local, event_hash);
-- end migration events : 0003_errors
-- migration events : 0007_groupedmessages
Local operations:
CREATE TABLE IF NOT EXISTS groupedmessage_local (offset UInt64, record_deleted UInt8, project_id UInt64, id UInt64, status Nullable(UInt8), last_seen Nullable(DateTime), first_seen Nullable(DateTime), active_at Nullable(DateTime), first_release_id Nullable(UInt64)) ENGINE ReplicatedReplacingMergeTree('/clickhouse/tables/cdc/all/default/groupedmessage_local', '{replica}', offset) ORDER BY (project_id, id) SAMPLE BY id;


Dist operations:
CREATE TABLE IF NOT EXISTS groupedmessage_dist (offset UInt64, record_deleted UInt8, project_id UInt64, id UInt64, status Nullable(UInt8), last_seen Nullable(DateTime), first_seen Nullable(DateTime), active_at Nullable(DateTime), first_release_id Nullable(UInt64)) ENGINE Distributed(cluster_one_sh, default, groupedmessage_local);
-- end migration events : 0007_groupedmessages
-- migration events : 0008_groupassignees
Local operations:
CREATE TABLE IF NOT EXISTS groupassignee_local (offset UInt64, record_deleted UInt8, project_id UInt64, group_id UInt64, date_added Nullable(DateTime), user_id Nullable(UInt64), team_id Nullable(UInt64)) ENGINE ReplicatedReplacingMergeTree('/clickhouse/tables/cdc/all/default/groupassignee_local', '{replica}', offset) ORDER BY (project_id, group_id);


Dist operations:
CREATE TABLE IF NOT EXISTS groupassignee_dist (offset UInt64, record_deleted UInt8, project_id UInt64, group_id UInt64, date_added Nullable(DateTime), user_id Nullable(UInt64), team_id Nullable(UInt64)) ENGINE Distributed(cluster_one_sh, default, groupassignee_local);
-- end migration events : 0008_groupassignees
-- migration events : 0010_groupedmessages_onpremise_compatibility
Non SQL operation - Sync project ID column for onpremise
-- end migration events : 0010_groupedmessages_onpremise_compatibility
-- migration events : 0011_rebuild_errors
Local operations:
CREATE TABLE IF NOT EXISTS errors_local_new (project_id UInt64, timestamp DateTime, event_id UUID CODEC (NONE), platform LowCardinality(String), environment LowCardinality(Nullable(String)), release LowCardinality(Nullable(String)), dist LowCardinality(Nullable(String)), ip_address_v4 Nullable(IPv4), ip_address_v6 Nullable(IPv6), user String DEFAULT '', user_hash UInt64 MATERIALIZED cityHash64(user), user_id Nullable(String), user_name Nullable(String), user_email Nullable(String), sdk_name LowCardinality(Nullable(String)), sdk_version LowCardinality(Nullable(String)), http_method LowCardinality(Nullable(String)), http_referer Nullable(String), tags Nested(key String, value String), contexts Nested(key String, value String), transaction_name LowCardinality(String) DEFAULT '', transaction_hash UInt64 MATERIALIZED cityHash64(transaction_name), span_id Nullable(UInt64), trace_id Nullable(UUID), partition UInt16, offset UInt64 CODEC (DoubleDelta, LZ4), message_timestamp DateTime, retention_days UInt16, deleted UInt8, group_id UInt64, primary_hash UUID, received DateTime, message String, title String, culprit String, level LowCardinality(String), location Nullable(String), version LowCardinality(Nullable(String)), type LowCardinality(String), exception_stacks Nested(type Nullable(String), value Nullable(String), mechanism_type Nullable(String), mechanism_handled Nullable(UInt8)), exception_frames Nested(abs_path Nullable(String), colno Nullable(UInt32), filename Nullable(String), function Nullable(String), lineno Nullable(UInt32), in_app Nullable(UInt8), package Nullable(String), module Nullable(String), stack_level Nullable(UInt16)), sdk_integrations Array(String), modules Nested(name String, version String)) ENGINE ReplicatedReplacingMergeTree('/clickhouse/tables/events/{shard}/default/errors_local_new', '{replica}', deleted) ORDER BY (project_id, toStartOfDay(timestamp), primary_hash, cityHash64(event_id)) PARTITION BY (retention_days, toMonday(timestamp)) SAMPLE BY 
cityHash64(event_id) TTL timestamp + toIntervalDay(retention_days) SETTINGS index_granularity=8192;
ALTER TABLE errors_local_new ADD COLUMN IF NOT EXISTS _tags_hash_map Array(UInt64) MATERIALIZED arrayMap((k, v) -> cityHash64(concat(replaceRegexpAll(k, '(\\=|\\\\)', '\\\\\\1'), '=', v)), tags.key, tags.value) AFTER tags;
DROP TABLE IF EXISTS errors_local;
RENAME TABLE errors_local_new TO errors_local;


Dist operations:
CREATE TABLE IF NOT EXISTS errors_dist_new (project_id UInt64, timestamp DateTime, event_id UUID CODEC (NONE), platform LowCardinality(String), environment LowCardinality(Nullable(String)), release LowCardinality(Nullable(String)), dist LowCardinality(Nullable(String)), ip_address_v4 Nullable(IPv4), ip_address_v6 Nullable(IPv6), user String DEFAULT '', user_hash UInt64 MATERIALIZED cityHash64(user), user_id Nullable(String), user_name Nullable(String), user_email Nullable(String), sdk_name LowCardinality(Nullable(String)), sdk_version LowCardinality(Nullable(String)), http_method LowCardinality(Nullable(String)), http_referer Nullable(String), tags Nested(key String, value String), contexts Nested(key String, value String), transaction_name LowCardinality(String) DEFAULT '', transaction_hash UInt64 MATERIALIZED cityHash64(transaction_name), span_id Nullable(UInt64), trace_id Nullable(UUID), partition UInt16, offset UInt64 CODEC (DoubleDelta, LZ4), message_timestamp DateTime, retention_days UInt16, deleted UInt8, group_id UInt64, primary_hash UUID, received DateTime, message String, title String, culprit String, level LowCardinality(String), location Nullable(String), version LowCardinality(Nullable(String)), type LowCardinality(String), exception_stacks Nested(type Nullable(String), value Nullable(String), mechanism_type Nullable(String), mechanism_handled Nullable(UInt8)), exception_frames Nested(abs_path Nullable(String), colno Nullable(UInt32), filename Nullable(String), function Nullable(String), lineno Nullable(UInt32), in_app Nullable(UInt8), package Nullable(String), module Nullable(String), stack_level Nullable(UInt16)), sdk_integrations Array(String), modules Nested(name String, version String)) ENGINE Distributed(cluster_one_sh, default, errors_local, cityHash64(event_id));
ALTER TABLE errors_dist_new ADD COLUMN IF NOT EXISTS _tags_hash_map Array(UInt64) MATERIALIZED arrayMap((k, v) -> cityHash64(concat(replaceRegexpAll(k, '(\\=|\\\\)', '\\\\\\1'), '=', v)), tags.key, tags.value) AFTER tags;
DROP TABLE IF EXISTS errors_dist;
RENAME TABLE errors_dist_new TO errors_dist;
CREATE TABLE IF NOT EXISTS errors_dist_ro (project_id UInt64, timestamp DateTime, event_id UUID CODEC (NONE), platform LowCardinality(String), environment LowCardinality(Nullable(String)), release LowCardinality(Nullable(String)), dist LowCardinality(Nullable(String)), ip_address_v4 Nullable(IPv4), ip_address_v6 Nullable(IPv6), user String DEFAULT '', user_hash UInt64 MATERIALIZED cityHash64(user), user_id Nullable(String), user_name Nullable(String), user_email Nullable(String), sdk_name LowCardinality(Nullable(String)), sdk_version LowCardinality(Nullable(String)), http_method LowCardinality(Nullable(String)), http_referer Nullable(String), tags Nested(key String, value String), contexts Nested(key String, value String), transaction_name LowCardinality(String) DEFAULT '', transaction_hash UInt64 MATERIALIZED cityHash64(transaction_name), span_id Nullable(UInt64), trace_id Nullable(UUID), partition UInt16, offset UInt64 CODEC (DoubleDelta, LZ4), message_timestamp DateTime, retention_days UInt16, deleted UInt8, group_id UInt64, primary_hash UUID, received DateTime, message String, title String, culprit String, level LowCardinality(String), location Nullable(String), version LowCardinality(Nullable(String)), type LowCardinality(String), exception_stacks Nested(type Nullable(String), value Nullable(String), mechanism_type Nullable(String), mechanism_handled Nullable(UInt8)), exception_frames Nested(abs_path Nullable(String), colno Nullable(UInt32), filename Nullable(String), function Nullable(String), lineno Nullable(UInt32), in_app Nullable(UInt8), package Nullable(String), module Nullable(String), stack_level Nullable(UInt16)), sdk_integrations Array(String), modules Nested(name String, version String)) ENGINE Distributed(cluster_one_sh, default, errors_local, cityHash64(event_id));
-- end migration events : 0011_rebuild_errors
-- migration events : 0013_errors_add_hierarchical_hashes
Local operations:
ALTER TABLE errors_local ADD COLUMN IF NOT EXISTS hierarchical_hashes Array(UUID) AFTER primary_hash;
ALTER TABLE sentry_local ADD COLUMN IF NOT EXISTS hierarchical_hashes Array(FixedString(32)) AFTER primary_hash;


Dist operations:
ALTER TABLE errors_dist ADD COLUMN IF NOT EXISTS hierarchical_hashes Array(UUID) AFTER primary_hash;
ALTER TABLE sentry_dist ADD COLUMN IF NOT EXISTS hierarchical_hashes Array(FixedString(32)) AFTER primary_hash;
-- end migration events : 0013_errors_add_hierarchical_hashes
-- migration metrics : 0002_metrics_sets
Local operations:
CREATE TABLE IF NOT EXISTS metrics_sets_local (org_id UInt64, project_id UInt64, metric_id UInt64, granularity UInt32, tags Nested(key UInt64, value UInt64), timestamp DateTime, retention_days UInt16, value AggregateFunction(uniqCombined64, UInt64)) ENGINE ReplicatedAggregatingMergeTree('/clickhouse/tables/metrics/{shard}/default/metrics_sets_local', '{replica}') ORDER BY (org_id, project_id, metric_id, granularity, timestamp, tags.key, tags.value) PARTITION BY (retention_days, toMonday(timestamp)) SETTINGS index_granularity=256;
ALTER TABLE metrics_sets_local ADD COLUMN IF NOT EXISTS _tags_hash Array(UInt64) MATERIALIZED arrayMap((k, v) -> cityHash64(concat(toString(k), '=', toString(v))), tags.key, tags.value) AFTER tags.value;
ALTER TABLE metrics_sets_local ADD INDEX IF NOT EXISTS bf_tags_hash _tags_hash TYPE bloom_filter() GRANULARITY 1;
ALTER TABLE metrics_sets_local ADD INDEX IF NOT EXISTS bf_tags_key_hash tags.key TYPE bloom_filter() GRANULARITY 1;
CREATE MATERIALIZED VIEW IF NOT EXISTS metrics_sets_mv_local TO metrics_sets_local (org_id UInt64, project_id UInt64, metric_id UInt64, granularity UInt32, tags Nested(key UInt64, value UInt64), timestamp DateTime, retention_days UInt16, value AggregateFunction(uniqCombined64, UInt64)) AS 
SELECT
    org_id,
    project_id,
    metric_id,
    60 as granularity,
    tags.key,
    tags.value,
    toStartOfInterval(timestamp, INTERVAL 60 second) as timestamp,
    retention_days,
    uniqCombined64State(arrayJoin(set_values)) as value
FROM metrics_buckets_local
WHERE materialization_version = 0
GROUP BY
    org_id,
    project_id,
    metric_id,
    tags.key,
    tags.value,
    timestamp,
    granularity,
    retention_days
;


Dist operations:
CREATE TABLE IF NOT EXISTS metrics_sets_dist (org_id UInt64, project_id UInt64, metric_id UInt64, granularity UInt32, tags Nested(key UInt64, value UInt64), timestamp DateTime, retention_days UInt16, value AggregateFunction(uniqCombined64, UInt64)) ENGINE Distributed(cluster_one_sh, default, metrics_sets_local);
ALTER TABLE metrics_sets_dist ADD COLUMN IF NOT EXISTS _tags_hash Array(UInt64) MATERIALIZED arrayMap((k, v) -> cityHash64(concat(toString(k), '=', toString(v))), tags.key, tags.value) AFTER tags.value;
-- end migration metrics : 0002_metrics_sets
-- migration metrics : 0020_polymorphic_buckets_table
Local operations:
CREATE TABLE IF NOT EXISTS metrics_raw_local (use_case_id LowCardinality(String), org_id UInt64, project_id UInt64, metric_id UInt64, timestamp DateTime, tags Nested(key UInt64, value UInt64), metric_type LowCardinality(String), set_values Array(UInt64), count_value Float64, distribution_values Array(Float64), materialization_version UInt8, retention_days UInt16, partition UInt16, offset UInt64) ENGINE ReplicatedMergeTree('/clickhouse/tables/metrics/{shard}/default/metrics_raw_local', '{replica}') ORDER BY (use_case_id, metric_type, org_id, project_id, metric_id, timestamp) PARTITION BY (toStartOfDay(timestamp)) TTL timestamp + toIntervalDay(7);


Dist operations:
CREATE TABLE IF NOT EXISTS metrics_raw_dist (use_case_id LowCardinality(String), org_id UInt64, project_id UInt64, metric_id UInt64, timestamp DateTime, tags Nested(key UInt64, value UInt64), metric_type LowCardinality(String), set_values Array(UInt64), count_value Float64, distribution_values Array(Float64), materialization_version UInt8, retention_days UInt16, partition UInt16, offset UInt64) ENGINE Distributed(cluster_one_sh, default, metrics_raw_local);
-- end migration metrics : 0020_polymorphic_buckets_table
-- migration metrics : 0022_repartition_polymorphic_table
Local operations:
CREATE TABLE IF NOT EXISTS metrics_raw_v2_local (use_case_id LowCardinality(String), org_id UInt64, project_id UInt64, metric_id UInt64, timestamp DateTime, tags Nested(key UInt64, value UInt64), metric_type LowCardinality(String), set_values Array(UInt64), count_value Float64, distribution_values Array(Float64), materialization_version UInt8, retention_days UInt16, partition UInt16, offset UInt64) ENGINE ReplicatedMergeTree('/clickhouse/tables/metrics/{shard}/default/metrics_raw_v2_local', '{replica}') ORDER BY (use_case_id, metric_type, org_id, project_id, metric_id, timestamp) PARTITION BY (toStartOfInterval(timestamp, INTERVAL 3 day)) TTL timestamp + toIntervalDay(7);


Dist operations:
CREATE TABLE IF NOT EXISTS metrics_raw_v2_dist (use_case_id LowCardinality(String), org_id UInt64, project_id UInt64, metric_id UInt64, timestamp DateTime, tags Nested(key UInt64, value UInt64), metric_type LowCardinality(String), set_values Array(UInt64), count_value Float64, distribution_values Array(Float64), materialization_version UInt8, retention_days UInt16, partition UInt16, offset UInt64) ENGINE Distributed(cluster_one_sh, default, metrics_raw_v2_local);
-- end migration metrics : 0022_repartition_polymorphic_table
-- migration outcomes : 0001_outcomes
Local operations:
CREATE TABLE IF NOT EXISTS outcomes_raw_local (org_id UInt64, project_id UInt64, key_id Nullable(UInt64), timestamp DateTime, outcome UInt8, reason LowCardinality(Nullable(String)), event_id Nullable(UUID)) ENGINE ReplicatedMergeTree('/clickhouse/tables/outcomes/{shard}/default/outcomes_raw_local', '{replica}') ORDER BY (org_id, project_id, timestamp) PARTITION BY (toMonday(timestamp)) SETTINGS index_granularity=16384;
CREATE TABLE IF NOT EXISTS outcomes_hourly_local (org_id UInt64, project_id UInt64, key_id UInt64, timestamp DateTime, outcome UInt8, reason LowCardinality(String), times_seen UInt64) ENGINE ReplicatedSummingMergeTree('/clickhouse/tables/outcomes/{shard}/default/outcomes_hourly_local', '{replica}') ORDER BY (org_id, project_id, key_id, outcome, reason, timestamp) PARTITION BY (toMonday(timestamp)) SETTINGS index_granularity=256;
CREATE MATERIALIZED VIEW IF NOT EXISTS outcomes_mv_hourly_local TO outcomes_hourly_local (org_id UInt64, project_id UInt64, key_id UInt64, timestamp DateTime, outcome UInt8, reason String, times_seen UInt64) AS 
                    SELECT
                        org_id,
                        project_id,
                        ifNull(key_id, 0) AS key_id,
                        toStartOfHour(timestamp) AS timestamp,
                        outcome,
                        ifNull(reason, 'none') AS reason,
                        count() AS times_seen
                    FROM outcomes_raw_local
                    GROUP BY org_id, project_id, key_id, timestamp, outcome, reason
                ;


Dist operations:
CREATE TABLE IF NOT EXISTS outcomes_raw_dist (org_id UInt64, project_id UInt64, key_id Nullable(UInt64), timestamp DateTime, outcome UInt8, reason LowCardinality(Nullable(String)), event_id Nullable(UUID)) ENGINE Distributed(cluster_one_sh, default, outcomes_raw_local, org_id);
CREATE TABLE IF NOT EXISTS outcomes_hourly_dist (org_id UInt64, project_id UInt64, key_id UInt64, timestamp DateTime, outcome UInt8, reason LowCardinality(String), times_seen UInt64) ENGINE Distributed(cluster_one_sh, default, outcomes_hourly_local, org_id);
-- end migration outcomes : 0001_outcomes
-- migration outcomes : 0002_outcomes_remove_size_and_bytes
Local operations:
ALTER TABLE outcomes_raw_local DROP COLUMN IF EXISTS size;
ALTER TABLE outcomes_hourly_local DROP COLUMN IF EXISTS bytes_received;


Dist operations:
n/a
-- end migration outcomes : 0002_outcomes_remove_size_and_bytes
-- migration outcomes : 0004_outcomes_matview_additions
Local operations:
DROP TABLE IF EXISTS outcomes_mv_hourly_local;
CREATE MATERIALIZED VIEW IF NOT EXISTS outcomes_mv_hourly_local TO outcomes_hourly_local (org_id UInt64, project_id UInt64, key_id UInt64, timestamp DateTime, outcome UInt8, reason String, category UInt8, quantity UInt64, times_seen UInt64) AS 
                    SELECT
                        org_id,
                        project_id,
                        ifNull(key_id, 0) AS key_id,
                        toStartOfHour(timestamp) AS timestamp,
                        outcome,
                        ifNull(reason, 'none') AS reason,
                        category,
                        count() AS times_seen,
                        sum(quantity) AS quantity
                    FROM outcomes_raw_local
                    GROUP BY org_id, project_id, key_id, timestamp, outcome, reason, category
                ;


Dist operations:
n/a
-- end migration outcomes : 0004_outcomes_matview_additions
-- migration profiles : 0001_profiles
Local operations:
CREATE TABLE IF NOT EXISTS profiles_local (organization_id UInt64, project_id UInt64, transaction_id UUID, profile_id UUID, received DateTime, profile String CODEC (LZ4HC(9)), android_api_level Nullable(UInt32), device_classification LowCardinality(String), device_locale LowCardinality(String), device_manufacturer LowCardinality(String), device_model LowCardinality(String), device_os_build_number LowCardinality(Nullable(String)), device_os_name LowCardinality(String), device_os_version LowCardinality(String), duration_ns UInt64, environment LowCardinality(Nullable(String)), platform LowCardinality(String), trace_id UUID, transaction_name LowCardinality(String), version_name String, version_code String, retention_days UInt16, partition UInt16, offset UInt64) ENGINE ReplicatedReplacingMergeTree('/clickhouse/tables/profiles/{shard}/default/profiles_local', '{replica}') ORDER BY (organization_id, project_id, toStartOfDay(received), cityHash64(profile_id)) PARTITION BY (retention_days, toMonday(received)) SAMPLE BY cityHash64(profile_id) TTL received + toIntervalDay(retention_days) SETTINGS index_granularity=8192;


Dist operations:
CREATE TABLE IF NOT EXISTS profiles_dist (organization_id UInt64, project_id UInt64, transaction_id UUID, profile_id UUID, received DateTime, profile String CODEC (LZ4HC(9)), android_api_level Nullable(UInt32), device_classification LowCardinality(String), device_locale LowCardinality(String), device_manufacturer LowCardinality(String), device_model LowCardinality(String), device_os_build_number LowCardinality(Nullable(String)), device_os_name LowCardinality(String), device_os_version LowCardinality(String), duration_ns UInt64, environment LowCardinality(Nullable(String)), platform LowCardinality(String), trace_id UUID, transaction_name LowCardinality(String), version_name String, version_code String, retention_days UInt16, partition UInt16, offset UInt64) ENGINE Distributed(cluster_one_sh, default, profiles_local, cityHash64(profile_id));
-- end migration profiles : 0001_profiles
-- migration querylog : 0001_querylog
Local operations:
CREATE TABLE IF NOT EXISTS querylog_local (request_id UUID, request_body String, referrer LowCardinality(String), dataset LowCardinality(String), projects Array(UInt64), organization Nullable(UInt64), timestamp DateTime, duration_ms UInt32, status Enum('success' = 0, 'error' = 1, 'rate-limited' = 2), clickhouse_queries Nested(sql String, status Enum('success' = 0, 'error' = 1, 'rate-limited' = 2), trace_id Nullable(UUID), duration_ms UInt32, stats String, final UInt8, cache_hit UInt8, sample Float32, max_threads UInt8, num_days UInt32, clickhouse_table LowCardinality(String), query_id String, is_duplicate UInt8, consistent UInt8)) ENGINE ReplicatedMergeTree('/clickhouse/tables/querylog/{shard}/default/querylog_local', '{replica}') ORDER BY (toStartOfDay(timestamp), request_id) PARTITION BY (toMonday(timestamp)) SAMPLE BY request_id;


Dist operations:
CREATE TABLE IF NOT EXISTS querylog_dist (request_id UUID, request_body String, referrer LowCardinality(String), dataset LowCardinality(String), projects Array(UInt64), organization Nullable(UInt64), timestamp DateTime, duration_ms UInt32, status Enum('success' = 0, 'error' = 1, 'rate-limited' = 2), clickhouse_queries Nested(sql String, status Enum('success' = 0, 'error' = 1, 'rate-limited' = 2), trace_id Nullable(UUID), duration_ms UInt32, stats String, final UInt8, cache_hit UInt8, sample Float32, max_threads UInt8, num_days UInt32, clickhouse_table LowCardinality(String), query_id String, is_duplicate UInt8, consistent UInt8)) ENGINE Distributed(cluster_one_sh, default, querylog_local);
-- end migration querylog : 0001_querylog
-- migration querylog : 0002_status_type_change
Local operations:
ALTER TABLE querylog_local MODIFY COLUMN status LowCardinality(String);
ALTER TABLE querylog_local MODIFY COLUMN clickhouse_queries.status Array(LowCardinality(String));


Dist operations:
ALTER TABLE querylog_dist MODIFY COLUMN status LowCardinality(String);
ALTER TABLE querylog_dist MODIFY COLUMN clickhouse_queries.status Array(LowCardinality(String));
-- end migration querylog : 0002_status_type_change
-- migration sessions : 0001_sessions
Local operations:
CREATE TABLE IF NOT EXISTS sessions_raw_local (session_id UUID, distinct_id UUID, seq UInt64, org_id UInt64, project_id UInt64, retention_days UInt16, duration UInt32, status UInt8, errors UInt16, received DateTime, started DateTime, release LowCardinality(String), environment LowCardinality(String)) ENGINE ReplicatedMergeTree('/clickhouse/tables/sessions/{shard}/default/sessions_raw_local', '{replica}') ORDER BY (org_id, project_id, release, environment, started) PARTITION BY (toMonday(started)) SETTINGS index_granularity=16384;
CREATE TABLE IF NOT EXISTS sessions_hourly_local (org_id UInt64, project_id UInt64, started DateTime, release LowCardinality(String), environment LowCardinality(String), duration_quantiles AggregateFunction(quantilesIf(0.5, 0.9), UInt32, UInt8), sessions AggregateFunction(countIf, UUID, UInt8), users AggregateFunction(uniqIf, UUID, UInt8), sessions_crashed AggregateFunction(countIf, UUID, UInt8), sessions_abnormal AggregateFunction(countIf, UUID, UInt8), sessions_errored AggregateFunction(uniqIf, UUID, UInt8), users_crashed AggregateFunction(uniqIf, UUID, UInt8), users_abnormal AggregateFunction(uniqIf, UUID, UInt8), users_errored AggregateFunction(uniqIf, UUID, UInt8)) ENGINE ReplicatedAggregatingMergeTree('/clickhouse/tables/sessions/{shard}/default/sessions_hourly_local', '{replica}') ORDER BY (org_id, project_id, release, environment, started) PARTITION BY (toMonday(started)) SETTINGS index_granularity=256;
CREATE MATERIALIZED VIEW IF NOT EXISTS sessions_hourly_mv_local TO sessions_hourly_local (org_id UInt64, project_id UInt64, started DateTime, release LowCardinality(String), environment LowCardinality(String), duration_quantiles AggregateFunction(quantilesIf(0.5, 0.9), UInt32, UInt8), sessions AggregateFunction(countIf, UUID, UInt8), users AggregateFunction(uniqIf, UUID, UInt8), sessions_crashed AggregateFunction(countIf, UUID, UInt8), sessions_abnormal AggregateFunction(countIf, UUID, UInt8), sessions_errored AggregateFunction(uniqIf, UUID, UInt8), users_crashed AggregateFunction(uniqIf, UUID, UInt8), users_abnormal AggregateFunction(uniqIf, UUID, UInt8), users_errored AggregateFunction(uniqIf, UUID, UInt8)) AS 
SELECT
    org_id,
    project_id,
    toStartOfHour(started) as started,
    release,
    environment,
    quantilesIfState(0.5, 0.9)(
        duration,
        duration <> 4294967295 AND status == 1
    ) as duration_quantiles,
    countIfState(session_id, seq == 0) as sessions,
    uniqIfState(distinct_id, distinct_id != '00000000-0000-0000-0000-000000000000') as users,
    countIfState(session_id, status == 2) as sessions_crashed,
    countIfState(session_id, status == 3) as sessions_abnormal,
    uniqIfState(session_id, errors > 0) as sessions_errored,
    uniqIfState(distinct_id, status == 2) as users_crashed,
    uniqIfState(distinct_id, status == 3) as users_abnormal,
    uniqIfState(distinct_id, errors > 0) as users_errored
FROM
    sessions_raw_local
GROUP BY
    org_id, project_id, started, release, environment
;


Dist operations:
CREATE TABLE IF NOT EXISTS sessions_raw_dist (session_id UUID, distinct_id UUID, seq UInt64, org_id UInt64, project_id UInt64, retention_days UInt16, duration UInt32, status UInt8, errors UInt16, received DateTime, started DateTime, release LowCardinality(String), environment LowCardinality(String)) ENGINE Distributed(cluster_one_sh, default, sessions_raw_local, org_id);
CREATE TABLE IF NOT EXISTS sessions_hourly_dist (org_id UInt64, project_id UInt64, started DateTime, release LowCardinality(String), environment LowCardinality(String), duration_quantiles AggregateFunction(quantilesIf(0.5, 0.9), UInt32, UInt8), sessions AggregateFunction(countIf, UUID, UInt8), users AggregateFunction(uniqIf, UUID, UInt8), sessions_crashed AggregateFunction(countIf, UUID, UInt8), sessions_abnormal AggregateFunction(countIf, UUID, UInt8), sessions_errored AggregateFunction(uniqIf, UUID, UInt8), users_crashed AggregateFunction(uniqIf, UUID, UInt8), users_abnormal AggregateFunction(uniqIf, UUID, UInt8), users_errored AggregateFunction(uniqIf, UUID, UInt8)) ENGINE Distributed(cluster_one_sh, default, sessions_hourly_local, org_id);
-- end migration sessions : 0001_sessions
-- migration sessions : 0002_sessions_aggregates
Local operations:
ALTER TABLE sessions_raw_local ADD COLUMN IF NOT EXISTS quantity UInt32 DEFAULT 1 AFTER distinct_id;
ALTER TABLE sessions_raw_local ADD COLUMN IF NOT EXISTS user_agent LowCardinality(String) AFTER environment;
ALTER TABLE sessions_raw_local ADD COLUMN IF NOT EXISTS os LowCardinality(String) AFTER user_agent;
ALTER TABLE sessions_hourly_local ADD COLUMN IF NOT EXISTS user_agent LowCardinality(String) AFTER environment;
ALTER TABLE sessions_hourly_local ADD COLUMN IF NOT EXISTS os LowCardinality(String) AFTER user_agent;
ALTER TABLE sessions_hourly_local ADD COLUMN IF NOT EXISTS duration_avg AggregateFunction(avgIf, UInt32, UInt8) AFTER duration_quantiles;
ALTER TABLE sessions_hourly_local ADD COLUMN IF NOT EXISTS sessions_preaggr AggregateFunction(sumIf, UInt32, UInt8) AFTER sessions;
ALTER TABLE sessions_hourly_local ADD COLUMN IF NOT EXISTS sessions_crashed_preaggr AggregateFunction(sumIf, UInt32, UInt8) AFTER sessions_crashed;
ALTER TABLE sessions_hourly_local ADD COLUMN IF NOT EXISTS sessions_abnormal_preaggr AggregateFunction(sumIf, UInt32, UInt8) AFTER sessions_abnormal;
ALTER TABLE sessions_hourly_local ADD COLUMN IF NOT EXISTS sessions_errored_preaggr AggregateFunction(sumIf, UInt32, UInt8) AFTER sessions_errored;


Dist operations:
ALTER TABLE sessions_raw_dist ADD COLUMN IF NOT EXISTS quantity UInt32 DEFAULT 1 AFTER distinct_id;
ALTER TABLE sessions_raw_dist ADD COLUMN IF NOT EXISTS user_agent LowCardinality(String) AFTER environment;
ALTER TABLE sessions_raw_dist ADD COLUMN IF NOT EXISTS os LowCardinality(String) AFTER user_agent;
ALTER TABLE sessions_hourly_dist ADD COLUMN IF NOT EXISTS user_agent LowCardinality(String) AFTER environment;
ALTER TABLE sessions_hourly_dist ADD COLUMN IF NOT EXISTS os LowCardinality(String) AFTER user_agent;
ALTER TABLE sessions_hourly_dist ADD COLUMN IF NOT EXISTS duration_avg AggregateFunction(avgIf, UInt32, UInt8) AFTER duration_quantiles;
ALTER TABLE sessions_hourly_dist ADD COLUMN IF NOT EXISTS sessions_preaggr AggregateFunction(sumIf, UInt32, UInt8) AFTER sessions;
ALTER TABLE sessions_hourly_dist ADD COLUMN IF NOT EXISTS sessions_crashed_preaggr AggregateFunction(sumIf, UInt32, UInt8) AFTER sessions_crashed;
ALTER TABLE sessions_hourly_dist ADD COLUMN IF NOT EXISTS sessions_abnormal_preaggr AggregateFunction(sumIf, UInt32, UInt8) AFTER sessions_abnormal;
ALTER TABLE sessions_hourly_dist ADD COLUMN IF NOT EXISTS sessions_errored_preaggr AggregateFunction(sumIf, UInt32, UInt8) AFTER sessions_errored;
-- end migration sessions : 0002_sessions_aggregates
-- migration sessions : 0003_sessions_matview
Local operations:
DROP TABLE IF EXISTS sessions_hourly_mv_local;
CREATE MATERIALIZED VIEW IF NOT EXISTS sessions_hourly_mv_local TO sessions_hourly_local (org_id UInt64, project_id UInt64, started DateTime, release LowCardinality(String), environment LowCardinality(String), user_agent LowCardinality(String), os LowCardinality(String), duration_quantiles AggregateFunction(quantilesIf(0.5, 0.9), UInt32, UInt8), duration_avg AggregateFunction(avgIf, UInt32, UInt8), sessions AggregateFunction(countIf, UUID, UInt8), sessions_preaggr AggregateFunction(sumIf, UInt32, UInt8), sessions_crashed AggregateFunction(countIf, UUID, UInt8), sessions_crashed_preaggr AggregateFunction(sumIf, UInt32, UInt8), sessions_abnormal AggregateFunction(countIf, UUID, UInt8), sessions_abnormal_preaggr AggregateFunction(sumIf, UInt32, UInt8), sessions_errored AggregateFunction(uniqIf, UUID, UInt8), sessions_errored_preaggr AggregateFunction(sumIf, UInt32, UInt8), users AggregateFunction(uniqIf, UUID, UInt8), users_crashed AggregateFunction(uniqIf, UUID, UInt8), users_abnormal AggregateFunction(uniqIf, UUID, UInt8), users_errored AggregateFunction(uniqIf, UUID, UInt8)) AS 
SELECT
    org_id,
    project_id,
    toStartOfHour(started) as started,
    release,
    environment,
    user_agent,
    os,

    -- pre-aggregated sessions don't have a duration, so no more filtering is needed.
    -- we would have liked to change the quantiles here, but the data structure allows
    -- querying arbitrary quantiles from the `quantilesState`:
    quantilesIfState(0.5, 0.9)(
        duration,
        duration <> 4294967295 AND status == 1
    ) as duration_quantiles,

    -- this is new; similarly to the above, pre-aggregated sessions don't have a duration:
    avgIfState(duration, duration <> 4294967295 AND status == 1) as duration_avg,

    -- `sum` the session counts based on the new `quantity`:
    sumIfState(quantity, seq == 0) as sessions_preaggr,
    sumIfState(quantity, status == 2) as sessions_crashed_preaggr,
    sumIfState(quantity, status == 3) as sessions_abnormal_preaggr,

    -- individual session updates keep using the uniq session_id as before:
    uniqIfState(session_id, errors > 0 AND session_id != '00000000-0000-0000-0000-000000000000') as sessions_errored,

    -- pre-aggregated counts use sum. by definition, `crashed` and `abnormal` are errored:
    sumIfState(quantity, status IN (2, 3, 4) AND session_id == '00000000-0000-0000-0000-000000000000') as sessions_errored_preaggr,

    -- users counts will additionally be constrained for the distinct-id:
    uniqIfState(distinct_id, distinct_id != '00000000-0000-0000-0000-000000000000') as users,
    uniqIfState(distinct_id, status == 2 AND distinct_id != '00000000-0000-0000-0000-000000000000') as users_crashed,
    uniqIfState(distinct_id, status == 3 AND distinct_id != '00000000-0000-0000-0000-000000000000') as users_abnormal,
    uniqIfState(distinct_id, errors > 0 AND distinct_id != '00000000-0000-0000-0000-000000000000') as users_errored

FROM
     sessions_raw_local
GROUP BY
     org_id, project_id, started, release, environment, user_agent, os
;


Dist operations:
n/a
-- end migration sessions : 0003_sessions_matview
-- migration spans_experimental : 0001_spans_experimental
Local operations:
CREATE TABLE IF NOT EXISTS spans_experimental_local (project_id UInt64, transaction_id UUID, trace_id UUID, transaction_span_id UInt64, span_id UInt64, parent_span_id Nullable(UInt64), transaction_name LowCardinality(String), description String, op LowCardinality(String), status UInt8 DEFAULT 2, start_ts DateTime, start_ns UInt32, finish_ts DateTime, finish_ns UInt32, duration_ms UInt32, tags Nested(key String, value String), retention_days UInt16, deleted UInt8) ENGINE ReplicatedReplacingMergeTree('/clickhouse/tables/transactions/{shard}/default/spans_experimental_local', '{replica}', deleted) ORDER BY (project_id, toStartOfDay(finish_ts), transaction_name, cityHash64(transaction_span_id), op, cityHash64(trace_id), cityHash64(span_id)) PARTITION BY (toMonday(finish_ts)) SAMPLE BY cityHash64(span_id) TTL finish_ts + toIntervalDay(retention_days) SETTINGS index_granularity=8192;
ALTER TABLE spans_experimental_local ADD COLUMN IF NOT EXISTS _tags_hash_map Array(UInt64) MATERIALIZED arrayMap((k, v) -> cityHash64(concat(replaceRegexpAll(k, '(\\=|\\\\)', '\\\\\\1'), '=', v)), tags.key, tags.value) AFTER tags.value;


Dist operations:
CREATE TABLE IF NOT EXISTS spans_experimental_dist (project_id UInt64, transaction_id UUID, trace_id UUID, transaction_span_id UInt64, span_id UInt64, parent_span_id Nullable(UInt64), transaction_name LowCardinality(String), description String, op LowCardinality(String), status UInt8 DEFAULT 2, start_ts DateTime, start_ns UInt32, finish_ts DateTime, finish_ns UInt32, duration_ms UInt32, tags Nested(key String, value String), retention_days UInt16, deleted UInt8) ENGINE Distributed(cluster_one_sh, default, spans_experimental_local, cityHash64(transaction_span_id));
ALTER TABLE spans_experimental_dist ADD COLUMN IF NOT EXISTS _tags_hash_map Array(UInt64) MATERIALIZED arrayMap((k, v) -> cityHash64(concat(replaceRegexpAll(k, '(\\=|\\\\)', '\\\\\\1'), '=', v)), tags.key, tags.value) AFTER tags.value;
-- end migration spans_experimental : 0001_spans_experimental
-- migration system : 0001_migrations
Local operations:
CREATE TABLE IF NOT EXISTS migrations_local (group String, migration_id String, timestamp DateTime, status Enum('completed' = 0, 'in_progress' = 1, 'not_started' = 2), version UInt64 DEFAULT 1) ENGINE ReplicatedReplacingMergeTree('/clickhouse/tables/migrations/{shard}/default/migrations_local', '{replica}', version) ORDER BY (group, migration_id);


Dist operations:
CREATE TABLE IF NOT EXISTS migrations_dist (group String, migration_id String, timestamp DateTime, status Enum('completed' = 0, 'in_progress' = 1, 'not_started' = 2), version UInt64 DEFAULT 1) ENGINE Distributed(cluster_one_sh, default, migrations_local, cityHash64(group));
-- end migration system : 0001_migrations
-- migration transactions : 0001_transactions
Local operations:
CREATE TABLE IF NOT EXISTS transactions_local (project_id UInt64, event_id UUID, trace_id UUID, span_id UInt64, transaction_name LowCardinality(String), transaction_hash UInt64 MATERIALIZED cityHash64(transaction_name), transaction_op LowCardinality(String), transaction_status UInt8 DEFAULT 2, start_ts DateTime, start_ms UInt16, finish_ts DateTime, finish_ms UInt16, duration UInt32, platform LowCardinality(String), environment LowCardinality(Nullable(String)), release LowCardinality(Nullable(String)), dist LowCardinality(Nullable(String)), ip_address_v4 Nullable(IPv4), ip_address_v6 Nullable(IPv6), user String DEFAULT '', user_hash UInt64 MATERIALIZED cityHash64(user), user_id Nullable(String), user_name Nullable(String), user_email Nullable(String), sdk_name LowCardinality(String) DEFAULT '', sdk_version LowCardinality(String) DEFAULT '', tags Nested(key String, value String), _tags_flattened String, contexts Nested(key String, value String), _contexts_flattened String, partition UInt16, offset UInt64, message_timestamp DateTime, retention_days UInt16, deleted UInt8) ENGINE ReplicatedReplacingMergeTree('/clickhouse/tables/transactions/{shard}/default/transactions_local', '{replica}', deleted) ORDER BY (project_id, toStartOfDay(finish_ts), transaction_name, cityHash64(span_id)) PARTITION BY (retention_days, toMonday(finish_ts)) SAMPLE BY cityHash64(span_id) TTL finish_ts + toIntervalDay(retention_days) SETTINGS index_granularity=8192;


Dist operations:
CREATE TABLE IF NOT EXISTS transactions_dist (project_id UInt64, event_id UUID, trace_id UUID, span_id UInt64, transaction_name LowCardinality(String), transaction_hash UInt64 MATERIALIZED cityHash64(transaction_name), transaction_op LowCardinality(String), transaction_status UInt8 DEFAULT 2, start_ts DateTime, start_ms UInt16, finish_ts DateTime, finish_ms UInt16, duration UInt32, platform LowCardinality(String), environment LowCardinality(Nullable(String)), release LowCardinality(Nullable(String)), dist LowCardinality(Nullable(String)), ip_address_v4 Nullable(IPv4), ip_address_v6 Nullable(IPv6), user String DEFAULT '', user_hash UInt64 MATERIALIZED cityHash64(user), user_id Nullable(String), user_name Nullable(String), user_email Nullable(String), sdk_name LowCardinality(String) DEFAULT '', sdk_version LowCardinality(String) DEFAULT '', tags Nested(key String, value String), _tags_flattened String, contexts Nested(key String, value String), _contexts_flattened String, partition UInt16, offset UInt64, message_timestamp DateTime, retention_days UInt16, deleted UInt8) ENGINE Distributed(cluster_one_sh, default, transactions_local, cityHash64(span_id));
-- end migration transactions : 0001_transactions
-- migration transactions : 0007_transactions_add_discover_cols
Local operations:
ALTER TABLE transactions_local ADD COLUMN IF NOT EXISTS type LowCardinality(String) MATERIALIZED 'transaction' AFTER deleted;
ALTER TABLE transactions_local ADD COLUMN IF NOT EXISTS message LowCardinality(String) MATERIALIZED transaction_name AFTER type;
ALTER TABLE transactions_local ADD COLUMN IF NOT EXISTS title LowCardinality(String) MATERIALIZED transaction_name AFTER message;
ALTER TABLE transactions_local ADD COLUMN IF NOT EXISTS timestamp DateTime MATERIALIZED finish_ts AFTER title;


Dist operations:
ALTER TABLE transactions_dist ADD COLUMN IF NOT EXISTS type LowCardinality(String) MATERIALIZED 'transaction' AFTER deleted;
ALTER TABLE transactions_dist ADD COLUMN IF NOT EXISTS message LowCardinality(String) MATERIALIZED transaction_name AFTER type;
ALTER TABLE transactions_dist ADD COLUMN IF NOT EXISTS title LowCardinality(String) MATERIALIZED transaction_name AFTER message;
ALTER TABLE transactions_dist ADD COLUMN IF NOT EXISTS timestamp DateTime MATERIALIZED finish_ts AFTER title;
-- end migration transactions : 0007_transactions_add_discover_cols
-- migration transactions : 0009_transactions_fix_title_and_message
Local operations:
ALTER TABLE transactions_local MODIFY COLUMN title String MATERIALIZED transaction_name;
ALTER TABLE transactions_local MODIFY COLUMN message String MATERIALIZED transaction_name;


Dist operations:
ALTER TABLE transactions_dist MODIFY COLUMN title String MATERIALIZED transaction_name;
ALTER TABLE transactions_dist MODIFY COLUMN message String MATERIALIZED transaction_name;
-- end migration transactions : 0009_transactions_fix_title_and_message
-- migration transactions : 0010_transactions_nullable_trace_id
Local operations:
ALTER TABLE transactions_local MODIFY COLUMN trace_id Nullable(UUID);


Dist operations:
ALTER TABLE transactions_dist MODIFY COLUMN trace_id Nullable(UUID);
-- end migration transactions : 0010_transactions_nullable_trace_id

@evanh
Member

evanh commented Apr 14, 2022

I'm a little confused, what is the actual change here? Even ignoring whitespace this PR is >2000 lines long. Are all the reformatting changes because of a new version of a linter? If so, can you configure black to not make all these changes?

@asottile-sentry
Member Author

I'm a little confused, what is the actual change here? Even ignoring whitespace this PR is >2000 lines long. Are all the reformatting changes because of a new version of a linter? If so, can you configure black to not make all these changes?

the linters are "configured" but aren't being enforced in CI -- this PR fixes them to be enforced in CI (the fourth commit) and resolves the drift from un-linted/un-formatted changes (second and third commits) which have been introduced due to the CI check erroneously passing.
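The exact workflow wiring isn't shown on this page, so the following is a hypothetical reconstruction of how a check like this can "silently pass": if the changed-file list comes from a command substitution over a `git diff` against a ref that doesn't exist, the substitution fails, expands to nothing, and the linter succeeds trivially on zero files.

```shell
set -u
# Hypothetical illustration (the real workflow's commands are an assumption):
# the diff runs somewhere `master` cannot be resolved -- here, a directory
# that isn't even a git repo -- so the file list comes back empty and any
# "lint only changed files" step checks nothing.
scratch=$(mktemp -d)
files=$(git -C "$scratch" diff --name-only master 2>/dev/null || true)
echo "files to lint: '${files}'"   # prints: files to lint: ''
```

With the error swallowed and an empty file list, the step exits 0 and CI stays green even though nothing was actually linted.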

@evanh
Member

evanh commented Apr 14, 2022

OK I understand what is being changed.

which have been introduced due to the CI check erroneously passing.

The pre-commit linters were still happening locally though. My suggestion here would be to fix the CI check, but don't lint the entire code base. The files that are currently not linted correctly will get fixed if/when they get changed in subsequent PRs.

@asottile-sentry
Member Author

OK I understand what is being changed.

which have been introduced due to the CI check erroneously passing.

The pre-commit linters were still happening locally though. My suggestion here would be to fix the CI check, but don't lint the entire code base. The files that are currently not linted correctly will get fixed if/when they get changed in subsequent PRs.

they were happening when people used them, but as far as I can tell they weren't running in the general case (black for instance in its current configuration cannot run successfully at all! the version configured always crashes with the common _unicodefun error: psf/black#2964)

generally I recommend doing it all in one PR otherwise other PRs will end up with unrelated linter/formatter noise which makes it more difficult to accurately review changes being made. I specifically separated the noisy commits into separately reviewable patches for this reason

Member

@evanh evanh left a comment


I meant to say, I think this would have been working when it was run as part of the pre-commit locally on git commit. However this is fine, I'll approve before you get more merge conflicts.

mypy is tested via `make backend-typing`
previously the check in CI was always silently passing due to `master` not being
available with `actions/checkout@v2`:

```
fatal: ambiguous argument 'master': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
```
@asottile-sentry asottile-sentry merged commit 14ba5af into master Apr 19, 2022
@asottile-sentry asottile-sentry deleted the asottile-fix-pre-commit branch April 19, 2022 16:04