We currently do something like this in our recording rules:
```promql
# get usage metric per bucket
max by (bucket_name, location) (
  last_over_time((stackdriver_gcs_bucket_storage_googleapis_com_storage_total_bytes > 0)[3h:1m])
)
/ 1024^3
# join the storage class onto it
* on (bucket_name) group_left (storage_class) (
  max by (bucket_name, storage_class) (
    label_replace(gcp_gcs_bucket_info, "storage_class", "REGIONAL", "storage_class", "STANDARD")
  )
)
# multiply with the location/storage_class cost metric
* on (location, storage_class) group_left
max by (location, storage_class) (
  last_over_time(gcp_gcs_storage_hourly_cost[15m])
  * on (location, storage_class)
  (1 - gcp_gcs_storage_discount)
) / 60 # hourly_cost -> cost_per_minute
```
The metrics `gcp_gcs_bucket_info` and `gcp_gcs_storage_hourly_cost` are emitted by cloudcost-exporter. Would it be better to emit a cost metric per bucket instead, doing the join inside cloudcost_exporter itself?
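Roughly, the join the exporter would take over could look like this (a plain Go sketch with hypothetical names and structures; the real cloudcost_exporter internals may differ):

```go
package main

import "fmt"

// bucketInfo and priceKey are illustrative stand-ins for whatever the
// exporter already knows about each bucket and each price-list entry.
type bucketInfo struct {
	location     string
	storageClass string
}

type priceKey struct {
	location     string
	storageClass string
}

// perBucketHourlyCost joins bucket metadata with the (location, storage class)
// price list, applying the same STANDARD -> REGIONAL normalization that the
// label_replace in the recording rule performs today. The result could be
// exported as one hourly-cost series per bucket.
func perBucketHourlyCost(buckets map[string]bucketInfo, prices map[priceKey]float64) map[string]float64 {
	out := make(map[string]float64)
	for name, info := range buckets {
		class := info.storageClass
		if class == "STANDARD" {
			class = "REGIONAL" // price list keys standard buckets as REGIONAL
		}
		if price, ok := prices[priceKey{info.location, class}]; ok {
			out[name] = price
		}
	}
	return out
}

func main() {
	buckets := map[string]bucketInfo{
		"my-bucket": {location: "us-central1", storageClass: "STANDARD"},
	}
	prices := map[priceKey]float64{
		{"us-central1", "REGIONAL"}: 0.00002739, // made-up $/GiB/hour figure
	}
	fmt.Println(perBucketHourlyCost(buckets, prices))
}
```

The key point is that the (bucket → location/storage class → price) resolution happens once in the exporter, so consumers never need the `on (location, storage_class)` join.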
This would simplify the PromQL to:
```promql
# get usage metric per bucket
max by (bucket_name) (
  last_over_time((stackdriver_gcs_bucket_storage_googleapis_com_storage_total_bytes > 0)[3h:1m])
)
/ 1024^3
# multiply with the bucket cost metric
* on (bucket_name) group_left
max by (bucket_name) (
  last_over_time(gcp_gcs_storage_hourly_cost[15m])
  * on (bucket_name)
  (1 - gcp_gcs_storage_discount)
) / 60 # hourly_cost -> cost_per_minute
```
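For context, this is how the simplified expression could be wired into a Prometheus recording-rule file (a sketch; the group, rule, and recorded metric names here are illustrative, not the ones we actually use):

```yaml
groups:
  - name: gcs_cost
    rules:
      - record: gcs:bucket_storage_cost_per_minute:gib
        expr: |
          max by (bucket_name) (
            last_over_time((stackdriver_gcs_bucket_storage_googleapis_com_storage_total_bytes > 0)[3h:1m])
          )
          / 1024^3
          * on (bucket_name) group_left
          max by (bucket_name) (
            last_over_time(gcp_gcs_storage_hourly_cost[15m])
            * on (bucket_name)
            (1 - gcp_gcs_storage_discount)
          ) / 60
```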
the-it changed the title from "Make bucket to region matching in cloudcost exporter instead of in PromQL" to "Make bucket to region matching in cloudcost exporter instead of at PromQL level" on Jan 17, 2024.
I'm in favor of this simplification, with one caveat: the few times I've gone down this path, I've struggled with mapping stackdriver_exporter labels to cloudcost_exporter's. We just need to tread carefully, ensure the joins work as expected, and validate that the resulting data is the same.
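One way to sanity-check the label mapping during a transition (a sketch; it assumes the per-bucket cost metric ends up carrying a `bucket_name` label that matches stackdriver_exporter's):

```promql
# Buckets that stackdriver_exporter reports usage for but that have no
# matching series in the per-bucket cost metric. Any result here points
# at a label-mapping gap between the two exporters.
count by (bucket_name) (stackdriver_gcs_bucket_storage_googleapis_com_storage_total_bytes)
  unless on (bucket_name)
count by (bucket_name) (gcp_gcs_storage_hourly_cost)
```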