
feat!: update helm chart kube-prometheus-stack to 43.0.0 #2744

Merged
merged 1 commit on Dec 16, 2022

Conversation

bloopy-boi[bot]
Contributor

@bloopy-boi bloopy-boi bot commented Dec 13, 2022

This PR contains the following updates:

Package: kube-prometheus-stack (source)
Update type: major
Change: 42.3.0 -> 43.0.0

⚠ Dependency Lookup Warnings ⚠

Warnings were logged while processing this repo. Please check the Dependency Dashboard for more information.


Release Notes

prometheus-community/helm-charts

v43.0.0

Compare Source

kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards, and Prometheus rules, combined with documentation and scripts, to provide easy-to-operate end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

@bloopy-boi bloopy-boi bot requested a review from h3mmy as a code owner December 13, 2022 12:39
@bloopy-boi bloopy-boi bot added the labels renovate/helm, type/major, area/cluster (changes made in the cluster directory), and size/XS (denotes a PR that changes 0-9 lines, ignoring generated files) on Dec 13, 2022
@bloopy-boi
Contributor Author

bloopy-boi bot commented Dec 13, 2022

Path: cluster/core/kube-prometheus-stack/helm-release.yaml
Version: 42.3.0 -> 43.1.0

@@ -10271,7 +10271,7 @@
 "steppedLine": false,
 "targets": [
 {
- "expr": "ceil(sum by(namespace) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]) + rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval])))",
+ "expr": "ceil(sum by(namespace) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]) + rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval])))",
 "format": "time_series",
 "intervalFactor": 2,
 "legendFormat": "{{namespace}}",
@@ -10360,7 +10360,7 @@
 "steppedLine": false,
 "targets": [
 {
- "expr": "sum by(namespace) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]) + rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]))",
+ "expr": "sum by(namespace) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]) + rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]))",
 "format": "time_series",
 "intervalFactor": 2,
 "legendFormat": "{{namespace}}",
@@ -10621,7 +10621,7 @@
 ],
 "targets": [
 {
- "expr": "sum by(namespace) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]))",
+ "expr": "sum by(namespace) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -10630,7 +10630,7 @@
 "step": 10
 },
 {
- "expr": "sum by(namespace) (rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]))",
+ "expr": "sum by(namespace) (rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -10639,7 +10639,7 @@
 "step": 10
 },
 {
- "expr": "sum by(namespace) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]) + rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]))",
+ "expr": "sum by(namespace) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]) + rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -10648,7 +10648,7 @@
 "step": 10
 },
 {
- "expr": "sum by(namespace) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]))",
+ "expr": "sum by(namespace) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -10657,7 +10657,7 @@
 "step": 10
 },
 {
- "expr": "sum by(namespace) (rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]))",
+ "expr": "sum by(namespace) (rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -10666,7 +10666,7 @@
 "step": 10
 },
 {
- "expr": "sum by(namespace) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]) + rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]))",
+ "expr": "sum by(namespace) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]) + rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace!=\"\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -13036,7 +13036,7 @@
 "steppedLine": false,
 "targets": [
 {
- "expr": "ceil(sum by(pod) (rate(container_fs_reads_total{container!=\"\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]) + rate(container_fs_writes_total{container!=\"\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval])))",
+ "expr": "ceil(sum by(pod) (rate(container_fs_reads_total{container!=\"\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]) + rate(container_fs_writes_total{container!=\"\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval])))",
 "format": "time_series",
 "intervalFactor": 2,
 "legendFormat": "{{pod}}",
@@ -13125,7 +13125,7 @@
 "steppedLine": false,
 "targets": [
 {
- "expr": "sum by(pod) (rate(container_fs_reads_bytes_total{container!=\"\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]) + rate(container_fs_writes_bytes_total{container!=\"\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]))",
+ "expr": "sum by(pod) (rate(container_fs_reads_bytes_total{container!=\"\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]) + rate(container_fs_writes_bytes_total{container!=\"\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]))",
 "format": "time_series",
 "intervalFactor": 2,
 "legendFormat": "{{pod}}",
@@ -13386,7 +13386,7 @@
 ],
 "targets": [
 {
- "expr": "sum by(pod) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]))",
+ "expr": "sum by(pod) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -13395,7 +13395,7 @@
 "step": 10
 },
 {
- "expr": "sum by(pod) (rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]))",
+ "expr": "sum by(pod) (rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -13404,7 +13404,7 @@
 "step": 10
 },
 {
- "expr": "sum by(pod) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]) + rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]))",
+ "expr": "sum by(pod) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]) + rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -13413,7 +13413,7 @@
 "step": 10
 },
 {
- "expr": "sum by(pod) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]))",
+ "expr": "sum by(pod) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -13422,7 +13422,7 @@
 "step": 10
 },
 {
- "expr": "sum by(pod) (rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]))",
+ "expr": "sum by(pod) (rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -13431,7 +13431,7 @@
 "step": 10
 },
 {
- "expr": "sum by(pod) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]) + rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]))",
+ "expr": "sum by(pod) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]) + rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -16287,7 +16287,7 @@
 "steppedLine": false,
 "targets": [
 {
- "expr": "ceil(sum by(pod) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval])))",
+ "expr": "ceil(sum by(pod) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval])))",
 "format": "time_series",
 "intervalFactor": 2,
 "legendFormat": "Reads",
@@ -16295,7 +16295,7 @@
 "step": 10
 },
 {
- "expr": "ceil(sum by(pod) (rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\",namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval])))",
+ "expr": "ceil(sum by(pod) (rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\",namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval])))",
 "format": "time_series",
 "intervalFactor": 2,
 "legendFormat": "Writes",
@@ -16384,7 +16384,7 @@
 "steppedLine": false,
 "targets": [
 {
- "expr": "sum by(pod) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval]))",
+ "expr": "sum by(pod) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval]))",
 "format": "time_series",
 "intervalFactor": 2,
 "legendFormat": "Reads",
@@ -16392,7 +16392,7 @@
 "step": 10
 },
 {
- "expr": "sum by(pod) (rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval]))",
+ "expr": "sum by(pod) (rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval]))",
 "format": "time_series",
 "intervalFactor": 2,
 "legendFormat": "Writes",
@@ -16844,7 +16844,7 @@
 ],
 "targets": [
 {
- "expr": "sum by(container) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]))",
+ "expr": "sum by(container) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -16853,7 +16853,7 @@
 "step": 10
 },
 {
- "expr": "sum by(container) (rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\",device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]))",
+ "expr": "sum by(container) (rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\",device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -16862,7 +16862,7 @@
 "step": 10
 },
 {
- "expr": "sum by(container) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]) + rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]))",
+ "expr": "sum by(container) (rate(container_fs_reads_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]) + rate(container_fs_writes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -16871,7 +16871,7 @@
 "step": 10
 },
 {
- "expr": "sum by(container) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]))",
+ "expr": "sum by(container) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -16880,7 +16880,7 @@
 "step": 10
 },
 {
- "expr": "sum by(container) (rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]))",
+ "expr": "sum by(container) (rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -16889,7 +16889,7 @@
 "step": 10
 },
 {
- "expr": "sum by(container) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]) + rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]))",
+ "expr": "sum by(container) (rate(container_fs_reads_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]) + rate(container_fs_writes_bytes_total{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\", container!=\"\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[$__rate_interval]))",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -27646,7 +27646,7 @@
 "steppedLine": false,
 "targets": [
 {
- "expr": "sum without (device) (\n max without (fstype, mountpoint) ((\n node_filesystem_size_bytes{job=\"node-exporter\", fstype!=\"\", cluster=\"$cluster\"}\n -\n node_filesystem_avail_bytes{job=\"node-exporter\", fstype!=\"\", cluster=\"$cluster\"}\n ) != 0)\n)\n/ scalar(sum(max without (fstype, mountpoint) (node_filesystem_size_bytes{job=\"node-exporter\", fstype!=\"\", cluster=\"$cluster\"})))\n",
+ "expr": "sum without (device) (\n max without (fstype, mountpoint) ((\n node_filesystem_size_bytes{job=\"node-exporter\", fstype!=\"\", mountpoint!=\"\", cluster=\"$cluster\"}\n -\n node_filesystem_avail_bytes{job=\"node-exporter\", fstype!=\"\", mountpoint!=\"\", cluster=\"$cluster\"}\n ) != 0)\n)\n/ scalar(sum(max without (fstype, mountpoint) (node_filesystem_size_bytes{job=\"node-exporter\", fstype!=\"\", mountpoint!=\"\", cluster=\"$cluster\"})))\n",
 "format": "time_series",
 "intervalFactor": 2,
 "legendFormat": "{{instance}}",
@@ -29365,21 +29365,21 @@
 "steppedLine": false,
 "targets": [
 {
- "expr": "rate(node_disk_read_bytes_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\"}[$__rate_interval])",
+ "expr": "rate(node_disk_read_bytes_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\"}[$__rate_interval])",
 "format": "time_series",
 "intervalFactor": 1,
 "legendFormat": "{{device}} read",
 "refId": "A"
 },
 {
- "expr": "rate(node_disk_written_bytes_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\"}[$__rate_interval])",
+ "expr": "rate(node_disk_written_bytes_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\"}[$__rate_interval])",
 "format": "time_series",
 "intervalFactor": 1,
 "legendFormat": "{{device}} written",
 "refId": "B"
 },
 {
- "expr": "rate(node_disk_io_time_seconds_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\"}[$__rate_interval])",
+ "expr": "rate(node_disk_io_time_seconds_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\"}[$__rate_interval])",
 "format": "time_series",
 "intervalFactor": 1,
 "legendFormat": "{{device}} io time",
@@ -29533,14 +29533,14 @@
 "span": 6,
 "targets": [
 {
- "expr": "max by (mountpoint) (node_filesystem_size_bytes{job=\"node-exporter\", instance=\"$instance\", fstype!=\"\"})\n",
+ "expr": "max by (mountpoint) (node_filesystem_size_bytes{job=\"node-exporter\", instance=\"$instance\", fstype!=\"\", mountpoint!=\"\"})\n",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
 "legendFormat": ""
 },
 {
- "expr": "max by (mountpoint) (node_filesystem_avail_bytes{job=\"node-exporter\", instance=\"$instance\", fstype!=\"\"})\n",
+ "expr": "max by (mountpoint) (node_filesystem_avail_bytes{job=\"node-exporter\", instance=\"$instance\", fstype!=\"\", mountpoint!=\"\"})\n",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -30426,21 +30426,21 @@
 "steppedLine": false,
 "targets": [
 {
- "expr": "rate(node_disk_read_bytes_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\"}[$__rate_interval])",
+ "expr": "rate(node_disk_read_bytes_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\"}[$__rate_interval])",
 "format": "time_series",
 "intervalFactor": 1,
 "legendFormat": "{{device}} read",
 "refId": "A"
 },
 {
- "expr": "rate(node_disk_written_bytes_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\"}[$__rate_interval])",
+ "expr": "rate(node_disk_written_bytes_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\"}[$__rate_interval])",
 "format": "time_series",
 "intervalFactor": 1,
 "legendFormat": "{{device}} written",
 "refId": "B"
 },
 {
- "expr": "rate(node_disk_io_time_seconds_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\"}[$__rate_interval])",
+ "expr": "rate(node_disk_io_time_seconds_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)\"}[$__rate_interval])",
 "format": "time_series",
 "intervalFactor": 1,
 "legendFormat": "{{device}} io time",
@@ -30594,14 +30594,14 @@
 "span": 6,
 "targets": [
 {
- "expr": "max by (mountpoint) (node_filesystem_size_bytes{job=\"node-exporter\", instance=\"$instance\", fstype!=\"\"})\n",
+ "expr": "max by (mountpoint) (node_filesystem_size_bytes{job=\"node-exporter\", instance=\"$instance\", fstype!=\"\", mountpoint!=\"\"})\n",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
 "legendFormat": ""
 },
 {
- "expr": "max by (mountpoint) (node_filesystem_avail_bytes{job=\"node-exporter\", instance=\"$instance\", fstype!=\"\"})\n",
+ "expr": "max by (mountpoint) (node_filesystem_avail_bytes{job=\"node-exporter\", instance=\"$instance\", fstype!=\"\", mountpoint!=\"\"})\n",
 "format": "table",
 "instant": true,
 "intervalFactor": 2,
@@ -32922,7 +32922,7 @@
 
 ],
 "type": "number",
- "unit": "short"
+ "unit": "s"
 },
 {
 "alias": "Instance",
@@ -36165,17 +36165,17 @@
 spec:
 containers:
 - name: kube-prometheus-stack
- image: "quay.io/prometheus-operator/prometheus-operator:v0.60.1"
+ image: "quay.io/prometheus-operator/prometheus-operator:v0.61.1"
 imagePullPolicy: "IfNotPresent"
 args:
 - --kubelet-service=kube-system/prometheus-kubelet
 - --localhost=127.0.0.1
- - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.60.1
+ - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.61.1
 - --config-reloader-cpu-request=200m
 - --config-reloader-cpu-limit=200m
 - --config-reloader-memory-request=50Mi
 - --config-reloader-memory-limit=50Mi
- - --thanos-default-base-image=quay.io/thanos/thanos:v0.28.1
+ - --thanos-default-base-image=quay.io/thanos/thanos:v0.29.0
 - --web.enable-tls=true
 - --web.cert-file=/cert/cert
 - --web.key-file=/cert/key
@@ -36293,8 +36293,8 @@
 port: http-web
 pathPrefix: "/"
 apiVersion: v2
- image: "quay.io/prometheus/prometheus:v2.39.1"
- version: v2.39.1
+ image: "quay.io/prometheus/prometheus:v2.40.5"
+ version: v2.40.5
 externalUrl: http://prometheus-prometheus.default:9090
 paused: false
 replicas: 1
@@ -38370,9 +38370,9 @@
 record: instance:node_memory_utilisation:ratio
 - expr: rate(node_vmstat_pgmajfault{job="node-exporter"}[5m])
 record: instance:node_vmstat_pgmajfault:rate5m
- - expr: rate(node_disk_io_time_seconds_total{job="node-exporter", device=~"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)"}[5m])
+ - expr: rate(node_disk_io_time_seconds_total{job="node-exporter", device=~"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)"}[5m])
 record: instance_device:node_disk_io_time_seconds:rate5m
- - expr: rate(node_disk_io_time_weighted_seconds_total{job="node-exporter", device=~"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)"}[5m])
+ - expr: rate(node_disk_io_time_weighted_seconds_total{job="node-exporter", device=~"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)"}[5m])
 record: instance_device:node_disk_io_time_weighted_seconds:rate5m
 - expr: |-
 sum without (device) (
@@ -38419,11 +38419,11 @@
 summary: Filesystem is predicted to run out of space within the next 24 hours.
 expr: |-
 (
- node_filesystem_avail_bytes{job="node-exporter",fstype!=""} / node_filesystem_size_bytes{job="node-exporter",fstype!=""} * 100 < 15
+ node_filesystem_avail_bytes{job="node-exporter",fstype!="",mountpoint!=""} / node_filesystem_size_bytes{job="node-exporter",fstype!="",mountpoint!=""} * 100 < 15
 and
- predict_linear(node_filesystem_avail_bytes{job="node-exporter",fstype!=""}[6h], 24*60*60) < 0
+ predict_linear(node_filesystem_avail_bytes{job="node-exporter",fstype!="",mountpoint!=""}[6h], 24*60*60) < 0
 and
- node_filesystem_readonly{job="node-exporter",fstype!=""} == 0
+ node_filesystem_readonly{job="node-exporter",fstype!="",mountpoint!=""} == 0
 )
 for: 1h
 labels:
@@ -38435,11 +38435,11 @@
 summary: Filesystem is predicted to run out of space within the next 4 hours.
 expr: |-
 (
- node_filesystem_avail_bytes{job="node-exporter",fstype!=""} / node_filesystem_size_bytes{job="node-exporter",fstype!=""} * 100 < 10
+ node_filesystem_avail_bytes{job="node-exporter",fstype!="",mountpoint!=""} / node_filesystem_size_bytes{job="node-exporter",fstype!="",mountpoint!=""} * 100 < 10
 and
- predict_linear(node_filesystem_avail_bytes{job="node-exporter",fstype!=""}[6h], 4*60*60) < 0
+ predict_linear(node_filesystem_avail_bytes{job="node-exporter",fstype!="",mountpoint!=""}[6h], 4*60*60) < 0
 and
- node_filesystem_readonly{job="node-exporter",fstype!=""} == 0
+ node_filesystem_readonly{job="node-exporter",fstype!="",mountpoint!=""} == 0
 )
 for: 1h
 labels:
@@ -38451,9 +38451,9 @@
 summary: Filesystem has less than 5% space left.
 expr: |-
 (
- node_filesystem_avail_bytes{job="node-exporter",fstype!=""} / node_filesystem_size_bytes{job="node-exporter",fstype!=""} * 100 < 5
+ node_filesystem_avail_bytes{job="node-exporter",fstype!="",mountpoint!=""} / node_filesystem_size_bytes{job="node-exporter",fstype!="",mountpoint!=""} * 100 < 5
 and
- node_filesystem_readonly{job="node-exporter",fstype!=""} == 0
+ node_filesystem_readonly{job="node-exporter",fstype!="",mountpoint!=""} == 0
 )
 for: 30m
 labels:
@@ -38465,9 +38465,9 @@
 summary: Filesystem has less than 3% space left.
 expr: |-
 (
- node_filesystem_avail_bytes{job="node-exporter",fstype!=""} / node_filesystem_size_bytes{job="node-exporter",fstype!=""} * 100 < 3
+ node_filesystem_avail_bytes{job="node-exporter",fstype!="",mountpoint!=""} / node_filesystem_size_bytes{job="node-exporter",fstype!="",mountpoint!=""} * 100 < 3
 and
- node_filesystem_readonly{job="node-exporter",fstype!=""} == 0
+ node_filesystem_readonly{job="node-exporter",fstype!="",mountpoint!=""} == 0
 )
 for: 30m
 labels:
@@ -38479,11 +38479,11 @@
 summary: Filesystem is predicted to run out of inodes within the next 24 hours.
 expr: |-
 (
- node_filesystem_files_free{job="node-exporter",fstype!=""} / node_filesystem_files{job="node-exporter",fstype!=""} * 100 < 40
+ node_filesystem_files_free{job="node-exporter",fstype!="",mountpoint!=""} / node_filesystem_files{job="node-exporter",fstype!="",mountpoint!=""} * 100 < 40
 and
- predict_linear(node_filesystem_files_free{job="node-exporter",fstype!=""}[6h], 24*60*60) < 0
+ predict_linear(node_filesystem_files_free{job="node-exporter",fstype!="",mountpoint!=""}[6h], 24*60*60) < 0
 and
- node_filesystem_readonly{job="node-exporter",fstype!=""} == 0
+ node_filesystem_readonly{job="node-exporter",fstype!="",mountpoint!=""} == 0
 )
 for: 1h
 labels:
@@ -38495,11 +38495,11 @@
 summary: Filesystem is predicted to run out of inodes within the next 4 hours.
 expr: |-
 (
- node_filesystem_files_free{job="node-exporter",fstype!=""} / node_filesystem_files{job="node-exporter",fstype!=""} * 100 < 20
+ node_filesystem_files_free{job="node-exporter",fstype!="",mountpoint!=""} / node_filesystem_files{job="node-exporter",fstype!="",mountpoint!=""} * 100 < 20
 and
- predict_linear(node_filesystem_files_free{job="node-exporter",fstype!=""}[6h], 4*60*60) < 0
+ predict_linear(node_filesystem_files_free{job="node-exporter",fstype!="",mountpoint!=""}[6h], 4*60*60) < 0
 and
- node_filesystem_readonly{job="node-exporter",fstype!=""} == 0
+ node_filesystem_readonly{job="node-exporter",fstype!="",mountpoint!=""} == 0
 )
 for: 1h
 labels:
@@ -38511,9 +38511,9 @@
 summary: Filesystem has less than 5% inodes left.
 expr: |-
 (
- node_filesystem_files_free{job="node-exporter",fstype!=""} / node_filesystem_files{job="node-exporter",fstype!=""} * 100 < 5
+ node_filesystem_files_free{job="node-exporter",fstype!="",mountpoint!=""} / node_filesystem_files{job="node-exporter",fstype!="",mountpoint!=""} * 100 < 5
 and
- node_filesystem_readonly{job="node-exporter",fstype!=""} == 0
+ node_filesystem_readonly{job="node-exporter",fstype!="",mountpoint!=""} == 0
 )
 for: 1h
 labels:
@@ -38525,9 +38525,9 @@
 summary: Filesystem has less than 3% inodes left.
 expr: |-
 (
- node_filesystem_files_free{job="node-exporter",fstype!=""} / node_filesystem_files{job="node-exporter",fstype!=""} * 100 < 3
+ node_filesystem_files_free{job="node-exporter",fstype!="",mountpoint!=""} / node_filesystem_files{job="node-exporter",fstype!="",mountpoint!=""} * 100 < 3
 and
- node_filesystem_readonly{job="node-exporter",fstype!=""} == 0
+ node_filesystem_readonly{job="node-exporter",fstype!="",mountpoint!=""} == 0
 )
 for: 1h
 labels:
@@ -38603,7 +38603,7 @@
 description: RAID array '{{ $labels.device }}' on {{ $labels.instance }} is in degraded state due to one or more disks failures. Number of spare drives is insufficient to fix issue automatically.
 runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/noderaiddegraded
 summary: RAID Array is degraded
- expr: node_md_disks_required{job="node-exporter",device=~"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)"} - ignoring (state) (node_md_disks{state="active",job="node-exporter",device=~"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)"}) > 0
+ expr: node_md_disks_required{job="node-exporter",device=~"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)"} - ignoring (state) (node_md_disks{state="active",job="node-exporter",device=~"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)"}) > 0
 for: 15m
 labels:
 severity: critical
@@ -38612,7 +38612,7 @@
 description: At least one device in RAID array on {{ $labels.instance }} failed. Array '{{ $labels.device }}' needs attention and possibly a disk swap.
 runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/noderaiddiskfailure
 summary: Failed device in RAID array
- expr: node_md_disks{state="failed",job="node-exporter",device=~"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)"} > 0
+ expr: node_md_disks{state="failed",job="node-exporter",device=~"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)"} > 0
 labels:
 severity: warning
 - alert: NodeFileDescriptorLimit
@@ -38691,11 +38691,11 @@
 ))
 record: 'node_namespace_pod:kube_pod_info:'
 - expr: |-
- count by (cluster, node) (sum by (node, cpu) (
- node_cpu_seconds_total{job="node-exporter"}
- * on (namespace, pod) group_left(node)
+ count by (cluster, node) (
+ node_cpu_seconds_total{mode="idle",job="node-exporter"}
+ * on (namespace, pod) group_left(node)
 topk by(namespace, pod) (1, node_namespace_pod:kube_pod_info:)
- ))
+ )
 record: node:node_num_cpu:sum
 - expr: |-
 sum(
@@ -38709,8 +38709,16 @@
 ) by (cluster)
 record: :node_memory_MemAvailable_bytes:sum
 - expr: |-
- sum(rate(node_cpu_seconds_total{job="node-exporter",mode!="idle",mode!="iowait",mode!="steal"}[5m])) /
- count(sum(node_cpu_seconds_total{job="node-exporter"}) by (cluster, instance, cpu))
+ avg by (cluster, node) (
+ sum without (mode) (
+ rate(node_cpu_seconds_total{mode!="idle",mode!="iowait",mode!="steal",job="node-exporter"}[5m])
+ )
+ )
+ record: node:node_cpu_utilization:ratio_rate5m
+ - expr: |-
+ avg by (cluster) (
+ node:node_cpu_utilization:ratio_rate5m
+ )
 record: cluster:node_cpu:ratio_rate5m
 ---
 # Source: kube-prometheus-stack/templates/prometheus/rules-1.14/prometheus-operator.yaml
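A recurring change in this diff is that the node-exporter device regex gains an `md.+` alternative, so Linux software-RAID devices (`md0`, `md127`, …) are now covered by the disk I/O recording rules and RAID alerts. As a quick sanity check of that pattern (a hypothetical Python snippet, not part of the chart; Prometheus itself evaluates `=~` matchers with RE2 and implicit anchoring, which `fullmatch` approximates here):

```python
import re

# Device regex from the updated rules; "md.+" is the newly added alternative.
DEVICE_RE = re.compile(
    r"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|md.+|dasd.+)"
)

assert DEVICE_RE.fullmatch("md0")        # software-RAID device, matched as of 43.0.0
assert DEVICE_RE.fullmatch("/dev/sda1")  # still matched
assert DEVICE_RE.fullmatch("nvme0n1")    # still matched
assert not DEVICE_RE.fullmatch("loop0")  # loop devices remain excluded
```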

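The `NodeFilesystem*FillingUp` alerts touched above gate on PromQL's `predict_linear()`, a least-squares linear extrapolation over the samples in a range vector: the alert fires when the 6h trend predicts free space (or free inodes) below zero within 24h or 4h. A minimal Python sketch of that idea (illustrative only; the helper and sample data are made up, not chart code):

```python
def predict_linear(samples, seconds_ahead):
    """Least-squares linear extrapolation over (timestamp, value) samples,
    mirroring what PromQL's predict_linear() does with a range vector."""
    n = len(samples)
    t_mean = sum(t for t, _ in samples) / n
    v_mean = sum(v for _, v in samples) / n
    slope = (
        sum((t - t_mean) * (v - v_mean) for t, v in samples)
        / sum((t - t_mean) ** 2 for t, _ in samples)
    )
    intercept = v_mean - slope * t_mean
    now = samples[-1][0]
    return slope * (now + seconds_ahead) + intercept

# Free bytes shrinking by ~1 GB/hour over a 6h window, starting at 20 GB:
samples = [(h * 3600, 20e9 - h * 1e9) for h in range(7)]
# Extrapolating 24h ahead lands at 20 GB - 30h * 1 GB/h = -10 GB,
# i.e. the "< 0" alert condition would fire.
assert predict_linear(samples, 24 * 3600) < 0
```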
bloopy-boi bot (Contributor, Author) commented Dec 13, 2022
🦙 MegaLinter status: ✅ SUCCESS

| Descriptor   | Linter   | Files | Fixed | Errors | Elapsed time |
|--------------|----------|-------|-------|--------|--------------|
| ✅ COPYPASTE | jscpd    | yes   | no    |        | 1.45s        |
| ✅ YAML      | prettier | 2     | 0     | 0      | 0.59s        |
| ✅ YAML      | yamllint | 2     |       | 0      | 0.28s        |

See error details in the MegaLinter reports artifact on the CI job page
Set VALIDATE_ALL_CODEBASE: true in mega-linter.yml to validate all sources, not only the diff

MegaLinter is graciously provided by OX Security

bloopy-boi bot force-pushed the renovate/kube-prometheus-stack-43.x branch from 177c1e8 to 322839f on December 15, 2022 12:39
h3mmy merged commit 6635786 into main on Dec 16, 2022
h3mmy deleted the renovate/kube-prometheus-stack-43.x branch on December 16, 2022 05:02