
feat!: update helm chart kube-prometheus-stack to 40.0.0 #1515

Merged
merged 1 commit into main on Sep 17, 2022

Conversation

bloopy-boi[bot]
Contributor

@bloopy-boi bloopy-boi bot commented Sep 14, 2022

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| kube-prometheus-stack (source) | major | 39.13.3 -> 40.0.0 |

⚠ Dependency Lookup Warnings ⚠

Warnings were logged while processing this repo. Please check the Dependency Dashboard for more information.


Release Notes

prometheus-community/helm-charts

v40.0.0

Compare Source

kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards, and Prometheus rules, combined with documentation and scripts, to provide easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, click this checkbox.

This PR has been generated by Renovate Bot.

@bloopy-boi bloopy-boi bot requested a review from h3mmy as a code owner September 14, 2022 08:33
@bloopy-boi bloopy-boi bot added renovate/helm type/major area/cluster Changes made in the cluster directory size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Sep 14, 2022
@bloopy-boi
Contributor Author

bloopy-boi bot commented Sep 14, 2022

Path: cluster/apps/monitoring/kube-prometheus-stack/helm-release.yaml
Version: 39.13.3 -> 40.0.2

@@ -21,9 +21,13 @@
 name: node-exporter
 namespace: default
 labels:
- app: prometheus-node-exporter
- release: "kube-prometheus-stack"
- heritage: "Helm"
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/component: metrics
+ app.kubernetes.io/part-of: prometheus-node-exporter
+ app.kubernetes.io/instance: kube-prometheus-stack
+ app.kubernetes.io/name: prometheus-node-exporter
+ jobLabel: node-exporter
+ release: kube-prometheus-stack
 annotations: {}
 imagePullSecrets: []
 ---
@@ -22105,7 +22109,7 @@
 "steppedLine": false,
 "targets": [
 {
- "expr": "histogram_quantile(0.99, sum(rate(kubelet_pod_start_duration_seconds_count{cluster=\"$cluster\",job=\"kubelet\", metrics_path=\"/metrics\",instance=~\"$instance\"}[$__rate_interval])) by (instance, le))",
+ "expr": "histogram_quantile(0.99, sum(rate(kubelet_pod_start_duration_seconds_bucket{cluster=\"$cluster\",job=\"kubelet\", metrics_path=\"/metrics\",instance=~\"$instance\"}[$__rate_interval])) by (instance, le))",
 "format": "time_series",
 "intervalFactor": 2,
 "legendFormat": "{{instance}} pod",
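
The hunk above swaps `kubelet_pod_start_duration_seconds_count` for the `_bucket` series: `histogram_quantile()` only yields meaningful quantiles when fed cumulative histogram buckets, not the plain sample count. A minimal Python sketch of the linear interpolation it performs, using hypothetical bucket data (not values taken from this chart):

```python
def histogram_quantile(q, buckets):
    """Approximate PromQL histogram_quantile() over cumulative buckets.

    buckets: sorted list of (upper_bound, cumulative_count) pairs.
    """
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            # Linear interpolation inside the bucket that contains the rank.
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# Hypothetical pod-start-duration buckets: 40 starts <= 0.5s, 90 <= 1s, 100 <= 5s.
p99 = histogram_quantile(0.99, [(0.5, 40), (1.0, 90), (5.0, 100)])
```

Feeding the same formula a bare `_count` total gives no per-bucket distribution to interpolate over, which is what the dashboard fix corrects.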
@@ -29365,21 +29369,21 @@
 {
 "expr": "rate(node_disk_read_bytes_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\"}[$__rate_interval])",
 "format": "time_series",
- "intervalFactor": 2,
+ "intervalFactor": 1,
 "legendFormat": "{{device}} read",
 "refId": "A"
 },
 {
 "expr": "rate(node_disk_written_bytes_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\"}[$__rate_interval])",
 "format": "time_series",
- "intervalFactor": 2,
+ "intervalFactor": 1,
 "legendFormat": "{{device}} written",
 "refId": "B"
 },
 {
 "expr": "rate(node_disk_io_time_seconds_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\"}[$__rate_interval])",
 "format": "time_series",
- "intervalFactor": 2,
+ "intervalFactor": 1,
 "legendFormat": "{{device}} io time",
 "refId": "C"
 }
@@ -29407,7 +29411,7 @@
 },
 "yaxes": [
 {
- "format": "bytes",
+ "format": "Bps",
 "label": null,
 "logBase": 1,
 "max": null,
@@ -29415,7 +29419,7 @@
 "show": true
 },
 {
- "format": "s",
+ "format": "percentunit",
 "label": null,
 "logBase": 1,
 "max": null,
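
The y-axis changes above (`bytes` to `Bps`, `s` to `percentunit`) follow from what the panel's queries actually return: the byte-counter rates are bytes per second, and `rate(node_disk_io_time_seconds_total[...])` is seconds spent on I/O per second of wall time, i.e. a utilization fraction in [0, 1]. A small worked example in Python (the sample values are hypothetical):

```python
# Two scrapes of node_disk_io_time_seconds_total, 60 s apart (made-up numbers).
t0, t1 = 100.0, 160.0
io_seconds0, io_seconds1 = 12.0, 42.0

# rate() over the window: seconds busy per wall-clock second.
utilization = (io_seconds1 - io_seconds0) / (t1 - t0)
assert 0.0 <= utilization <= 1.0  # a fraction, so "percentunit" renders it as a percentage
```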
@@ -29664,6 +29668,7 @@
 "dashLength": 10,
 "dashes": false,
 "datasource": "$datasource",
+ "description": "Network received (bits/s)",
 "fill": 0,
 "fillGradient": 0,
 "gridPos": {
@@ -29702,9 +29707,9 @@
 "steppedLine": false,
 "targets": [
 {
- "expr": "rate(node_network_receive_bytes_total{job=\"node-exporter\", instance=\"$instance\", device!=\"lo\"}[$__rate_interval])",
+ "expr": "rate(node_network_receive_bytes_total{job=\"node-exporter\", instance=\"$instance\", device!=\"lo\"}[$__rate_interval]) * 8",
 "format": "time_series",
- "intervalFactor": 2,
+ "intervalFactor": 1,
 "legendFormat": "{{device}}",
 "refId": "A"
 }
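
The network panels move from bytes to bits: the updated expressions scale the byte rate to bits (the trailing `* 8`), matching the panel descriptions ("bits/s") and the y-axis switch from `bytes` to `bps` below. The conversion, with a hypothetical rate value:

```python
bytes_per_sec = 1_250_000.0       # hypothetical rate() result in bytes/s
bits_per_sec = bytes_per_sec * 8  # scale to bits/s for the "bps" axis unit
assert bits_per_sec == 10_000_000.0  # 10 Mbit/s
```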
@@ -29732,7 +29737,7 @@
 },
 "yaxes": [
 {
- "format": "bytes",
+ "format": "bps",
 "label": null,
 "logBase": 1,
 "max": null,
@@ -29740,7 +29745,7 @@
 "show": true
 },
 {
- "format": "bytes",
+ "format": "bps",
 "label": null,
 "logBase": 1,
 "max": null,
@@ -29757,6 +29762,7 @@
 "dashLength": 10,
 "dashes": false,
 "datasource": "$datasource",
+ "description": "Network transmitted (bits/s)",
 "fill": 0,
 "fillGradient": 0,
 "gridPos": {
@@ -29795,9 +29801,9 @@
 "steppedLine": false,
 "targets": [
 {
- "expr": "rate(node_network_transmit_bytes_total{job=\"node-exporter\", instance=\"$instance\", device!=\"lo\"}[$__rate_interval])",
+ "expr": "rate(node_network_transmit_bytes_total{job=\"node-exporter\", instance=\"$instance\", device!=\"lo\"}[$__rate_interval]) * 8",
 "format": "time_series",
- "intervalFactor": 2,
+ "intervalFactor": 1,
 "legendFormat": "{{device}}",
 "refId": "A"
 }
@@ -29825,7 +29831,7 @@
 },
 "yaxes": [
 {
- "format": "bytes",
+ "format": "bps",
 "label": null,
 "logBase": 1,
 "max": null,
@@ -29833,7 +29839,7 @@
 "show": true
 },
 {
- "format": "bytes",
+ "format": "bps",
 "label": null,
 "logBase": 1,
 "max": null,
@@ -30424,21 +30430,21 @@
 {
 "expr": "rate(node_disk_read_bytes_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\"}[$__rate_interval])",
 "format": "time_series",
- "intervalFactor": 2,
+ "intervalFactor": 1,
 "legendFormat": "{{device}} read",
 "refId": "A"
 },
 {
 "expr": "rate(node_disk_written_bytes_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\"}[$__rate_interval])",
 "format": "time_series",
- "intervalFactor": 2,
+ "intervalFactor": 1,
 "legendFormat": "{{device}} written",
 "refId": "B"
 },
 {
 "expr": "rate(node_disk_io_time_seconds_total{job=\"node-exporter\", instance=\"$instance\", device=~\"(/dev/)?(mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+)\"}[$__rate_interval])",
 "format": "time_series",
- "intervalFactor": 2,
+ "intervalFactor": 1,
 "legendFormat": "{{device}} io time",
 "refId": "C"
 }
@@ -30466,7 +30472,7 @@
 },
 "yaxes": [
 {
- "format": "bytes",
+ "format": "Bps",
 "label": null,
 "logBase": 1,
 "max": null,
@@ -30474,7 +30480,7 @@
 "show": true
 },
 {
- "format": "s",
+ "format": "percentunit",
 "label": null,
 "logBase": 1,
 "max": null,
@@ -30723,6 +30729,7 @@
 "dashLength": 10,
 "dashes": false,
 "datasource": "$datasource",
+ "description": "Network received (bits/s)",
 "fill": 0,
 "fillGradient": 0,
 "gridPos": {
@@ -30761,9 +30768,9 @@
 "steppedLine": false,
 "targets": [
 {
- "expr": "rate(node_network_receive_bytes_total{job=\"node-exporter\", instance=\"$instance\", device!=\"lo\"}[$__rate_interval])",
+ "expr": "rate(node_network_receive_bytes_total{job=\"node-exporter\", instance=\"$instance\", device!=\"lo\"}[$__rate_interval]) * 8",
 "format": "time_series",
- "intervalFactor": 2,
+ "intervalFactor": 1,
 "legendFormat": "{{device}}",
 "refId": "A"
 }
@@ -30791,7 +30798,7 @@
 },
 "yaxes": [
 {
- "format": "bytes",
+ "format": "bps",
 "label": null,
 "logBase": 1,
 "max": null,
@@ -30799,7 +30806,7 @@
 "show": true
 },
 {
- "format": "bytes",
+ "format": "bps",
 "label": null,
 "logBase": 1,
 "max": null,
@@ -30816,6 +30823,7 @@
 "dashLength": 10,
 "dashes": false,
 "datasource": "$datasource",
+ "description": "Network transmitted (bits/s)",
 "fill": 0,
 "fillGradient": 0,
 "gridPos": {
@@ -30854,9 +30862,9 @@
 "steppedLine": false,
 "targets": [
 {
- "expr": "rate(node_network_transmit_bytes_total{job=\"node-exporter\", instance=\"$instance\", device!=\"lo\"}[$__rate_interval])",
+ "expr": "rate(node_network_transmit_bytes_total{job=\"node-exporter\", instance=\"$instance\", device!=\"lo\"}[$__rate_interval]) * 8",
 "format": "time_series",
- "intervalFactor": 2,
+ "intervalFactor": 1,
 "legendFormat": "{{device}}",
 "refId": "A"
 }
@@ -30884,7 +30892,7 @@
 },
 "yaxes": [
 {
- "format": "bytes",
+ "format": "bps",
 "label": null,
 "logBase": 1,
 "max": null,
@@ -30892,7 +30900,7 @@
 "show": true
 },
 {
- "format": "bytes",
+ "format": "bps",
 "label": null,
 "logBase": 1,
 "max": null,
@@ -35797,13 +35805,16 @@
 metadata:
 name: node-exporter
 namespace: default
- annotations:
- prometheus.io/scrape: "true"
 labels:
- app: prometheus-node-exporter
- heritage: Helm
- release: kube-prometheus-stack
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/component: metrics
+ app.kubernetes.io/part-of: prometheus-node-exporter
+ app.kubernetes.io/instance: kube-prometheus-stack
+ app.kubernetes.io/name: prometheus-node-exporter
 jobLabel: node-exporter
+ release: kube-prometheus-stack
+ annotations:
+ prometheus.io/scrape: "true"
 spec:
 type: ClusterIP
 ports:
@@ -35812,8 +35823,8 @@
 protocol: TCP
 name: http-metrics
 selector:
- app: prometheus-node-exporter
- release: kube-prometheus-stack
+ app.kubernetes.io/instance: kube-prometheus-stack
+ app.kubernetes.io/name: prometheus-node-exporter
 ---
 # Source: kube-prometheus-stack/templates/alertmanager/service.yaml
 apiVersion: v1
@@ -35882,9 +35893,9 @@
 clusterIP: None
 ports:
 - name: http-metrics
- port: 2379
+ port: 2381
 protocol: TCP
- targetPort: 2379
+ targetPort: 2381
 selector:
 component: etcd
 type: ClusterIP
@@ -35944,15 +35955,18 @@
 name: node-exporter
 namespace: default
 labels:
- app: prometheus-node-exporter
- heritage: Helm
- release: kube-prometheus-stack
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/component: metrics
+ app.kubernetes.io/part-of: prometheus-node-exporter
+ app.kubernetes.io/instance: kube-prometheus-stack
+ app.kubernetes.io/name: prometheus-node-exporter
 jobLabel: node-exporter
+ release: kube-prometheus-stack
 spec:
 selector:
 matchLabels:
- app: prometheus-node-exporter
- release: kube-prometheus-stack
+ app.kubernetes.io/instance: kube-prometheus-stack
+ app.kubernetes.io/name: prometheus-node-exporter
 updateStrategy:
 rollingUpdate:
 maxUnavailable: 1
@@ -35960,10 +35974,13 @@
 template:
 metadata:
 labels:
- app: prometheus-node-exporter
- heritage: Helm
- release: kube-prometheus-stack
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/component: metrics
+ app.kubernetes.io/part-of: prometheus-node-exporter
+ app.kubernetes.io/instance: kube-prometheus-stack
+ app.kubernetes.io/name: prometheus-node-exporter
 jobLabel: node-exporter
+ release: kube-prometheus-stack
 annotations:
 cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
 spec:
@@ -36098,9 +36115,8 @@
 - --port=8080
 - --resources=certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
 - --metric-labels-allowlist=persistentvolumeclaims=[*]
- - --telemetry-port=8081
 imagePullPolicy: IfNotPresent
- image: "registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.5.0"
+ image: "registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.6.0"
 ports:
 - containerPort: 8080
 name: "http"
@@ -36148,17 +36164,17 @@
 spec:
 containers:
 - name: kube-prometheus-stack
- image: "quay.io/prometheus-operator/prometheus-operator:v0.58.0"
+ image: "quay.io/prometheus-operator/prometheus-operator:v0.59.1"
 imagePullPolicy: "IfNotPresent"
 args:
 - --kubelet-service=kube-system/prometheus-kubelet
 - --localhost=127.0.0.1
- - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.58.0
+ - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.59.1
 - --config-reloader-cpu-request=200m
 - --config-reloader-cpu-limit=200m
 - --config-reloader-memory-request=50Mi
 - --config-reloader-memory-limit=50Mi
- - --thanos-default-base-image=quay.io/thanos/thanos:v0.27.0
+ - --thanos-default-base-image=quay.io/thanos/thanos:v0.28.0
 - --web.enable-tls=true
 - --web.cert-file=/cert/cert
 - --web.key-file=/cert/key
@@ -36276,8 +36292,8 @@
 port: http-web
 pathPrefix: "/"
 apiVersion: v2
- image: "quay.io/prometheus/prometheus:v2.37.0"
- version: v2.37.0
+ image: "quay.io/prometheus/prometheus:v2.38.0"
+ version: v2.38.0
 externalUrl: http://prometheus-prometheus.default:9090
 paused: false
 replicas: 1
@@ -37786,7 +37802,7 @@
 annotations:
 description: HPA {{ $labels.namespace }}/{{ $labels.horizontalpodautoscaler }} has not matched the desired number of replicas for longer than 15 minutes.
 runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubehpareplicasmismatch
- summary: HPA has not matched descired number of replicas.
+ summary: HPA has not matched desired number of replicas.
 expr: |-
 (kube_horizontalpodautoscaler_status_desired_replicas{job="kube-state-metrics", namespace=~".*"}
 !=
@@ -39080,16 +39096,19 @@
 name: node-exporter
 namespace: default
 labels:
- app: prometheus-node-exporter
- heritage: Helm
- release: kube-prometheus-stack
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/component: metrics
+ app.kubernetes.io/part-of: prometheus-node-exporter
+ app.kubernetes.io/instance: kube-prometheus-stack
+ app.kubernetes.io/name: prometheus-node-exporter
 jobLabel: node-exporter
+ release: kube-prometheus-stack
 spec:
 jobLabel: jobLabel
 selector:
 matchLabels:
- app: prometheus-node-exporter
- release: kube-prometheus-stack
+ app.kubernetes.io/instance: kube-prometheus-stack
+ app.kubernetes.io/name: prometheus-node-exporter
 endpoints:
 - port: http-metrics
 scheme: http
@@ -39544,7 +39563,7 @@
 spec:
 containers:
 - name: create
- image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.2.0
+ image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.3.0
 imagePullPolicy: IfNotPresent
 args:
 - create
@@ -39590,7 +39609,7 @@
 spec:
 containers:
 - name: patch
- image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.2.0
+ image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.3.0
 imagePullPolicy: IfNotPresent
 args:
 - patch

@bloopy-boi
Contributor Author

bloopy-boi bot commented Sep 14, 2022

🦙 MegaLinter status: ✅ SUCCESS

| Descriptor | Linter | Files | Fixed | Errors | Elapsed time |
|---|---|---|---|---|---|
| ✅ COPYPASTE | jscpd | yes | | no | 1.88s |
| ✅ YAML | prettier | 2 | 0 | 0 | 0.91s |
| ✅ YAML | yamllint | 2 | | 0 | 0.37s |

See error details in the MegaLinter reports artifact on the CI job page
Set VALIDATE_ALL_CODEBASE: true in mega-linter.yml to validate all sources, not only the diff

MegaLinter is graciously provided by OX Security

@bloopy-boi bloopy-boi bot force-pushed the renovate/kube-prometheus-stack-40.x branch from 120dbcb to 5efc97d on September 16, 2022 09:29
@h3mmy h3mmy merged commit d33796a into main Sep 17, 2022
@h3mmy h3mmy deleted the renovate/kube-prometheus-stack-40.x branch September 17, 2022 18:11