
feat(container): update redis (19.3.4 → 19.4.0) #855

Merged

merged 1 commit into main from renovate/main-redis-19.x on May 21, 2024

Conversation

renovate[bot] (Contributor) commented May 21, 2024

Mend Renovate

This PR contains the following updates:

Package: redis (source)
Update: minor
Change: 19.3.4 -> 19.4.0

Release Notes

bitnami/charts (redis)

v19.4.0

  • [bitnami/redis] feat: ✨ 🔒 Add warning when original images are replaced (#26271)

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.



This PR has been generated by Mend Renovate. View repository job log here.

@renovate renovate bot requested a review from martinohmann as a code owner May 21, 2024 20:17
@renovate renovate bot added the type/minor label May 21, 2024
@github-actions github-actions bot added the area/kubernetes (Changes made in the kubernetes directory) and cluster/main labels May 21, 2024

helmrelease changes in kubernetes/main

--- HelmRelease: kube-system/descheduler ConfigMap: kube-system/descheduler

+++ HelmRelease: kube-system/descheduler ConfigMap: kube-system/descheduler

@@ -7,27 +7,68 @@

   labels:
     app.kubernetes.io/name: descheduler
     app.kubernetes.io/instance: descheduler
     app.kubernetes.io/managed-by: Helm
 data:
   policy.yaml: |
-    apiVersion: "descheduler/v1alpha1"
+    apiVersion: "descheduler/v1alpha2"
     kind: "DeschedulerPolicy"
+    profiles:
+    - name: default
+      pluginConfig:
+      - args:
+          evictLocalStoragePods: true
+          ignorePvcPods: true
+        name: DefaultEvictor
+      - name: RemoveDuplicates
+      - args:
+          includingInitContainers: true
+          podRestartThreshold: 100
+        name: RemovePodsHavingTooManyRestarts
+      - args:
+          nodeAffinityType:
+          - requiredDuringSchedulingIgnoredDuringExecution
+        name: RemovePodsViolatingNodeTaints
+      - name: RemovePodsViolatingInterPodAntiAffinity
+      - args:
+          includeSoftConstraints: false
+        name: RemovePodsViolatingTopologySpreadConstraint
+      - args:
+          targetThresholds:
+            cpu: 50
+            memory: 50
+            pods: 50
+          thresholds:
+            cpu: 20
+            memory: 20
+            pods: 20
+        name: LowNodeUtilization
+      plugins:
+        balance:
+          enabled:
+          - RemoveDuplicates
+          - RemovePodsViolatingNodeAffinity
+          - RemovePodsViolatingTopologySpreadConstraint
+          - LowNodeUtilization
+        deschedule:
+          enabled:
+          - RemovePodsHavingTooManyRestarts
+          - RemovePodsViolatingNodeTaints
+          - RemovePodsViolatingNodeAffinity
+          - RemovePodsViolatingInterPodAntiAffinity
     strategies:
       LowNodeUtilization:
         enabled: true
         params:
           nodeResourceUtilizationThresholds:
             targetThresholds:
               cpu: 40
               memory: 40
-              pods: 50
             thresholds:
               cpu: 35
               memory: 35
-              pods: 20
       PodLifeTime:
         enabled: true
         params:
           podLifeTime:
             maxPodLifeTimeSeconds: 3600
             states:
--- HelmRelease: kube-system/descheduler Deployment: kube-system/descheduler

+++ HelmRelease: kube-system/descheduler Deployment: kube-system/descheduler

@@ -21,13 +21,13 @@

         app.kubernetes.io/instance: descheduler
     spec:
       priorityClassName: system-cluster-critical
       serviceAccountName: descheduler
       containers:
       - name: descheduler
-        image: registry.k8s.io/descheduler/descheduler:v0.29.0
+        image: registry.k8s.io/descheduler/descheduler:v0.30.0
         imagePullPolicy: IfNotPresent
         command:
         - /bin/descheduler
         args:
         - --policy-config-file=/policy-dir/policy.yaml
         - --descheduling-interval=5m
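The image bump and the migrated v1alpha2 policy above can be smoke-tested before relying on the in-cluster deployment. A minimal sketch, assuming the new policy from the diff has been saved locally as policy.yaml and a kubeconfig for a reachable cluster exists; descheduler's `--dry-run` flag logs intended evictions without performing them:

```shell
# Sketch only: run the bumped descheduler image against the migrated
# v1alpha2 policy in dry-run mode. Assumes ./policy.yaml holds the policy
# from the diff above and ~/.kube/config points at a reachable cluster.
docker run --rm \
  -v "$PWD/policy.yaml:/policy-dir/policy.yaml:ro" \
  -v "$HOME/.kube/config:/root/.kube/config:ro" \
  registry.k8s.io/descheduler/descheduler:v0.30.0 \
  /bin/descheduler \
    --policy-config-file=/policy-dir/policy.yaml \
    --kubeconfig=/root/.kube/config \
    --dry-run \
    --v=3
```

This mirrors the command and `--policy-config-file` argument from the Deployment diff, adding only the kubeconfig mount and dry-run flag for local use.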


kustomization changes in kubernetes/main

--- kubernetes/main/apps/default/gitea/redis Kustomization: flux-system/gitea-redis HelmRelease: default/gitea-redis

+++ kubernetes/main/apps/default/gitea/redis Kustomization: flux-system/gitea-redis HelmRelease: default/gitea-redis

@@ -13,13 +13,13 @@

     spec:
       chart: redis
       sourceRef:
         kind: HelmRepository
         name: bitnami
         namespace: flux-system
-      version: 19.3.4
+      version: 19.4.0
   interval: 2h
   values:
     architecture: standalone
     auth:
       enabled: false
     commonConfiguration: |-
--- kubernetes/main/apps/kube-system/descheduler/app Kustomization: flux-system/descheduler HelmRelease: kube-system/descheduler

+++ kubernetes/main/apps/kube-system/descheduler/app Kustomization: flux-system/descheduler HelmRelease: kube-system/descheduler

@@ -13,13 +13,13 @@

     spec:
       chart: descheduler
       sourceRef:
         kind: HelmRepository
         name: descheduler
         namespace: flux-system
-      version: 0.29.0
+      version: 0.30.0
   install:
     remediation:
       retries: 3
   interval: 2h
   maxHistory: 2
   uninstall:

@martinohmann martinohmann merged commit 0ad4152 into main May 21, 2024
7 checks passed
@renovate renovate bot deleted the renovate/main-redis-19.x branch May 21, 2024 20:24