ci(base-cluster/artifacthub-images): Update ArtifactHUB images #1587
teutonet-bot merged 0 commits into main from
Conversation
Walkthrough
This change updates the container image versions listed in the chart's ArtifactHub annotations.
Changes
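The affected block is the chart's Artifact Hub images annotation in charts/base-cluster/Chart.yaml. A minimal sketch of its shape, reusing two entries quoted in the review below (the full list covers every workload image in the chart; the external-dns repository path is an assumption inferred from the Bitnami tag scheme):

annotations:
  artifacthub.io/images: |
    - image: docker.io/bitnami/kubectl:1.33.3-debian-12-r1 # default/Job/prevent-uninstallation.yaml
    - image: docker.io/bitnami/external-dns:0.18.0-debian-12-r1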
Actionable comments posted: 1
🧹 Nitpick comments (1)
charts/base-cluster/Chart.yaml (1)
28-33: Grafana-Tempo lines: multiple duplicates – no issues, but consider deduplication
Six identical lines keep the 1-to-1 mapping to each workload, which is fine for ArtifactHUB reporting. If you want to shrink the annotation block, you could drop the duplicates without affecting functionality.
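A sketch of the suggested cleanup, assuming the six entries are byte-identical apart from their workload comments (the image tag and workload names below are placeholders, not taken from the chart):

Before:
    - image: docker.io/grafana/tempo:<tag> # tempo/StatefulSet/ingester
    - image: docker.io/grafana/tempo:<tag> # tempo/Deployment/distributor
    - image: docker.io/grafana/tempo:<tag> # tempo/Deployment/querier
After:
    - image: docker.io/grafana/tempo:<tag> # tempo: ingester, distributor, querier, ...

The trade-off: the single entry is shorter, but the explicit 1-to-1 workload mapping in the comments is lost, which is why keeping the duplicates is also defensible.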
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
charts/base-cluster/Chart.yaml (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: check licenses
- GitHub Check: lint helm chart (base-cluster)
- GitHub Check: wait-for-checks
🔇 Additional comments (3)
charts/base-cluster/Chart.yaml (3)
27-27: External-DNS image bump LGTM
The tag jump to 0.18.0-debian-12-r1 is a straightforward minor bump; no breaking CLI flag changes were introduced between 0.17 and 0.18.
41-41: Metrics-server 0.8.0: verify API flag set compatibility
v0.8 switches the default preferred address types; if your Helm values still set the --kubelet-insecure-tls flag, behaviour is unchanged. Otherwise, the new defaults might block metrics collection on clusters with custom network setups.
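If a cluster relies on the old behaviour, one way to pin it is an explicit values override. A hedged sketch only: the top-level key and args field depend on which metrics-server chart is in use, while the two flags themselves are upstream metrics-server flags:

metrics-server:
  args:
    # Skip TLS verification against the kubelet's serving certificate
    - --kubelet-insecure-tls
    # Pin the address resolution order instead of relying on the v0.8 defaults
    - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP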
63-63: Trivy-operator patch bump looks safe
0.27.3 only contains CVE database refreshes; no CRD/schema changes.
- image: docker.io/bitnami/kubectl:1.33.3-debian-12-r1@sha256:cd354d5b25562b195b277125439c23e4046902d7f1abc0dc3c75aad04d298c17 # default/Job/flux-generate-gpg-key-secret-main.yaml
- image: docker.io/bitnami/kubectl:1.33.3-debian-12-r1@sha256:cd354d5b25562b195b277125439c23e4046902d7f1abc0dc3c75aad04d298c17 # default/Job/prevent-uninstallation.yaml
🛠️ Refactor suggestion
kubectl version drift within the same chart
You now list 1.33.3 for two jobs, but three earlier jobs (lines 34-36) are still at 1.32.3. Keeping a single minor version across all helper Jobs avoids subtly different behaviour (e.g. changed default server-side dry-run semantics in v1.33).
Diff suggestion to align everything to 1.33.3:
- - image: docker.io/bitnami/kubectl:1.32.3 # kyverno/HelmRelease/kyverno/kyverno/Job/kyverno-clean-reports.yaml
- - image: docker.io/bitnami/kubectl:1.32.3 # kyverno/HelmRelease/kyverno/kyverno/Job/kyverno-remove-configmap.yaml
- - image: docker.io/bitnami/kubectl:1.32.3 # kyverno/HelmRelease/kyverno/kyverno/Job/kyverno-scale-to-zero.yaml
+ - image: docker.io/bitnami/kubectl:1.33.3-debian-12-r1 # kyverno/HelmRelease/kyverno/kyverno/Job/kyverno-clean-reports.yaml
+ - image: docker.io/bitnami/kubectl:1.33.3-debian-12-r1 # kyverno/HelmRelease/kyverno/kyverno/Job/kyverno-remove-configmap.yaml
+ - image: docker.io/bitnami/kubectl:1.33.3-debian-12-r1 # kyverno/HelmRelease/kyverno/kyverno/Job/kyverno-scale-to-zero.yaml
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- image: docker.io/bitnami/kubectl:1.33.3-debian-12-r1@sha256:cd354d5b25562b195b277125439c23e4046902d7f1abc0dc3c75aad04d298c17 # default/Job/flux-generate-gpg-key-secret-main.yaml
- image: docker.io/bitnami/kubectl:1.33.3-debian-12-r1@sha256:cd354d5b25562b195b277125439c23e4046902d7f1abc0dc3c75aad04d298c17 # default/Job/prevent-uninstallation.yaml
- image: docker.io/bitnami/kubectl:1.33.3-debian-12-r1 # kyverno/HelmRelease/kyverno/kyverno/Job/kyverno-clean-reports.yaml
- image: docker.io/bitnami/kubectl:1.33.3-debian-12-r1 # kyverno/HelmRelease/kyverno/kyverno/Job/kyverno-remove-configmap.yaml
- image: docker.io/bitnami/kubectl:1.33.3-debian-12-r1 # kyverno/HelmRelease/kyverno/kyverno/Job/kyverno-scale-to-zero.yaml
🤖 Prompt for AI Agents
In charts/base-cluster/Chart.yaml around lines 34 to 39, the kubectl image
versions are inconsistent, with some jobs using version 1.32.3 and others
1.33.3. To fix this, update the image tags for all kubectl jobs to use the same
minor version, specifically 1.33.3, ensuring uniform behavior across all jobs by
replacing the 1.32.3 versions with 1.33.3.
578ec79 to 23a605e (Compare)
23a605e to 82db2ab (Compare)
Pull request was closed
82db2ab to 7b29989 (Compare)
Updates the ArtifactHUB images to match the images that are actually deployed.