feat(base-cluster/backup)!: add k8up provider #1751
Conversation
Pull Request Overview
This PR adds support for the k8up backup provider to the base-cluster chart, allowing users to choose between Velero and k8up for backup operations. The changes restructure the backup configuration to support multiple providers while maintaining backward compatibility.
- Adds k8up as a new backup provider option alongside Velero
- Restructures backup configuration to use a provider-based approach
- Updates validation and template logic to support the new provider structure
Reviewed Changes
Copilot reviewed 13 out of 13 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| values.yaml | Updates backup configuration structure to use provider-based approach and adds k8up helm repository |
| values.schema.json | Adds JSON schema validation for the new provider-based backup configuration |
| _grafana-config.yaml | Updates dashboard conditions to support both Velero and k8up providers |
| velero templates | Updates Velero templates to use the new nested provider configuration structure |
| k8up.yaml | Adds new k8up HelmRelease template for k8up backup provider |
| CI files | Updates test configuration files to match new provider structure |
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Restructures the backup configuration system from a flat structure to a provider-based architecture supporting multiple backup providers (Velero and K8up). Updates templates, helpers, validation, and schema to reflect the new provider nesting. Adds K8up HelmRelease and monitoring support. Renames nodeAgent to runners.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Config as Helm Values
    participant Logic as Chart Logic
    participant Velero as Velero<br/>Provider
    participant K8up as K8up<br/>Provider
    participant Monitor as Monitoring
    Config->>Logic: backup.provider.velero config
    Config->>Logic: backup.provider.k8up config
    alt Velero Enabled
        Logic->>Velero: Render HelmRelease
        Velero->>Velero: Load backupStorageLocations<br/>from provider.velero
        Velero->>Monitor: Include Velero dashboard<br/>(gnetId 11055)
    else K8up Enabled
        Logic->>K8up: Render HelmRelease
        K8up->>K8up: Apply monitoring config
        K8up->>Monitor: Include K8up dashboard<br/>(gnetId 20166)
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
charts/base-cluster/values.yaml (1)
186-195: Update velero helmRepository condition to use new provider path.

Line 195 still references the old `backupStorageLocations` path for the velero helmRepository condition, while the k8up repository (line 190) and all other backup configuration use the new `backup.provider` structure. This inconsistency will cause velero to not be enabled when using the new provider-based configuration. Apply this diff to align with the new provider structure:
```diff
 velero:
   url: https://vmware-tanzu.github.io/helm-charts
   charts:
     velero: 7.2.2
-  condition: "{{ not (empty .Values.backup.backupStorageLocations) }}"
+  condition: "{{ not (empty (.Values.backup.provider).velero) }}"
```

This matches the pattern used for k8up and ensures velero deploys when configured under the new `backup.provider.velero` structure.
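For context, a values fragment using the new nesting might look like this (an illustrative sketch only; key names beyond those mentioned in this review are assumptions):

```yaml
backup:
  provider:
    velero:
      defaultLocation: default
      backupStorageLocations:
        default:
          provider: minio  # hypothetical entry; credential details are chart-specific
```

With a fragment like this in place, the corrected condition `{{ not (empty (.Values.backup.provider).velero) }}` evaluates to true and the velero helmRepository is enabled.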
♻️ Duplicate comments (1)
charts/base-cluster/templates/backup/velero/credentials.yaml (1)
1-3: Fix inconsistent path references.

Line 1 checks the new provider path `(.Values.backup.provider).velero.backupStorageLocations`, but line 3 iterates over the old path `.Values.backup.backupStorageLocations`. This mismatch will cause credential generation to fail or behave unexpectedly. Apply this diff to fix the path inconsistency:

```diff
 {{- if ((.Values.backup.provider).velero).backupStorageLocations }}
 {{- $providerMap := dict "minio" "accessKeyID" -}}
-{{- range $name, $spec := .Values.backup.backupStorageLocations -}}
+{{- range $name, $spec := .Values.backup.provider.velero.backupStorageLocations -}}
```

This aligns the iteration path with the condition check and the new provider-based structure.
🧹 Nitpick comments (1)
charts/base-cluster/values.schema.json (1)
1666-1683: Consider simplifying the defaultLocation constraint.

The current schema requires `defaultLocation` whenever `backupStorageLocations` has any entries, even when there's only one location. While this works correctly and enforces explicit configuration, you might consider auto-defaulting to the single location when only one exists to reduce configuration verbosity. However, the current explicit approach has the benefit of being unambiguous and preventing assumptions.
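For illustration, such a conditional requirement is typically expressed with an `if`/`then` pair in JSON Schema; a minimal sketch (property names follow the review, the surrounding schema structure is assumed):

```json
{
  "if": {
    "properties": {
      "backupStorageLocations": { "minProperties": 1 }
    },
    "required": ["backupStorageLocations"]
  },
  "then": {
    "required": ["defaultLocation"]
  }
}
```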
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- charts/base-cluster/ci/artifacthub-values-3.yaml (1 hunks)
- charts/base-cluster/ci/artifacthub-values.yaml (1 hunks)
- charts/base-cluster/ci/velero-backupStorageLocations-gen-values.sh (1 hunks)
- charts/base-cluster/templates/backup/k8up/k8up.yaml (1 hunks)
- charts/base-cluster/templates/backup/migrations/velero-4-to-5.yaml (0 hunks)
- charts/base-cluster/templates/backup/validation.yaml (0 hunks)
- charts/base-cluster/templates/backup/velero/_helpers.tpl (3 hunks)
- charts/base-cluster/templates/backup/velero/credentials.yaml (2 hunks)
- charts/base-cluster/templates/backup/velero/validation.yaml (1 hunks)
- charts/base-cluster/templates/backup/velero/velero.yaml (2 hunks)
- charts/base-cluster/templates/monitoring/kube-prometheus-stack/_grafana-config.yaml (1 hunks)
- charts/base-cluster/values.schema.json (2 hunks)
- charts/base-cluster/values.yaml (3 hunks)
💤 Files with no reviewable changes (2)
- charts/base-cluster/templates/backup/validation.yaml
- charts/base-cluster/templates/backup/migrations/velero-4-to-5.yaml
🧰 Additional context used
🧬 Code graph analysis (1)
charts/base-cluster/ci/artifacthub-values-3.yaml (1)
.github/scripts/extract-artifacthub-images.sh (1)
`updateChartYaml` (37-64)
🪛 YAMLlint (1.37.1)
charts/base-cluster/templates/backup/velero/validation.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/base-cluster/templates/backup/k8up/k8up.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/base-cluster/templates/backup/velero/credentials.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/base-cluster/templates/backup/velero/velero.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: check licenses
- GitHub Check: lint helm chart (base-cluster)
🔇 Additional comments (18)
charts/base-cluster/ci/artifacthub-values-3.yaml (1)
1-7: LGTM! The new artifacthub-values fragment correctly demonstrates the k8up provider configuration structure introduced by this PR.
charts/base-cluster/ci/velero-backupStorageLocations-gen-values.sh (1)
19-19: LGTM! The script correctly adapts the output to the new provider-based backup structure. The nested path hierarchy is consistent with the overall migration.
charts/base-cluster/templates/backup/velero/validation.yaml (1)
1-5: LGTM! The validation logic correctly enforces that velero's defaultLocation must exist within the configured backupStorageLocations under the new provider structure. The YAMLlint warning appears to be a false positive for pure Helm template code.
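The enforcement described above is typically implemented as a `fail` guard in a Helm template; a minimal sketch under the new provider paths (the chart's exact wording may differ):

```yaml
{{- with (.Values.backup.provider).velero }}
{{- if and .backupStorageLocations (not (hasKey .backupStorageLocations .defaultLocation)) }}
{{- fail (printf "backup.provider.velero.defaultLocation %q is not present in backupStorageLocations" .defaultLocation) }}
{{- end }}
{{- end }}
```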
charts/base-cluster/values.yaml (1)
460-475: LGTM! The backup provider structure correctly shifts to `provider: null` as the new configuration anchor, and the `runners` section (renamed from `nodeAgent`) maintains the same resource configuration with correct naming alignment.

charts/base-cluster/ci/artifacthub-values.yaml (1)
9-21: LGTM! The backup configuration is correctly migrated to the new provider-based structure under `backup.provider.velero`, with all minio provider details properly nested and preserved.

charts/base-cluster/templates/monitoring/kube-prometheus-stack/_grafana-config.yaml (1)
174-185: LGTM! The dashboard conditions are correctly updated to check for the presence of each backup provider under the new `backup.provider` structure. The dashboard IDs are appropriate for each provider's monitoring needs.
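The condition pattern described can be sketched as follows (illustrative of the check, not the chart's exact lines; the dashboard IDs are those named in this review):

```yaml
{{- if (.Values.backup.provider).velero }}
velero:
  gnetId: 11055  # Velero dashboard from grafana.com
{{- end }}
{{- if (.Values.backup.provider).k8up }}
k8up:
  gnetId: 20166  # K8up dashboard from grafana.com
{{- end }}
```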
charts/base-cluster/templates/backup/velero/velero.yaml (1)

1-108: LGTM! The Velero HelmRelease template correctly uses the new provider-based configuration paths throughout (`backup.provider.velero.defaultLocation`, `backup.provider.velero.backupStorageLocations`), and properly references velero-namespaced helpers. The nodeAgent to runners transition is correctly applied on line 58.

charts/base-cluster/templates/backup/k8up/k8up.yaml (6)
1-1: Dismiss the static analysis error - valid Helm syntax. The YAMLlint syntax error is a false positive. The `{{- if ... -}}` construct is valid Helm template syntax for conditional rendering.
2-14: LGTM! The HelmRelease metadata, chart specification, and drift detection configuration are properly set up.

15-19: LGTM! The conditional dependency on kube-prometheus-stack ensures proper installation order when monitoring is enabled, preventing CRD-related issues.
29-42: Excellent security posture! The security contexts follow Kubernetes restricted pod security standards with appropriate hardening: non-root execution, read-only root filesystem, all capabilities dropped, and seccomp RuntimeDefault profile.
43-46: LGTM! Resource configuration correctly references the schema-defined paths (`.Values.backup` and `.Values.backup.runners`), and `skipWithoutAnnotation: true` provides a safe default requiring explicit opt-in for backups.
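With `skipWithoutAnnotation: true`, k8up only backs up PVCs that explicitly opt in via annotation. An opted-in PVC would look roughly like this (annotation key per k8up's documented convention; verify against the deployed k8up version):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
  annotations:
    k8up.io/backup: "true"  # opt this volume in to k8up backups
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
```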
48-60: LGTM! Monitoring configuration is properly conditional and includes appropriate labels for ServiceMonitor and GrafanaDashboard discovery by the monitoring stack.
charts/base-cluster/templates/backup/velero/_helpers.tpl (1)
1-46: LGTM! The refactor to namespace helpers under `base-cluster.backup.velero.*` is well-executed. All four helper functions are consistently renamed, and internal references are correctly updated to use the new velero-prefixed names. This organizational change supports the provider-based architecture introduced in this PR.

charts/base-cluster/values.schema.json (4)
1136-1137: LGTM! Adding `additionalProperties: false` to the cloudflare DNS provider enforces schema strictness and helps catch configuration errors.
1546-1704: LGTM! The provider-based backup configuration restructuring is well-designed. The `oneOf` constraint ensures exclusive provider selection, the `null` option maintains backward compatibility, and `additionalProperties: false` at multiple levels prevents misconfiguration.
1693-1696: LGTM - k8up as placeholder provider. The k8up provider schema is minimal (empty object only), which aligns with the template's simple nil check. This appears to be an initial integration with detailed configuration to be added in future work.
1705-1716: LGTM! The `runners` section (renamed from `nodeAgent`) provides resource configuration for backup runners. The generic naming works well for both Velero and k8up providers.
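Given the placeholder k8up schema, enabling the provider should be as small as this sketch (based on the review's description of the empty-object schema):

```yaml
backup:
  provider:
    k8up: {}  # minimal enablement; detailed options expected in future work
```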
🤖 I have created a release *beep* *boop*

---

## [10.0.0](base-cluster-v9.4.0...base-cluster-v10.0.0) (2025-10-23)

### ⚠ BREAKING CHANGES

* **base-cluster/backup:** add k8up provider ([#1751](#1751))

### Features

* **base-cluster/backup:** add k8up provider ([#1751](#1751)) ([0f36225](0f36225))

### Bug Fixes

* **base-cluster/kyverno:** change kubectl image ([#1734](#1734)) ([cb42f26](cb42f26))
* **base-cluster:** conditions must the `true`, not just truthy ([#1738](#1738)) ([7f46f4e](7f46f4e))
* **base-cluster:** migrate promtail leftovers to alloy ([#1720](#1720)) ([8b7d062](8b7d062))

### Miscellaneous Chores

* **base-cluster/external-dns:** migrate domainFilters syntax ([#1681](#1681)) ([51a42a2](51a42a2))
* **base-cluster/kdave:** remove kdave ([#1724](#1724)) ([723c049](723c049))
* **base-cluster/logs:** only delete volumes on deletion ([#1721](#1721)) ([36b657a](36b657a))
* **base-cluster/logs:** optimize volume chown; this speeds up startup ([36b657a](36b657a))
* **base-cluster/traces:** delete tempo volumes on deletion ([#1722](#1722)) ([0afce96](0afce96))
* **base-cluster:** use upstream kubectl image instead of rancher ([#1718](#1718)) ([d4daf94](d4daf94))

---

This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

<!-- This is an auto-generated comment: release notes by coderabbit.ai -->

## Summary by CodeRabbit

## Release Notes - Version 10.0.0

* **Breaking Changes**
  * base-cluster/backup provider modifications require attention during upgrade.
* **New Features**
  * base-cluster/backup enhancements.
* **Bug Fixes**
  * Kyverno configuration improvements.
  * kubectl image handling optimizations.
  * Boolean condition evaluation corrections.
  * Promtail migration cleanup.
  * Tempo volume deletion fixes.
* **Chores**
  * Infrastructure syntax and dependency updates.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: github-actions <41898282+github-actions[bot]@users.noreply.github.com>
Summary by CodeRabbit
New Features
Chores