Conversation
📝 Walkthrough

Updated the TeutonetesCloud lookup API version from v1alpha1 to v1beta1 in the OpenStack cluster template and replaced two static teutostack StorageClass templates with a new dynamic Helm template that generates StorageClass resources from the TeutonetesCloud spec.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Helm as Helm template renderer
    participant CR as TeutonetesCloud (CR)
    participant K8s as Kubernetes API
    Helm->>CR: lookup TeutonetesCloud (v1beta1)
    CR-->>Helm: return cloud spec (volumeTypes, defaults)
    Helm->>Helm: iterate volume type mappings, build StorageClass docs
    Helm->>K8s: apply generated StorageClass resources
    K8s-->>Helm: acknowledge resource creation
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 3 passed
Pull request overview
This PR refactors storage class configuration by extracting it from hardcoded YAML files specific to TeutoStack into a dynamic system that reads storage class definitions from a TeutonetesCloud CRD. This enables different cloud providers to define their own storage classes via the CRD specification.
Changes:
- Removed hardcoded TeutoStack-specific storage class definitions (ssd and hdd)
- Added dynamic storage class generation based on TeutonetesCloud CRD's volumeTypes specification
- Updated TeutonetesCloud API version from v1alpha1 to v1beta1 for consistency
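The dynamic generation described above can be sketched roughly as follows. This is a hypothetical illustration, not the chart's actual code: the API group `t8s.teuto.net`, the `volumeTypes.types` shape, and the Cinder CSI provisioner name are assumptions based on the review discussion.

```yaml
{{- /* Hypothetical sketch — API group and field names are assumptions. */}}
{{- $cloud := lookup "t8s.teuto.net/v1beta1" "TeutonetesCloud" "" .Values.cloud }}
{{- range $name, $type := $cloud.spec.volumeTypes.types }}
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ $name }}
provisioner: cinder.csi.openstack.org
parameters:
  type: {{ $type }}
{{- end }}
```

Ranging over the CRD's volume type map like this is what lets each cloud define its own StorageClasses instead of shipping hardcoded teutostack-specific files.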
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| charts/t8s-cluster/templates/workload-cluster/storageclass/teutostack/teutostack_ssd.yaml | Deleted hardcoded SSD storage class definition |
| charts/t8s-cluster/templates/workload-cluster/storageclass/teutostack/teutostack_hdd.yaml | Deleted hardcoded HDD storage class definition |
| charts/t8s-cluster/templates/workload-cluster/storageClasses.yaml | Added dynamic storage class generation from TeutonetesCloud CRD volumeTypes |
| charts/t8s-cluster/templates/management-cluster/clusterClass/openStackClusterTemplate/_openStackClusterTemplateSpec.yaml | Updated TeutonetesCloud API version to v1beta1 |
Force-pushed 57c38c0 to 1c75f69, then 1c75f69 to acaefad.

Commit: …netesCloud — That way each cloud can define its own StorageClasses
Actionable comments posted: 2
🧹 Nitpick comments (1)
charts/t8s-cluster/templates/workload-cluster/storageClasses.yaml (1)
**28-32**: `availability: nova` is hardcoded — consider sourcing it from the cloud spec.

All generated StorageClasses will unconditionally target the `nova` availability zone. If a `TeutonetesCloud` spec ever exposes per-volume-type availability zone data, this static value would need to be revisited. At minimum, document the assumption.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/t8s-cluster/templates/workload-cluster/storageClasses.yaml` around lines 28 - 32, The StorageClass template currently hardcodes availability: nova; change it to read the availability zone from the cloud spec/Helm values (e.g. a field like .Values.cloud.availability or a TeutonetesCloud spec value) and fall back to "nova" only as a default, so generated StorageClasses target the configured zone; update the template that sets the availability parameter and document the assumption in chart values/README (reference the template parameters `availability` and `type` and the StorageClass template) so future per-volume-type availability data can be wired in.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@charts/t8s-cluster/templates/workload-cluster/storageClasses.yaml`:
- Around line 8-14: The template currently always calls include
"t8s-cluster.helm.resourceIntoCluster" even when $storageClasses is empty; add a
guard so you only call include "t8s-cluster.helm.resourceIntoCluster" when
$storageClasses is non-empty (e.g., wrap the include call in an {{- if
$storageClasses }} ... {{- end }}), keeping the existing logic that builds
$storageClasses from range $cloud.spec.volumeTypes.types and referencing the
same include "t8s-cluster.storageclass" items.
- Around line 17-23: The helper common.helm.labels is being called with an empty
dict so it cannot access .Release/.Chart; update the StorageClass template
("t8s-cluster.storageclass") to call the labels helper with the root chart
context (e.g. change include "common.helm.labels" (dict) to include
"common.helm.labels" $ | nindent 4) and ensure any callers that include
"t8s-cluster.storageclass" also forward the root context (pass $ when calling
include) so .Release and .Chart are available inside the helper; apply the same
change pattern to the other affected templates (certificates.yaml,
_policyTemplate.yaml, clusterissuer.yaml, ciliumNetworkPolicy.yaml) where
common.helm.labels or t8s-cluster.storageclass is invoked.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- charts/t8s-cluster/templates/management-cluster/clusterClass/openStackClusterTemplate/_openStackClusterTemplateSpec.yaml
- charts/t8s-cluster/templates/workload-cluster/storageClasses.yaml
- charts/t8s-cluster/templates/workload-cluster/storageclass/teutostack/teutostack_hdd.yaml
- charts/t8s-cluster/templates/workload-cluster/storageclass/teutostack/teutostack_ssd.yaml
💤 Files with no reviewable changes (2)
- charts/t8s-cluster/templates/workload-cluster/storageclass/teutostack/teutostack_hdd.yaml
- charts/t8s-cluster/templates/workload-cluster/storageclass/teutostack/teutostack_ssd.yaml
Not during testing
Actionable comments posted: 1
♻️ Duplicate comments (2)
charts/t8s-cluster/templates/workload-cluster/storageClasses.yaml (2)
**11-17**: Add a guard before calling `resourceIntoCluster` when `$storageClasses` is empty.

This was flagged in a previous review — wrapping the `resourceIntoCluster` call with `{{- if $storageClasses }}` avoids creating an empty HelmRelease when no volume types are defined.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/t8s-cluster/templates/workload-cluster/storageClasses.yaml` around lines 11 - 17, The template calls include "t8s-cluster.helm.resourceIntoCluster" with ($storageClasses | join "\n---\n") even when $storageClasses is empty, producing an empty HelmRelease; wrap that include call in a guard like {{- if $storageClasses }} ... {{- end }} so the resourceIntoCluster invocation (the include "t8s-cluster.helm.resourceIntoCluster" line) only runs when $storageClasses has elements.
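A minimal sketch of the suggested guard — the helper name and variable come from the review comment, while the surrounding template logic is assumed:

```yaml
{{- /* Sketch: only emit the HelmRelease when there is at least one StorageClass. */}}
{{- if $storageClasses }}
{{ include "t8s-cluster.helm.resourceIntoCluster" ($storageClasses | join "\n---\n") }}
{{- end }}
```

Because `if` on an empty list evaluates false in Helm templates, no resource is rendered at all when `spec.volumeTypes.types` is empty.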
**20-26**: `common.helm.labels` receives an empty dict — `.Release` and `.Chart` are lost.

This was flagged in a previous review. When the context is overridden by passing a custom variable (e.g., `(dict ...)`), access to all default global objects (`.Release`, `.Chart`, `.Values`, etc.) is lost inside the called template. The `common.helm.labels` helper needs these to produce `helm.sh/chart`, `app.kubernetes.io/managed-by`, etc.

The fix requires forwarding the root context — either by adding a `"context"` key to the dict passed to `t8s-cluster.storageclass` at the call site (line 14) and then forwarding it into `common.helm.labels`, or by restructuring `t8s-cluster.storageclass` to accept a `"context"` key.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/t8s-cluster/templates/workload-cluster/storageClasses.yaml` around lines 20 - 26, The helper call to common.helm.labels inside the t8s-cluster.storageclass template is given an empty dict which loses the root Helm context (.Release, .Chart, .Values); update the include to forward the root context so the helper can access those globals — e.g., change the include to pass a dict containing "context": . (or simply pass .) so common.helm.labels receives the original context; ensure t8s-cluster.storageclass forwards that context into the include (or accept a "context" key and then pass it through) so labels like helm.sh/chart and app.kubernetes.io/managed-by are generated correctly.
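The context-forwarding fix could look roughly like this. The template and helper names are taken from the review; the dict keys and manifest shape are illustrative assumptions:

```yaml
{{- /* Hypothetical sketch: pass the root context ($) alongside the data keys. */}}
{{- $doc := include "t8s-cluster.storageclass" (dict "name" $name "type" $type "context" $) }}

{{- define "t8s-cluster.storageclass" }}
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .name }}
  labels:
    {{- /* .context carries .Release/.Chart/.Values into the labels helper. */}}
    {{- include "common.helm.labels" .context | nindent 4 }}
{{- end }}
```

Passing `$` (the root scope) under an explicit key is the common Helm pattern for keeping global objects reachable inside templates that are called with a custom dict.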
🧹 Nitpick comments (2)
charts/t8s-cluster/templates/workload-cluster/storageClasses.yaml (2)
**20-38**: Consider moving `t8s-cluster.storageclass` to a helpers file.

Defining named templates inside a rendered template file is valid Helm, but files whose names begin with an underscore are assumed not to contain a manifest and are available everywhere within other chart templates — the conventional place for reusable `define` blocks. Moving `t8s-cluster.storageclass` to `_helpers.tpl` (or a dedicated `_storageClasses.tpl`) improves discoverability and keeps `storageClasses.yaml` focused on rendering logic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/t8s-cluster/templates/workload-cluster/storageClasses.yaml` around lines 20 - 38, Move the reusable template definition named "t8s-cluster.storageclass" out of the rendered manifest file and into a helpers file (e.g., _helpers.tpl or a dedicated _storageClasses.tpl); update storageClasses.yaml to call the define via include (keeping rendering/manifest-only content there) so the define lives in the underscore helpers file for discoverability and reuse, ensure the template name "t8s-cluster.storageclass" is preserved and any callers use include "t8s-cluster.storageclass" (or the new helper file) to render the StorageClass manifest.
**31-35**: `availability: nova` is hardcoded — inconsistent with other availability zone configurations.

The chart exposes `availabilityZone` for cluster templates and bastion configuration, but the Cinder CSI StorageClass parameter hardcodes `availability: nova`. This creates inconsistency on non-standard OpenStack deployments with custom availability zones. Consider sourcing this from the TeutonetesCloud spec or making it configurable to align with the rest of the configuration pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/t8s-cluster/templates/workload-cluster/storageClasses.yaml` around lines 31 - 35, The StorageClass hardcodes parameters.availability: nova which breaks non-standard AZs; update the template to source availability from chart values (e.g. use {{ .Values.availabilityZone }} or a namespaced value like {{ .Values.openstack.availabilityZone }}), add that key to values.yaml, and keep a sensible default/fallback (e.g. fallback to "nova" if the value is empty) so parameters.availability uses the configured availability zone instead of the hardcoded "nova"; modify the parameters block in storageClasses.yaml (the parameters.availability line) accordingly and ensure any related documentation or README references the new values key.
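A sketch of the suggested fallback, assuming a values key such as `.Values.availabilityZone` (the key name is hypothetical — the chart may namespace it differently):

```yaml
{{- /* Hypothetical sketch: configurable AZ with "nova" only as the default. */}}
parameters:
  availability: {{ .Values.availabilityZone | default "nova" }}
```

Sprig's `default` returns `"nova"` when the value is unset or empty, so existing deployments keep their current behavior while custom-AZ clouds can override it.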
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@charts/t8s-cluster/templates/workload-cluster/storageClasses.yaml`:
- Around line 3-7: The template currently skips the fatal fail via the check on
.Release.Namespace which can silently mask real deployments to the "default"
namespace; change the guard on the fail in the storageClasses template to use an
explicit testing flag or the cloud value instead (e.g., check
.Values.global.testing == true OR test that .Values.cloud is empty) rather than
.Release.Namespace. Update the conditional around the fail (the block containing
fail (printf "Cloud %s not found" .Values.cloud)) to only bypass the fail when a
clear, intentional flag (.Values.global.testing) is set or when .Values.cloud is
empty, and add a brief comment explaining the bypass.
🤖 I have created a release *beep* *boop*

---

## [9.7.0](t8s-cluster-v9.6.0...t8s-cluster-v9.7.0) (2026-03-18)

### Features

* **t8s-cluster/workload-cluster:** extract storageClasses from TeutonetesCloud ([#2004](#2004)) ([86570be](86570be))

### Bug Fixes

* **t8s-cluster:** use plain values for autoScaling ([#2002](#2002)) ([5a28b1a](5a28b1a))

---

This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

---------

Co-authored-by: github-actions <41898282+github-actions[bot]@users.noreply.github.com>
That way each cloud can define its own StorageClasses