
Conversation

@rouja (Collaborator) commented Sep 2, 2025

Use the helm dev-backend chart to share the backend across all projects

@rouja force-pushed the use-dev-backends-helm-chart branch from 423a98c to 986bc68 on September 2, 2025 at 14:52
@rouja force-pushed the use-dev-backends-helm-chart branch from 986bc68 to 3f996aa on September 2, 2025 at 14:53
sonarqubecloud bot commented Sep 2, 2025

@lebaudantoine (Collaborator) commented:

@coderabbitai full review

coderabbitai bot commented Sep 2, 2025

✅ Actions performed

Full review triggered.

coderabbitai bot commented Sep 2, 2025

Walkthrough

This change migrates local/dev infrastructure to a unified “dev-backend” Helm chart. It removes bespoke Helm templates for Redis, PostgreSQL, Keycloak, and MinIO, and replaces the inline services with a new dev-backend release defined in helmfile.yaml (with the dev-backends chart repository added). Multiple environment values files update the Redis host/username, switch DB credentials to Kubernetes secrets referencing dev-backend-postgres, and update S3 endpoints/access keys to dev-backend-minio. The Tiltfile drops the old resources and introduces dev-focused resources (dev-backend-redis/postgres/minio/keycloak, livekit server/egress) with adjusted dependencies. Keycloak-specific in-cluster Postgres resources are removed.
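As a rough sketch of the per-environment values changes described above (assembled from the review comments below; the exact enclosing keys, secret name, and secret keys are assumptions rather than verbatim chart values):

    # Sketch only: hosts and credentials are the ones discussed in this PR.
    REDIS_URL: redis://user:pass@dev-backend-redis:6379/1
    AWS_S3_ENDPOINT_URL: http://dev-backend-minio.meet.svc.cluster.local:9000
    AWS_S3_ACCESS_KEY_ID: dinum
    AWS_S3_SECRET_ACCESS_KEY: password
    DB_HOST: dev-backend-postgres          # assumed service name of the shared Postgres
    DB_PORT: 5432
    DB_PASSWORD:
      secretKeyRef:                        # expected to be wrapped into valueFrom.secretKeyRef
        name: dev-backend-postgres         # assumed secret name
        key: password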

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes


coderabbitai bot left a comment

Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (13)
src/helm/env.d/dev-keycloak/values.egress.yaml.gotmpl (1)

20-25: Point S3 endpoint to dev-backend-minio

This still targets the old MinIO service. Update to the new shared backend.

-    endpoint: http://minio:9000
+    endpoint: http://dev-backend-minio:9000
src/helm/env.d/dev-keycloak/values.livekit.yaml.gotmpl (1)

40-43: Certificate secret name looks env-mismatched

Secret 'livekit-dinum-cert' under dev-keycloak env is confusing. Consider renaming to an env-specific name to avoid operational mistakes.

src/helm/env.d/dev-dinum/values.egress.yaml.gotmpl (1)

20-25: Update S3 endpoint to dev-backend-minio

Keep egress aligned with the consolidated backend.

-    endpoint: http://minio:9000
+    endpoint: http://dev-backend-minio:9000
src/helm/env.d/dev-dinum/values.meet.yaml.gotmpl (1)

219-220: Update serviceMedia and ingress annotations for MinIO migration

The serviceMedia and ingress annotations still reference the old MinIO service. These should be updated to use the new dev-backend-minio service.

Apply this diff to update the MinIO references:

     nginx.ingress.kubernetes.io/auth-response-headers: "Authorization, X-Amz-Date, X-Amz-Content-SHA256"
-    nginx.ingress.kubernetes.io/upstream-vhost: minio.meet.svc.cluster.local:9000
+    nginx.ingress.kubernetes.io/upstream-vhost: dev-backend-minio.meet.svc.cluster.local:9000
     nginx.ingress.kubernetes.io/rewrite-target: /meet-media-storage/$1

 serviceMedia:
-  host: minio.meet.svc.cluster.local
+  host: dev-backend-minio.meet.svc.cluster.local
   port: 9000

Also applies to: 223-224

src/helm/env.d/dev/values.meet.yaml.gotmpl (2)

180-181: Critical: Update Redis and S3 configurations for summary and celery services

Similar to the dev-dinum environment, the summary and celery services still reference old infrastructure:

  • Redis URLs (lines 190-191, 223-224) point to redis://default:pass@redis-master:6379/1
  • S3 endpoints (lines 180, 213) use minio.meet.svc.cluster.local:9000
  • S3 access keys (lines 181, 214) use meet instead of dinum

Apply these fixes to ensure service connectivity:

For summary service:

-    AWS_S3_ENDPOINT_URL: minio.meet.svc.cluster.local:9000
-    AWS_S3_ACCESS_KEY_ID: meet
+    AWS_S3_ENDPOINT_URL: http://dev-backend-minio.meet.svc.cluster.local:9000
+    AWS_S3_ACCESS_KEY_ID: dinum
     AWS_S3_SECRET_ACCESS_KEY: password
     # ... other configs ...
-    CELERY_BROKER_URL: redis://default:pass@redis-master:6379/1
-    CELERY_RESULT_BACKEND: redis://default:pass@redis-master:6379/1
+    CELERY_BROKER_URL: redis://user:pass@dev-backend-redis:6379/1
+    CELERY_RESULT_BACKEND: redis://user:pass@dev-backend-redis:6379/1

For celery service:

-    AWS_S3_ENDPOINT_URL: minio.meet.svc.cluster.local:9000
-    AWS_S3_ACCESS_KEY_ID: meet
+    AWS_S3_ENDPOINT_URL: http://dev-backend-minio.meet.svc.cluster.local:9000
+    AWS_S3_ACCESS_KEY_ID: dinum
     AWS_S3_SECRET_ACCESS_KEY: password
     # ... other configs ...
-    CELERY_BROKER_URL: redis://default:pass@redis-master:6379/1
-    CELERY_RESULT_BACKEND: redis://default:pass@redis-master:6379/1
+    CELERY_BROKER_URL: redis://user:pass@dev-backend-redis:6379/1
+    CELERY_RESULT_BACKEND: redis://user:pass@dev-backend-redis:6379/1

Also applies to: 213-214, 223-224


246-247: Update serviceMedia configuration for consistency

The ingress annotations and serviceMedia still reference the old MinIO service location.

Apply this diff to complete the migration:

     nginx.ingress.kubernetes.io/auth-response-headers: "Authorization, X-Amz-Date, X-Amz-Content-SHA256"
-    nginx.ingress.kubernetes.io/upstream-vhost: minio.meet.svc.cluster.local:9000
+    nginx.ingress.kubernetes.io/upstream-vhost: dev-backend-minio.meet.svc.cluster.local:9000
     nginx.ingress.kubernetes.io/rewrite-target: /meet-media-storage/$1

 serviceMedia:
-  host: minio.meet.svc.cluster.local
+  host: dev-backend-minio.meet.svc.cluster.local
   port: 9000

Also applies to: 250-251

src/helm/env.d/dev/values.egress.yaml.gotmpl (1)

20-24: S3 configuration needs update for dev-backend-minio

The S3 configuration in the egress service still references the old MinIO setup with incorrect credentials.

Apply this diff to align with the dev-backend configuration:

   s3:
-    access_key: meet
+    access_key: dinum
     secret: password
     region: local
     bucket: meet-media-storage
-    endpoint: http://minio:9000
+    endpoint: http://dev-backend-minio:9000
     force_path_style: true
src/helm/env.d/dev-keycloak/values.meet.yaml.gotmpl (6)

159-161: Summary S3 still points to removed MinIO and old AKID

Update to dev-backend MinIO and include scheme:

-    AWS_S3_ENDPOINT_URL: minio.meet.svc.cluster.local:9000
-    AWS_S3_ACCESS_KEY_ID: meet
+    AWS_S3_ENDPOINT_URL: http://dev-backend-minio.meet.svc.cluster.local:9000
+    AWS_S3_ACCESS_KEY_ID: dinum

193-195: Celery S3 still points to removed MinIO and old AKID

Mirror backend settings:

-    AWS_S3_ENDPOINT_URL: minio.meet.svc.cluster.local:9000
-    AWS_S3_ACCESS_KEY_ID: meet
+    AWS_S3_ENDPOINT_URL: http://dev-backend-minio.meet.svc.cluster.local:9000
+    AWS_S3_ACCESS_KEY_ID: dinum

224-229: Ingress media upstream still points to old MinIO Service

Update upstream-vhost to dev-backend MinIO:

-    nginx.ingress.kubernetes.io/upstream-vhost: minio.meet.svc.cluster.local:9000
+    nginx.ingress.kubernetes.io/upstream-vhost: dev-backend-minio.meet.svc.cluster.local:9000

230-232: ServiceMedia host should target dev-backend MinIO

-  host: minio.meet.svc.cluster.local
+  host: dev-backend-minio.meet.svc.cluster.local

203-205: Update Celery Redis URLs to dev-backend-redis
In src/helm/env.d/dev-keycloak/values.meet.yaml.gotmpl (lines 203–205), change

redis://default:pass@redis-master:6379/1

to

redis://user:pass@dev-backend-redis:6379/1

to match the REDIS_URL.


169-171: Switch Celery broker, result backend & task tracker URLs to dev-backend-redis
Instances in src/helm/env.d/dev-keycloak/values.meet.yaml.gotmpl at lines 169–171 and 203–205 still point to redis-master; update them to:
redis://user:pass@dev-backend-redis:6379/1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled
  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR, i.e. between commits 51ed277 and 3f996aa.

📒 Files selected for processing (16)
  • bin/Tiltfile (1 hunks)
  • src/helm/env.d/dev-dinum/values.egress.yaml.gotmpl (1 hunks)
  • src/helm/env.d/dev-dinum/values.livekit.yaml.gotmpl (1 hunks)
  • src/helm/env.d/dev-dinum/values.meet.yaml.gotmpl (2 hunks)
  • src/helm/env.d/dev-keycloak/values.egress.yaml.gotmpl (1 hunks)
  • src/helm/env.d/dev-keycloak/values.livekit.yaml.gotmpl (1 hunks)
  • src/helm/env.d/dev-keycloak/values.meet.yaml.gotmpl (2 hunks)
  • src/helm/env.d/dev/values.egress.yaml.gotmpl (1 hunks)
  • src/helm/env.d/dev/values.livekit.yaml.gotmpl (1 hunks)
  • src/helm/env.d/dev/values.meet.yaml.gotmpl (2 hunks)
  • src/helm/extra/templates/kc-postgresql-sts.yaml (0 hunks)
  • src/helm/extra/templates/keycloak-sts.yaml (0 hunks)
  • src/helm/extra/templates/minio.yaml (0 hunks)
  • src/helm/extra/templates/postgresql-sts.yaml (0 hunks)
  • src/helm/extra/templates/redis.yaml (0 hunks)
  • src/helm/helmfile.yaml (2 hunks)
💤 Files with no reviewable changes (5)
  • src/helm/extra/templates/kc-postgresql-sts.yaml
  • src/helm/extra/templates/redis.yaml
  • src/helm/extra/templates/postgresql-sts.yaml
  • src/helm/extra/templates/minio.yaml
  • src/helm/extra/templates/keycloak-sts.yaml
🔇 Additional comments (6)
src/helm/env.d/dev-keycloak/values.egress.yaml.gotmpl (1)

16-19: Parameterize the Redis ACL username and default it to 'default'
Hardcoding username: user in all env.d values will likely fail, since most Redis charts use the 'default' ACL user. Make this value configurable (e.g. via a secret) with a default of 'default', update each occurrence, and verify the actual ACL user in the dev-backend-redis chart.
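One possible shape for that parameterization, assuming these env.d files are rendered by helmfile's Go templating (sprig env/default available) and that the surrounding redis block matches the existing layout:

    # Sketch only: REDIS_USERNAME and REDIS_PASSWORD are hypothetical environment
    # variables; the fallback mirrors the 'default' ACL user suggested above.
    redis:
      address: dev-backend-redis:6379
      username: {{ env "REDIS_USERNAME" | default "default" }}
      password: {{ env "REDIS_PASSWORD" | default "pass" }}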

src/helm/env.d/dev-dinum/values.meet.yaml.gotmpl (1)

38-52: Skip database config for summary and celery
Neither service connects to the database; no additional secret refs required.

Likely an incorrect or invalid review comment.

src/helm/helmfile.yaml (1)

107-128: Confirm Keycloak database isolation
The db: block in the Keycloak values (lines 117–122) spins up its own Postgres instance. Verify this is intentional and that you aren’t meant to use the shared dev-backend-postgres database; if not, remove or reconfigure these settings to point at the existing DB.
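If the shared database turns out to be the intended setup, a hypothetical variant of the values block could look like the sketch below; note that an external-host key is not confirmed to exist in the dev-backend chart, so this only illustrates the alternative:

    - keycloak:
        enabled: true
        db:
          host: dev-backend-postgres   # hypothetical external-host override, not a confirmed chart key
          username: dinum
          password: pass
          database: keycloak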

bin/Tiltfile (1)

88-94: Well-structured resource dependencies for dev-backend migration

The Tiltfile correctly establishes the dependency chain for the new dev-backend infrastructure. The resource dependencies ensure proper startup order.

src/helm/env.d/dev-keycloak/values.meet.yaml.gotmpl (2)

52-52: Redis URL: confirm ACL username

dev-backend Redis often uses ACL user "default". You set "user". Verify the actual username emitted by the chart/secret; otherwise connections will fail.

-    REDIS_URL: redis://user:pass@dev-backend-redis:6379/1
+    REDIS_URL: redis://default:pass@dev-backend-redis:6379/1

38-50: No changes needed for DB secret refs

The meet.env.transformDict helper automatically wraps any map (including secretKeyRef) under valueFrom, producing valid valueFrom.secretKeyRef entries.
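To illustrate that point, the values-side input and the env entry meet.env.transformDict is expected to render might look as follows (the rendered shape is inferred from this comment, not verified against the chart):

    # Values-side input: a plain map under the env var name
    DB_PASSWORD:
      secretKeyRef:
        name: dev-backend-postgres     # assumed secret name
        key: password

    # Expected rendering as a Kubernetes container env entry
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: dev-backend-postgres
          key: password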

k8s_resource('minio-bucket', resource_deps=['minio'])
k8s_resource('meet-backend', resource_deps=['postgresql', 'minio', 'redis', 'livekit-livekit-server'])
k8s_resource('meet-backend-migrate', resource_deps=['meet-backend'])
k8s_resource('meet-backend', resource_deps=['dev-backend-redis', 'dev-backend-keycloak', 'dev-backend-postgres', 'livekit-livekit-server', 'dev-backend-minio:statefulset'])

💡 Verification agent

🧩 Analysis chain

Verify if meet-summary and meet-celery resources need explicit dependencies

The meet-backend resource has comprehensive dependencies, but there's no explicit resource definition for meet-summary and meet-celery services that also depend on Redis and MinIO.


🏁 Script executed:

#!/bin/bash
# Description: Check if summary and celery resources are defined elsewhere

# Search for k8s_resource definitions for summary and celery
echo "=== Searching for summary and celery resource definitions ==="
rg "k8s_resource.*summary|k8s_resource.*celery" bin/

# Check if they are created as part of the helm template
echo -e "\n=== Checking helm templates for summary/celery deployments ==="
fd -e yaml . src/helm/meet/templates --exec grep -l "summary\|celery" {} \;

Length of output: 545


Add Tilt k8s_resource definitions for meet-summary and meet-celery

bin/Tiltfile lacks k8s_resource entries for meet-summary and meet-celery even though their helm templates exist. Both depend on Redis and MinIO—add:

  • k8s_resource('meet-summary', resource_deps=['dev-backend-redis', 'dev-backend-minio:statefulset'])
  • k8s_resource('meet-celery', resource_deps=['dev-backend-redis', 'dev-backend-minio:statefulset'])
🤖 Prompt for AI Agents
In bin/Tiltfile around line 88, the file is missing k8s_resource entries for
meet-summary and meet-celery; add two k8s_resource declarations for these
services with the correct dependencies—each should list dev-backend-redis and
dev-backend-minio:statefulset as resource_deps so Tilt manages their watch/order
alongside existing resources.

Comment on lines +16 to 19
address: dev-backend-redis:6379
password: pass
username: user
s3:

⚠️ Potential issue

Redis ACL username likely incorrect; use 'default' or parametrize via secret

Mirror the shared dev-backend Redis ACL user.

-    username: user
+    username: default
🤖 Prompt for AI Agents
In src/helm/env.d/dev-dinum/values.egress.yaml.gotmpl around lines 16 to 19, the
Redis ACL username is set to "user" which likely doesn't match the shared
dev-backend Redis ACL; update the username to the correct ACL user (e.g.,
"default") or parameterize it so it is sourced from a secret/value (add a
template variable like .Values.redis.username or reference a Kubernetes secret
key) and ensure the chart values or secret for dev-backend provide the mirrored
ACL username.

Comment on lines +17 to +19
address: dev-backend-redis:6379
password: pass
username: user

⚠️ Potential issue

Redis ACL username should match dev-backend-redis (likely 'default')

To avoid NOAUTH errors, align with the Redis chart’s ACL user (commonly 'default').

-    username: user
+    username: default
🤖 Prompt for AI Agents
In src/helm/env.d/dev-dinum/values.livekit.yaml.gotmpl around lines 17-19, the
Redis ACL username currently is "user" which likely mismatches the Redis chart's
ACL (commonly "default"); update the username entry to match the chart (e.g.,
change username: user to username: default) so the deployed LiveKit uses the
same Redis ACL user and avoids NOAUTH errors.

key: password
DB_PORT: 5432
-    REDIS_URL: redis://default:pass@redis-master:6379/1
+    REDIS_URL: redis://user:pass@dev-backend-redis:6379/1

⚠️ Potential issue

Redis configuration inconsistency detected

The Redis URL format differs across services:

  • Backend (line 52): Uses redis://user:pass@dev-backend-redis:6379/1
  • Summary/Celery (lines 163-164, 196-197): Still reference old Redis with redis://default:pass@redis-master:6379/1

This inconsistency will cause connection failures for the summary and celery services.

Apply this diff to fix the Redis URL inconsistency:

-    CELERY_BROKER_URL: redis://default:pass@redis-master:6379/1
-    CELERY_RESULT_BACKEND: redis://default:pass@redis-master:6379/1
+    CELERY_BROKER_URL: redis://user:pass@dev-backend-redis:6379/1
+    CELERY_RESULT_BACKEND: redis://user:pass@dev-backend-redis:6379/1

Also applies to: 163-164, 196-197

🤖 Prompt for AI Agents
In src/helm/env.d/dev-dinum/values.meet.yaml.gotmpl around line 52 (and also at
lines 163-164 and 196-197), the Redis URLs are inconsistent—backend uses
redis://user:pass@dev-backend-redis:6379/1 while summary/celery still point to
redis://default:pass@redis-master:6379/1; update the Redis URL values at lines
163-164 and 196-197 to match the backend URL
(redis://user:pass@dev-backend-redis:6379/1) so all services use the same host,
user, password and DB index, and verify there are no other stale redis URLs
elsewhere in this template.

Comment on lines +70 to 72
AWS_S3_ENDPOINT_URL: http://dev-backend-minio.meet.svc.cluster.local:9000
AWS_S3_ACCESS_KEY_ID: dinum
AWS_S3_SECRET_ACCESS_KEY: password

⚠️ Potential issue

S3 configuration inconsistency across services

The S3 configuration is partially updated:

  • Backend (lines 70-72): Correctly uses dev-backend-minio with dinum access key
  • Summary/Celery (lines 153-155, 186-188): Still use old minio.meet.svc.cluster.local:9000 endpoint and meet access key

This will cause S3 connection failures for summary and celery services.

Apply this diff to fix the S3 configuration:

For summary service (around lines 153-155):

-    AWS_S3_ENDPOINT_URL: minio.meet.svc.cluster.local:9000
-    AWS_S3_ACCESS_KEY_ID: meet
+    AWS_S3_ENDPOINT_URL: http://dev-backend-minio.meet.svc.cluster.local:9000
+    AWS_S3_ACCESS_KEY_ID: dinum

For celery service (around lines 186-188):

-    AWS_S3_ENDPOINT_URL: minio.meet.svc.cluster.local:9000
-    AWS_S3_ACCESS_KEY_ID: meet
+    AWS_S3_ENDPOINT_URL: http://dev-backend-minio.meet.svc.cluster.local:9000
+    AWS_S3_ACCESS_KEY_ID: dinum

Also applies to: 153-155, 186-188

🤖 Prompt for AI Agents
In src/helm/env.d/dev-dinum/values.meet.yaml.gotmpl around lines 70-72, 153-155
and 186-188, the summary and celery S3 settings still point to the old minio
host and use the `meet` access key causing connection failures; update the
AWS_S3_ENDPOINT_URL for summary (lines ~153-155) and celery (lines ~186-188) to
use http://dev-backend-minio.meet.svc.cluster.local:9000 and set
AWS_S3_ACCESS_KEY_ID to dinum and AWS_S3_SECRET_ACCESS_KEY to password so they
match the backend configuration used at lines 70-72.

Comment on lines +17 to +19
address: dev-backend-redis:6379
password: pass
username: user

⚠️ Potential issue

Redis ACL username should match dev-backend-redis (likely 'default')

Same rationale as other env files; keep credentials consistent with the shared Redis.

-    username: user
+    username: default
🤖 Prompt for AI Agents
In src/helm/env.d/dev-keycloak/values.livekit.yaml.gotmpl around lines 17 to 19,
the Redis ACL username is set to "user" but should match the shared
dev-backend-redis ACL (likely "default"); change the username value to "default"
(or the exact ACL principal used by dev-backend-redis) so credentials are
consistent with other environment files and the shared Redis instance.

Comment on lines +69 to 71
AWS_S3_ENDPOINT_URL: http://dev-backend-minio.meet.svc.cluster.local:9000
AWS_S3_ACCESS_KEY_ID: dinum
AWS_S3_SECRET_ACCESS_KEY: password

💡 Verification agent

🧩 Analysis chain

Align MinIO endpoint/keys with dev-backend across all components

Backend now targets dev-backend-minio with AKID=dinum. Summary and Celery still point to old MinIO/creds (see Lines 159-161 and 193-195) and will break after consolidation. Please update them too (separate comments below).


🏁 Script executed:

#!/bin/bash
set -e

# locate the template file
echo "Locating values.meet.yaml.gotmpl:"
fd values.meet.yaml.gotmpl

# show all AWS_S3_* occurrences with context
echo -e "\nAWS_S3_* entries with context:"
rg -n 'AWS_S3_' -C2 src/helm/env.d/dev-keycloak/values.meet.yaml.gotmpl

# locate Summary block around AWS_S3 entries
echo -e "\nContext around 'Summary':"
rg -n 'Summary' -C3 src/helm/env.d/dev-keycloak/values.meet.yaml.gotmpl

# locate Celery block around AWS_S3 entries
echo -e "\nContext around 'Celery':"
rg -n 'Celery' -C3 src/helm/env.d/dev-keycloak/values.meet.yaml.gotmpl

Length of output: 2207


Align Summary & Celery MinIO config

Update AWS_S3_ENDPOINT_URL, AWS_S3_ACCESS_KEY_ID and AWS_S3_SECRET_ACCESS_KEY in src/helm/env.d/dev-keycloak/values.meet.yaml.gotmpl to match the backend’s dev-backend-minio settings:

  • Summary block (lines 159–161): replace
    minio.meet.svc.cluster.local:9000 / meet / password
    with
    http://dev-backend-minio.meet.svc.cluster.local:9000 / dinum / password
  • Celery block (lines 193–195): apply the same replacements
🤖 Prompt for AI Agents
In src/helm/env.d/dev-keycloak/values.meet.yaml.gotmpl (current snippet lines
~69–71) and specifically update the Summary block at lines 159–161 and the
Celery block at lines 193–195: replace the S3 endpoint, access key and secret to
match the backend dev-backend-minio settings by using AWS_S3_ENDPOINT_URL =
http://dev-backend-minio.meet.svc.cluster.local:9000, AWS_S3_ACCESS_KEY_ID =
dinum, and AWS_S3_SECRET_ACCESS_KEY = password so both Summary and Celery blocks
match the backend configuration.

Comment on lines +17 to +19
address: dev-backend-redis:6379
password: pass
username: user

⚠️ Potential issue

Redis ACL username should match dev-backend-redis (likely 'default')

Ensure LiveKit can authenticate against the new Redis instance.

-    username: user
+    username: default
🤖 Prompt for AI Agents
In src/helm/env.d/dev/values.livekit.yaml.gotmpl around lines 17 to 19, the
Redis ACL username is set to "user" but the dev-backend-redis instance expects
the default ACL username (likely "default"); update the username field to
"default" (or to the exact ACL username configured on dev-backend-redis) so
LiveKit can authenticate, and verify the password value matches the Redis ACL
credential.

Comment on lines +72 to +128
  - name: dev-backend
    namespace: {{ .Namespace }}
    chart: dev-backends/dev-backend
    version: 0.0.1
    values:
      - postgres:
          enabled: true
          name: postgres
          #serviceNameOverride: postgres
          image: postgres:16-alpine
          username: dinum
          password: pass
          database: dinum
          size: 1Gi
      - redis:
          enabled: true
          name: redis
          #serviceNameOverride: redis
          image: redis:8.2-alpine
          username: user
          password: pass
      - minio:
          enabled: true
          image: minio/minio
          name: minio
          #serviceNameOverride: minio
          hostname: minio.127.0.0.1.nip.io
          tls:
            enabled: true
            secretName: meet-tls
          username: dinum
          password: password
          bucket: meet-media-storage
          size: 1Gi
      - keycloak:
          enabled: true
          image: quay.io/keycloak/keycloak:20.0.1
          name: keycloak
          #serviceNameOverride: keycloak
          hostname: keycloak.127.0.0.1.nip.io
          username: admin
          password: pass
          tls:
            enabled: true
            secretName: meet-tls
          db:
            username: dinum
            password: pass
            database: keycloak
            size: 1Gi
            image: postgres:16-alpine
          realm:
            name: meet
            username: meet
            password: meet
            email: [email protected]


🧹 Nitpick (assertive)

Consider production-ready configurations for dev-backend services

The dev-backend configuration uses hardcoded passwords and basic settings. While acceptable for development, consider:

  1. Using stronger passwords even in dev environments
  2. Documenting that this is strictly for development use
  3. Adding resource limits to prevent resource exhaustion

Consider adding resource limits and a warning comment:

  - name: dev-backend
    namespace: {{ .Namespace }}
    chart: dev-backends/dev-backend
    version: 0.0.1
    values:
      # WARNING: This configuration is for development only
      # Do not use these settings in production environments
      - postgres:
          enabled: true
          name: postgres
          image: postgres:16-alpine
          username: dinum
          password: pass  # TODO: Use secret management for better security
          database: dinum
          size: 1Gi
          resources:
            limits:
              memory: 512Mi
              cpu: 500m
            requests:
              memory: 256Mi
              cpu: 250m
🤖 Prompt for AI Agents
In src/helm/helmfile.yaml around lines 72 to 128, the dev-backend block uses
hardcoded credentials and lacks resource limits or a dev-warning; update the
YAML to: add a top-line WARNING comment stating this config is for development
only, replace hardcoded passwords with references to Kubernetes Secrets or
placeholders and add TODO notes to use secret management, and add
resources.requests and resources.limits for each service (postgres, redis,
minio, keycloak) with conservative values (e.g., cpu/memory request/limit
entries) so each chart has basic resource constraints to prevent exhaustion.

image: quay.io/keycloak/keycloak:20.0.1
name: keycloak
#serviceNameOverride: keycloak
hostname: keycloak.127.0.0.1.nip.io
A repository member commented:

You should add a suffix here so it does not clash with other projects running in a different namespace.

Also, we should find a solution to have one Keycloak for all our projects in the dev stack. Maybe run it in a dedicated namespace, and adopt a convention so all our projects use that namespace? We will need this once we implement the resource server protocol in our projects.
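One possible way to add that suffix, assuming the .Namespace template variable used elsewhere in this helmfile is available here as well:

    # Sketch only: suffix the hostname with the release namespace so projects
    # deployed in different namespaces do not claim the same nip.io host.
    - keycloak:
        enabled: true
        hostname: keycloak-{{ .Namespace }}.127.0.0.1.nip.io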

