Conversation
Important: Review skipped
Auto incremental reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration.
Configuration used: Organization UI | Review profile: ASSERTIVE | Plan: Pro
Walkthrough
Migrates ClusterIssuer provisioning from a static Kubernetes manifest to a Helm chart and Helm release, adds an external-dns Helm release and the Terraform time provider, updates Helm/GitLab values and Terraform resource wiring (DNS, Postgres, namespaces, secrets), and enables ignoring Terraform plan files.
Sequence Diagram

```mermaid
sequenceDiagram
    participant TF as Terraform
    participant Helm as Helm
    participant K8s as Kubernetes API
    participant CM as cert-manager
    participant DNS as DigitalOcean DNS
    TF->>Helm: deploy cert-manager release
    Helm->>K8s: create cert-manager resources
    K8s->>CM: cert-manager initializes
    TF->>Helm: deploy cluster-issuer (depends_on cert-manager)
    Helm->>K8s: create ClusterIssuer resource
    K8s->>CM: register ClusterIssuer
    TF->>Helm: deploy ingress-nginx
    Helm->>K8s: create ingress-nginx service (LoadBalancer)
    TF->>TF: time_sleep.wait_for_lb (120s)
    TF->>K8s: query ingress-nginx service for LB IP
    K8s-->>TF: return LB IP
    TF->>DNS: create/update DNS record with LB IP
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 4
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 34d071bd-d435-43bb-9c6c-0f16346c708e
📒 Files selected for processing (14)
.gitignore, helm/cluster-issuer/chart/.helmignore, helm/cluster-issuer/chart/Chart.yaml, helm/cluster-issuer/chart/templates/_helpers.tpl, helm/cluster-issuer/chart/templates/cluster-issuer.yaml, helm/cluster-issuer/chart/values.yaml, helm/cluster-issuer/values.yaml, helm/gitlab/values.yaml, kubernetes/cluster-issuer.yaml, terraform/.terraform.lock.hcl, terraform/dns.tf, terraform/helm.tf, terraform/kubernetes.tf, terraform/versions.tf
💤 Files with no reviewable changes (1)
- kubernetes/cluster-issuer.yaml
📜 Review details
🧰 Additional context used
🪛 YAMLlint (1.38.0)
helm/cluster-issuer/chart/templates/cluster-issuer.yaml
[error] 4-4: too many spaces inside braces
(braces)
[error] 4-4: too many spaces inside braces
(braces)
[error] 7-7: too many spaces inside braces
(braces)
[error] 7-7: too many spaces inside braces
(braces)
[error] 8-8: too many spaces inside braces
(braces)
[error] 8-8: too many spaces inside braces
(braces)
[error] 10-10: too many spaces inside braces
(braces)
[error] 10-10: too many spaces inside braces
(braces)
[error] 14-14: too many spaces inside braces
(braces)
[error] 14-14: too many spaces inside braces
(braces)
🔇 Additional comments (14)
.gitignore (1)
38-38: Good ignore rule for Terraform plan artifacts. This correctly prevents plan output files from being committed.
helm/cluster-issuer/chart/.helmignore (1)
1-23: Solid packaging hygiene defaults. The ignore set is appropriate for keeping chart packages clean and reproducible.
helm/gitlab/values.yaml (1)
66-67: Configuration changes are consistent with the new deployment model. These values align with disabling in-chart components while relying on Terraform-managed cert-manager and issuer resources.
Also applies to: 79-80, 85-85
helm/cluster-issuer/chart/templates/_helpers.tpl (1)
4-62: Helpers look clean and idiomatic. The naming/label helpers are well-structured and reusable.
helm/cluster-issuer/values.yaml (1)
1-1: Templated email input looks correct. This keeps issuer contact configuration environment-driven.
terraform/versions.tf (1)
21-24: Provider addition is appropriate. Adding hashicorp/time is consistent with the new orchestration dependency.
helm/cluster-issuer/chart/values.yaml (1)
1-5: Default chart values are coherent. The ACME and solver defaults are well-formed for initial chart rendering.
terraform/dns.tf (1)
10-15: The readiness gating is already properly implemented.
local.lb_ip uses try(..., null), and the count on the digitalocean_record resource is already conditionally set to count = local.lb_ip != null ? 1 : 0, which prevents creation until the ingress controller's load balancer IP is available. No changes needed.
> Likely an incorrect or invalid review comment.
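For reference, the gating pattern described above, in sketch form (resource and attribute names as cited elsewhere in this review; surrounding configuration elided):

```hcl
locals {
  # Null until DigitalOcean assigns an external IP to the ingress service.
  lb_ip = try(data.kubernetes_service_v1.ingress_nginx.status[0].load_balancer[0].ingress[0].ip, null)
}

resource "digitalocean_record" "gitlab" {
  # Skip creation entirely while the LoadBalancer IP is still unknown.
  count  = local.lb_ip != null ? 1 : 0
  domain = digitalocean_domain.main.name
  type   = "A"
  name   = var.gitlab_host
  value  = local.lb_ip
  ttl    = 300
}
```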
terraform/.terraform.lock.hcl (1)
87-106: LGTM! The lock entry for hashicorp/time v0.13.1 is correctly added and aligns with the provider declaration in terraform/versions.tf. This supports the new time_sleep.wait_for_lb resource in kubernetes.tf.
1-14: LGTM! The ClusterIssuer template is correctly structured for cert-manager's ACME HTTP-01 challenge workflow. All referenced values (.Values.name, .Values.server, .Values.email, .Values.privateKeySecretName, .Values.ingressClass) are properly defined in helm/cluster-issuer/chart/values.yaml.
Note: The YAMLlint errors about "too many spaces inside braces" are false positives: the {{ .Values.* }} syntax with spaces is standard Go/Helm templating convention.
terraform/kubernetes.tf (1)
138-138: LGTM! The dependency chain is correctly updated: helm_release.ingress_nginx → time_sleep.wait_for_lb → kubernetes_service_v1 data source lookup. This ensures the service query occurs after the wait period.
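In sketch form, that chain wires up as below; the service name and namespace are assumptions (the ingress-nginx chart's usual defaults), not confirmed from the repo:

```hcl
data "kubernetes_service_v1" "ingress_nginx" {
  metadata {
    name      = "ingress-nginx-controller" # assumed chart default
    namespace = "ingress-nginx"            # assumed namespace
  }
  # Defer the lookup until the post-install grace period has elapsed.
  depends_on = [time_sleep.wait_for_lb]
}
```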
terraform/helm.tf (3)
1-11: LGTM! The cert-manager Helm release is correctly configured with the OCI registry path and version 1.20.2.
49-51: LGTM! The timeout = 1800 (30 minutes), wait = true, and wait_for_jobs = true settings are appropriate for the GitLab Helm chart, which is large and has multiple jobs that need to complete during installation.
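As a sketch of where these settings sit (the release name and elided attributes are placeholders, not taken from the repo):

```hcl
resource "helm_release" "gitlab" {
  name = "gitlab"
  # chart, repository, and values elided

  timeout       = 1800 # GitLab's migration and install jobs can run long
  wait          = true # block until deployed resources report ready
  wait_for_jobs = true # also wait for hook Jobs to finish
}
```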
68-79: LGTM! The expanded depends_on list correctly captures all prerequisite resources: database clusters/users, secrets, cluster-issuer, and DNS records. This ensures GitLab installation only proceeds when all dependencies are ready.
```yaml
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
```
🧹 Nitpick | 🔵 Trivial
Consider aligning appVersion with cert-manager or removing it.
The appVersion: "1.16.0" doesn't correspond to the cert-manager version (1.20.2 per helm.tf) being deployed. Since this chart deploys a ClusterIssuer CRD instance rather than a versioned application, consider either updating to match cert-manager's version or omitting appVersion entirely.
```hcl
  values = [
    file("${path.module}/../helm/cert-manager/values.yaml")
  ]
  depends_on = [helm_release.cert_manager]
```
🧹 Nitpick | 🔵 Trivial
Unnecessary dependency: ingress-nginx doesn't require cert-manager.
ingress-nginx and cert-manager are independent components that can be installed in parallel. This dependency adds unnecessary sequencing to deployments. Consider removing it unless there's a specific ordering requirement.
♻️ Remove unnecessary dependency
resource "helm_release" "ingress_nginx" {
name = "ingress-nginx"
...
- depends_on = [helm_release.cert_manager]
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```hcl
# line removed by the suggestion:
depends_on = [helm_release.cert_manager]
```
```hcl
resource "time_sleep" "wait_for_lb" {
  depends_on      = [helm_release.ingress_nginx]
  create_duration = "120s"
}
```
🧹 Nitpick | 🔵 Trivial
Hardcoded 120s wait duration may be fragile.
The time_sleep is a pragmatic workaround for the LoadBalancer IP assignment race condition, but the 120-second duration is a magic number that:
- May not be sufficient if DigitalOcean LB provisioning is slow
- Unnecessarily delays deployments when the LB is ready sooner
Consider adding a comment explaining why 120s was chosen, or consider a more robust approach using a null_resource with a retry loop that polls for the LB IP.
📝 Suggested documentation
resource "time_sleep" "wait_for_lb" {
+ # Wait for DigitalOcean to provision the LoadBalancer and assign an external IP.
+ # 120s is typically sufficient for DO LB provisioning; adjust if timeouts occur.
depends_on = [ helm_release.ingress_nginx ]
create_duration = "120s"
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```hcl
resource "time_sleep" "wait_for_lb" {
  # Wait for DigitalOcean to provision the LoadBalancer and assign an external IP.
  # 120s is typically sufficient for DO LB provisioning; adjust if timeouts occur.
  depends_on      = [helm_release.ingress_nginx]
  create_duration = "120s"
}
```
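If the fixed sleep ever proves too short or too long, a polling variant along the lines the comment suggests could look like this. This is an untested sketch: it assumes kubectl and cluster credentials are available where Terraform runs, uses the hashicorp/null provider, and assumes the ingress-nginx chart's default service name and namespace:

```hcl
resource "null_resource" "wait_for_lb_ip" {
  depends_on = [helm_release.ingress_nginx]

  provisioner "local-exec" {
    # Poll for up to ~5 minutes until the LoadBalancer IP appears.
    command = <<-EOT
      for i in $(seq 1 60); do
        ip=$(kubectl -n ingress-nginx get svc ingress-nginx-controller \
          -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)
        [ -n "$ip" ] && exit 0
        sleep 5
      done
      echo "LoadBalancer IP was never assigned" >&2
      exit 1
    EOT
  }
}
```

This trades the sleep's predictability for responsiveness: it returns as soon as the IP exists and fails loudly if it never does.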
@coderabbitai full review
✅ Actions performed: Full review triggered.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
terraform/dns.tf (1)
1-16: ⚠️ Potential issue | 🟠 Major
Unconditional count = 1 combined with try(..., null) will apply a null value.
local.lb_ip uses try(data.kubernetes_service_v1.ingress_nginx.status[0].load_balancer[0].ingress[0].ip, null). Previously the conditional count guarded against a null result; now that count = 1 is unconditional, if the LB IP is not yet populated (e.g., time_sleep.wait_for_lb expires before DigitalOcean assigns the IP, or the status path is momentarily missing), Terraform will attempt to create a DNS A record with value = null and fail with a confusing provider error instead of surfacing the root cause.
Either drop the null fallback so Terraform fails fast on the data source with a clear message, or reintroduce a precondition:
🛠 Proposed fix
```diff
 locals {
-  lb_ip = try(data.kubernetes_service_v1.ingress_nginx.status[0].load_balancer[0].ingress[0].ip, null)
+  lb_ip = data.kubernetes_service_v1.ingress_nginx.status[0].load_balancer[0].ingress[0].ip
 }
```
Or add a lifecycle precondition on digitalocean_record.gitlab:
```hcl
lifecycle {
  precondition {
    condition     = local.lb_ip != null
    error_message = "ingress-nginx LoadBalancer IP is not yet assigned; increase time_sleep.wait_for_lb."
  }
}
```
helm/gitlab/values.yaml (1)
63-82: ⚠️ Potential issue | 🔴 Critical
Use certmanager.install: false instead of installCertmanager: false to properly disable the bundled cert-manager subchart.
The top-level installCertmanager: false is not a recognized chart value and will be silently ignored, causing the bundled cert-manager to install alongside your own helm_release.cert_manager, potentially racing for CRDs and ClusterIssuers.
The correct key is certmanager.install: false (the existing global.ingress.configureCertmanager: false is also correct).
🛠 Proposed fix
```diff
 nginx-ingress:
   enabled: false
-installCertmanager: false
+certmanager:
+  install: false
```
♻️ Duplicate comments (1)
terraform/kubernetes.tf (1)
148-160: ⚠️ Potential issue | 🟠 Major
Replace the fixed sleep with an actual LoadBalancer IP guard.
A 120s sleep still doesn't guarantee status.loadBalancer.ingress[0].ip exists. Since terraform/dns.tf now creates digitalocean_record.gitlab unconditionally from local.lb_ip, a slow DigitalOcean LB can leave the DNS record with a null/invalid value or fail the apply. Prefer active polling/readiness or restore a guard around DNS creation.
Safer DNS guard option
```diff
 # terraform/dns.tf
 resource "digitalocean_record" "gitlab" {
-  count  = 1
+  count  = local.lb_ip != null ? 1 : 0
   domain = digitalocean_domain.main.name
   type   = "A"
   name   = var.gitlab_host
   value  = local.lb_ip
   ttl    = 300
 }
```
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 7ec81fc8-6be2-4972-9112-09e2a72b2263
📒 Files selected for processing (19)
.gitignore, helm/cluster-issuer/chart/.helmignore, helm/cluster-issuer/chart/Chart.yaml, helm/cluster-issuer/chart/templates/_helpers.tpl, helm/cluster-issuer/chart/templates/cluster-issuer.yaml, helm/cluster-issuer/chart/values.yaml, helm/cluster-issuer/values.yaml, helm/external-dns/values.yaml, helm/gitlab/values.yaml, helm/ingress-nginx/values.yaml, kubernetes/cluster-issuer.yaml, terraform/.terraform.lock.hcl, terraform/dns.tf, terraform/helm.tf, terraform/kubernetes.tf, terraform/postgres.tf, terraform/spaces.tf, terraform/variables.tf, terraform/versions.tf
💤 Files with no reviewable changes (1)
- kubernetes/cluster-issuer.yaml
📜 Review details
🧰 Additional context used
🪛 YAMLlint (1.38.0)
helm/cluster-issuer/chart/templates/cluster-issuer.yaml
[error] 4-4: too many spaces inside braces
(braces)
[error] 4-4: too many spaces inside braces
(braces)
[error] 7-7: too many spaces inside braces
(braces)
[error] 7-7: too many spaces inside braces
(braces)
[error] 8-8: too many spaces inside braces
(braces)
[error] 8-8: too many spaces inside braces
(braces)
[error] 10-10: too many spaces inside braces
(braces)
[error] 10-10: too many spaces inside braces
(braces)
[error] 14-14: too many spaces inside braces
(braces)
[error] 14-14: too many spaces inside braces
(braces)
🔇 Additional comments (14)
helm/cluster-issuer/chart/Chart.yaml (1)
24-24: appVersion still doesn't reflect the deployed cert-manager version. The chart renders a cert-manager.io/v1 ClusterIssuer for cert-manager 1.20.2 (per terraform/helm.tf), but appVersion: "1.16.0" doesn't track that. Align it with cert-manager, or drop appVersion since this chart only ships a CRD instance.
terraform/helm.tf (2)
25-38: cluster-issuer values.yaml still needs ${email} interpolation for this templatefile call to take effect. The templatefile(.../cluster-issuer/values.yaml, { email = var.email }) call on line 31 only substitutes if the values file uses ${email}. If it still contains a literal email: webmaster@example.com, var.email is silently ignored.
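A sketch of the two sides that must agree; the release name and chart path here are assumptions, while the templatefile call and the ${email} placeholder are what the comment describes:

```hcl
resource "helm_release" "cluster_issuer" {
  name  = "cluster-issuer"
  chart = "${path.module}/../helm/cluster-issuer/chart" # assumed path

  # Substitution only happens if helm/cluster-issuer/values.yaml contains
  # the placeholder line `email: ${email}` instead of a hard-coded address.
  values = [
    templatefile("${path.module}/../helm/cluster-issuer/values.yaml", {
      email = var.email
    })
  ]
}
```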
40-50: LGTM — prior unnecessary depends_on on cert_manager was correctly dropped. ingress-nginx and cert-manager can install in parallel; removing the ordering constraint here is the right call.
21-24: LGTM! The hashicorp/time provider addition is consistent with the new time_sleep.wait_for_lb usage in terraform/kubernetes.tf and the new lock entry in .terraform.lock.hcl.
88-106: LGTM — auto-generated lock entry matches declared constraint. The hashicorp/time lock entry (version = "0.13.1", constraints = "~> 0.13.1") is consistent with terraform/versions.tf.
terraform/variables.tf (2)
6-9: LGTM — separate scoped token for ExternalDNS. Using a dedicated do_dns_token (instead of reusing var.do_token) is good least-privilege practice; the ExternalDNS webhook only needs DNS-scoped permissions.
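For illustration, such a variable might be declared as follows (a sketch; only the name do_dns_token comes from the review):

```hcl
variable "do_dns_token" {
  description = "DigitalOcean API token scoped to DNS, used only by external-dns"
  type        = string
  sensitive   = true # keep the token out of plan/apply output
}
```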
22-40: Verify default changes are intentional and documented. Changing region (fra1 → ams3) and node_size (s-2vcpu-4gb → s-4vcpu-8gb) defaults will silently force a cluster rebuild and ~2× droplet cost for anyone not overriding these. Confirm this is intended, and consider noting the change in the repo README/CHANGELOG so existing consumers aren't surprised on the next terraform apply.
terraform/postgres.tf (2)
4-4: Confirm engine version rollback 18 → 17 is intentional. Downgrading a running DO managed Postgres cluster is not supported in-place; if a cluster was previously provisioned at 18, this change will fail the plan or force a destroy/recreate (data loss). If this is a net-new deployment, ignore; otherwise plan a managed migration path. GitLab 17.x (chart 9.10.3) is tested against PG 14–16; PG 17 support was added in recent GitLab versions, so verify compatibility with your chart version.
11-14: LGTM — max_locks_per_transaction = 256 is the documented GitLab requirement. Good addition; GitLab recommends raising this from the PG default of 64 to avoid lock-table exhaustion during large backup/restore and schema operations.
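The setting presumably lands through the provider's Postgres configuration resource; a sketch of that shape, assuming digitalocean_database_postgresql_config and its max_locks_per_transaction argument exist in the pinned provider version:

```hcl
resource "digitalocean_database_postgresql_config" "gitlab" {
  cluster_id = digitalocean_database_cluster.postgres.id

  # GitLab's documented requirement; the Postgres default of 64 can be
  # exhausted during large backup/restore and schema operations.
  max_locks_per_transaction = 256
}
```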
terraform/kubernetes.tf (4)
1-7: External DNS namespace wiring looks good. The namespace is explicitly ordered after the cluster and is referenced by the new external-dns secret.
37-48: The configuration is correct. The Helm chart in helm/external-dns/values.yaml (lines 7-11) explicitly maps the DO_TOKEN environment variable to the password key in the external-dns-do-secret secret, exactly as defined in the Terraform code. No action required.
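A sketch of the Terraform side of that contract (the secret name and password key come from the review; the resource label and namespace are assumptions):

```hcl
resource "kubernetes_secret_v1" "external_dns_do" {
  metadata {
    name      = "external-dns-do-secret"
    namespace = "external-dns" # assumed namespace name
  }

  data = {
    # external-dns's values map the DO_TOKEN env var to this key.
    password = var.do_dns_token
  }
}
```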
109-126: No active references found to the registry S3 secret. The search found zero active references to gitlab-s3-registry-secret in the configuration. The object storage in helm/gitlab/values.yaml is configured with gitlab-s3-main-secret, not the commented-out registry secret. Commenting out the resource is safe and will not cause registry pod failures.
> Likely an incorrect or invalid review comment.
69-71: No action needed: username and password are already correctly paired. GitLab's database connection uses both the username and password from the cluster's default identity (digitalocean_database_cluster.postgres.user and .password), so they originate from the same source. Authentication will not fail.
Note: A dedicated gitlab database user is created but remains unused; this is wasteful but not a functional problem.
> Likely an incorrect or invalid review comment.
helm/cluster-issuer/chart/templates/_helpers.tpl (1)
1-62: Helm helper templates look good. The naming, label, selector-label, and service account helper definitions follow standard Helm chart conventions and should compose cleanly with chart templates.
```hcl
resource "digitalocean_database_connection_pool" "main" {
  cluster_id = digitalocean_database_cluster.postgres.id
  name       = "gitlab"
  mode       = "transaction"
  size       = 10
  db_name    = digitalocean_database_cluster.postgres.database
}
```
🧩 Analysis chain
🏁 Scripts executed (Repository: clofour/gitlab-kubernetes):
- cd /tmp && find . -type f -name "*.tf" | head -20 (output length: 51)
- git ls-files | grep -E "\.tf$" (output length: 316)
- pwd (output length: 87)
- find . -type f -name "postgres.tf" -o -name "helm.tf" (output length: 112)
- cat terraform/postgres.tf (output length: 1198)
- cat terraform/helm.tf (output length: 3056)
🌐 Web query:
DigitalOcean Terraform provider digitalocean_database_cluster database attribute
💡 Result:
In the DigitalOcean Terraform provider, the digitalocean_database_cluster resource exports a read-only attribute named database. This attribute provides the name of the cluster's default database (typically "defaultdb" for new clusters). It is available after cluster creation and can be referenced in other resources or outputs. It is listed in the official Attributes Reference section of both the resource and data source documentation:
- Resource: database - Name of the cluster's default database.
- Data source: Same attribute for querying existing clusters.
Note that database is not a configurable argument; the cluster creates a default database automatically. Additional databases can be created using the separate digitalocean_database_db resource, which requires cluster_id and name arguments. Example usage:
```hcl
resource "digitalocean_database_cluster" "example" {
  name       = "example-cluster"
  engine     = "pg"
  version    = "15"
  size       = "db-s-1vcpu-1gb"
  region     = "nyc1"
  node_count = 1
}

output "default_database" {
  value = digitalocean_database_cluster.example.database
}
```
Citations:
- 1: https://registry.terraform.io/providers/digitalocean/digitalocean/latest/docs/resources/database_cluster
- 2: https://registry.terraform.io/providers/digitalocean/digitalocean/latest/docs/data-sources/database_cluster
- 3: https://docs.digitalocean.com/reference/terraform/reference/data-sources/database_cluster/
- 4: https://docs.digitalocean.com/reference/terraform/reference/resources/database_cluster
🏁 Script executed: grep -r "database" terraform/ | grep -i "default\|cluster" (Repository: clofour/gitlab-kubernetes; output length: 1763)
🌐 Web query:
DigitalOcean PgBouncer connection pool db_name routing behavior PostgreSQL
💡 Result:
In DigitalOcean Managed PostgreSQL Databases, PgBouncer connection pools are configured for a specific target database (specified by the "db" or "db_name" parameter when creating the pool) and user. Each pool provides dedicated backend connections exclusively to that one target database within the cluster. Clients must connect to a specific pool using its unique endpoint, host, and port (typically ending in 2506x). The connection string provided in the pool's "Connection details" has "database" set to the pool name (e.g., postgres://doadmin:pass@pool-host:25061/my-pool?...), but PgBouncer ignores the client-supplied db_name in the connection string. Instead, it always routes all connections through that pool to the fixed target database configured for the pool (e.g., "defaultdb"). There is no support for routing to different databases based on the db_name in the connection string within a single pool. To access a different database, create a separate connection pool configured for that database and connect clients to that pool's endpoint instead. This design allows allocating specific connection limits per database. PgBouncer pooling modes (transaction, session, statement) affect connection reuse but not database routing, which remains fixed per pool. Note: In transaction mode (default), avoid prepared statements, pg_dump, or session-state features like advisory locks, as connections are recycled between transactions.
Citations:
- 1: https://docs.digitalocean.com/products/databases/postgresql/how-to/manage-connection-pools/
- 2: https://www.digitalocean.com/docs/databases/postgresql/how-to/manage-connection-pools
- 3: https://www.digitalocean.com/community/tutorials/managed-databases-connection-pools-and-postgresql-benchmarking-using-pgbench
- 4: https://docs.digitalocean.com/products/databases/postgresql/how-to/manage-connection-pools
- 5: https://docs.digitalocean.com/reference/terraform/reference/resources/database_connection_pool/
- 6: https://docs.digitalocean.com/reference/ansible/reference/modules/database_connection_pool/
🌐 Web query:
DigitalOcean digitalocean_database_connection_pool db_name parameter does PgBouncer override client dbname
💡 Result:
No, PgBouncer in DigitalOcean's database connection pools does not override the client-specified dbname. Clients connect to the pool using a connection string where the database parameter is set to the pool's name (e.g., "postgres://doadmin:pass@host:25061/my_pool"). The pool is configured with a specific backend database via the db_name (or db) parameter during creation (e.g., "defaultdb"), which determines the target database the pool connects to. The client dbname selects which pool to connect to, but once connected, the pool proxies to the fixed backend database specified at pool creation. Clients cannot override the backend db_name to connect to a different database through the same pool; a separate pool must be created for each target database.
Citations:
- 1: https://docs.digitalocean.com/products/databases/postgresql/how-to/manage-connection-pools/
- 2: https://www.digitalocean.com/docs/databases/postgresql/how-to/manage-connection-pools
- 3: https://docs.digitalocean.com/reference/ansible/reference/modules/database_connection_pool/
- 4: https://docs.digitalocean.com/reference/terraform/reference/resources/database_connection_pool/
- 5: https://docs.digitalocean.com/products/databases/postgresql/how-to/manage-connection-pools
🏁 Script executed: grep -A 10 "resource \"digitalocean_database_connection_pool\"" terraform/postgres.tf (Repository: clofour/gitlab-kubernetes; output length: 434)
Critical: connection pool routes to the wrong database (defaultdb, not gitlab).
On digitalocean_database_cluster, the database attribute refers to the cluster's default database — defaultdb — not the gitlab database created via digitalocean_database_db.gitlab. As written, PgBouncer will forward all sessions from this pool to defaultdb, ignoring any client-supplied database specification.
GitLab's Helm values set postgres_database = digitalocean_database_db.gitlab.name, expecting connections to the gitlab database. However, DigitalOcean connection pools are fixed to a single backend database at the PgBouncer level — the pool's db_name determines the actual target and cannot be overridden by the client. Result: GitLab will connect to defaultdb instead of gitlab, causing migrations and database initialization to fail.
Fix: bind the pool to the gitlab database and optionally specify the user:
resource "digitalocean_database_connection_pool" "main" {
cluster_id = digitalocean_database_cluster.postgres.id
name = "gitlab"
mode = "transaction"
size = 10
- db_name = digitalocean_database_cluster.postgres.database
+ db_name = digitalocean_database_db.gitlab.name
+ user = digitalocean_database_user.gitlab.name
}Setting user explicitly binds the pool to the gitlab role for additional clarity and isolation; without it, the pool uses the inbound authenticating user (currently doadmin via Helm), which works but is less explicit.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```hcl
resource "digitalocean_database_connection_pool" "main" {
  cluster_id = digitalocean_database_cluster.postgres.id
  name       = "gitlab"
  mode       = "transaction"
  size       = 10
  db_name    = digitalocean_database_db.gitlab.name
  user       = digitalocean_database_user.gitlab.name
}
```
```diff
 buckets = toset([
   "artifacts", "lfs", "uploads", "packages",
-  "registry", "backups", "tmp", "ci-secure-files",
-  "dependency-proxy", "terraform-state", "pages"
+  # "registry", "backups", "tmp", # "ci-secure-files",
+  # "dependency-proxy", # "terraform-state", "pages"
 ])
```
🧩 Analysis chain
🏁 Scripts executed (Repository: clofour/gitlab-kubernetes):
- rg -nP -C2 '\b(registry|backups)\b' helm/gitlab/values.yaml terraform/ (output length: 1485)
- cat -n helm/gitlab/values.yaml | head -50 (output length: 1370)
- rg -nP '(registry\.enabled|global\.hosts)' helm/gitlab/values.yaml (output length: 51)
- rg -nP 'registry' helm/gitlab/values.yaml | head -20 (output length: 127)
- wc -l helm/gitlab/values.yaml (output length: 95)
- rg -nP 'registry' helm/gitlab/values.yaml (output length: 127)
- rg -nP 'enabled' helm/gitlab/values.yaml | grep -i registry (output length: 51)
- cat -n helm/gitlab/values.yaml (output length: 2114)
- sed -n '109,123p' terraform/kubernetes.tf (output length: 614)
Disable the registry service explicitly or restore its S3 bucket and secret.
The registry bucket and S3 secret are commented out in terraform/spaces.tf and terraform/kubernetes.tf, but helm/gitlab/values.yaml still configures global.hosts.registry.name and does not explicitly disable the registry subchart (which defaults to enabled in the GitLab Helm chart). This creates an inconsistent state: the registry pod will be scheduled but will fail to mount object storage since the bucket no longer exists.
To resolve, either:
- Explicitly disable registry in the Helm values (registry.enabled: false and remove global.hosts.registry), following the same pattern used for other disabled services (postgresql, redis, gitlab-runner, prometheus); a sketch follows this list
- Or uncomment the registry bucket and S3 secret to restore the service
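A sketch of the first option driven from Terraform, assuming the helm provider 2.x set-block syntax and a release named gitlab:

```hcl
resource "helm_release" "gitlab" {
  # existing chart and values configuration elided

  # Disable the bundled registry until its bucket and S3 secret return.
  set {
    name  = "registry.enabled"
    value = "false"
  }
}
```

Equivalently, registry.enabled: false could be set directly in helm/gitlab/values.yaml alongside the other disabled subcharts.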