1 change: 1 addition & 0 deletions .gitignore
@@ -22,3 +22,4 @@ dist
.react-email

.secrets.json
certs/
61 changes: 45 additions & 16 deletions deployment/docker-compose.yaml
@@ -23,6 +23,7 @@ services:
retries: 10
start_period: 40s
interval: 10s

planetscale:
container_name: planetscale
image: ghcr.io/mattrobenolt/ps-http-sim:v0.0.12
@@ -39,6 +40,7 @@ services:
condition: service_healthy
ports:
- 3900:3900

apiv2_lb:
container_name: apiv2_lb
image: nginx:1.29.0
@@ -49,6 +51,7 @@ services:
ports:
- 2112:2112
- 7070:7070

apiv2:
deploy:
replicas: 3
@@ -68,15 +71,13 @@ services:
UNKEY_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true"
UNKEY_CLICKHOUSE_URL: "clickhouse://default:password@clickhouse:9000?secure=false&skip_verify=true"
UNKEY_CHPROXY_AUTH_TOKEN: "chproxy-test-token-123"
UNKEY_OTEL: true
OTEL_EXPORTER_OTLP_ENDPOINT: "http://otel:4318"
OTEL_EXPORTER_OTLP_PROTOCOL: "http/protobuf"
UNKEY_OTEL: false
🧹 Nitpick (assertive)

Quote boolean env to avoid YAML pitfalls.

Make intent explicit.

Apply:

-      UNKEY_OTEL: false
+      UNKEY_OTEL: "false"
🧰 Tools
🪛 Checkov (3.2.334)

[low] 73-74: Base64 High Entropy String

(CKV_SECRET_6)

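The pitfall is easy to reproduce without a full parser: YAML 1.1 reserves a small set of bare scalars as booleans, so a minimal sketch (hypothetical helper, not part of the PR) shows which values need quoting:

```shell
# Sketch: YAML 1.1 treats these bare scalars as booleans, not strings.
# A quoted value ("false") never matches, so it stays a string.
is_yaml_bool() {
    case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in
        true|false|yes|no|on|off|y|n) return 0 ;;
        *) return 1 ;;
    esac
}

is_yaml_bool false     && echo "bare false parses as a boolean"
is_yaml_bool '"false"' || echo 'quoted "false" stays a string'
```

YAML 1.2 narrows the boolean set to `true`/`false`, but many parsers still accept the 1.1 forms, so quoting is the only portable way to guarantee a string.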

VAULT_S3_URL: "http://s3:3902"
VAULT_S3_BUCKET: "vault"
VAULT_S3_ACCESS_KEY_ID: "minio_root_user"
VAULT_S3_ACCESS_KEY_SECRET: "minio_root_password"
VAULT_MASTER_KEYS: "Ch9rZWtfMmdqMFBJdVhac1NSa0ZhNE5mOWlLSnBHenFPENTt7an5MRogENt9Si6wms4pQ2XIvqNSIgNpaBenJmXgcInhu6Nfv2U="
Comment on lines 75 to 79
⚠️ Potential issue

Unify S3 bucket name or pre-create both to avoid 404s.

apiv2/agent use "vault" while gw/ctrl use "acme-vault". Either standardize or auto-create both buckets at MinIO startup.

Apply this diff to standardize on “vault”:

-      UNKEY_VAULT_S3_BUCKET: "acme-vault"
+      UNKEY_VAULT_S3_BUCKET: "vault"
@@
-      UNKEY_VAULT_S3_BUCKET: "acme-vault"
+      UNKEY_VAULT_S3_BUCKET: "vault"

(The apiv2 and agent services already use "vault", so only the gw/ctrl occurrences need to change.)

Alternatively, keep both names and pre‑create them at MinIO:

# in s3.environment (add)
MINIO_DEFAULT_BUCKETS: "vault,acme-vault"

Also applies to: 108-113, 228-233, 261-266

🧰 Tools
🪛 Gitleaks (8.27.2)

[high] 79-79: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

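If you take the pre-create route, the bucket list is just a comma-separated scalar; a sketch of how a startup hook would iterate it (the real `mc mb --ignore-existing` call is stubbed out, and `MINIO_DEFAULT_BUCKETS` is the Bitnami-style variable named above, which the official MinIO image may not honor):

```shell
# Sketch: split a comma-separated bucket list the way an init hook would
# before creating each bucket.
MINIO_DEFAULT_BUCKETS="vault,acme-vault"

created=""
IFS=','
for bucket in $MINIO_DEFAULT_BUCKETS; do
    created="$created $bucket"    # stand-in for: mc mb --ignore-existing "local/$bucket"
done
unset IFS
echo "would create:$created"
```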

🛠️ Refactor suggestion

⚠️ Potential issue

Hardcoded master keys/tokens in VCS (even for dev) — move to env files.

Store secrets in a local .env (gitignored) and reference via env_file or placeholders; avoid committing real values.

Minimal change:

-      VAULT_MASTER_KEYS: "Ch9rZ..."
+      VAULT_MASTER_KEYS: "${VAULT_MASTER_KEYS:?set in .env}"
@@
-      UNKEY_VAULT_MASTER_KEYS: "Ch9rZ..."
+      UNKEY_VAULT_MASTER_KEYS: "${UNKEY_VAULT_MASTER_KEYS:?set in .env}"

Add an example .env.example with dummy values and keep .env in .gitignore.

Also applies to: 232-232, 265-265

🧰 Tools
🪛 Gitleaks (8.27.2)

[high] 79-79: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

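The `${VAR:?message}` form in the suggested diff is plain POSIX parameter expansion, so its fail-fast behavior can be sketched outside compose:

```shell
# Sketch: ${VAR:?message} aborts expansion when VAR is unset or empty,
# which is what makes compose refuse to start without the .env entry.
unset VAULT_MASTER_KEYS
if out=$(sh -c ': "${VAULT_MASTER_KEYS:?set in .env}"' 2>&1); then
    result="expanded"
else
    result="refused: $out"
fi
echo "$result"

# Once the variable is provided (e.g. via .env), startup proceeds:
VAULT_MASTER_KEYS=dummy sh -c ': "${VAULT_MASTER_KEYS:?set in .env}"' && echo "starts once set"
```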

# UNKEY_PROMETHEUS_PORT: 2112

redis:
container_name: redis
image: redis:8.0
@@ -88,6 +89,7 @@ services:
retries: 5
start_period: 10s
interval: 5s

agent:
container_name: agent
command: ["/usr/local/bin/unkey", "agent", "--config", "config.docker.json"]
@@ -109,6 +111,7 @@ services:
VAULT_S3_ACCESS_KEY_SECRET: "minio_root_password"
VAULT_MASTER_KEYS: "Ch9rZWtfMmdqMFBJdVhac1NSa0ZhNE5mOWlLSnBHenFPENTt7an5MRogENt9Si6wms4pQ2XIvqNSIgNpaBenJmXgcInhu6Nfv2U="
CLICKHOUSE_URL: "clickhouse://default:password@clickhouse:9000"

clickhouse:
build:
context: ..
@@ -160,6 +163,7 @@ services:
retries: 10
start_period: 15s
interval: 5s

api:
container_name: api
build:
@@ -200,14 +204,32 @@ services:
container_name: gw
command: ["run", "gw"]
ports:
- "6060:6060"
- "80:80"
- "443:443"
depends_on:
Comment on lines +207 to 209
🧹 Nitpick (assertive)

Host ports 80/443 can collide with local services.

Consider mapping to 8080/8443 for local dev or make host ports configurable via .env.

Example:

-      - "80:80"
-      - "443:443"
+      - "${GW_HTTP_PORT:-8080}:80"
+      - "${GW_HTTPS_PORT:-8443}:443"

Also applies to: 214-221

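Compose resolves `${GW_HTTP_PORT:-8080}` with shell-style defaulting, so the fallback behavior is easy to sanity-check (the variable names are the ones proposed above):

```shell
# Sketch: ':-' substitutes the default only when the variable is unset
# or empty, so `GW_HTTP_PORT=9999 docker compose up` overrides it.
unset GW_HTTP_PORT
default_mapping="${GW_HTTP_PORT:-8080}:80"
GW_HTTP_PORT=9999
override_mapping="${GW_HTTP_PORT:-8080}:80"
echo "$default_mapping $override_mapping"
```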

- mysql
volumes:
- ./certs:/certs
environment:
UNKEY_HTTP_PORT: 6060
UNKEY_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/partition_001?parseTime=true"
UNKEY_HTTP_PORT: 80
UNKEY_HTTPS_PORT: 443

UNKEY_TLS_ENABLED: true
UNKEY_DEFAULT_CERT_DOMAIN: "unkey.local"
UNKEY_MAIN_DOMAIN: "unkey.local"
UNKEY_CTRL_ADDR: "http://ctrl:7091"
UNKEY_REQUIRE_LOCAL_CERT: true

UNKEY_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/partition_001?parseTime=true&interpolateParams=true"
UNKEY_KEYS_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
Comment on lines +223 to +224
⚠️ Potential issue

DSNs reference databases that may not exist (partition_001, hydra).

Unless migrations create them, MySQL will error on connect. Either create DBs on init or point to “unkey”.

Apply this diff (Option A: single DB for local):

-      UNKEY_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/partition_001?parseTime=true&interpolateParams=true"
-      UNKEY_KEYS_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
+      UNKEY_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
+      UNKEY_KEYS_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
@@
-      UNKEY_DATABASE_HYDRA: "unkey:password@tcp(mysql:3306)/hydra?parseTime=true&interpolateParams=true"
-      UNKEY_DATABASE_PARTITION: "unkey:password@tcp(mysql:3306)/partition_001?parseTime=true&interpolateParams=true"
+      UNKEY_DATABASE_HYDRA: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
+      UNKEY_DATABASE_PARTITION: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"

Option B (keep DSNs): add an init script to create DBs:

-- deployment/mysql-init.sql
CREATE DATABASE IF NOT EXISTS hydra;
CREATE DATABASE IF NOT EXISTS partition_001;
GRANT ALL ON hydra.* TO 'unkey'@'%';
GRANT ALL ON partition_001.* TO 'unkey'@'%';

Then mount it under mysql:

  - ./deployment/mysql-init.sql:/docker-entrypoint-initdb.d/01-init.sql:ro

Also applies to: 250-252

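For a quick preflight under either option, the database name can be pulled out of a go-sql-driver/mysql DSN with plain parameter expansion (hypothetical helper, not part of the PR):

```shell
# Sketch: extract the database name from a Go MySQL DSN, e.g. to check
# it exists before the service connects.
dsn_db() {
    db="${1##*/}"                # drop everything through the final '/'
    printf '%s\n' "${db%%\?*}"   # drop the '?parseTime=...' query string
}

dsn_db "unkey:password@tcp(mysql:3306)/partition_001?parseTime=true"
dsn_db "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
```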

UNKEY_CLICKHOUSE_URL: "clickhouse://default:password@clickhouse:9000?secure=false&skip_verify=true"
UNKEY_OTEL: true
UNKEY_REDIS_URL: "redis://redis:6379"

UNKEY_VAULT_S3_URL: "http://s3:3902"
UNKEY_VAULT_S3_BUCKET: "acme-vault"
UNKEY_VAULT_S3_ACCESS_KEY_ID: "minio_root_user"
UNKEY_VAULT_S3_ACCESS_KEY_SECRET: "minio_root_password"
UNKEY_VAULT_MASTER_KEYS: "Ch9rZWtfMmdqMFBJdVhac1NSa0ZhNE5mOWlLSnBHenFPENTt7an5MRogENt9Si6wms4pQ2XIvqNSIgNpaBenJmXgcInhu6Nfv2U="

ctrl:
build:
@@ -221,23 +243,27 @@ services:
- "7091:7091"
depends_on:
- mysql
# - metald-aio
- otel
- s3
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
# Database configuration - use existing mysql service
UNKEY_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true"
UNKEY_DATABASE_HYDRA: "unkey:password@tcp(mysql:3306)/hydra?parseTime=true"
UNKEY_DATABASE_PARTITION: "unkey:password@tcp(mysql:3306)/partition_001?parseTime=true"
UNKEY_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
UNKEY_DATABASE_HYDRA: "unkey:password@tcp(mysql:3306)/hydra?parseTime=true&interpolateParams=true"
UNKEY_DATABASE_PARTITION: "unkey:password@tcp(mysql:3306)/partition_001?parseTime=true&interpolateParams=true"

# Control plane configuration
UNKEY_HTTP_PORT: "7091"
UNKEY_METALD_ADDRESS: "http://metald-aio:8080"
UNKEY_METALD_ADDRESS: "http://metald-aio:8090"
UNKEY_METALD_BACKEND: "docker"
UNKEY_DEFAULT_DOMAIN: "unkey.local"
UNKEY_DOCKER_RUNNING: "true"

UNKEY_VAULT_S3_URL: "http://s3:3902"
UNKEY_VAULT_S3_BUCKET: "vault"
UNKEY_VAULT_S3_BUCKET: "acme-vault"
UNKEY_VAULT_S3_ACCESS_KEY_ID: "minio_root_user"
UNKEY_VAULT_S3_ACCESS_KEY_SECRET: "minio_root_password"
UNKEY_VAULT_MASTER_KEYS: "Ch9rZWtfMmdqMFBJdVhac1NSa0ZhNE5mOWlLSnBHenFPENTt7an5MRogENt9Si6wms4pQ2XIvqNSIgNpaBenJmXgcInhu6Nfv2U="

otel:
image: grafana/otel-lgtm:0.11.7
container_name: otel
@@ -246,6 +272,7 @@ services:
- 3001:3000
- 4317:4317
- 4318:4318

prometheus:
image: prom/prometheus:v3.5.0
container_name: prometheus
@@ -255,6 +282,7 @@ services:
- ./config/prometheus.yml:/etc/prometheus/prometheus.yml
depends_on:
- apiv2

dashboard:
build:
context: ..
@@ -281,6 +309,7 @@ services:
NODE_ENV: "production"
# Bootstrap workspace/API IDs
# Reading from env file, no override necessary

# Unkey Deploy Services - All-in-one development container with all 4 services
#
#############################################################################
156 changes: 156 additions & 0 deletions deployment/setup-wildcard-dns.sh
@@ -0,0 +1,156 @@
#!/bin/bash

# Setup wildcard DNS for *.unkey.local using dnsmasq

set -e

# Detect OS
OS="unknown"
if [[ "$OSTYPE" == "darwin"* ]]; then
OS="macos"
elif [[ "$OSTYPE" == "linux-gnu"* ]]; then
OS="linux"
else
echo "Unsupported OS: $OSTYPE"
exit 1
fi

echo "Detected OS: $OS"
echo ""
echo "This script will set up dnsmasq to resolve *.unkey.local to 127.0.0.1"
echo "This allows you to use any subdomain like my-deployment.unkey.local"
echo ""

# Check if dnsmasq is installed
if command -v dnsmasq &> /dev/null; then
echo "dnsmasq is already installed"
else
echo "dnsmasq is not installed"
echo ""
if [[ "$OS" == "macos" ]]; then
echo "Would you like to install dnsmasq using Homebrew? (y/n)"
else
echo "Would you like to install dnsmasq using your package manager? (y/n)"
fi
read -r response
if [[ "$response" != "y" ]]; then
echo "Installation cancelled"
exit 1
fi

Comment on lines +30 to +40
🧹 Nitpick (assertive)

Make prompts scriptable and idempotent.

Accept y/yes/Y and support non-interactive mode via UNKEY_NONINTERACTIVE=1 to unblock CI/devcontainers.

-    read -r response
-    if [[ "$response" != "y" ]]; then
+    if [[ "${UNKEY_NONINTERACTIVE:-0}" == "1" ]]; then
+        response="y"
+    else
+        read -r response
+    fi
+    shopt -s nocasematch
+    if [[ ! "$response" =~ ^y(es)?$ ]]; then
         echo "Installation cancelled"
         exit 1
     fi
+    shopt -u nocasematch
@@
-    read -r response
+    if [[ "${UNKEY_NONINTERACTIVE:-0}" == "1" ]]; then response="y"; else read -r response; fi
@@
-    read -r response
+    if [[ "${UNKEY_NONINTERACTIVE:-0}" == "1" ]]; then response="y"; else read -r response; fi

Also applies to: 88-95, 118-129

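Pulled together, the suggestion amounts to one small helper; `UNKEY_NONINTERACTIVE` and the accepted answers come from the review, the rest is a sketch:

```shell
# Sketch: one prompt helper that auto-accepts in CI and normalizes
# y/Y/yes/YES when interactive.
ask_yes() {
    if [ "${UNKEY_NONINTERACTIVE:-0}" = "1" ]; then
        response="y"
    else
        read -r response
    fi
    case "$(printf '%s' "$response" | tr '[:upper:]' '[:lower:]')" in
        y|yes) return 0 ;;
        *) return 1 ;;
    esac
}

# A non-interactive run auto-accepts without touching stdin:
UNKEY_NONINTERACTIVE=1 ask_yes && echo "accepted"
```

Each `read -r response` site in the script would then collapse to `ask_yes || { echo "Installation cancelled"; exit 1; }`.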

# Install dnsmasq based on OS
if [[ "$OS" == "macos" ]]; then
if ! command -v brew &> /dev/null; then
echo "Homebrew is not installed. Please install Homebrew first."
exit 1
fi
echo "Installing dnsmasq with Homebrew..."
brew install dnsmasq
else
# Linux installation
if command -v apt-get &> /dev/null; then
echo "Installing dnsmasq with apt..."
sudo apt-get update && sudo apt-get install -y dnsmasq
elif command -v yum &> /dev/null; then
echo "Installing dnsmasq with yum..."
sudo yum install -y dnsmasq
elif command -v dnf &> /dev/null; then
echo "Installing dnsmasq with dnf..."
sudo dnf install -y dnsmasq
elif command -v pacman &> /dev/null; then
echo "Installing dnsmasq with pacman..."
sudo pacman -S --noconfirm dnsmasq
else
echo "Could not detect package manager. Please install dnsmasq manually."
exit 1
fi
fi
fi

echo ""
echo "Configuring dnsmasq for *.unkey.local..."

if [[ "$OS" == "macos" ]]; then
# macOS configuration
DNSMASQ_CONF="$(brew --prefix)/etc/dnsmasq.conf"

# Backup existing config if it exists
if [[ -f "$DNSMASQ_CONF" ]]; then
cp "$DNSMASQ_CONF" "$DNSMASQ_CONF.backup.$(date +%Y%m%d_%H%M%S)"
echo "Backed up existing config"
fi

# Add our configuration
echo "address=/unkey.local/127.0.0.1" > "$DNSMASQ_CONF"
echo "Configured dnsmasq to resolve *.unkey.local to 127.0.0.1"

# Start dnsmasq service
echo ""
echo "Would you like to start dnsmasq as a service? (y/n)"
read -r response
if [[ "$response" == "y" ]]; then
sudo brew services start dnsmasq
echo "dnsmasq service started"
fi

# Setup resolver
echo ""
echo "Setting up macOS resolver for .unkey.local domain..."
sudo mkdir -p /etc/resolver
echo "nameserver 127.0.0.1" | sudo tee /etc/resolver/unkey.local > /dev/null
echo "Resolver configured"

else
# Linux configuration
DNSMASQ_CONF="/etc/dnsmasq.d/unkey.local.conf"

# Create configuration in dnsmasq.d directory (included by default in most dnsmasq setups)
# This keeps our config separate from the main dnsmasq configuration
{
echo "# Unkey local development DNS configuration"
echo "# Resolve all *.unkey.local domains to localhost"
echo "address=/unkey.local/127.0.0.1"
} | sudo tee "$DNSMASQ_CONF" > /dev/null
echo "Configured dnsmasq to resolve *.unkey.local to 127.0.0.1"

# Restart dnsmasq service
echo ""
echo "Would you like to restart dnsmasq service? (y/n)"
read -r response
if [[ "$response" == "y" ]]; then
if systemctl is-active --quiet dnsmasq; then
sudo systemctl restart dnsmasq
echo "dnsmasq service restarted"
else
sudo systemctl start dnsmasq
sudo systemctl enable dnsmasq
echo "dnsmasq service started and enabled"
fi
fi

# Configure systemd-resolved or NetworkManager if present
if systemctl is-active --quiet systemd-resolved; then
echo ""
echo "systemd-resolved detected. You may need to configure it to use dnsmasq."
echo "Add 'DNS=127.0.0.1' to /etc/systemd/resolved.conf and restart systemd-resolved"
fi
fi

echo ""
echo "Setup complete!"
echo ""
echo "Test your setup with:"
echo " dig test.unkey.local"
echo " ping my-deployment.unkey.local"
echo " curl http://anything.unkey.local"
echo ""
echo "To undo these changes:"
if [[ "$OS" == "macos" ]]; then
echo " sudo brew services stop dnsmasq"
echo " sudo rm /etc/resolver/unkey.local"
echo " brew uninstall dnsmasq # optional"
else
echo " sudo systemctl stop dnsmasq"
echo " sudo rm $DNSMASQ_CONF"
echo " sudo systemctl restart dnsmasq"
fi
4 changes: 4 additions & 0 deletions go/apps/ctrl/config.go
@@ -85,6 +85,10 @@ type Config struct
Acme AcmeConfig

DefaultDomain string

// IsRunningDocker indicates whether this service is running inside a Docker container
// Affects host address resolution for container-to-container communication
IsRunningDocker bool
}

func (c Config) Validate() error {
8 changes: 4 additions & 4 deletions go/apps/ctrl/services/deployment/backend_adapter.go
@@ -60,8 +60,8 @@ type LocalBackendAdapter struct
logger logging.Logger
}

func NewLocalBackendAdapter(backendType string, logger logging.Logger) (*LocalBackendAdapter, error) {
backend, err := backends.NewBackend(backendType, logger)
func NewLocalBackendAdapter(backendType string, logger logging.Logger, isRunningDocker bool) (*LocalBackendAdapter, error) {
backend, err := backends.NewBackend(backendType, logger, isRunningDocker)
if err != nil {
return nil, err
}
@@ -117,10 +117,10 @@ func (f *LocalBackendAdapter) Name() string {
}

// NewDeploymentBackend creates the appropriate backend based on configuration
func NewDeploymentBackend(metalDClient metaldv1connect.VmServiceClient, fallbackType string, logger logging.Logger) (DeploymentBackend, error) {
func NewDeploymentBackend(metalDClient metaldv1connect.VmServiceClient, fallbackType string, logger logging.Logger, isRunningDocker bool) (DeploymentBackend, error) {
if fallbackType != "" {
logger.Info("using local deployment backend", "type", fallbackType)
return NewLocalBackendAdapter(fallbackType, logger)
return NewLocalBackendAdapter(fallbackType, logger, isRunningDocker)
}

if metalDClient == nil {