Merged
106 changes: 106 additions & 0 deletions go/apps/krane/backend/docker/gateway_create.go
@@ -0,0 +1,106 @@
package docker

import (
	"context"
	"fmt"

	"connectrpc.com/connect"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/network"
	"github.com/docker/go-connections/nat"
	kranev1 "github.com/unkeyed/unkey/go/gen/proto/krane/v1"
)

// CreateGateway creates containers for a gateway with the specified replica count.
//
// Creates multiple containers with shared labels, dynamic port mapping to port 8040,
// and resource limits. Returns GATEWAY_STATUS_PENDING as containers may not be
// immediately ready.
func (d *docker) CreateGateway(ctx context.Context, req *connect.Request[kranev1.CreateGatewayRequest]) (*connect.Response[kranev1.CreateGatewayResponse], error) {
	gateway := req.Msg.GetGateway()
	d.logger.Info("creating gateway",
		"gateway_id", gateway.GetGatewayId(),
		"image", gateway.GetImage(),
	)

	// Ensure image exists locally (pull if not present)
	if err := d.ensureImageExists(ctx, gateway.GetImage()); err != nil {
		return nil, connect.NewError(connect.CodeInternal,
			fmt.Errorf("failed to ensure image exists: %w", err))
	}

	// Configure port mapping
	exposedPorts := nat.PortSet{
		"8040/tcp": struct{}{},
	}

	portBindings := nat.PortMap{
		"8040/tcp": []nat.PortBinding{
			{
				HostIP:   "0.0.0.0",
				HostPort: "0", // Docker will assign a random available port
			},
		},
	}

	// Configure resource limits
	cpuNanos := int64(gateway.GetCpuMillicores()) * 1_000_000      // Convert millicores to NanoCPUs (1 millicore = 1e6 NanoCPUs)
	memoryBytes := int64(gateway.GetMemorySizeMib()) * 1024 * 1024 //nolint:gosec // Intentional conversion

	//nolint:exhaustruct // Docker SDK types have many optional fields
	containerConfig := &container.Config{
		Image: gateway.GetImage(),
		Labels: map[string]string{
			"unkey.gateway.id": gateway.GetGatewayId(),
			"unkey.managed.by": "krane",
		},
		ExposedPorts: exposedPorts,
		Env: []string{
			fmt.Sprintf("UNKEY_WORKSPACE_ID=%s", gateway.GetWorkspaceId()),
			fmt.Sprintf("UNKEY_GATEWAY_ID=%s", gateway.GetGatewayId()),
			fmt.Sprintf("UNKEY_IMAGE=%s", gateway.GetImage()),
		},
	}

	//nolint:exhaustruct // Docker SDK types have many optional fields
	hostConfig := &container.HostConfig{
		PortBindings: portBindings,
		RestartPolicy: container.RestartPolicy{
			Name: "unless-stopped",
		},
		Resources: container.Resources{
			NanoCPUs: cpuNanos,
			Memory:   memoryBytes,
		},
	}

	//nolint:exhaustruct // Docker SDK types have many optional fields
	networkConfig := &network.NetworkingConfig{}

	// Create container
	for i := range req.Msg.GetGateway().GetReplicas() {
		//nolint:exhaustruct // Docker SDK types have many optional fields
		resp, err := d.client.ContainerCreate(
			ctx,
			containerConfig,
			hostConfig,
			networkConfig,
			nil,
			fmt.Sprintf("%s-%d", gateway.GetGatewayId(), i),
		)
		if err != nil {
			return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to create container: %w", err))
		}

		//nolint:exhaustruct // Docker SDK types have many optional fields
		err = d.client.ContainerStart(ctx, resp.ID, container.StartOptions{})
		if err != nil {
			return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to start container: %w", err))
		}
	}

	return connect.NewResponse(&kranev1.CreateGatewayResponse{
		Status: kranev1.GatewayStatus_GATEWAY_STATUS_PENDING,
	}), nil
Comment on lines +19 to +105
⚠️ Potential issue | 🔴 Critical

Fix compilation error in replica loop and add nil guard for gateway

The overall shape of CreateGateway looks good: one image existence check, shared configs for ports/resources, and per‑replica container creation & start.

Two critical issues must be fixed:

  1. Nil gateway can panic

gateway := req.Msg.GetGateway() can be nil if the client sends a request without the embedded message set. Calling gateway.GetGatewayId()/GetImage() will then panic the handler.

Adding a fast validation guard and returning INVALID_ARGUMENT (or similar) avoids a crash:

-	gateway := req.Msg.GetGateway()
-	d.logger.Info("creating gateway",
+	gateway := req.Msg.GetGateway()
+	if gateway == nil {
+		return nil, connect.NewError(
+			connect.CodeInvalidArgument,
+			fmt.Errorf("gateway is required"),
+		)
+	}
+
+	d.logger.Info("creating gateway",
 		"gateway_id", gateway.GetGatewayId(),
 		"image", gateway.GetImage(),
 	)
  2. Replica loop cannot compile: GetReplicas() is uint32, not iterable

The proto defines uint32 replicas = 5; in GatewayRequest. In Go, range only works on arrays, slices, maps, strings, and channels—not scalar integers.

The current code:

for i := range req.Msg.GetGateway().GetReplicas() {

will fail to compile. Use an explicit numeric loop instead:

-	// Create container
-
-	for i := range req.Msg.GetGateway().GetReplicas() {
+	// Create containers (one per replica)
+	replicas := gateway.GetReplicas() // uint32
+	for i := uint32(0); i < replicas; i++ {
 		//nolint:exhaustruct // Docker SDK types have many optional fields
 		resp, err := d.client.ContainerCreate(
 			ctx,
 			containerConfig,
 			hostConfig,
 			networkConfig,
 			nil,
-			fmt.Sprintf("%s-%d", gateway.GetGatewayId(), i),
+			fmt.Sprintf("%s-%d", gateway.GetGatewayId(), i),
 		)
🤖 Prompt for AI Agents
In go/apps/krane/backend/docker/gateway_create.go around lines 19-105, add a nil
check for gateway right after retrieving it from the request and return a
connect.NewError(connect.CodeInvalidArgument, fmt.Errorf("gateway must be
provided")) if nil; move any uses of gateway.Get... (including the initial
d.logger.Info call) to after this guard so we don't panic. Replace the invalid
range over GetReplicas() with a numeric loop such as: for i := 0; i <
int(gateway.GetReplicas()); i++ { ... } (use int conversion for
indexing/naming), leaving the container creation/start logic unchanged and
returning the same response on success.

}
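For reference, the resource-limit conversions used in CreateGateway (millicores to Docker `NanoCPUs`, MiB to bytes) can be checked numerically. A small standalone sketch (helper names are illustrative, not from the PR):

```go
package main

import "fmt"

// Docker's NanoCPUs field is expressed in units of 1e-9 CPUs, so one
// millicore (1e-3 CPUs) equals 1_000_000 NanoCPUs; memory is plain bytes.
func toNanoCPUs(millicores uint32) int64 { return int64(millicores) * 1_000_000 }

func mibToBytes(mib uint32) int64 { return int64(mib) * 1024 * 1024 }

func main() {
	fmt.Println(toNanoCPUs(250)) // a 250m CPU request
	fmt.Println(mibToBytes(512)) // a 512 MiB memory limit
}
```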
48 changes: 48 additions & 0 deletions go/apps/krane/backend/docker/gateway_delete.go
@@ -0,0 +1,48 @@
package docker

import (
	"context"
	"fmt"

	"connectrpc.com/connect"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/filters"
	kranev1 "github.com/unkeyed/unkey/go/gen/proto/krane/v1"
)

// DeleteGateway removes all containers for a gateway.
//
// Finds containers by gateway ID label and forcibly removes them with
// volumes and network links to ensure complete cleanup.
func (d *docker) DeleteGateway(ctx context.Context, req *connect.Request[kranev1.DeleteGatewayRequest]) (*connect.Response[kranev1.DeleteGatewayResponse], error) {
	gatewayID := req.Msg.GetGatewayId()

	d.logger.Info("getting gateway", "gateway_id", gatewayID)

	containers, err := d.client.ContainerList(ctx, container.ListOptions{
		Size:   false,
		Latest: false,
		Since:  "",
		Before: "",
		Limit:  0,
		All:    true,
		Filters: filters.NewArgs(
			filters.Arg("label", fmt.Sprintf("unkey.gateway.id=%s", gatewayID)),
		),
	})
	if err != nil {
		return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to list containers: %w", err))
	}

	for _, c := range containers {
		err := d.client.ContainerRemove(ctx, c.ID, container.RemoveOptions{
			RemoveVolumes: true,
			RemoveLinks:   true,
			Force:         true,
		})
		if err != nil {
			return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to remove container: %w", err))
		}
	}
	return connect.NewResponse(&kranev1.DeleteGatewayResponse{}), nil
}
Comment on lines +13 to +48
⚠️ Potential issue | 🟡 Minor

Fix the misleading log message and consider best-effort error handling for idempotent semantics

The deletion flow (list by unkey.gateway.id label with All: true, then ContainerRemove with Force and RemoveVolumes) is sound.

Two small improvements:

  1. The log message says "getting gateway" in a delete handler; change it to "deleting gateway" to match the operation.
  2. Currently, any ContainerRemove error aborts. If a container is already gone between ContainerList and ContainerRemove, it surfaces as an internal error. Optionally, treat "not found" as non-fatal and continue deleting other containers for better idempotency.

Suggested changes:

-	d.logger.Info("getting gateway", "gateway_id", gatewayID)
+	d.logger.Info("deleting gateway", "gateway_id", gatewayID)
@@
-	for _, c := range containers {
-		err := d.client.ContainerRemove(ctx, c.ID, container.RemoveOptions{
+	for _, c := range containers {
+		err := d.client.ContainerRemove(ctx, c.ID, container.RemoveOptions{
 			RemoveVolumes: true,
 			RemoveLinks:   true,
 			Force:         true,
 		})
 		if err != nil {
-			return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to remove container: %w", err))
+			// Optionally treat "container not found" as success and continue:
+			// import "github.com/docker/docker/errdefs"
+			// if errdefs.IsNotFound(err) { continue }
+			return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to remove container: %w", err))
 		}
 	}
🤖 Prompt for AI Agents
In go/apps/krane/backend/docker/gateway_delete.go around lines 13 to 48, the
handler logs "getting gateway" but is performing a delete and also treats any
ContainerRemove error as fatal; change the log entry to "deleting gateway" and
update the ContainerRemove error handling so that if the error is a Docker "not
found" error (use cerrdefs.IsNotFound or the appropriate client helper) you
treat it as non-fatal and continue deleting other containers, while still
returning a connect internal error for any other failures.

65 changes: 65 additions & 0 deletions go/apps/krane/backend/docker/gateway_get.go
@@ -0,0 +1,65 @@
package docker

import (
	"context"
	"fmt"

	"connectrpc.com/connect"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/filters"
	kranev1 "github.com/unkeyed/unkey/go/gen/proto/krane/v1"
)

// GetGateway retrieves container status and addresses for a deployment.
//
// Finds containers by gateway ID label and returns instance information
// with host.docker.internal addresses using dynamically assigned ports.
func (d *docker) GetGateway(ctx context.Context, req *connect.Request[kranev1.GetGatewayRequest]) (*connect.Response[kranev1.GetGatewayResponse], error) {
	gatewayID := req.Msg.GetGatewayId()
	d.logger.Info("getting gateway", "gateway_id", gatewayID)

	//nolint:exhaustruct // Docker SDK types have many optional fields
	containers, err := d.client.ContainerList(ctx, container.ListOptions{
		All: true,
		Filters: filters.NewArgs(
			filters.Arg("label", fmt.Sprintf("unkey.gateway.id=%s", gatewayID)),
		),
	})
	if err != nil {
		return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to list containers: %w", err))
	}

	res := &kranev1.GetGatewayResponse{
		Instances: []*kranev1.GatewayInstance{},
	}

	for _, c := range containers {
		d.logger.Info("container found", "container", c)

		// Determine container status
		status := kranev1.GatewayStatus_GATEWAY_STATUS_UNSPECIFIED
		switch c.State {
		case container.StateRunning:
			status = kranev1.GatewayStatus_GATEWAY_STATUS_RUNNING
		case container.StateExited:
			status = kranev1.GatewayStatus_GATEWAY_STATUS_TERMINATING
		case container.StateCreated:
			status = kranev1.GatewayStatus_GATEWAY_STATUS_PENDING
		}

		d.logger.Info("gateway found",
			"gateway_id", gatewayID,
			"container_id", c.ID,
			"status", status.String(),
			"port", c.Ports[0].PublicPort,
		)

		res.Instances = append(res.Instances, &kranev1.GatewayInstance{
			Id:      c.ID,
			Address: fmt.Sprintf("host.docker.internal:%d", c.Ports[0].PublicPort),
			Status:  status,
		})
	}

	return connect.NewResponse(res), nil
}
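Note that the switch above maps only three of Docker's container states; any other state (paused, restarting, dead) silently stays UNSPECIFIED. The mapping is easy to isolate and unit-test. A sketch using plain strings for the states and a local status type standing in for the proto enum (names hypothetical):

```go
package main

import "fmt"

type gatewayStatus string

const (
	statusUnspecified gatewayStatus = "UNSPECIFIED"
	statusRunning     gatewayStatus = "RUNNING"
	statusTerminating gatewayStatus = "TERMINATING"
	statusPending     gatewayStatus = "PENDING"
)

// mapState mirrors the switch in GetGateway: the Docker state strings
// "running", "exited", and "created" map to gateway statuses, and every
// other state falls through to UNSPECIFIED.
func mapState(state string) gatewayStatus {
	switch state {
	case "running":
		return statusRunning
	case "exited":
		return statusTerminating
	case "created":
		return statusPending
	default:
		return statusUnspecified
	}
}

func main() {
	for _, s := range []string{"running", "exited", "created", "paused"} {
		fmt.Println(s, "->", mapState(s))
	}
}
```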
Comment on lines +13 to +65
⚠️ Potential issue | 🔴 Critical

Add defensive port check to prevent panic; also exists in deployment_get.go

Verification confirms a critical panic risk in GetGateway. The code accesses c.Ports[0] without checking that the slice is non-empty (lines 54 and 59). If Docker returns a container with no published ports—a realistic scenario with misconfigured containers or API edge cases—the handler will crash with an index-out-of-bounds panic.

The same vulnerability also exists identically in deployment_get.go (lines 54 and 59), making this a systemic issue across both handlers.

Fixes needed:

  1. Add defensive check before accessing c.Ports[0] in both gateway_get.go and deployment_get.go:
if len(c.Ports) == 0 {
	d.logger.Info("container has no published ports; skipping instance",
		"gateway_id", gatewayID,
		"container_id", c.ID,
		"state", c.State,
	)
	continue
}

publicPort := c.Ports[0].PublicPort

Then use publicPort in both the logging and Address field.

  2. Update the doc comment in gateway_get.go (line 13) from "for a deployment" to "for a gateway" for consistency.
🤖 Prompt for AI Agents
In go/apps/krane/backend/docker/gateway_get.go (lines 13-65) and the identical
pattern in deployment_get.go (check around lines ~54 and ~59), the code indexes
c.Ports[0] without a nil/length check causing a panic if no published ports
exist; add a defensive check: if len(c.Ports) == 0 { d.logger.Info("container
has no published ports; skipping instance", "gateway_id", gatewayID,
"container_id", c.ID, "state", c.State); continue } then extract publicPort :=
c.Ports[0].PublicPort and use publicPort in the existing logging and Address
fmt.Sprintf("host.docker.internal:%d", publicPort); additionally update the doc
comment in gateway_get.go line 13 from "for a deployment" to "for a gateway".
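Beyond the length guard, indexing `Ports[0]` is order-dependent: Docker does not guarantee which binding comes first. A more robust variant selects the binding whose private port is the gateway's listen port (8040 in this PR). A standalone sketch with a local struct standing in for the Docker SDK's port summary type (names hypothetical):

```go
package main

import "fmt"

// port mirrors the two fields of the Docker SDK's container port summary
// that matter here; stand-in type for illustration.
type port struct {
	PrivatePort uint16
	PublicPort  uint16
}

// publicPortFor returns the host port mapped to the given container port,
// and false when no such binding is published — the caller can then skip
// the instance instead of panicking on an empty or unexpected slice.
func publicPortFor(ports []port, private uint16) (uint16, bool) {
	for _, p := range ports {
		if p.PrivatePort == private && p.PublicPort != 0 {
			return p.PublicPort, true
		}
	}
	return 0, false
}

func main() {
	ports := []port{
		{PrivatePort: 9000, PublicPort: 31000},
		{PrivatePort: 8040, PublicPort: 32768},
	}
	if pub, ok := publicPortFor(ports, 8040); ok {
		fmt.Printf("host.docker.internal:%d\n", pub)
	}
}
```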

2 changes: 2 additions & 0 deletions go/apps/krane/backend/docker/service.go
@@ -25,9 +25,11 @@ type docker struct {
	registryAuth string // base64 encoded auth for pulls

	kranev1connect.UnimplementedDeploymentServiceHandler
	kranev1connect.UnimplementedGatewayServiceHandler
}

var _ kranev1connect.DeploymentServiceHandler = (*docker)(nil)
var _ kranev1connect.GatewayServiceHandler = (*docker)(nil)

// Config holds configuration for the Docker backend
type Config struct {