Merged
5 changes: 4 additions & 1 deletion cmd/main.go
@@ -29,6 +29,7 @@ import (
    containerprofilemanagerv1 "github.com/kubescape/node-agent/pkg/containerprofilemanager/v1"
    "github.com/kubescape/node-agent/pkg/containerwatcher"
    containerwatcherv2 "github.com/kubescape/node-agent/pkg/containerwatcher/v2"
    "github.com/kubescape/node-agent/pkg/contextdetection"
    "github.com/kubescape/node-agent/pkg/dnsmanager"
    "github.com/kubescape/node-agent/pkg/exporters"
    "github.com/kubescape/node-agent/pkg/fimmanager"
@@ -290,8 +291,10 @@ func main() {
        logger.L().Ctx(ctx).Fatal("error creating CEL evaluator", helpers.Error(err))
    }

    mntnsRegistry := contextdetection.NewMntnsRegistry()

    // create runtimeDetection managers
    ruleManager, err = rulemanager.CreateRuleManager(ctx, cfg, k8sClient, ruleBindingCache, objCache, exporter, prometheusExporter, processTreeManager, dnsResolver, nil, ruleCooldown, adapterFactory, celEvaluator)
    ruleManager, err = rulemanager.CreateRuleManager(ctx, cfg, k8sClient, ruleBindingCache, objCache, exporter, prometheusExporter, processTreeManager, dnsResolver, nil, ruleCooldown, adapterFactory, celEvaluator, mntnsRegistry)
    if err != nil {
        logger.L().Ctx(ctx).Fatal("error creating RuleManager", helpers.Error(err))
    }
2 changes: 2 additions & 0 deletions configuration/config.json
@@ -15,6 +15,8 @@
    "seccompServiceEnabled": "false",
    "enableEmbeddedSBOMs": "false",
    "fimEnabled": true,
    "hostMonitoringEnabled": false,
    "standaloneMonitoringEnabled": false,
    "exporters": {
        "syslogExporterURL": "http://syslog.kubescape.svc.cluster.local:514",
        "stdoutExporter": "false",
148 changes: 148 additions & 0 deletions docs/RULE_ENGINE_MULTI_CONTEXT_REDESIGN.md
@@ -0,0 +1,148 @@
# Rule Engine Multi-Context Redesign

## Overview

This document describes the design and implementation of the multi-context rule engine in the Kubescape Node Agent. The system enables runtime security monitoring and alerting across three distinct execution contexts:

1. **Kubernetes**: Containers running within a Kubernetes cluster (Pod-based).
2. **Host**: The underlying node itself, treated as a specialized context for monitoring host-level activities.
3. **Standalone**: Non-Kubernetes containers (e.g., Docker containers, standalone containerd instances) that are not managed by the Kubernetes orchestrator.

## Goals

- Provide a unified rule evaluation engine for all execution contexts.
- Use the mount namespace (mntns) ID as the primary key for identifying event contexts.
- Support multiple container runtimes through automated discovery (fanotify).
- Allow fine-grained control over where rules apply using context-specific tags.
- Maintain backward compatibility with existing Kubernetes-only monitoring and alert formats.

## Architecture

### 1. Event Source Context

The system defines three primary context types in `pkg/contextdetection/types.go`:

```go
type EventSourceContext string

const (
    Kubernetes EventSourceContext = "kubernetes"
    Host       EventSourceContext = "host"
    Standalone EventSourceContext = "standalone"
Comment on lines +29 to +31
⚠️ Potential issue | 🟡 Minor

Fix markdownlint MD010 hard tabs in code blocks.

Markdownlint flags hard tabs in the Go snippets. Replace tabs with spaces to keep doc linting clean.

🧹 Example fix
-	Kubernetes EventSourceContext = "kubernetes"
+    Kubernetes EventSourceContext = "kubernetes"

Also applies to: 44-45, 59-61, 71-78


)
```

### 2. Context Detection and Registry

The architecture relies on a discovery mechanism that identifies the nature of a process or container when it starts.

#### Context Info and Detectors
Each detected context is represented by a `ContextInfo` object which provides the context type and a unique `WorkloadID`.

```go
type ContextInfo interface {
    Context() EventSourceContext
    WorkloadID() string
}
```

The `DetectorManager` coordinates several `ContextDetector` implementations:
- **K8sDetector**: Identifies containers enriched with Kubernetes metadata (Namespace, Pod name).
- **HostDetector**: Identifies the host context based on PID 1 or the host's mount namespace.
- **StandaloneDetector**: Identifies containers that have runtime information but lack Kubernetes metadata.
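A minimal sketch of how these detectors could be coordinated. Only `ContextInfo`, the detector names, and the priority idea come from this design; `containerMeta`, its fields, and the constructor are illustrative assumptions, not the actual node-agent types:

```go
package main

type EventSourceContext string

const (
    Kubernetes EventSourceContext = "kubernetes"
    Host       EventSourceContext = "host"
    Standalone EventSourceContext = "standalone"
)

type ContextInfo interface {
    Context() EventSourceContext
    WorkloadID() string
}

// containerMeta is an illustrative stand-in for the metadata available
// when a process or container starts.
type containerMeta struct {
    PodName      string // set only when Kubernetes metadata is present
    K8sNamespace string
    ContainerID  string // set for any OCI container
    HostPID1     bool   // true when the event comes from the host mount namespace
}

type staticInfo struct {
    ctx EventSourceContext
    id  string
}

func (s staticInfo) Context() EventSourceContext { return s.ctx }
func (s staticInfo) WorkloadID() string          { return s.id }

type ContextDetector interface {
    Detect(m containerMeta) (ContextInfo, bool)
}

type k8sDetector struct{}

func (k8sDetector) Detect(m containerMeta) (ContextInfo, bool) {
    if m.PodName == "" {
        return nil, false // no Kubernetes metadata
    }
    return staticInfo{Kubernetes, m.K8sNamespace + "/" + m.PodName}, true
}

type hostDetector struct{}

func (hostDetector) Detect(m containerMeta) (ContextInfo, bool) {
    if !m.HostPID1 {
        return nil, false
    }
    return staticInfo{Host, "host"}, true
}

type standaloneDetector struct{}

func (standaloneDetector) Detect(m containerMeta) (ContextInfo, bool) {
    if m.ContainerID == "" {
        return nil, false
    }
    return staticInfo{Standalone, m.ContainerID}, true
}

// DetectorManager asks detectors in priority order; K8s runs first so that
// Kubernetes-managed containers are never misclassified as standalone.
type DetectorManager struct {
    detectors []ContextDetector
}

func NewDetectorManager() *DetectorManager {
    return &DetectorManager{detectors: []ContextDetector{
        k8sDetector{}, hostDetector{}, standaloneDetector{},
    }}
}

func (dm *DetectorManager) Detect(m containerMeta) (ContextInfo, bool) {
    for _, d := range dm.detectors {
        if info, ok := d.Detect(m); ok {
            return info, true
        }
    }
    return nil, false
}
```

The ordering matters: a Kubernetes container also has a container ID, so the standalone detector must only run after the K8s detector declines.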

#### Mount Namespace Registry
The `MntnsRegistry` maintains a thread-safe mapping of mount namespace IDs to their corresponding `ContextInfo`. This registry is the "source of truth" used to enrich eBPF events as they arrive.

```go
type Registry interface {
    Register(mntns uint64, info ContextInfo) error
    Lookup(mntns uint64) (ContextInfo, bool)
    Unregister(mntns uint64)
}
```
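A self-contained sketch of a registry satisfying this interface, assuming an `RWMutex`-guarded map and error-on-duplicate semantics (the actual `MntnsRegistry` implementation may differ in both respects):

```go
package main

import (
    "fmt"
    "sync"
)

type EventSourceContext string

const (
    Kubernetes EventSourceContext = "kubernetes"
    Host       EventSourceContext = "host"
    Standalone EventSourceContext = "standalone"
)

type ContextInfo interface {
    Context() EventSourceContext
    WorkloadID() string
}

// staticContextInfo is a trivial ContextInfo for demonstration.
type staticContextInfo struct {
    ctx EventSourceContext
    id  string
}

func (s staticContextInfo) Context() EventSourceContext { return s.ctx }
func (s staticContextInfo) WorkloadID() string          { return s.id }

// MntnsRegistry maps mount namespace IDs to ContextInfo. Lookups happen on
// the hot event path, so reads take only the shared lock.
type MntnsRegistry struct {
    mu      sync.RWMutex
    entries map[uint64]ContextInfo
}

func NewMntnsRegistry() *MntnsRegistry {
    return &MntnsRegistry{entries: make(map[uint64]ContextInfo)}
}

func (r *MntnsRegistry) Register(mntns uint64, info ContextInfo) error {
    r.mu.Lock()
    defer r.mu.Unlock()
    if _, exists := r.entries[mntns]; exists {
        return fmt.Errorf("mntns %d already registered", mntns)
    }
    r.entries[mntns] = info
    return nil
}

func (r *MntnsRegistry) Lookup(mntns uint64) (ContextInfo, bool) {
    r.mu.RLock()
    defer r.mu.RUnlock()
    info, ok := r.entries[mntns]
    return info, ok
}

func (r *MntnsRegistry) Unregister(mntns uint64) {
    r.mu.Lock()
    defer r.mu.Unlock()
    delete(r.entries, mntns)
}
```

Unregister is called when a container exits, so stale mount namespace IDs cannot be reused by a later container and inherit the wrong context.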

### 3. Event Enrichment

As eBPF events (exec, open, network, etc.) are captured, they are wrapped in an `EnrichedEvent`. The `RuleManager` enriches these events with context information by looking up the event's mount namespace ID in the registry.

```go
func (rm *RuleManager) enrichEventWithContext(enrichedEvent *events.EnrichedEvent) {
    mntnsID := enrichedEvent.Event.GetMountNsID()
    enrichedEvent.MountNamespaceID = mntnsID

    if mntnsID != 0 {
        if contextInfo, found := rm.mntnsRegistry.Lookup(mntnsID); found {
            enrichedEvent.SourceContext = contextInfo
        }
    }
}
```

### 4. Rule Evaluation Logic

#### Context-Aware Filtering
Rules can specify where they should execute using the `context:` tag prefix. The `RuleAppliesToContext` function determines if a rule is applicable:

- If a rule has tags like `context:host`, it will only run for events detected as `Host`.
- If a rule has no `context:` tags, it defaults to `Kubernetes` only, ensuring backward compatibility for existing rule sets.

```go
func RuleAppliesToContext(rule *typesv1.Rule, contextInfo contextdetection.ContextInfo) bool {
    // ... logic to check "context:" tags ...
    // Default: return currentContext == contextdetection.Kubernetes
}
```
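The elided tag check can be sketched over a plain tag slice. This is an illustrative reimplementation of the described behavior, not the actual function signature (which takes `*typesv1.Rule` and `ContextInfo`):

```go
package main

import "strings"

type EventSourceContext string

const (
    Kubernetes EventSourceContext = "kubernetes"
    Host       EventSourceContext = "host"
    Standalone EventSourceContext = "standalone"
)

// ruleAppliesToContext: rules opt in to contexts with "context:<name>" tags;
// untagged rules keep the historical Kubernetes-only behavior.
func ruleAppliesToContext(tags []string, current EventSourceContext) bool {
    sawContextTag := false
    for _, tag := range tags {
        if !strings.HasPrefix(tag, "context:") {
            continue
        }
        sawContextTag = true
        if EventSourceContext(strings.TrimPrefix(tag, "context:")) == current {
            return true
        }
    }
    if sawContextTag {
        // Tagged for some context, but not the current one.
        return false
    }
    // No "context:" tags: backward-compatible default, Kubernetes only.
    return current == Kubernetes
}
```

A rule may carry several context tags (e.g. both `context:host` and `context:standalone`); any single match is sufficient.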

#### Profile Dependencies
Kubernetes-specific features like Application Profiles and Network Neighborhoods are only enforced for the `Kubernetes` context. Rules requiring these profiles are skipped for `Host` and `Standalone` contexts.
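The profile gate reduces to a single guard; `shouldEvaluate` and `requiresProfile` below are hypothetical names used for illustration only:

```go
package main

type EventSourceContext string

const (
    Kubernetes EventSourceContext = "kubernetes"
    Host       EventSourceContext = "host"
    Standalone EventSourceContext = "standalone"
)

// shouldEvaluate skips rules that need Application Profiles or Network
// Neighborhoods outside Kubernetes, since no profiles are learned for
// host or standalone workloads.
func shouldEvaluate(requiresProfile bool, ctx EventSourceContext) bool {
    if requiresProfile && ctx != Kubernetes {
        return false
    }
    return true
}
```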

### 5. Alert Structure

The `GenericRuleFailure` structure has been extended to include the `SourceContext`. To maintain compatibility with existing consumers (like the Kubescape Cloud or third-party SIEMs), context-specific metadata is mapped into the existing `RuntimeAlertK8sDetails` structure where appropriate:

- **Host alerts**: The node's hostname is populated in the `NodeName` field.
- **Standalone alerts**: Container ID and Image information are populated, while K8s-specific fields (Namespace, Pod) remain empty.

```go
type GenericRuleFailure struct {
    // ... existing fields ...
    SourceContext contextdetection.EventSourceContext
}
```

### 6. Multiple Runtime Discovery

The Node Agent leverages `inspektor-gadget`'s `WithContainerFanotifyEbpf()` capability. This allows the agent to:
1. Use fanotify to watch for OCI runtime (runc, crun) executions.
2. Capture the container's bundle directory and PID.
3. Automatically detect and monitor containers regardless of whether they were started by `kubelet`, `docker`, or `containerd` directly.

## Configuration

Context monitoring is configurable via the Node Agent configuration:

```yaml
# Enable/disable specific monitoring contexts
hostMonitoringEnabled: true
standaloneMonitoringEnabled: true

# Note: Kubernetes monitoring is usually tied to enableRuntimeDetection
enableRuntimeDetection: true
```

## Implementation Status

- [x] **Core Infrastructure**: Definition of context types and the `MntnsRegistry`.
- [x] **Detector Framework**: Implementation of K8s, Host, and Standalone detectors.
- [x] **Event Enrichment**: Integration into `RuleManager` to attach context to every event.
- [x] **Context-Aware Rules**: Support for `context:` tags in rule definitions.
- [x] **Unified Alerting**: Updated `RuleFailureCreator` to handle multi-context metadata.
- [x] **Multi-Runtime Support**: Integration with fanotify for standalone container discovery.
- [x] **Testing**: Unit and integration tests for context detection and rule application.

## Future Considerations

- **Standalone Profiles**: Extending Application Profile learning to standalone containers.
- **Host Policy**: Specific rule sets tailored for host-level hardening and monitoring.
- **Dynamic Context Tags**: Allowing users to define custom contexts based on container labels or environment variables.
165 changes: 164 additions & 1 deletion pkg/cloudmetadata/metadata.go
@@ -3,6 +3,10 @@ package cloudmetadata
import (
    "context"
    "fmt"
    "io"
    "net/http"
    "strings"
    "time"

    apitypes "github.com/armosec/armoapi-go/armotypes"
    "github.com/kubescape/go-logger"
@@ -12,6 +16,11 @@ import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const (
    azureApiVersion = "2021-12-13"
    metadataTimeout = 2 * time.Second
)

// GetCloudMetadata retrieves cloud metadata for a given node
func GetCloudMetadata(ctx context.Context, client *k8sinterface.KubernetesApi, nodeName string) (*apitypes.CloudMetadata, error) {
    node, err := client.GetKubernetesClient().CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
@@ -53,9 +62,163 @@ func GetCloudMetadataWithIMDS(ctx context.Context) (*apitypes.CloudMetadata, error) {
    cMetadataClient := k8sInterfaceCloudMetadata.NewMetadataClient(true)

    cMetadata, err := cMetadataClient.GetMetadata(ctx)
    if err == nil {
        return cMetadata, nil
    }

    logger.L().Info("failed to get cloud metadata from IMDS, trying fallbacks", helpers.Error(err))

    // Fallback strategy: try different providers
    fallbacks := []struct {
        name  string
        fetch func(context.Context) (*apitypes.CloudMetadata, error)
    }{
        {name: "DigitalOcean", fetch: fetchDigitalOceanMetadata},
        {name: "GCP", fetch: fetchGCPMetadata},
        {name: "Azure", fetch: fetchAzureMetadata},
    }

    for _, fb := range fallbacks {
        if meta, ferr := fb.fetch(ctx); ferr == nil && meta != nil {
            logger.L().Info(fmt.Sprintf("retrieved cloud metadata from %s metadata service", fb.name))
            return meta, nil
        }
    }

    // Wrap the underlying error with additional context so logs make it clearer why metadata is missing.
    return nil, fmt.Errorf("failed to get cloud metadata from IMDS or fallbacks: %w", err)
Comment on lines +81 to +89
⚠️ Potential issue | 🟡 Minor

Preserve fallback failure context in the returned error.

Right now the final error always wraps the original IMDS failure, even if fallbacks fail for different reasons. Capturing the last fallback error (or aggregating) will make troubleshooting much clearer.

🛠️ Proposed fix (keep the IMDS error but include fallback failure)
@@
-	for _, fb := range fallbacks {
-		if meta, ferr := fb.fetch(ctx); ferr == nil && meta != nil {
-			logger.L().Info(fmt.Sprintf("retrieved cloud metadata from %s metadata service", fb.name))
-			return meta, nil
-		}
-	}
-
-	// Wrap the underlying error with additional context so logs make it clearer why metadata is missing.
-	return nil, fmt.Errorf("failed to get cloud metadata from IMDS or fallbacks: %w", err)
+	var lastErr error
+	for _, fb := range fallbacks {
+		if meta, ferr := fb.fetch(ctx); ferr == nil && meta != nil {
+			logger.L().Info(fmt.Sprintf("retrieved cloud metadata from %s metadata service", fb.name))
+			return meta, nil
+		} else if ferr != nil {
+			lastErr = fmt.Errorf("%s: %w", fb.name, ferr)
+		}
+	}
+
+	if lastErr != nil {
+		return nil, fmt.Errorf("failed to get cloud metadata from IMDS (%v) or fallbacks (%w)", err, lastErr)
+	}
+	return nil, fmt.Errorf("failed to get cloud metadata from IMDS or fallbacks: %w", err)

}

// fetchHTTPMetadata helper to fetch metadata from a URL with optional headers
func fetchHTTPMetadata(ctx context.Context, url string, headers map[string]string) (string, error) {
    client := &http.Client{
        Timeout: metadataTimeout,
    }
Comment on lines +93 to +96
⚠️ Potential issue | 🟠 Major


Disable proxies for metadata traffic.

http.Client without a custom transport defaults to http.ProxyFromEnvironment, which respects HTTP_PROXY and NO_PROXY environment variables. Metadata service endpoints (169.254.169.254 for AWS, 169.254.170.2 for Azure, etc.) are not automatically in NO_PROXY by default. If a proxy is configured without explicitly excluding metadata hosts, metadata traffic—including credentials—may be proxied through an untrusted proxy, risking credential leakage or service failure. Set transport.Proxy = nil to disable proxying for these requests.

🔒 Proposed fix (disable proxies for metadata calls)
@@
-	client := &http.Client{
-		Timeout: metadataTimeout,
-	}
+	transport := http.DefaultTransport.(*http.Transport).Clone()
+	transport.Proxy = nil
+	client := &http.Client{
+		Timeout:   metadataTimeout,
+		Transport: transport,
+	}

    req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    if err != nil {
        return "", err
    }
    for k, v := range headers {
        req.Header.Set(k, v)
    }
    resp, err := client.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return "", fmt.Errorf("metadata endpoint %s returned status: %d", url, resp.StatusCode)
    }
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return "", err
    }
    return strings.TrimSpace(string(body)), nil
}

func getLastPathPart(val string) string {
    if val == "" {
        return ""
    }
    parts := strings.Split(val, "/")
    return parts[len(parts)-1]
}

// fetchDigitalOceanMetadata attempts to fetch basic metadata from DigitalOcean's metadata service.
func fetchDigitalOceanMetadata(ctx context.Context) (*apitypes.CloudMetadata, error) {
    base := "http://169.254.169.254/metadata/v1/"

    // Probe root to see whether the metadata endpoint responds and contains expected entries.
    body, err := fetchHTTPMetadata(ctx, base, nil)
    if err != nil {
        return nil, err
    }

    // Basic heuristic: the DO metadata root typically lists resources like 'id', 'hostname', 'region', etc.
    if !strings.Contains(body, "id") && !strings.Contains(body, "region") && !strings.Contains(body, "hostname") {
        return nil, fmt.Errorf("digitalocean metadata root missing expected entries")
    }
matthyx marked this conversation as resolved.

    get := func(path string) string {
        val, _ := fetchHTTPMetadata(ctx, base+path, nil)
        return val
    }

    id := get("id")
    if id == "" {
        id = get("droplet_id")
    }
    instanceType := get("size")
    if instanceType == "" {
        instanceType = get("type")
    }

    meta := &apitypes.CloudMetadata{
        Provider:     "digitalocean",
        InstanceID:   id,
        InstanceType: instanceType,
        Region:       get("region"),
        PrivateIP:    get("interfaces/private/0/ipv4/address"),
        PublicIP:     get("interfaces/public/0/ipv4/address"),
        Hostname:     get("hostname"),
    }

    // if nothing useful was obtained, return an error so callers can continue trying other fallbacks
    if meta.InstanceID == "" && meta.Hostname == "" && meta.Region == "" && meta.PrivateIP == "" && meta.PublicIP == "" && meta.InstanceType == "" {
        return nil, fmt.Errorf("digitalocean metadata endpoints returned no data")
    }
matthyx marked this conversation as resolved.

    return meta, nil
}

// fetchGCPMetadata attempts to fetch basic metadata from GCP's metadata service.
func fetchGCPMetadata(ctx context.Context) (*apitypes.CloudMetadata, error) {
    base := "http://metadata.google.internal/computeMetadata/v1/"
    headers := map[string]string{"Metadata-Flavor": "Google"}

    get := func(path string) string {
        val, _ := fetchHTTPMetadata(ctx, base+path, headers)
        return val
    }

    machineType := get("instance/machine-type")
    if machineType == "" {
        return nil, fmt.Errorf("not a GCP instance")
    }

    return &apitypes.CloudMetadata{
        Provider:     "gcp",
        AccountID:    get("project/project-id"),
        InstanceID:   get("instance/id"),
        InstanceType: getLastPathPart(machineType),
        Zone:         getLastPathPart(get("instance/zone")),
        Hostname:     get("instance/hostname"),
    }, nil
}

// fetchAzureMetadata attempts to fetch basic metadata from Azure's metadata service.
func fetchAzureMetadata(ctx context.Context) (*apitypes.CloudMetadata, error) {
    base := "http://169.254.169.254/metadata/instance/compute/"
    headers := map[string]string{"Metadata": "true"}
    params := "?api-version=" + azureApiVersion + "&format=text"

    get := func(path string) string {
        val, _ := fetchHTTPMetadata(ctx, base+path+params, headers)
        return val
    }

    vmSize := get("vmSize")
    if vmSize == "" {
        return nil, fmt.Errorf("not an Azure instance")
    }

    return &apitypes.CloudMetadata{
        Provider:     "azure",
        AccountID:    get("subscriptionId"),
        InstanceID:   get("vmId"),
        InstanceType: vmSize,
        Region:       get("location"),
        Zone:         get("zone"),
        Hostname:     get("name"),
    }, nil
}
4 changes: 4 additions & 0 deletions pkg/config/config.go
@@ -66,6 +66,8 @@ type Config struct {
    EnableRuntimeDetection      bool     `mapstructure:"runtimeDetectionEnabled"`
    EnableSbomGeneration        bool     `mapstructure:"sbomGenerationEnabled"`
    EnableSeccomp               bool     `mapstructure:"seccompServiceEnabled"`
    HostMonitoringEnabled       bool     `mapstructure:"hostMonitoringEnabled"`
    StandaloneMonitoringEnabled bool     `mapstructure:"standaloneMonitoringEnabled"`
    SeccompProfileBackend       string   `mapstructure:"seccompProfileBackend"`
    EventBatchSize              int      `mapstructure:"eventBatchSize"`
    ExcludeJsonPaths            []string `mapstructure:"excludeJsonPaths"`
@@ -179,6 +181,8 @@ func LoadConfig(path string) (Config, error) {
    viper.SetDefault("dnsCacheSize", 50000)
    viper.SetDefault("seccompProfileBackend", "storage") // "storage" or "crd"
    viper.SetDefault("containerEolNotificationBuffer", 100)
    viper.SetDefault("hostMonitoringEnabled", false)
    viper.SetDefault("standaloneMonitoringEnabled", false)
    // HTTP Exporter Alert Bulking defaults
    viper.SetDefault("exporters::httpExporterConfig::bulkMaxAlerts", 50)
    viper.SetDefault("exporters::httpExporterConfig::bulkTimeoutSeconds", 10)