
azure-servicebus scaler with HashiCorp Vault - keda-operator crashes when secret path is non-existent #1864

Closed
rajarajan-s-imanage opened this issue Jun 5, 2021 · 0 comments · Fixed by #1867
Labels
bug Something isn't working

Comments

rajarajan-s-imanage commented Jun 5, 2021

Report

I attempted to use the Azure Service Bus scaler with HashiCorp Vault. By accident, the secret path I used didn't exist, and the Vault roles and policies were configured for that same non-existent path.

Under these circumstances, I deployed my app with the ScaledObject and TriggerAuthentication; shortly afterwards the keda-operator crashed and went into a loop of starting up and crashing for the same reason.

I've already discussed this in #1754.

keda-scaler.yaml

apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: {{ .Values.app.name }}-trigger-auth
  namespace: {{ .Release.Namespace }}
spec:
  hashiCorpVault:
    address: {{ .Values.vault.address }}
    authentication: kubernetes
    role: {{ .Values.keda.namespace }}-keda-role
    mount: "kubernetes"
    credential:
      serviceAccount: "/var/run/secrets/kubernetes.io/serviceaccount/token"
    secrets:
      - parameter: connection
        key: "secret"
        path: "kv/data/{{ .Release.Namespace }}/crack-task/manage"
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: {{ .Values.app.name }}-keda-autoscaler
  namespace: {{ .Release.Namespace }}
spec:
  scaleTargetRef:
    name: {{ .Values.app.name }}
  pollingInterval: 10
  cooldownPeriod:  300 # Optional. Default: 300 seconds
  maxReplicaCount: {{ .Values.keda.maxReplicaCount }}
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: {{ .Release.Namespace }}-crack-task
        messageCount: {{ .Values.keda.messageCount | int | quote }}
      authenticationRef:
        name: {{ .Values.app.name }}-trigger-auth

Expected Behavior

I would expect it not to crash, but to handle the failure with an appropriate error message instead.

Actual Behavior

The keda-operator crashes under the described circumstances.

Steps to Reproduce the Problem

  1. Configure Vault with a secret path that doesn't exist, plus roles and policies that allow KEDA to attempt to read a secret from that path.
  2. Deploy the app with a KEDA ScaledObject using the Azure Service Bus scaler and the corresponding TriggerAuthentication.
  3. The keda-operator crashes.

Logs from KEDA operator

2021-04-22T15:57:40.740Z	INFO	controller	Starting EventSource	{"reconcilerGroup": "keda.sh", "reconcilerKind": "ClusterTriggerAuthentication", "controller": "clustertriggerauthentication", "source": "kind source: /, Kind="}
2021-04-22T15:57:40.740Z	INFO	controller	Starting EventSource	{"reconcilerGroup": "keda.sh", "reconcilerKind": "TriggerAuthentication", "controller": "triggerauthentication", "source": "kind source: /, Kind="}
2021-04-22T15:57:40.740Z	INFO	controller	Starting EventSource	{"reconcilerGroup": "keda.sh", "reconcilerKind": "ScaledObject", "controller": "scaledobject", "source": "kind source: /, Kind="}
2021-04-22T15:57:40.740Z	INFO	controller	Starting EventSource	{"reconcilerGroup": "keda.sh", "reconcilerKind": "ScaledJob", "controller": "scaledjob", "source": "kind source: /, Kind="}
2021-04-22T15:57:40.840Z	INFO	controller	Starting Controller	{"reconcilerGroup": "keda.sh", "reconcilerKind": "ClusterTriggerAuthentication", "controller": "clustertriggerauthentication"}
2021-04-22T15:57:40.840Z	INFO	controller	Starting workers	{"reconcilerGroup": "keda.sh", "reconcilerKind": "ClusterTriggerAuthentication", "controller": "clustertriggerauthentication", "worker count": 1}
2021-04-22T15:57:40.840Z	INFO	controller	Starting Controller	{"reconcilerGroup": "keda.sh", "reconcilerKind": "TriggerAuthentication", "controller": "triggerauthentication"}
2021-04-22T15:57:40.840Z	INFO	controller	Starting EventSource	{"reconcilerGroup": "keda.sh", "reconcilerKind": "ScaledObject", "controller": "scaledobject", "source": "kind source: /, Kind="}
2021-04-22T15:57:40.840Z	INFO	controller	Starting Controller	{"reconcilerGroup": "keda.sh", "reconcilerKind": "ScaledJob", "controller": "scaledjob"}
2021-04-22T15:57:40.841Z	INFO	controller	Starting workers	{"reconcilerGroup": "keda.sh", "reconcilerKind": "TriggerAuthentication", "controller": "triggerauthentication", "worker count": 1}
2021-04-22T15:57:40.941Z	INFO	controller	Starting Controller	{"reconcilerGroup": "keda.sh", "reconcilerKind": "ScaledObject", "controller": "scaledobject"}
2021-04-22T15:57:40.941Z	INFO	controller	Starting workers	{"reconcilerGroup": "keda.sh", "reconcilerKind": "ScaledObject", "controller": "scaledobject", "worker count": 1}
2021-04-22T15:57:40.941Z	INFO	controller	Starting workers	{"reconcilerGroup": "keda.sh", "reconcilerKind": "ScaledJob", "controller": "scaledjob", "worker count": 1}
2021-04-22T15:57:40.941Z	INFO	controllers.ScaledObject	Reconciling ScaledObject	{"ScaledObject.Namespace": "ku-smoke", "ScaledObject.Name": "ku-cracking-service-keda-autoscaler"}
2021-04-22T15:57:40.941Z	INFO	controllers.ScaledObject	Creating a new HPA	{"ScaledObject.Namespace": "ku-smoke", "ScaledObject.Name": "ku-cracking-service-keda-autoscaler", "HPA.Namespace": "ku-smoke", "HPA.Name": "keda-hpa-ku-cracking-service-keda-autoscaler"}
E0422 15:58:01.846297       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 448 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x2348d40, 0x3c45860)
/go/pkg/mod/k8s.io/apimachinery@<version>/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/apimachinery@<version>/pkg/util/runtime/runtime.go:48 +0x89
panic(0x2348d40, 0x3c45860)
/usr/local/go/src/runtime/panic.go:969 +0x1b9
github.com/kedacore/keda/v2/pkg/scaling/resolver.ResolveAuthRef(0x2b57360, 0xc00098d6b0, 0x2b4aca0, 0xc00116bcf0, 0xc000e9a940, 0xc0013829f8, 0xc000e88ab0, 0x8, 0x8, 0xc000528de0, ...)
/workspace/pkg/scaling/resolver/scale_resolvers.go:99 +0x80d
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).buildScalers(0xc00018f900, 0xc000338280, 0xc001382900, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/workspace/pkg/scaling/scale_handler.go:355 +0x3b3
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).GetScalers(0xc00018f900, 0x266c240, 0xc00000a780, 0xc0007fcc00, 0x106, 0x400, 0xc000961530, 0x106)
/workspace/pkg/scaling/scale_handler.go:76 +0xee
github.com/kedacore/keda/v2/controllers.(*ScaledObjectReconciler).getScaledObjectMetricSpecs(0xc000878e60, 0x2b4aca0, 0xc000d965c0, 0xc00000a780, 0x0, 0x255cf60, 0x12, 0x0, 0x0)
/workspace/controllers/hpa.go:146 +0x65
github.com/kedacore/keda/v2/controllers.(*ScaledObjectReconciler).newHPAForScaledObject(0xc000878e60, 0x2b4aca0, 0xc000d965c0, 0xc00000a780, 0xc000961b10, 0x4, 0x2c, 0x0)
/workspace/controllers/hpa.go:47 +0x6a
github.com/kedacore/keda/v2/controllers.(*ScaledObjectReconciler).createAndDeployNewHPA(0xc000878e60, 0x2b4aca0, 0xc000d965c0, 0xc00000a780, 0xc000961b10, 0xc0005421b0, 0x2c)
/workspace/controllers/hpa.go:30 +0x225
github.com/kedacore/keda/v2/controllers.(*ScaledObjectReconciler).ensureHPAForScaledObjectExists(0xc000878e60, 0x2b4aca0, 0xc000d965c0, 0xc00000a780, 0xc000961b10, 0x4, 0x26c60a4, 0x2)
/workspace/controllers/scaledobject_controller.go:316 +0x35e
github.com/kedacore/keda/v2/controllers.(*ScaledObjectReconciler).reconcileScaledObject(0xc000878e60, 0x2b4aca0, 0xc000d965c0, 0xc00000a780, 0x0, 0x0, 0x23, 0x2afd4e0)
/workspace/controllers/scaledobject_controller.go:204 +0x151
github.com/kedacore/keda/v2/controllers.(*ScaledObjectReconciler).Reconcile(0xc000878e60, 0xc000e88ab0, 0x8, 0xc001008360, 0x23, 0xc000d96590, 0xc00091e480, 0xc000576008, 0xc000576000)
/workspace/controllers/scaledobject_controller.go:160 +0x405
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000a47830, 0x24100e0, 0xc0002f0000, 0x0)
/go/pkg/mod/sigs.k8s.io/controller-runtime@<version>/pkg/internal/controller/controller.go:244 +0x2a9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000a47830, 0x203000)
/go/pkg/mod/sigs.k8s.io/controller-runtime@<version>/pkg/internal/controller/controller.go:218 +0xb0
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc000a47830)
/go/pkg/mod/sigs.k8s.io/controller-runtime@<version>/pkg/internal/controller/controller.go:197 +0x2b
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000d96560)
/go/pkg/mod/k8s.io/apimachinery@<version>/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d96560, 0x2aec320, 0xc0005906c0, 0x1, 0xc00064aae0)
/go/pkg/mod/k8s.io/apimachinery@<version>/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000d96560, 0x3b9aca00, 0x0, 0x1, 0xc00064aae0)
/go/pkg/mod/k8s.io/apimachinery@<version>/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc000d96560, 0x3b9aca00, 0xc00064aae0)
/go/pkg/mod/k8s.io/apimachinery@<version>/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
/go/pkg/mod/sigs.k8s.io/controller-runtime@<version>/pkg/internal/controller/controller.go:179 +0x416
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x20cfc0d]
goroutine 448 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/apimachinery@<version>/pkg/util/runtime/runtime.go:55 +0x10c
panic(0x2348d40, 0x3c45860)
/usr/local/go/src/runtime/panic.go:969 +0x1b9
github.com/kedacore/keda/v2/pkg/scaling/resolver.ResolveAuthRef(0x2b57360, 0xc00098d6b0, 0x2b4aca0, 0xc00116bcf0, 0xc000e9a940, 0xc0013829f8, 0xc000e88ab0, 0x8, 0x8, 0xc000528de0, ...)
/workspace/pkg/scaling/resolver/scale_resolvers.go:99 +0x80d
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).buildScalers(0xc00018f900, 0xc000338280, 0xc001382900, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/workspace/pkg/scaling/scale_handler.go:355 +0x3b3
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).GetScalers(0xc00018f900, 0x266c240, 0xc00000a780, 0xc0007fcc00, 0x106, 0x400, 0xc000961530, 0x106)
/workspace/pkg/scaling/scale_handler.go:76 +0xee
github.com/kedacore/keda/v2/controllers.(*ScaledObjectReconciler).getScaledObjectMetricSpecs(0xc000878e60, 0x2b4aca0, 0xc000d965c0, 0xc00000a780, 0x0, 0x255cf60, 0x12, 0x0, 0x0)
/workspace/controllers/hpa.go:146 +0x65
github.com/kedacore/keda/v2/controllers.(*ScaledObjectReconciler).newHPAForScaledObject(0xc000878e60, 0x2b4aca0, 0xc000d965c0, 0xc00000a780, 0xc000961b10, 0x4, 0x2c, 0x0)
/workspace/controllers/hpa.go:47 +0x6a
github.com/kedacore/keda/v2/controllers.(*ScaledObjectReconciler).createAndDeployNewHPA(0xc000878e60, 0x2b4aca0, 0xc000d965c0, 0xc00000a780, 0xc000961b10, 0xc0005421b0, 0x2c)
/workspace/controllers/hpa.go:30 +0x225
github.com/kedacore/keda/v2/controllers.(*ScaledObjectReconciler).ensureHPAForScaledObjectExists(0xc000878e60, 0x2b4aca0, 0xc000d965c0, 0xc00000a780, 0xc000961b10, 0x4, 0x26c60a4, 0x2)
/workspace/controllers/scaledobject_controller.go:316 +0x35e
github.com/kedacore/keda/v2/controllers.(*ScaledObjectReconciler).reconcileScaledObject(0xc000878e60, 0x2b4aca0, 0xc000d965c0, 0xc00000a780, 0x0, 0x0, 0x23, 0x2afd4e0)
/workspace/controllers/scaledobject_controller.go:204 +0x151
github.com/kedacore/keda/v2/controllers.(*ScaledObjectReconciler).Reconcile(0xc000878e60, 0xc000e88ab0, 0x8, 0xc001008360, 0x23, 0xc000d96590, 0xc00091e480, 0xc000576008, 0xc000576000)
/workspace/controllers/scaledobject_controller.go:160 +0x405
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000a47830, 0x24100e0, 0xc0002f0000, 0x0)
/go/pkg/mod/sigs.k8s.io/controller-runtime@<version>/pkg/internal/controller/controller.go:244 +0x2a9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000a47830, 0x203000)
/go/pkg/mod/sigs.k8s.io/controller-runtime@<version>/pkg/internal/controller/controller.go:218 +0xb0
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc000a47830)
/go/pkg/mod/sigs.k8s.io/controller-runtime@<version>/pkg/internal/controller/controller.go:197 +0x2b
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000d96560)
/go/pkg/mod/k8s.io/apimachinery@<version>/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d96560, 0x2aec320, 0xc0005906c0, 0x1, 0xc00064aae0)
/go/pkg/mod/k8s.io/apimachinery@<version>/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000d96560, 0x3b9aca00, 0x0, 0x1, 0xc00064aae0)
/go/pkg/mod/k8s.io/apimachinery@<version>/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc000d96560, 0x3b9aca00, 0xc00064aae0)
/go/pkg/mod/k8s.io/apimachinery@<version>/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
/go/pkg/mod/sigs.k8s.io/controller-runtime@<version>/pkg/internal/controller/controller.go:179 +0x416
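
The trace shows the panic surfacing in resolver.ResolveAuthRef at pkg/scaling/resolver/scale_resolvers.go:99 while resolving the hashiCorpVault secrets. Below is a minimal sketch of the suspected failure mode, assuming the resolver reads the secret through the official HashiCorp Vault Go client (github.com/hashicorp/vault/api); the helper readVaultSecret and the example path are hypothetical, for illustration only:

package main

import (
	"fmt"

	vaultapi "github.com/hashicorp/vault/api"
)

// readVaultSecret is a hypothetical helper showing the guard the resolver
// appears to be missing. For a path that does not exist, Vault's
// Logical().Read returns (nil, nil): no error, just a nil *api.Secret, so
// dereferencing the result unchecked is a nil pointer dereference.
func readVaultSecret(client *vaultapi.Client, path string) (map[string]interface{}, error) {
	secret, err := client.Logical().Read(path)
	if err != nil {
		return nil, err
	}
	if secret == nil {
		// The graceful failure the operator should surface instead of panicking.
		return nil, fmt.Errorf("no secret found at Vault path %q", path)
	}
	return secret.Data, nil
}

func main() {
	// Assumes VAULT_ADDR and VAULT_TOKEN are set in the environment.
	client, err := vaultapi.NewClient(vaultapi.DefaultConfig())
	if err != nil {
		panic(err)
	}
	// Example path matching the template above, rendered for the ku-smoke namespace.
	if _, err := readVaultSecret(client, "kv/data/ku-smoke/crack-task/manage"); err != nil {
		fmt.Println("resolver error:", err)
	}
}

Returning an error from the resolver would let KEDA report the misconfigured TriggerAuthentication and skip the trigger rather than take down the whole operator; presumably the fix in #1867 adds a check along these lines.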

KEDA Version

2.2.0

Kubernetes Version

1.19

Platform

Any

Scaler Details

Azure Service Bus Scaler

Anything else?

No response

rajarajan-s-imanage added the bug label Jun 5, 2021
zroubalik self-assigned this Jun 7, 2021