Describe the bug
The rabbitmq-cluster-operator pod is getting OOM killed on OpenShift; it appears that the 500MiB memory limit isn't enough.
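For reference, the configured limit can be inspected on the operator Deployment. The namespace and deployment name below assume a default install and may differ in your cluster:

```shell
# Inspect the operator container's resource requests/limits.
# Namespace and deployment name assume the default cluster-operator install.
kubectl -n rabbitmq-system get deployment rabbitmq-cluster-operator \
  -o jsonpath='{.spec.template.spec.containers[0].resources}'
```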
To Reproduce
Steps to reproduce the behavior:
Deploy the rabbitmq-cluster-operator in a namespace and check the status of the operator pod.
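A minimal way to check, assuming the operator was deployed into the rabbitmq-system namespace with the default labels (adjust to your install):

```shell
# Check the operator pod status and recent events.
# Namespace and label selector assume the default manifest; adjust as needed.
kubectl -n rabbitmq-system get pods
kubectl -n rabbitmq-system describe pod -l app.kubernetes.io/name=rabbitmq-cluster-operator
```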
Expected behavior
The rabbitmq-cluster-operator pod should be running and stay running, without being OOM killed.
Screenshots
```yaml
containerStatuses:
  - restartCount: 3697
    started: false
    ready: false
    name: operator
    state:
      waiting:
        reason: CrashLoopBackOff
        message: >-
          back-off 5m0s restarting failed container=operator
          pod=rabbitmq-cluster-operator-98545478d-w24nz_-test(8c7a7e6-86f-4b4-ac06-580012acc5)
    imageID: >-
      docker.io/rabbitmqoperator/cluster-operator@sha256:563f230500a3efb1a90ae52b3090cd9f20c5c08d57ff063ec7ce88fbb0b1ab4c
    image: 'docker.io/rabbitmqoperator/cluster-operator:2.1.0'
    lastState:
      terminated:
        exitCode: 137
        reason: OOMKilled
        startedAt: '2023-03-07T10:25:07Z'
        finishedAt: '2023-03-07T10:25:52Z'
        containerID: >-
          cri-o://400a52f06fef804a644e2f22eb29f6a275d7d032588a00f3ea2c6c74d70
    containerID: 'cri-o://400a5206fefd04a644d2f22eb29f6275cd7d032588008f3e6ac6c74d70'
```
Version and environment information
- rabbitmq-cluster-operator.v2.1.0
- OpenShift 4.11
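A possible workaround (unverified): raise the operator container's memory limit. The sketch below assumes the default rabbitmq-system namespace and deployment name:

```shell
# Possible workaround (unverified): raise the operator container's memory limit.
# Namespace and deployment name assume the default cluster-operator install.
# Uses a strategic merge patch so the rest of the container spec is preserved.
kubectl -n rabbitmq-system patch deployment rabbitmq-cluster-operator \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"operator","resources":{"limits":{"memory":"1Gi"}}}]}}}}'
```

If the operator was installed through OLM (as the rabbitmq-cluster-operator.v2.1.0 CSV name suggests), OLM may revert a direct Deployment patch; in that case the resource override would likely need to go into the Subscription's `spec.config.resources` instead (an assumption based on standard OLM behaviour, not verified against this operator).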