🐛 Bug Report
It appears that in 3.141.59t the environment variables that set the CPU and memory requests/limits of the node pods in Kubernetes are no longer being respected.

We have our Helm values set to:
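(Our exact numbers aren't reproduced here, so the sketch below is illustrative. The hub.cpuRequest / cpuLimit / memRequest / memLimit key names are an assumption about the chart version we're on; they're the settings the chart hands to the hub as ZALENIUM_KUBERNETES_* environment variables.)

```yaml
# Illustrative values only, not our exact production numbers.
# Key names assume the chart's hub.* resource settings, which the
# chart exposes to the hub as ZALENIUM_KUBERNETES_* env vars.
hub:
  cpuRequest: 500m
  cpuLimit: 1000m
  memRequest: 500Mi
  memLimit: 3Gi
```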
I can see the values being set as environment variables when describing the pod:
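Roughly like this (pod name and numbers illustrative, assuming Zalenium's standard ZALENIUM_KUBERNETES_* variables):

```console
$ kubectl describe pod zalenium-hub-xxxx
...
    Environment:
      ZALENIUM_KUBERNETES_CPU_REQUEST:     500m
      ZALENIUM_KUBERNETES_CPU_LIMIT:       1000m
      ZALENIUM_KUBERNETES_MEMORY_REQUEST:  500Mi
      ZALENIUM_KUBERNETES_MEMORY_LIMIT:    3Gi
...
```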
and I confirmed they are set inside the pod using kubectl exec.
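For example (pod name illustrative):

```console
$ kubectl exec zalenium-hub-xxxx -- env | grep ZALENIUM_KUBERNETES
ZALENIUM_KUBERNETES_CPU_REQUEST=500m
ZALENIUM_KUBERNETES_CPU_LIMIT=1000m
ZALENIUM_KUBERNETES_MEMORY_REQUEST=500Mi
ZALENIUM_KUBERNETES_MEMORY_LIMIT=3Gi
```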
While the nodes have the following limits (these come from the defaults for our namespace):
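(Numbers illustrative; they reflect the namespace LimitRange defaults rather than the values above.)

```console
    Limits:
      cpu:     500m
      memory:  512Mi
    Requests:
      cpu:     250m
      memory:  256Mi
```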
Reverting the hub to use image tag 3.141.59s fixed our OOM problems, as the limits were again set on the node.

To Reproduce
Run kubectl describe on the hub and nodes to view the limits.
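For instance (pod names and namespace illustrative):

```console
$ kubectl describe pod zalenium-hub-xxxx -n zalenium | grep -A 2 Limits
$ kubectl describe pod zalenium-40000-abcde -n zalenium | grep -A 2 Limits
```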
Expected behavior

The pods should have the expected memory/CPU limits set.
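That is, each node pod's container spec should carry the configured values rather than the namespace defaults (illustrative, matching the sketch above):

```yaml
resources:
  requests:
    cpu: 500m
    memory: 500Mi
  limits:
    cpu: 1000m
    memory: 3Gi
```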
Environment
OS: AWS EKS K8s Node
Zalenium Image Version(s): 3.141.59t
It would appear that this is due to us using an older version of the chart. By default the chart uses the hub image tagged 3, so any updates are picked up automatically. One of the latest changes to the image appears to be incompatible with the chart/values from before that point in time.

After some digging with @pschuermann we found that this is due to podSecurityContext and containerSecurityContext not having a valid default of {}. PR incoming.
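A minimal sketch of the default we mean, assuming these end up as plain chart values (the exact key placement is whatever the PR settles on):

```yaml
# Assumed shape only; the real change is in the PR.
# An empty map is a valid (no-op) security context, so defaulting
# to {} keeps the rendered pod spec valid when nothing is overridden.
hub:
  podSecurityContext: {}
  containerSecurityContext: {}
```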