
Kubernetes pods in 3.141.59t don't respect resource requests #1015

Closed
KierranM opened this issue Jul 22, 2019 · 2 comments · Fixed by #1016

Comments

@KierranM
Contributor

🐛 Bug Report

It appears that in 3.141.59t the environment variables that set the CPU and memory requests/limits of the node pods in Kubernetes are no longer respected.

We have our helm values set to:

hub:
  # ... other hub config ...
  cpuRequest: 250m
  cpuLimit: 1000m
  memRequest: 1000Mi
  memLimit: 1500Mi

I can see the values being set as environment variables when describing the pod:

      ZALENIUM_KUBERNETES_CPU_REQUEST:     250m
      ZALENIUM_KUBERNETES_CPU_LIMIT:       1000m
      ZALENIUM_KUBERNETES_MEMORY_REQUEST:  1000Mi
      ZALENIUM_KUBERNETES_MEMORY_LIMIT:    1500Mi

I also confirmed that they are set inside the pod using kubectl exec.

However, the node pods have the following limits (these come from the defaults for our namespace):

    Limits:
      cpu:     50m
      memory:  128Mi
    Requests:
      cpu:      50m
      memory:   128Mi
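
For context, namespace defaults like the ones above usually come from a LimitRange object. A minimal sketch of what that could look like (the object name is hypothetical; the values are simply the 50m/128Mi defaults we observe being applied):

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits        # hypothetical name
spec:
  limits:
    - type: Container
      default:                # applied as limits when a container sets none
        cpu: 50m
        memory: 128Mi
      defaultRequest:         # applied as requests when a container sets none
        cpu: 50m
        memory: 128Mi

When the node pods are created without any resources block, Kubernetes fills in these defaults, which is why they end up far below what the ZALENIUM_KUBERNETES_* variables request.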

Reverting the hub to image tag 3.141.59s fixed our OOM problems, as the limits were set on the node pods again.

To Reproduce

  1. Install the Zalenium Helm chart using the values above
  2. Use kubectl describe on the hub and node pods to view the limits

Expected behavior

The pods should have the expected memory/cpu limits set.
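
Concretely, if the environment variables above were honored, kubectl describe on a node pod would be expected to show roughly the following (a sketch of the expected output, not actual output from the cluster):

    Limits:
      cpu:     1000m
      memory:  1500Mi
    Requests:
      cpu:     250m
      memory:  1000Mi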

Environment

OS: AWS EKS K8s Node
Zalenium Image Version(s): 3.141.59t

@KierranM
Contributor Author

It would appear that this is because we are using an older version of the chart. By default the chart uses the hub image tagged 3, so image updates are picked up automatically. One of the recent changes to the image appears to be incompatible with chart/values from before that point in time.
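
One way to avoid this kind of mismatch is to pin the hub image to a specific tag in the chart values instead of tracking the floating 3 tag. A minimal sketch, assuming the chart exposes the image and tag under the hub values (the exact key names may differ between chart versions):

hub:
  image: "dosel/zalenium"   # assumed image reference; adjust for your registry
  tag: "3.141.59s"          # pin a known-good tag instead of the floating "3"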

@KierranM
Contributor Author

After some digging with @pschuermann we found that this is due to podSecurityContext and containerSecurityContext not having a valid default of {}. PR incoming.
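
For reference, a sketch of what defaulting these values to empty objects in the chart's values.yaml could look like (the surrounding structure is an assumption based on the description above, not the actual PR):

hub:
  # Defaulting these to {} keeps templates that reference them from
  # rendering invalid (null) security contexts on the generated pods.
  podSecurityContext: {}
  containerSecurityContext: {}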
