docs: document trade-offs in memory configuration #1651

Merged · 2 commits · Apr 25, 2024
6 changes: 6 additions & 0 deletions deployment/helm/node-feature-discovery/values.yaml
@@ -100,6 +100,12 @@ master:
memory: 4Gi
requests:
cpu: 100m
# Consider setting `requests.memory` equal to `limits.memory`. The scheduler uses the
# `requests` value to decide which node can accommodate the pod. If `requests` is much
# lower than `limits` and the node comes under memory pressure, the kubelet may evict
# the pod, or the kernel's OOM killer may terminate it, even though its usage is still
# below `limits`. Memory is not compressible: once allocated to a pod, it can only be
# reclaimed by killing the pod. Robusta has a good article discussing this issue:
# https://home.robusta.dev/blog/kubernetes-memory-limit
memory: 128Mi

nodeSelector: {}
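
To make the guidance concrete, here is a minimal sketch of the resulting `master.resources` block with `requests.memory` pinned to `limits.memory`; the sizes are the chart defaults and purely illustrative, not tuned recommendations:

```yaml
master:
  resources:
    limits:
      cpu: 300m
      memory: 4Gi
    requests:
      cpu: 100m
      # Matching limits.memory means the scheduler only places the pod on a
      # node that can sustain its peak memory usage, avoiding surprise OOM kills.
      memory: 4Gi
```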
2 changes: 1 addition & 1 deletion docs/deployment/helm.md
@@ -132,7 +132,7 @@ APIs you need to install the prometheus operator in your cluster.
| `master.service.type` | string | ClusterIP | NFD master service type. **NOTE**: this parameter is related to the deprecated gRPC API and will be removed with it in a future release |
| `master.service.port` | integer | 8080 | NFD master service port. **NOTE**: this parameter is related to the deprecated gRPC API and will be removed with it in a future release |
| `master.resources.limits` | dict | {cpu: 300m, memory: 4Gi} | NFD master pod [resources limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits) |
| `master.resources.requests`| dict | {cpu: 100m, memory: 128Mi} | NFD master pod [resources requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits) |
| `master.resources.requests`| dict | {cpu: 100m, memory: 128Mi} | NFD master pod [resources requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits). Consider setting `requests.memory` equal to `limits.memory`. The scheduler uses the `requests` value to decide which node can accommodate the pod. If `requests` is much lower than `limits` and the node comes under memory pressure, the kubelet may evict the pod, or the kernel's OOM killer may terminate it, even though its usage is still below `limits`. Memory is not compressible: once allocated to a pod, it can only be reclaimed by killing the pod. [Robusta](https://home.robusta.dev/blog/kubernetes-memory-limit) has a good article discussing this issue. |
| `master.tolerations` | dict | _Scheduling to master node is disabled_ | NFD master pod [tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
| `master.annotations` | dict | {} | NFD master pod [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) |
| `master.affinity` | dict | | NFD master pod required [node affinity](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) |
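
As a usage sketch, the same trade-off can be applied at install time through a custom values file rather than by editing the chart; the file name below is hypothetical, and the override is passed with Helm's standard `-f` flag:

```yaml
# my-values.yaml (hypothetical) -- apply with, e.g.:
#   helm install <release> <chart> -f my-values.yaml
master:
  resources:
    requests:
      # Match limits.memory so scheduling reflects peak usage.
      memory: 4Gi
    limits:
      memory: 4Gi
```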