Use GOMAXPROCS in gateway based on limits.cpu #1989

Merged 5 commits on Dec 15, 2024
13 changes: 13 additions & 0 deletions autoscaler/controllers/gateway/deployment.go
@@ -171,6 +171,19 @@ func getDesiredDeployment(dests *odigosv1.DestinationList, configDataHash string
Name: "GOMEMLIMIT",
Value: fmt.Sprintf("%dMiB", gateway.Spec.ResourcesSettings.GomemlimitMiB),
},
{
// Tell the Go runtime how many CPUs are available;
// without this, Go assumes all of the node's cores are available.
Name: "GOMAXPROCS",
ValueFrom: &corev1.EnvVarSource{
ResourceFieldRef: &corev1.ResourceFieldSelector{
ContainerName: containerName,
// For limits.cpu, Kubernetes automatically rounds the value up to an integer
// (700m -> 1, 1200m -> 2).
Resource: "limits.cpu",
},
},
},
},
SecurityContext: &corev1.SecurityContext{
AllowPrivilegeEscalation: boolPtr(false),
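The comment in the hunk above notes that Kubernetes rounds a fractional CPU limit up to a whole integer when it is exposed through a `resourceFieldRef`. A minimal sketch of that ceiling behavior (the `ceilCores` helper is hypothetical, for illustration only; it mirrors the rounding the kubelet applies with a divisor of 1):

```go
package main

import "fmt"

// ceilCores mimics how Kubernetes resolves limits.cpu for a
// resourceFieldRef with a divisor of 1: the milli-CPU limit is
// rounded up to a whole number of cores.
// Hypothetical helper, not part of the PR.
func ceilCores(milliCPU int64) int64 {
	return (milliCPU + 999) / 1000
}

func main() {
	fmt.Println(ceilCores(700))  // 700m  -> 1
	fmt.Println(ceilCores(1200)) // 1200m -> 2
}
```

So a gateway container limited to 700m CPU will see GOMAXPROCS=1, which is what the Go runtime then uses instead of the host's core count.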
7 changes: 7 additions & 0 deletions tests/common/assert/pipeline-ready.yaml
@@ -55,12 +55,19 @@ spec:
fieldPath: metadata.name
- name: GOMEMLIMIT
(value != null): true
- name: GOMAXPROCS
valueFrom:
resourceFieldRef:
containerName: gateway
divisor: "0"
resource: limits.cpu
name: gateway
resources:
requests:
(memory != null): true
limits:
(memory != null): true
(cpu != null): true
volumeMounts:
- mountPath: /conf
name: collector-conf
4 changes: 0 additions & 4 deletions tests/e2e/cli-upgrade/chainsaw-test.yaml
@@ -103,10 +103,6 @@ spec:
try:
- apply:
file: ../../common/apply/add-tempo-traces-destination.yaml
- name: Odigos pipeline ready
try:
- assert:
file: ../../common/assert/pipeline-ready.yaml
Collaborator:
This guaranteed that we do not progress until the pipeline is ready.
I wonder if removing the assert can cause issues.

Collaborator (author):
I added this step back, but only to make sure all the pods are ready, without asserting the full pod spec.
The full pipeline-ready assert still runs after the upgrade, as before.

- name: Simple-demo instrumented after destination added
try:
- assert:
6 changes: 6 additions & 0 deletions tests/e2e/workload-lifecycle/03-assert-action-created.yaml
@@ -63,6 +63,12 @@ spec:
fieldPath: metadata.name
- name: GOMEMLIMIT
(value != null): true
- name: GOMAXPROCS
valueFrom:
resourceFieldRef:
containerName: gateway
divisor: "0"
resource: limits.cpu
name: gateway
resources:
requests: