AAW Dev: Re-size workloads scheduled on general nodepools #1997

Closed
Jose-Matsuda opened this issue Nov 27, 2024 · 3 comments · Fixed by StatCan/aaw-kubeflow-manifests#408

@Jose-Matsuda

Jose-Matsuda commented Nov 27, 2024

Follow-up: should be done after #1992.

This one has significantly more pods, but at least 12 of them can be eliminated because they are daemonsets that have already been explored. Since it's also bigger, it would be better to do this after #1992 so we have a flow going.

@jacek-dudek

jacek-dudek commented Dec 7, 2024

Posting the suggested resource requests based on current usage metrics for workloads on the general nodepool. I've updated the table to fix the suggested memory requests; I hadn't previously noticed that Grafana reported memory metrics in varying units.
resource-utilization-on-aaw-dev-general-nodes.xlsx
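
For reference, right-sizing a workload here comes down to setting explicit requests (and usually limits) on its pod template. A minimal sketch of what one row of the spreadsheet might translate to; the workload name, namespace, image, and values below are hypothetical, not taken from the attachment:

```yaml
# Hypothetical example: requests/limits derived from observed usage.
# Name, namespace, image, and values are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-controller
  namespace: example-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-controller
  template:
    metadata:
      labels:
        app: example-controller
    spec:
      containers:
        - name: manager
          image: example.registry/controller:latest  # placeholder
          resources:
            requests:
              cpu: 50m        # roughly observed usage plus headroom
              memory: 128Mi   # normalize Grafana's mixed units (MiB/GiB) first
            limits:
              memory: 256Mi   # no CPU limit, to avoid throttling; a judgment call
```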

@jacek-dudek

Updated the table with all of the pods examined. 34 of the 140 workloads have ArgoCD annotations.
object-hierarchy-and-labels-and-annotations-aaw-dev-general-nodepool.xlsx
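
For context, ArgoCD marks the resources it manages either with the `app.kubernetes.io/instance` label or, when annotation-based tracking is enabled, the `argocd.argoproj.io/tracking-id` annotation. A sketch of what that metadata looks like on a managed workload; the Application and workload names are hypothetical:

```yaml
# Hypothetical metadata on an ArgoCD-managed Deployment. Depending on the
# tracking method configured, ArgoCD stamps the label or the annotation.
metadata:
  name: example-controller
  namespace: example-system
  labels:
    app.kubernetes.io/instance: example-app   # label-based tracking
  annotations:
    argocd.argoproj.io/tracking-id: example-app:apps/Deployment:example-system/example-controller
```

Workloads carrying these markers need their resource changes made in the source manifests (e.g. StatCan/aaw-kubeflow-manifests) rather than edited in-cluster, since ArgoCD will revert live edits on its next sync.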

@jacek-dudek

Posting my work on the config changes so far.
changes-to-configs-of-root-objects.xlsx
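
Assuming the manifests repo uses kustomize (as Kubeflow manifests typically do), one way such a config change could be expressed is an overlay patch that overrides resources without forking the base; the file layout and target name here are hypothetical:

```yaml
# kustomization.yaml (hypothetical overlay): patches resource requests onto
# a base Deployment without modifying the upstream manifest.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patches:
  - target:
      kind: Deployment
      name: example-controller
    patch: |-
      # "add" creates the resources field if absent, replaces it if present
      - op: add
        path: /spec/template/spec/containers/0/resources
        value:
          requests:
            cpu: 50m
            memory: 128Mi
          limits:
            memory: 256Mi
```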
