[OOMKilled] High memory consumption #1346
Comments
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 7 days.
Not stale.
A code change could certainly help, but I think at least increasing the DaemonSet configuration from a limit of 50Mi to something like 200Mi would be a start.
We successfully applied #1356 as a mitigation. Even a limit of 500Mi did not resolve an OOM loop in one of our rather "involved" use cases.
@JensErat That's a much better fix; mine is just a band-aid. Agreed, 200Mi sometimes doesn't even get me out of an OOM loop. Most of the time it works, but not always.
What would you like to be added:
The option to specify the number of parallel CNI requests being processed. In scenarios where many pods start at the same time, the memory usage of the Multus pods (thick mode) increases rapidly. Hence, I suggest adding a way to limit the concurrency.
Why is this needed:
This would avoid having to set higher memory requests, which are wasteful since they are only needed in these bursty scenarios.
Btw, I'm happy to help with the implementation.