Set resource limits for containers #68
Thank you for filing the issue. We'll look into it next sprint.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
@openshift-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Would it be possible to re-open this issue? After running for a week, the pods consume about 3 GiB of RAM each.
@felixkrohn what version are you using?
0.1.13 as distributed by RH on OperatorHub (image: http://quay.io/file-integrity-operator/file-integrity-operator:0.1.13)
@felixkrohn we'll look into it.
@felixkrohn would you be able to run with the steps outlined in https://mrogers950.gitlab.io/openshift/2021/04/12/fio-profile/ ?
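(For context, a minimal sketch of the usual way a Go daemon exposes the profiling endpoints that steps like these rely on. It is not the operator's actual code; the exact procedure is in the linked post, and the localhost:6060 address is an illustrative assumption.)

```go
// Minimal sketch, not operator code: expose the standard net/http/pprof
// handlers so a heap profile can be pulled from a running daemon.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on the default mux
)

func main() {
	go func() {
		// localhost:6060 is an assumed, illustrative address.
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the daemon's normal work would continue here ...
	select {}
}
```

With something like that in place, a heap snapshot can be captured with `go tool pprof http://localhost:6060/debug/pprof/heap` (after port-forwarding to the pod) and saved as the .gz files mentioned below.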
@mrogers950 Thanks to the great how-to 👍 I got it running, and will send you the .gz files next week (don't hesitate to remind me should I forget...)
Did the traces help in any way?
@felixkrohn yes, thanks for your help! The pprof data shows what I expected: the daemon's actual heap usage is only a small percentage of the total reported by the cluster. This coincides with what I found about the reserved space used by the Go runtime, which I tried to outline briefly here: https://mrogers950.gitlab.io/golang/2021/03/12/wild-crazy-golang-mem/ But I think we can now support pod limits properly, because the daemon pods are more robust and should be able to handle an occasional restart by the OOM killer. I'll work on a PR for that.
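(To illustrate the heap-versus-reserved distinction described above, a minimal standalone sketch, not operator code, comparing the live heap to the total memory the Go runtime has obtained from the OS:)

```go
// Minimal sketch: the live heap (HeapAlloc) is typically much smaller than
// the total virtual memory the Go runtime has obtained from the OS (Sys),
// which is closer to what cluster metrics report as the pod's usage.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)

	fmt.Printf("live heap:        %d MiB\n", m.HeapAlloc/(1024*1024))
	fmt.Printf("reserved from OS: %d MiB\n", m.Sys/(1024*1024))
}
```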
Great news! Thanks for the update.
As a Platform Engineer I need to control the usage of CPU and memory per container.
Please add resource limits.
The AIDE pods were running for a day and used 1.6 GB of memory for no reason.
cheers
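(For illustration, a minimal sketch of what per-container requests and limits could look like when set programmatically in Go. The container name and values are assumptions for this example, not the operator's actual defaults.)

```go
// Minimal sketch with illustrative values only: per-container CPU/memory
// requests and limits as they would be set on a Kubernetes container spec.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	container := corev1.Container{
		Name: "aide", // hypothetical container name
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("50m"),
				corev1.ResourceMemory: resource.MustParse("200Mi"),
			},
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("200m"),
				corev1.ResourceMemory: resource.MustParse("500Mi"),
			},
		},
	}
	fmt.Printf("%+v\n", container.Resources)
}
```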