running out of disk space #22

Open
kc-dot-io opened this issue Sep 21, 2017 · 2 comments

Comments

@kc-dot-io

devmapper: Thin Pool has 3826 free data blocks which is less than minimum required 4454 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior

This appears to be due to images not being cleaned up; once the disk runs out of space, this particular storage setup has trouble reclaiming it. I'm testing a lower ECS image cleanup interval to help avoid running into disk overrun.
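For reference, the dm.min_free_space option the error mentions is a devicemapper storage option for the Docker daemon. On the ECS-optimized Amazon Linux AMI it could be set roughly like this (the file path and the 10% threshold here are just the commonly documented defaults, not something I've verified on this stack):

```sh
# /etc/sysconfig/docker -- Docker daemon options on the ECS-optimized Amazon Linux AMI (assumed path)
# dm.min_free_space is the free-space threshold below which devicemapper
# refuses new allocations and raises the "free data blocks" error above
OPTIONS="${OPTIONS} --storage-opt dm.min_free_space=10%"

# then restart the daemon for the change to take effect:
# sudo service docker restart
```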

@viveksura

Hi slajax,

I have encountered the same issue. We SSH-ed into the EC2 container instance and added cron jobs which run every 15 minutes to clean up exited Docker containers, dangling images and unused volumes. Our build time is typically less than 3 minutes, so I am also removing images which are older than an hour. Here are the cron jobs I am running. Hope they help.

```sh
# Every 15 minutes: remove exited containers, dangling volumes and dangling images
*/15 * * * * docker rm $(docker ps -q -f status=exited)
*/15 * * * * docker volume rm $(docker volume ls -qf dangling=true)
*/15 * * * * docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
# Every 15 minutes: force-remove build images whose age is reported in hours
*/15 * * * * docker rmi $(docker images | grep "lambci-ecs-repo-name" | grep "hour" | awk '{print $3}') -f
*/15 * * * * docker rmi $(docker images | grep "none" | grep "hour" | awk '{print $3}') -f
```
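Untested on my side, but if the Docker version on the AMI is new enough to support `docker system prune` with the `until` filter, most of the above could probably be collapsed into a single cron entry; something like this (the one-hour cutoff is just an example):

```sh
# Remove stopped containers, dangling images and other unused data older than 1h,
# without prompting (requires a Docker version that supports "system prune" filters)
*/15 * * * * docker system prune -af --filter "until=1h"
```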

PS: To SSH into an EC2 container instance, you need to attach an AWS key pair when creating it. I am attaching the CloudFormation template with the required changes.
cluster.template.txt
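In case it helps, creating the key pair and connecting looks roughly like this (the key name is just a placeholder; ec2-user is the default user on the ECS-optimized Amazon Linux AMI):

```sh
# Create a key pair and save the private key locally (name is a placeholder)
aws ec2 create-key-pair --key-name my-ecs-key \
  --query 'KeyMaterial' --output text > my-ecs-key.pem
chmod 400 my-ecs-key.pem

# SSH into the container instance once the key pair is attached to it
ssh -i my-ecs-key.pem ec2-user@<instance-public-dns>
```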

BTW, I found your issues and PRs in lambci very helpful.

Thanks.

@kc-dot-io
Author

kc-dot-io commented Sep 23, 2017

@vivek-rg nice! I tried implementing the strategy defined here but it didn't work out properly. I modified the stack but probably should have recreated it. I also read something about this being an issue with the type of disk you choose. That said, this should tell the ecs-agent to clean things up for you, since it can manage this based on the params you set, which would make manually managed crons unnecessary.
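For anyone else hitting this: the ecs-agent params I mean are the cleanup settings in /etc/ecs/ecs.config. Roughly what I'm experimenting with is below (the variable names are from the ECS agent docs; the values are just what I'm trying, not recommendations):

```sh
# /etc/ecs/ecs.config -- ECS agent cleanup settings (values are examples only)
ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION=15m   # how long stopped task containers are kept
ECS_IMAGE_CLEANUP_INTERVAL=10m              # how often the image cleanup cycle runs
ECS_IMAGE_MINIMUM_CLEANUP_AGE=30m           # minimum image age before it can be deleted
ECS_NUM_IMAGES_DELETE_PER_CYCLE=10          # number of images removed per cleanup cycle
```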

Thanks for posting the example of the key pair. I noticed the template didn't have one, and adding it was going to be my next step to unblock myself if recreating the stack didn't help. You saved me some time! Hope my comment above is also helpful for you.

Let's keep posting here if we come across other things! There isn't a lot of activity on this repo, and since we're obviously pushing the limits we should definitely share our improvements. If I can get an optimized ecs-agent cleanup strategy working, I'll implement it in a PR, just like the custom stack name fix that is open.
