[ECS] One task prevents all other tasks on an instance from changing from PENDING to RUNNING #325
Comments
This is working by design. Our scheduler assumes that while a task is stopping (or exiting gracefully) it will still use its requested resources. For example, a webserver would still need its resource pool while waiting for all connections to terminate. The alternative would be to allocate fewer resources when a task transitions to stopping, but that's not a safe assumption to make across all workloads. Would configurable behavior help your use case? Or is it sufficient to be more clear about this behavior in the console?
Hey @petderek. I get that it works like that by design. However, I wonder why one task should prevent all the others from running. The best configurable behavior for us would be task-by-task accounting: keep counting the resources held by the stopping task (which is fine and reasonable), but don't prevent other tasks from starting on the instance while it still has resources to give. In our use case, the tasks run a long-poll workload as a service, not a web client. The current behavior means our instances don't fill up in time, and it can also get our process "stuck" during a new deployment, because instances wait for one task to end before the other tasks are allowed to run. The instance is effectively in a kind of "disabled" or "draining" state until the long workload is done (and that can take some time). What can we do so that our use case is supported in ECS? Thanks
Hi! Thank you in advance!
Hi everyone, we are facing the exact same issue. @Alonreznik is right: one task is blocking all other tasks, and in my opinion this does not make sense. Let me illustrate.

Assume we have one task with a 10 GB memory reservation running on a container instance that registered with 30 GB. The container instance shows 20 GB of RAM available, which is correct. Now this task is stopped (the ECS agent will make Docker send a SIGTERM) but the container keeps running to finish its calculations (it now shows under stopped tasks as "desired status = STOPPED" and "last status = RUNNING"). The container instance will now show 30 GB available in the AWS ECS console, which is nonsense; it should still be 20 GB, since the container is still using resources, as @petderek mentioned.

Even worse, if we try to launch three new tasks with a 10 GB memory reservation each, they will all be pending until the still-running task transitions to "last status = STOPPED". The expected behavior would be that two of the three tasks can launch immediately. I hope my example was understandable; otherwise feel free to ask.
Hey! As a workaround, you can set ECS_CONTAINER_STOP_TIMEOUT to a smaller value. This is used to configure the 'time to wait for the container to exit normally before being forcibly killed'. By default, it is set to 30s. More information can be found here. I have marked this issue as a feature request and we will work on it soon.
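For anyone looking for the concrete steps, a minimal sketch of this workaround on an EC2 container instance follows; the 10s value and the Amazon Linux 2 restart command are illustrative assumptions, so adjust them to your AMI and to how long your containers need to drain:

```sh
# Shorten the SIGTERM-to-SIGKILL grace period the agent gives stopping containers
# (the default is 30s; 10s here is only an example).
echo "ECS_CONTAINER_STOP_TIMEOUT=10s" | sudo tee -a /etc/ecs/ecs.config

# Restart the ECS agent so it picks up the new setting (Amazon Linux 2 shown).
sudo systemctl restart ecs
```

Note that this trades graceful-shutdown time for faster resource release, which is the trade-off the following comments push back on.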
Hi @yumex93, about your workaround: in most cases we need our containers to shut down gracefully before they die. Decreasing ECS_CONTAINER_STOP_TIMEOUT would therefore cause our workers to be killed before the shutdown is complete, so the feature is more than needed :) Thank you again for your help, we're waiting for updates about it. Alon
@Alonreznik, @yumex93 We have the same situation; some workers even take a few hours to complete their task, and we've leveraged ECS_CONTAINER_STOP_TIMEOUT to shut those down gracefully as well. Since ECS differentiates between a "desired status" and a "last status" for tasks, I believe it should be possible to handle tasks that are in the process of shutting down a bit better than it works today. For an illustration of what I mean, see this screenshot: the tasks are still running and still consume resources, but the container instance does not seem to keep track of those resources. If this is more than just a confusing display, I expect it to cause issues, e.g. like the one above.
Hi @yumex93, any update on this issue? Thanks, Alon
We are aware of this issue and are working on prioritizing it. We will keep it open for tracking and provide an update when we have more solid plans.
Hi @yunhee-l. Thanks
Hi @yunhee-l @FlorianWendel
We don't have any new updates at this point. We will update when we have more solid plans.
Related: aws/amazon-ecs-agent#731
Hi, just wanted to add our experience with this in the hope that it can be bumped up in priority. We need to run tasks that can be long running. With this behaviour as it stands, it essentially locks up the EC2 instance so that it cannot take any more tasks until the first task has shut down (which could be a few hours). It wouldn't be quite so bad if ECS marked the host as unusable and placed tasks on other hosts, but it doesn't; it still sends them to the host that cannot start them. This has the potential to cause a service outage for us, in that we cannot create tasks to handle workload (we tell the service to create tasks but it can't due to the lock-up). Thanks.
@petderek @yumex93 Do you have any ETA for implementing or deploying it? This is a real blocker for our ongoing processes. Thank you Alon
@Alonreznik: thanks for following up again and communicating the importance of getting this resolved; this helps us prioritize our tasks. We don't have an ETA right now, but we have identified the exact issue and have a path forward that requires changes to our scheduling system and the ECS agent. To give you some more context, as @petderek said earlier,
So changing this behavior will be a departure from our existing way of accounting for resources when we schedule tasks. Considering that the current way has been in place since the beginning of ECS, the risks involved with changing it are significant, as there could be subtle rippling effects in the system. We plan to explore ways to validate this change and ensure we do not introduce regressions.

The original design made the trade-off towards oversubscribing resources for placement by releasing resources on the instance when tasks were stopped, but the side effect of that is the behavior you are describing. Additionally, now that we've added granular SIGKILL timeouts for containers with #1849 (the per-container `stopTimeout` sketched below), we can see this problem being exacerbated.

So all that is to say: we're working on this issue and we will update this thread as we work towards deploying the changes.
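For reference (and not as a fix), a minimal sketch of how that per-container stop timeout is expressed in a task definition; the family name, image, and values below are placeholders rather than anything from this thread:

```sh
# Write a task definition whose container gets its own SIGTERM-to-SIGKILL grace period.
cat > taskdef.json <<'EOF'
{
  "family": "long-poll-worker",
  "containerDefinitions": [
    {
      "name": "worker",
      "image": "example/worker:latest",
      "memoryReservation": 1024,
      "essential": true,
      "stopTimeout": 900
    }
  ]
}
EOF

# Register it; stopTimeout (in seconds) overrides the instance-wide ECS_CONTAINER_STOP_TIMEOUT
# for this container. Check the documented limits for your launch type and agent version.
aws ecs register-task-definition --cli-input-json file://taskdef.json
```

The longer each container is allowed to drain, the longer its resources stay accounted against the instance, which is why this setting can make the scheduling problem described here more visible.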
@adnxn We of course get that this is built into the design and we accept it. However, our intention is not to ask for such a radical change to the core system (which would be great!!). Our point is about the ecs-agent's assumption that all of the resources of the last tasks must be released on the instance; our request is just to handle it task by task (and also to have some indication that the task is still running on the instance after it receives the stop signal). As it looks today, resource holding and releasing are accounted for the entire instance, not for the individual tasks that run on it. So if a task releases its resources, the ecs-agent should allow scheduling those resources for new tasks (if they meet the resource requirements). Thank you for your help! Please keep us posted, Alon
Hello, adding new instances to the cluster helped us work around this situation, but it can be really costly if there are multiple deploys each day. Are there any new updates about this issue, or possible workarounds? It could definitely be solved by removing the long-poll service and switching to just calling ECS RunTask (process one job and terminate) without waiting for the result, roughly as sketched below. But that would require more changes in our application architecture and would also be more tightly coupled to ECS. Thanks
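For anyone weighing that alternative, a minimal sketch of the one-shot RunTask call, with a placeholder cluster and task definition name:

```sh
# Launch a single one-shot job instead of keeping a long-polling service task alive;
# the task processes one job and then exits on its own.
aws ecs run-task \
  --cluster my-cluster \
  --task-definition my-worker-job \
  --count 1
```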
Hi guys. Thanks
Hi, thank you for your feedback on this issue. We are aware of this behavior and are researching solutions. We will keep the GitHub issue up to date as the status changes.
Hi @coultn. We must say this is something that prevents our workloads from growing in line with our tasks, and there are situations where this behavior actually gets our production servers stuck. Again, that can be a no-go (or a no-continue, in our case) for using ECS in prod.

For example, you can see a typical production desired/running gap. The green layer is the gap between the desired and the running (orange layer) tasks. The blue is the PENDING tasks in the cluster. You can see a constant gap between these two parameters. No deployment was made today; this is something we're encountering in the scale-up mechanism alone.

Think about the situation we're encountering. We have new tasks in our queue (SQS), and therefore we ask ECS to run new tasks (meaning the desired count increases). ECS schedules new workloads onto an instance and then hits the one task that is still working. From the scheduler's point of view it has done its job: it scheduled new tasks. But those tasks are stuck in the PENDING state, for hours in some cases, which makes the instance unusable because they're just not working yet. Now imagine you need to launch 100 more tasks within a few hours to complete queued workloads, and you have 5-6 instances each blocked by one task; it becomes a mess.

We also must say we have only encountered this in the last year, after an agent upgrade a year or a year and a half ago. Every day we need to ask for more instances for our workloads just to break the blockage. This is not how a production service on AWS should be maintained, and we're facing it again and again, every day. Please help us continue using ECS as our production orchestrator. We love this product and want it to succeed, but as it stands, it doesn't fit long-running tasks. Your team's help in expediting this would be appreciated. Thank you Alon
I've discussed this with Nathan; he told me that they plan to fix this, but unfortunately without a quick fix. We have similar issues with deployment and scaling, and because of it a lot of unnecessary rolling of new instances. Meanwhile we are experimenting with EKS (also for multi-cloud deployment), where this issue isn't present.
Hi @Halama. Thanks for the reply and the update. I understand this is not something that can be solved quickly, but in the meantime the ECS team could provide workarounds, such as a binpack placement method favoring the newest instances, or a configurable limit on how long a task can stay in the PENDING state before it is tried on another instance. This issue is not getting any response even though many users are encountering it. It has been open for more than a year and they can't give any reasonable ETA (even 3 months would be good for us); it was only moved to "researching" in the last week. Can you please share more about your migration process from ECS to EKS? Thanks again Alon
Hi @pavneeta, any update on this issue?
Any update on this?
I'm running 1 task per host, with autoscaling, but everything gets piled up in the MQ because of this one stopping task (which runs daily and should stop gracefully). Also, CI/CD pipelines fail since I'm leveraging [...]. @coultn, your suggestion would solve it. Any ETA for this?
We recently implemented Datadog and cAdvisor as daemons for ECS using CloudFormation. We have more than 20 stacks, a few of them running about 10 instances (the bigger ones). On the first try, the daemons took about 5 hours to reach running. The key to improving this and getting the new daemon tasks running was to set MinClusterSize=1 (it was not previously defined) and the following placement strategy in ECS-Service.yaml (after those modifications we deployed the daemons); a sketch of the idea is shown below.
We are planning to apply it on prod soon. Keep in mind that the placement strategy performs a rollout of your running instances. I don't think it is a solution, but it could help!
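The actual CloudFormation snippet was not preserved in this thread; purely as an illustration of the idea, a comparable binpack placement strategy expressed through the AWS CLI could look like the following, with all names and counts being placeholders:

```sh
# Bin-pack tasks onto the fewest instances by memory when creating the service.
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-service \
  --task-definition my-worker \
  --desired-count 10 \
  --placement-strategy type=binpack,field=memory
```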
Any update about this? We love ECS, but this use case is driving us to Kubernetes, which solves it easily.
BTW, 3 years (!!!!) after this issue was opened, many people are still facing this unexpected behaviour. I think that is a good reason to get it fixed once and for all.
Hi @petderek. Any update?
AWS seems to be overly cautious regarding a fix, and I think it's because the issue still isn't clearly understood by everyone involved. I'm not entirely sure that I understand it myself, but after reading the whole thread, here's what it seems to boil down to:
Scenario 2 makes no sense. It's clearly a bug. The phrase "by design" doesn't belong in this thread. I understand how it could've happened, though; it's perhaps an unfortunate workaround for an older bug:
Is that accurate? Are we actually worried about the unintended consequences of fixing Bug A? In our case shortening stopTimeout isn't a viable option, and neither are placementConstraints. Every host may have tasks stopping at the same time, so placementConstraints would just continue making them all unusable. (And even in a best case, it would result in very suboptimal placement as everything gets squeezed onto a small number of usable hosts.) Two possible fixes:
Hi everyone. It seems that this just won't be prioritized, and the ECS team is effectively saying "we're living with the bug", while this bug prevents so many users from doing BASIC things on ECS, such as simply running tasks that work.
We are excited to share that we've addressed the known issue in the ECS agent to prevent tasks getting stuck in the pending state on instances that have stopping tasks with long timeouts. For details on the root cause, the fix, and other planned improvements, please see the What's New post, blog post, and documentation. We'll be closing this issue. As always, we're happy to receive your feedback; let us know if you face any other issues.
Holy moly!!! 5 years!! Amazing, guys! I'm so excited! Thank you so much 🙏🙏
Hello There.
We've lately been facing strange behavior in ECS, where stopping tasks prevent new tasks from running on an instance.
A little about our case:
We have some tasks that need to complete their current action and then exit by themselves when they receive a StopTask command. That means we have a graceful-shutdown process that sometimes takes a while to complete (more than a few seconds, and even some long minutes).
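Just to make this concrete, here is a minimal sketch of the kind of entrypoint we mean, assuming a shell wrapper around a worker process (`worker.sh` is a placeholder, not our actual code):

```sh
#!/bin/sh
# Start the long-poll worker in the background so this wrapper can react to signals.
./worker.sh &
WORKER_PID=$!

# When ECS stops the task, Docker sends SIGTERM to this script (PID 1 in the container).
# Forward it to the worker and wait for it to finish its current job instead of exiting at once.
trap 'kill -TERM "$WORKER_PID"; wait "$WORKER_PID"' TERM

wait "$WORKER_PID"
```

Draining like this can take minutes, and that window is exactly when the problem below appears.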
However, when a StopTask is sent to these tasks, they no longer appear among the tasks in the ECS console (which is fine), but they also block all other tasks on the same instance that are trying to change their state from `PENDING` to `RUNNING`. Here is an example of one instance's tasks when it happens:
Why does this behavior happen? Why should one task prevent the others from running next to it until it is done? This is bad practice in resource management (we don't use the full potential of our instances during the pending time).
The best thing would be for the stopped task to keep appearing in the console until it has really stopped on the instance, and for the state change from `PENDING` to `RUNNING` not to be affected by other tasks on the same instance. I hope you can fix this behavior,
Thanks!