Global mode does not work correctly #20
Comments
I have the same problem. If I start a global service on 9 nodes, scheduled to run every 1m, I can see that every minute it starts on a different set of nodes (usually 2 or 3 out of 9, more or less randomly).
Hi guys, I did some tests on my side and it appears this could be due to the way tasks are managed on the Swarm side. Maybe linked to a GC issue with the task reaper. Any ideas @dperny?
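If the task reaper is indeed involved, the knob that drives it is the swarm's task history retention limit, which can be checked and raised from the CLI; a quick sketch:

```shell
# Inspect the current setting ("Task History Retention Limit" in the output).
docker info | grep -i 'task history'

# Raise the limit so more old task records survive the reaper.
docker swarm update --task-history-limit 20
```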
So, even with job support finally landing in Docker (yeah!), this project will probably still be useful as long as there is no cron scheduler in Swarm. @crazy-max do you plan on migrating swarm-cronjob to the new model? That would probably fix this issue as well.
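For context, the new job mode looks like this from the CLI (a minimal sketch for Docker 20.10+; service name and image are placeholders):

```shell
# A global-job runs its task to completion once on every node,
# rather than keeping a task running like a plain global service.
docker service create \
  --name hello-job \
  --mode global-job \
  busybox date
```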
@djmaze Yes, I'm aware of this feature. It looks promising and I think it will benefit the global mode support (and fix this issue).
Any news on this issue? |
Is anyone aware of whether Docker CE 20.10 will be released in the near future? It will contain the new job support that this issue is blocked on.
@danielgrabowski I'm going to start working on it, but I'll need to do a large refactoring beforehand. I'll keep you posted.
I have the same issue.
This command apparently solves the problem.
@crazy-max @djmaze It seems that compose file syntax support for jobs only recently landed in Docker 23 (moby/moby#41895 (comment)) |
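For reference, a hedged sketch of that compose syntax (service name and image are placeholders; `mode: global-job` per the linked PR):

```yaml
version: "3.8"
services:
  nightly-task:
    image: busybox
    command: date
    deploy:
      # Runs the task to completion once on every node, then exits,
      # instead of restarting like a regular global service.
      mode: global-job
```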
Behaviour
Steps to reproduce this issue
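Roughly: deploy a stack along these lines with `docker stack deploy` and watch the service logs on each schedule tick (a minimal sketch; label names are taken from the swarm-cronjob README, all other names are placeholders):

```yaml
version: "3.8"
services:
  test:
    image: busybox
    command: date
    deploy:
      # Global mode: one task per node, re-run by swarm-cronjob on schedule.
      mode: global
      labels:
        - "swarm.cronjob.enable=true"
        # Standard five-field cron expression: every minute.
        - "swarm.cronjob.schedule=* * * * *"
      restart_policy:
        condition: none
```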
Expected behaviour
On every scheduled run, there should be one log entry for each node.
Actual behaviour
After the first deployment output (where there is output for every node), only the output of two nodes is shown.
`docker stack ps` shows that the service is only restarted on two of the nodes. Most of the time, that is; every few iterations, the service successfully runs on all nodes again.
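One way to see the placement per iteration (the stack name `cron` is a placeholder):

```shell
# List the stack's tasks with the node each one ran on and its current
# state, so the nodes missing from an iteration stand out.
docker stack ps cron --format '{{.Name}}\t{{.Node}}\t{{.CurrentState}}'
```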
I am able to reproduce this on Play with Docker (PWD) as well as on a real swarm cluster with 3 managers and 1 worker (Docker 18.09.8).
Configuration
Docker info
Logs
swarm-cronjob logs:
logs from scheduled service: