Is there a way to delay container startup to support dependant services with a longer startup time #374
Comments
At work we wrap our dependent services in a script that checks whether the link is up yet. I know one of my colleagues would be interested in this too! Personally I feel it's a container-level concern to wait for services to be available, but I may be wrong :) |
We do the same thing with wrapping. You can see an example here: https://github.com/dominionenterprises/tol-api-php/blob/master/tests/provisioning/set-env.sh |
It'd be handy to have an entrypoint script that loops over all of the links and waits until they're working before starting the command passed to it. This should be built in to Docker itself, but the solution is a way off. A container shouldn't be considered started until the link it exposes has opened. |
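For illustration, a minimal sketch of such a wrapping entrypoint; the link name "db" (which is what gives you the DB_PORT_5432_TCP_* variables) and the use of nc are assumptions, not something specified in the thread:

```sh
#!/bin/sh
# entrypoint.sh -- minimal sketch of the wrapping approach described above.
# Assumes the container has a link named "db" (so Docker injects
# DB_PORT_5432_TCP_ADDR / DB_PORT_5432_TCP_PORT) and that nc is installed.
set -e

echo "Waiting for db at ${DB_PORT_5432_TCP_ADDR}:${DB_PORT_5432_TCP_PORT}..."
until nc -z "$DB_PORT_5432_TCP_ADDR" "$DB_PORT_5432_TCP_PORT"; do
  sleep 1
done

# Hand control over to whatever command the container was asked to run.
exec "$@"
```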
@bfirsh that's more than I was imagining, but would be excellent.
I think that's exactly what people need. For now, I'll be using a variation on https://github.com/aanand/docker-wait |
Yeah, I'd be interested in something like this - meant to post about it earlier. The smallest-impact pattern I can think of that would fix this use case for us would be the following: add "wait" as a new key in fig.yml, with similar value semantics as link. Fig would treat this as a prerequisite and wait until that container has exited before carrying on. So, my fig.yml would have the app link to the database and wait on an "initdb" container.
On running the app, it will start up all the link containers, then run the wait container, and only progress to the actual app container once the wait container (initdb) has exited. initdb would run a script that waits for the database to be available, then runs any initialisations/migrations/whatever, then exits. That's my thoughts, anyway. |
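A rough sketch of what that initdb entrypoint could look like; the host name "db", the port, and the migration command are illustrative only, not taken from the comment:

```sh
#!/bin/sh
# initdb.sh -- rough sketch of the proposed "wait container": block until the
# database accepts connections, run one-off setup, then exit so the app can start.
set -e

# Block until Postgres is accepting connections.
until pg_isready -h db -p 5432 -q; do
  echo "waiting for postgres..."
  sleep 1
done

# Run initialisations/migrations, then exit with success.
./manage.py migrate
```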
(revised, see below) |
+1 here too. It's not very appealing to have to do this in the commands themselves. |
+1 as well. Just ran into this issue. Great tool btw, makes my life so much easier! |
+1 would be great to have this. |
+1 also. Recently run into the same set of problems |
+1 also. any statement from dockerguys? |
I am writing wrapper scripts as entrypoints to synchronise at the moment, not sure if having a mechanism in fig is wise if you have other targets for your containers that perform orchestration a different way. Seems very application specific to me, as such the responsibility of the containers doing the work. |
After some thought and experimentation I do kind of agree with this. As such, an application I'm building basically has a synchronous startup of its own.
|
Yes, some basic "depends on" is needed here... |
Another +1 here. I have Postgres taking longer than Django to start so the DB isn't there for the migration command without hackery. |
@ahknight interesting, why is migration running during startup? Don't you want to actually run migrate during the build instead? |
There's a larger startup script for the application in question, alas. For now, we're doing the non-DB work first and waiting on the DB before the rest. |
I've had a lot of success doing this work during the build, so there's no need to poll for startup. Although I've done something similar with mysql, where I did have to poll for startup during the build, because the server has to be up before the data can be loaded. |
Here is what I was thinking: using the idea of moby/moby#7445 we could implement a "wait_for_health_check" attribute in fig? Is there any way of making fig check the TCP status on the linked container? If so, then I think this is the way to go. =) |
@dnephin can you explain a bit more what you're doing in Dockerfiles to help with this? |
@docteurklein I can. I fixed the link from above (https://github.com/dnephin/readthedocs.org/blob/fig-demo/dockerfiles/database/Dockerfile#L21). The idea is that you do all the slower "setup" operations during the build, so you don't have to wait for anything during container startup. In the case of a database or search index, you would start the service, create the schema, load any fixture data, and shut it down again,
all as a single build step. Later when you start the container, the data is already there and the service comes up ready to use. |
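A hedged sketch of what such a single build step might look like for Postgres; the paths, the schema file, and the use of pg_ctl are assumptions, not taken from the linked Dockerfile:

```sh
#!/bin/sh
# build-setup.sh -- invoked from a single Dockerfile RUN step, assuming a
# Postgres-based image with a schema.sql already copied into the image.
set -e

# Start the server just for the duration of this build step.
su postgres -c 'pg_ctl -D /var/lib/postgresql/data -w start'

# Load the schema and any seed data while the image is being built.
su postgres -c 'psql -f /schema.sql'

# Shut down cleanly so the initialised data directory is baked into the image layer.
su postgres -c 'pg_ctl -D /var/lib/postgresql/data -w stop'
```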
nice! thanks :) |
@dnephin nice, hadn't thought of that . |
+1 This is definitely needed. |
Could you give an example of why/when it's needed? |
In the use case I have, I have an Elasticsearch server and then an application server that's connecting to Elasticsearch. Elasticsearch takes a few seconds to spin up, so I can't simply start both containers and have the app connect straight away. |
Say one container starts MySQL and the other starts an app that needs MySQL, and it turns out the other app starts faster. We get transient connection failures on startup as a result. |
crane has a way around this by letting you create groups that can be started individually. So you can start the MySQL group, wait 5 secs and then start the other stuff that depends on it. |
@oskarhane not sure if this "wait 5 secs" helps; in some cases it might need to wait more (or you just can't be sure it won't go over the 5 secs)... it isn't very safe to rely on a fixed wait. |
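One way to keep the wrapper-script approach while avoiding both a fixed sleep and an open-ended wait is to retry with an overall deadline; the host, port, and 60-second budget below are illustrative, not from the thread:

```sh
#!/bin/sh
# Sketch: retry until the service answers, but give up after a deadline
# instead of sleeping a fixed 5 seconds.
host=mysql
port=3306
deadline=$(( $(date +%s) + 60 ))   # give up after 60 seconds

until nc -z "$host" "$port"; do
  if [ "$(date +%s)" -ge "$deadline" ]; then
    echo "$host:$port did not become available in time" >&2
    exit 1
  fi
  sleep 1
done
```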
@vladikoff: more info about version 3 at #4305 Basically, it won't be supported, you have to make your containers fault-tolerant instead of relying on docker-compose. |
I believe this can be closed now. |
Unfortunately, condition is not supported anymore in v3. Here is a workaround that I've found:

website:
  depends_on:
    - 'postgres'
  build: .
  ports:
    - '3000'
  volumes:
    - '.:/news_app'
    - 'bundle_data:/bundle'
  entrypoint: ./wait-for-postgres.sh postgres 5432

postgres:
  image: 'postgres:9.6.2'
  ports:
    - '5432'

wait-for-postgres.sh:

#!/bin/sh
postgres_host=$1
postgres_port=$2
cmd="$@"
# wait for the postgres docker to be running
while ! pg_isready -h $postgres_host -p $postgres_port -q -U postgres; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done
>&2 echo "Postgres is up - executing command"
# run the command
exec $cmd
|
@slava-nikulin a custom entrypoint is common practice; it is almost the only (Docker-native) way to define and check all the conditions you need before starting your app in a container. |
Truth is, there was a lot of debate, and I think the 2.x conditional support that natively integrates with health checks to order startup was much needed. Docker does not support a local pod of containers natively, and when it does it will have to support something similar again, just as Kubernetes, for example, provides those semantics.
The 3.x series brings swarm support into compose, and hence a bunch of options have been dropped with the distributed nature in mind.
The 2.x series preserves the original compose/local-topology features.
Docker has to figure out how to merge these two versions, because forcing swarm onto compose by reducing compose's feature set is not a welcome direction.
|
I was able to do something like this:

#!/bin/sh
set -eu

docker volume create --name=gql-sync
echo "Building docker containers"
docker-compose build
echo "Running tests inside docker container"
docker-compose up -d pubsub
docker-compose up -d mongo
docker-compose up -d botms
docker-compose up -d events
docker-compose up -d identity
docker-compose up -d importer
docker-compose run status
docker-compose run testing
exit $?

// status.sh
#!/bin/sh
set -eu

echo "Attempting to connect to bots"
until nc -zv botms 3000; do
  printf '.'
  sleep 5
done
echo "Attempting to connect to events"
until nc -zv events 3000; do
  printf '.'
  sleep 5
done
echo "Attempting to connect to identity"
until nc -zv identity 3000; do
  printf '.'
  sleep 5
done
echo "Attempting to connect to importer"
until nc -zv importer 8080; do
  printf '.'
  sleep 5
done
echo "Was able to connect to all"
exit 0

// in my docker compose file
status:
  image: yikaus/alpine-bash
  volumes:
    - "./internals/scripts:/scripts"
  command: "sh /scripts/status.sh"
  depends_on:
    - "mongo"
    - "importer"
    - "events"
    - "identity"
    - "botms" |
I have a similar problem but a bit different: I have to wait for MongoDB to start and initialize a replica set (docker-compose.txt attached). I'm having difficulty doing so; does anyone have any ideas? |
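A sketch of one possible approach, assuming the mongo shell is available in a helper container and the service is reachable as "mongo" (names and ports are illustrative, not taken from the attached file): poll rs.status() until the replica set reports itself as initialised.

```sh
#!/bin/sh
# Wait until rs.status().ok returns 1, i.e. the replica set has been initiated.
until mongo --host mongo --quiet --eval 'rs.status().ok' 2>/dev/null | grep -q 1; do
  echo "waiting for the replica set to come up..."
  sleep 2
done
```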
@patrickml If I don't use docker-compose, how do I do it with a Dockerfile?

$ cat Dockerfile
FROM store/datastax/dse-server:5.1.8
USER root
RUN apt-get update
ADD db-scripts-2.1.33.2-RFT-01.tar /docker/cms/
WORKDIR /docker/cms/db-scripts-2.1.33.2/
USER dse

=============
Step 8/9 : RUN cqlsh -f build_all.cql |
In case anyone comes back to this years later, read this wonderful page from the Docker Compose docs (https://docs.docker.com/compose/startup-order/) for updated info!
There are also health checks built in now, which solve the problem better. |
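For example, with the 2.1 file format (and again with recent Compose releases, since the condition form was dropped from the 3.x format) a health check can gate startup ordering; the image names and check command here are illustrative:

```yaml
version: "2.1"
services:
  db:
    image: postgres:9.6
    healthcheck:
      # Report healthy only once Postgres accepts connections.
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
  web:
    build: .
    depends_on:
      db:
        # Start web only after db's health check passes.
        condition: service_healthy
```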
I have a MySQL container that takes a little time to start up as it needs to import data.
I have an Alfresco container that depends upon the MySQL container.
At the moment, when I use fig, the Alfresco service inside the Alfresco container fails when it attempts to connect to the MySQL container... ostensibly because the MySQL service is not yet listening.
Is there a way to handle this kind of issue in Fig?
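For the original MySQL/Alfresco case, the usual answer at the time was the wrapping approach from the comments above: poll MySQL from the Alfresco image's entrypoint before launching the service. A sketch, with the link alias "mysql" and the credentials assumed rather than taken from the issue:

```sh
#!/bin/sh
# Wait until MySQL answers a ping before starting the real service.
set -e

until mysqladmin ping -h mysql --silent; do
  echo "waiting for mysql..."
  sleep 2
done

# Start Alfresco (or whatever command was passed to the container).
exec "$@"
```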