Implement conditional depends_on #453

Open
Elkasitu wants to merge 3 commits into main from long-syntax-depends

Conversation

Elkasitu

This PR implements the necessary changes so that podman-compose complies with the compose v3 spec's long syntax for the depends_on block [1].

This essentially means that services that depend on other services can be set up in such a way that they'll only start once the defined condition is met for each service they depend on.

[1] https://github.com/compose-spec/compose-spec/blob/0c768bea2e06d5550bc5f390bebe1f830dca8756/spec.md#long-syntax-1
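
For illustration, a minimal compose file using the long syntax might look like this (the service names and images are invented for this example, they are not from the PR):

services:
  db:
    image: docker.io/library/postgres:14
    environment:
      POSTGRES_PASSWORD: example
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 5s
      retries: 5
  web:
    image: docker.io/library/nginx:alpine
    depends_on:
      db:
        # also valid: service_started, service_completed_successfully
        condition: service_healthy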

@Elkasitu Elkasitu force-pushed the long-syntax-depends branch 4 times, most recently from f92d0b1 to c2941da on March 20, 2022 17:54
@muayyad-alsadi
Collaborator

muayyad-alsadi commented Mar 21, 2022

The approach I'm aiming for delegates the logic to podman using --requires:

podman pod create --name mypod
podman create --pod mypod --name mypod_db
podman create --pod mypod --requires mypod_db --name mypod_web
podman pod start mypod
podman pod logs -f mypod

https://docs.podman.io/en/latest/markdown/podman-create.1.html#requires-container
https://docs.podman.io/en/latest/markdown/podman-run.1.html#requires-container

@Elkasitu
Author

That only takes care of the case in which the condition is service_started or it's a short-syntax depends_on, correct? We would still need the current sort logic to make the other conditions work, I think.

@muayyad-alsadi
Collaborator

it's a short-syntax depends_on, correct?

Short syntax means requiring the container to be started.
For long syntax, the current implementation converts all dependencies to short ones.

The downside of this is that it does not handle service_healthy, that is, waiting for the container to be not just up, but up and healthy (based on some user-defined health check),

either defined in the compose file:

https://github.com/compose-spec/compose-spec/blob/master/spec.md#healthcheck

healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost"]
  interval: 1m30s
  timeout: 10s
  retries: 3
  start_period: 40s

or in the Dockerfile.

We can add a test with busybox httpd -p 8080 -h /etc/ and define a healthcheck that runs wget -q -O - http://localhost:8080/hosts.
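
A minimal sketch of what such a test service could look like (image tag, flags, and intervals are assumptions):

services:
  web:
    image: docker.io/library/busybox:latest
    # -f keeps httpd in the foreground so the container stays running
    command: httpd -f -p 8080 -h /etc/
    healthcheck:
      test: ["CMD", "wget", "-q", "-O", "-", "http://localhost:8080/hosts"]
      interval: 2s
      retries: 5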

I'll post my personal opinion very soon on why I see those checks as useless start conditions, despite being very useful from an operational point of view (the k8s world differentiates between liveness and readiness conditions):

  • livenessProbe
  • readinessProbe
  • startupProbe

and they have built-in HTTP and TCP checks:

livenessProbe:
  httpGet:
    # port is required for httpGet probes; path and port values here are assumed
    path: /healthz
    port: 8080
    httpHeaders:
      - name: Accept
        value: application/json

startupProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
      - name: User-Agent
        value: MyUserAgent

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

Adrian Torres added 2 commits March 21, 2022 20:07
This commit changes the way that depends_on blocks are parsed and
tracked during the `podman-compose up` process.

Each service's dependencies will be tracked as a dict in which keys are
the services being depended on and values are the conditions to be met
by said services before the current service can start.

This lays the groundwork for supporting long-syntax / conditional
depends_on blocks, but should not change any behavior so far.

Signed-off-by: Adrian Torres <[email protected]>
This commit implements the long-syntax / conditional depends_on compose
mechanism which instructs the compose system to wait for a certain
condition before starting a container.

Currently available conditions are:

- service_started: same behavior as before this commit; the depending
  container will start as soon as the depended-on container has started
- service_healthy: if the depended-on container has a healthcheck, wait
  until said container is marked as healthy before starting the
  depending container
- service_completed_successfully: wait until the depended-on container
  has exited with an exit code of 0, after which the depending container
  can be started

This mechanism is part of the v3 compose spec [1] and is useful for
controlling container startup based on other containers that can take a
certain amount of time to start, or on containers that perform
complicated setup and must exit before other containers start.

[1] https://red.ht/conditional-depends

Signed-off-by: Adrian Torres <[email protected]>
@Elkasitu Elkasitu force-pushed the long-syntax-depends branch from c2941da to fcbf15e on March 21, 2022 19:07
@muayyad-alsadi
Collaborator

I said:

I'll post my personal opinion very soon

so let's go

first of all thank you for your contribution
I really really appreciate your time and effort.
I'll keep this open until we find the best way to implement it.

for context and vision please refer to our discussion here

Regarding how docker-compose implements it and how we should follow the compose spec:

since docker-compose does not have k8s-like probes (started vs. ready vs. healthy):
the main difference between ready and healthy is that, while both follow started, a healthy container might later become unhealthy.

So regarding the start condition, it should be based on readiness, not healthiness (because we don't stop the dependent container when the dependency later becomes unhealthy).

Take a db and a web service: we start the web after the db is ready.
If there is later a moment when the db is under heavy load, causing the health check to report it as unhealthy for some time,
we don't pull the web down as a result of that;
it's just a start condition.
Maybe we ask podman to support --requires=<CONTAINER>:<STATUS>,
where status is created, started, ready, or healthy,
or something like this.
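
Purely as a hypothetical sketch of that proposal (podman does not support a status suffix on --requires as of this writing; images are placeholders):

# hypothetical syntax, not supported by podman today
podman create --pod mypod --name mypod_db docker.io/library/postgres:14
podman create --pod mypod --requires mypod_db:healthy --name mypod_web docker.io/library/nginx:alpine
podman pod start mypod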

Regarding service_completed_successfully, it's completely a joke:
in the compose spec there is nothing similar to init-containers,
and there are no cron-job-like constructs as in k8s,
and there is nothing wrong with that (for single-host orchestration),
because this can be done using compose profiles, cron jobs, podman exec, systemd timers, etc.
But the problem is: why would I run something after something that completes successfully, via compose,
other than in swarm mode / multi-host?
Why would I need such a thing, given that there are no init-containers nor jobs in the compose spec?
In my personal opinion it's a half-baked feature.

If you have a real-world use case for service_completed_successfully, please let me know.

@Elkasitu
Author

maybe we ask podman to support --requires=<CONTAINER>:<STATUS>

Sure, I don't really care where it's implemented as long as it works. It's currently kind of a blocker for our project since we need the feature, so we've been debating migrating to docker-compose, creating "ready scripts", or contributing the feature to podman-compose. I really wanted to do the latter because I figured other people also want the feature, but I didn't really have time to contribute during work hours, so I decided to implement it over the weekend. Since it's already implemented, we might just use my fork until a solution exists either in podman-compose or in podman.

The reasons I implemented it here are mostly:

  • That's the way it's done in docker-compose and it seemed easy enough to implement
  • podman-compose as a project probably has less overhead than podman, so we could get the feature shipped to RHEL and Fedora faster, and worst-case we can just download the script and run it locally

so regarding the start condition, it should be based on readiness, not healthiness (because we don't stop the dependent container when the dependency later becomes unhealthy)

This is a valid opinion, but I'm not really looking to discuss semantics or challenge the compose spec status quo; this is just a feature that I need, so I went ahead and implemented it as defined by the spec.

If you have a real-world use case for service_completed_successfully, please let me know.

Yes, in fact in one of our branches, in which we test docker-compose, we use a container exactly like a k8s init-container: it loads .sql files into our database (extensions, row-level security, etc.), and once the container exits with status code 0 we can start up our web service. There are probably many other use cases, but again, as I mentioned earlier, I'm not here to challenge the compose spec status quo; even if I didn't use it I would have implemented it anyway.
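
For concreteness, a sketch of that kind of setup (service names, images, and paths are illustrative, not the actual project files):

services:
  db:
    image: docker.io/library/postgres:14
    environment:
      POSTGRES_PASSWORD: example
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 5s
      retries: 10
  db-init:
    image: docker.io/library/postgres:14
    # load extensions, row-level security, etc., then exit 0
    command: psql -h db -U postgres -f /init/setup.sql
    environment:
      PGPASSWORD: example
    volumes:
      - ./init:/init:ro
    depends_on:
      db:
        condition: service_healthy
  web:
    image: localhost/my-web  # hypothetical application image
    depends_on:
      db-init:
        condition: service_completed_successfully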

@muayyad-alsadi
Collaborator

maybe we ask podman to support --requires=<CONTAINER>:<STATUS>

Sure, I don't really care where it's implemented as long as it works. It's currently kind of a blocker for our project since we need the feature.

We will carry your patch until podman has a solution, and then we will either detect the version or branch and state a dependency (let's say podman-compose 1.2 requires podman 4.5, which has the fix; if you don't have it, then use podman-compose < 1.2).

I see you are interested in podman-compose, so I'm trying to onboard you with the reasoning behind these choices.

Long, long ago, rootless podman did not have inter-container communication. In podman-compose we created a pod or a container, then shared the network namespace between all the containers so that they were able to communicate using 127.0.0.1, and we used --add-host to simulate DNS. Later podman added network support, so we no longer need this workaround; we keep it in the 0.1.x branch for those who have a legacy version.

Similarly, we will carry your patch until it can be done with podman.

Keep in mind the final objective is that podman-compose up -d should set up (create ...) things and then end with exec podman pod start mypod.
If there is a case that can't be handled by exec podman pod start mypod, it should be detected and handled as an exception.

Currently, podman-compose systemd generates a unit file that calls podman pod start mypod.
Please take your time checking how that systemd unit file works:

#443

because I'll recommend people use systemd, cron jobs, etc.
I don't tell people to open screen or tmux and type podman-compose up there;
I tell people to register it as a service: systemctl --user enable --now podman-compose@myproj
I would love to cover any use case (like yours).

Your approach breaks this; it's more of a "just run it in screen" approach.

I'll get back to read all of your points maybe this weekend.

@Elkasitu
Author

I'll get back to read all of your points maybe this weekend.

Hey, no worries, take your time. I'm going to leave some comments here, but don't feel pressured into rushing your answers.

We will carry your patch until podman has a solution, and then we will either detect the version or branch and state a dependency (let's say podman-compose 1.2 requires podman 4.5, which has the fix; if you don't have it, then use podman-compose < 1.2).

I'm OK with that. I think it's important that podman-compose provides this option, because using the latest version and integrating it into a project is much easier than with podman: we can just install it via pip, or even clone it and add it to our path. podman, however, likely has way more overhead, and the distros that we use ship very old versions of it (Fedora 35 uses 3.4.4 and the latest RHEL 3.4.2).

I see you are interested in podman-compose that's why I'm trying to onboard you with the reasoning behind those choices.

Definitely, and hopefully my team and I can contribute more in the future :) I hope you didn't take my comment the wrong way; when I said I didn't care where it's implemented, I was simply stating that for me it's fine either way.

Your approach breaks this; it's more of a "just run it in screen" approach.

Did you mean it's not compatible? Because if so, I agree: if systemd units simply call podman commands, then it will not and cannot work the same way until this is implemented in podman.

@muayyad-alsadi
Collaborator

Yes, here is the systemd unit:

$ cat ~/.config/containers/compose/projects/simple.env
COMPOSE_PROJECT_DIR=/home/alsadi/proj/podman-compose/tests/simple
COMPOSE_FILE=docker-compose.yaml
COMPOSE_PROJECT_NAME=simple
COMPOSE_PATH_SEPARATOR=:

$ systemctl --user cat podman-compose@simple
# /usr/lib/systemd/user/podman-compose@.service

[Unit]
Description=%i rootless pod (podman-compose)

[Service]
Type=simple
EnvironmentFile=%h/.config/containers/compose/projects/%i.env
ExecStartPre=-/home/alsadi/proj/podman-compose/podman_compose.py up --no-start
ExecStartPre=/usr/bin/podman pod start pod_%i
ExecStart=/home/alsadi/proj/podman-compose/podman_compose.py wait
ExecStop=/usr/bin/podman pod stop pod_%i

[Install]
WantedBy=default.target

As you can see, we just assert container creation with podman-compose up --no-start,
then podman pod start mypod.

It should work since we create the containers with --requires.
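
As a sketch, for a short-syntax depends_on of web on db, the created containers could then carry the dependency like this (project, container, and image names are assumptions):

podman create --pod pod_simple --name simple_db_1 docker.io/library/postgres:14
podman create --pod pod_simple --requires simple_db_1 --name simple_web_1 docker.io/library/nginx:alpine
podman pod start pod_simple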

Co-authored-by: Dawid Dziurla <[email protected]>
robinbobbitt pushed a commit to ansible/ansible-ai-connect-service that referenced this pull request Feb 3, 2023
* Portable database health check

to ensure that the Django migrations do not run until the database is
fully up.

The previous solution, to use the long form of `depends_on:` in our
compose.yaml file, isn't supported by podman-compose:
containers/podman-compose#453

* Switch to using a bash heredoc

to more readably construct the Python script to be run.
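
The shape of that workaround, as a rough sketch (host, port, and retry policy here are assumptions, not the actual script from that commit):

# wait until the database accepts TCP connections before running migrations
python3 - <<'EOF'
import socket, sys, time

# poll for up to ~30 seconds; exit 0 as soon as the port is reachable
for _ in range(30):
    try:
        socket.create_connection(("db", 5432), timeout=1).close()
        sys.exit(0)
    except OSError:
        time.sleep(1)
sys.exit(1)
EOF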
@p12tic
Collaborator

p12tic commented Mar 8, 2024

@Elkasitu Just FYI, I would be open to this PR landing. If it improves compatibility with docker-compose, then it should land regardless of whether there are better ways to achieve something.

It's been two years since this PR was opened, so I understand you likely won't have interest in it anymore. Maybe someone else can pick it up in that case.

@Elkasitu
Author

@p12tic I can take a look; hopefully the conflicts won't be too much of a hassle.
