Value for --health-start-period ignored #25584

Open
pohlt opened this issue Mar 14, 2025 · 1 comment
Labels: kind/bug

Comments

pohlt commented Mar 14, 2025

Issue Description

The value for --health-start-period given to podman run seems to be ignored.

Steps to reproduce the issue

I used the following Makefile to reproduce the issue:

.PHONY: run watch

run:
	podman run \
	--name busybox \
	--rm \
	--health-cmd="false" \
	--health-interval=10s \
	--health-start-period=5s \
	busybox:stable-uclibc \
	/bin/sleep 30

watch:
	podman events
  1. make run starts the container.
  2. make watch shows the healthcheck events (a filtered alternative is sketched below).
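
For a quicker look at just the health events, podman events also accepts filters. A minimal alternative to the watch target above, assuming the container name busybox from the run target:

# Show only health_status events for the busybox container.
podman events --filter container=busybox --filter event=health_status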

Describe the results you received

What I see is this:

2025-03-14 10:15:40.634147812 +0100 CET image pull e3baba8bcaa61ab3c082342db565b78fadbaddd497b3bbb77d81d0812f818793 busybox:stable-uclibc
2025-03-14 10:15:40.814148546 +0100 CET container health_status 724ab58135f3dd25b5047c566a5c4d3d1b9cdc7adc1e869cbd91ea8a095071cd (image=docker.io/library/busybox:stable-uclibc, name=busybox, health_status=starting, health_failing_streak=0, health_log=)
2025-03-14 10:15:51.661471875 +0100 CET container health_status 724ab58135f3dd25b5047c566a5c4d3d1b9cdc7adc1e869cbd91ea8a095071cd (image=docker.io/library/busybox:stable-uclibc, name=busybox, health_status=starting, health_failing_streak=1, health_log=)
2025-03-14 10:16:02.651278492 +0100 CET container health_status 724ab58135f3dd25b5047c566a5c4d3d1b9cdc7adc1e869cbd91ea8a095071cd (image=docker.io/library/busybox:stable-uclibc, name=busybox, health_status=starting, health_failing_streak=2, health_log=)
2025-03-14 10:16:10.712806687 +0100 CET container died 724ab58135f3dd25b5047c566a5c4d3d1b9cdc7adc1e869cbd91ea8a095071cd (image=docker.io/library/busybox:stable-uclibc, name=busybox)
2025-03-14 10:16:10.782786938 +0100 CET container remove 724ab58135f3dd25b5047c566a5c4d3d1b9cdc7adc1e869cbd91ea8a095071cd (image=docker.io/library/busybox:stable-uclibc, name=busybox)

This means that a healthcheck ran right after the container started (as it says on the tin, but still weird) and then every 10s until the container died.

Describe the results you expected

I expected that the first healthcheck would run after the period given by --health-start-period. After consulting the docs, I instead expected to see a second healthcheck (after the failing initial one) after the period given by --health-start-period.
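
For reference, the health state itself can also be polled directly instead of via events; a minimal sketch, assuming the container name busybox from the Makefile above:

# Print the current health status once a second until the container is gone.
# .State.Health.Status should read "starting", then "healthy" or "unhealthy".
while podman container exists busybox; do
    podman inspect --format '{{.State.Health.Status}}' busybox
    sleep 1
done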

podman info output

host:
  arch: amd64
  buildahVersion: 1.39.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.13-1.fc41.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.13, commit: '
  cpuUtilization:
    idlePercent: 97.37
    systemPercent: 0.7
    userPercent: 1.94
  cpus: 32
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: kde
    version: "41"
  eventLogger: journald
  freeLocks: 2047
  hostname: big
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.13.5-200.fc41.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 586493952
  memTotal: 33529864192
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.14.0-1.fc41.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.14.0
    package: netavark-1.14.0-1.fc41.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.14.0
  ociRuntime:
    name: crun
    package: crun-1.20-2.fc41.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.20
      commit: 9c9a76ac11994701dd666c4f0b869ceffb599a66
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20250217.ga1e48a0-2.fc41.x86_64
    version: ""
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.3.1-1.fc41.x86_64
    version: |-
      slirp4netns version 1.3.1
      commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
      libslirp: 4.8.0
      SLIRP_CONFIG_VERSION_MAX: 5
      libseccomp: 2.5.5
  swapFree: 2003771392
  swapTotal: 8589930496
  uptime: 50h 36m 46.00s (Approximately 2.08 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /home/tom/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/tom/.local/share/containers/storage
  graphRootAllocated: 312854183936
  graphRootUsed: 278225367040
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 510
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/tom/.local/share/containers/storage/volumes
version:
  APIVersion: 5.4.0
  BuildOrigin: Fedora Project
  Built: 1739232000
  BuiltTime: Tue Feb 11 01:00:00 2025
  GitCommit: ""
  GoVersion: go1.23.5
  Os: linux
  OsArch: linux/amd64
  Version: 5.4.0

Podman in a container

No

Privileged Or Rootless

None

Upstream Latest Release

No

Additional environment details

Additional information

Additional information, such as whether the issue happens only occasionally, or only with a particular architecture or setting

pohlt added the kind/bug label Mar 14, 2025

pohlt commented Mar 14, 2025

After reading the docs once more, I think I understand now: --health-start-period is not about running an additional healthcheck after the start period, but about the health state of the container ("starting", "healthy", "unhealthy"), right?
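
If that reading is correct, --health-start-period interacts with --health-retries rather than with the check schedule: checks still run every --health-interval from the start, but failures inside the start period should not flip the status to "unhealthy". A sketch of that documented behavior (hypothetical container name, same busybox image as above):

# With a 30s start period, the failing checks below should leave the
# status at "starting" instead of "unhealthy" after 3 retries.
podman run -d --rm --name busybox-start \
    --health-cmd="false" \
    --health-interval=5s \
    --health-retries=3 \
    --health-start-period=30s \
    busybox:stable-uclibc /bin/sleep 60

sleep 20
podman inspect --format '{{.State.Health.Status}}' busybox-start   # expect "starting"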
