
Pull through API can fail with index error #8870

Closed
marusak opened this issue Jan 4, 2021 · 2 comments · Fixed by #8876
Assignees: vrothberg
Labels: kind/bug (Categorizes issue or PR as related to a bug), locked - please file new issue/PR (Assist humans wanting to comment on an old issue or PR with locked comments)

Comments

@marusak
Contributor

marusak commented Jan 4, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

When trying to pull an image with an invalid tag through the API, the request often fails with an index out of range error.

Steps to reproduce the issue:

$ sudo curl -X POST -v --unix-socket /run/podman/podman.sock http://d/v1.24/libpod/images/pull?reference=fedora:foobar
*   Trying /run/podman/podman.sock:0...
* Connected to d (/run/podman/podman.sock) port 80 (#0)
> POST /v1.24/libpod/images/pull?reference=fedora:foobar HTTP/1.1
> Host: d
> User-Agent: curl/7.71.1
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Api-Version: 1.40
< Libpod-Api-Version: 2.0.0
< Server: Libpod/2.0.0 (linux)
< Date: Mon, 04 Jan 2021 12:01:17 GMT
< Transfer-Encoding: chunked
< 
{"stream":"Resolved short name \"fedora\" to a recorded short-name alias (origin: /etc/containers/registries.conf.d/shortnames.conf)\n"}
{"stream":"Trying to pull registry.fedoraproject.org/fedora:foobar...\n"}
{"stream":"  manifest unknown: manifest unknown\n"}
{}
{"cause":"runtime error: index out of range [-1]","message":"runtime error: index out of range [-1]","response":500}
* Connection #0 to host d left intact

Sometimes, however, it manages to end up with the correct error message (everything is the same except the last line):

...
{"error":"Error initializing source docker://registry.fedoraproject.org/fedora:foobar: Error reading manifest foobar in registry.fedoraproject.org/fedora: manifest unknown: manifest unknown\n"}
* Connection #0 to host d left intact

I can reliably reproduce this. Sometimes 5 attempts in a row all fail with the index error, sometimes I see the expected error message a few times in a row, but after a few retries I hit it sooner or later.
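
For reference, here is a minimal Go sketch of the same reproduction over the unix socket. This is an illustration only, not code taken from this report; the socket path and endpoint simply mirror the curl command above, and the loop prints the streamed JSON objects as they arrive:

package main

import (
    "context"
    "encoding/json"
    "fmt"
    "io"
    "net"
    "net/http"
)

func main() {
    // Route all requests through the Podman unix socket (same socket as the
    // curl reproduction above).
    client := &http.Client{
        Transport: &http.Transport{
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return (&net.Dialer{}).DialContext(ctx, "unix", "/run/podman/podman.sock")
            },
        },
    }

    resp, err := client.Post(
        "http://d/v1.24/libpod/images/pull?reference=fedora:foobar",
        "application/json", nil)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    // The endpoint streams chunked JSON objects; decode and print them one by
    // one, matching the {"stream": ...} / {"error": ...} lines shown above.
    dec := json.NewDecoder(resp.Body)
    for {
        var msg map[string]interface{}
        if err := dec.Decode(&msg); err == io.EOF {
            break
        } else if err != nil {
            panic(err)
        }
        fmt.Println(msg)
    }
}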

Describe the results you expected:
Always end up with status code 200 and the expected error message.

Output of podman version:

Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.15.5
Built:        Tue Dec  8 15:37:50 2020
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.18.0
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.21-3.fc33.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.21, commit: 0f53fb68333bdead5fe4dc5175703e22cf9882ab'
  cpus: 8
  distribution:
    distribution: fedora
    version: "33"
  eventLogger: journald
  hostname: foo
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.9.13-200.fc33.x86_64
  linkmode: dynamic
  memFree: 9189642240
  memTotal: 33439072256
  ociRuntime:
    name: crun
    package: crun-0.16-3.fc33.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.16
      commit: eb0145e5ad4d8207e84a327248af76663d4e50dd
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.fc33.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 21084758016
  swapTotal: 21084758016
  uptime: 39h 34m 43.24s (Approximately 1.62 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/mmarusak/.config/containers/storage.conf
  containerStore:
    number: 4
    paused: 0
    running: 0
    stopped: 4
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.3.0-1.fc33.x86_64
      Version: |-
        fusermount3 version: 3.9.3
        fuse-overlayfs: version 1.3
        FUSE library version 3.9.3
        using FUSE kernel interface version 7.31
  graphRoot: /home/mmarusak/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 26
  runRoot: /run/user/1000/containers
  volumePath: /home/mmarusak/.local/share/containers/storage/volumes
version:
  APIVersion: 2.1.0
  Built: 1607438270
  BuiltTime: Tue Dec  8 15:37:50 2020
  GitCommit: ""
  GoVersion: go1.15.5
  OsArch: linux/amd64
  Version: 2.2.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman-2.2.1-1.fc33.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

@vrothberg
Member

Thanks for the report! I'll take a look.

@vrothberg vrothberg self-assigned this Jan 4, 2021
vrothberg added a commit to vrothberg/libpod that referenced this issue Jan 4, 2021
Fix a race condition in the pull endpoint caused by buffered channels.
Using buffered channels can lead to the context's cancel function being
executed before the items are read from the channel.

Fixes: containers#8870
Signed-off-by: Valentin Rothberg <[email protected]>
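
For illustration, here is a minimal sketch (hypothetical code, not the actual Podman handler) of the kind of race the commit message describes: with a buffered channel the producer's sends return immediately, so cancel() can fire while reports are still queued, and the consumer's select may observe the cancelled context before draining them, which would explain the intermittent failures above. An unbuffered channel makes each send block until it is received, removing the race.

package main

import (
    "context"
    "fmt"
    "time"
)

func main() {
    // Buffered: sends return immediately, so the goroutine below can finish
    // (and run cancel) long before the consumer has read anything.
    reports := make(chan string, 10)
    ctx, cancel := context.WithCancel(context.Background())

    go func() {
        defer cancel() // may execute before the items are read from the channel
        reports <- "Trying to pull registry.fedoraproject.org/fedora:foobar..."
        reports <- "manifest unknown: manifest unknown"
        close(reports)
    }()

    for {
        select {
        case <-ctx.Done():
            // With a buffered channel both cases can be ready at once, so this
            // branch can win while reports still holds unread items.
            fmt.Println("context cancelled; remaining reports lost")
            return
        case r, ok := <-reports:
            if !ok {
                return
            }
            fmt.Println("report:", r)
            time.Sleep(10 * time.Millisecond) // simulate a slow consumer
        }
    }
}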
@vrothberg
Member

Opened #8876

github-actions bot added the locked - please file new issue/PR label Sep 22, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023