Podman inspect fails with Error: readObjectStart #20156
As an additional note, skopeo inspect seems to cope with both signed and unsigned v1 schema manifests.
Interested in opening a PR? @ipanova
Thanks @mtrmac, it seems that we need an older docker/podman to have the registry generate a schema1 manifest. Do you know which docker/podman version to choose?
I’d use (I also suspect the c/common code can be run and tested without a registry at all, maybe by writing that manifest to c/storage, or if the code is well-isolated, by providing a
I tried skopeo copy and was unable to generate a schema1 manifest without a signature.
When using curl to get the manifest, both return a v1 schema with a signature. I'll check out the c/common code then.
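For reference, a registry can be asked for a specific manifest flavor via the Accept header; per the Docker Registry HTTP API, `application/vnd.docker.distribution.manifest.v1+json` is the unsigned schema1 media type and `application/vnd.docker.distribution.manifest.v1+prettyjws` is the signed one (many registries return the signed form regardless). A minimal sketch of building such a request; the registry and repo names are placeholders taken from the log above:

```go
package main

import (
	"fmt"
	"net/http"
)

// buildManifestRequest builds a GET request for a manifest, asking for the
// unsigned schema1 media type. Registry/repo/tag are illustrative values.
func buildManifestRequest(registry, repo, tag string) (*http.Request, error) {
	url := fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, repo, tag)
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	// Unsigned schema1; the signed variant is ...manifest.v1+prettyjws.
	req.Header.Set("Accept", "application/vnd.docker.distribution.manifest.v1+json")
	return req, nil
}

func main() {
	req, err := buildManifestRequest("puffy.example.com", "quay-busybox", "latest")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.URL.String())
	fmt.Println(req.Header.Get("Accept"))
}
```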
A friendly reminder that this issue had no activity for 30 days.
containers/common#1748 will fix this. |
Issue Description
If a registry serves an unsigned v1 schema manifest, podman inspect fails with:
Error: readObjectStart: expect { or n, but found , error found in #0 byte of ...||..., bigger context ...||...
Steps to reproduce the issue
Describe the results you received
$ podman inspect puffy.example.com/quay-busybox --log-level=debug
INFO[0000] podman filtering at log level debug
DEBU[0000] Called inspect.PersistentPreRunE(podman inspect puffy.example.com/quay-busybox --log-level=debug)
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/vagrant/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] systemd-logind: Unknown object '/'.
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/vagrant/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/vagrant/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/vagrant/.local/share/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=btrfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Setting parallel job count to 13
DEBU[0000] Looking up image "puffy.example.com/quay-busybox" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Trying "puffy.example.com/quay-busybox:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/vagrant/.local/share/containers/storage+/run/user/1000/containers]@e3121c769e3948dd4a7e1764f4841d044efcfda47804a5384597b7b117054c4c"
DEBU[0000] Found image "puffy.example.com/quay-busybox" as "puffy.example.com/quay-busybox:latest" in local containers storage
DEBU[0000] Found image "puffy.example.com/quay-busybox" as "puffy.example.com/quay-busybox:latest" in local containers storage ([overlay@/home/vagrant/.local/share/containers/storage+/run/user/1000/containers]@e3121c769e3948dd4a7e1764f4841d044efcfda47804a5384597b7b117054c4c)
DEBU[0000] Inspecting image e3121c769e3948dd4a7e1764f4841d044efcfda47804a5384597b7b117054c4c
Error: readObjectStart: expect { or n, but found , error found in #0 byte of ...||..., bigger context ...||...
DEBU[0000] Shutting down engines
Describe the results you expected
podman inspect should succeed
podman info output
Podman in a container
No
Privileged Or Rootless
None
Upstream Latest Release
No
Additional environment details
No response
Additional information
I have set up a registry that serves 2 repos. The first repo serves an unsigned v1 schema manifest; the second serves a signed v2 schema manifest.
podman pull works on both; podman inspect fails on the unsigned v1 schema manifest.
Suspected offending line is containers/common@c53283f#diff-fc461513b322becd13becb226b44b4164a4913c732302e079863d85571686b1eL111
After talking to @mtrmac, here is the current thinking:
(Guess being that https://github.com/containers/common/blob/e18cda8d7750746031de3d3d9305df059ac5f0ae/libimage/inspect.go#L186 tries to unmarshal a config object which does not exist in schema1, i.e. an empty byte array. That would also be consistent with the "unexpected end of JSON input" text.)
There are two bugs:
Now, one nit in the above (or, alternatively, a damning note) is that the code has been in there for 2 years. So it’s both that we didn’t notice for 2 years, and that this is manifesting on an upgrade from a pretty old Podman.
Yes, after downgrading to Podman 3, the inspect command works.
Here is the BZ that impacts customers: https://bugzilla.redhat.com/show_bug.cgi?id=2240252. The current workaround is to edit the locally stored manifest.json and add to it:
"signatures": []