
User running podman segfaults #1189

Closed
afbjorklund opened this issue Jul 30, 2018 · 22 comments
Labels
locked - please file new issue/PR (label description: assists humans wanting to comment on an old issue or PR with locked comments)

Comments

@afbjorklund
Contributor

afbjorklund commented Jul 30, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

Description

Running podman as a non-root user causes a segfault.

Steps to reproduce the issue:

  1. podman version

Describe the results you received:

$ podman version
ERRO[0000] open /etc/subuid: no such file or directory  
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0x3926000 pc=0xf61a89]

Describe the results you expected:

$ sudo podman version
Version:       0.7.4
Go Version:    go1.10.2
OS/Arch:       linux/amd64

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

(paste your output here)

Output of podman info:

(paste your output here)

Additional environment details (AWS, VirtualBox, physical, etc.):

VirtualBox

@mheon
Member

mheon commented Jul 30, 2018

@giuseppe PTAL

@mheon mheon added the bug label Jul 30, 2018
@giuseppe
Member

@afbjorklund is this the podman distributed by Fedora? I cannot reproduce with the latest version available there.

Could you try https://koji.fedoraproject.org/koji/buildinfo?buildID=1131468 ?

@afbjorklund
Contributor Author

Nope, I built it from source (using buildroot for minikube)

deploy/iso/minikube-iso/package/podman/podman.mk

Had to do some vendor hack for "varlink" - whatever that is...

Seems like it wasn't getting installed with install.tools?

@afbjorklund
Contributor Author

@giuseppe: maybe it requires /etc/subuid to be present? (rootless mode)

But maybe not for the "version" command, if that happens to be the case...

@afbjorklund
Contributor Author

Yes, that seems to have been it:

$ echo $USER:10000:65536 | sudo tee /etc/subuid
docker:10000:65536
$ echo $USER:10000:65536 | sudo tee /etc/subgid
docker:10000:65536
$ podman version
Version:       0.7.4
Go Version:    go1.10.2
OS/Arch:       linux/amd64

Same as documented in #1185

@mheon
Member

mheon commented Jul 30, 2018

@afbjorklund re: varlink - If you don't want it in your build, edit the Makefile and remove varlink from BUILDTAGS (or manually set the environment variable)

@afbjorklund
Contributor Author

@mheon: I have no idea, I was just updating podman (trying to get away from kpod eventually)

kubernetes/minikube#3026

Happy to build with the varlink feature, as long as it works... Maybe I could go get it instead?

@mheon
Member

mheon commented Jul 30, 2018

@afbjorklund There's a dependency package (libvarlink, I think?) as well. Generally speaking, if you don't think you need a remote API, it's safe to build without varlink.

@giuseppe
Member

@afbjorklund it is required for rootless containers, but if it is not present it should definitely not crash.

I've just built the same version you are using, but it still works here; I just get the error message and podman exits.

On what system are you trying to run it? I've only tried it on Fedora and Ubuntu.

@afbjorklund
Contributor Author

Users seem to be happy about hijacking the old Docker daemon (of minikube); I'm not sure if that should be allowed to continue, or if they should use their own container infrastructure instead of the (mini) cloud's?

https://kubernetes.io/docs/setup/minikube/#reusing-the-docker-daemon

Currently I'm just ssh'ing into the machine, happy that it can show the images now (cache got turned off...), even if it seems like listing containers (i.e. ps) got broken out-of-the-box again. Probably just a socket move.

Hacking on: kubernetes/minikube#2757 (running Kubernetes with CRI-O)

@afbjorklund
Contributor Author

@giuseppe: it is running the "minikube.iso", which is a buildroot 2018.05 environment

@giuseppe
Member

I'm still not able to reproduce the segfault when the /etc/subuid file is missing. What changes did you make to varlink? If you could share the binary you built, that could help as well.

@afbjorklund
Contributor Author

@giuseppe: I used the vendored version of varlink, to avoid the out-of-the-box build error when just running "make install.tools podman", which the buildroot make naively tried to do. Did not modify anything.

Will see if I can upload the minikube.iso and the podman binary somewhere for easier access... There was a stack trace as well, after the error printout. But I'm not sure if this segfault is "important" or not?

@afbjorklund
Contributor Author

Here is the ISO: https://github.com/afbjorklund/minikube/releases/download/15e56aa/minikube.iso
Built from: https://github.com/afbjorklund/minikube/tree/15e56aa8e13d97a1bb6a0813281f9bc886816f4d

As requested, direct link to the binary itself: podman

Example output
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ podman version
ERRO[0000] open /etc/subuid: no such file or directory
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0x3b56000 pc=0xf61a89]

runtime stack:
runtime.throw(0x122f0e5, 0x2a)
/home/anders/KUBE/minikube/out/buildroot/output/host/lib/go/src/runtime/panic.go:616 +0x81
runtime.sigpanic()
/home/anders/KUBE/minikube/out/buildroot/output/host/lib/go/src/runtime/signal_unix.go:372 +0x28e

goroutine 1 [syscall, locked to thread]:
runtime.cgocall(0xf618e0, 0xc420281470, 0x4971f1)
/home/anders/KUBE/minikube/out/buildroot/output/host/lib/go/src/runtime/cgocall.go:128 +0x64 fp=0xc420281430 sp=0xc4202813f8 pc=0x4090c4
github.com/projectatomic/libpod/pkg/rootless._Cfunc_reexec_in_user_namespace(0xc400000005, 0x0)
_cgo_gotypes.go:43 +0x4d fp=0xc420281470 sp=0xc420281430 pc=0xd36d5d
github.com/projectatomic/libpod/pkg/rootless.BecomeRootInUserNS(0x101c300, 0x0, 0x0, 0x0)
/home/anders/KUBE/minikube/out/buildroot/output/build/podman-v0.7.4/_output/src/github.com/projectatomic/libpod/pkg/rootless/rootless_linux.go:92 +0x1ff fp=0xc420281680 sp=0xc420281470 pc=0xd375bf
main.main()
/home/anders/KUBE/minikube/out/buildroot/output/build/podman-v0.7.4/_output/src/github.com/projectatomic/libpod/cmd/podman/main.go:32 +0x71 fp=0xc420281f88 sp=0xc420281680 pc=0xf33e51
runtime.main()
/home/anders/KUBE/minikube/out/buildroot/output/host/lib/go/src/runtime/proc.go:198 +0x212 fp=0xc420281fe0 sp=0xc420281f88 pc=0x4331b2
runtime.goexit()
/home/anders/KUBE/minikube/out/buildroot/output/host/lib/go/src/runtime/asm_amd64.s:2361 +0x1 fp=0xc420281fe8 sp=0xc420281fe0 pc=0x45ee81

goroutine 5 [chan receive]:
github.com/projectatomic/libpod/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x1d102a0)
/home/anders/KUBE/minikube/out/buildroot/output/build/podman-v0.7.4/_output/src/github.com/projectatomic/libpod/vendor/github.com/golang/glog/glog.go:882 +0x8b
created by github.com/projectatomic/libpod/vendor/github.com/golang/glog.init.0
/home/anders/KUBE/minikube/out/buildroot/output/build/podman-v0.7.4/_output/src/github.com/projectatomic/libpod/vendor/github.com/golang/glog/glog.go:410 +0x203

goroutine 6 [syscall]:
os/signal.signal_recv(0x0)
/home/anders/KUBE/minikube/out/buildroot/output/host/lib/go/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
/home/anders/KUBE/minikube/out/buildroot/output/host/lib/go/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.0
/home/anders/KUBE/minikube/out/buildroot/output/host/lib/go/src/os/signal/signal_unix.go:28 +0x41

@afbjorklund
Contributor Author

afbjorklund commented Aug 2, 2018

Using a previous version works fine. So it's probably something with "varlink"?

$ podman version
Version:       0.4.1
Go Version:    go1.10.2
Git Commit:    "f3d114a1effd8a6ef773bee14fe49ea6d8d7c350"
Built:         Thu Aug  2 11:58:17 2018
OS/Arch:       linux/amd64

^ The git commit above refers to buildroot, not to libpod (which is from tarball)

@mheon
Member

mheon commented Aug 2, 2018 via email

@giuseppe
Member

giuseppe commented Aug 2, 2018

I am still not able to reproduce locally but inspecting the code and your call stack, I think the error can be fixed by: #1201.

Could you please verify if it solves the problem for you?

@afbjorklund
Contributor Author

Didn't seem to help, unfortunately.

>>> podman v0.7.4 Patching

Applying 1201.patch using patch: 
patching file pkg/rootless/rootless_linux.c

Think I need to read up on my CGo :-)
Or opt out of this "rootless" thing perhaps.

runtime.cgocall(0xf618e0, 0xc42027d470, 0x4971f1)
	/home/anders/KUBE/minikube/out/buildroot/output/host/lib/go/src/runtime/cgocall.go:128 +0x64 fp=0xc42027d430 sp=0xc42027d3f8 pc=0x4090c4
github.com/projectatomic/libpod/pkg/rootless._Cfunc_reexec_in_user_namespace(0xc400000005, 0x0)
	_cgo_gotypes.go:43 +0x4d fp=0xc42027d470 sp=0xc42027d430 pc=0xd36d5d
github.com/projectatomic/libpod/pkg/rootless.BecomeRootInUserNS(0x101c300, 0x0, 0x0, 0x0)
	/home/anders/KUBE/minikube/out/buildroot/output/build/podman-v0.7.4/_output/src/github.com/projectatomic/libpod/pkg/rootless/rootless_linux.go:92 +0x1ff fp=0xc42027d680 sp=0xc42027d470 pc=0xd375bf
main.main()
	/home/anders/KUBE/minikube/out/buildroot/output/build/podman-v0.7.4/_output/src/github.com/projectatomic/libpod/cmd/podman/main.go:32 +0x71 fp=0xc42027df88 sp=0xc42027d680 pc=0xf33e51

rh-atomic-bot pushed a commit that referenced this issue Aug 2, 2018
Closes: #1189

Signed-off-by: Giuseppe Scrivano <[email protected]>

Closes: #1201
Approved by: rhatdan
@afbjorklund
Contributor Author

No luck with gdb, so I added some old-school printf debugging instead...

It seems that when the program crashes, argc is 0 (cmdline gone?)

$ ./podman version
ERRO[0000] open /etc/subuid: no such file or directory  
argc: 2
used: 17
$ ./podman version
ERRO[0000] open /etc/subuid: no such file or directory  
argc: 0
used: 0
fatal error: unexpected signal during runtime execution

So that argv-reconstruction code needs to check for that possibility too...

  argv = malloc (sizeof (char *) * (argc + 1));
  if (argv == NULL)
    return NULL;
  argc = 0;

  argv[argc++] = buffer;
  for (i = 0; i < used - 1; i++)
    if (buffer[i] == '\0')
      argv[argc++] = buffer + i + 1;

  argv[argc] = NULL;

@afbjorklund
Contributor Author

Like so: 054d9bc

@afbjorklund
Contributor Author

Thanks, with those additions the segfault doesn't happen anymore.

$ podman version
ERRO[0000] open /etc/subuid: no such file or directory  
$ sudo podman version
Version:       0.7.4
Go Version:    go1.10.2
OS/Arch:       linux/amd64

@hsmiranda

hsmiranda commented Jul 8, 2019

> Yes, that seems to have been it:
>
> $ echo $USER:10000:65536 | sudo tee /etc/subuid
> docker:10000:65536
> $ echo $USER:10000:65536 | sudo tee /etc/subgid
> docker:10000:65536
> $ podman version
> Version:       0.7.4
> Go Version:    go1.10.2
> OS/Arch:       linux/amd64
>
> Same as documented in #1185

It worked for me, thanks.

@github-actions github-actions bot added the "locked - please file new issue/PR" label Sep 24, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 24, 2023