ERROR: Could not virtualize PID namespace: Invalid argument #204
Hi Faisal, Ohh, I would bet that there is a misalignment between userspace libraries that say CLONE_NEWPID is supported and a kernel that isn't actually supporting it. Can you send me the configure output so I can see what userspace supports (the configure script only looks at userspace, not the kernel)? As a workaround, you can set 'allow pid ns = no' in singularity.conf and it should work properly. Also, in the new code I've been working on for 2.2 (due to release in mid-to-late September), the PID namespace must be requested explicitly, as it is not used by default. Thanks and hope that helps!
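For reference, that workaround is a single configuration line; the path below is an assumption for a default install (with a custom --prefix such as /opt/singularity the file lives under <prefix>/etc/singularity/ instead):

```
# /etc/singularity/singularity.conf  (location depends on the install prefix)
# Keep Singularity from requesting a PID namespace on kernels that reject it.
allow pid ns = no
```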
Thanks for the quick response, Greg. Turning off the PID namespace in singularity.conf did the trick. I'm attaching the output of the configure script as requested. Thanks for working on this wonderful project. We're looking forward to exploring both Singularity and Shifter on our soon-to-be-installed Cray XC system! Regards,
Yes, this confirmed my suspicion... The userspace is claiming to support CLONE_NEWPID, but the kernel is not. I am pretty sure that a kernel update will fix it, but I also understand if you can't update it, so just keeping the PID namespace disabled for now will work fine. My pleasure on working on the project, and thank you for the compliment! Once you are done with your investigation of Singularity, and if you end up using it, can you please send me a note to let me know what you are running it on? I keep a document that I share with management when the need arises. lol Lastly, if you are investigating Singularity, I would also encourage you to look at the master branch, which will soon be released as 2.2. Lots of really cool work going on there and I'd love the help testing it! Thanks!
Thanks again. I will update you when we get to testing on our new system in a month or two. Regards,
My pleasure, and please let me know if you have any other questions or problems. Greg
Hello. I am testing Singularity 2.2 on our Cray-based HPC system (IU BigRed 2), and I am seeing this error message.
Is this issue fixed in 2.2? Thank you for working on this great project!
Two things are making me curious...
For MESSAGELEVEL, I updated my earlier comment after I posted it to include output from -v, but I forgot to update the actual command line... sorry! For SUID, yes, Singularity was installed as root, and /N/soft (an alias for /gpfs/hps/soft) is mounted with the following options.
By the way, this is on SLE11 (IU BigRed 2). Will mounting Singularity binaries on GPFS be a problem?
For distributing setuid binaries via a shared filesystem: it probably works at a technical level, but most admins will be reluctant to put that much trust in GPFS. In most setups, the filesystem is mounted with the nosuid option.
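For anyone who wants to verify how a given filesystem is mounted, the options column of /proc/mounts shows whether nosuid is in effect; the "gpfs" pattern below is taken from this thread and may need adjusting for your system:

```
# If "nosuid" appears in the mount options of the filesystem holding the
# Singularity install, the kernel will ignore its setuid binaries.
$ grep gpfs /proc/mounts
```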
@bbockelm Does this mean admins have to install Singularity locally on all nodes? GPFS is our current shared filesystem of choice, and we use it to distribute applications (modules) as well as users' home directories across our cluster. By the way, I can still launch a Singularity container on our cluster (binaries installed on GPFS). If SUID were disabled, I would think that I wouldn't be able to launch Singularity at all, let alone use the PID namespace. Does it use a different mechanism to chroot / launch the container?
Like I said, it's a bit up to the admin whether they trust the shared filesystem with setuid executables. At a technical level, it should work just fine.
Nope - it's all the same underlying mechanism. If the simple tests work fine, I would expect everything else to work. Brian
I've confirmed that SUID is working on our GPFS. Yet I am seeing the following error message on BigRed 2 (SLE11):
Earlier you said this
So my understanding is that we won't be able to use PID namespacing, because SLE11's kernel (3.0.101) doesn't support it (and there is nothing we can do about it)?
Hi @soichih, The PID namespace has technically been available since 2.6.24, so I am not sure why CLONE_NEWPID is failing, aside from the fact that it is a Cray. I don't have direct access to a Cray to test on, but I can ask around. Can you run this and let me know what the result is, please: $ grep CONFIG_PID_NS /boot/config-`uname -r` Thanks!
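On a kernel built with PID namespace support, that check should print a line like the one below; a missing config file or a "# CONFIG_PID_NS is not set" line would point at the kernel itself:

```
$ grep CONFIG_PID_NS /boot/config-`uname -r`
CONFIG_PID_NS=y
```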
@gmkurtzer - while ...
Yes, you are correct. I need to stop trusting Red Hat, as they have spoiled me by back-porting so much into the RHEL6 kernel (and by removing, from their man pages, the upstream kernel version in which a feature actually appeared).
OK. Thanks. It would be nice if there were a table (matrix) of the major OSes Singularity runs on, showing which features work and which don't. It looks like there are a lot of red flags for SLE11...
I am also having this problem on a Red Hat 6.4 system. Singularity is installed as root. When I run
I get the following error:
Is there an incompatibility between Singularity and Red Hat 6.4? The kernel is 2.6.32-358.el6.x86_64.
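One way to check whether the kernel itself is what rejects the PID namespace, independent of Singularity, is a small standalone probe. The sketch below is only an illustration (it is not Singularity's code, and the thread does not say whether Singularity uses clone() or unshare() here), so a passing probe does not guarantee Singularity will work; a failing one, however, points squarely at the kernel:

```c
/* pidns_probe.c - standalone sketch (not Singularity code) that asks the
 * kernel for a new PID namespace in two different ways.  Run as root:
 * CLONE_NEWPID needs CAP_SYS_ADMIN, so an unprivileged run fails with
 * EPERM regardless of kernel support.
 * Build: gcc -o pidns_probe pidns_probe.c
 */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/wait.h>

static char child_stack[64 * 1024];

static int child(void *arg)
{
    (void)arg;          /* nothing to do inside the new namespace */
    return 0;
}

int main(void)
{
    /* clone() has accepted CLONE_NEWPID since Linux 2.6.24, provided the
     * kernel was built with CONFIG_PID_NS=y.  EINVAL here is the same
     * "Invalid argument" that Singularity reports. */
    pid_t pid = clone(child, child_stack + sizeof(child_stack),
                      CLONE_NEWPID | SIGCHLD, NULL);
    if (pid < 0) {
        printf("clone(CLONE_NEWPID):   failed (%s)\n", strerror(errno));
    } else {
        waitpid(pid, NULL, 0);
        printf("clone(CLONE_NEWPID):   ok\n");
    }

    /* unshare() only gained CLONE_NEWPID in Linux 3.8, so this call can
     * fail with EINVAL even where the clone() path above succeeds. */
    if (unshare(CLONE_NEWPID) != 0)
        printf("unshare(CLONE_NEWPID): failed (%s)\n", strerror(errno));
    else
        printf("unshare(CLONE_NEWPID): ok\n");

    return 0;
}
```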
I anticipate @gmkurtzer will ask for this too - could you run it with --debug and post the output?
The output using --debug is:
Hello,
I've compiled Singularity 2.1.2 from source on two different hosts, both running Red Hat clones, though of different versions. The compilation on both hosts was uneventful, and in both cases I used --prefix=/opt/singularity with the configure script. On the host with the more recent kernel version (2.6.32-573.22.1.el6.x86_64), things seem to work...
$ singularity shell container.img
Singularity.container.img> $ cat /etc/debian_version
8.5
Singularity.container.img> $ exit
/bin/sh: 2: Cannot set tty process group (No such process)
$
...while on the host with the older kernel version (2.6.32-220.23.1.bl6.Bull.28.8.x86_64), I run into this error when invoking singularity...
$ singularity -v shell container.img
increasing verbosity level (2)
Exec'ing: /opt/singularity/libexec/singularity/cli/shell.exec container.img
VERBOSE: Set messagelevel to: 2
LOG : Command=shell, Container=container.img, CWD=/panfs/vol/f/fachaud74, Arg1=(null)
VERBOSE: Creating/Verifying session directory: /tmp/.singularity-session-7001.19.4214115159
VERBOSE: Calculating image offset
VERBOSE: Found valid loop device: /dev/loop0
VERBOSE: Using loop device: /dev/loop0
VERBOSE: Creating namespace process
ERROR : Could not virtualize PID namespace: Invalid argument
VERBOSE: Cleaning sessiondir: /tmp/.singularity-session-7001.19.4214115159
$
The older host is running the equivalent of Red Hat 6.2, while the newer one is running stock 6.4. Short of upgrading the OS on the older host, is there anything I can do to get Singularity to work?
Thank you.
Regards,
Faisal