Slurm multi nodes cluster installation failure #1179

Closed

karanveersingh5623 opened this issue Jun 10, 2022 · 26 comments

Comments
@karanveersingh5623

Hi Team,

I am using two nodes to deploy a Slurm cluster and it is failing; I am not able to get any useful verbose details. Earlier I tested with a single node and everything came up fine, and the validation playbook succeeded on the single-node Slurm GPU cluster.

Below is my two-node inventory file; attached are screenshots of sinfo and scontrol, plus run.log (verbose details of the installation playbook):

[all]
adas-ml     ansible_host=192.168.61.2
adas-ml-2   ansible_host=192.168.61.3

[slurm-master]
adas-ml

[slurm-nfs]
#login01

[slurm-node]
#adas-ml
adas-ml-2
#gpu01
#gpu02

# The following groups are used to break out individual cluster services onto
# different nodes. By default, we put all these functions on the cluster head
# node. To break these out into different nodes, replace the
# [group:children] section with [group], and list individual nodes.
[slurm-cache:children]
slurm-master

[slurm-nfs-client:children]
slurm-node

[slurm-metric:children]
slurm-master

[slurm-login:children]
slurm-master

# Single group for the whole cluster
[slurm-cluster:children]
slurm-master
slurm-node
slurm-cache
slurm-nfs
slurm-metric
slurm-login

######
# SSH connection configuration
######
[all:vars]

image
run.log

@karanveersingh5623
Author

karanveersingh5623 commented Jun 10, 2022

@ajdecon
The [slurm-node] group defined in the inventory is not picking up another server when it is added. What could be the reason?

@ajdecon
Collaborator

ajdecon commented Jun 14, 2022

In the run log, during the driver install, I'm seeing an error that looks like this:

<192.168.61.3> (1, b'\n{"msg": "failed to fetch key at https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-8 , error was: Request failed: <urlopen error [Errno -2] Name or service not known>", "failed": true, "invocation": {"module_args": {"key": "https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-8", "state": "present", "validate_certs": true, "fingerprint": null}}}\n', b'')
<192.168.61.3> Failed to connect to the host via ssh: 
fatal: [adas-ml-2]: FAILED! => changed=false 

So the run appears to have failed due to not being able to import the upstream EPEL GPG key for RHEL-8 compatible systems.

I'll try kicking off a test run for this on my end and see if I can duplicate the issue.

@karanveersingh5623
Author

@ajdecon, thanks, I will wait for an update.

@ajdecon
Collaborator

ajdecon commented Jun 15, 2022

@karanveersingh5623 : my test today on a RHEL-8 compatible system failed to reproduce the issue. Since the error was related to an external resource on the Fedora site, this hopefully means it has been fixed on their end. :) Can you retry?

@karanveersingh5623
Author

@ajdecon: I am retrying now, but I suspect it is an Ansible issue.
When I edit the config/inventory file and add the adas-ml-2 host to the [slurm-node] group, none of the Slurm installation playbooks run against it. Only some initial tasks run on both nodes; once the Slurm installation begins, only the master node adas-ml shows up in every task and my second node never appears. It looks like either the Ansible inventory is not being picked up or the Slurm installation playbook only runs against the [slurm-master] group. Please share your thoughts.
For more info, please refer to the inventory file attached above.

Also, very strange behavior: I removed the current Slurm master host from the inventory file and added only my second node (adas-ml-2), yet the playbook still runs tasks on the first node (adas-ml). I cannot understand this behaviour.
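For reference, the host/group membership that Ansible actually resolves can be inspected with the standard Ansible CLI (assuming the inventory lives at config/inventory, as in the DeepOps defaults):

# Show the full group tree Ansible builds from the inventory
ansible-inventory -i config/inventory --graph
# List only the hosts that the slurm-node plays would target
ansible slurm-node -i config/inventory --list-hosts

If adas-ml-2 does not appear under slurm-node there, the Slurm playbooks will never target it.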

@karanveersingh5623
Author

@ajdecon, it still failed. Why is my 2nd node, adas-ml-2, not included in any of the Slurm installation tasks? It only shows up in the initial tasks:

(env) (base) [root@adas-ml deepops]# ansible all -m raw -a "hostname"

PLAY [Ansible Ad-Hoc] ******************************************************************************************************************************************

TASK [raw] *****************************************************************************************************************************************************
changed: [adas-ml-2]
changed: [adas-ml]

PLAY RECAP *****************************************************************************************************************************************************
adas-ml                    : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
adas-ml-2                  : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

PLAY [slurm-node] **********************************************************************************************************************************************

TASK [rsyslog_client : ensure rsyslog is installed] ************************************************************************************************************
ok: [adas-ml]

TASK [rsyslog_client : configure syslog forwarding] ************************************************************************************************************
ok: [adas-ml]

PLAY [all] *****************************************************************************************************************************************************

TASK [singularity_wrapper : ensure prereq packages installed] **************************************************************************************************
skipping: [adas-ml]

TASK [singularity_wrapper : enable powertools] *****************************************************************************************************************
skipping: [adas-ml]

TASK [singularity_wrapper : rhel 8 - ensure CRB repository is enabled] *****************************************************************************************
skipping: [adas-ml]

TASK [singularity_wrapper : debian - ensure apt cache is up to date] *******************************************************************************************
skipping: [adas-ml]

TASK [singularity_wrapper : create a folder for go] ************************************************************************************************************
skipping: [adas-ml]

TASK [install golang explicitly] *******************************************************************************************************************************
skipping: [adas-ml]

TASK [install singularity] *************************************************************************************************************************************
skipping: [adas-ml]

PLAY [slurm-master[0]] *****************************************************************************************************************************************

TASK [ood-wrapper : gather os specific variables] **************************************************************************************************************
skipping: [adas-ml] => (item=/root/deepops/roles/ood-wrapper/vars/../vars/redhat.yml)

TASK [ood-wrapper : install dependencies] **********************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : remove any existing OOD apps installed by ansible] *****************************************************************************************
skipping: [adas-ml] => (item={'key': 'bc_osc_codeserver', 'value': {'repo': 'https://github.com/OSC/bc_osc_codeserver.git', 'dest': '/var/www/ood/apps/sys', 'version': 'gtc'}})

TASK [ood-wrapper : remove build dir] **************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : enable httpd listen port in SELinux] *******************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : set httpd to permissive in SELinux] ********************************************************************************************************
skipping: [adas-ml]

TASK [install Open OnDemand] ***********************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : create clusters dir] ***********************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure slurm cluster connector] *********************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure front page] **********************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : install package dependency for htpasswd module] ********************************************************************************************
skipping: [adas-ml] => (item=python-passlib)
skipping: [adas-ml] => (item=python3-passlib)

TASK [ood-wrapper : install package dependency for htpasswd module] ********************************************************************************************
skipping: [adas-ml] => (item=python-passlib)

TASK [ood-wrapper : create .htpasswd entries] ******************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : install required apache module] ************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : install required apache module] ************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : restart apache after module install] *******************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : create ssh key pair for default user] ******************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : get ssh pub key] ***************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : set ssh authorized key] ********************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : remove old desktop app config] *************************************************************************************************************
skipping: [adas-ml] => (item=form.yml)

TASK [ood-wrapper : create desktop app dir] ********************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure desktop] *************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure desktop app submit] **************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure desktop app form] ****************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : remove old config] *************************************************************************************************************************
skipping: [adas-ml] => (item=form.yml)
skipping: [adas-ml] => (item=manifest.yml)
skipping: [adas-ml] => (item=submit.yml.erb)
skipping: [adas-ml] => (item=template/script.sh.erb)

TASK [ood-wrapper : configure vs code app form] ****************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure vs code app manifest] ************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure vs code app submit] **************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure vs code app script] **************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : install dependencies] **********************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : install turbovnc] **************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : install turbovnc] **************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : install codeserver] ************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure PAM] *****************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure script] **************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure linuxhost-adapter] ***************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : create singularity image directory] ********************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : pull singularity image] ********************************************************************************************************************
skipping: [adas-ml]

PLAY [slurm-node] **********************************************************************************************************************************************

TASK [ood-wrapper : gather os specific variables] **************************************************************************************************************
skipping: [adas-ml] => (item=/root/deepops/roles/ood-wrapper/vars/../vars/redhat.yml)

TASK [ood-wrapper : install dependencies] **********************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : remove any existing OOD apps installed by ansible] *****************************************************************************************
skipping: [adas-ml] => (item={'key': 'bc_osc_codeserver', 'value': {'repo': 'https://github.com/OSC/bc_osc_codeserver.git', 'dest': '/var/www/ood/apps/sys', 'version': 'gtc'}})

TASK [ood-wrapper : remove build dir] **************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : enable httpd listen port in SELinux] *******************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : set httpd to permissive in SELinux] ********************************************************************************************************
skipping: [adas-ml]

TASK [install Open OnDemand] ***********************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : create clusters dir] ***********************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure slurm cluster connector] *********************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure front page] **********************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : install package dependency for htpasswd module] ********************************************************************************************
skipping: [adas-ml] => (item=python-passlib)
skipping: [adas-ml] => (item=python3-passlib)

TASK [ood-wrapper : install package dependency for htpasswd module] ********************************************************************************************
skipping: [adas-ml] => (item=python-passlib)

TASK [ood-wrapper : create .htpasswd entries] ******************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : install required apache module] ************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : install required apache module] ************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : restart apache after module install] *******************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : create ssh key pair for default user] ******************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : get ssh pub key] ***************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : set ssh authorized key] ********************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : remove old desktop app config] *************************************************************************************************************
skipping: [adas-ml] => (item=form.yml)

TASK [ood-wrapper : create desktop app dir] ********************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure desktop] *************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure desktop app submit] **************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure desktop app form] ****************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : remove old config] *************************************************************************************************************************
skipping: [adas-ml] => (item=form.yml)
skipping: [adas-ml] => (item=manifest.yml)
skipping: [adas-ml] => (item=submit.yml.erb)
skipping: [adas-ml] => (item=template/script.sh.erb)

TASK [ood-wrapper : configure vs code app form] ****************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure vs code app manifest] ************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure vs code app submit] **************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure vs code app script] **************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : install dependencies] **********************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : install turbovnc] **************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : install turbovnc] **************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : install codeserver] ************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure PAM] *****************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure script] **************************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : configure linuxhost-adapter] ***************************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : create singularity image directory] ********************************************************************************************************
skipping: [adas-ml]

TASK [ood-wrapper : pull singularity image] ********************************************************************************************************************
skipping: [adas-ml]

PLAY [slurm-node] **********************************************************************************************************************************************

TASK [install custom facts] ************************************************************************************************************************************
skipping: [adas-ml]

TASK [set GPU clocks permissions] ******************************************************************************************************************************
skipping: [adas-ml]

PLAY [slurm-master] ********************************************************************************************************************************************

TASK [set controller fact] *************************************************************************************************************************************
ok: [adas-ml]

PLAY [slurm-node] **********************************************************************************************************************************************

TASK [set compute fact] ****************************************************************************************************************************************
ok: [adas-ml]

PLAY [slurm-cluster] *******************************************************************************************************************************************

TASK [set enroot DGX config fact] ******************************************************************************************************************************
skipping: [adas-ml]

PLAY [slurm-node] **********************************************************************************************************************************************

TASK [nvidia.enroot : ubuntu tasks] ****************************************************************************************************************************
skipping: [adas-ml]

TASK [nvidia.enroot : rhel tasks] ******************************************************************************************************************************
included: /root/deepops/roles/galaxy/nvidia.enroot/tasks/redhat.yml for adas-ml

TASK [nvidia.enroot : install epel repository] *****************************************************************************************************************
ok: [adas-ml]

TASK [nvidia.enroot : install enroot dependency packages] ******************************************************************************************************
ok: [adas-ml] => (item=['bash-completion', 'jq', 'parallel', 'pigz', 'zstd'])

TASK [nvidia.enroot : set up yum repository (if defined)] ******************************************************************************************************
skipping: [adas-ml]

TASK [nvidia.enroot : install packages from repo (if defined)] *************************************************************************************************
skipping: [adas-ml] => (item=[])

TASK [nvidia.enroot : gather package facts for comparison] *****************************************************************************************************
ok: [adas-ml]

TASK [nvidia.enroot : remove existing enroot package if we are changing version] *******************************************************************************
changed: [adas-ml]

TASK [nvidia.enroot : enroot rpm packages] *********************************************************************************************************************
changed: [adas-ml] => (item=['https://github.com/NVIDIA/enroot/releases/download/v3.2.0/enroot-3.2.0-1.el7.x86_64.rpm', 'https://github.com/NVIDIA/enroot/releases/download/v3.2.0/enroot+caps-3.2.0-1.el7.x86_64.rpm'])

TASK [nvidia.enroot : check if kernel unpriv enabled] **********************************************************************************************************
changed: [adas-ml]

TASK [nvidia.enroot : check if user namespaces enabled] ********************************************************************************************************
changed: [adas-ml]

TASK [nvidia.enroot : install grubby if needed] ****************************************************************************************************************
skipping: [adas-ml]

TASK [nvidia.enroot : add kernel options to use enroot] ********************************************************************************************************
skipping: [adas-ml] => (item=namespace.unpriv_enable=1)
skipping: [adas-ml] => (item=user_namespace.enable=1)

TASK [nvidia.enroot : set max_user_namespaces] *****************************************************************************************************************
[WARNING]: The value 65536 (type int) in a string field was converted to '65536' (type string). If this does not look like what you expect, quote the entire
value to ensure it does not change.
ok: [adas-ml]

TASK [nvidia.enroot : set max_mnt_namespaces] ******************************************************************************************************************
ok: [adas-ml]

TASK [nvidia.enroot : install bash completions] ****************************************************************************************************************
ok: [adas-ml]

TASK [nvidia.enroot : configure enroot.conf] *******************************************************************************************************************
changed: [adas-ml]

TASK [nvidia.enroot : configure environ.d/] ********************************************************************************************************************

TASK [nvidia.enroot : configure hooks.d/] **********************************************************************************************************************

TASK [nvidia.enroot : configure mounts.d/] *********************************************************************************************************************

PLAY [slurm-cluster] *******************************************************************************************************************************************

TASK [pyxis : install dependencies (Ubuntu)] *******************************************************************************************************************
skipping: [adas-ml] => (item=[])

TASK [pyxis : install dependencies (EL)] ***********************************************************************************************************************
ok: [adas-ml] => (item=util-linux)

TASK [pyxis : install slurm-pmi hook] **************************************************************************************************************************
ok: [adas-ml]

TASK [pyxis : install slurm-pytorch hook] **********************************************************************************************************************
ok: [adas-ml]

TASK [pyxis tasks] *********************************************************************************************************************************************
included: /root/deepops/roles/pyxis/tasks/pyxis.yml for adas-ml

TASK [create pyxis source dir] *********************************************************************************************************************************
ok: [adas-ml]

TASK [copy pyxis source] ***************************************************************************************************************************************
ok: [adas-ml]

TASK [clean pyxis source directory] ****************************************************************************************************************************
changed: [adas-ml]

TASK [build pyxis] *********************************************************************************************************************************************
changed: [adas-ml]

TASK [pyxis : copy top-level plugstack file] *******************************************************************************************************************
ok: [adas-ml]

TASK [copy pyxis plugstack file] *******************************************************************************************************************************
ok: [adas-ml]

TASK [set pyxis shared-object permissions] *********************************************************************************************************************
ok: [adas-ml]

RUNNING HANDLER [pyxis : restart slurmd] ***********************************************************************************************************************
changed: [adas-ml]

RUNNING HANDLER [pyxis : restart slurmctld] ********************************************************************************************************************
changed: [adas-ml]

PLAY [all] *****************************************************************************************************************************************************

TASK [nvidia-peer-memory : Check for DGX packages] *************************************************************************************************************
ok: [adas-ml]

TASK [nvidia-peer-memory : Autoinstall DKMS modules] ***********************************************************************************************************
skipping: [adas-ml]

TASK [nvidia-peer-memory : Modprobe nv_peer_mem] ***************************************************************************************************************
skipping: [adas-ml]

TASK [nvidia-peer-memory : Start nv_peer_mem service] **********************************************************************************************************
skipping: [adas-ml]

PLAY RECAP *****************************************************************************************************************************************************
adas-ml                    : ok=584  changed=27   unreachable=0    failed=0    skipped=1073 rescued=0    ignored=1
adas-ml-2                  : ok=29   changed=0    unreachable=0    failed=1    skipped=99   rescued=0    ignored=1

@ajdecon
Collaborator

ajdecon commented Jun 16, 2022

Based on the play recap, it looks like a task failed very early in the run.

adas-ml-2                  : ok=29   changed=0    unreachable=0    failed=1    skipped=99   rescued=0    ignored=1

Can you share your full log, starting from the ansible-playbook command? Preferably in a GitHub gist, since this will be very long.

@karanveersingh5623
Author

@ajdecon
Collaborator

ajdecon commented Jun 16, 2022

Hmm. This is the failure I see early in the log:

fatal: [adas-ml-2]: FAILED! => changed=false 
  invocation:
    module_args:
      fingerprint: null
      key: https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-8
      state: present
      validate_certs: true
  msg: 'failed to fetch key at https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-8 , error was: Request failed: <urlopen error [Errno -2] Name or service not known>'

Which is the same failure we saw previously. So it looks like this host is still failing to download the Fedora project GPG key.

However, a test on my local machine shows that this URL exists:

$ curl https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-8
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBFz3zvsBEADJOIIWllGudxnpvJnkxQz2CtoWI7godVnoclrdl83kVjqSQp+2
<snip>

And our automated tests running through GitHub Actions still succeed.

Can you try running curl https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-8 on adas-ml-2, to see if you can manually reach this URL? That should help determine whether the error is due to something about the network configuration on the host.
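Since the earlier failure was "Name or service not known", which usually points at DNS resolution problems, it may also be worth checking name resolution on that host directly (standard tools, adjust as needed):

# On adas-ml-2: does the host resolve the mirror name at all?
getent hosts dl.fedoraproject.org
# Which resolvers is it configured to use?
cat /etc/resolv.conf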

@karanveersingh5623
Author

@ajdecon, I fixed the network issue. Please have a look; it still failed.
run2.log

@ajdecon
Collaborator

ajdecon commented Jun 17, 2022

@karanveersingh5623 : It looks like the host is failing to download the epel-release-latest RPM from the Fedora project:

fatal: [adas-ml-2]: FAILED! => changed=false
  invocation:
    module_args:
      allow_downgrade: false
      autoremove: false
      bugfix: false
      conf_file: null
      disable_excludes: null
      disable_gpg_check: false
      disable_plugin: []
      disablerepo: []
      download_dir: null
      download_only: false
      enable_plugin: []
      enablerepo: []
      exclude: []
      install_repoquery: true
      install_weak_deps: true
      installroot: /
      list: null
      lock_timeout: 30
      name:
      - https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
      releasever: null
      security: false
      skip_broken: false
      state: latest
      update_cache: false
      update_only: false
      validate_certs: true
  msg: 'Failure downloading https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm, Request failed: <urlopen error _ssl.c:880: The handshake operation timed out>'

Based on the timeout message, this is probably still some sort of network configuration issue on the host in question.

I'd recommend testing using the curl command to see if you can download this file from the failing host. The DeepOps install process downloads quite a lot of different packages from open source repositories, so a reliable Internet connection is essential to be able to do this install.
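For example, running something like this from adas-ml-2 should show whether the download (including the TLS handshake) completes:

curl -v --max-time 60 -o /dev/null https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm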

Just as a tip, you can generally identify failing Ansible tasks by searching for lines which start with fatal:.
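For example (assuming the attached log file name):

grep -n 'fatal:' run2.log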

@karanveersingh5623
Author

karanveersingh5623 commented Jun 18, 2022

@ajdecon, yes, I am switching to another open network in my office. Just another question:
I am trying to run the HPC MLPerf code below. Will a DeepOps cluster be enough to run it, or will I have to install more components? I guess I need Slurm + Pyxis + Enroot + MPI GPU support.

https://github.com/mlcommons/hpc_results_v1.0/tree/master/NVIDIA/benchmarks/cosmoflow/implementations/mxnet

@karanveersingh5623
Author

@ajdecon
I was able to complete the cluster installation and sinfo shows both nodes as idle; it was a network issue.
Now I am trying to validate the cluster, but it is failing, please see below.
The munge key and the UID/GID of the slurm user are the same on both nodes, so I don't know where this "Invalid job credential" error is coming from.
After running the srun command listed below, the adas-ml-2 machine goes into the Completing state and seems to be hung.

(env) (base) [root@adas-ml ~]# srun -N 2 -G 2 --ntasks-per-node=1 --mpi=pmix --exclusive  --container-image=deepops/nccl-tests-tf20.06-ubuntu18.04:latest all_reduce_perf -b 1M -e 4G -f 2 -g 1

srun: error: Task launch for StepId=13.0 failed on node adas-ml: Invalid job credential
srun: error: Task launch for StepId=13.0 failed on node adas-ml-2: Invalid job credential
srun: error: Application launch failed: Invalid job credential
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
srun: error: Timed out waiting for job step to complete
(env) (base) [root@adas-ml ~]# sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
batch*       up   infinite      1   comp adas-ml-2
batch*       up   infinite      1   idle adas-ml
(env) (base) [root@adas-ml ~]# squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
                13     batch all_redu     root CG       0:32      1 adas-ml-2

Also when I just use single node with SRUN command , processes runs successfully

(env) (base) [root@adas-ml ~]# srun -N 1 -G 1 --ntasks-per-node=1 --mpi=pmix --exclusive  --container-image=deepops/nccl-tests-tf20.06-ubuntu18.04:latest all_reduce_perf -b 1M -e 4G -f 2 -g 1
pyxis: importing docker image ...
# nThread 1 nGpus 1 minBytes 1048576 maxBytes 4294967296 step: 2(factor) warmup iters: 5 iters: 20 validation: 1
#
# Using devices
#   Rank  0 Pid 337810 on    adas-ml device  0 [0x86] Tesla T4
#
#                                                     out-of-place                       in-place
#       size         count    type   redop     time   algbw   busbw  error     time   algbw   busbw  error
#        (B)    (elements)                     (us)  (GB/s)  (GB/s)            (us)  (GB/s)  (GB/s)
     1048576        262144   float     sum    11.16   93.95    0.00  0e+00     0.28  3781.38    0.00  0e+00
     2097152        524288   float     sum    19.72  106.34    0.00  0e+00     0.29  7219.11    0.00  0e+00
     4194304       1048576   float     sum    37.24  112.63    0.00  0e+00     0.28  15109.16    0.00  0e+00
     8388608       2097152   float     sum    70.86  118.38    0.00  0e+00     0.28  30410.03    0.00  0e+00
    16777216       4194304   float     sum    138.0  121.53    0.00  0e+00     0.29  57279.67    0.00  0e+00
    33554432       8388608   float     sum    272.8  123.00    0.00  0e+00     0.28  121091.42    0.00  0e+00
    67108864      16777216   float     sum    542.1  123.79    0.00  0e+00     0.28  241964.54    0.00  0e+00
   134217728      33554432   float     sum   1080.6  124.20    0.00  0e+00     0.28  483580.36    0.00  0e+00
   268435456      67108864   float     sum   2210.4  121.44    0.00  0e+00     0.28  951898.78    0.00  0e+00
   536870912     134217728   float     sum   4426.1  121.30    0.00  0e+00     0.28  1929108.56    0.00  0e+00
  1073741824     268435456   float     sum   8852.6  121.29    0.00  0e+00     0.28  3781446.82    0.00  0e+00
  2147483648     536870912   float     sum    17707  121.28    0.00  0e+00     0.28  7653184.78    0.00  0e+00
  4294967296    1073741824   float     sum    35425  121.24    0.00  0e+00     0.29  14741607.33    0.00  0e+00
# Out of bounds values : 0 OK
# Avg bus bandwidth    : 0
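Regarding the "Invalid job credential" error above: a basic cross-node munge sanity check (a generic test, assuming ssh access from the controller to the compute node) is to generate a credential on one node and decode it on the other:

# Generate a munge credential on adas-ml and decode it on adas-ml-2
munge -n | ssh adas-ml-2 unmunge

If unmunge reports STATUS: Success, the munge keys (and clocks) on the two nodes should be consistent.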

@karanveersingh5623
Author

@ajdecon, I was able to fix the above, but it then failed with the trace below.

(env) (base) [root@adas-ml ~]# srun -N 2 -G 2 --ntasks-per-node=1 --mpi=pmix --exclusive  --container-image=deepops/nccl-tests-tf20.06-ubuntu18.04:latest all_reduce_perf -b 1M -e 4G -f 2 -g 1
pyxis: importing docker image ...
pyxis: importing docker image ...
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_reset_if_to: adas-ml [0]: pmixp_coll_ring.c:742: 0x14ce0c02cb20: collective timeout seq=0
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_log: adas-ml [0]: pmixp_coll.c:281: Dumping collective state
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml [0]: pmixp_coll_ring.c:760: 0x14ce0c02cb20: COLL_FENCE_RING state seq=0
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml [0]: pmixp_coll_ring.c:762: my peerid: 0:adas-ml
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml [0]: pmixp_coll_ring.c:769: neighbor id: next 1:adas-ml-2, prev 1:adas-ml-2
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml [0]: pmixp_coll_ring.c:779: Context ptr=0x14ce0c02cb98, #0, in-use=0
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml [0]: pmixp_coll_ring.c:779: Context ptr=0x14ce0c02cbd0, #1, in-use=0
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml [0]: pmixp_coll_ring.c:779: Context ptr=0x14ce0c02cc08, #2, in-use=1
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml [0]: pmixp_coll_ring.c:790:        seq=0 contribs: loc=1/prev=0/fwd=1
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml [0]: pmixp_coll_ring.c:792:        neighbor contribs [2]:
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml [0]: pmixp_coll_ring.c:825:                done contrib: -
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml [0]: pmixp_coll_ring.c:827:                wait contrib: adas-ml-2
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml [0]: pmixp_coll_ring.c:829:        status=PMIXP_COLL_RING_PROGRESS
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml [0]: pmixp_coll_ring.c:833:        buf (offset/size): 479/1437
[adas-ml:350527] pml_ucx.c:119 Error: Failed to receive UCX worker address: Not found (-13)
[adas-ml:350527] pml_ucx.c:386 Error: Failed to resolve UCX endpoint for rank 1
[adas-ml:350527] *** An error occurred in MPI_Allgather
[adas-ml:350527] *** reported by process [3429566764,0]
[adas-ml:350527] *** on communicator MPI_COMM_WORLD
[adas-ml:350527] *** MPI_ERR_OTHER: known error not in list
[adas-ml:350527] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[adas-ml:350527] ***    and potentially your MPI job)
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
slurmstepd: error: *** STEP 18.0 ON adas-ml CANCELLED AT 2022-06-21T10:10:28 ***
srun: error: adas-ml: task 0: Exited with exit code 16
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_reset_if_to: adas-ml-2 [1]: pmixp_coll_ring.c:742: 0x14b9e0007360: collective timeout seq=0
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_log: adas-ml-2 [1]: pmixp_coll.c:281: Dumping collective state
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml-2 [1]: pmixp_coll_ring.c:760: 0x14b9e0007360: COLL_FENCE_RING state seq=0
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml-2 [1]: pmixp_coll_ring.c:762: my peerid: 1:adas-ml-2
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml-2 [1]: pmixp_coll_ring.c:769: neighbor id: next 0:adas-ml, prev 0:adas-ml
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml-2 [1]: pmixp_coll_ring.c:779: Context ptr=0x14b9e00073d8, #0, in-use=0
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml-2 [1]: pmixp_coll_ring.c:779: Context ptr=0x14b9e0007410, #1, in-use=0
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml-2 [1]: pmixp_coll_ring.c:779: Context ptr=0x14b9e0007448, #2, in-use=1
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml-2 [1]: pmixp_coll_ring.c:790:      seq=0 contribs: loc=0/prev=1/fwd=0
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml-2 [1]: pmixp_coll_ring.c:792:      neighbor contribs [2]:
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml-2 [1]: pmixp_coll_ring.c:825:              done contrib: adas-ml
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml-2 [1]: pmixp_coll_ring.c:827:              wait contrib: -
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml-2 [1]: pmixp_coll_ring.c:829:      status=PMIXP_COLL_RING_PROGRESS
slurmstepd: error:  mpi/pmix_v3: pmixp_coll_ring_log: adas-ml-2 [1]: pmixp_coll_ring.c:833:      buf (offset/size): 479/1437
srun: error: adas-ml-2: task 1: Killed

@ajdecon
Collaborator

ajdecon commented Jun 21, 2022

@avolkov1 @yangatgithub : When you have time, can one of you make suggestions for the error above? I think you've got the most familiarity with the Slurm NCCL test right now.

@avolkov1
Contributor

Sorry, I'm a bit busy at the moment, but I will follow up on this as soon as I can.

That container deepops/nccl-tests-tf20.06-ubuntu18.04:latest is pretty old and probably does not have the right UCX configuration in it. You can try to run it like this without UCX and HCOLL:

srun \
 --export="NCCL_DEBUG=INFO,OMPI_MCA_pml=^ucx,OMPI_MCA_coll=^hcoll" \
  -N 2 \
  --ntasks-per-node=8 \
  --gpus-per-task=1 \
  --exclusive \
  --mpi=pmix_v3 \
  --container-image=deepops/nccl-tests-tf20.06-ubuntu18.04:latest \
 all_reduce_perf -b 1M -e 4G -f 2 -g 1 

I'll follow up and let you know how to update the container so it works with UCX.

@yangatgithub
Contributor

yangatgithub commented Jun 21, 2022 via email

@avolkov1
Contributor

Yes, use a newer container:

srun \
 --export="NCCL_DEBUG=INFO" \
  -N 2 \
  --ntasks-per-node=8 \
  --gpus-per-task=1 \
  --exclusive \
  --mpi=pmix_v3 \
  --container-image=deepops/mpi-nccl-test:latest \
 all_reduce_perf -b 1M -e 4G -f 2 -g 1

To build a new container:

## file: ncclmake.sh --------------------------------------------------
#!/bin/bash

export CUDA_HOME="/opt/nvidia/hpc_sdk/Linux_x86_64/22.5/cuda/11.7"
export LIBRARY_PATH=$LD_LIBRARY_PATH
mkdir -p /opt/nccl_tests/build
cd /opt/nccl_tests
NCCL_TESTS_COMMITISH=8274cb4
wget -q -O - https://github.com/NVIDIA/nccl-tests/archive/${NCCL_TESTS_COMMITISH}.tar.gz | \
tar --strip-components=1 -xzf - \
&& make MPI=1
cp -R /opt/nccl_tests/build/* /usr/local/bin/

## file: create_nccl_container.sh -------------------------------------
#!/bin/bash
CONTAINER="nvcr.io/nvidia/nvhpc:22.5-devel-cuda_multi-ubuntu20.04"
OUTCONTAINER="${HOME}/enroot_images/nvhpc_22.5_nccl_tests.sqsh"

mkdir -p ${HOME}/enroot_images

srun -N 1 \
--ntasks-per-node=1 \
--cpus-per-task=10 \
--container-image=${CONTAINER} \
--container-save=${OUTCONTAINER} \
--container-mounts=${HOME}:${HOME} \
--container-remap-root \
${PWD}/ncclmake.sh

----------------------------------------------------

# chmod above to make executable: chmod u+x <script>
# then run:
$ ./create_nccl_container.sh

# after above you should have a file "nvhpc_22.5_nccl_tests.sqsh" in "${HOME}/enroot_images" dir. 


Then run as:

srun \
 --export="NCCL_DEBUG=INFO" \
  -N 2 \
  --ntasks-per-node=8 \
  --gpus-per-task=1 \
  --exclusive \
  --mpi=pmix_v3 \
  --container-image=${HOME}/enroot_images/nvhpc_22.5_nccl_tests.sqsh \
  all_reduce_perf -b 1M -e 4G -f 2 -g 1

@karanveersingh5623
Author

@ajdecon, the playbook always fails on the NVIDIA driver GPG signature check. Where can I disable this check?
I have already disabled it in the files below:

vim roles/galaxy/ood-ansible/defaults/main/install.yml

disable_gpg_check_rpm_repo: true
vim roles/galaxy/nvidia.nvidia_driver/tasks/install-redhat.yml

- name: install driver packages RHEL/CentOS 8 and newer
  dnf:
    name: "{{ nvidia_driver_package_version | ternary('@nvidia-driver:'+nvidia_driver_package_version, '@nvidia-driver:'+nvidia_driver_rhel_branch+'-dkms') }}"
    state: "{{ nvidia_driver_package_state }}"
    disable_gpg_check: yes
    autoremove: "{{ nvidia_driver_package_state == 'absent' }}"
  register: install_driver_rhel8
  environment: "{{proxy_env if proxy_env is defined else {}}}"
  when: ansible_distribution_major_version > '7'

  msg: Failed to validate GPG signature for nvidia-driver-cuda-3:470.129.06-1.el8.x86_64
<192.168.61.31> (1, b'\n{"msg": "Failed to validate GPG signature for nvidia-libXNVCtrl-devel-3:470.129.06-1.el8.x86_64", "failed": true, "invocation": {"module_args": {"name": ["@nvidia-driver:470-dkms"], "state": "present", "autoremove": false, "allow_downgrade": false, "bugfix": false, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": [], "download_only": false, "enable_plugin": [], "enablerepo": [], "exclude": [], "installroot": "/", "install_repoquery": true, "install_weak_deps": true, "security": false, "skip_broken": false, "update_cache": false, "update_only": false, "validate_certs": true, "lock_timeout": 30, "conf_file": null, "disable_excludes": null, "download_dir": null, "list": null, "releasever": null}}}\n', b'')
<192.168.61.31> Failed to connect to the host via ssh:
fatal: [hpc-master]: FAILED! => changed=false
  invocation:
    module_args:
      allow_downgrade: false
      autoremove: false
      bugfix: false
      conf_file: null
      disable_excludes: null
      disable_gpg_check: false

@karanveersingh5623
Author

@ajdecon
I got this error for the first time. The c8-i12 node (192.168.61.84) is the Slurm master. How can I fix this, and will it cause any issues with communication between the master and the compute nodes?
Please let me know if you need more info.

TASK [slurm : Patch lastuserjob epilog.] ***********************************************************************************************************************************************************************
task path: /root/deepops/roles/slurm/tasks/login-compute-setup.yml:19
Using module file /opt/deepops/env/lib/python3.6/site-packages/ansible/modules/files/blockinfile.py
Pipelining is enabled.
<192.168.61.84> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.61.84> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=5m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=60 -o 'ControlPath=~/.ssh/ansible-%r@%h:%p' 192.168.61.84 '/bin/sh -c '"'"'sudo -H -S -n  -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-jgkvdphrztrsuacstcqbzranwaqjzbdb ; /usr/libexec/platform-python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<192.168.61.84> (1, b'\n{"rc": 257, "msg": "Path /etc/slurm/epilog.d/40-lastuserjob-processes does not exist !", "failed": true, "invocation": {"module_args": {"path": "/etc/slurm/epilog.d/40-lastuserjob-processes", "block": "\\nif grep -q -w \\"$SLURM_JOB_USER\\" /etc/slurm/localusers.backup ; then\\n    exit 0  # don\'t revoke access for these users\\nfi\\n", "insertafter": "^set -ex", "marker": "# {mark} ANSIBLE MANAGED BLOCK (ansible-role-slurm)", "state": "present", "create": false, "backup": false, "marker_begin": "BEGIN", "marker_end": "END", "follow": false, "unsafe_writes": false, "insertbefore": null, "validate": null, "mode": null, "owner": null, "group": null, "seuser": null, "serole": null, "selevel": null, "setype": null, "attributes": null, "src": null, "force": null, "content": null, "remote_src": null, "regexp": null, "delimiter": null, "directory_mode": null}}}\n', b'')
<192.168.61.84> Failed to connect to the host via ssh:
fatal: [c8-i12]: FAILED! => changed=false
  invocation:
    module_args:
      attributes: null
      backup: false
      block: |2-

        if grep -q -w "$SLURM_JOB_USER" /etc/slurm/localusers.backup ; then
            exit 0  # don't revoke access for these users
        fi
      content: null
      create: false
      delimiter: null
      directory_mode: null
      follow: false
      force: null
      group: null
      insertafter: ^set -ex
      insertbefore: null
      marker: '# {mark} ANSIBLE MANAGED BLOCK (ansible-role-slurm)'
      marker_begin: BEGIN
      marker_end: END
      mode: null
      owner: null
      path: /etc/slurm/epilog.d/40-lastuserjob-processes
      regexp: null
      remote_src: null
      selevel: null
      serole: null
      setype: null
      seuser: null
      src: null
      state: present
      unsafe_writes: false
      validate: null
  msg: Path /etc/slurm/epilog.d/40-lastuserjob-processes does not exist !
  rc: 257

PLAY RECAP *****************************************************************************************************************************************************************************************************
c8-i12                     : ok=166  changed=7    unreachable=0    failed=1    skipped=442  rescued=0    ignored=1
hpc-master                 : ok=167  changed=2    unreachable=0    failed=0    skipped=420  rescued=0    ignored=1
mlperf1                    : ok=167  changed=2    unreachable=0    failed=0    skipped=412  rescued=0    ignored=1

@karanveersingh5623
Author

@avolkov1 @yangatgithub , I tried the latest NCCL test image (running /nccl_tests/build/all_reduce_perf) but it failed with the error below. Please let me know the next steps or what the issue is; my compute node has the latest CUDA 11.7 and 2x A100 GPUs.

(env) [root@c8-i12 deepops]# srun --export="NCCL_DEBUG=INFO" -N 1 -G 2 -w mlperf1 --ntasks-per-node=2 --gpus-per-task=1 --exclusive --mpi=pmix_v3 --container-image=192.168.61.4:5000#/deepops-nccl-test:0.1 /nccl_tests/build/all_reduce_perf -b 1M -e 4G -f 2 -g 1
pyxis: importing docker image ...
# nThread 1 nGpus 1 minBytes 1048576 maxBytes 4294967296 step: 2(factor) warmup iters: 5 iters: 20 validation: 1
#
# Using devices
mlperf1: Test CUDA failure common.cu:1045 'no CUDA-capable device is detected'
 .. mlperf1 pid 28568: Test failure common.cu:1007
srun: error: mlperf1: task 1: Exited with exit code 2
slurmstepd: error:  mpi/pmix_v3: _errhandler: mlperf1 [0]: pmixp_client_v2.c:211: Error handler invoked: status = -25, source = [slurm.pmix.11.0:1]
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
slurmstepd: error: *** STEP 11.0 ON mlperf1 CANCELLED AT 2022-07-21T03:25:14 ***
srun: error: mlperf1: task 0: Killed

@itzsimpl
Contributor

@karanveersingh5623 note that the parameter --gpus-per-task in combination with --container-image, due to optimisations in nvidia/enroot, will lead to a GPU being assigned to only one process. The other process will then crash with 'no CUDA-capable device is detected'. If the parameter cannot be avoided, a workaround is to prepend ENROOT_RESTRICT_DEV=n to the srun command. See NVIDIA/pyxis#73 for more details.

@avolkov1
Contributor

avolkov1 commented Jul 21, 2022

Yeah, that looks like an enroot bug/oversight. I hope that's resolved in a sensible manner, otherwise I guess use ENROOT_RESTRICT_DEV=n.

Depending on the application this will vary. With the NCCL tests you can specify -g 2 and run it like this:

srun \
  --export="NCCL_DEBUG=INFO" \
  -w mlperf1 \
  -N 1 \
  --ntasks-per-node=1 \
  --gpus-per-task=2 \
  --exclusive \
  --mpi=pmix_v3 \
  --container-image=192.168.61.4:5000#/deepops-nccl-test:0.1 \
  /nccl_tests/build/all_reduce_perf -b 1M -e 4G -f 2 -g 2

That will start 1 enroot container process on each node with all GPUs visible (--gpus-per-task=2 if 2 is the number of GPUs per node).

The -g 2 is an option to all_reduce_perf letting it know how many GPUs there are per process. The all_reduce_perf app will then either spawn or thread a sub-process per GPU.

Otherwise (and IMO more sensibly per typical Slurm + MPI workloads) try running like this:

ENROOT_RESTRICT_DEV=n srun \
  --export="NCCL_DEBUG=INFO" \
  -w mlperf1 \
  -N 1 \
  --ntasks-per-node=2 \
  --gpus-per-task=1 \
  --exclusive \
  --mpi=pmix_v3 \
  --container-image=192.168.61.4:5000#/deepops-nccl-test:0.1 \
  /nccl_tests/build/all_reduce_perf -b 1M -e 4G -f 2 -g 1

@karanveersingh5623
Author

@itzsimpl @avolkov1 , thanks for helping out, but I cannot run on 2 nodes. Below is the output: a single node passes validation, but 2 nodes fail. Please let me know if more info is required.

The last command got stuck and made no further progress, so I had to scancel the job.

(env) [root@c8-i12 burst-buffer]# srun --export="NCCL_DEBUG=INFO" -w mlperf1 -N 1 --ntasks-per-node=1 --gpus-per-task=2 --exclusive --mpi=pmix_v3 --container-image=192.168.61.4:5000#/deepops-nccl-test:0.1 /nccl_tests/build/all_reduce_perf -b 1M -e 4G -f 2 -g 2
slurmstepd: error: couldn't chdir to `/root/burst-buffer': No such file or directory: going to /tmp instead
slurmstepd: error: couldn't chdir to `/root/burst-buffer': No such file or directory: going to /tmp instead
pyxis: importing docker image ...



# nThread 1 nGpus 2 minBytes 1048576 maxBytes 4294967296 step: 2(factor) warmup iters: 5 iters: 20 validation: 1
#
# Using devices
#   Rank  0 Pid 131398 on    mlperf1 device  0 [0xaf] NVIDIA A100-PCIE-40GB
#   Rank  1 Pid 131398 on    mlperf1 device  1 [0xd8] NVIDIA A100-PCIE-40GB
mlperf1:131398:131398 [0] NCCL INFO Bootstrap : Using ens3f0:192.168.61.32<0>
mlperf1:131398:131398 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
mlperf1:131398:131398 [0] NCCL INFO NET/IB : No device found.
mlperf1:131398:131398 [0] NCCL INFO NET/Socket : Using [0]ens3f0:192.168.61.32<0> [1]virbr0:192.168.122.1<0> [2]veth2a3a9ac:fe80::4cb6:51ff:feae:4e12%veth2a3a9ac<0>
mlperf1:131398:131398 [0] NCCL INFO Using network Socket
NCCL version 2.11.4+cuda11.6
mlperf1:131398:132137 [0] NCCL INFO Channel 00/02 :    0   1
mlperf1:131398:132137 [0] NCCL INFO Channel 01/02 :    0   1
mlperf1:131398:132137 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
mlperf1:131398:132138 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0
mlperf1:131398:132137 [0] NCCL INFO Channel 00 : 0[af000] -> 1[d8000] via direct shared memory
mlperf1:131398:132137 [0] NCCL INFO Channel 01 : 0[af000] -> 1[d8000] via direct shared memory
mlperf1:131398:132138 [1] NCCL INFO Channel 00 : 1[d8000] -> 0[af000] via direct shared memory
mlperf1:131398:132138 [1] NCCL INFO Channel 01 : 1[d8000] -> 0[af000] via direct shared memory
mlperf1:131398:132137 [0] NCCL INFO Connected all rings
mlperf1:131398:132137 [0] NCCL INFO Connected all trees
mlperf1:131398:132137 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
mlperf1:131398:132137 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
mlperf1:131398:132138 [1] NCCL INFO Connected all rings
mlperf1:131398:132138 [1] NCCL INFO Connected all trees
mlperf1:131398:132138 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
mlperf1:131398:132138 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
mlperf1:131398:132138 [1] NCCL INFO comm 0x7f6064001000 rank 1 nranks 2 cudaDev 1 busId d8000 - Init COMPLETE
mlperf1:131398:132137 [0] NCCL INFO comm 0x7f6384001000 rank 0 nranks 2 cudaDev 0 busId af000 - Init COMPLETE
#
#                                                       out-of-place                       in-place
#       size         count      type   redop     time   algbw   busbw  error     time   algbw   busbw  error
#        (B)    (elements)                       (us)  (GB/s)  (GB/s)            (us)  (GB/s)  (GB/s)
mlperf1:131398:131398 [0] NCCL INFO Launch mode Parallel
     1048576        262144     float     sum    208.7    5.02    5.02  0e+00    208.1    5.04    5.04  0e+00
     2097152        524288     float     sum    395.8    5.30    5.30  0e+00    392.8    5.34    5.34  0e+00
     4194304       1048576     float     sum    760.6    5.51    5.51  0e+00    761.3    5.51    5.51  0e+00
     8388608       2097152     float     sum   1500.5    5.59    5.59  0e+00   1515.8    5.53    5.53  0e+00
    16777216       4194304     float     sum   2909.6    5.77    5.77  0e+00   2957.8    5.67    5.67  0e+00
    33554432       8388608     float     sum   5899.7    5.69    5.69  0e+00   5842.6    5.74    5.74  0e+00
    67108864      16777216     float     sum    11686    5.74    5.74  0e+00    11812    5.68    5.68  0e+00
   134217728      33554432     float     sum    23116    5.81    5.81  0e+00    23038    5.83    5.83  0e+00
   268435456      67108864     float     sum    46820    5.73    5.73  0e+00    46454    5.78    5.78  0e+00
   536870912     134217728     float     sum    91502    5.87    5.87  0e+00    90199    5.95    5.95  0e+00
  1073741824     268435456     float     sum   180842    5.94    5.94  0e+00   179809    5.97    5.97  0e+00
  2147483648     536870912     float     sum   366019    5.87    5.87  0e+00   366992    5.85    5.85  0e+00
  4294967296    1073741824     float     sum   734080    5.85    5.85  0e+00   722913    5.94    5.94  0e+00
# Out of bounds values : 0 OK
# Avg bus bandwidth    : 5.67408
#
(env) [root@c8-i12 burst-buffer]#
(env) [root@c8-i12 burst-buffer]#
(env) [root@c8-i12 burst-buffer]#
(env) [root@c8-i12 burst-buffer]# srun --export="NCCL_DEBUG=INFO" -N 2 --ntasks-per-node=2 --gpus-per-task=2 --exclusive --mpi=pmix_v3 --container-image=192.168.61.4:5000#/deepops-nccl-test:0.1 /nccl_tests/build/all_reduce_perf -b 1M -e 4G -f 2 -g 2
srun: error: Unable to allocate resources: Requested node configuration is not available
(env) [root@c8-i12 burst-buffer]#
(env) [root@c8-i12 burst-buffer]#
(env) [root@c8-i12 burst-buffer]# ENROOT_RESTRICT_DEV=n srun --export="NCCL_DEBUG=INFO" -N 2 --ntasks-per-node=2 --gpus-per-task=2 --exclusive --mpi=pmix_v3 --container-image=192.168.61.4:5000#/deepops-nccl-test:0.1 /nccl_tests/build/all_reduce_perf -b 1M -e 4G -f 2 -g 2
srun: error: Unable to allocate resources: Requested node configuration is not available
(env) [root@c8-i12 burst-buffer]#
(env) [root@c8-i12 burst-buffer]#
(env) [root@c8-i12 burst-buffer]# ENROOT_RESTRICT_DEV=n srun --export="NCCL_DEBUG=INFO" -N 2 --ntasks-per-node=1 --gpus-per-task=2 --exclusive --mpi=pmix_v3 --container-image=192.168.61.4:5000#/deepops-nccl-test:0.1 /nccl_tests/build/all_reduce_perf -b 1M -e 4G -f 2 -g 2
pyxis: importing docker image ...
slurmstepd: error: couldn't chdir to `/root/burst-buffer': No such file or directory: going to /tmp instead
slurmstepd: error: couldn't chdir to `/root/burst-buffer': No such file or directory: going to /tmp instead
pyxis: importing docker image ...
--------------------------------------------------------------------------
WARNING: Open MPI accepted a TCP connection from what appears to be a
another Open MPI process but cannot find a corresponding process
entry for that peer.

This attempted connection will be ignored; your MPI job may or may not
continue properly.

  Local host: mlperf1
  PID:        132666
--------------------------------------------------------------------------
--------------------------------------------------------------------------
WARNING: Open MPI accepted a TCP connection from what appears to be a
another Open MPI process but cannot find a corresponding process
entry for that peer.

This attempted connection will be ignored; your MPI job may or may not
continue properly.

  Local host: hpc-master
  PID:        206855
--------------------------------------------------------------------------


srun: Force Terminated job 5
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
slurmstepd: error: *** STEP 5.0 ON hpc-master CANCELLED AT 2022-07-23T04:37:18 ***
srun: error: mlperf1: task 1: Terminated
srun: error: hpc-master: task 0: Terminated

@avolkov1
Copy link
Contributor

Try not running as root. Is there a shared file system between the nodes? Ideally your home directory is a shared file system and it's mounted on all the nodes.
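
A quick check for that (a sketch; run it as the user who submits jobs, from the directory you normally submit from) is to ask both nodes what they see for your home directory and the working directory from the chdir errors above:

srun -N 2 --ntasks-per-node=1 bash -c 'hostname; df -h "$HOME"; ls -ld /root/burst-buffer'

# If $HOME or the submission directory only exists on one node, multi-node
# container jobs will hit the "couldn't chdir" errors shown earlier.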

@github-actions
Copy link

This issue is stale because it has been open for 60 days with no activity. Please update the issue or it will be closed in 7 days.
