Logging into CSCS machines

Programming Environment for EuroHack15

Your account will be given to you either by the project lead or one of the project mentors in late May.

Accessing CSCS systems - the front end node Ela

You will be given a EuroHack account with a username of the form hckXX (for some number XX) and a password. The account is valid until July 17, 2015.

In order to gain access to CSCS systems you first need to log in to our front-end machine Ela, which is accessible as ela.cscs.ch. Access Ela by means of ssh as follows:

ssh -Y hckXX@ela.cscs.ch

Accessing CSCS systems - Piz Daint

The machine that we will use is called Piz Daint. Piz Daint consists of the main supercomputer and a set of front-end nodes which you would typically access for compilation, batch job submission, file transfers and use of performance tools. The front end nodes are accessible as daint.cscs.ch and you can access these as:

ssh -Y daint

We wish to avoid the need to use batch submission scripts and therefore we need to access the internal login nodes of Piz Daint directly. The internal login nodes are named daint01, daint02 and daint03, and from Ela you can access them as:

ssh -Y daintYY

where YY is one of 01, 02 or 03.
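
If you log in frequently, you can optionally let ssh perform both hops in a single command from your local machine by tunnelling through Ela. This is only a sketch, assuming your username is hckXX and a reasonably recent OpenSSH client:

# Single command from your local machine: hop through Ela to an internal login node
ssh -Y -o ProxyCommand="ssh -W %h:%p hckXX@ela.cscs.ch" hckXX@daint01

You will typically be asked for your password twice, once for each hop.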

Default Environment on Daint

Having logged in to Daint, you will have a default basic environment and directories in three file systems, which you can access as $HOME, $PROJECT and $SCRATCH. Note that only $SCRATCH is available on the compute nodes; however, it is volatile, and files may be scrubbed if they are inactive for long periods of time.
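
As a quick orientation after logging in, you can check where these directories point and work from $SCRATCH when running on the compute nodes. A minimal sketch (the file name used here is only a placeholder):

echo $HOME $PROJECT $SCRATCH    # show where the three file systems point for your account
cd $SCRATCH                     # work here when running on the compute nodes
cp myresults.dat $PROJECT/      # copy results you want to keep (placeholder file name)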

CSCS uses the module command to change your programming environment. If you issue the command module list you will see your currently loaded modules. If you issue the command module avail you will see all of the available modules that you can load. If you want to load a module, issue the command module load <modulename> for a given module name, for example,

module load scalasca

For a simple description of what a module provides, use module help <modulename>.
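
A typical module session might therefore look like the following sketch (scalasca is just the example tool from above; the exact module names on Daint may differ):

module list                  # show the currently loaded modules
module avail 2>&1 | less     # browse all modules that can be loaded
module load scalasca         # load a tool
module help scalasca         # short description of what it provides
module unload scalasca       # remove it again when it is no longer needed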

Compilation Environment

In order to compile codes you will need to select a programming environment for a specific compiler. The available compilers on Daint are Cray, PGI, Intel and GNU, and these are selected by loading the module names shown in the following list (only one programming environment can be loaded at a time; see the example after the list for switching between them).

  • Cray: PrgEnv-cray
  • PGI: PrgEnv-pgi
  • Intel: PrgEnv-intel
  • GNU: PrgEnv-gnu
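
To switch compilers you swap the PrgEnv modules rather than loading a second one. A sketch, assuming PrgEnv-cray is the currently loaded default:

module swap PrgEnv-cray PrgEnv-gnu    # switch from the Cray compiler to GNU
module list 2>&1 | grep PrgEnv        # check which programming environment is now active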

The compiler programming environments on the Cray XC30 provide convenient wrappers that you can use for compilation, and these wrappers ensure that any libraries and header files that you have loaded through the module command are included and linked automatically.

The wrapper commands are:

  • Fortran: ftn
  • C++: CC
  • C: cc

You just need to use these wrappers and they will take care of adding the include paths and linking the libraries. If you need additional libraries, load the corresponding modules and the wrappers will again pick up the correct paths.
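
For example, to use an I/O library you would simply load its module and recompile; the wrapper adds the include and library paths for you. A sketch, using cray-netcdf only as an illustration and myio.f90 as a placeholder file name (check module avail for what is actually installed):

module load cray-netcdf     # make the NetCDF headers and libraries visible to the wrapper
ftn -O2 myio.f90 -o myio    # no -I or -L flags needed; the wrapper supplies them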

You will just need to compile an executable from a single source file as in this example for a Fortran code:

ftn -O2 mpiexercise1.f90 -o myexe1
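
The C and C++ wrappers are used in exactly the same way; for instance (the source file names here are only placeholders):

cc -O2 mpiexercise1.c -o myexe1c        # C code via the cc wrapper
CC -O2 mpiexercise1.cpp -o myexe1cpp    # C++ code via the CC wrapper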

Running your code

In order to run your code you will need to get an allocation of processors from the batch system. The mentors will help you generate batch submission scripts. For basic development an interactive session can be started on the internal login nodes of Piz Daint. We use the “salloc” command for this purpose (salloc only works if you are logged in to the internal login nodes).

When we have been granted a set of processors, we then use the “aprun” command to launch jobs on the compute nodes, and the flags that you pass to aprun differ depending upon whether you are running MPI or OpenMP parallel applications.

When you have finished your practical you should exit the “salloc” session by typing “exit” so that your processors are returned to the pool.
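
In outline, an interactive session therefore follows this pattern (a sketch; concrete salloc options are given below):

salloc <options>         # 1. request an allocation of processors
aprun -n <N> ./myexe1    # 2. launch the executable on the allocated compute nodes
exit                     # 3. return the processors to the pool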

Before the EuroHack, you will have to use the default queue and compete with other jobs for resources:

salloc --ntasks=16 --time=01:00:00

Launching MPI jobs

During the EuroHack we will have a special reservation on the machine, named “eurohack15”, which is only available to our accounts. For development you should use only a few processors, e.g. at most 16. You should therefore issue the following command, which will give you 16 processors for up to 1 hour:

salloc --res=eurohack15 --ntasks=16 --time=01:00:00

You will then have your prompt returned after a message such as the following:

salloc: Granted job allocation XXXX

You are now able to launch your MPI jobs on the compute nodes.
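
If you want to check the allocation you have been granted, the standard SLURM commands are available; for example (a sketch, with <jobid> taken from the salloc message or the squeue output):

squeue -u $USER     # list your running and pending jobs and allocations
scancel <jobid>     # give up an allocation you no longer need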

For MPI jobs that are to be launched on the compute nodes you need to use the “-n” flag to specify how many processes you wish to launch. For example, to launch 8 processes of the myexe1 executable you would issue the following command:

aprun -n 8 ./myexe1

Running MPI/OpenMP hybrid jobs

aprun offers the flexibility to run multi-threaded distributed jobs, e.g. using both MPI and OpenMP. A common configuration (but by no means always the most efficient one!) is to run one MPI process per node, with 8 threads on the 8 cores of the Intel Sandy Bridge socket. For example, for 4 processes each with 8 threads, you would use:

salloc --res=eurohack15 --nodes=4 --time=01:00:00

and you will then be given back your prompt.

To run 8 threads per process you need to specify the number of threads using the OMP_NUM_THREADS variable, and then you need to tell aprun that you want 4 processes using the “-n” flag with 8 threads per process using the “-d” flag, as follows:

export OMP_NUM_THREADS=8

aprun -n 4 -d 8 ./ompexe1
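
Note that the executable itself must have been compiled with OpenMP enabled, and the required flag depends on which programming environment is loaded. A sketch, using ompexe1.f90 only as a placeholder file name (check the compiler man pages for your installed versions):

ftn -O2 -h omp ompexe1.f90 -o ompexe1      # Cray compiler (OpenMP is typically enabled by default)
ftn -O2 -fopenmp ompexe1.f90 -o ompexe1    # GNU compiler
ftn -O2 -openmp ompexe1.f90 -o ompexe1     # Intel compiler (newer versions use -qopenmp)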