Differences between revisions 46 and 146 (spanning 100 versions)
Revision 46 as of 2010-10-13 15:30:32
Size: 5357
Editor: AndreasHaupt
Comment:
Revision 146 as of 2020-10-26 11:04:51
Size: 12486
Editor: GötzWaschk
Comment:
Deletions are marked like this. Additions are marked like this.
Line 1: Line 1:
/!\ '''This web page will no longer be updated.''' Please use this link for [[https://dv-zeuthen.desy.de/services/parallel_computing/|current information]].
----
<<BR>>
Line 2: Line 5:
<<TableOfContents>>
Line 3: Line 7:
There are 8 dedicated parallel clusters (blade centers) in testing mode, but you can also run parallel MPI jobs in the SGE farm. The documentation in [[Batch_System_Usage]] applies there.
== Introduction ==
There are 3 dedicated parallel clusters (blade centers, Miriquid compute nodes) available for running parallel applications, but you can also run parallel MPI jobs in the SGE farm. The documentation in [[https://dv-zeuthen.desy.de/services/batch/|Batch System Usage]] applies there.
Line 5: Line 10:
For discussions and information regarding the usage of the PAX cluster a mailing list has been introduced: <<MailTo(zn-cluster AT desy DOT de)>>. To get subscribed to that list, send an email to <<MailTo(sympa AT desy DOT de)>> with the subject: '''subscribe zn-cluster'''.
For discussions and information regarding the usage of the PAX cluster a mailing list has been introduced: <<MailTo(zn-cluster AT desy DOT de)>>. To get subscribed to that list, send an email to <<MailTo(sympa AT desy DOT de)>> with the subject: '''subscribe zn-cluster'''

== Hardware ==
The batch part consists of three separate partitions that are not interconnected: pax11 (broadwell) and pax10 (haswell) each consist of 32 compute nodes, connected via an FDR Infiniband network. The older system is pax9 (sandybridge), 16 nodes connected by a QDR Infiniband network.

=== Nodes ===
All nodes have two CPUs (sockets).
||Name||CPU||Code Name||Cores||Memory||
||pax9-[00-15]||Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz||Sandybridge||8||48G||
||pax10-[00-31]||Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz||Haswell||8||64G||
||pax11-[00-31]||Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz||Broadwell||16||128G||

== Software Environment ==
The pax machines have a software environment that is slightly different from the normal installation: it includes the OpenHPC software stack and a different version of the {{{module}}} command. To build on any machine in the right environment, run the {{{/project/singularity/images/pax.img}}} image. You can submit your jobs if you run the singularity container on an EL7 WGS like this:
{{{
singularity run -B /var/run/munge /project/singularity/images/pax.img
}}}

You can also submit your jobs from the machine pax9-00.
Line 8: Line 31:
Use the 'module' command to first add a compiler implementation and then a version of MPI to your path e.g.:
{{{
module add gnu mvapich2
}}}
OpenHPC provides the {{{module}}} command from the lmod project. It supports more features than the old environment-modules, including dependent modules, which are shown only after loading their prerequisites, e.g. for {{{openmpi}}} you'll have to load the {{{intel}}} module first (see the example below the table).
||module name ||version ||depends on ||
||gnu ||5.4.0 || ||
||gnu7||7.3.0 || ||
||gnu8||8.3.0 || ||
||intel ||19.1.2 || ||
||openmpi ||1.10.7 ||gnu/intel ||
||openmpi3||3.1.0||gnu7||
||openmpi3||3.1.4||gnu8/intel||
||openmpi4||4.0.3||gnu8/intel||
||mvapich2 ||2.2 ||gnu/gnu7 ||
||mvapich2 || 2.3.2||gnu8/intel||
||impi||2019||gnu/gnu8/intel||
||opencoarrays ||1.8.11 || ||
||opencoarrays||2.3.1||gnu7 openmpi3||
||opencoarrays||2.8.0||gnu8 openmpi3||
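For illustration, a possible shell session showing how the dependent modules appear (module names taken from the table above):
{{{
module avail          # openmpi is not listed yet
module add intel      # load the Intel compiler toolchain
module avail          # now the modules built for intel, e.g. openmpi and impi, are shown
module add openmpi    # adds the matching MPI to PATH and LD_LIBRARY_PATH
}}}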
Line 9: Line 52:
=== Openmpi ===
Since SL5, all batch worker nodes have the openmpi implementation of the MPI standard installed. Recently the machines were upgraded to the default SL5.5 packages of openmpi. For 64 bit applications use the installation in /usr/lib64/openmpi/1.4-gcc/bin, for 32 bit use
the binaries from /usr/lib/openmpi/1.4-gcc/bin .
=== Interactive tests ===
You can run interactive jobs in Slurm after allocating nodes with salloc, e.g.: {{{salloc -p sandybridge -N 2 -c 2}}}. To get an interactive shell on the allocated machines, use the command {{{srun --pty bash}}}.
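A minimal interactive session might look like this (partition and node count as in the example above):
{{{
salloc -p sandybridge -N 2 -c 2   # reserve two sandybridge nodes
srun --pty bash                   # open a shell on the first allocated node
# ... run your tests ...
exit                              # leave the node; exit the salloc shell to release the allocation
}}}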
Line 13: Line 55:
Additional openmpi versions are installed to support the Intel and PGI compilers:
==== OpenMPI ====
To run an MPI program outside the batch system, you must specify a machinefile listing all the machines and the number of cores your application should run on. A typical machine file looks like this:
Line 15: Line 59:
/usr/lib64/openmpi/1.4-icc/bin
/usr/lib64/openmpi-1.3.2-pgi/bin
pax8a slots=8
pax8b slots=8
pax8c slots=8
pax8d slots=8
}}}
The command line would look like this:

{{{
/opt/ohpc/pub/mpi/openmpi-gnu/1.10.7/bin/mpirun -np 32 -machinefile ./machinefile ./program
}}}
More information on openmpi is in the openmpi FAQ: http://www.open-mpi.org/faq/

==== Mvapich2 ====
To use mvapich2, add one of those versions to your path and compile your application with that mpi compiler. Applications built with mvapich2 can use only Infiniband network hardware, so they will work on the pax machines, but not on more than one farm machine or WGS.

The machine file format is different from the one for openmpi, you must list the host name for every core you want to use, e.g. if you want to run four processes, two processes on each of pax89 and pax88:

{{{
pax88
pax89
pax88
pax89
}}}
The preferred way to run an application with mvapich2 is mpiexec, e.g.:
{{{
/opt/ohpc/pub/mpi/mvapich2-intel/2.2/bin/mpiexec -n 4 -machinefile ./machinefile /opt/ohpc/pub/libs/intel/mvapich2/imb/2018.1/bin/IMB-MPI1
Line 19: Line 87:
If you don't want to specify the full path to your preferred MPI implementation, configure a default by using the ini command or running mpi-selector-menu on a build machine.
==== Intel MPI ====
To use Intel MPI, add a compiler module followed by impi. Use the compiler wrappers like 'mpicc' and 'mpif90' for GNU or 'mpiicc' and 'mpiifort' for the Intel compiler. To run the resulting application, set the environment variable like this:
{{{
export FI_PROVIDER=verbs
}}}
In a Slurm job, please use the prun wrapper to start your application.
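A sketch of building and running with Intel MPI inside a Slurm job script (the program name is only an example):
{{{
module add intel impi prun
mpiicc -o myprog myprog.c     # Intel compiler wrapper; with a gnu module use mpicc/mpif90 instead
export FI_PROVIDER=verbs      # select the Infiniband (verbs) fabric
prun ./myprog                 # start the MPI processes via the prun wrapper
}}}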
Line 21: Line 94:
==== Building applications ====
64 bit MPI Applications can be compiled on any 64 bit SL5 machine, e.g. sl5-64.ifh.de.
== Batch System Access ==
/!\ '''ATTENTION''': The PAX is now based on the SLURM scheduling system.
=== Slurm Commands ===
The most important commands:
||[[http://slurm.schedmd.com/sinfo.html|sinfo]] ||Information about the cluster ||
||[[http://slurm.schedmd.com/squeue.html|squeue]] ||Show current job list ||
||[[http://slurm.schedmd.com/srun.html|srun]] ||Parallel command execution ||
||[[http://slurm.schedmd.com/sbatch.html|sbatch]] ||Submit a batch job ||
||[[http://slurm.schedmd.com/salloc.html|salloc]] ||Reserve resources for interactive commands ||
||[[http://slurm.schedmd.com/scancel.html|scancel]] ||Abort a job ||
||[[https://slurm.schedmd.com/sview.html|sview]]||Graphical user interface to view and modify Slurm state||
||[[http://slurm.schedmd.com/sacct.html|sacct]] ||Show accounting information ||
Line 24: Line 107:
==== Running your application ====
To run an MPI program outside the batch system, you must specify a machinefile listing all the machines and the number of cores your application should run on. A typical machine file looks like this:
{{{
pax0c slots=8
pax0d slots=8
pax0e slots=8
pax0f slots=8
}}}
The command line would look like this:
{{{
/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np 32 -machinefile ./machinefile ./program
}}}
=== Allocation ===
Slurm was configured to always schedule complete nodes to each job. The pax machines have hyperthreading enabled and each hardware thread is seen as a CPU core by Slurm, so by default, on a 32-core machine with hyperthreading, 64 MPI processes are assigned. To prevent that, use the option {{{-c 2}}} for sbatch, salloc or srun.
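For example, to get one MPI rank per physical core on two broadwell nodes (a sketch; the job script name is an example):
{{{
sbatch -p broadwell -N 2 -c 2 myjob.sh   # 2 x 32 ranks instead of 2 x 64 hardware threads
}}}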
=== Parallel Execution ===
Slurm has integrated execution support for parallel programs, replacing mpirun. To work around slight differences in the needed options, use prun instead of srun for starting MPI applications. You'll have to load the prun module first.
Line 37: Line 112:
More information on openmpi is in the openmpi FAQ: http://www.open-mpi.org/faq/
=== Mvapich / Mvapich2 ===
Three additional mpi implementations are installed on all pax machines:
{{{
/usr/lib64/mvapich/1.2.0-gcc/bin
/usr/lib64/mvapich2/1.4-gcc/bin
/usr/lib64/mvapich2/1.4-intel/bin
}}}
To use mvapich, add one of those versions to your path, compile your application with that mpi compiler and run it as specified here:
http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.4.html#x1-160005.2
=== MPI Support ===
Before running MPI programs, the LD_LIBRARY_PATH variable must be set first; this is done by loading the right environment module, e.g. {{{module add intel openmpi}}}.
Line 48: Line 115:
The machine file format is different from the one for openmpi, you must list the host name for every core you want to use, e.g. if you want to run four processes, two processes on each of pax19 and pax18:
{{{
pax18
pax19
pax18
pax19
}}}
== Batch System Access ==
=== Job scripts ===
Parameters to slurm can be set on the sbatch command line or in script lines starting with {{{#SBATCH}}}. The most important parameters are:
||-J ||job name ||
||--get-user-env ||copy environment variables ||
||-n ||number of cores ||
||-N ||number of nodes ||
||-t ||run time of the job, default is 30 minutes ||
||-A ||account, default the same as UNIX group ||
||-p ||partition of the cluster ||
||--mail-type ||configure email notifications, e.g. use --mail-type=ALL ||
Line 57: Line 126:
/!\ '''ATTENTION''': The PAX cluster will be split off the normal Zeuthen batch farm very soon! To access the PAX batch system you will need to source a script:
 * zsh users:
 {{{
[oreade38] ~ % . /usr/gridengine/pax/common/settings.sh
}}}
 * tcsh users:
 {{{
[oreade38] ~ $ source /usr/gridengine/pax/common/settings.csh
}}}
Be careful with {{{--get-user-env}}}: it will also copy loaded modules to the job.
Line 67: Line 128:
 Switching back to use the standard farm works similarly:
 * zsh users:
 {{{
[oreade38] ~ % . /usr/gridengine/default/common/settings.sh
}}}
 * tcsh users:
 {{{
[oreade38] ~ $ source /usr/gridengine/default/common/settings.csh
}}}
==== Time format ====
The run time of a job is given as minutes, as hours, minutes and seconds (HH:MM:SS), or as days and hours (DD-HH), e.g. {{{-t 02:30:00}}} for two and a half hours. The maximum run time was set to 48 hours.
Line 77: Line 131:
A job script designated for a parallel job needs to specify the parallel environment and the number of required CPUs. The parameter looks like this for up to 8 slots for 8 MPI processes on a single node:
==== Examples ====
An example job script is in [[attachment:slurm-mpi.job]]
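A minimal sketch of such a script (partition, module and program names are only examples):
{{{
#!/bin/bash
#SBATCH -J mpitest          # job name
#SBATCH -p haswell          # partition
#SBATCH -N 2                # number of nodes
#SBATCH -c 2                # one MPI rank per physical core
#SBATCH -t 01:00:00         # run time limit

module add gnu openmpi prun
prun ./myprog
}}}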
Line 79: Line 134:
{{{
#$ -pe multicore-mpi 8
}}}

For more MPI processes that have no big communication overhead, use -pe mpi.
=== Accounting ===
Jobs and their resource usage are stored in a database that is used for the fair-share part of the scheduler. You can view your account's jobs with the command {{{sacct}}}. With no parameters, only today's jobs are shown; to view all jobs since May 1st, use the command {{{sacct -S 2014-05-01}}}. To view jobs from other accounts as well, use the {{{--allusers}}} option.
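A possible invocation combining these options (the output field list is only an example):
{{{
sacct -S 2014-05-01 --allusers -o jobid,jobname,account,elapsed,state
}}}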
Line 86: Line 138:
Be sure to call the right mpirun version for your architecture. If your application was compiled for 64 bit, use
=== Local Disk Space ===
Each node has a local directory /scratch with up to 1TB of space. It is cleared automatically at the end of the job.

=== pax10 and pax11 I/O nodes ===
Most of the pax10 and pax11 machines have external 1 Gbit/s Ethernet connections to the storage. To allow faster storage access, four machines each in the pax10 and pax11 partitions are equipped with 10 Gbit/s Ethernet instead. To access them, you'll have to request the 10g feature in Slurm: {{{--constraint=10g*1}}}. That way, the first process, the one executing the job script, will run on one of the machines with faster connectivity.
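In a job script this can be written as (a sketch):
{{{
#SBATCH --constraint=10g*1
}}}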

=== Partitions and backfilling ===
The cluster consists of three separate partitions: broadwell (default, alias pax), haswell and sandybridge. Jobs can run on only one type of node. The special partition backfill is used for filling up otherwise empty nodes. Jobs running there are automatically terminated by Slurm if another job in one of the main partitions needs the nodes.
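To use it, submit to that partition, e.g. (a sketch):
{{{
sbatch -p backfill myjob.sh   # runs on otherwise idle nodes, may be terminated at any time
}}}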

== SL7 changes ==
As the versions and paths of the MPI implementations have changed, programs are not compatible between SL6 and SL7. You should rebuild your application on SL7, but you could also try singularity.

The 'module' command was replaced by a different, more powerful implementation called lmod. It doesn't list all available modules at once; instead it supports dependent modules, e.g. the MPI implementations built with 'gnu7' are shown only after {{{module add gnu7}}}.
==== Running EL6 software using Singularity ====
It is possible to run software built on EL6 in a [[Singularity]] container. This works with mvapich2 binaries by calling singularity in the batch script like this:
Line 88: Line 154:
/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS yourapp
mpiexec singularity exec /project/singularity/images/SL6.img yourbinary
Line 90: Line 156:
The MPI runtime will automatically select the right network type. However, Mvapich2 2.2 isn't optimized yet for Singularity, so this is slower than running native programs.
Line 92: Line 158:
For more demanding MPI jobs you can select one of the pax blade centers like this in your job script. You can request up to 128 slots, as a blade center contains 128 CPU cores:
For Openmpi, Singularity is supported only in Openmpi >= 2.1, so you'll have to rebuild your program with openmpi3 as installed in the SL6 singularity container:
Line 94: Line 161:
#$ -pe pax? 128

/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS yourapp
singularity exec /project/singularity/images/SL6.img /usr/lib64/openmpi-3.0/bin/mpicc yourprog.c -o yourprog.sl6
Line 98: Line 163:

If you want to use mvapich2 instead of openmpi from a batch job, you must first create the file ~/.mpd.conf that contains of one line like this:
and in the job script:
Line 101: Line 165:
MPD_SECRETWORD=password
module add gnu7 openmpi3 prun
prun singularity exec -B /scratch /project/singularity/images/SL6.img yourprog.sl6
Line 103: Line 168:
Then use this in your job script:
{{{
#$ -pe pax?-mvapich2 16
export MPD_CON_EXT="sge_$JOB_ID.$SGE_TASK_ID"
/usr/lib64/mvapich2/1.4-gcc/bin/mpiexec -n $NSLOTS your_program
}}}

Finally, here's a list of common pitfalls when using the pax batch system:
 * Please be aware that all requested resources (via the '''-l''' qsub switch) are meant '''per job slot'''. As the pax nodes only provide 24GB (8 core systems -> 3GB per job slot), you cannot request more than 3GB h_vmem in your job scripts. Otherwise your job won't start!
== Additional Software ==
The software installation is based on the [[http://openhpc.community|OpenHPC project]]. We provide only a subset of the available software. If you need any of the other [[https://github.com/openhpc/ohpc/wiki/Component-List-v1.3.5|available components]], send a request to zn-cluster@desy.de
Line 114: Line 172:
The application binary must be available to all nodes, which is why it should be placed in an AFS or Lustre directory.
Line 115: Line 174:
The application binary must be available to all nodes, that's why it should be placed in an AFS directory.
== Monitoring ==
Ganglia provides a web monitoring interface. These pages are only available from the internal network.
Line 117: Line 177:
== BLAS library ==
Both ATLAS and Goto``BLAS are available.
[[http://ganglia.zeuthen.desy.de/ganglia/?c=Parallel%20Clusters&m=load_one&r=hour&s=descending&hc=4&mc=2|interactive machines]] [[http://ganglia.zeuthen.desy.de/ganglia/?c=Slurm%20PAX%20farm&m=load_one&r=hour&s=descending&hc=4&mc=2|parallel batch machines]]
Line 120: Line 179:
 * ATLAS is in /opt/products/atlas

 * libgoto is in /usr/lib or /usr/lib64 respectively.
== Known Issues ==
 1. Openmpi3 has a bug that makes the program hang in certain situations: https://www.mail-archive.com/users@lists.open-mpi.org//msg31839.html Use openmpi instead.
 1. openmpi 1.10.x jobs crash on pax10-[28-31]. This is caused by the Mellanox Ethernet cards in these nodes. There are several workarounds:
  1. Use openmpi3 or mvapich2 when using these nodes
  1. Exclude them with the {{{-x pax10-[28-31]}}} option to sbatch.
  1. If you use '''only''' the nodes pax10-[28-31] with openmpi, exclude the default Infiniband device: set {{{OMPI_MCA_btl_openib_if_exclude=mlx4_0:1}}}
 1. You need to acquire an addressless Kerberos ticket for Slurm to work. This is the default on supported DESY machines. On self-maintained machines like notebooks, simply set {{{noaddresses=true}}} in the file {{{/etc/krb5.conf}}}. To check if your ticket is addressless, call {{{klist -v}}} (Heimdal klist only).
 1. The command {{{sbcast}}} cannot be used to copy a file to /scratch, as that is a bind-mounted directory. Use /batch/job.${SLURM_JOB_ID}.0/scratch as the target.
 1. The {{{module}}} command might be unavailable for tcsh login shell users. As a workaround, they can run {{{bash -l}}} and use the {{{--get-user-env}}} option in the job.
Line 125: Line 189:
