Usage of the Linux Clusters at DESY Zeuthen
There are 8 dedicated parallel clusters (blade centers) in testing mode, but you can also run parallel MPI jobs in the SGE farm. The documentation in Batch_System_Usage applies there.
For discussions and information regarding the usage of the PAX cluster a mailing list has been introduced: <zn-cluster AT desy DOT de>. To get subscribed to that list, send an email to <sympa AT desy DOT de> with the subject: subscribe zn-cluster
Since SL5, all batch worker nodes have the openmpi implementation of the MPI standard installed. Recently the machines were upgraded to the default SL5.5 packages of openmpi. For 64 bit applications use the installation in /usr/lib64/openmpi/1.4-gcc/bin, for 32 bit use the binaries from /usr/lib/openmpi/1.4-gcc/bin.
Additional openmpi versions supporting the Intel and PGI compilers are installed as well.
If you don't want to specify the full path to your preferred MPI implementation, configure a default by using the ini command or running mpi-selector-menu on a build machine.
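As a sketch of how that selection can be done non-interactively (the version strings shown are hypothetical; take the real names from the --list output):

mpi-selector --list                                 # show the installed MPI stacks
mpi-selector --set openmpi-1.4-gcc-x86_64 --user    # hypothetical name taken from the list
mpi-selector --query                                # verify the default; it takes effect in new login shells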
64 bit MPI Applications can be compiled on any 64 bit SL5 machine, e.g. sl5-64.ifh.de.
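For example, a 64 bit application could be built with the openmpi compiler wrapper like this (the source file name is only an illustration); the Fortran and C++ wrappers (mpif90, mpicxx) live in the same directory:

/usr/lib64/openmpi/1.4-gcc/bin/mpicc -o program program.c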
Running your application
To run an MPI program outside the batch system, you must specify a machinefile listing all the machines and the number of cores your application should run on. A typical machine file looks like this:
pax0c slots=8
pax0d slots=8
pax0e slots=8
pax0f slots=8
The command line would look like this:
/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np 32 -machinefile ./machinefile ./program
More information on openmpi is in the openmpi FAQ: http://www.open-mpi.org/faq/
Mvapich / Mvapich2
Three additional mpi implementations are installed on all pax machines:
/usr/lib64/mvapich/1.2.0-gcc/bin
/usr/lib64/mvapich2/1.4-gcc/bin
/usr/lib64/mvapich2/1.4-intel/bin
To use mvapich, add one of these versions to your PATH, compile your application with its mpi compiler, and run it as described here: http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.4.html#x1-160005.2
The machine file format is different from the one for openmpi: you must list the host name once for every process you want to run on it, e.g. if you want to run four processes, two on each of pax18 and pax19:
pax18
pax19
pax18
pax19
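As a sketch, an interactive mvapich2 run using mpirun_rsh (described in the linked user guide) could then look like this; the program name is only an illustration:

export PATH=/usr/lib64/mvapich2/1.4-gcc/bin:$PATH
mpicc -o program program.c
mpirun_rsh -np 4 -hostfile ./machinefile ./program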
Batch System Access
ATTENTION: The PAX cluster will be split off the normal Zeuthen batch farm very soon! To access the PAX batch system you will need to source a script:
- zsh users:
[oreade38] ~ % . /usr/gridengine/pax/common/settings.sh
- tcsh users:
[oreade38] ~ $ source /usr/gridengine/pax/common/settings.csh
Switching back to use the standard farm works similarly:
- zsh users:
[oreade38] ~ % . /usr/gridengine/default/common/settings.sh
- tcsh users:
[oreade38] ~ $ source /usr/gridengine/default/common/settings.csh
A job script designated for a parallel job needs to specify the parallel environment and the number of required CPUs. For example, to request 8 slots for 8 MPI processes on a single node, the parameter looks like this:
#$ -pe multicore-mpi 8
For a larger number of MPI processes without a big communication overhead, use -pe mpi.
Be sure to call the right mpirun version for your architecture. If your application was compiled for 64 bit, use
/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS yourapp
The MPI runtime will automatically select the right network type.
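Putting this together, a minimal job script for an 8 process run on a single node might look like this (the binary name is an illustration):

#!/bin/zsh
#$ -cwd
#$ -pe multicore-mpi 8
/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS ./yourapp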
For more demanding MPI jobs you can select one of the pax blade centers like this in your job script. You can request up to 128 slots, as a blade center contains 128 CPU cores:
#$ -pe pax? 128
/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS yourapp
If you want to use mvapich2 instead of openmpi from a batch job, you must first create the file ~/.mpd.conf, which consists of a single line containing the MPD secret word.
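As a sketch, assuming the standard MPD configuration format (check the mvapich2 user guide for the exact syntax), the file could be created like this:

echo "secretword=choose_a_private_string" > ~/.mpd.conf   # pick your own secret word
chmod 600 ~/.mpd.conf                                      # mpd requires the file to be readable only by you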
Then use this in your job script:
#$ -pe pax?-mvapich2 16
export MPD_CON_EXT="sge_$JOB_ID.$SGE_TASK_ID"
/usr/lib64/mvapich2/1.4-gcc/bin/mpiexec -n $NSLOTS your_program
Finally, here's a list of common pitfalls when using the pax batch system:
Please be aware that all resources requested via the -l qsub switch are meant per job slot. As the pax nodes only provide 24GB of memory (8 core systems -> 3GB per job slot), you cannot request more than 3GB h_vmem in your job scripts; otherwise your job won't start!
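For illustration, a per-slot memory request that stays within that limit could look like this in the job script (the values are examples):

#$ -pe mpi 16
#$ -l h_vmem=2G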
The application binary must be available on all nodes; that's why it should be placed in an AFS directory.
Both ATLAS and GotoBLAS are available; a linking sketch follows the list below.
- ATLAS is in /opt/products/atlas
- libgoto is in /usr/lib (32 bit) or /usr/lib64 (64 bit).
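A sketch of linking against these libraries (the exact library names and the lib subdirectory under /opt/products/atlas are assumptions; check the installation):

gcc -o myapp myapp.c -lgoto -lpthread                              # GotoBLAS from /usr/lib64
gcc -o myapp myapp.c -L/opt/products/atlas/lib -lcblas -latlas     # ATLAS (assumed layout)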
Paralleles Rechnen in Zeuthen - die neuen Cluster (Parallel computing in Zeuthen - the new clusters), 04/27/10, technical seminar
HPC-Clusters at DESY Zeuthen, 11/22/06, technical seminar