RES LaPalma User Manual

LaPalma User's Guide
Copyright © 2018

Table of contents
1. Introduction
2. System Overview
3. Online Documentation
3.1. Man pages
4. Connecting to LaPalma
4.1. Login node
4.2. Transferring files
4.3. Graphical applications
5. File Systems
5.1. Root Filesystem
5.2. Lustre Filesystem
5.3. Local Hard Drive
6. Running Jobs
6.1. Queues (QOS)
6.2. Submitting jobs
6.2.1. SLURM commands
6.2.2. Job directives
6.2.3. Examples
7. Software Environment
7.1. Modules
7.2. C Compilers
7.2.1. Distributed Memory Parallelism
7.2.2. Shared Memory Parallelism
7.2.3. Automatic Parallelization
7.2.4. 64 bit addressing
7.2.5. Optimization
7.3. FORTRAN Compilers
7.3.1. Distributed Memory Parallelism
7.3.2. Shared Memory Parallelism
7.3.3. 64 bit Addressing
7.4. Optimization
7.5. Debuggers
7.6. Software in LaPalma
8. Getting help
9. FAQ's
10. Acknowledgements
APPENDIX
A. SSH
A.1. Generating an SSH Key pair on Linux
A.1.1. Using the ssh-agent in Linux
A.1.1.1. Using the ssh-agent in a Terminal Session
A.1.1.2. Using ssh-agent in an X Session
A.2. Generating an SSH Key pair on Windows
A.2.1. Using the ssh-agent in Windows

1. Introduction
This user's guide for the LaPalma Supercomputer (v3) is intended to provide the minimum amount of
information needed by a new user of this system. As such, it assumes that the user is familiar with many of the
standard aspects of supercomputing, such as the Unix operating system.
We hope you can find most of the information you need to use our computing resources here: from applications and
libraries to technical documentation about LaPalma, how to include references in publications, and so on.
Please read this document carefully, and if any doubt arises do not hesitate to contact our support group (see section 8, Getting help).
2. System Overview
LaPalma comprises 252 IBM dx360 M4 compute nodes. Every node has sixteen E5-2670 cores at 2.6 GHz,
runs a Linux operating system, and provides 32 GB of RAM and 500 GB of local disk storage.
Two Bull R423 servers are connected to a pair of NetApp E5600 storage systems, providing a total of
346 TB of disk storage accessible from every node through the Lustre Parallel File System.
The networks that interconnect LaPalma are:
• Infiniband Network: high-bandwidth network used by parallel application communications.
• Gigabit Network: Ethernet network used by the nodes to mount their root file system remotely from
the servers, and the network over which Lustre works.
3. Online Documentation
3.1. Man pages
Information about most commands and installed software is available via the standard UNIX man command.
For example, to read about command-name just type in a shell inside LaPalma:
usertest@login1:~> man command-name
which displays information about that command on the standard output.
If you don't know the exact name of the command you want but you know the subject matter, you can use the
-k flag. For example:
usertest@login1:~> man -k compiler
This will print a list of all commands whose man-page description includes the word 'compiler'. You can then
run man on the exact command you were looking for.
To learn more about the man command itself, you can also type:
usertest@login1:~> man man
4. Connecting to LaPalma
You must use Secure Shell (ssh) tools to log in to or transfer files into LaPalma. We do not accept incoming
connections from protocols such as telnet, ftp, rlogin, rcp, or rsh. Once you are logged into
LaPalma you cannot make outgoing connections, for security reasons.
To get more information about the secure shell versions supported and how to get ssh for your system
(including Windows systems) see Appendix A.

LaPalma does not support authentication based on user and password, but a key-based authentication
mechanism. In order to get access to LaPalma you have to provide us your public ssh key via email
([email protected]). Take a look at Appendix A for generating your own public/private key pair.
Once you have provided your public ssh key you can get into the LaPalma system by connecting to the login node:
lapalma1.iac.es
Here you have an example of logging into LaPalma from a UNIX environment:
localsystem$ ssh -l usertest lapalma1.iac.es
+----------------------------------------------------------------------+
|                                                                      |
|                         Welcome to LaPalma                           |
|                                                                      |
|                                                                      |
|                                                                      |
|  * Please contact [email protected] for questions at any time      |
|                                                                      |
|  * User Guide located at /storage/LaPalmaUserGuide.pdf               |
|                                                                      |
+----------------------------------------------------------------------+
usertest@login1:~>
If you are on a Windows system, you need to download and install a Secure Shell client to perform the
connection to the machine (see Appendix A for more information).
Most of these applications are graphical and you will have to fill in some of the fields
offered; in the field 'Host name' or 'Remote Host name' you will need to enter lapalma1.iac.es. After this
procedure you will be logged into LaPalma.
The first time you connect to the LaPalma system, secure shell needs to exchange some initial
information to establish the communication. This consists of the acceptance of the RSA key of the
remote host; you must answer 'yes' or 'no' to confirm the acceptance of this key.
If you cannot get access to the system after following this procedure, first consult Appendix A for extended
information about Secure Shell, or contact us (see section 8 to know how to contact us).
4.1. Login node
Once you are connected to the machine, you will be presented with a UNIX shell prompt and you will
normally be in your home ($HOME) directory. If you are new to UNIX, you will have to learn the basics
before you can do anything useful.
The machine you will be logged into is the LaPalma login node (login1). This machine acts as a front end and
is typically used for editing, compiling, preparation/submission of batch executions, and as a gateway for
copying data into or out of LaPalma.
The execution of cpu-bound programs is not permitted on this node; if a compilation needs much more
cpu time than permitted, it must be done through the batch queue system.
It is not possible to connect directly to the compute nodes from the login nodes; all resource allocation is done
by the batch queue system.
4.2. Transferring files
To transfer files to LaPalma from Unix systems, you can use secure copy (scp) or secure ftp (sftp);
both tools have the same syntax as the old and insecure tools such as rcp (remote copy) and ftp.
As said before, no connections are allowed from inside LaPalma to the outside world, so all scp and
sftp commands have to be executed from your local machine and not inside LaPalma.

Here are some examples of each of these tools transferring files to LaPalma:
localsystem$ scp localfile [email protected]:
localsystem$ sftp [email protected]
sftp> put localfile
These are the ways to retrieve files from LaPalma to your local machine:
localsystem$ scp [email protected]:remotefile localdir
localsystem$ sftp [email protected]
sftp> get remotefile
On Windows systems, most secure shell clients come with a tool to make secure copies or secure ftp's.
There are several tools that fulfil these requirements; please refer to Appendix A, where you will
find the most common ones and examples of use.
4.3. Graphical applications
You can execute graphical applications from the login node; the only way to do this is by tunnelling all the
graphical traffic through the established Secure Shell connection.
You will need to have an X server running on your local machine to be able to show the graphical information.
Most UNIX flavors have an X server installed by default. In a Windows environment, you will probably
need to download and install some type of X server emulator (see Appendix A).
The second step in order to be able to execute graphical applications is to enable, in your secure shell
connection, the forwarding of the graphical information through the secure channel created. This is normally
done by adding the -X flag to the ssh command you normally use to connect to LaPalma.
Here you have an example:
localsystem$ ssh -X -l usertest lapalma1.iac.es
+----------------------------------------------------------------------+
|                                                                      |
|                         Welcome to LaPalma                           |
|                                                                      |
|                                                                      |
|  * Please contact [email protected] for questions at any time      |
|                                                                      |
|  * User Guide located at /storage/LaPalmaUserGuide.pdf               |
|                                                                      |
+----------------------------------------------------------------------+
usertest@login1:~>
For Windows systems, you will have to enable 'X11 forwarding'; that option normally resides in the
'Tunneling' or 'Connection' menu of the client configuration window (see Appendix A for further details).
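Once the connection with the -X flag is established, a quick way to verify that the forwarding works is to run any small X client from the LaPalma prompt. A minimal sketch, assuming a simple X utility such as xclock is available on the login node (if it is not, any other graphical program can be used for the test):
usertest@login1:~> echo $DISPLAY
usertest@login1:~> xclock &
If the DISPLAY variable is set (typically to something like localhost:10.0) and a small clock window appears on your local screen, the graphical forwarding is working correctly.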
5. File Systems
IMPORTANT: It is your responsibility as a user of the LaPalma system to back up all your critical data. NO
backup of user data will be done in any of the filesystems of LaPalma.
Each user has several areas of disk space for storing files. These areas may have size or time limits; please
read this section carefully to know the usage policy of each of these filesystems.
There are 3 different types of storage available inside a node:

• Root filesystem: the filesystem where the operating system resides.
• Lustre filesystems: Lustre is a distributed networked filesystem which can be accessed from all the
nodes.
• Local hard drive: every node has an internal hard drive.
Let's see them in detail.
5.1. Root Filesystem
The root file system, where the operating system is stored, does not reside in the node; it is an NFS filesystem
mounted from a Network Attached Storage (NAS) server.
As this is a remote filesystem, only data from the operating system should reside on it. The use of /tmp for
temporary user data is NOT permitted. The local hard drive can be used for this purpose instead, as
described in section 5.3.
Furthermore, the environment variable $TMPDIR is already configured to make normal applications use
the local hard drive to store their temporary files.
5.2. Lustre Filesystem
Lustre is an open-source, parallel file system that can provide fast, reliable data access from all nodes of the
cluster to a global filesystem, with remarkable scale capacity and performance. Lustre allows parallel
applications simultaneous access to a set of files (even a single file) from any node that has the Lustre file
system mounted, while providing a high level of control over all file system operations. These filesystems are
the recommended ones for most jobs, because Lustre provides high performance I/O by "striping" blocks of
data from individual files across multiple disks on multiple storage devices and reading/writing these blocks in
parallel. In addition, Lustre can read or write large blocks of data in a single I/O operation, thereby minimizing
overhead.
Even though there is only one Lustre filesystem mounted on LaPalma, there are different locations for
different purposes:
storage home: This location holds the home directories of all the users. When you log into LaPalma you start in
your home directory by default. Every user has their own home directory to store executables, their own
developed sources and their personal data.
storage projects: In addition to the home directory, there is a directory in /storage/projects for each group of
users of LaPalma. For instance, the group iac01 will have a /storage/projects/iac01 directory ready to use.
This space is intended to store data that needs to be shared between the users of the same group or project.
All the users of the same project share their common /storage/projects space and it is the responsibility of
each project manager to determine and coordinate the best use of this space, and how it is distributed or
shared between their users.
storage scratch: Each LaPalma user has a directory under /storage/scratch; you must use this space to
store temporary files of your jobs during their execution.
These three locations share the same quota in order to limit the amount of data that can be saved by
each group. Since the locations /storage/home, /storage/projects and /storage/scratch are in the same
filesystem, the quota assigned is the sum of “Disk Projects” and “Disk Scratch” established by the access
committee.
The quota and the usage of space can be consulted via the lfs quota command:
usertest@login1:~> lfs quota -g <GROUP> /storage
For example, if your group has been granted the following resources:
Disk Projects: 1000GB
Disk Scratch: 500GB
The quota command will report the sum of the two values:
usertest@login1:~> lfs quota -g usergroup /storage
Disk quotas for grp usergroup (gid 666):
    Filesystem  kbytes       quota       limit  grace  files  quota   limit  grace
     /storage/    5000  1500000000  1500000000      -    700  10000  100000      -
The number of files is limited as well. By default the quota is set to 100000 files.
If you need more disk space or a larger number of files, the person responsible for your project has to make a request for
the extra space needed, specifying the requested space and the reasons why it is needed. The request can be sent
by email or any other way of contacting the user support team, as explained in section 8 of this document.
storage apps: This location holds the applications and libraries that have already been installed on
LaPalma. Take a look at the directories, or at section 7 of this document, to know the applications available for
general use. Before installing any application that is needed by your project, first check whether this application is
already installed on the system. If some application that you need is not on the system, you will have to ask
our user support team to install it (check section 8 on how to contact us). If it is a general application with no
restrictions on its use, it will be installed in a public directory under /storage/apps so all users on
LaPalma can make use of it. If the application needs some type of license and its use must be restricted, a
private directory under /storage/apps will be created, so that only the authorised users of LaPalma can make use of
this application.
All applications installed on /storage/apps will be installed, controlled and supervised by the user support
team. This doesn't mean that users cannot help in this task; both can work together to get the best result.
The user support team can provide its wide experience in compiling and optimizing applications on the LaPalma
platform and the users can provide their knowledge of the application to be installed. Any general
application that has been modified in some way from its normal behavior by the project users for their own
study, and that may not be suitable for general use, must be installed under /storage/projects or /storage/home,
depending on the usage scope of the application, but not under /storage/apps.
5.3. Local Hard Drive
Every node has a local hard drive that can be used as local scratch space to store temporary files during
the execution of your jobs. This space is mounted on the /scratch directory. The amount of space within
the /scratch filesystem varies from node to node (depending on the total amount of disk space available). All
data stored in these local hard drives at the compute nodes will not be available from the login nodes. Local
hard drive data is not automatically removed, so each job should remove its data when it finishes.
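A minimal sketch of a job script that follows this recommendation is shown below; the group name, input/output file names and the binary are hypothetical, and the preconfigured $TMPDIR variable could be used instead of building the path by hand:
#!/bin/bash
#
#SBATCH -J local_scratch_example
#SBATCH -n 1
#SBATCH -t 01:00:00
#SBATCH -o local_scratch-%j.out
#SBATCH -e local_scratch-%j.err
#SBATCH -D .
# Create a private working directory on the node's local disk
WORKDIR=/scratch/$USER/$SLURM_JOBID
mkdir -p $WORKDIR
# Stage the input data from Lustre to the local disk
cp /storage/projects/mygroup/input.dat $WORKDIR/
cd $WORKDIR
$HOME/my_binary input.dat > output.dat
# Copy the results back to Lustre and clean up the local disk
cp output.dat /storage/scratch/$USER/
cd; rm -rf $WORKDIR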
6. Running Jobs
Slurm is the utility used at LaPalma for batch processing support, so all jobs must be run through it. This
section provides information for getting started with job execution at LaPalma.
In order to keep the login nodes under a proper load, a 10-minute limit on cpu time is set for processes
running interactively on these nodes. Any execution taking more than this limit should be carried out through
the queue system.
6.1. Queues (QOS)
The user's limits are assigned automatically to each particular user (depending on the resources granted by the
Access Committee). In any case, you are allowed to use the special queue "debug" in order to perform some fast,
short tests. To use the "debug" queue you need to include the #SBATCH --qos=debug directive.
Table 1. Queues
Queue         Max CPUs   Wall time limit
class_a       2400       72 hours
class_b       1200       48 hours
class_c       1200       24 hours
debug         64         10 min
interactive   1          1 hour
The specific limits assigned to each user depend on the priority granted by the access committee.
Users granted "high priority hours" will have access to a maximum of 4032 CPUs and a maximum
wall_clock_limit of 72 hours. For users with "low priority hours" the limits are 1200 CPUs and 24 hours. If
you need to increase these limits please contact the support group.
• class_a, class_b, class_c: queues assigned by the access committee and where normal jobs will be
executed; no special directive is needed to use these queues.
• debug: This queue is reserved for testing applications before submitting them to the 'production'
queues. Only one job per user is allowed to run simultaneously in this queue, and the execution time
is limited to 10 minutes. The maximum number of nodes per application is 32. Only a limited
number of jobs may be running at the same time in this queue. To use this queue add the directive
#SBATCH --qos=debug
• interactive: Jobs submitted to this queue will run on the interactive (login) node. It is intended for
GUI applications that may exceed the interactive cpu time limit. Note that only sequential jobs are
allowed. To use this queue launch the following command from login1:
salloc -p interactive
6.2. Submitting jobs
A job is the execution unit for SLURM. A job is defined by a text file containing a set of directives
describing the job, and the commands to execute.
6.2.1. SLURM commands
These are the basic commands to submit jobs:
sbatch <job_script>
submits a "job script" to the queue system (see below for job script directives).
squeue
shows all the jobs submitted.
scancel <job_id>
removes a job from the queue system, cancelling the execution of the processes, if they were
already running.
scontrol show job <job_id>
obtains detailed information about a specific job, including the assigned nodes and the possible
reasons preventing the job from running.
scontrol hold <job_id>
places a hold on the specified job. To release a held job, use scontrol release <job_id>.

6.2.2. Job directives
A job must contain a series of directives to inform the batch system about the characteristics of the job.
These directives appear as comments in the job script, with the following syntax:
#SBATCH --directive=<value>
Some directives have a shorter version; you can use both forms:
#SBATCH -d <value>
Additionally, the job script may contain a set of commands to execute. If not, an external script must be
provided with the 'executable' directive. Here you may find the most common directives:
#SBATCH -J <name_of_job>
The job name that will appear when the command squeue is executed. This name is established by the user
and is different from the job_id (assigned by the batch system).
#SBATCH --qos <queue_name>
The queue where the job is to be submitted. Leave this field empty unless you need to use the "debug" queue.
#SBATCH -t <wall_clock_limit>
The limit of wall clock time. This is a mandatory field and you must set it to a value greater than the real
execution time for your application and smaller than the time limits granted to the user. Notice that your job
will be killed after this period has elapsed. Shorter limits are likely to reduce the waiting time. The format is
HH:MM:SS or DD-HH:MM:SS.
#SBATCH -D <pathname>
The working directory of your job (i.e. where the job will run). If not specified, it is the current working
directory at the time the job was submitted.
#SBATCH -o <output_file>
The name of the file to collect the standard output (stdout) of the job. It is recommended to use %j in the name
of the file; slurm will then automatically include the job id, to avoid overwriting the output files if you submit
several jobs.
#SBATCH -e <error_file>
The name of the file to collect the standard error (stderr) of the job. It is recommended to use %j in the name of the
file; slurm will then automatically include the job id, to avoid overwriting the error files if you submit several jobs.
#SBATCH -N <number_nodes>
The number of nodes you are asking for. Bear in mind that each node has 16 cores, so by default each node
will execute 16 tasks, one per core.
#SBATCH --cpus-per-task=<number>
The number of cpus allocated for each task. This is useful for hybrid MPI+OpenMP applications, where each
process will spawn a number of threads. The number of cpus per task must be an integer between 1 and 16,
since each node has 16 cores (one for each thread).
#SBATCH --ntasks-per-node=<number>
The number of tasks allocated on each node. When an application uses more than 1.7 GB of memory
per process, it is not possible to run 16 processes in the same node with its 32 GB of memory. It can be
combined with --cpus-per-task to allocate the nodes exclusively. The number of tasks per node
must be an integer between 1 and 16 (bear in mind that some cores will stay idle when setting a number lower
than 16, so if it is not possible for you to use all 16 available cores, try to minimize the number of cores that
will be wasted).
#SBATCH --array=ini-end:step
This will enable Array Jobs. Array jobs and task generation can be used to run applications over different
inputs, as you could also do with GREASY in past versions of LaPalma or MareNostrum. It will create as
many jobs as you specify from ini to end with the given step (the step is 1 by default). You can get the current task
index using $SLURM_ARRAY_TASK_ID. A minimal sketch is shown below.
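For example, a minimal sketch of an array job script that runs the same program over ten numbered input files (the binary and input file names are hypothetical; %A and %a in the output file names stand for the array job ID and the task index):
#!/bin/bash
#
#SBATCH -J test_array
#SBATCH -n 1
#SBATCH -t 00:10:00
#SBATCH -o test_array-%A_%a.out
#SBATCH -e test_array-%A_%a.err
#SBATCH -D .
#SBATCH --array=1-10
# Each array task processes the input file selected by its task index
./serial_binary input_${SLURM_ARRAY_TASK_ID}.dat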
There are also a few SLURM environment variables you can use in your scripts:

Variable              Meaning
SLURM_JOBID           Specifies the job ID of the executing job
SLURM_NPROCS          Specifies the total number of processes in the job
SLURM_NNODES          Is the actual number of nodes assigned to run your job
SLURM_PROCID          Specifies the MPI rank (or relative process ID) for the current process. The
                      range is from 0 to (SLURM_NPROCS-1)
SLURM_NODEID          Specifies the relative node ID of the current job. The range is from 0 to
                      (SLURM_NNODES-1)
SLURM_LOCALID         Specifies the node-local task ID for the process within a job
SLURM_NODELIST        Specifies the list of nodes on which the job is actually running
SLURM_ARRAY_TASK_ID   Task ID inside the array job
SLURM_ARRAY_JOB_ID    Job ID of the array (the same for all array tasks, equal to the SLURM_JOBID of
                      the first task)
6.2.3. Examples
Example for a sequential job:
#!/bin/bash
#
#SBATCH -J test_serial
#SBATCH -n 1
#SBATCH -t 00:02:00
#SBATCH -o test_serial-%j.out
#SBATCH -e test_serial-%j.err
#SBATCH -D .
./serial_binary
The job would be submitted using:
usertest@login1:~/Slurm/Serial> sbatch slurm_serial.cmd

Example for a parallel (MPI) job with 64 tasks (4 nodes):
#!/bin/bash
#
#SBATCH -J test_mpi
#SBATCH -N 4
#SBATCH -t 00:30:00
#SBATCH -o test_mpi-%j.out
#SBATCH -e test_mpi-%j.err
#SBATCH -D .
module purge
module load gcc
module load openmpi
mpirun ./mpi_binary
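For a hybrid MPI+OpenMP job, the --ntasks-per-node and --cpus-per-task directives described in section 6.2.2 are combined. A minimal sketch with 8 MPI tasks (4 per node) and 4 OpenMP threads per task, where hybrid_binary is a hypothetical executable:
#!/bin/bash
#
#SBATCH -J test_hybrid
#SBATCH -N 2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=4
#SBATCH -t 00:30:00
#SBATCH -o test_hybrid-%j.out
#SBATCH -e test_hybrid-%j.err
#SBATCH -D .
module purge
module load gcc
module load openmpi
# One OpenMP thread per cpu allocated to each MPI task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
mpirun ./hybrid_binary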
7. Software Environment
7.1. Modules
All installed software (compilers, applications, numerical and graphical libraries, tools, etc.) can be found at
/storage/apps/ or directly /apps. There is a directory for each application (in uppercase letters) and then a subdirectory with
the installed versions. To use any of this software remember to load the environment first with modules.
• Get help
% module help
• List all available software
% module avail
• Load specific software (default version)
% module load gcc
• Load specific software and version
% module load gcc/7.2.0
• List all loaded software
% module list
• Unload specific software
% module unload gcc
• Unload all software
% module purge
• Change version of software
% module load gcc/4.8.5
% module switch gcc/4.8.5
7.2. C Compilers
In LaPalma you can find the following C/C++ compilers:
gcc / g++ -> GNU Compilers for C/C++, versions 4.8.5 and 7.2.0 (the default). You can choose the compiler
you want to use with modules.
% module load gcc
% module load gcc/7.2.0
% module load gcc/4.8.5
% man gcc
% man g++
All invocations of the C or C++ compilers follow these suffix conventions for input files:
.C, .cc, .cpp, or .cxx   C++ source file
.c                       C source file
.i                       preprocessed C source file
.so                      shared object file
.o                       object file for the ld command
.s                       assembler source file
By default, the preprocessor is run on both C and C++ source files.
7.2.1. Distributed Memory Parallelism
MPI compilations are done using the OpenMPI compiler wrappers (at this moment no Intel license is available for
the parallel compilers). There are several versions of OpenMPI; by default 3.0.0 is used. Invoking mpicc
enables the program to run across several nodes. Of course, you are responsible for using a
library such as MPI to arrange communication and coordination in such a program. The MPI compiler wrappers
set the include path and library paths to pick up the MPI library.
% module load openmpi
% mpicc a.c -o a.exe
7.2.2. Shared Memory Parallelism
The GCC C and C++ compilers support a variety of shared-memory parallelism. OpenMP directives are fully
supported when using -fopenmp.
% gcc -fopenmp -o exename filename.c
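At run time the number of threads is controlled with the standard OMP_NUM_THREADS environment variable; for example, to use all 16 cores of a node:
% export OMP_NUM_THREADS=16
% ./exename
Inside a batch job it is convenient to set this variable from the --cpus-per-task value, as in the hybrid example of section 6.2.3.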
7.2.3. Automatic Parallelization
The GCC compiler will attempt to automatically parallelize simple loop constructs. Use the option
-ftree-parallelize-loops=N, where N is the number of threads you want to use.
% gcc -ftree-parallelize-loops=16 filename.c
7.2.4. 64 bit addressing
By default all compilers use 64 bit addressing.
7.2.5. Optimization
The optimization level that we recommend for LaPalma (E5-2670 Sandy Bridge processors) is:
-O3 -march=native
Applications compiled with -march=native may only run properly on machines with the same
processor type as the one where they were compiled.

7.3. FORTRAN Compilers
In LaPalma you can find the GCC Fortran compiler (gfortran), versions 7.2.0 and 4.8.5.
% man gfortran
7.3.1. Distributed Memory Parallelism
The mpifort wrapper (OpenMPI) allows you to use MPI calls to get parallelism (there are other scripts like
mpif77 or mpif90, but they are just symlinks to the same wrapper).
% module load openmpi
% mpifort a.f -o a.exe
7.3.2. Shared Memory Parallelism
OpenMP directives are fully supported when using -fopenmp.
% gfortran -fopenmp -o exename filename.f
7.3.3. 64 bit Addressing
By default all compilers use 64 bit addressing.
7.4. Optimization
The optimization level that we recommend for LaPalma (Intel E5-2670) is:
-O3 -march=native
7.5. Debuggers
GDB (GNU DEBUGGER):
/usr/bin/gdb
7.6. Software in LaPalma
There is a range of different software installed in LaPalma (compilers, applications, libraries, tools, etc.).
Please use modules to see all installed software. If you need a particular software package or version that is not already
installed, please contact us.
• List all available software
% module avail
• Load specific software and version
% module load gcc/7.2.0

8. Getting help
User questions and support are handled at: [email protected]. If you need assistance, please supply us with
the nature of the problem, the date and time the problem occurred, and the location of any other relevant
information, such as output files.
9. FAQ's
9.1. How can I get some help?
See section 8 (Getting help) for how to contact the user support team.
9.2. How do I know the position of my first job in the queue?
You can use the command:
scontrol show job <job_ID>
It shows, among other information, the estimated time for the specified job to be executed (check the value of the StartTime
field).
9.3. How can I see the status of my jobs in the queue?
The following command will provide you with information about your jobs in the queues:
squeue
To obtain detailed information about a specific job:
scontrol show job <job_ID>
9.6. What version of MPI is currently available at LaPalma?
Currently we have installed OpenMPI 3.0.0, with support for the MPI standard up to version 3.1.
9.7. Which compilers are available at LaPalma?
You can find the GCC and OpenMPI compilers available in LaPalma.
9.8. What options are recommended to compile in LaPalma?
The recommended options to compile in LaPalma are:
-O3 -march=native
9.9. Should I be careful with the memory consumption of my jobs?
Yes, you should. Each one of the LaPalma nodes has 32 GB of RAM shared by 16 cores. Up to 90%
of this memory can be consumed by user jobs, while at least 10% has to be available for the
Operating System and daemons. According to that, you must limit the memory consumption of your
job to 1.8 GB per process (which is 28.8 GB per node when there is one task per core).
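If your application needs more than that per process, the usual workaround is to run fewer tasks per node so that the 32 GB are shared among fewer processes (see the --ntasks-per-node directive in section 6.2.2). A minimal sketch for a hypothetical MPI binary needing about 3 GB per process:
#SBATCH -N 8
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=2
mpirun ./big_memory_binary
With 8 tasks per node each process can use roughly twice the default memory, and --cpus-per-task=2 keeps the whole node (16 cores) allocated to the job.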
9.10. Where should I install programs common to all the members of my project group?
You should install programs accessible to all your group members in the /storage/projects filesystem
(all the groups have a directory such as /storage/projects/<group_id>).
9.11. Where should I store temporary data?
You can use the local hard disk of the node (/scratch) to store temporary data for your jobs.
9.12. Is there any way to make my jobs wait less in the queue before running?
You should tune the directive #SBATCH -t <wall_clock_limit> to the expected job duration.
This value is used by the scheduler to decide when to run your job, so shorter values are likely to reduce the
waiting time. However, notice that a job exceeding its wall_clock_limit will be cancelled, so it is
recommended to work with a small safety margin.
9.13. Should I be careful with the Input/Output over the parallel filesystem (Lustre)?
The parallel filesystem can be a bottleneck when different processes of one job are writing to Lustre along
the execution. In this kind of job, one possible way to improve performance is to copy the
data needed by each job to the local scratch at the beginning and to copy the results back to Lustre at the end (with
this scheme, most of the I/O is performed locally; see the example in section 5.3). This scheme is also recommended
for massive sets of sequential jobs.
10. Acknowledgements
You should mention LaPalma in the acknowledgements of your papers or any other publications where you have used
it:
"The author thankfully acknowledges the technical expertise and assistance provided by the Spanish
Supercomputing Network (Red Española de Supercomputación), as well as the computer resources used: the
LaPalma Supercomputer, located at the Instituto de Astrofísica de Canarias."

APPENDIX
A. SSH
SSH is a program that enables secure logins over an insecure network. It encrypts all the data passing both
ways, so that if it is intercepted it cannot be read. It also replaces the old and insecure tools like telnet, rlogin,
rcp, ftp, etc. SSH is client-server software. Both machines must have ssh installed for it to work.
We have already installed the ssh server on our machines. You must have an ssh client installed on your local
machine. SSH is available without charge for almost all versions of Unix. The IAC recommends the use of the
OpenSSH client that can be downloaded from http://www.openssh.org, but any client compatible with SSH
version 2 can be used.
To log in to LaPalma with SSH you have to provide a public key. If you have not already got one,
you can generate a public/private key pair with the following instructions.
A.1. Generating an SSH Key pair on Linux
On your private Linux workstation enter the command:
my-private-user@mymachine> ssh-keygen -b 4096 -t rsa
Generating public/private rsa key pair.
Accept the default location to store the key (~/.ssh/id_rsa) by pressing Enter (strongly recommended) or enter
an alternative location.
Enter file in which to save the key (/home/my-private-user/.ssh/id_rsa):
Created directory '/home/my-private-user/.ssh'.
Enter a passphrase consisting of 10 to 30 characters. The same rules as for creating safe passwords apply. It is
strongly advised to refrain from specifying no passphrase.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/my-private-user/.ssh/id_rsa.
Your public key has been saved in /home/my-private-user/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:9HoaWWfUmiM+uk1l9VrAX5cxR2CKd5YPqGvpOpCu5bY my-private-user@mymachine
The key's randomart image is:
+---[RSA 4096]----+
| o=o |
| . * .= |
| . . X S.o |
| . . + * *o |
| .F = O = |
| o = O . o |
| ...o O . |
| oo .X . |
| .oE..+o |
+----[SHA256]-----+
You should make absolutely sure that the private key (~/.ssh/id_rsa) is not accessible by anyone other than
yourself (always set its permissions to 0600). The private key must never fall into the hands of another person.
To change the passphrase of an existing key pair, use the command:
my-private-user@mymachine> ssh-keygen -p

Once the public/private key pair is generated you have to send the public key file ~/.ssh/id_rsa.pub to
[email protected]
A.1.1. Using the ssh-agent in Linux
When doing lots of secure shell operations it is cumbersome to type the SSH passphrase for each such
operation. Therefore, the SSH package provides another tool, ssh-agent, which retains the private keys for
the duration of an X or terminal session. All other windows or programs are started as clients of the ssh-agent.
By starting the agent, a set of environment variables is set, which will be used by ssh, scp, or sftp to
locate the agent for automatic login. See the ssh-agent man page for details.
After the ssh-agent is started, you need to add your keys by using ssh-add. It will prompt for the passphrase.
After the passphrase has been provided once, you can use the secure shell commands within the running session
without having to authenticate again.
A.1.1.1. Using the ssh-agent in a Terminal Session
In a terminal session you need to manually start the ssh-agent and then call ssh-add afterwards. There
are two ways to start the agent. The first example given below starts a new Bash shell on top of your existing
shell. The second example starts the agent in the existing shell and modifies the environment as needed.
ssh-agent -s /bin/bash
eval $(ssh-agent)
After the agent has been started, run ssh-add to provide the agent with your keys, as in the example below.
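A minimal sketch of the whole sequence, assuming the key was stored in the default location ~/.ssh/id_rsa:
my-private-user@mymachine> eval $(ssh-agent)
my-private-user@mymachine> ssh-add ~/.ssh/id_rsa
Enter passphrase for /home/my-private-user/.ssh/id_rsa:
my-private-user@mymachine> ssh -l usertest lapalma1.iac.es
From this point on, new ssh, scp or sftp connections opened from this session will not ask for the passphrase again.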
A.1.1.2. Using ssh-agent in an X Session
To invoke ssh-add to add your keys to the agent at the beginning of an X session, do the following:
- Log in as the desired user and check whether the file ~/.xinitrc exists.
- If it does not exist, use an existing template or copy it from /etc/skel:
if [ -f ~/.xinitrc.template ]; then mv ~/.xinitrc.template ~/.xinitrc; \
else cp /etc/skel/.xinitrc.template ~/.xinitrc; fi
- If you have copied the template, search for the following lines and uncomment them. If ~/.xinitrc
already existed, add the following lines (without comment signs).
# if test -S "$SSH_AUTH_SOCK" -a -x "$SSH_ASKPASS"; then
# ssh-add < /dev/null
# fi
- When starting a new X session, you will be prompted for your SSH passphrase.
A.2. Generating an SSH Key pair on Windows
On Windows systems the IAC recommends the use of PuTTY. It is a free SSH client that you can download from
https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html, but any client compatible with
SSH version 2 can be used.
In the following lines we describe how to install, configure and use the ssh client under Windows systems.
Once the client has been installed, launch PuTTYgen in order to generate the ssh key.

Select RSA as the type of key, enter 4096 in the “Number of bits” field and click on “Generate”.

The tool requires that you move the mouse randomly while the key is being generated.
Click on “Save public key” to store the public key and send it to [email protected].
Enter a passphrase consisting of 10 to 30 characters. The same rules as for creating safe passwords apply. It is
strongly advised to refrain from specifying no passphrase.
After introducing the key passphrase, click on “Save private key”.

Keep the private key file in a safe location and do not share it with anyone.
A.2.1. Using the ssh-agent in Windows
As in the case of Linux, the PuTTY package provides a tool to avoid typing the SSH passphrase for each
connection. The name of the tool is Pageant; when you launch it, an icon is displayed in the taskbar.
Double-click on the icon and the list of keys is shown.

Push “Add Key” and select the private key previously generated.
Enter the passphrase.
The list is updated and you can click on “Close”.