HPCI Presentation
Kulathep Charoenpornwattana
March 12, 2007
Outline
• Parallel programming with MPI
• Running MPI applications on Azul & Itanium
• Running MPI applications on LONI
Parallel Programming with MPI
Parallel Programming
• Parallel programming is the process of splitting a task into smaller pieces and executing them simultaneously on multiple processors to obtain results faster.
• Applications must be written in one of the parallel programming models in order to take advantage of a High Performance Computing cluster.
• The most commonly used parallel programming model in high performance computing environments is the Message Passing Interface (MPI).
MPI Programming
MPI is a library of routines that provides the functionality developers need to write parallel applications.
MPI has a total of 126 functions, but many MPI applications are written with no more than 10 basic functions.
Example of MPI application

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int processID;       // Identifier of process (rank)
    int noprocess;       // Number of processes
    char compName[100];  // Hostname
    int nameSize;        // Length of name

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &noprocess);
    MPI_Comm_rank(MPI_COMM_WORLD, &processID);
    MPI_Get_processor_name(compName, &nameSize);
    printf("Hello from process %d of %d on %s\n", processID, noprocess, compName);
    MPI_Finalize();
    return 0;
}
MPI functions
MPI_Init(*argc, **argv)
is used to initialize an MPI session. Every MPI application must call MPI_Init() before any other MPI function.
MPI_Finalize()
is called to shut down an MPI session. MPI_Finalize() should be called at the end of every MPI application to free resources.
MPI Programming
MPI_Comm_size(comm, size)
is used to determine the total number of processes running in a communicator.
MPI_Comm_rank(comm, rank)
is used to determine the unique identifier (rank) of the current process.
MPI Programming
[Diagram: processes with ranks 0-5 inside the MPI_COMM_WORLD communicator]
- MPI_COMM_WORLD is a communicator created by MPI_Init()
- All processes are members of MPI_COMM_WORLD by default
MPI Programming (Cont.)
MPI_Send(buf, count, datatype, dest, tag, comm)
is used to send information from one process to another process.
MPI_Recv(buf, count, datatype, source, tag, comm, status)
is used to receive information from another process.
buf = address of the buffer
count = number of elements to send/receive (i.e. 1, 2, 3, …)
datatype = datatype of each element (i.e. MPI_CHAR, MPI_INT, MPI_UNSIGNED, etc.)
dest/source = process id of the destination/source process
tag = message tag for distinguishing among multiple messages
comm = communicator (i.e. MPI_COMM_WORLD)
* Each call to MPI_Send must be matched with a corresponding call to MPI_Recv in the receiving process
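To make the send/receive pair concrete, here is a minimal sketch (not from the slides; the variable names are invented) in which rank 0 sends one integer to rank 1:

```c
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int rank, size, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {  // this example needs a sender and a receiver
        fprintf(stderr, "Run with at least 2 processes\n");
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        value = 42;
        // Send one MPI_INT to rank 1 with tag 0
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        // Receive one MPI_INT from rank 0 with tag 0
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Compile with mpicc and run with at least two processes, e.g. $ mpirun -np 2 ./send_recv.o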
Example of MPI programming

// Count how many 0's and 1's are in an array
…
if (rank == 0) {
    tsize = array_size / num_process;
    for (i = 1; i < num_process; i++) {
        MPI_Send(&tsize, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
        MPI_Send(array + (i * tsize), tsize, MPI_INT, i, 0, MPI_COMM_WORLD);
        …
    }
} else { // compute nodes
    // Receive the size of the array
    MPI_Recv(&tsize, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
    // Allocate memory to hold the array
    int *array = (int *) malloc(sizeof(int) * tsize);
    // Receive the array
    MPI_Recv(array, tsize, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
    // Start counting
    for (j = 0; j < tsize; j++) {
        if (array[j] == 0)
            zero += 1;
    }
}
…
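The counting example on this slide is truncated; for reference, a complete, self-contained sketch of the same pattern, with the partial counts sent back to rank 0, might look like the following (the array contents, its size, and the message tags are assumptions made for illustration):

```c
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int rank, num_process, tsize, i, j, zero = 0, total = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &num_process);

    if (rank == 0) {
        // For simplicity, assumes num_process divides array_size evenly
        int array_size = 1000;
        int *array = (int *) malloc(sizeof(int) * array_size);
        for (i = 0; i < array_size; i++)
            array[i] = i % 2;                 // alternating 0's and 1's

        tsize = array_size / num_process;
        for (i = 1; i < num_process; i++) {   // distribute the slices
            MPI_Send(&tsize, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
            MPI_Send(array + (i * tsize), tsize, MPI_INT, i, 0, MPI_COMM_WORLD);
        }

        for (j = 0; j < tsize; j++)           // rank 0 counts its own slice
            if (array[j] == 0) zero++;
        total = zero;

        for (i = 1; i < num_process; i++) {   // collect the partial counts
            MPI_Recv(&zero, 1, MPI_INT, i, 1, MPI_COMM_WORLD, &status);
            total += zero;
        }
        printf("Found %d zeros\n", total);
        free(array);
    } else { // compute nodes
        MPI_Recv(&tsize, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        int *array = (int *) malloc(sizeof(int) * tsize);
        MPI_Recv(array, tsize, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        for (j = 0; j < tsize; j++)
            if (array[j] == 0) zero++;
        // Send the local count back to rank 0 with tag 1
        MPI_Send(&zero, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
        free(array);
    }

    MPI_Finalize();
    return 0;
}
```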
Compiling & Executing MPI on
Azul & Itanium
Compiling MPI Applications
• Azul supports 3 MPI programming environments:
– LAM/MPI (default)
– MPICH
– OpenMPI
• Itanium supports 2 MPI programming environments:
– LAM/MPI (default)
– MPICH
• Users can switch between environments using switcher:
– $ switcher mpi --show // display current environment
– $ switcher mpi --list // display available environments
– $ switcher mpi = [lammpi, mpich, openmpi] // switch environment
Compiling MPI Applications
To compile MPI applications use:
- C
$ mpicc source.c -o your_app.o
- C++
$ mpic++ source.cpp -o your_app.o
- Fortran
$ mpif77 source.f -o your_app.o
Executing MPI Applications
• Using LAM/MPI
– Obtain the host file from /home/azul/host.lst
• host.lst contains the hostnames of the compute nodes in the cluster.
– Boot the runtime system with lamboot
• $ lamboot -v host.lst
– Execute the application with mpirun (xx = number of processes)
• $ mpirun -np xx your_app.o
– Clean up any leftover processes with lamclean
• $ lamclean
– Shut down the runtime system with lamhalt
• $ lamhalt
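Putting the LAM/MPI steps together, a typical session might look like this (the process count and application name are examples):

```shell
# Boot the LAM runtime on the nodes listed in host.lst
lamboot -v /home/azul/host.lst

# Run the application on 4 processes
mpirun -np 4 ./your_app.o

# Clean up any leftover processes, then shut the runtime down
lamclean
lamhalt
```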
Executing MPI Applications
• Using MPICH
– Obtain the host file from /home/azul/host.lst
– Execute the application with mpirun
• $ mpirun -np xx -machinefile host.lst your_app.o
Executing MPI Applications
• Using OpenMPI
– Obtain the host file from /home/azul/host.lst
– $ mpirun -np xx --hostfile host.lst your_app.o
Submitting jobs to
Azul & Itanium
Submitting jobs
• Jobs can be submitted through the default scheduling system (Torque)
• $ qsub qsubfile.qsub // submits a job to PBS
• $ qstat // displays job status
• $ qdel jobID // deletes a job
• $ pbsnodes -a // displays node status
Submitting jobs
• The qsub command only accepts qsub scripts. Example of a qsub file:

#!/bin/sh
#PBS -N Myjob // Job's name
#PBS -o output.txt // Output file
#PBS -e error.txt // Error file
#PBS -q workq // Submit to workq

lamboot -v /home/ter/hosts.lst
mpirun -np 5 /home/ter/my_app.o
exit 0
Demo …
Compiling & Executing MPI on
LONI
Compiling MPI on LONI
To compile MPI applications on LONI use:
- C
$ mpcc_r source.c -o your_app.o
- C++
$ mpCC_r source.cpp -o your_app.o
- Fortran
$ mpxlf_r source.f -o your_app.o
Executing MPI Apps on LONI
• Currently, LONI doesn't allow users to run jobs interactively.
• All jobs must be submitted through the job scheduler (LoadLeveler).
• If you need to run interactive jobs (mpirun), use Azul or Itanium instead.
Submitting jobs to
LONI
Submitting job to LONI
• Jobs can be submitted through LONI's scheduling system (LoadLeveler)
• $ llsubmit script // submits a job
• $ llq // displays job status
• $ llcancel jobID // deletes a job
• $ llstatus // displays node status
Example of Loadleveler script
# @ shell = /bin/ksh
# @ initialdir = ~/
# @ output = ~/output.txt
# @ error = ~/error.txt
# @ class = workq
# @ job_type = parallel
# @ node = 2
# @ tasks_per_node = 8
# @ queue

./helloworld
Questions/Answers
Got Questions?
References
• LONI user guide: https://www.loni.org/systems/help/users_guide.php
• Azul & Itanium user guide: http://hpci.latech.edu
• MPI: http://webct.ncsa.uiuc.edu:8900/public/MPI/