MPI: Message-Passing Interface
Chapter 2
MPI - (Message Passing Interface)
MPI is a message-passing library standard developed by a group of academic and industrial partners to foster more widespread use and portability.
Defines routines, not the implementation. Several free implementations exist (e.g., MPICH, Open MPI).
Using SPMD Computational Model
```c
int main(int argc, char *argv[])
{
    int myrank;

    MPI_Init(&argc, &argv);

    /* find process rank */
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    if (myrank == 0)
        master();
    else
        slave();

    MPI_Finalize();
    return 0;
}
```
where master() and slave() are to be executed by the master process and the slave process(es), respectively.
Communicators
Defines the scope of a communication operation. Processes have ranks associated with the communicator.
Initially, all processes are enrolled in a "universe" called MPI_COMM_WORLD, and each process is given a unique rank, a number from 0 to p - 1 for p processes.
Other communicators can be established for groups of processes. A set of MPI routines exists for forming communicators; a sketch of one follows.
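One such routine is MPI_Comm_split(), which partitions an existing communicator by a "color" value. A minimal sketch, not taken from the slides (the communicator name is hypothetical):

```c
/* Sketch: split MPI_COMM_WORLD into two communicators, one holding the
   even-ranked processes and one holding the odd-ranked processes. */
int world_rank, new_rank;
MPI_Comm half_comm;  /* hypothetical name */

MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
MPI_Comm_split(MPI_COMM_WORLD,
               world_rank % 2,  /* color: equal colors group together */
               world_rank,      /* key: orders ranks in the new communicator */
               &half_comm);

MPI_Comm_rank(half_comm, &new_rank);  /* rank within the new communicator */
MPI_Comm_free(&half_comm);
```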
MPI Blocking Routines
Return when "locally complete" - when the location used to hold the message can be used again or altered without affecting the message being sent.
A blocking send will send the message and return - this does not mean that the message has been received, just that the process is free to move on without adversely affecting the message.
[Diagram: Process 1 calls send() and continues; the message is held in a message buffer until Process 2 calls recv() and reads it. Time runs downward.]

Message buffer: needed between source and destination to hold the message. It allows the sender to return before the message is read by the receiver.
Blocking Send

MPI_Send(buf, count, datatype, dest, tag, comm)

    buf      - address of send buffer
    count    - number of items to send
    datatype - datatype of each item
    dest     - rank of destination process
    tag      - message tag
    comm     - communicator

Blocking Receive

MPI_Recv(buf, count, datatype, source, tag, comm, status)

    buf      - address of receive buffer
    count    - maximum number of items to receive
    datatype - datatype of each item
    source   - rank of source process
    tag      - message tag
    comm     - communicator
    status   - status after operation
Message Tag
Used to differentiate between different types of messages being sent.
If special type matching is not required, a wild-card message tag (MPI_ANY_TAG) is used, so that the recv() will match any send().
[Diagram: Process 1 executes send(&x, 2, 5), sending x to process 2 with tag 5; Process 2 executes recv(&y, 1, 5), waiting for a message from process 1 with a tag of 5. The data moves from x to y.]
Example
To send an integer x from process 0 to process 1:

```c
int x;
int msgtag = 1;     /* message tag (illustrative value) */
MPI_Status status;

MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* find rank */

if (myrank == 0) {
    MPI_Send(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD);
} else if (myrank == 1) {
    MPI_Recv(&x, 1, MPI_INT, 0, msgtag, MPI_COMM_WORLD, &status);
}
```
MPI Nonblocking Routines
Nonblocking send - MPI_Isend() - will return "immediately", even before the source location is safe to be altered:

MPI_Isend(buf, count, datatype, dest, tag, comm, request)

Nonblocking receive - MPI_Irecv() - will return even if there is no message to accept:

MPI_Irecv(buf, count, datatype, source, tag, comm, request)

Completion is detected by MPI_Wait() and MPI_Test(), via the request parameter.

MPI_Wait() waits for an MPI send or receive to complete:

int MPI_Wait(MPI_Request *request, MPI_Status *status)

MPI_Test() returns flag = true if the operation identified by request is complete:

int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
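A common pattern, sketched here (not from the slides; do_other_work() is a hypothetical placeholder), is to poll with MPI_Test() while performing useful work:

```c
int flag = 0;
MPI_Status status;

/* req was returned by an earlier MPI_Isend() or MPI_Irecv() */
while (!flag) {
    do_other_work();                 /* hypothetical: overlap useful work */
    MPI_Test(&req, &flag, &status);  /* flag becomes true on completion */
}
```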
Example
Send an integer x from process 0 to process 1 using nonblocking operations:

```c
int x;
int msgtag = 1;     /* message tag (illustrative value) */
MPI_Request req1;
MPI_Status status;

MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* find rank */

if (myrank == 0) {
    MPI_Isend(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD, &req1);
    compute();                 /* overlap computation with the send */
    MPI_Wait(&req1, &status);  /* x must not be reused until the send completes */
} else if (myrank == 1) {
    MPI_Recv(&x, 1, MPI_INT, 0, msgtag, MPI_COMM_WORLD, &status);
}
```
Collective Communication
MPI_Bcast() - Broadcast from root to all other processes
MPI_Barrier() - Synchronize processes by stopping each one until all of them have reached the barrier call
MPI_Gather() - Gather values from a group of processes
MPI_Scatter() - Scatter a buffer in parts to a group of processes
MPI_Alltoall() - Send data from all processes to all processes
MPI_Reduce() - Combine values on all processes to a single value
MPI_Scan() - Compute prefix reductions of data on processes (a sketch follows below)
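Broadcast, scatter, gather, and reduce are illustrated on the following slides. MPI_Scan() is not, so here is a minimal sketch (not from the slides): with MPI_SUM, process i receives the sum of the values contributed by processes 0 through i.

```c
/* Sketch: prefix sum of the ranks themselves.
   With p processes, process i receives 0 + 1 + ... + i. */
int myrank, prefix;
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
MPI_Scan(&myrank, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
printf("Process %d: prefix sum = %d\n", myrank, prefix);
```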
Broadcast
Sending the same message to all processes concerned with the problem.
http://linux.die.net/man/3/mpi_bcast
[Diagram: every process calls MPI_Bcast(); the data in buf on Process 0 (the root) is copied into data on processes 1 through p - 1.]
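A minimal usage sketch (not from the slides; myrank is assumed to have been set by MPI_Comm_rank): every process, root included, makes the same call, and afterwards each process holds the root's value.

```c
int value = 0;
if (myrank == 0) value = 42;  /* only the root has the data initially */
MPI_Bcast(&value, 1, MPI_INT, 0 /* root */, MPI_COMM_WORLD);
/* now value == 42 on every process */
```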
Scatter
Sending each element of an array in the root process to a separate process. The contents of the ith location of the array are sent to the ith process.

[Diagram: every process calls scatter(); part i of buf on Process 0 is delivered to data on process i, for processes 1 through p - 1.]
http://linux.die.net/man/3/mpi_scatter
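A minimal sketch (not from the slides; MAX_PROCS is a hypothetical bound on the number of processes), distributing one int to each process:

```c
int sendbuf[MAX_PROCS];  /* filled on the root; needs >= p entries */
int myvalue;
MPI_Scatter(sendbuf, 1, MPI_INT,   /* one item sent to each process */
            &myvalue, 1, MPI_INT,  /* one item received per process */
            0, MPI_COMM_WORLD);
```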
Gather
Having one process collect individual values from a set of processes.

[Diagram: every process calls gather(); data on each process is collected into buf on Process 0.]
http://linux.die.net/man/3/mpi_gather
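A minimal sketch (not from the slides; MAX_PROCS is hypothetical). A fuller example with dynamically allocated memory appears two slides later.

```c
int myvalue = myrank;    /* each process contributes one int */
int recvbuf[MAX_PROCS];  /* significant only on the root */
MPI_Gather(&myvalue, 1, MPI_INT,
           recvbuf, 1, MPI_INT,  /* recvcount is per sending process */
           0, MPI_COMM_WORLD);
```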
Reduce
A gather operation combined with a specified arithmetic/logical operation.

[Diagram: every process calls reduce(); data on each process is combined with an operation (here, +) into buf on Process 0.]
http://linux.die.net/man/3/mpi_reduce
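A minimal sketch (not from the slides), computing a global maximum:

```c
int myvalue = myrank, globalmax;
MPI_Reduce(&myvalue, &globalmax, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);
if (myrank == 0) printf("max = %d\n", globalmax);  /* valid only on the root */
```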
[Figure-only slide: Parallel ADD.]
Example
To gather items from a group of processes into process 0, using dynamically allocated memory in the root process:

```c
int data[10];  /* data to be gathered from each process */
int *buf;

MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* find rank */
if (myrank == 0) {
    MPI_Comm_size(MPI_COMM_WORLD, &grp_size);          /* find group size */
    buf = (int *)malloc(grp_size * 10 * sizeof(int));  /* allocate memory */
}
MPI_Gather(data, 10, MPI_INT, buf, 10, MPI_INT, 0, MPI_COMM_WORLD);
```

Note that the receive count (the second 10) is the number of items received from each process, not the total.
Sample MPI program

```c
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>   /* exit(), getenv() */
#include <string.h>   /* strcpy(), strcat() */
#define MAXSIZE 1000

int main(int argc, char *argv[])
{
    int myid, numprocs;
    int data[MAXSIZE], i, x, low, high, myresult = 0, result;
    char fn[255];
    FILE *fp;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if (myid == 0) {  /* open input file and initialize data */
        strcpy(fn, getenv("HOME"));
        strcat(fn, "/MPI/rand_data.txt");
        if ((fp = fopen(fn, "r")) == NULL) {
            printf("Can't open the input file: %s\n\n", fn);
            exit(1);
        }
        for (i = 0; i < MAXSIZE; i++) fscanf(fp, "%d", &data[i]);
        fclose(fp);
    }

    MPI_Bcast(data, MAXSIZE, MPI_INT, 0, MPI_COMM_WORLD);  /* broadcast data */

    x = MAXSIZE / numprocs;  /* add my portion of the data */
    low = myid * x;
    high = low + x;
    for (i = low; i < high; i++)
        myresult += data[i];
    printf("I got %d from %d\n", myresult, myid);

    /* compute global sum */
    MPI_Reduce(&myresult, &result, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0) printf("The sum is %d.\n", result);

    MPI_Finalize();
    return 0;
}
```
Evaluating Parallel Programs

Sequential execution time, ts or Tseq: estimate by counting the computational steps of the best sequential algorithm.

Parallel execution time, tp or Tpar: in addition to the number of computational steps, tcomp, we need to estimate the communication overhead, tcomm:

tp = tcomp + tcomm
Computational Time
Count the number of computational steps. When more than one process is executed simultaneously, count the computational steps of the slowest process.
Generally, a function of n and p, i.e.
tcomp = f(n, p)
Break down computation time into parts. Then
tcomp = tcomp1 + tcomp2 + tcomp3 + …
Analysis is usually done assuming that all processors are identical and operate at the same speed.
Communication Time
Many factors, including network structure and network contention. As a first approximation, use

tcomm = tstartup + n * tdata

tstartup: essentially the time to send a message with no data; assumed constant.
tdata: the transmission time to send one data word, also assumed constant; there are n data words.
[Graph: communication time t versus the number of data items n - a straight line whose intercept at n = 0 is the startup time.]
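As a worked example with illustrative (assumed) values: if tstartup = 1 μs and tdata = 10 ns per word, sending n = 1000 words costs about 1 μs + 1000 × 10 ns = 11 μs, while sending 10 words costs about 1.1 μs - almost all of it startup.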
Final communication time tcomm
The summation of the communication times of all sequential messages from a process, i.e.
tcomm = tcomm1 + tcomm2 + tcomm3 + …
The communication patterns of all processes are assumed to be the same and to take place together, so that only one process need be considered.
Parallel Execution Time:
tp or Tpar = tcomp + tcomm
Measuring Execution Time
To measure the execution time between point L1 and point L2 in the code, we might have a construction such as:

```c
L1: time(&t1);   /* start timer */
    /* ... code being timed ... */
L2: time(&t2);   /* stop timer */

elapsed_time = difftime(t2, t1);   /* difference in seconds */
printf("Elapsed time = %5.2f seconds", elapsed_time);
```
MPI provides the MPI_Wtime() call, which returns the wall-clock time in seconds.
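A typical usage sketch (not from the slides; myrank is assumed set by MPI_Comm_rank):

```c
double start, end;

MPI_Barrier(MPI_COMM_WORLD);  /* optional: start all processes together */
start = MPI_Wtime();
/* ... code being timed ... */
end = MPI_Wtime();

if (myrank == 0)
    printf("Elapsed time = %f seconds\n", end - start);
```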
Compiling/executing (SPMD) MPI program
Use the slides provided on the class website:
http://web.mst.edu/~ercal/387/slides/MST-Cluster-MPICH.ppt
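In outline (a sketch; the exact commands depend on the cluster setup described in those slides, and the program name is a placeholder), a typical MPICH workflow is:

```
mpicc -o myprog myprog.c    # compile with the MPI wrapper compiler
mpiexec -n 4 ./myprog       # run with 4 processes (mpirun on some installations)
```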
[Figure-only slides: Sample Programs – parallel add (two slides); Broadcast; Parallel add & Broadcast; Parallel add & Bcast simultaneously.]