Introduction to Concurrency
Concurrency: execute two or more pieces of code "at the same time"
Why? No choice:
- Geographically distributed data
- Interoperability of different machines
- A piece of code must "serve" many other client processes
- To achieve reliability
By choice:
- To achieve speedup
- Sometimes makes programming easier (e.g., UNIX pipes)
Possibilities for Concurrency
Architecture:
- Uniprocessor with I/O channel, I/O processor, or DMA → multiprogramming, multiple-process system programs
- Network of uniprocessors → distributed programming
- Multiple CPUs → parallel programming
Definitions
Concurrent process execution can be:
- interleaved, or
- physically simultaneous
Interleaved: multiprogramming on a uniprocessor
Physically simultaneous: uni- or multiprogramming on a multiprocessor
Process, thread, or task: schedulable unit of computation
Granularity: process "size", or the computation-to-communication ratio
- Too small: excessive overhead
- Too large: less concurrency
Precedence Graph
Consider writing a program as a set of tasks.
Precedence graph: specifies execution ordering among tasks
S0: A := X - Y
S1: B := X + Y
S2: C := Z + 1
S3: C := A - B
S4: W := C + 1
[Precedence graph: S0, S1, and S2 must complete before S3; S4 follows S3]
Parallelizing compilers for computers with vector processors build such dependency graphs.
Cyclic Precedence Graphs
What does the following graph represent?
[Cyclic graph: S1 → S2 → S3, with an edge from S3 back to S1 — i.e., a loop]
Examples of Concurrency in Uniprocessors
Example 1: UNIX pipes
Motivations:
- fast to write code
- fast to execute
Example 2: Buffering
Motivation:
- required when two asynchronous processes must communicate
Example 3: Client/server model
Motivation:
- geographically distributed computing
Operating System Issues
Synchronization: What primitives should the OS provide?
Communication: What primitives should the OS provide to interface to communication protocols?
Hardware support: needed to implement OS primitives
Remote execution: What primitives should the OS provide?
- Remote procedure call (RPC)
- Remote command shell
Sharing address spaces: makes programming easier
Lightweight threads: Can process creation be made as cheap as a procedure call?
Parallel Language Constructs
FORK and JOIN
FORK L: starts parallel execution at the statement labeled L and at the statement following the FORK
JOIN Count: recombines 'Count' concurrent computations:
    Count := Count - 1;
    if (Count > 0) then quit;
JOIN is an atomic operation.
Definition: Atomic Operation
If I am a process executing on a processor, and I execute an atomic operation, then all other processes executing on this or any other processor:
- can see the state of the system before I execute or after I execute,
- but cannot see any intermediate state while I am executing.
Example: bank teller
/* Joe has $1000, split equally between savings and checking accounts */
1. Subtract $100 from Joe's savings account
2. Add $100 to Joe's checking account
Other processes should never read Joe's balances and find that the two accounts total only $900 (the intermediate state after step 1).
Concurrency Conditions
Let Si denote a statement.
Read set of Si: R(Si) = {a1, a2, ..., an}, the set of all variables referenced in Si
Write set of Si: W(Si) = {b1, b2, ..., bm}, the set of all variables changed by Si
C := A - B:
    R(C := A - B) = {A, B}
    W(C := A - B) = {C}
scanf("%d", &A):
    R(scanf("%d", &A)) = {}
    W(scanf("%d", &A)) = {A}
Bernstein’s ConditionsThe following conditions must hold for two statements S1 and S2
to execute concurrently with valid results:
1) R(S1) INTERSECT W(S2)={}
2) W(S1) INTERSECT R(S2)={}
3) W(S1) INTERSECT W(S2)={}
These are called the Berstein Conditions.
Fork and Join Example #1
S1: A := X + Y
S2: B := Z + 1
S3: C := A - B
S4: W := C + 1
[Precedence graph: S1 and S2 precede S3; S4 follows S3]
Count := 2;
FORK L1;
A := X + Y;
Goto L2;
L1: B := Z + 1;
L2: JOIN Count;
C := A - B;
W := C + 1;
Structured Parallel Constructs
PARBEGIN / PAREND
PARBEGIN: sequential execution splits off into several concurrent sequences
PAREND: parallel computations merge
PARBEGIN
    Statement 1;
    Statement 2;
    ...
    Statement N;
PAREND;
PARBEGIN
    Q := C mod 25;
    Begin
        N := N - 1;
        T := N / 5;
    End;
    Proc1(X, Y);
PAREND;
Fork and Join Example #2
[Precedence graph: S1 precedes S2 and S3; S2 precedes S4; S4 precedes S5 and S6; S3, S5, and S6 all precede S7]
S1;
Count := 3;
FORK L1;
S2;
S4;
FORK L2;
S5;
Goto L3;
L2: S6;
Goto L3;
L1: S3;
L3: JOIN Count;
S7;
Up to three tasks may execute concurrently.
Parbegin / Parend Examples
Begin
    PARBEGIN
        A := X + Y;
        B := Z + 1;
    PAREND;
    C := A - B;
    W := C + 1;
End

Begin
    S1;
    PARBEGIN
        S3;
        Begin
            S2;
            S4;
            PARBEGIN
                S5;
                S6;
            PAREND;
        End;
    PAREND;
    S7;
End;
Comparison
Unfortunately, the structured concurrent statement is not powerful enough to model all precedence graphs.
[Modified precedence graph: as in Example #2, but with an added edge from S4 to S6, so S6 depends on both S3 and S4]
Comparison (cont'd)
Fork and Join code for the modified precedence graph:
S1;
Count1 := 2;
FORK L1;
S2;
S4;
Count2 := 2;
FORK L2;
S5;
Goto L3;
L1: S3;
L2: JOIN Count1;
S6;
L3: JOIN Count2;
S7;
Comparison (cont'd)
There is no corresponding structured-construct code for this graph.
However, other synchronization techniques can supplement the structured constructs.
Also, not every precedence graph needs to be implemented for real-world problems.
Overview
System Calls
- fork( )
- wait( )
- pipe( )
- write( )
- read( )
Examples
Process Creation: fork()
NAME
    fork() – create a new process
SYNOPSIS
    #include <sys/types.h>
    #include <unistd.h>
    pid_t fork(void);
RETURN VALUE
    success: parent – child pid; child – 0
    failure: -1
fork() system call – example
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    printf("[%ld] parent process id: %ld\n", (long)getpid(), (long)getppid());
    fork();
    printf("\n[%ld] parent process id: %ld\n", (long)getpid(), (long)getppid());
    return 0;
}
fork() system call – example output
[17619] parent process id: 12729
[17619] parent process id: 12729
[2372] parent process id: 17619
fork() – program structure
#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    pid_t pid;
    if ((pid = fork()) > 0) {
        /* parent */
    } else if (pid == 0) {
        /* child */
    } else {
        /* cannot fork */
    }
    exit(0);
}
wait() system call
wait() – blocks until one of the caller's child processes finishes executing
SYNOPSIS
    #include <sys/types.h>
    #include <sys/wait.h>
    pid_t wait(int *stat_loc);
stat_loc points to an integer where the child's exit status is stored.
RETURN VALUE
    success: child pid
    failure: -1, and errno is set
wait() – program structure
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    pid_t childPID;
    if ((childPID = fork()) == 0) {
        /* child */
    } else {
        /* parent */
        wait(0);
    }
    exit(0);
}
pipe() system call
pipe() – creates a read-write pipe that may later be used to communicate with a process we'll fork off.
SYNOPSIS
    int pipe(int pfd[2]);
PARAMETER
    pfd is an array of 2 integers that will be used to save the two file descriptors used to access the pipe.
RETURN VALUE
    0 – success
    -1 – error
pipe() – structure
/* first, define an array to store the two file descriptors */
int pipes[2];

/* now, create the pipe */
int rc = pipe(pipes);
if (rc == -1) {
    /* pipe() failed */
    perror("pipe");
    exit(1);
}
If the call to pipe() succeeded, a pipe will be created, pipes[0] will contain the number of its read file descriptor, and pipes[1] will contain the number of its write file descriptor.
write() system call
write() – used to write data to a file or other object identified by a file descriptor.
SYNOPSIS
    #include <unistd.h>
    ssize_t write(int fildes, const void *buf, size_t nbyte);
PARAMETERS
    fildes is the file descriptor, buf is the base address of the area of memory that data is copied from, and nbyte is the amount of data to copy.
RETURN VALUE
    The actual amount of data written; if this differs from nbyte, something has gone wrong.
read() system call
read() – reads data from a file or other object identified by a file descriptor.
SYNOPSIS
    #include <unistd.h>
    ssize_t read(int fildes, void *buf, size_t nbyte);
ARGUMENTS
    fildes is the file descriptor, buf is the base address of the memory area into which the data is read, and nbyte is the maximum amount of data to read.
RETURN VALUE
    The actual amount of data read from the file; the file offset is advanced by the amount read.