Chapter 11: Distributed Processing Parallel programming
Chapter 11: Distributed Processing
Parallel programming
• Principles of parallel programming languages
• Concurrent execution
  – Programming constructs
  – Guarded commands
  – Tasks
• Persistent systems
• Client-server computing
Parallel processing
The execution of more than one program/subprogram simultaneously.
A subprogram that can execute concurrently with other subprograms is called a task or a process.
Hardware supported:
  – multiprocessor systems
  – distributed computer systems
Software simulated: time-sharing
Principles of parallel programming languages
Variable definitions
mutable: values may be assigned to the variable and changed during program execution (as in sequential languages).
definitional: the variable may be assigned a value only once.
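As a rough illustration (not from the original slides), the distinction can be mimicked in C, where a const-qualified variable approximates a definitional (single-assignment) variable:

```c
#include <stdio.h>

int main(void) {
    int counter = 0;        /* mutable: may be reassigned during execution */
    counter = counter + 1;  /* allowed */

    const int limit = 10;   /* definitional-style: bound once, never reassigned */
    /* limit = 20;             would be a compile-time error */

    printf("%d %d\n", counter, limit);
    return 0;
}
```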
Principles….
Parallel composition: A parallel statement, which causes additional threads of control to begin executing
Execution models (program structure)
– transformational, e.g. parallel matrix multiplication
– reactive
Principles….
Communication
shared memory with common data objects accessed by each parallel program;
messages
Synchronization: parallel programs must be able to coordinate actions.
Concurrent execution
Programming constructs
• Using parallel execution primitives of the operating system (e.g., a C program can invoke the Unix fork operation, as sketched below)
• Using parallel constructs: a programming-language parallel construct indicates parallel execution
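For example, a minimal sketch of the first approach in C, invoking the Unix fork primitive directly (error handling omitted):

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();     /* create a second process */
    if (pid == 0) {
        printf("child: runs concurrently with the parent\n");
        _exit(0);
    }
    printf("parent: continues its own work\n");
    wait(NULL);             /* parent waits for the child to finish */
    return 0;
}
```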
Example
AND statement (programming language level)
Syntax: statement1 and statement2 and … and statementN
Semantics: all statements execute in parallel.
call ReadProcess and call WriteProcess and call ExecuteUserProgram;
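The AND statement itself is a language-level construct; a rough equivalent of the example above can be sketched with POSIX threads in C (the procedures ReadProcess, WriteProcess, and ExecuteUserProgram are placeholders taken from the slide):

```c
#include <pthread.h>
#include <stdio.h>

/* placeholder bodies for the three procedures named in the example */
static void *ReadProcess(void *arg)        { puts("reading");   return NULL; }
static void *WriteProcess(void *arg)       { puts("writing");   return NULL; }
static void *ExecuteUserProgram(void *arg) { puts("executing"); return NULL; }

int main(void) {
    pthread_t t1, t2, t3;
    /* start all three "statements" in parallel, as the AND statement would */
    pthread_create(&t1, NULL, ReadProcess, NULL);
    pthread_create(&t2, NULL, WriteProcess, NULL);
    pthread_create(&t3, NULL, ExecuteUserProgram, NULL);
    /* the construct completes when all parallel branches complete */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_join(t3, NULL);
    return 0;
}
```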
Guarded commands
• Guard: a condition that can be true or false
• Guards are associated with statements
• A statement is executed when its guard becomes true
Example
Guarded if:
if B1 → S1 | B2 → S2 | … | Bn → Sn fi
Guarded repetition statement
do B1 → S1 | B2 → S2 | … | Bn → Sn od
Bi - guards, Si - statements
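For example, Dijkstra's classic greatest-common-divisor program uses guarded repetition: the loop repeats as long as at least one guard is true and terminates when both are false, i.e. when x = y (x and y are assumed here to hold two positive integers):

do x > y → x := x - y | y > x → y := y - x od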
Tasks
• Subprograms that run in parallel with the program that has initiated them
• Dependent on the initiating program
• The initiating program cannot terminate until all of its dependents terminate
• A task may have multiple simultaneous activations
Task interaction
• Tasks unaware of each other
• Tasks indirectly aware of each other
  – use shared memory
• Tasks directly aware of each other
Control Problems
• Mutual exclusion
• Deadlock
  – P1 waits for an event to be produced by P2
  – P2 waits for an event to be produced by P1
• Starvation
  – P1, P2, P3 need a non-shareable resource
  – P1 and P2 alternately use the resource
  – P3 is denied access to that resource
Mutual exclusion
Two tasks require access to a single non-shareable resource.
Critical resource: the resource in question. Critical section: the portion of the program that uses the resource.
The rule: only one program at a time can be allowed in its critical section.
Synchronization of Tasks
Interrupts - provided by the OS.
Semaphores - shared data objects, with two primitive operations - signal and wait.
Messages - information is sent from one task to another. The sending task may continue to execute.
Guarded commands - force synchronization by ensuring conditions are met before executing tasks.
Rendezvous - similar to messages, but the sending task waits for an answer.
Semaphores
• May be initialized to a nonnegative number
• Wait operation decrements the semaphore value
• Signal operation increments semaphore value
A semaphore is a variable that has an integer value.
Mutual exclusion with semaphores
Each task performs:
wait(s);
/* critical section */
signal(s);
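A minimal sketch of this pattern with POSIX semaphores in C (the shared counter and the number of threads are illustrative assumptions, not part of the slide):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t s;                 /* the mutual-exclusion semaphore */
static int shared_counter = 0;  /* the critical resource */

static void *task(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);           /* wait(s): enter the critical section */
        shared_counter++;       /* critical section */
        sem_post(&s);           /* signal(s): leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 1);         /* initial value 1: one task at a time */
    pthread_create(&t1, NULL, task, NULL);
    pthread_create(&t2, NULL, task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);
    return 0;
}
```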
The Producer/Consumer problem with an infinite buffer:
[Diagram: buffer cells B[0] B[1] B[2] B[3] B[4] …, with OUT marking where the consumer removes items and IN marking where the producer appends them]
Solution:
s - semaphore for entering the critical section
delay - semaphore to ensure reading from a non-empty buffer
Producer: produce(); wait(s); append(); signal(delay); signal(s);
Consumer: wait(delay); wait(s); take(); signal(s); consume();
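The same solution can be sketched in C with POSIX threads and semaphores (buffer size, item counts, and the bodies of produce/append/take/consume are illustrative assumptions):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 1000
static int buffer[N];
static int in = 0, out = 0;   /* the IN and OUT positions from the diagram */

static sem_t s;               /* guards the buffer (critical section) */
static sem_t delay;           /* counts items; blocks the consumer when empty */

static void *producer(void *arg) {
    for (int item = 0; item < 100; item++) {   /* produce() */
        sem_wait(&s);                          /* wait(s) */
        buffer[in++] = item;                   /* append() */
        sem_post(&delay);                      /* signal(delay): one more item */
        sem_post(&s);                          /* signal(s) */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 100; i++) {
        sem_wait(&delay);                      /* wait(delay): wait for an item */
        sem_wait(&s);                          /* wait(s) */
        int item = buffer[out++];              /* take() */
        sem_post(&s);                          /* signal(s) */
        printf("consumed %d\n", item);         /* consume() */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&s, 0, 1);       /* critical section initially free */
    sem_init(&delay, 0, 0);   /* buffer initially empty */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```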
Persistent systems
Traditional software:
Data stored outside the program - persistent data
Data processed in main memory - transient data
Persistent languages
do not make a distinction between persistent and transient data, and automatically reflect changes in the database.
Design issues
• A mechanism to indicate an object is persistent
• A mechanism to address a persistent object
• Simultaneous access to an individual persistent object - semaphores
• Check type compatibility of persistent objects - structural equivalence
Client-server computing
Network models
centralized, where a single processor does the scheduling
distributed or peer-to-peer, where each machine is an equal, and the process of scheduling is spread among all of the machines
Client-server mediator architecture
Client machine:
• Interacts with the user
• Has a protocol to communicate with the server
Server: provides services - retrieves data and/or programs
Issues:
• May be communicating with multiple clients simultaneously
  – Need to keep each such transaction separate
  – Multiple local address spaces in the server
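One common way to keep each client's transaction separate is to fork a server process per connection, which also gives each client its own local address space. A minimal TCP sketch in C (the port number and the one-line reply protocol are illustrative assumptions, not from the slides):

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);                    /* illustrative port */
    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, 8);

    for (;;) {
        int client = accept(listener, NULL, NULL);
        if (fork() == 0) {
            /* child: a separate address space per client transaction */
            close(listener);
            const char *reply = "hello from server\n";
            write(client, reply, strlen(reply));
            close(client);
            _exit(0);
        }
        close(client);                              /* parent keeps accepting */
        while (waitpid(-1, NULL, WNOHANG) > 0) {}   /* reap finished children */
    }
}
```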