Post on 02-Jan-2016

Concurrent Programming

Concurrency

Concurrency means that a program has multiple paths of execution running at (almost) the same time.

Examples:

A web server can handle connections from several clients while still listening for new connections.

A multi-player game may allow several players to move things around the screen at the same time.

How can we do this?

How do the tasks communicate with each other?

Performing Concurrency

A computer can implement concurrency by...

parallel execution: run tasks on different CPUs in the same computer (requires a multi-processor machine)

time-sharing: divide CPU time into slices, and interleave several tasks on the same CPU

distributed computing: use the CPUs of several computers in a cluster to run different tasks

[Diagram: time slices on one CPU alternating between tasks]

Design for Concurrency

If we have multiple processes or threads of execution for a single job, we must decide issues of...

Allocating CPU: previous slide

Memory: do all tasks share the same memory? Or separate memory?

Communication: how can tasks communicate?

Control: how do we start a task? stop a task? wait for a task?

Memory

shared memory:

if one thread changes memory, it affects the others.

problem: how do you share the stack?

separate memory:

each thread has its own memory area

combination:

each thread has a separate stack area

they share a static area, may share the current environment, and may compete for the same heap.

Shared Memory and Environment

/* vfork( ) creates a new process that shares the parent's memory */
pid = vfork( );
if ( pid == 0 ) {
    /* child process: vfork returned 0 */
    task1( );
    task3( );
}
else {
    /* parent process: vfork returned the child's pid */
    task2( );
}

[Diagram: the shared stack after the child calls task1( ) — main's frame, then task1( )'s frame; SP points to the top, with free space above]

Tasks don't share registers. After the child calls task1( ), where is the stack pointer (SP) of the parent process? Where will task2( )'s frame be placed?

Processes and memory

Heavy-weight processes:

each process gets its own memory area and own environment.

Unix "process" fits this model.

Light-weight processes:

processes share the same memory area, but each has its own context and maybe its own stack.

"threads" in Java, C, and C# fit this model

Heavy-weight Processes

UNIX fork( ) system call: child process gets a copy of parent's memory pages

Example: Web Server

/* bind to port 80 */
bind( socket, &address, sizeof(address) );

while ( 1 ) {    /* run forever */
    /* wait for a client to connect */
    client = accept( socket, &clientAddress, &len );

    /* fork a new process to handle the client */
    pid = fork( );
    if ( pid == 0 )
        handleClient( client, clientAddress );
}

The server forks a new process to handle each client, so the parent can keep listening for more connections.

Example: fork and wait for child

pid = fork( );

if ( pid == 0 ) childProcess( );

else {

wait( &status ); // wait for child to exit

}

wait( ) causes the parent process to wait for a child to exit.

Threads: light-weight processes

Threads share a memory area. This conserves resources and allows better communication between tasks.

task1 = new Calculator( );

task2 = new AlarmClock( );

Thread thread1 = new Thread( task1 );

Thread thread2 = new Thread( task2 );

thread1.start( );

thread2.start( );
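The Calculator and AlarmClock classes above are placeholders from the slide. A minimal self-contained sketch of the same pattern, using a made-up CounterTask class, might look like this:

```java
// Minimal sketch of the thread-creation pattern above.
// CounterTask is an illustrative class, not from the slides.
class CounterTask implements Runnable {
    private final String name;
    volatile int count = 0;

    CounterTask(String name) { this.name = name; }

    public void run() {
        for (int i = 0; i < 5; i++) count++;   // do some "work"
        System.out.println(name + " finished with count " + count);
    }
}

public class ThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        CounterTask task1 = new CounterTask("task1");
        CounterTask task2 = new CounterTask("task2");
        Thread thread1 = new Thread(task1);
        Thread thread2 = new Thread(task2);
        thread1.start();   // both threads now run concurrently
        thread2.start();
        thread1.join();    // wait for both to finish
        thread2.join();
    }
}
```

Note that a Thread is constructed from any Runnable; start( ) begins concurrent execution, while calling run( ) directly would just run the task in the current thread.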

States of a Thread

Stack Management for Threads

In some implementations (like C) threads share the same memory, but require their own stack space.

Each thread must be able to call functions separately.

[Diagram: main's stack, with separate stack spaces branching off for thread1 through thread5]

Cactus Stack:

Dynamic and static links can refer to the parent's stack.

Communication between Tasks

Reading and writing to a shared buffer: producer-consumer model (see Java Tutorial).

Using an I/O channel called a pipe.

Signaling: exceptions or interrupts.

pin = new PipedInputStream( );
pout = new PipedOutputStream( pin );
task1 = new ReaderTask( pin );
task2 = new WriterTask( pout );
Thread thread1 = new Thread( task1 );
Thread thread2 = new Thread( task2 );
thread1.start( );
thread2.start( );

[Diagram: task2 writes to the pipe; task1 reads from it]
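The shared-buffer (producer-consumer) approach listed above can be sketched with wait( )/notifyAll( ). This is an illustrative bounded buffer in the spirit of the Buffer classes referenced later (Figures 11.12–11.15), not the book's code:

```java
// Illustrative bounded buffer sketch for the producer-consumer model.
// Not the textbook's Figure 11.12 code; names are my own.
class SharedBuffer {
    private final int[] data;
    private int count = 0, in = 0, out = 0;

    SharedBuffer(int capacity) { data = new int[capacity]; }

    public synchronized void put(int value) throws InterruptedException {
        while (count == data.length) wait();   // buffer full: producer sleeps
        data[in] = value;
        in = (in + 1) % data.length;           // circular index
        count++;
        notifyAll();                           // wake any waiting consumer
    }

    public synchronized int get() throws InterruptedException {
        while (count == 0) wait();             // buffer empty: consumer sleeps
        int value = data[out];
        out = (out + 1) % data.length;
        count--;
        notifyAll();                           // wake any waiting producer
        return value;
    }
}
```

The while-loop around wait( ) (rather than an if) guards against spurious wakeups and against another thread consuming the slot first.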

Thread Coordination

[Sequence diagram: thread1 and thread2 alternate between processing and sleeping; each calls wait( ) to sleep and notify( ) to wake the other, with yield( ) calls in between.]

yield( ) gives other threads a chance to use the CPU.
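The alternation between two threads shown in the diagram can be sketched with a shared monitor object. This is an illustrative sketch, not code from the slides:

```java
// Two threads take turns: each "processes", then notifies the other
// and waits. Illustrative sketch of the wait()/notify() handoff.
class TurnTaker {
    private int turn = 1;                  // whose turn it is: 1 or 2
    StringBuilder log = new StringBuilder();

    synchronized void takeTurn(int me, int rounds) throws InterruptedException {
        for (int i = 0; i < rounds; i++) {
            while (turn != me) wait();     // sleep until it is my turn
            log.append(me);                // "processing"
            turn = (me == 1) ? 2 : 1;      // hand the turn to the other thread
            notify();                      // wake the other thread
        }
    }
}
```

Starting one thread on takeTurn(1, n) and another on takeTurn(2, n) produces a strict 1-2-1-2 alternation, because each thread sleeps in wait( ) until the other calls notify( ).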

Critical Code: avoiding race conditions

Example: one thread pushes data onto a stack, another thread pops data off the stack.

Problem: a race condition may occur where one thread starts to pop data off the stack, but is interrupted (by the scheduler), and the other thread pushes data onto the stack.

push("problem?") {
    n = top;
    stack[n] = "problem?";
    top = n + 1;
}

pop( ) {
    return stack[--top];
}

[Diagram: the stack initially holds "is", "a", "this", "race" (top). thread1 runs push("problem?") while thread2 runs pop( ); because the threads' reads and writes of top interleave, the stack ends up holding "is", "a", "this", "problem?" — the pop is lost.]

Exclusive Access to Critical Code

programmer control: use a shared flag variable or semaphore to indicate when the critical block is free

executor control: use the synchronization features of the language to restrict access to critical code

public synchronized void push( Object value ) {
    if ( top < stack.length )
        stack[top++] = value;
}

public synchronized Object pop( ) {
    if ( top > 0 ) return stack[--top];
    return null;
}
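To see synchronization at work, here is a self-contained sketch in the same spirit (the class and method names are my own, not the slides'): two threads push concurrently, and because push and pop are synchronized, no updates are lost.

```java
// Self-contained sketch: many threads push concurrently; the
// synchronized methods keep the count correct. Names are illustrative.
class SafeStack {
    private final Object[] stack = new Object[1000];
    private int top = 0;                   // index of the next free slot

    public synchronized void push(Object value) {
        if (top < stack.length)
            stack[top++] = value;          // atomic with respect to pop()
    }

    public synchronized Object pop() {
        if (top > 0) return stack[--top];
        return null;
    }

    public synchronized int size() { return top; }
}

public class StackDemo {
    public static void main(String[] args) throws InterruptedException {
        SafeStack s = new SafeStack();
        Runnable pusher = () -> { for (int i = 0; i < 100; i++) s.push(i); };
        Thread t1 = new Thread(pusher), t2 = new Thread(pusher);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(s.size());      // prints 200: no pushes were lost
    }
}
```

Without the synchronized keyword, the two pushers could interleave their reads and writes of top and the final size would often be less than 200.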

Avoiding Deadlock

Deadlock: when two or more tasks are waiting for each other to release a required resource.

The program waits forever.

Rule for Avoiding Deadlock: exercise for students
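One standard answer to the exercise (my suggestion, not from the slides) is to always acquire locks in a fixed global order, so no circular wait can form. A sketch, with made-up Account/Bank names:

```java
// Sketch: acquiring locks in a fixed order prevents the circular
// wait that causes deadlock. Class names are illustrative.
class Account {
    final int id;      // used to impose a global lock order
    int balance;

    Account(int id, int balance) { this.id = id; this.balance = balance; }
}

class Bank {
    // Always lock the lower-id account first. Two concurrent transfers
    // in opposite directions then contend for the same first lock
    // instead of each holding one lock and waiting for the other.
    static void transfer(Account from, Account to, int amount) {
        Account first  = (from.id < to.id) ? from : to;
        Account second = (from.id < to.id) ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }
}
```

If transfer locked from before to unconditionally, two threads running transfer(a, b, …) and transfer(b, a, …) could each hold one lock and wait forever for the other.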

Design Patterns and Threads

Observer Pattern:

one task is a source of events that other tasks are interested in. Each task wants to be notified when an interesting event occurs.

Solution:

wrap the source task in an Observable object.

Other tasks register with Observable as observers.

Observable task calls notifyObservers( ) when an interesting event occurs.
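Using the java.util.Observable API that the slide names (deprecated since Java 9, but it matches the addObserver/notifyObservers/update names in the diagram), a Forecaster/WeatherViewer pair might be sketched as:

```java
import java.util.Observable;
import java.util.Observer;

// Sketch of the Observer pattern with java.util.Observable.
// Deprecated in modern Java, but it matches the slide's vocabulary.
class Forecaster extends Observable {
    public void newPrediction(String forecast) {
        setChanged();                  // mark that our state changed...
        notifyObservers(forecast);     // ...and push it to all observers
    }
}

class WeatherViewer implements Observer {
    String lastForecast;

    public void update(Observable source, Object arg) {
        lastForecast = (String) arg;   // called on each notifyObservers( )
    }
}
```

Usage: construct a Forecaster, register a WeatherViewer with addObserver( ), and each call to newPrediction( ) synchronously invokes update( ) on every registered observer.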

[UML diagram: Forecaster is the «ConcreteObservable», extending Observable (addObserver, notifyObservers); WeatherViewer is the «ConcreteObserver», implementing the «interface» Observer (update). Observers are notified when a new prediction is ready.]

Simple Producer-Consumer Cooperation Using Semaphores (Figure 11.2)

Multiple Producers-Consumers (Figure 11.3)

Producer-Consumer Monitor (Figure 11.4)

States of a Java Thread (Figure 11.5)

Ball Class (Figure 11.6)

Initial Application Class (Figure 11.7)

Final Bouncing Balls init Method (Figure 11.8)

Final Bouncing Balls paint Method (Figure 11.9)

Bouncing Balls Mouse Handler (Figures 11.10 and 11.11)

Buffer Class (Figure 11.12)

Producer Class (Figure 11.13)

Consumer Class (Figure 11.14)

Bounded Buffer Class (Figure 11.15)

Sieve of Eratosthenes (Figure 11.16)

Test Drive for Sieve of Eratosthenes (Figure 11.17)