Transcript of “Parallelism and Concurrency” – CSE 110, Winter 2016 (20 pages)

Page 1:

Parallelism and Concurrency

Motivation, Challenges, Impact on Software Development

CSE 110 – Winter 2016

Page 2:

About These Slides

• Due to the nature of this material, this lecture was delivered via the chalkboard. Everyone was advised that slides would not be available and that notes should be taken.

• These slides aren’t designed to capture the discussion, just to provide some bullet points.

Page 3:

Parallelism vs Concurrency

• Parallelism: Running at the same time upon parallel resources

• Concurrency: Running at the same time, whether via parallelism or by turn-taking, e.g. interleaved scheduling.

Page 4:

Parallelism

• Wasn’t very prevalent until recently.

• Multi-processor systems were prevalent for servers and in scientific computing

• But, they weren’t commonly used by end user devices, e.g. personal computers, etc.

• For many years, clock speed was the de facto measure of system performance – and increased rapidly.

• In the mid-2000s we hit the end of Moore’s law in some ways: clock speeds stopped climbing.

• Transistors became less limiting than the connections between them

• Hot spots.

• Density limits

Page 5:

Multi-Core Became Standard

• Many cores on same die

• Many smaller cores are less complex than one big one

• “Out of the box” gains from multitasking – different tasks on different cores

• Better for power management. Easier to save energy by turning off a core than by slowing one down.

• Gains can be had by slowing down a tractor-trailer, but idling a tractor-trailer still takes more energy than idling a car. Turning a core off rather than slowing it down is essentially the same idea.

• Workstations, personal computers, phones, tablets, etc.

Page 6:

Bigger Gains When Software is Aware

• Imagine one huge task: Game, CAD, simulation, rendering, etc

• Not helped by multiple cores if it can only run on one at a time

• The modern world requires threading: dividing big tasks into independent parts that can run at the same time on different cores (see the sketch below).

• Thread for each player in a game, thread for the scene, thread for communications, etc.

• This is as true on a high-performance workstation as on a phone.
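
To make the idea concrete, here is a minimal Java sketch (ours, not from the lecture) of dividing one big task across cores: summing a large array with one worker thread per chunk. All names (ParallelSum, partial, etc.) are illustrative.

// Sketch: split a big summation across one thread per core.
// Each worker writes only its own slot of partial[], so no
// variable is ever written by two threads.
public class ParallelSum {
    public static void main(String[] args) throws InterruptedException {
        int[] data = new int[1_000_000];
        java.util.Arrays.fill(data, 1);

        int nThreads = Runtime.getRuntime().availableProcessors();
        long[] partial = new long[nThreads];      // one result slot per thread
        Thread[] workers = new Thread[nThreads];

        int chunk = data.length / nThreads;
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            final int lo = t * chunk;
            final int hi = (t == nThreads - 1) ? data.length : lo + chunk;
            workers[t] = new Thread(() -> {
                long sum = 0;
                for (int i = lo; i < hi; i++) sum += data[i];
                partial[id] = sum;                // private slot: nothing shared
            });
            workers[t].start();
        }

        long total = 0;
        for (int t = 0; t < nThreads; t++) {
            workers[t].join();                    // wait for each worker
            total += partial[t];
        }
        System.out.println("total = " + total);   // prints 1000000
    }
}

On a multi-core machine the workers run in true parallelism; on a single core the same program still works, just by turn-taking.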

Page 7:

Classic Concurrency Problem:

Lost Update

Two threads each execute x++, which compiles to a load, an increment, and a store. The step numbers show one possible interleaving:

// Thread A: x++
(1) mov addr, %eax
(4) inc %eax
(5) mov %eax, addr

// Thread B: x++
(2) mov addr, %eax
(3) inc %eax
(6) mov %eax, addr

Memory: (0) x=0, then (5,6) x=1
Thread A registers: (1) %eax=0, (4) %eax=1
Thread B registers: (2) %eax=0, (3) %eax=1

Both threads load x=0 before either stores back, so both store x=1: one increment is lost.
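
The same race is easy to reproduce in Java. In this sketch (class and names ours), two threads each perform 100,000 unsynchronized increments; because x++ is a load, an increment, and a store, updates get lost and the final value is usually well below 200,000.

public class LostUpdate {
    static int x = 0;                    // the shared critical resource

    public static void main(String[] args) throws InterruptedException {
        Runnable bump = () -> {
            for (int i = 0; i < 100_000; i++) {
                x++;                     // load, inc, store: not atomic
            }
        };
        Thread a = new Thread(bump);
        Thread b = new Thread(bump);
        a.start(); b.start();
        a.join();  b.join();             // wait for both threads
        System.out.println("x = " + x);  // usually < 200000: lost updates
    }
}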

Page 8:

Concurrency Problems

• Can occur as the result of actual parallelism

• Can occur just by interleaved execution

• Need a “sharing discipline”

• In other words, all interacting code needs to play nice with others

Page 9:

Critical Resources, Critical Sections

• Critical Resources: Shared resources, whether hardware or abstract, that can’t naturally be shared in the intended way

• Toilets can’t be shared. Variables can’t be shared if written

• Sound can be shared. Air can be shared. Variables can be shared if only read.

• Critical Sections (of code): The code that manipulates the critical resources.

• This is the code that needs to be disciplined

• Common technique is some form of locking

Page 10:

One Classic Concurrency Solution:

Some Form of Locking

With each x++ wrapped in a lock, the threads can no longer interleave inside the critical section. One possible execution:

// Thread A: x++
(1) lock_acquire(x_lock)
(2) mov addr, %eax
(3) inc %eax
(4) mov %eax, addr
(5) lock_release(x_lock)

// Thread B: x++
(6) lock_acquire(x_lock)
(7) mov addr, %eax
(8) inc %eax
(9) mov %eax, addr
(10) lock_release(x_lock)

Memory: (0) x=0, (4) x=1, (9) x=2
Thread A registers: (2) %eax=0, (3) %eax=1
Thread B registers: (7) %eax=1, (8) %eax=2

Thread B cannot start its load until Thread A releases x_lock, so both increments survive and x ends at 2.
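
Java’s java.util.concurrent.locks.ReentrantLock maps directly onto the slide’s lock_acquire/lock_release pattern. A minimal sketch (names ours): the lock makes the load-increment-store sequence a critical section, so the count now always comes out exact.

import java.util.concurrent.locks.ReentrantLock;

public class LockedUpdate {
    static int x = 0;
    static final ReentrantLock xLock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Runnable bump = () -> {
            for (int i = 0; i < 100_000; i++) {
                xLock.lock();            // lock_acquire(x_lock)
                try {
                    x++;                 // critical section
                } finally {
                    xLock.unlock();      // lock_release(x_lock), even on error
                }
            }
        };
        Thread a = new Thread(bump);
        Thread b = new Thread(bump);
        a.start(); b.start();
        a.join();  b.join();
        System.out.println("x = " + x);  // always 200000
    }
}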

Page 11:

Concurrency Control Primitives

• You’ll learn more in CSE 120 and/or CSE 160

• Semaphores

• Mutexes

• Condition variables

• Etc
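
As a small preview of those courses, here is a sketch of one primitive, Java’s java.util.concurrent.Semaphore (the pool size N and the names are ours). A counting semaphore initialized with N permits lets at most N threads in at once, matching the “only N buffers exist” policy mentioned on a later slide.

import java.util.concurrent.Semaphore;

public class BufferPool {
    static final int N = 3;                         // pool size (assumed)
    static final Semaphore buffers = new Semaphore(N);

    static void useBuffer(int id) throws InterruptedException {
        buffers.acquire();                          // blocks once all N are taken
        try {
            System.out.println("thread " + id + " holds a buffer");
            Thread.sleep(10);                       // stand-in for real work
        } finally {
            buffers.release();                      // return the buffer
        }
    }

    public static void main(String[] args) {
        for (int t = 0; t < 8; t++) {               // 8 threads, only 3 buffers
            final int id = t;
            new Thread(() -> {
                try { useBuffer(id); } catch (InterruptedException e) { }
            }).start();
        }
    }
}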

Page 12:

Higher-Level Constructs

• Monitor: A higher-level synchronization abstraction:

• Put related critical resources into a “box”, i.e. “monitor”.

• Only allow them to be accessed by the monitor’s “entry” methods

• The abstraction is implemented using lower-level primitives to ensure that only one entry method can be active within the monitor at the same time

• Slightly different semantics are possible when complications, such as blocking (needing to wait for a resource outside of the monitor), are considered, e.g. Mesa semantics, Hoare semantics, etc. This level of detail is for another class.

Page 13:

Monitors In Java:

Synchronized Methods

• Essentially turns an instance into a monitor

• Only one “synchronized” method can be active upon the instance at a time

• Just add the “synchronized” qualifier to a method that you want to discipline, e.g.

• synchronized void increment()

• Still things to watch

• Constructors cannot be synchronized

• Unsynchronized methods are not blocked by the monitor and can still touch the same state

• Also note:

• Java has many, many more ways of supporting concurrency control: Out of scope for this class
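
Fleshing out the slide’s synchronized void increment() into a complete class (the surrounding code is our sketch): the Counter instance acts as the monitor, so the two increment loops can no longer interleave inside count++.

public class Counter {
    private int count = 0;                       // critical resource in the "box"

    synchronized void increment() { count++; }   // monitor entry method
    synchronized int  get()       { return count; }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable bump = () -> {
            for (int i = 0; i < 100_000; i++) c.increment();
        };
        Thread a = new Thread(bump);
        Thread b = new Thread(bump);
        a.start(); b.start();
        a.join();  b.join();
        System.out.println(c.get());             // always 200000
    }
}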

Page 14:

Properties of Good Solutions

• Mutual Exclusion: In many cases, critical resources cannot be shared at all. In these cases we want to have “mutual exclusion”, i.e. the situation where the use by one thread of the critical resource excludes use by the other threads.

• In some cases, other policies, such as “At Most N” are appropriate, e.g. only N buffers exist. In these cases, we want to ensure whatever the constraint happens to be.

• Progress: If the critical resource is available, and a thread wants to use it, it should be able to do that. We don’t want a situation where no one can use it.

• Bounded Wait: It is often desirable to ensure that every thread will eventually get a turn. This is called bounded wait.

• This isn’t always desirable, though. In some cases, we want a strict priority system, in which case, if there isn’t enough resource time, some threads will “starve”. But, at the least, the more important things get done.

Page 15:

Deadlock:

4 Necessary and Sufficient Conditions

• Necessary: Deadlock cannot occur if any one of these is absent

• Sufficient: Deadlock will necessarily occur if all of these are true.

• The “Necessary and Sufficient” conditions for deadlock:

• Unsharable resource: If the resource can be shared, there is no reason to wait for it.

• “Hold and wait”: It is possible to hold some resources (denying them to other threads), while waiting for other resources, potentially held by other threads

• Circular wait: The “hold and wait” forms a cycle, so that it can never be resolved.

• No pre-emption: No way to take resources from thread holding them to break cycle.

Page 16:

Chopstick Example

• I have the left chopstick and need the right chopstick

• You have the right chopstick and need the left chopstick

• We both starve.
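
The chopstick stalemate is only a few lines of Java (a sketch, with our names). Each thread takes one lock, pauses so the other can take the opposite lock, then blocks forever on its second acquire: hold-and-wait plus circular wait, with no pre-emption.

public class Chopsticks {
    static final Object left  = new Object();
    static final Object right = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (left) {             // I hold the left chopstick
                pause();
                synchronized (right) {        // ...and wait for the right one
                    System.out.println("thread 1 eats");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (right) {            // you hold the right chopstick
                pause();
                synchronized (left) {         // ...and wait for the left one
                    System.out.println("thread 2 eats");
                }
            }
        }).start();
        // Neither message ever prints: both threads starve.
    }

    static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { }
    }
}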

Page 17:

Deadlock Detection

• It is possible, but really involved and computationally intensive

• Once detected, there are usually no good options

• Can’t really reach into code logic and change it

• Only real option is usually killing a thread, freeing its resources, breaking the cycle

Page 18:

Avoiding Deadlock

• Unsharable resource:

• We can’t usually change the nature of the resource. Toilets aren’t, for example, sharable

• Hold and wait:

• We might be able to require that all resources be allocated at once. But, this is very limiting. Consider branching in code. It would take resources off-line that would never be used, etc. We’d also need to deal with contention for the whole basket of resources.

• No Pre-emption:

• We could do deadlock detection and then kill off one of the threads within a cycle. But, this can waste a lot of work (and annoy whoever the thread was serving)

• Circular wait:

• This is the one we usually manage. If we request resources in the same order, and there is no cycle in our ordering, there can be no circular dependency. It is more limiting than allowing resources to be requested in arbitrary order, but much more flexible than an all-at-once policy.

Page 19:

Chopstick Example: Fixed

• Everyone agrees to get left chopstick before right chopstick

• Now, whoever gets the left chopstick can get the right chopstick

• Whoever doesn’t get the left chopstick will wait

• The user of the two chopsticks will finish, releasing them for later
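
The same sketch with the ordering discipline applied (names still ours): both threads acquire left before right, so no cycle can form and both eventually eat.

public class ChopsticksFixed {
    static final Object left  = new Object();
    static final Object right = new Object();

    static void eat(String who) {
        synchronized (left) {          // everyone takes the left chopstick first
            synchronized (right) {     // then the right one
                System.out.println(who + " eats");
            }
        }                              // both released here, for the next diner
    }

    public static void main(String[] args) {
        new Thread(() -> eat("thread 1")).start();
        new Thread(() -> eat("thread 2")).start();
        // Both messages always print; whoever loses the race simply waits.
    }
}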

Page 20:

Last Word

• Concurrent programming, once the domain only of those working in scientific computing, is now the domain of almost every programmer.

• It is important to explore and learn

• Some challenges and solutions can be really complex

• But, most situations are relatively straightforward.