
Transcript of Lecture6

Page 1: Lecture6

High Performance Computing

Jawwad Shamsi
Lecture #6

27th January 2010

Page 2: Lecture6

Recap

• Cache Coherence
• NUMA

Page 3: Lecture6

Today’s topics

• Cache Coherence – Continuation
• Vector Processing

Page 4: Lecture6

Cache Coherence

• In SMP or NUMA systems, multiple caches may hold copies of the same data item
– Each copy may have a different value of the data item
– Coherency must be maintained
• How?

Page 5: Lecture6

Cache Coherence: Two Approaches

• Write back: main memory is updated only when the cache line is flushed (evicted).

• Write through: every write updates both the cache and main memory.
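The two policies above can be contrasted in a minimal sketch (a hypothetical `Cache` class, not from the lecture; a dict stands in for main memory):

```python
class Cache:
    """Single-level cache sketch contrasting write-through and write-back."""

    def __init__(self, memory, policy):
        self.memory = memory   # backing store: addr -> value
        self.lines = {}        # cached copies: addr -> value
        self.dirty = set()     # addrs modified but not yet written back
        self.policy = policy   # "write-through" or "write-back"

    def write(self, addr, value):
        self.lines[addr] = value
        if self.policy == "write-through":
            self.memory[addr] = value   # memory updated on every write
        else:
            self.dirty.add(addr)        # memory updated only on flush

    def flush(self, addr):
        """Evict a line; write-back copies dirty data to memory here."""
        if addr in self.dirty:
            self.memory[addr] = self.lines[addr]
            self.dirty.discard(addr)
        self.lines.pop(addr, None)
```

With write-through, memory is always current; with write-back, memory is stale until the flush, which is exactly why multiple caches need a coherence protocol.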

Page 6: Lecture6

Implementations

• Software Solutions:
– Compile-time decision
– Conservative
– Inefficient cache utilization

• Hardware Solutions:
– Runtime decision
– More effective

Page 7: Lecture6

Hardware based solution

• Directory Protocol
• Snoopy Protocol

Page 8: Lecture6

Directory

• Centralized Controller
– Individual cache controllers make requests
– The centralized controller checks each request and issues commands
– Updates its state information

Page 9: Lecture6

Directory

• Write
– A processor requests exclusive write access
– The controller sends messages to the other caches
– Their copies are invalidated

• Read
– The controller issues a command to the processor holding the modified copy
– The holding processor writes back to main memory
– The read is then permitted

Page 10: Lecture6

Directory

• Disadvantage
– Centralized controller can become a bottleneck

• Advantage
– Useful in large-scale systems
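The directory behavior described above can be sketched as a table of sharers per block (a hypothetical, illustrative `Directory` class; real directories track presence bits per cache in hardware):

```python
class Directory:
    """Centralized controller: tracks which caches hold each block."""

    def __init__(self):
        self.sharers = {}  # block -> set of cache ids holding a copy

    def read(self, block, cache_id):
        """Record that cache_id now holds a shared copy of block."""
        self.sharers.setdefault(block, set()).add(cache_id)

    def write(self, block, cache_id):
        """Grant exclusive write access: invalidate every other sharer."""
        invalidated = self.sharers.get(block, set()) - {cache_id}
        self.sharers[block] = {cache_id}  # sole owner after the write
        return invalidated                # invalidate messages to send
```

Every request funnels through this one structure, which illustrates both the advantage (no broadcast needed, so it scales to large systems) and the disadvantage (the controller is a serialization point).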

Page 11: Lecture6

Snoopy Protocol

• Update operations are announced on the bus
• All cache controllers snoop the bus
• Relies on a bus architecture
– Caution: increased bus traffic

Page 12: Lecture6

Snoopy Protocol

• Two approaches
– Write Invalidate
• One writer
• Multiple readers
• Exclusive: the writer invalidates the other caches' entries

– Write Update
• Multiple writers
• All writes are broadcast to update the other copies
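The difference between the two approaches can be sketched in a few lines (illustrative only; caches are modeled as plain dicts and the bus broadcast as a loop):

```python
def snoop_write(caches, writer, addr, value, policy):
    """Apply a write under a snoopy protocol.

    policy is "invalidate" (other copies removed) or
    "update" (other copies refreshed with the new value).
    """
    caches[writer][addr] = value
    for i, cache in enumerate(caches):
        if i == writer or addr not in cache:
            continue  # only snooping caches holding a copy react
        if policy == "invalidate":
            del cache[addr]        # other readers must re-fetch later
        else:
            cache[addr] = value    # other copies updated in place
```

Invalidate sends one short message but forces re-fetches; update keeps all copies current at the cost of broadcasting every written value, which is the bus-traffic trade-off noted above.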

Page 13: Lecture6

Write Invalidate

• The MESI Protocol: used in the P4 processor
– Data cache: two status bits encode 4 states
• Modified
• Exclusive
• Shared
• Invalid
• See Table

Page 14: Lecture6

4 Possibilities

• Read Miss
– EX to SH
– SH to SH
– MO to SH

• Read Hit
– No state change

• Write Miss
– RWITM (Read With Intent To Modify)
– MO to IN
– SH to IN

• Write Hit
– SH to IN (other shared copies invalidated)
– EX to MO
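The local-processor side of these transitions can be sketched as a small state function (a simplified, illustrative model; bus-side transitions triggered by other processors' snoops are omitted):

```python
MO, EX, SH, IN = "MO", "EX", "SH", "IN"  # the four MESI states

def next_state(state, event, others_have_copy):
    """New MESI state of a line after a local read or write."""
    if event == "read":
        if state == IN:  # read miss: fetch the line
            return SH if others_have_copy else EX
        return state     # read hit: no state change
    if event == "write":
        # Write miss: RWITM fetches the line for modification.
        # Write hit on SH/EX: others invalidated, line becomes MO.
        return MO
    raise ValueError(f"unknown event: {event}")
```

A read miss lands in Exclusive only when no other cache holds the line; any local write ends in Modified, with the snooping caches moving their copies to Invalid.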

Page 15: Lecture6

L1-L2 Cache Consistency

Page 16: Lecture6

Parallel programming and Amdahl's Law

Suppose a fraction 1/N of the execution time is spent in sequential code, and 1 - 1/N in parallel code.

Page 17: Lecture6

Amdahl's Law

• Speedup: the speed gain of using N parallel processors vs. a single processor

• Speedup = 1/(s + (p/N))
– s = sequential fraction of the code, p = parallel fraction, N = number of processors
– S = T(1)/T(j) for j parallel processors

• As problem size increases, p may rise and s may decrease
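The formula above is a one-liner to evaluate (with p = 1 - s, since the two fractions sum to 1):

```python
def amdahl_speedup(s, n):
    """Amdahl's law: speedup = 1 / (s + p/N), where p = 1 - s.

    s: sequential fraction of the code (0 <= s <= 1)
    n: number of processors
    """
    p = 1.0 - s
    return 1.0 / (s + p / n)
```

For example, with s = 0.1 even 1000 processors give a speedup below 10, since the limit as N grows is 1/s; this is why shrinking s matters more than adding processors.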