
Page 1: © 2009 IBM Corporation Motivation for HPC Innovation in the Coming Decade Dave Turek VP Deep Computing, IBM.


Motivation for HPC Innovation in the Coming Decade

Dave Turek, VP Deep Computing, IBM

Page 2:


High Performance Computing Trends

Three distinct phases:
• Past: exponential growth in processor performance, mostly through CMOS technology advances
• Near Term: exponential (or faster) growth in the level of parallelism
• Long Term: power cost approaches system cost; invention required

The curve is indicative not only of peak performance but also of performance per dollar.

[Figure: "Supercomputer Peak Performance." X-axis: Year Introduced, 1940–2020; Y-axis: Peak Speed (flops), log scale from 1E+2 to 1E+17. Trend line: doubling time = 1.5 yr. Systems plotted along the trend: ENIAC (vacuum tubes), UNIVAC, IBM 701, IBM 704, IBM 7090 (transistors), IBM Stretch, CDC 6600 (ICs), CDC 7600, ILLIAC IV, CDC STAR-100 (vectors), CRAY-1, Cyber 205, X-MP2 (parallel vectors), S-810/20, SX-2, CRAY-2, X-MP4, Y-MP8, VP2600/10, SX-3/44, i860 (MPPs), Delta, CM-5, Paragon, NWT, T3D, CP-PACS, T3E, ASCI Red, ASCI Red Option, SX-4, Blue Pacific, SX-5, ASCI White, Earth Simulator, ASCI Purple, Red Storm, Blue Gene/L, Blue Gene/P, Blue Gene/Q. The Past / Near Term / Long Term phases are annotated, with milestones 1 PF: 2008, 10 PF: 2011, 1 EF: 201X?]
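The 1.5-year doubling time implies roughly a 10× gain every five years, which is the arithmetic behind the milestone dates on the chart. A minimal sketch (the `years_to_grow` helper is illustrative, not from the slide):

```python
# Project how long a given performance multiple takes under the
# chart's observed ~1.5-year doubling time.
import math

DOUBLING_TIME_YEARS = 1.5

def years_to_grow(factor):
    """Years needed to multiply peak performance by `factor`."""
    return DOUBLING_TIME_YEARS * math.log2(factor)

# 1 PF (2008) -> 10 PF: one order of magnitude
print(round(years_to_grow(10), 1))    # -> 5.0, i.e. ~2013 on trend
# 1 PF (2008) -> 1 EF: a factor of 1000
print(round(years_to_grow(1000), 1))  # -> 14.9, i.e. ~2023 on trend
```

Note the slide's 10 PF: 2011 target is slightly ahead of the trend line, while the open "201X?" exaflop date is consistent with the trend alone landing in the early 2020s.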

Page 3:


[Figure: application requirements mapped against sustained performance on a log axis from 10G through 100G, 1T, 10T, 100T, 1P, 10P, 100P, to 1E flops. Systems marked for scale: Power6, QS22 Blade, QS22 Rack, P575 Rack, P505Q Rack, BG/P Rack, BlueGene/P, Roadrunner, 10 BG/P Racks. Application domains and workloads, in roughly increasing order of demand:
• Weather / Climate: 2-day forecast, 10-day forecast, hurricane models, ocean models, astrophysics, global warming, multi-scale multi-physics climate models
• Engineering: airfoil design, vibroacoustic analysis, full automobile, computational photo-lithography, full aircraft design
• Geosciences / Energy (nuclear): earthquake, solid Earth (petroleum, water, voids), nuclear fission, plasma (fusion)
• Life Sciences: rigid docking, in vivo bone analysis, peptide analysis, massive rigid docking, full bone analysis, mouse brain, rat brain, protein folding, free-energy-based docking, first-principle docking, human brain, G-receptors
• Materials Modeling: electronic structure calculations, nano-scale modeling, electron transfer, computational spectroscopy, phase transitions, first-principle device simulations, high-k materials, multi-scale material simulations]

Page 4:


Core Frequencies
• ~2–4 GHz; will not change significantly as we go forward
• 100,000,000 cores needed to deliver an exaflop

Power
• At today's megaflops/watt: 2 GW needed (~$2B/yr)
• Power reduction will force simpler chips, longer latencies, more caches, and nearest-neighbor networks

Memory and Memory Bandwidth
• Much less memory per core (price)
• Much less bandwidth per core (power / technology)

Network Bandwidth
• Much less network bandwidth per core (price per core; a full fat tree costs ~$1B to $4B)
• Local network connectivity

Reliability
• Expect algorithms / applications will have to permit / survive hardware failures

I/O Bandwidth
• At 1 byte/flop, an exaflop system will have 1 exabyte of memory
• No disk system can read / write this amount of data in reasonable time (BG/P can dump 4 TB from memory in ~1 min, but the disk array ingests it in ~15 min)
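The I/O argument scales badly: extrapolating the BG/P disk-array ingest rate cited on the slide (4 TB in ~15 min) to a 1 EB memory image gives a hopeless dump time. A rough sketch, with the bandwidth derived purely from those two slide numbers:

```python
# How long would a full-memory dump of an exaflop machine take at
# the disk bandwidth implied by the slide's BG/P example?
MEMORY_BYTES = 1e18                  # 1 EB at 1 byte/flop
DISK_INGEST_BPS = 4e12 / (15 * 60)   # 4 TB in 15 min -> ~4.4 GB/s

full_dump_hours = MEMORY_BYTES / DISK_INGEST_BPS / 3600
print(round(full_dump_hours))        # -> 62500 hours, i.e. ~7 years
```

Even allowing for a far larger disk array, the gap between memory size and achievable I/O bandwidth is the point of the slide.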

Computer Design Challenges

Exascale Computing
• O(100 M) compute engines working together

Capability delivered has the potential to be truly revolutionary.

However:
• Systems will be complex
• Software will be complex
• Applications will be complex
• Data centers will be complex
• Maintenance / management will be complex
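The 2 GW and ~$2B/yr power figures above can be reproduced with simple arithmetic. In this sketch, the ~500 Mflops/W efficiency and $0.11/kWh utility rate are assumptions chosen to match the slide's round numbers, not figures from the slide itself:

```python
# Back-of-envelope check of the slide's exascale power and cost claims.
EXAFLOP = 1e18            # target flops
MFLOPS_PER_WATT = 500e6   # assumed 2009-era efficiency, flops per watt

power_watts = EXAFLOP / MFLOPS_PER_WATT
print(power_watts / 1e9)  # -> 2.0 (GW), matching the slide

HOURS_PER_YEAR = 24 * 365
DOLLARS_PER_KWH = 0.11    # assumed utility rate
annual_cost = (power_watts / 1e3) * HOURS_PER_YEAR * DOLLARS_PER_KWH
print(round(annual_cost / 1e9, 1))  # -> 1.9, i.e. the slide's ~$2B/yr
```

The same arithmetic run backwards shows why efficiency is the binding constraint: holding the power budget to ~20 MW would require roughly a 100× improvement in flops per watt.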

Page 5:


Summary: Why Exascale?

• Applications not possible with smaller machines
• Applications with multiple integrated components for complex systems
• Applications needing many iterations for sensitivity analysis, etc.

Exascale has enormous challenges!
• Power

• Cost

• Memory requirements

• Usability

Users will need time on a successive series of larger platforms to get to exascale.
• Code development will be a large undertaking, and tools to assist in this effort are critical.

Thank You

[Closing slide keywords: Capability, Understanding, Complexity]