Welcome To The 33rd HPC User Forum Meeting, September 2009

Transcript of the slide presentation "Welcome To The 33rd HPC User Forum Meeting," September 2009.

Page 1: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Welcome To The 33rd HPC User Forum Meeting
September 2009

Page 2: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Important Dates For Your Calendar

FUTURE HPC USER FORUM MEETINGS:

October 2009 International HPC User Forum Meetings:
HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday)
EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday)

US Meetings:
April 12-14, 2010, Dearborn, Michigan, at the Dearborn Inn
September 2010, Seattle, Washington

Page 3: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Thank You To Our Meal Sponsors!

Wednesday Breakfast -- Hitachi Cable America

Wednesday Lunch -- Altair Engineering & AMD

Wednesday Break -- Appro International

Thursday Breakfast -- Mellanox Technologies

Thursday Lunch -- Microsoft

Thursday Break -- ScaleMP

Page 4: Welcome To The 33rd HPC User Forum Meeting, September 2009.

A Petascale Trivia Question

How many years would 1,000 scientists have to calculate by hand to equal 1 second of work on a 0.1 PFLOPS supercomputer?

Assuming that they can do 1 calculation every second, with no rest time (and a long life)

Page 5: Welcome To The 33rd HPC User Forum Meeting, September 2009.

A Petascale Trivia Answer

3,200 Years

0.1 PF ≈ 1,000 scientists × 365 × 24 × 60 × 60 seconds/year × 3,200 years

From DOD's new Mana supercomputer in Hawaii:
A Dell PowerEdge M610 cluster with 1,152 nodes
Each node contains two 2.8 GHz Intel Nehalem processors, for a total of 9,216 compute cores
That gives it a PEAK performance of 103 TFLOPS, or 0.1 PFLOPS
From MHPCC Acting Director: David L. Stinson
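The arithmetic checks out; here is a minimal Python sketch for the curious (the 4 flops per cycle per core figure for Nehalem is our assumption, not stated on the slide):

```python
# Sanity-check the trivia arithmetic (illustrative sketch).
SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # 31,536,000
scientists = 1_000                      # each doing 1 calculation per second
calcs_per_second = 0.1e15               # 0.1 PFLOPS = 1e14 calculations/second

years = calcs_per_second / (scientists * SECONDS_PER_YEAR)
print(f"{years:,.0f} years")            # ~3,171, i.e. roughly 3,200 years

# Mana's peak: 9,216 cores x 2.8 GHz x 4 flops/cycle (assumed for Nehalem)
print(f"{9_216 * 2.8e9 * 4 / 1e12:.1f} TFLOPS")  # ~103.2 TFLOPS
```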

Page 6: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Tuesday Dinner Vendor Updates: 10 Min. Only

IBM

Appro

Hitachi Cable

Luxtera

Mellanox

ScaleMP

Tech-X

Mitrionics

Page 7: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Welcome To The 33rd HPC User Forum Meeting
September 2009

Page 8: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Introduction: Logistics

Ask Mary if you need a receipt
Meals and events
Wednesday tour and dinner plans
We have a very tight agenda (as usual); please help us keep on time!
Review handouts
Note: We will post most of the presentations on the web site
Please complete the evaluation form

Page 9: Welcome To The 33rd HPC User Forum Meeting, September 2009.

HPC User Forum Mission

To improve the health of the high-performance computing industry through open discussions, information-sharing and initiatives involving HPC users in industry, government and academia, along with HPC vendors and other interested parties.

Page 10: Welcome To The 33rd HPC User Forum Meeting, September 2009.

HPC User Forum Goals

Assist HPC users in solving their ongoing computing, technical and business problems

Provide a forum for exchanging information, identifying areas of common interest, and developing unified positions on requirements:
• By working with users in other sectors and vendors
• To help direct and push vendors to build better products
• Which should also help vendors become more successful

Provide members with a continual supply of information on: uses of high-end computers, new technologies, high-end best practices, market dynamics, computer systems and tools, benchmark results, vendor activities and strategies

Provide members with a channel to present their achievements and requirements to interested parties

Page 11: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Important Dates For Your Calendar

FUTURE HPC USER FORUM MEETINGS:

October 2009 International HPC User Forum Meetings:
HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday)
EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday)

US Meetings:
April 12-14, 2010, Dearborn, Michigan, at the Dearborn Inn
September 2010, Seattle, Washington

Page 12: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Thank You To Our Meal Sponsors!

Wednesday Breakfast -- Hitachi Cable America

Wednesday Lunch -- Altair Engineering

Wednesday Break -- Appro International & AMD

Thursday Breakfast -- Mellanox Technologies

Thursday Lunch -- Microsoft

Thursday Break -- ScaleMP

Page 13: Welcome To The 33rd HPC User Forum Meeting, September 2009.

1Q 2009 HPC Market Update

Page 14: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Q109 HPC Market Result: Down 16.8%

Departmental ($100K-$250K): $754M
Divisional ($250K-$500K): $237M
Supercomputers (over $500K): $802M
Workgroup (under $100K): $282M

HPC Servers total: $2.1B

Source: IDC, 2009

Page 15: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Q109 Vendor Share in Revenue

HP: 28.9%
IBM: 25.3%
Dell: 12.0%
Other: 11.7%
NEC: 9.4%
Sun: 3.6%
Fujitsu: 3.4%
Cray: 2.9%
SGI: 1.5%
Bull: 0.6%
Appro: 0.4%
Dawning: 0.3%

Page 16: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Q109 Cluster Vendor Shares

HP: 30.8%
Dell: 21.7%
Other: 21.6%
IBM: 12.3%
Sun: 5.7%
Fujitsu: 2.6%
NEC: 1.8%
Bull: 1.1%
SGI: 1.1%
Appro: 0.7%
Dawning: 0.6%

Page 17: Welcome To The 33rd HPC User Forum Meeting, September 2009.

HPC Compared To IDC Server Numbers

Page 18: Welcome To The 33rd HPC User Forum Meeting, September 2009.

HPC Qview Tie To Server Tracker: 1Q 2009 Data

All WW Servers As Reported In IDC Server Tracker: $9.9B

Tracker (QST) data focus: compute nodes
HPC Qview data focus: the complete system, "everything needed to turn it on"

HPC Qview Compute Node Revenues: ~$1.05B*
* This number ties the two data sets on an apples-to-apples basis

HPC Special Revenue Recognition Services: ~$474M
Includes those sold through custom engineering, R&D offsets, or paid for over multiple quarters

HPC Computer System Revenues Beyond The Base Compute Nodes: ~$576M
Includes interconnects and switches, in-built storage, scratch disks, OS, middleware, warranties, installation fees, service nodes, special cooling features, etc.
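The tie between the two data sets is a simple sum; a minimal Python sketch using the figures above (the variable names are ours, and the $1,048M compute-node detail comes from the table on page 44):

```python
# 1Q09 HPC revenue components from the Qview/Server Tracker tie ($M).
compute_nodes = 1_048       # ties to the IDC Server Tracker, apples-to-apples
special_recognition = 474   # custom engineering, R&D offsets, multi-quarter deals
beyond_base_nodes = 576     # interconnects, storage, software, warranties, etc.

total_system = compute_nodes + special_recognition + beyond_base_nodes
print(f"${total_system:,}M")  # $2,098M, i.e. the ~$2.1B full-system Qview figure
```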

Page 19: Welcome To The 33rd HPC User Forum Meeting, September 2009.

OEM Mix Of HPC Special Revenue Recognition Services

Non-SEC Reported Product Revenues = $474M

IBM: 43%
HP: 30%
Dell: 12%
Other: 11%
Sun: 4%

Notes:
• Includes product sales that are not reported by OEMs as product revenue in a given quarter; sometimes HPC systems are paid for across a number of quarters or even years
• Includes NRE, if required for a specific system
• Includes custom engineering sales
• Some examples: Earth Simulator, ASCI Red, ASCI Red Storm, DARPA systems, and many small and medium HPC systems that are sold through a custom engineering or services group because they need extra things added

Page 20: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Areas Of HPC "Uplift" Revenues

How The $576M "Uplift" Revenues Are Distributed:

Computer hardware (in cabinet): 45%
Software: 16%
External interconnects: 12%
External storage: 12%
Bundled warranties: 8%
Misc. items: 7%
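To put those percentages in dollar terms, a small sketch (our arithmetic, not figures from the slide):

```python
# Convert the $576M "uplift" distribution from percentages to $M.
UPLIFT_TOTAL = 576  # $M
shares = {
    "Computer hardware (in cabinet)": 45,
    "Software": 16,
    "External interconnects": 12,
    "External storage": 12,
    "Bundled warranties": 8,
    "Misc. items": 7,
}
assert sum(shares.values()) == 100  # the chart covers the full $576M
for item, pct in shares.items():
    print(f"{item:31s} ${UPLIFT_TOTAL * pct / 100:6.1f}M")
```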

Page 21: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Areas Of HPC "Uplift" Revenues (Notes)

Notes:

* Computer hardware (in cabinet) -- hybrid nodes, service nodes, accelerators, GPGPUs, FPGAs, internal interconnects, in-built disks, in-built switches, special cabinet doors, special signal processing parts, etc.

* External interconnects -- switches, cables, extra cabinets to hold them, etc.

* External storage -- scratch disks, interconnects to them, cabinets to hold them, etc. (This excludes user file storage devices)

* Software -- includes both bundled and separately charged software if sold by the OEM or on the purchase contract -- the operating system, license fees, the entire middleware stack, compilers, job schedulers, etc. (it excludes all ISV applications unless sold by the OEM and in the purchase contract)

* Bundled warranties

* Misc. items -- since the HPC taxonomy includes everything required to turn on the system and make it operational, this covers items like bundled installation services, special features and other add-on hardware, and even a special paint job if required

Page 22: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Special Paint Jobs Are Back …

http://www.afrl.hpc.mil/consolidated/hardware.php

Page 23: Welcome To The 33rd HPC User Forum Meeting, September 2009.

2010 IDC HPC Research Areas

• Quarterly HPC Forecast Updates: until the world economy recovers

• New HPC End-user Based Reports: clusters, processors, accelerators, storage, interconnects, system software, and applications; the evolution of government HPC budgets; China and Russia HPC trends

• Power and Cooling Research

• Developing a Market Model For Middleware and Management Software

• Extreme Computing

• Data Center Assessment and Benchmarking

• Tracking Petascale and Exascale Initiatives

Page 24: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Agenda: Day One, Wednesday Morning

8:10am Introductions and Welcome, Steve Finn and Earl Joseph
Morning Session Chair: Steve Finn
8:15am Weather/climate presentation from ORNL, Jim Hack
8:45am Weather/climate presentation from NCAR, Henry Tufo
9:15am Weather/climate presentation from NASA/Goddard, Phil Webster
9:45am Two short vendor technology updates (Altair and Sun)
10:15am Break
10:30am Weather/climate presentation from NRL Monterey, Jim Doyle
11:00am Weather and Climate Directions from an IBM perspective, Jim Edwards
11:25am Panel on HPC Weather/Climate/Earth Sciences Requirements & Directions
Moderators: Steve Finn and Earl Joseph
12:00pm Networking Lunch

Page 25: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Lunch Break
Thanks to Altair Engineering

Please Return Promptly at 1:00pm

Page 26: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Thank You Altair Engineering

For Lunch

Page 27: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Agenda: Day One, Wednesday Afternoon

Afternoon Session Chair: Paul Muzio
1:00pm HPC in Europe, HECToR Update, Andrew Jones, NAG
1:30pm DOD HPCMP Program Update, Larry Davis
2:00pm Weather/climate Research at Northrop Grumman, Glenn Higgins
2:25pm Weather and Climate Directions from a Cray perspective, Per Nyberg
2:50pm Panel on Government and Political Issues, Concerns and Ideas for New Directions
Moderator: Charlie Hayes
3:30pm DICE Parallel File System Project, Tracey Wilson
4:00pm NCAR HPC User Site Tour, return by 6:00pm
6:00pm Networking break and time for 1-on-1 meetings
6:30pm Special Dinner Event

Page 28: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Welcome To Day 2 Of The HPC User Forum Meeting

Page 29: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Thank You To Our Meal Sponsors!

Wednesday Breakfast -- Hitachi Cable America

Wednesday Lunch -- Altair Engineering

Wednesday Break -- Appro International

Thursday Breakfast -- Mellanox Technologies

Thursday Lunch -- Microsoft

Thursday Break -- ScaleMP

Page 30: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Agenda: Day Two, Thursday Morning

8:10am Welcome, Earl Joseph and Steve Finn
Morning Session Chair: Douglas Kothe
8:15am Power Grid Research at PNNL, Mo Khaleel
8:45am HPC Data Center Power and Cooling Issues, and New Ways to Measure HPC Systems, Roger Panton, Avetec
9:15am Compiler and Tools: User Requirements from ARSC, Edward Kornkven
9:45am New HPC Directions at Microsoft, Roger Barga
10:15am Break
10:30am Technical Panel on HPC Front-End Compiler Requirements and Directions
Moderators: Robert Singleterry, Vince Scarafino
12:15pm Networking Lunch

Page 31: Welcome To The 33rd HPC User Forum Meeting, September 2009.

73?

Page 32: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Lunch Break
Thanks to Microsoft

Please Return Promptly at 1:00pm

Page 33: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Thank You Microsoft For Lunch

Page 34: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Agenda: Day Two, Thursday Afternoon

Afternoon Session Chair: Jack Collins
1:00pm ARL HPC User Site Update, Thomas Kendall
1:30pm Weather/climate presentation from NCAR, John Michelakes
2:00pm Technical Panel on HPC Application Scaling Issues, Requirements and Trends
Moderators: Doug Kothe and Paul Muzio. Panel members:
3:15pm Short vendor technology update (Microsoft)
3:30pm Break
4:00pm Weather/climate presentation from NASA Langley, Mike Little
4:30pm "Spider," the Largest Lustre File System, ORNL, Galen Shipman
5:00pm Meeting Wrap-Up and Future Meeting Dates, Earl Joseph and Steve Finn
5:00pm Meeting Ends

Page 35: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Important Dates For Your Calendar

FUTURE HPC USER FORUM MEETINGS:

October 2009 International HPC User Forum Meetings:
HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday)
EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday)

US Meetings:
April 12-14, 2010, Dearborn, Michigan, at the Dearborn Inn
September 2010, Seattle, Washington

Page 36: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Thank You For Attending The 33rd HPC User Forum Meeting

Page 37: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Please email: [email protected]
Or check out: www.hpcuserforum.com

Questions?

Page 38: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Please email: [email protected]
Or check out: www.hpcuserforum.com

Questions?

Page 39: Welcome To The 33rd HPC User Forum Meeting, September 2009.

HPC User Forum Steering Committee Meeting
September 2009

Page 40: Welcome To The 33rd HPC User Forum Meeting, September 2009.

How Did The Meeting Go?

What worked well?

What needs to be changed or improved?

Dates and locations for the next Steering Committee meetings?
SC09 (Monday); January 2010 at NASA

Page 41: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Important Dates For Your Calendar

FUTURE HPC USER FORUM MEETINGS:

October 2009 International HPC User Forum Meetings:
HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday)
EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday)

US Meetings:
April 12-14, 2010, Dearborn, Michigan, at the Dearborn Inn
September 2010, Seattle, Washington

Page 42: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Please email: [email protected]
Or check out: www.hpcuserforum.com

Questions?

Page 43: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Q408 vs. Q109

Revenue ($K):

Segment         Q408 Revenue   Q109 Revenue   Sequential Growth
Supercomputer        769,055        801,874         4.3%
Divisional           373,534        237,496       -36.4%
Departmental         953,635        753,893       -20.9%
Workgroup            399,757        282,198       -29.4%
Grand Total        2,495,981      2,075,460       -16.8%

Shipments (units):

Segment         Q408 Shipments   Q109 Shipments   Sequential Growth
Supercomputer              464              342       -26.3%
Divisional               1,083              852       -21.3%
Departmental             6,762            4,291       -36.5%
Workgroup               27,697           16,189       -41.5%
Grand Total             36,006           21,845       -39.3%
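The sequential growth column follows directly from the two quarters; here is a short Python check of the revenue figures (a sketch):

```python
# Reproduce the sequential growth column: (Q109 - Q408) / Q408, revenue in $K.
revenue = {
    "Supercomputer": (769_055, 801_874),
    "Divisional":    (373_534, 237_496),
    "Departmental":  (953_635, 753_893),
    "Workgroup":     (399_757, 282_198),
}
totals = tuple(sum(q) for q in zip(*revenue.values()))
for name, (q408, q109) in {**revenue, "Grand Total": totals}.items():
    print(f"{name:13s} {100 * (q109 - q408) / q408:6.1f}%")
# Prints 4.3, -36.4, -20.9, -29.4 and -16.8, matching the table above.
```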

Page 44: Welcome To The 33rd HPC User Forum Meeting, September 2009.

HPC Qview Tie To Server Tracker: 1Q 2009 Data

All figures in $M.

Vendor    HPC Server    HPC Special Revenue     Revenue Beyond    Q1 HPC    Q1
          Revenue       Recognition Services    Base Nodes        Total     Share
HP          $268              $140                  $190            $598     29%
IBM         $178              $207                  $140            $525     25%
Dell        $102               $57                   $90            $249     12%
Sun          $36               $20                   $19             $75      4%
Other       $464               $50                  $137            $651     31%
Total     $1,048              $474                  $576          $2,098    100%

Note: The HPC Server Revenue total ($1,048M) ties to the Server Tracker; the Q1 HPC Total ($2,098M) ties to the HPC Qview.

Page 45: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Government Panel Questions

Page 46: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Government Panel Questions

#1 If you believe that the US's greatest asset in the next 25 years will be our ability to lead the world in the development of intellectual property:

Do you believe the USG is providing sufficient investment to ensure US competitiveness in science and technology in general, and HPC in particular? Elaborate.

What do you think the USG should or should not do to help HPC?

Page 47: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Government Panel Questions

#2 Most hardware vendors will agree that profit margins on USG HPC procurements, especially those at the high end, are often negligible at best.

a. While it is generally understood that the USG is obligated to try to get the best value for its money, is there a greater obligation beyond a specific procurement for the USG’s behavior towards the industry in general?

b. If you believe a healthy US HPC community is important for US competitiveness, what, if anything, should the USG specifically do to help the financial or business health of the US HPC vendors?

c. Should the vendors, via one or more of the industry groups, lobby for more lenient procurement terms, less stringent benchmarks, and lower penalties in advanced system procurements?

d. Or, should the vendors simply “no bid” more frequently, until the USG relaxes its procurement terms?

Page 48: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Government Panel Questions

#3 Do you agree that the USG emphasis, especially within DOE and the NSF, in the area of petascale and exascale computing is appropriate and the best use of USG funding for support of the US HPC industry and HPC technology development?

Please Elaborate

Page 49: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Government Panel Questions

#4 Over the past forty years or so, up to about the mid-1990s, industry traditionally followed the lead of the USG in adopting HPC technology. For example, Cray Research sold more Y-MP supercomputers to industry than to governments. Why hasn't US industry followed the lead of the USG in the race to petascale computing?

a. Is it because their traditional applications don't need to scale that high?

b. Is it because ISV software (and their own s/w) doesn't scale?

c. Is it because of the per-CPU software costs?

d. Will this affect US competitiveness?

e. What action should the USG take, if any, to encourage industry adoption of high-end HPC specifically, or HPC of any size, in general?

Page 50: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Government Panel Questions

#5 At the National Science Foundation there have been two major HPC system funding programs over the past three years:

a. The Track 1 Program, to fund the world's most powerful "leadership class" petascale supercomputer, an IBM system developed under the DARPA HPCS Program, planned for installation at NCSA in 2011. (It is important to note that Cray is also developing a multi-petaFLOP system under the DARPA HPCS Program, which is currently expected to be installed at the Oak Ridge National Laboratory, funded by DOE.)

b. The Track 2 Program, four annual procurements to install "mid-range" systems smaller than the Track 1 system but of a size to bridge the gap between current HPC systems and more advanced petascale systems. The first Track 2 system was installed at TACC at the University of Texas. The second and third systems are scheduled for the University of Tennessee at ORNL and the University of Pittsburgh and Carnegie Mellon at the Pittsburgh Supercomputing Center. The results of the fourth annual procurement, promised to be a multiple buy of up to four systems, have yet to be announced.

Questions:

a. Do you agree specifically with the NSF Track 1 and 2 programs, or do you think NSF's resources should have been, or should in the future be, distributed more broadly throughout academia? Why?

b. Now that the fourth and last Track 2 procurement is nearly over, what do you recommend NSF should do next with respect to HPC?

Page 51: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Government Panel Questions

#6 Do you believe the USG is funding HPC software to the degree necessary to ensure US leadership?

a. Specifically with respect to the petascale programs?

b. With respect to assisting the ISVs or industrial corporations themselves?

c. What other actions would you recommend to improve the US posture with respect to software, especially for US competitiveness?

Page 52: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Government Panel Questions

#7 Do you think programs like DARPA HPCS will lead the mainstream HPC industry toward higher productivity and performance, or will the technologies developed for these programs split off from most of HPC and go their own way?

Page 53: Welcome To The 33rd HPC User Forum Meeting, September 2009.

Government Panel Questions

#8 In summation, if any panel members have comments regarding HPC public policy not previously expressed, please take a few minutes to summarize the points of importance to you.