HPC IS USEFUL ONLY FOR CFD SIMULATIONS - Ansys

Advances have made tasks that were previously solvable only by a supercomputer tractable for SMALLER COMPUTERS.

CFD simulations can clearly scale up to higher core counts than structural mechanics or electromagnetics simulations because of the nature of the underlying numerical algorithms. But this does not mean that structural and electromagnetics simulations are not amenable to HPC solutions.

In a recent ANSYS survey, about ONE THIRD OF THE ALMOST 3,000 RESPONDENTS said that they limit the size or amount of detail for nearly every model they run due to compute capacity and/or turnaround time limitations. An additional 57 PERCENT OF RESPONDENTS said they impose these limits on at least some of their models.

Twenty years ago, HIGH-PERFORMANCE COMPUTING (HPC) was used for engineering simulation primarily by larger enterprise organizations. HPC is now gradually becoming more common in smaller and mid-sized companies as well. Nevertheless, many misconceptions still surround HPC, preventing its adoption and/or growth in product development scenarios where it could clearly provide a higher return on investment.

MYTH #1: HIGH-PERFORMANCE COMPUTING IS AVAILABLE ON SUPERCOMPUTERS ONLY

Years ago, only SUPERCOMPUTERS had enough computing power and capacity to perform even routine simulations.

MYTH #4: WITHOUT INTERNAL IT SUPPORT, HPC CLUSTER ADOPTION IS NOT POSSIBLE

PROBLEM: Lack of expertise and time to specify hardware configurations is a leading barrier.

SOLUTION: ANSYS has teamed up with HPC strategic partners to make specification and deployment of HPC easier for you.


MYTH #2: HPC IS USEFUL ONLY FOR CFD SIMULATIONS

MYTH #3: I DON'T NEED HPC - MY JOB IS RUNNING FAST ENOUGH

- MORE COMPUTING CORES PER CPU
- INTEGRATED I/O ON A PROCESSOR DIE (YIELDING HIGHER MEMORY BANDWIDTH)
- MORE AND FASTER MEMORY CHANNELS
- LARGER L3 CACHE SIZE, FASTER DISK STORAGE
- FASTER INTERCONNECTS
- AVX SUPPORT
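As a quick, hedged illustration (not part of the original infographic), the short Python sketch below reports how many logical cores and which AVX variants a workstation already exposes, assuming a Linux host where /proc/cpuinfo is available.

```python
# Minimal sketch (assumption: Linux host with /proc/cpuinfo) that reports
# how many of the hardware features listed above a machine exposes.
import os

def cpu_summary(cpuinfo_path="/proc/cpuinfo"):
    logical_cores = os.cpu_count()  # logical cores visible to the OS
    flags = set()
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
                    break
    except OSError:
        pass  # not Linux, or /proc unavailable
    return {
        "logical_cores": logical_cores,
        "avx": "avx" in flags,
        "avx2": "avx2" in flags,
        "avx512": any(f.startswith("avx512") for f in flags),
    }

if __name__ == "__main__":
    print(cpu_summary())
```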

[Chart: Continually Improving Core Solver Rating to 128 Cores - solver speedup, averaged over 10 different ANSYS Mechanical benchmarks, at 2 cores (1 node), 16 cores (1 node), 32 cores (2 nodes), 64 cores (4 nodes) and 128 cores (8 nodes)]

FREQUENCY OF LIMITING SIZE/DETAIL IN SIMULATION MODELS DUE TO COMPUTER INFRASTRUCTURE OR TURNAROUND TIME LIMITATIONS

- For some models: 57%
- Nearly every model: 34%
- Almost never: 9%

At ANSYS, an intense focus on HPC software development has produced capabilities that set us apart and enable breakthrough productivity on current and emerging hardware solutions.

If your CFD software can only scale down to about 100,000 cells per core, then a typical two-million-cell model can run efficiently on only 20 cores. Having more cores available is useless in this case because they do not increase the simulation speed. However, if the CFD software can scale down to about 5,500 cells per core (as ANSYS Fluent can), the same two-million-cell model can run on about 360 cores, resulting in an almost 18-fold faster turnaround.
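To make that arithmetic concrete, here is a minimal sketch; the two-million-cell model and the 100,000 and 5,500 cells-per-core limits come from the text above, while the helper function and its name are purely illustrative.

```python
# Minimal sketch of the cells-per-core arithmetic described above.
# Assumption: turnaround improves roughly in proportion to the number of
# cores the solver can still use efficiently.

def max_efficient_cores(total_cells: int, min_cells_per_core: int) -> int:
    """Largest core count at which every core still has enough cells."""
    return max(1, total_cells // min_cells_per_core)

total_cells = 2_000_000                                    # typical CFD model
cores_legacy = max_efficient_cores(total_cells, 100_000)   # 20 cores
cores_fluent = max_efficient_cores(total_cells, 5_500)     # 363 cores (~360)

print(cores_legacy, cores_fluent, cores_fluent / cores_legacy)
# 20 vs. ~360 usable cores: roughly an 18-fold larger core count,
# hence the ~18x faster turnaround quoted above.
```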

MYTH #5: PARALLEL SCALABILITY IS ALL ABOUT THE SAME, RIGHT?

The key is that the cost of HPC is matched to value, and HPC offers one of the greatest returns on investment possible. ANSYS HPC software licensing is built on pricing models that ensure the highest value for engineering simulation workloads while allowing ANSYS to continue our HPC software development.

MYTH #6: HPC SOFTWARE AND HARDWARE ARE RELATIVELY EXPENSIVE



VALUE PROPOSITION OF HPC

- ENHANCES ENGINEERING PRODUCTIVITY BY ACCELERATING SIMULATION THROUGHPUT
- ENABLES ENGINEERS TO CONSIDER MORE DESIGN IDEAS AND MAKE EFFICIENT PRODUCT DEVELOPMENT DECISIONS
- ALLOWS ENGINEERS TO SIMULATE LARGER, MORE COMPLEX MODELS SO THAT MORE ACCURATE DESIGN DECISIONS CAN BE MADE

[Charts: HPC licensing value - absolute time savings per job, license fee per core per job, cumulative time savings using multiple multi-core jobs, and license fee per multiple multi-core job, each plotted against number of cores, with ideal scalability, HPC Pack and HPC Workgroup curves]

To read the full white paper, visit ansys.com/hpc-myths


Demonstrated scalability of ANSYS Fluent above 80 percent efficiency with as low as 5,500 cells per compute core.

[Chart: Speedup vs. number of cores (4,096 to 16,384), with an ideal-scaling line and curves for 11.1K, 7.4K and 5.5K cells per core]
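For reference, the "above 80 percent efficiency" claim can be read directly off such a chart: parallel efficiency is simply the measured speedup divided by the core count. Below is a minimal sketch; the 80 percent threshold and the 16,384-core count come from the chart description, while the speedup value is purely illustrative and not a published ANSYS benchmark number.

```python
# Minimal sketch: parallel efficiency = speedup / core count.
# The 80% threshold comes from the text; the speedup value below is
# illustrative only, not a measured ANSYS Fluent result.

def parallel_efficiency(speedup: float, cores: int) -> float:
    return speedup / cores

cores = 16_384
illustrative_speedup = 13_500        # hypothetical reading off such a chart
eff = parallel_efficiency(illustrative_speedup, cores)
print(f"{eff:.0%}")                  # -> 82%, i.e. above the 80% mark
```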