
Exascale Computing

Meeting exascale potential with research on highly parallel systems, efficient scalability, energy-efficient algorithms, and large-scale simulations.


HLRS
High Performance Computing Center Stuttgart

The High Performance Computing Center Stuttgart (HLRS) of the University of Stuttgart is the first National Supercomputing Center in Germany and offers services to both academic users and industry. Apart from operating supercomputers, HLRS's activities include teaching and training in distributed systems, software engineering, and programming models, as well as the development of new technologies. HLRS is an active player in the European research arena, with a special focus on Scientific Excellence and Industrial Leadership initiatives.

Our Network: HLRS is tightly connected to academia and industry through long-term partnerships with global market players such as Porsche and T-Systems, as well as with companies, HPC centres, and universities worldwide. Particular attention is given to collaboration with Small and Medium Enterprises (SMEs).

Our Infrastructure: HLRS operates a Cray XC40 supercomputer (peak performance > 7 PetaFlops) as well as a variety of smaller systems, ranging from clusters to cloud resources.

Our Experience: HLRS has been at the forefront of regional, national, and European research and innovation for the last 20 years. During this time, HLRS has successfully participated in more than 90 European research and innovation projects.

Our Expertise: HLRS is a leading innovation center, applying software engineering methods to HPC and cloud computing for the benefit of multiple application domains such as automotive, engineering, health, mobility, security, and energy. Thanks to close interaction with industry, the center's capabilities and expertise support the whole lifecycle of simulation, covering research aspects, pre-competitive development, and preparation for production. The HLRS innovation group, which actively examines and tests new technologies, can bring to projects expertise on leading-edge hardware technologies and scaled-up data analysis techniques.

Featured Topics

Programming Models & Tools
Cloud Computing
Optimization & Scalability
Energy Efficiency
Exascale Computing
Services
Big Data, Analytics & Management
Visualization

Director of HLRS: Prof. Dr. Michael Resch


Exascale Computing

The shift from petascale computing to exascale computing—a thousandfold increase in computing power—constitutes the start of a new era within the community of High-Performance Computing (HPC). The paradigm shift from petascale to exascale will not only provide faster HPC systems, but will also influence the design of hardware components, software, applications, and platforms. These aspects of supercomputing will need to be adapted, optimized, or, in some cases, even reinvented. After all, the ultimate goal is to efficiently solve computational problems that are still too complex for current systems.

To this end, the High-Performance Computing Center Stuttgart (HLRS) takes part in various research activities on topics of interest on the path to exascale. Our research activities will improve the scalability of applications and enable them to run on massively parallel systems. We tackle large problems with high numeric complexity and work toward energy-efficient algorithms that reduce the power consumption of highly parallel systems. With this brochure, we invite you to discover not only how traditional HPC applications, such as computational fluid dynamics (CFD), can be improved on their path to exascale, but also how such improvements need to be delivered, for example by supporting the evolution of application-specific codes. Furthermore, there is a clear need to manage the increasingly large data volumes that exascale systems will produce, a trend that has led to the emergence of High-Performance Data Analytics (HPDA) and will only grow in importance in the future.

Project Overview

POP - Performance Optimisation and Productivity (A Centre of Excellence in Computing Applications)

Mont-Blanc 2/3

EXPERTISE - EXperiments and high PERformance computing for Turbine mechanical Integrity and Structural dynamics in Europe

EXASOLVERS - Extreme Scale Solvers for Coupled Problems

ExaFLOW - Enabling Exascale Fluid Dynamics Simulations

CATALYST - Combining HPC and High Performance Data Analytics for Academia and Industry


POP

High-performance computing is a fundamental tool for the progress of science and engineering and, as such, for economic competitiveness. The growing complexity of parallel computers is leading to a situation where code owners and users are not aware of the detailed issues affecting the performance of their applications. The result is often an inefficient use of computing resources, and code developers frequently lack sufficient insight into the detailed causes to address the problem properly. The objective of POP is to operate a Centre of Excellence in performance optimisation and productivity and to share our expertise in the field with the computing community. In particular, POP will offer the service of precisely assessing the performance of computing applications of any sort, from a few hundred to many thousands of processors. POP will also show users the specific issues affecting the performance of their code and the best way to alleviate them. POP will target and offer such services to code owners and users from all domains, including infrastructure operators as well as academic and industrial users.

The estimated population of such applications in Europe is 1,500, and within the project lifetime POP has the ambition of serving over 150 such codes. The added value of POP's services lies in the savings generated in the operation and use of a code: since fixing a code costs less than running it below its optimal level, best-in-class optimisation services yield a significant return on investment and free up capacity for resolving other priority issues. POP aims to be a best-in-class centre.

By bringing together world-class European expertise in the area and combining excellent academic resources with a practical, hands-on approach, POP will improve access to computing applications, thus allowing European researchers and industry to be more competitive.

Project Partners
• Barcelona Supercomputing Center, Spain
• Numerical Algorithms Group, UK
• RWTH Aachen
• HLRS
• Teratec, France
• Forschungszentrum Jülich

Project Details
• Funding Agency: EU-H2020
• Runtime: 10/2015 - 03/2018

Performance Optimisation and Productivity (A Centre of Excellence in Computing Applications)

Contact
Dr. José Gracia / Christoph Niethammer
Phone: +49 (0) 711/685-87208, +49 (0) 711/685-87203
E-Mail: [email protected], [email protected]

Further Information
www.pop-coe.eu


Mont-Blanc 2/3

Mont-Blanc 2
The limiting factor in the development of an exascale high-performance computing system is power consumption. The Mont-Blanc 2 project focused on developing a next-generation HPC system using embedded technologies to address this difficult task. After the development of the hardware architecture in the first phase of the Mont-Blanc project, Mont-Blanc 2 concentrated on the development of the necessary system software stack and the evolution of the system design. It examined a new programming model that allows developers to write efficient code for the new computer architecture, and it emphasized tools for the programmer, such as debuggers and performance analysis tools, which increase the usability of such a system for its users.

The main contribution of HLRS is the development of scalable debugging tools. In particular, HLRS extended the task-based graphical debugger Temanejo with support for the OmpSs programming model and for multi-node debugging. In addition, HLRS contributed to the evaluation of the programming model and the prototype system by porting and benchmarking an application from the engineering domain.

Funding Agency: EC FP7
Runtime: 01.10.2013 – 31.01.2017

Mont-Blanc 3
The Mont-Blanc project aims to design a new type of computer architecture capable of setting future HPC standards, built from energy-efficient solutions used in embedded and mobile devices. The project has been running since 2011 and was extended in 2013 (Mont-Blanc 2) and 2015 (Mont-Blanc 3), respectively. In particular, Mont-Blanc 3 will enable further development of the OmpSs programming model to automatically exploit multiple cluster nodes, transparent application checkpointing for fault tolerance, support for ARMv8 64-bit processors, and the initial design of the Mont-Blanc exascale architecture.

HLRS's contribution to the project is twofold. Firstly, we will participate in the development of the programming model, in particular combining MPI and OmpSs into a hybrid, task-aware MPI/OmpSs. This will allow MPI communication to be overlapped with computation with minimal effort for the application programmer. Secondly, HLRS will contribute to the evaluation of the programming model and the architecture by porting a representative scientific application.

Funding Agency: EC H2020
Runtime: 01.10.2015 – 30.09.2018
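To make the overlap pattern concrete, the following is a minimal sketch using plain non-blocking MPI via mpi4py. It hand-codes the communication/computation overlap that the task-aware MPI/OmpSs model described above is meant to automate; the halo-exchange setting and buffer sizes are illustrative assumptions, not taken from the project.

```python
# Minimal sketch of overlapping communication with computation using
# non-blocking MPI (mpi4py). In the hybrid MPI/OmpSs model, the explicit
# wait below would instead become a task dependency resolved by the runtime.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
right, left = (rank + 1) % size, (rank - 1) % size

halo_out = np.full(1000, rank, dtype="d")   # boundary data to send
halo_in = np.empty(1000, dtype="d")         # buffer for neighbour's boundary
interior = np.random.rand(100_000)          # data that needs no communication

# start the halo exchange, compute on interior data while messages are in
# flight, and only wait when the received halo is actually needed
reqs = [comm.Isend(halo_out, dest=right), comm.Irecv(halo_in, source=left)]
interior_result = np.sum(interior ** 2)     # overlapped computation
MPI.Request.Waitall(reqs)
boundary_result = halo_in.sum()             # now safe to use received data

print(f"rank {rank}: interior={interior_result:.3f} boundary={boundary_result:.1f}")
```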

Project Partners

European approach towards Energy Efficient High Performance Computing

Contact
Dr. José Gracia
Phone: +49 (0) 711/685-87208
E-Mail: [email protected]

Further Information
www.montblanc-project.eu


EXPERTISE

EXPERTISE is a European Training Network (ETN) that will help train the next generation of mechanical and computer science engineers. Within the network, 15 Early Stage Researchers (ESRs) will work on the major challenges on the way to a fully validated nonlinear dynamic model of turbomachinery components. Along the way, they are supervised by experts at world-leading institutions from across Europe in this multidisciplinary project. The ultimate research objective of EXPERTISE is to develop advanced tools for the dynamic analysis of large-scale models of turbine components, paving the way towards the virtual testing of the entire machine. Key aspects addressed thereby are the understanding and accurate modeling of the physics of frictional contact interfaces; new, highly efficient and accurate nonlinear dynamic analysis tools; and the integration of all of this into high-performance computing (HPC) techniques, enabling for the first time the accurate dynamic analysis of a large-scale turbomachinery model.

The research program of EXPERTISE is based on the following Work Packages (WPs):

• WP1 – Advanced modeling of friction contacts
• WP2 – Identification of contact interfaces
• WP3 – Structural dynamics of the turbine and its components
• WP4 – High Performance Computing for structural dynamics

HLRS, as an expert in the field of high-performance computing (HPC), will lead the HPC activities in EXPERTISE. HLRS will also have a key role in the network by training all the researchers in modern HPC techniques, and it will furthermore add its own research project, addressing the tremendous problem of handling the huge amounts of data that are produced during these full-model simulations and that push HPC systems to their limits.

Beneficiaries
Imperial College of Science Technology and Medicine London | Universität Stuttgart | University of Oxford | CRAY UK Limited | École Centrale de Lyon | Middle East Technical University | Vysoka Skola Banska – Technicka Univerzita Ostrava | Barcelona Supercomputing Center – Centro Nacional de Supercomputacion | Mavel AS | Technische Universität München

Project Information
Runtime: March 2017 – February 2021
Funding Organization: Horizon 2020, Marie Skłodowska-Curie Actions, Innovative Training Network (H2020-MSCA-ITN)

EXperiments and high PERformance computing for Turbine mechanical Integrity and Structural dynamics in Europe

Contact
Dr. José Gracia / Christoph Niethammer
Phone: +49 (0) 711/685-87208, +49 (0) 711/685-87203
E-Mail: [email protected], [email protected]

Further Information
www.msca-expertise.eu


EXASOLVERS

Exascale systems will be characterized by billion-way parallelism. Computing on such extreme scales requires suitable methods, which the ExaSolvers 2 project investigates:

Parallel adaptive multigrid (G-CSC, University of Frankfurt)
The multigrid method is of optimal complexity and hence suited for extreme-scale parallelism. The group from Frankfurt develops its own parallel multigrid framework, ug4, which also adapts the mesh resolution in order to increase solution efficiency.
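For readers unfamiliar with the method, the sketch below shows a toy 1D geometric multigrid V-cycle for the Poisson problem. It is a generic illustration of the optimal-complexity idea (each V-cycle costs work proportional to the number of unknowns), not code from the ug4 framework; all names in it are made up.

```python
# Toy 1D geometric multigrid V-cycle for -u'' = f with zero Dirichlet
# boundary values (illustrative only; not part of ug4).
import numpy as np

def smooth(u, f, h, sweeps=3):
    # damped Jacobi: cheap smoother that removes high-frequency error
    for _ in range(sweeps):
        u[1:-1] += (2.0 / 3.0) * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    # full-weighting restriction onto the next coarser grid
    return np.concatenate(([0.0], 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2]), [0.0]))

def prolong(ec, n):
    # linear interpolation of the coarse-grid correction to the fine grid
    e = np.zeros(n)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    if len(u) <= 3:                             # coarsest grid: solve exactly
        u[1] = 0.5 * h * h * f[1]
        return u
    u = smooth(u, f, h)                         # pre-smoothing
    rc = restrict(residual(u, f, h))            # restrict the residual
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)  # recurse on the coarse grid
    u += prolong(ec, len(u))                    # apply coarse-grid correction
    return smooth(u, f, h)                      # post-smoothing

n = 2**7 + 1
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):                             # each V-cycle costs O(n) work
    u = v_cycle(u, f, h)
print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))
```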

Time parallelization (ICS, USI Lugano)
In transient simulations, not only the simulation domain but also the investigated time frame can be divided and handled on different execution units in parallel, in order to efficiently use the massive parallelism of future systems.
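The project text does not name a specific algorithm; one well-known scheme of this kind is parareal, sketched serially below for the scalar test equation du/dt = λu. All parameters are illustrative, and the per-slice fine solves, written here as a loop, are the part that would run concurrently.

```python
# Illustrative parareal iteration (one well-known time-parallel scheme;
# not necessarily the method used by the ICS group).
import numpy as np

lam, T, N = -1.0, 2.0, 10           # decay rate, time horizon, time slices
dt = T / N

def coarse(u, dt):                   # cheap propagator: one Euler step
    return u + dt * lam * u

def fine(u, dt, m=100):              # expensive propagator: m Euler substeps
    for _ in range(m):
        u = u + (dt / m) * lam * u
    return u

# initial guess from the coarse propagator alone
U = np.zeros(N + 1)
U[0] = 1.0
for n in range(N):
    U[n + 1] = coarse(U[n], dt)

# parareal corrections: the fine solves over the N slices are mutually
# independent and would be distributed over different execution units
for k in range(5):
    F = np.array([fine(U[n], dt) for n in range(N)])      # parallelizable
    G_old = np.array([coarse(U[n], dt) for n in range(N)])
    Unew = U.copy()
    for n in range(N):               # cheap sequential correction sweep
        Unew[n + 1] = coarse(Unew[n], dt) + F[n] - G_old[n]
    U = Unew

print("error vs exact:", abs(U[-1] - np.exp(lam * T)))
```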

Optimization and inverse problems (Trier University)
By means of inverse problems, it is possible to determine simulation parameters that cannot be measured directly due to, e.g., subminiature structures or inaccessible environments. Moreover, using the aforementioned methods within optimization and inverse problems provides further potential to use exascale systems efficiently.

Uncertainty quantification (RWTH Aachen)
The group from Aachen uses low-rank hierarchical tensors to quantify the uncertainties of simulations, which allows a further increase in the amount of parallelism that can be used efficiently.

Energy efficiency (HLRS, University of Stuttgart)
Due to their massive parallelism, exascale systems will require huge amounts of energy. We hence investigate methods to increase the energy efficiency of such systems on multiple levels, i.e. algorithmic efficiency, efficiency-aware implementation, as well as the adaptation of hardware parameters (e.g. reducing the CPU's core frequency, known as Dynamic Voltage and Frequency Scaling).
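As a concrete illustration of the last point, the sketch below adjusts a CPU frequency cap through the Linux cpufreq sysfs interface. It is a generic example, not the instrumentation used in ExaSolvers 2: writing requires root privileges, and the exact files exposed depend on the kernel's cpufreq driver.

```python
# Minimal sketch of adapting a hardware parameter via DVFS on Linux,
# using the kernel's cpufreq sysfs interface (illustrative only).
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(name: str) -> str:
    # reading cpufreq attributes works without special privileges
    return (CPUFREQ / name).read_text().strip()

def set_max_freq_khz(khz: int) -> None:
    # lowering the frequency cap trades runtime for power; whether
    # energy-to-solution improves depends on the workload (requires root)
    (CPUFREQ / "scaling_max_freq").write_text(str(khz))

if __name__ == "__main__":
    print("governor:", read("scaling_governor"))
    print("current max (kHz):", read("scaling_max_freq"))
    # e.g. cap core 0 at 1.2 GHz before a memory-bound phase:
    # set_max_freq_khz(1_200_000)
```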

A collaboration with the Japanese ADVENTURE project has been established in order to apply the performance engineering expertise of the project partners from Japan to codes developed by the ExaSolvers 2 project. In return, ADVENTURE is going to integrate our methods into its framework. In order to assess the developed methods, a simulation of transdermal drug delivery through the human skin, with detailed resolution of the lipid scale, is used as a benchmark application.

Extreme Scale Solvers for Coupled Problems

Contact
Björn Dick / Dr. Ralf Schneider
Phone: +49 (0) 711/685-87189, +49 (0) 711/685-87236
E-Mail: [email protected], [email protected]

Further Information
www.hlrs.de/about-us/research/current-projects/exasolvers


ExaFLOW

We are surrounded by moving fluids (gases and liquids), be it in breathing or the blood flow in our arteries; the flow around cars, ships, and airplanes; the changes in cloud formations or plankton transport in oceans; even the formation of stars and galaxies is modelled as a phenomenon in fluid dynamics. Fluid dynamics simulations provide a powerful tool for the analysis of fluid flows and are an essential element of many industrial and academic problems. In fluid dynamics there is no limit to the size of the systems to be studied via numerical simulations: the complexity and nature of fluid flows, often combined with problems set in open domains, imply that the resources needed to computationally model problems of industrial and academic relevance are virtually unbounded.

The main goal of ExaFLOW is therefore to address key algorithmic challenges in computational fluid dynamics (CFD) to enable accurate simulation at exascale, guided by a number of use cases of industrial relevance, and to provide open-source pilot implementations. Driven by problems of practical engineering interest, we focus on important simulation aspects, including:

• error control and adaptive mesh refinement in complex computational domains
• resilience and fault tolerance in complex simulations
• solver efficiency via mixed discontinuous and continuous Galerkin methods and appropriate optimised preconditioners
• heterogeneous modelling to allow for different solution algorithms in different domain zones
• evaluation of energy efficiency in solver design
• parallel input/output and in-situ compression for extreme data

Within ExaFLOW, the High-Performance Computing Center Stuttgart (HLRS), in cooperation with the Institute of Aerodynamics and Gas Dynamics (IAG) of the University of Stuttgart, forms the second-biggest partner in the consortium. In terms of data reduction, HLRS is responsible in particular for the evaluation and development of data reduction algorithms based on dynamic mode decomposition (DMD) and emerging new ideas related to the Koopman operator. Additionally, the task of researching energy efficiency and awareness is located at HLRS. Within this scope, the power consumption of different implementations is measured using both high-resolution component-level and lower-resolution node-level methods, and the impact of system-level features (from frequency scaling to vectorization) on the total energy to solution is investigated.
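To illustrate why DMD lends itself to data reduction, the sketch below compresses a toy snapshot sequence into a handful of spatial modes with fixed complex frequencies. It is the textbook "exact DMD" computation on synthetic data, not the ExaFLOW implementation.

```python
# Minimal sketch of dynamic mode decomposition (DMD) as a data-reduction
# tool (illustrative code on synthetic data; not the ExaFLOW software).
import numpy as np

def dmd(X, r):
    """Exact DMD of snapshot matrix X (space x time), truncated to rank r."""
    X1, X2 = X[:, :-1], X[:, 1:]                # snapshot pairs x_k -> x_{k+1}
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]          # rank-r truncation
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s  # projected linear operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T / s @ W            # exact DMD modes
    amps = np.linalg.lstsq(modes, X[:, 0], rcond=None)[0]
    return modes, eigvals, amps

# toy flow-like data: two travelling waves plus a little noise
x = np.linspace(0, 2 * np.pi, 200)[:, None]
t = np.linspace(0, 10, 80)[None, :]
X = np.sin(x - 1.3 * t) + 0.5 * np.cos(2 * x + 0.4 * t) + 0.01 * np.random.randn(200, 80)

modes, lam, b = dmd(X, r=6)
# reconstruct: store 6 modes + eigenvalues instead of 80 full snapshots
Xr = (modes * b) @ np.vander(lam, N=X.shape[1], increasing=True)
print("relative reconstruction error:", np.linalg.norm(X - Xr) / np.linalg.norm(X))
```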

Enabling Exascale Fluid Dynamics Simulations

Contact
Dr. Ralf Schneider
Phone: +49 (0) 711/685-87236
E-Mail: [email protected]

Further Information
www.exaflow-project.eu


CATALYST

At the High Performance Computing Center Stuttgart (HLRS), customers tend to execute more and more data-intensive applications. One petabyte of result data for a large-scale simulation is not uncommon anymore, and as systems grow, data volumes will as well. Since it is no longer feasible for domain experts to process and analyse this data manually, HLRS and Cray Inc. have launched the CATALYST project to advance the field of data-intensive computing by converging HPC and Big Data, allowing a seamless workflow between compute-intensive simulations and data-intensive analytics. For that purpose, Cray Inc. designed the Urika-GX data analytics hardware, which supports Big Data technologies and furthermore enhances the analysis of semantic data. This system has been installed as an extension of Hazel Hen, the current HPC flagship system of HLRS.

The main objective of CATALYST is to evaluate the hardware as well as the software stack of the Urika-GX and its usefulness, with a particular focus on applications from the engineering domain. As the majority of today's data analytics algorithms are oriented towards text processing (e.g. business analytics) and graph analysis (e.g. social network studies), we further need to evaluate existing algorithms with respect to their applicability to engineering. Thus, CATALYST will examine future concepts for both hardware and software.

The first case study, conducted in collaboration with Cray Inc., addresses the performance variations of our Cray XC40 system. Performance variability on HPC platforms is a critical issue with serious implications for the users: irregular runtimes prevent users from correctly assessing performance and from efficiently planning allocated machine time. Thus, monitoring today's IT infrastructures has actually become a big data challenge of its own. The analysis workflow used to identify the causes of runtime variations consists of three steps involving different configuration parameters (step 2 is sketched after the list):

1. Data filtering
2. Detection of applications that show high variability (victims)
3. Detection of applications that potentially cause the variability (aggressors)
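The following is a hypothetical sketch of step 2: flagging applications whose runtimes vary strongly across repeated runs, using the coefficient of variation. The data, the threshold, and all names are made up for illustration; the brochure does not describe CATALYST's actual criteria.

```python
# Hypothetical victim detection: flag applications with high runtime
# variability across repeated runs (illustrative only).
import statistics
from collections import defaultdict

# (app_name, runtime_seconds) records, e.g. as parsed from job logs
runs = [
    ("cfd_solver", 3600), ("cfd_solver", 3660), ("cfd_solver", 5200),
    ("md_code", 1800), ("md_code", 1815), ("md_code", 1790),
]

by_app = defaultdict(list)
for app, t in runs:
    by_app[app].append(t)

THRESHOLD = 0.10  # flag apps whose runtime std-dev exceeds 10% of the mean
for app, times in by_app.items():
    cv = statistics.stdev(times) / statistics.mean(times)
    status = "possible victim" if cv > THRESHOLD else "stable"
    print(f"{app}: cv={cv:.2f} ({status})")
```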

Outlook
• Big Data application evaluation
• Close cooperation with partners from both industry and academia
• Seamless integration of the Big Data system into our existing HPC infrastructure
• Development and evaluation of practical case studies to advertise the solution

Project Information
Runtime: October 2016 – September 2019
Funding Organization: Ministry of Science, Research and the Arts Baden-Württemberg
Partners: HLRS, Cray Inc. & Daimler AG (associated)

Combining HPC and High Performance Data Analytics for Academia and Industry

Contact
Michael Gienger
Phone: +49 (0) 711/685-65891
E-Mail: [email protected]

Further Information
www.hlrs.de/en/about-us/research/current-projects/data-analytics-for-hpc


High Performance Computing Center Stuttgart (HLRS)

University of Stuttgart
Nobelstrasse 19 | 70569 Stuttgart | Germany

Phone: +49 (0)711 / 685 87 269
Fax: +49 (0)711 / 685 87 209

Mail: [email protected]

Editor: Lena Bühler, Eric Gedenk, Dr. Bastian Koller
Design: Janine Jentsch, Ellen Ramminger

© HLRS 2017