
CONFERENCE PROGRAMME

1 - 5 December 2014

science & technology · Department: Science and Technology


WELCOME

Dear delegate

Welcome to the CHPC’s 8th National Meeting! Skukuza Rest Camp is the largest camp in Kruger National Park and offers a perfect backdrop to this year’s conservation-centred theme, “Towards an Energy Efficient HPC System.”

CHPC 2014 will explore the contributions and expectations of research communities, policy makers, information and communication vendors, industry and academia through a series of contributed and invited papers, presentations and open discussion forums.

This year's national meeting is again preceded by a series of pre-conference workshops/tutorials (1 and 2 December 2014) that speak to the heart of HPC. The main session runs from 3 to 5 December 2014 and will feature plenary talks and parallel breakaway sessions. The conference will also host two forums: the National Industrial HPC Advisory Forum will sit to assess and advise how the centre can best serve industry in South Africa, and the SADC HPC Collaboration Forum will discuss how the region can use HPC to work together on matters of research and development, infrastructure and human capital development, and how the member states can harness and influence policy development in their respective countries.

This year we are also joined by the international HPC Advisory Council. This half-day conference will focus on HPC usage models and benefits, the future of supercomputing, the latest technology developments, best practices and advanced HPC topics. The forum is open to all registered attendees of this meeting and will bring together system managers, researchers, developers, computational scientists and industry affiliates.

As you may already know, South Africa successfully defended its championship at the International Student Cluster Competition in Germany in June 2014. The next generation of HPC gurus will battle it out on the exhibition floor, out here in the wild, for the chance to represent South Africa at the International Student Cluster Competition in Germany in June 2015. The teams started their work on Sunday, 30 November 2014, and must build and configure cluster computers, then run a series of applications to see which team achieves the best performance. It is our hope that the winning team will continue to fly the South African flag for the third time in a row! To our regular attendees: always a pleasure to see you. To our first-time delegates: we hope you make the CHPC your computing home!

A warm African welcome!

Dr Happy Marumo Sithole
Director: Centre for High Performance Computing


CHPC National Meeting and Conference

1–5 December 2014

Workshops (Monday and Tuesday) · Conference (Wednesday to Friday)

Time  | Monday         | Tuesday        | Wednesday          | Thursday                   | Friday
08:00 | Registration   | Registration   | Registration       | Registration               | Registration
09:00 | W1 W2 W3 W4 W5 | W1 W2 W3 W7 F1 | Opening & Welcome, | Plenary 3,                 | Plenary 6,
      |                |                | Plenary 1          | Plenary 4                  | Plenary 7
10:30 | Break          | Break          | Break              | Break                      | Break
11:00 | W1 W2 W3 W4 W5 | W1 W2 W3 W7 F1 | 1A 2A 3A 4A F2     | 5A 6A 7A 8A                | Industrial Q&A Crossfire
12:30 | Lunch          | Lunch          | Lunch              | Lunch                      | Lunch
13:30 | W1 W2 W3 W4 W6 | W1 W2 W3 W8 F1 | 1B 2B 3B 4B F2     | 5B 6B 7B 8B                | Plenary 8, Closing
15:00 | Break          | Break          | Break              | Break                      |
15:30 | W1 W2 W3 W4 W6 | W1 W2 W3 W8 F1 | Plenary 2          | Plenary 5                  |
18:00 |                |                | Braai              | Cocktails and Prize-Giving |

Workshops:
W1: Amber
W2: Accelrys
W3: Gaussian
W4: Open Source Software
W5: InfiniBand
W6: Hadoop
W7: HPC Sys Admin
W8: Ranger

Forums:
F1: SADC HPC Forum Meeting
F2: HPC Advisory Council
F3: Industry Advisory Council

Breakaway Sessions:
1A: Material Science
1B: Pharma- & Biochemistry
2AB: HPC Technology Vendors
3A: Astronomy
3B: HPC Energy Efficiency
4AB: Space Physics
5AB: Computational Chemistry
6AB: HPC Techniques
7A: Life & Health Sciences
7B: Earth Systems
8A: Computational Mechanics
8B: Computational Biomechanics


CONFERENCE HIGHLIGHTS

WORKSHOPS

MONDAY
W1. Amber 1. Lecturer: Mahmoud Soliman, University of KwaZulu-Natal
W2. Accelrys 1 – Materials Studio. Lecturer: Marc Meunier, Accelrys
W3. Gaussian 1. Lecturers: Mike Bearpark & Alexandra Simperler, Imperial College London
W4. Open Source Software. Lecturers: TBA, CHPC
W5. InfiniBand and High-Speed Ethernet for Dummies. Lecturer: Dhabaleswar K. Panda, Ohio State University. [Half day]
W6. Accelerating Big Data Processing with Hadoop and Memcached on Data Centers with Modern Networking and Storage Architecture. Lecturer: Dhabaleswar K. Panda, Ohio State University. [Half day]

TUESDAY
W1. Amber 2. Lecturer: Mahmoud Soliman, University of KwaZulu-Natal
W2. Accelrys 2 – Discovery Studio. Lecturer: Thomas Blarre, Accelrys
W3. Gaussian 2. Lecturers: Mike Bearpark & Alexandra Simperler, Imperial College London
W7. HPC System Administration Best Practices: Automating HPC system administration – experiences from the ICTP. Lecturer: Clement Onime, ICTP. [Half day]
W8. Ranger Deployment and Discussion. Lecturer: Dan Stanzione, Texas Advanced Computing Center. [Half day]

FORUMS
F1. SADC HPC Workshop
F2. International HPC Advisory Council
F3. Industry Advisory Council

EXHIBITION AND STUDENT CLUSTER COMPETITION
Guided tours of the Student Cluster Competition in the exhibition hall from Monday to Thursday.

PLENARY TALKS

WEDNESDAY
P1. Title TBA, Speaker TBA, Intel
P2. Algorithmic and Software Challenges at Extreme Scales, Jack Dongarra, ORNL

THURSDAY
P3. Technology Update and HPC Developments, Thomas Sterling, CREST
P4. Cloud methodologies and data driven paradigms for SKA data, Peter Braam [abstract missing]
P5. Designing Software Libraries and Middleware for Exascale Computing: Opportunities and Challenges, Dhabaleswar K. Panda, Ohio State University

FRIDAY
P6. Real World Examples of HPC Workloads and Daily Practices, Martin Hilgeman, Dell
P7. HPC Technology to Drive Innovations – Infinite Design Exploration, Detlef Schneider, Altair Engineering
P8. Improving Wildlife Tracking via HPC-Enabled Applications, Robert Sinkovits, SDSC/UCSD

HPC and Data Vendors Cross-Fire
Panel-led interrogation of vendors' technical experts.

CHPC Technical Helpdesk
Technical queries answered at the CHPC stand in the exhibition hall from Monday to Friday.

Student Poster Competition
Display of posters in the exhibition hall and presentation of prizes for top posters.


WORKSHOPS DAY 1

DAILY SCHEDULE FOR WORKSHOPS:

Registration 08:00 to 15:30 in foyer.

Refreshment break 10:30 to 11:00 in foyer.

Lunch 12:30 to 13:30 in exhibition hall.

Refreshment break 15:00 to 15:30 in foyer.

W1: Amber I
Mahmoud Soliman, University of KwaZulu-Natal
Monday 1 December 09:00–16:30 VENUE: 2.1

• Introduction to Biomolecular Simulations:
  - What can we simulate?
  - Different computational tools
  - Force fields: the Amber force field
• Interaction forces involved in Drug-Receptor complexes
• Amber software structure/modules – part 1

W2: Accelrys I – Materials Studio
Marc Meunier, BIOVIA
Monday 1 December 09:00–16:30 VENUE: NARI

• Materials Studio: general overview, recent and future developments. Materials Studio is a complete modelling and simulation environment designed to allow researchers in materials science and chemistry to predict and understand the relationships of a material's atomic and molecular structure with its properties and behaviour. Using Materials Studio, researchers in many industries are engineering better-performing materials of all types, including pharmaceuticals, catalysts, polymers and composites, metals and alloys, batteries and fuel cells, and more.

• Dr Karl Wilkinson (Southampton University), ONETEP and Its Application on Modern HPC Platforms: Recent software implementations in the ONETEP linear-scaling density functional theory code utilize the MPI, OpenMP and OpenACC paradigms and result in significantly improved strong scaling and a shorter time to solution than was possible using CPUs and MPI alone. This facilitates the application of the ONETEP code to systems larger than previously feasible and permits the use of the code in ab initio molecular dynamics calculations on over a thousand atoms. Here we describe these developments, the performance they have unlocked and the scale of the calculations that may now be achieved with the ONETEP package.

• Dr R.A. Harris (Mintek), Influence of Nanoparticle Surface Modification Through Solvent Interactions on Magnetization of Iron Oxide Nanoparticles

• Prof. E. Lombardi (Unisa), Transition Metal Defects in Carbon Materials

• Materials Studio 7.0 Hands-on Session Part 1
• Materials Studio 7.0 Hands-on Session Part 2
• ONETEP tutorial: LiF density of states (DOS) with ONETEP and CASTEP, with Dr K. Wilkinson

W3: Gaussian I
Mike Bearpark, Imperial College London
Monday 1 December 09:00–16:30 VENUE: MHELEMBE

• Lecture 1: Energies and Geometries; Frequencies and Thermochemistry. This session will cover the Gaussian input and output files and visit special points on the Potential Energy Surface.

• Demo and Discussion 1: We will demo how to set up a calculation using GaussView. We will discuss which chemistry model to choose (HF, DFT, MP2 … and what about the basis set? ECPs will be covered). We will inspect the output of a geometry optimisation and of a frequency calculation.

• Lecture 2:


  - Reactions
  - Solvation
  - Orbitals and Bonding

• Demo and Discussion 2: We will demo how to set up a transition state calculation and how to use solvent models. We will also demo how to analyse orbitals and look into charges and dipole moments.

W4: Open Source Software for HPC
Lecturer(s) TBA, CHPC
Monday 1 December 09:00–16:30 VENUE: NDLOPFU

• Morning: Python for HPC
• Afternoon: HPC simulation: OpenFOAM, Fire and others.

W5: InfiniBand and High-Speed Ethernet for Dummies
Dhabaleswar K. (DK) Panda, The Ohio State University
Monday 1 December 09:00–12:30 VENUE: INGWE

• InfiniBand (IB) and High-speed Ethernet (HSE) technologies are generating a lot of excitement towards building next-generation High-End Computing (HEC) systems, including clusters, data centres, file systems, storage, and cloud computing (Hadoop, HBase and Memcached) environments. RDMA over Converged Enhanced Ethernet (RoCE) technology is also emerging.

• This tutorial will provide an overview of these emerging technologies, their offered architectural features, their current market standing, and their suitability for designing HEC systems. It will start with a brief background behind IB and HSE. In-depth overview of the architectural features of IB and HSE (including iWARP and RoCE), their similarities and differences, and the associated protocols will be presented.

• Next, an overview of the emerging OpenFabrics stack which encapsulates IB, HSE and RoCE in a unified manner will be presented. Hardware/software solutions and the market trends behind IB, HSE and RoCE will be highlighted. Finally, sample performance numbers of these technologies and protocols for different environments will be presented.

W6: Accelerating Big Data Processing with Hadoop and Memcached
Dhabaleswar K. (DK) Panda, The Ohio State University
Monday 1 December 13:30–14:30 VENUE: INGWE

Apache Hadoop is gaining prominence in handling Big Data and analytics. Similarly, Memcached in the Web 2.0 environment is becoming important for large-scale query processing. These middleware are traditionally written with sockets and do not deliver the best performance on data centres with modern high-performance networks. In this tutorial, we will provide an in-depth overview of the architecture of Hadoop components (HDFS, MapReduce, RPC, HBase, etc.) and Memcached. We will examine the challenges in re-designing the networking and I/O components of these middleware with modern interconnects and protocols (such as InfiniBand, iWARP, RoCE and RSocket) with RDMA and storage architecture. Using the publicly available RDMA for Apache Hadoop (http://hadoop-rdma.cse.ohio-state.edu) software package, we will provide case studies of the new designs for several Hadoop components and their associated benefits. Through these case studies, we will also examine the interplay between high-performance interconnects, storage systems (HDD and SSD) and multi-core platforms to achieve the best solutions for these components.
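For delegates new to Memcached, the get/set caching pattern it offers to the web tier can be sketched in a few lines. This is an illustrative in-memory toy with per-entry expiry, not the Memcached wire protocol or any real client API:

```python
import time

class ToyCache:
    """Minimal key-value cache with per-entry expiry, illustrating the
    get/set pattern Memcached provides for large-scale query processing."""

    def __init__(self):
        self._store = {}  # key -> (value, expiry timestamp or None)

    def set(self, key, value, ttl=None):
        # ttl is a lifetime in seconds; None means "never expires"
        expiry = time.time() + ttl if ttl is not None else None
        self._store[key] = (value, expiry)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if expiry is not None and time.time() > expiry:
            del self._store[key]  # lazily evict expired entries on access
            return None
        return value

cache = ToyCache()
cache.set("user:42", {"name": "Ada"}, ttl=30)
print(cache.get("user:42"))   # → {'name': 'Ada'}
print(cache.get("user:99"))   # → None (cache miss)
```

A production deployment talks to a memcached server over the network instead, which is exactly where the socket-versus-RDMA redesign discussed in this tutorial comes in.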


WORKSHOPS DAY 2

DAILY SCHEDULE FOR WORKSHOPS:

Registration 08:00 to 15:30 in foyer.

Refreshment break 10:30 to 11:00 in foyer.

Lunch 12:30 to 13:30 in exhibition hall.

Refreshment break 15:00 to 15:30 in foyer.

W1: Amber II
Mahmoud Soliman, University of KwaZulu-Natal
Tuesday 2 December 09:00–16:30 VENUE: NDAU

• Amber software structure/modules – part 2
  - Practical: MD simulations of the Ritonavir–HIV PR complex
  - Practical: Analysis of the MD trajectory

W2: Accelrys II
Thomas Blarre, BIOVIA
Tuesday 2 December 09:00–16:30 VENUE: NARI

• Discovery Studio: general overview, What's New in DS. Discovery Studio is BIOVIA's comprehensive predictive science application for the Life Sciences. It offers 3D visualization capabilities along with tools and algorithms in domains such as Macromolecule Design and Analysis, Antibody Modelling, Structure-Based Design, Pharmacophore and Ligand-Based Design, QSAR, ADMET and Predictive Toxicology.

• Pipeline Pilot: short overview of the product, and improving the use of Discovery Studio through Pipeline Pilot. Pipeline Pilot is BIOVIA's graphical scientific workflow authoring application.

• Dr Thomas Blarre (BIOVIA), Molecular determinants of the differing pharmacology of two glutamate-gated chloride channels

• Lizelle Lubbe (University of Cape Town), Elucidating the structural basis of N-domain-selective angiotensin-converting enzyme inhibition

• Dr Telisha Traut (MINTEK), Discovery of HIV-1 Integrase strand transfer inhibitors through molecular modelling

• Discovery Studio 4.1 Hands-on Session Part 1
• Discovery Studio 4.1 Hands-on Session Part 2

W3: Gaussian II
Alexandra Simperler, Imperial College London
Tuesday 2 December 09:00–16:30 VENUE: MHELEMBE

• Lecture 3: ONIOM; NMR; Chiroptical Methods.

• Demo and Discussion 3: Setting up an ONIOM calculation; setting up an NMR calculation.

• Lecture 4: How to converge your SCF; Intro to Excited States; Intro to TDDFT and UV-Vis Spectroscopy.

• Demo and Discussion 4: We will demo and analyse a TDDFT calculation. Thereafter we will ask the audience to raise a few topics and give some hints and tips.


W7: HPC System Administration Best Practices
Clement E.O. Onime, ICTS, ICTP
Tuesday 2 December 09:00–12:30 VENUE: NDLOPFU

• Optional 15-minute crash course on shell scripting. Participants are expected to have some knowledge of basic shell scripting; some of the utilities to be presented were developed in the Python scripting language.

• Avoiding job-scheduler lockups on heterogeneous (multi-queue) clusters: analysis of a CRON-based UNIX shell script that checks the job scheduler for anomalies and reacts accordingly.

• Authenticating and authorizing HPC users from an external source: scripts and tools to obtain and build user information from external sources, with minimal dependency on the external sources.

• Distributing system configuration files for small to medium-sized HPC clusters: a look at CRON-based automation of c3tools and rsync over ssh as alternatives to centralized solutions.

• Automating solutions to common problems affecting batch-queue nodes, including full disks, kernel panics and memory leaks: based on a custom health-check utility triggered by the batch-queue-system daemon.

• Reinstalling nodes after a hard-disk failure or other failures, and integrating new nodes: a look at scripts for managing network boot and PXE configuration.

• Energy-efficient computing: a look at scripts for gracefully managing CPU power states from the batch-queue manager.

• Automating response to power outages: a look at scripts that monitor the UPS (battery) and trigger cluster-wide reaction(s) to events.
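As a flavour of the health-check automation described above, the decision core of such a utility might look like the following. This is a minimal sketch; the thresholds and action names are illustrative assumptions, not the ICTP utility itself:

```python
def node_health_actions(disk_used_pct, mem_free_mb, kernel_ok):
    """Decide what a batch-queue health check should do for one node.

    Returns a sorted list of actions; an empty list means the node is
    healthy. Thresholds and action names are illustrative only.
    """
    actions = []
    if disk_used_pct >= 95:
        actions.append("offline-node")   # disk nearly full: stop accepting jobs
    if mem_free_mb < 256:
        actions.append("offline-node")   # suspiciously low memory: likely a leak
    if not kernel_ok:
        actions.append("reboot-node")    # kernel panic detected on the console
    return sorted(set(actions))

# A CRON job (or the batch-queue daemon itself) would gather these values
# per node and hand the returned actions to the scheduler's admin commands.
print(node_health_actions(disk_used_pct=97, mem_free_mb=1024, kernel_ok=True))  # → ['offline-node']
```

Keeping the decision logic a pure function like this makes it trivial to test offline, separately from the scheduler commands that act on its output.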

W8: Ranger Deployment and Discussion
Dan Stanzione, Texas Advanced Computing Center
Tuesday 2 December 13:30–14:30 VENUE: NDLOPFU

• An overview of clustering and the Ranger project (Dan)

• Cluster hardware: what you must have and what you might want (Tommy). Covers the Ranger hardware and general rules for cluster hardware design: interconnects, storage, compute nodes, how much memory per core, etc. Ranger is used as an example of how we put our design principles into practice, and of what we are doing on today's hardware.

• Getting your software stack started: provisioning and the basic software stack (Nick)
• Keeping your cluster up to date: change management (Nick)
• Libraries and applications for the modern cluster (Tommy)
• So, you have a cluster, now what? Discussions of use cases and collaborations (Dan)

HPC Industry Advisory Forum
Programme TBA.

SADC HPC Forum
Programme TBA.


WEDNESDAY 3 DECEMBER

08:00

Registration

09:00

VENUE: PLENARY HALL
Opening and Welcome
Chair: Dr Werner Janse van Rensburg, CHPC

Opening Speaker, Dept. Science & Technology [10 min]
Dr Happy Sithole, Director, Centre for High Performance Computing [10 min]
Big Data Applications in Nature Conservation, TBA, SANParks [15 min]
Intel Talk TBA, Speaker TBA, Intel [40 min]

10:30

Break

11:00 Parallel Sessions

1A: Material Science (VENUE: INGWE) Chair:
11:00 Effects of structural transformation on lithium-ion diffusion in spinel Li1+xTi2-xO4 (x=0.33) anode materials, Sylvia Ledwaba, University of Limpopo
11:30 On the use of optimized cubic spline interpolated atomic form factor potentials for layered structure bandstructure calculation, K. Mpshe, UNISA
12:00 Electronic structures on adsorption of thiol collectors on nickel-rich pentlandite (Fe4Ni5S8) mineral surface, Peace Prince Mhonto, University of Limpopo

2A: HPC Technology State of the Art (VENUE: NDLOPFU) Chair:
11:00 Interconnect Your Future, Yossi Avni, Mellanox
11:30 Lustre-based Enterprise Storage – Tuning for Optimal Performance, Torben Kling Petersen, Seagate
12:00 IME: Towards very high performing, scalable and energy-efficient storage solutions, James Coomer, DDN

3A: Astronomy (VENUE: MHELEMBE) Chair:
11:00 Formation of Structure in the Universe, Stefan Gottloeber
11:30 Big Data and the Coming of Age of Multi-Wavelength Astrophysics, Mattia Vaccari, University of the Western Cape
12:00 Astronomy at the CHPC, Catherine Cress, CHPC

4A: Computational and Space Physics (VENUE: NDAU) Chair:
11:00 Modelling of heliospheric modulation of cosmic rays by means of Stochastic Differential Equations, Andreas Kopp, North-West University
11:30 A New Approach to Modeling the Effects of the Wavy Current Sheet on Cosmic Rays in the Heliosphere, Jan-Louis Raath, North-West University
12:00 Particle Acceleration at Heliospheric Shocks: Beams and Instabilities, Urs Ganse, North-West University

F2: HPC Advisory Council (VENUE: NARI)
11:00 HPC Advisory Council, Gilad Shainer, HPC Advisory Council Chairman
11:30 Advances in MPI, Dhabaleswar K. Panda, Ohio State University


12:30

Lunch

13:30 Parallel Sessions

1B: Pharma- and Biochemistry (VENUE: INGWE) Chair:
13:30 Structural Insight of Glitazone for Hepato-toxicity: Resolving Mystery by PASS, Harun Patel, University of KwaZulu-Natal
14:00 In-silico identification of irreversible cathepsin B inhibitors as anti-cancer agents: virtual screening, covalent docking analysis and molecular dynamics simulations, Sbongile Mbatha, University of KwaZulu-Natal
14:30 Elucidating the structural basis of N-selective angiotensin-converting enzyme inhibition, Lizelle Lubbe, University of Cape Town

2B: HPC Technology State of the Art (VENUE: NDLOPFU) Chair:
13:30 Predictive Materials Science, Workflows Automation and Scientific Data Management at Accelrys, Marc Meunier, Accelrys
14:00 Amazon EC2: Born in South Africa, Gustav Mauer, Amazon
14:30 TBA

3B: HPC Energy Efficiency (VENUE: MHELEMBE) Chair:
13:30 An insight on energy-to-solution running GPU-accelerated applications on Wilkes, Filippo Spiga, University of Cambridge
14:00 Don't forget that the datacentre is part of an energy-efficient HPC system, Paul Hatton, University of Birmingham
14:30 Building an energy-efficient enterprise HPC storage solution, Torben Kling Petersen, Seagate

4B: Computational and Space Physics (VENUE: NDAU) Chair:
13:30 A numerical modelling study of cosmic ray modulation in a global MHD heliosphere, Xi Luo, North-West University
14:00 Wave-Particle Interaction in Kinetic Simulations, Cedric Schreiner, North-West University
14:30 Simulation of electrostatic instabilities in plasmas through pair beams, Mehdi Jenab, North-West University

F2: HPC Advisory Council (VENUE: NARI)
13:30 HPC at the Swiss Supercomputing Centre, Hussein Harake, Swiss Supercomputing Centre
14:00 HPC Market Trends, Addison Snell, Intersect360

15:00

Break

15:30

VENUE: NDLOPFU PLENARY HALL
Chair: Dr Happy Sithole, CHPC

Algorithmic and Software Challenges at Extreme Scales, Jack Dongarra, ORNL

Discussion

16:30 End of Day 1

18:00 VENUE: Kruger National Park

Intel Bush Braai

Note: Lunch is served in the Exhibition Hall. Refreshment breaks will be in the foyer.


THURSDAY 4 DECEMBER

08:00

Registration

09:00

VENUE: NDLOPFU PLENARY HALL

Chair:

Technology Update and HPC Developments, Thomas Sterling, CREST

09:45

The exa-scale challenge of SKA imaging software, Peter Braam

10:30

Break

11:00 Parallel Sessions

5A: Computational Chemistry (VENUE: INGWE) Chair:
11:00 DFT study of Fischer-type metal carbenes, Cornie van Sittert, North-West University
11:30 Investigation of Solvent Extraction – A DFT Study, Marietjie Ungerer, North-West University
11:50 Density Functional Theory insight into the electrochemical behaviour of Group 6 Fischer carbenes, Jeanet Conradie, University of the Free State

6A: HPC Techniques and Computer Science (VENUE: NDLOPFU) Chair:
11:00 TBA, Dan Stanzione, TACC
11:30 Parallel, Realistic and Controllable Terrain Synthesis on Graphics Hardware, James Gain, University of Cape Town
11:50 Topology-Aware HPC Network Resource Mapping for Users and Providers, Liwen Shih, University of Houston Clear Lake

7A: Health and Life Sciences (VENUE: MHELEMBE) Chair:
11:00 Region-free HPC: a case study of connecting medical image data in Boston, USA to the CHPC in Cape Town, Rudolph Pienaar, Children's Hospital Boston
11:30 The influence of nucleotide secondary structure and substitution rate estimates on the spatial diffusion dynamics of contemporary Rubella viruses, Leendert Cloete, SANBI
11:50 Sequence Demarcation Tool (SDT), Brejnev Muhire, University of Cape Town

8A: Computational Mechanics (VENUE: NDAU) Chair:
11:00 Fast Collision Detection on the GPU for particle simulations, Nicolin Govender, CSIR
11:30 HPC in Pyrometallurgy: Applications, Challenges and Opportunities, Quinn Reynolds, MINTEK
11:50 Low-power HPC: power consumption implications of the parallel FDTD method on the Samsung S4 smartphone, Bob Ilgner, University of Stellenbosch

12:30

Lunch


13:30 Parallel Sessions

5B: Computational Chemistry (VENUE: INGWE) Chair:
13:30 Computational and Experimental Structural Studies of Selected Molybdenum(0) Monocarbene Complexes, Marile Landman, University of Pretoria
14:00 Theoretical and Experimental Determination of the magnetic properties of selected Organic-Inorganic copper(II) halide hybrid materials, Stefan Coetzee, University of Pretoria
14:30 Density Functional Theory studies for the ring-opening mechanism of 1,2-epoxy-2-aryl ethyl gem-bisphosphonate derivatives, Ephraim Marondedze, University of Johannesburg

6B: HPC Techniques and Computer Science (VENUE: NDLOPFU) Chair:
13:30 Efficiency and application of the ONETEP Linear-Scaling Density Functional Theory on Modern High Performance Computing Platforms, Karl Wilkinson, University of Southampton
14:00 Accelerated Cooperative Co-Evolution on Multi-core Architectures, Edmore Moyo, University of Cape Town
14:30 Energy-Efficient Clustering Algorithm for High Performance Computing in Cluster-Based Wireless Sensor Networks, Shadreck Mudziwepasi, University of Fort Hare

7B: Earth Systems Modelling (VENUE: MHELEMBE) Chair:
13:30 Towards the IPCC 6th Assessment Report: building Africa's first global model for climate change projections, Nicolette Chang, CSIR
14:00 Interaction of the Antarctic Circumpolar Current with topography: Impacts on Southern Ocean Eddy Dynamics, Nomkwezane Sanny Kobo, CSIR
14:30 High-resolution dynamic downscalings for the Coordinated Regional Downscaling Experiment (CORDEX) using the CHPC clusters, Mavhungu Muthige, CSIR

8B: Computational Biomechanics (VENUE: NDAU) Chair:
13:30 Image-based cerebrovascular modelling for advanced diagnosis and interventional planning, Alejandro F. Frangi, CISTIB, Sheffield
14:00 Challenge of visualising large airways flows, Hadrien Calmet, Barcelona Supercomputing Center
14:30 Computational Modelling of Thrombus Development in Cerebral Aneurysms, Malebogo Ngoepe, University of Cape Town

15:00

Break

15:30

VENUE: NDLOPFU PLENARY HALL

Chair:

Designing Software Libraries and Middleware for Exascale Computing: Opportunities and Challenges, Dhabaleswar K. Panda, Ohio State University

Discussion

16:30 End of Day 2

18:00

VENUE: EXHIBITION HALL

Cocktails and Prize-giving Presentation of the Student Poster Awards

Presentation of the Third CHPC Student Cluster Competition Awards

Note: Lunch is served in the Exhibition Hall. Refreshment breaks will be in the foyer.


FRIDAY 5 DECEMBER

08:00

Registration

09:00

VENUE: NDLOPFU PLENARY HALL

Chair:

Real World Examples of HPC Workloads and Daily Practices, Martin Hilgeman, Dell

09:45

HPC Technology to Drive Innovations – Infinite Design Exploration, Detlef Schneider, Altair Engineering

10:30

Break

11:00

VENUE: NDLOPFU PLENARY HALL

Moderator: Addison Snell, Intersect360 Research

Industrial Q&A Crossfire

[ Dell ] [ DDN ] [ Altair ] [ HP ] [ Mellanox ] [ Huawei ] [ Seagate ]

13:00

Lunch

14:00

VENUE: NDLOPFU PLENARY HALL

Chair: Dr Werner Janse van Rensburg, CHPC

Improving Wildlife Tracking via HPC-Enabled Applications, Robert Sinkovits, SDSC

14:45 Closing

15:00

Departures

Note: Lunch is served in the Exhibition Hall. Refreshment breaks will be in the foyer.


PLENARY ABSTRACTS

Algorithmic and Software Challenges at Extreme Scales
Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, University of Manchester

In this talk we examine how high performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had, and will continue to have, a major impact on our software. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.
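As a toy illustration of the memory-hierarchy management the abstract alludes to, tiling a loop nest keeps each working set small enough to stay cache-resident. The sketch below is in Python for readability; real HPC codes apply this technique in compiled languages, at the compiler level, or inside tuned libraries:

```python
def transpose_blocked(a, n, block=64):
    """Transpose an n x n matrix (list of lists), visiting it in
    block x block tiles so that the source rows and destination
    columns touched at any moment fit in cache."""
    out = [[0] * n for _ in range(n)]
    for ii in range(0, n, block):            # iterate over tiles...
        for jj in range(0, n, block):
            for i in range(ii, min(ii + block, n)):   # ...then within a tile
                for j in range(jj, min(jj + block, n)):
                    out[j][i] = a[i][j]
    return out

# Correctness check against the naive transpose
a = [[5 * i + j for j in range(5)] for i in range(5)]
assert transpose_blocked(a, 5, block=2) == [list(r) for r in zip(*a)]
```

The result is identical to a naive transpose; only the traversal order changes, which is precisely why such restructuring can be automated by compile-time and run-time tools.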

Jack Dongarra is University Distinguished Professor of Computer Science in the Computer Science Department at the University of Tennessee and holds the title of Distinguished Research Staff in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), Turing Fellow at Manchester University, and Adjunct Professor in the Computer Science Department at Rice University. He is the director of the Innovative Computing Laboratory at the University of Tennessee. He is also the director of the Center for Information Technology Research at the University of Tennessee, which coordinates and facilitates IT research efforts at the University. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing and documentation of high-quality mathematical software. He has contributed to the design and implementation of the following open source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, Top500, ATLAS, and PAPI. He was awarded the IEEE Sid Fernbach Award in 2004; in 2008 he was the recipient of the first IEEE Medal of Excellence in Scalable Computing; and in 2010 he was the first recipient of the SIAM Special Interest Group on Supercomputing's award for Career Achievement. He is a Fellow of the AAAS, ACM, IEEE, and SIAM and a member of the National Academy of Engineering, and is also an ISC Fellow and a member of the ISC'14 Gauss Award Committee and the ISC'14 Steering Committee.

Technology Update and HPC Developments
Thomas Sterling, Center for Research in Extreme Scale Technologies (CREST), Indiana University

Dr. Thomas Sterling holds the position of Professor of Informatics and Computing at the Indiana University (IU) School of Informatics and Computing, and serves as Chief Scientist and Executive Associate Director of the Center for Research in Extreme Scale Technologies (CREST). Since receiving his Ph.D. from MIT in 1984 as a Hertz Fellow, Dr. Sterling has engaged in applied research in fields associated with parallel computing system structures, semantics, and operation in industry, government labs, and academia. Dr. Sterling is best known as the "father of Beowulf" for his pioneering research in commodity/Linux cluster computing. He was awarded the Gordon Bell Prize in 1997 with his collaborators for this work. He was the PI of the HTMT Project sponsored by NSF, DARPA, NSA, and NASA

REIMAGINE THE SERVER AS A BUSINESS ACCELERATOR.
Introducing HP ProLiant Gen9—advance your compute capabilities with triple the capacity1 and lower TCO.2 Accelerate service delivery by 66X.3 And push workloads 4X faster.4 Position yourself to drive opportunity, grab market share, and inspire success across your business. Reimagine the server. Think compute. hp.com/go/compute

HP ProLiant Gen9 servers powered by Intel® Xeon® processors

Intel, the Intel logo, Xeon, and Xeon Inside are trademarks or registered trademarks of Intel Corporation in the U.S. and/or other countries. 1Substantiation for triple the capacity: HP

2Substantiation for lower TCO: 100 DL380 G6 servers consolidated down to 16 DL380 Gen9 enabling 62% TCO savings over 3 years including initial acquisition costs. There is also a potential reduction in monthly OPEX expenditure of over 80%. Includes software support for vSphere and Windows. Also includes a 25% discount on hardware. August 2014. 3Anonymous customer results. Customer was able to reduce the time to build and deploy infrastructure for 12 call centers from 66 days to 1. Total of 2000 servers were deployed. IDC whitepaper sponsored by HP, Achieving Organizational Transformation with HP Converged Infrastructure Solutions for SDDC, January 2014, IDC #246385. 4SmartCache Performance done with equivalent controller in a controlled environment. HP Smart Storage engineers, Houston, TX as of 18 May 2014 posted on internal SmartCache wiki page. HP OneView support for HP ProLiant Gen9 rack (DL) and blade (BL) servers will be available with HP OneView 1.20 in Dec 2014. Customers who purchase HP OneView licenses now will be granted rights to use HP Insight Control software and may transition to HP OneView 1.20 with no additional license or support fees. Customers also have the option to purchase HP Insight Control. © 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.


to explore advanced technologies and their implication for high-end system architectures. Other research projects included the DARPA DIVA PIM architecture project with USC-ISI, the Cray Cascade Petaflops architecture project sponsored by the DARPA HPCS Program, and the Gilgamesh high-density computing project at NASA JPL. Thomas Sterling is currently engaged in research associated with the innovative ParalleX execution model for extreme scale computing to establish the foundation principles to guide the co-design for the development of future generation Exascale computing systems by the end of this decade. ParalleX is currently the conceptual centerpiece of the XPRESS project as part of the DOE X-stack program and has been demonstrated in proof-of-concept in the HPX runtime system software. Dr. Sterling is the co-author of six books and holds six patents. He was the recipient of the 2013 Vanguard Award.

Title TBA
Intel

Big data applications in nature conservation
SANParks Representative

The exa-scale challenge of SKA imaging software
Peter Braam

The Science Data Processor team is designing imaging software for the SKA telescope. The longevity of the telescope stands in sharp contrast with the fast developments in computing hardware and leads to unique requirements for the software: it should be easy to adapt and re-optimize as computing hardware evolves. We will start with a description of the problem, its scale, and how this leads to a tentative system architecture which displays both cloud and HPC features. We then highlight contributions by numerous academics and industries in the area of advanced high-performance software design. A solution emerges that leverages domain-specific languages with automatic optimization, providing both the desired performance and maintainability. Of special interest are interfaces with modern cloud and HPC packages.
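The idea of a domain-specific language with automatic optimization can be pictured with a toy sketch (an invented illustration, not the SDP team's actual software): elementwise image-arithmetic expressions are built as a small AST, and a constant-folding pass simplifies the tree before it is evaluated against data.

```python
# Toy expression DSL: build an AST for elementwise arithmetic, then
# "optimize" it (constant folding) before evaluation. Hypothetical
# example only -- the real SKA pipeline DSL is far more elaborate.

class Expr:
    def __add__(self, other): return Add(self, other)
    def __mul__(self, other): return Mul(self, other)

class Const(Expr):
    def __init__(self, v): self.v = v
    def fold(self): return self
    def eval(self, env): return self.v

class Var(Expr):
    def __init__(self, name): self.name = name
    def fold(self): return self
    def eval(self, env): return env[self.name]

class Add(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def fold(self):
        a, b = self.a.fold(), self.b.fold()
        if isinstance(a, Const) and isinstance(b, Const):
            return Const(a.v + b.v)   # folded at "compile" time
        return Add(a, b)
    def eval(self, env): return self.a.eval(env) + self.b.eval(env)

class Mul(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def fold(self):
        a, b = self.a.fold(), self.b.fold()
        if isinstance(a, Const) and isinstance(b, Const):
            return Const(a.v * b.v)
        return Mul(a, b)
    def eval(self, env): return self.a.eval(env) * self.b.eval(env)
```

The same optimization pass can be re-run whenever the target hardware changes, which is the maintainability argument made in the abstract.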

Biography: Dr. Peter Braam is a multidisciplinary innovator covering computing, data and science. As a recognized global leader in computing, without fear of being wrong or incomplete, he thinks systematically to advance technology frontiers and articulates insights in keynotes, white papers and private discussions. Research organizations, governments and enterprises engage with him to create concrete, focused visions and involve him in the co-design of solutions. Peter is currently working with Cambridge University on the SKA telescope Science Data Processor and is best known as the creator of the Lustre file system. Prior to founding and running five startups (the first four of which were acquired) related to parallel computing and programming languages, Peter was a senior academic at Oxford and Carnegie Mellon.

Real-world examples of HPC workloads and daily practices
Martin Hilgeman, HPC Consultant EMEA, Dell

Improving Wildlife Tracking via HPC-Enabled Applications
Robert Sinkovits, San Diego Supercomputer Center, University of California San Diego

Improvements in digital biotelemetry tracking devices (biologgers) have enabled researchers to collect accurate, long-term datasets on the movements of animals that would be prohibitively difficult to observe directly in the wild. Global Positioning System (GPS) biologgers have been dramatically reduced in size and weight and can record an animal’s location to an accuracy of ~2m for durations of more than a year, or even longer when equipped with a solar panel. For example, the California condors reintroduced to their former habitat in Mexico by San Diego Zoo Global (SDZG) have a <50g solar-powered GPS biologger attached to their wings that provides hourly locations which can be downloaded directly from the Internet.

Biotelemetry has contributed to major advances in understanding key concepts of animal ecology, including resource use, home range, dispersal, and population dynamics. Biotelemetry is also spawning powerful tools for informing strategies for conserving endangered species and habitats. For example, information on animal movements can be matched to environmental attributes to build a biologically realistic picture of an animal’s ranging patterns and habitat use. Conservation managers and regulatory agencies can use this information to gauge and improve the effectiveness of existing and proposed measures to protect animal populations, such as habitat conservation zoning, reserve boundaries and wildlife corridors.

This collaboration between the San Diego Zoo, the US Geological Survey and the San Diego Supercomputer Center brought together wildlife experts and HPC computational scientists to make major advances in the algorithms for processing biologger data, with an initial application to tracking endangered California condors. We developed an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method and making feasible full 3D estimates rather than just 2D projections. Full 3D results are a critical advance, particularly for species that make large vertical excursions (e.g. birds, aquatic mammals and arboreal species). In this case, for example, it was critical to accurately assess the condor home range, including elevation, in order to evaluate the potential impact of a proposed wind farm near the condor habitat.
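The core of a 3D kernel density estimate from GPS fixes can be sketched in a few lines (a hypothetical, unoptimized illustration with an invented bandwidth, not the authors' production code): the density at a query point is the average of Gaussian kernels centred on the recorded locations.

```python
import math

def kde3d(points, query, bandwidth=50.0):
    """Evaluate a 3-D Gaussian kernel density estimate at `query`.

    points    -- list of (x, y, z) GPS fixes, in metres
    query     -- (x, y, z) location at which to estimate density
    bandwidth -- kernel standard deviation in metres (illustrative value)
    """
    h = bandwidth
    # Normalisation for an isotropic 3-D Gaussian kernel.
    norm = 1.0 / (len(points) * (math.sqrt(2 * math.pi) * h) ** 3)
    total = 0.0
    for (px, py, pz) in points:
        d2 = (query[0] - px) ** 2 + (query[1] - py) ** 2 + (query[2] - pz) ** 2
        total += math.exp(-d2 / (2 * h * h))
    return norm * total
```

The production code evaluates this on a dense 3D grid over millions of fixes, which is where the parallelization and 1000-fold speedup reported above come in: every grid point is independent.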

In addition to enabling 3D estimation, the large improvements in code performance enable science that was previously not feasible. Obviously it will be easier to process larger amounts of data from greater numbers of animals and more frequent observations. A very fast algorithm allows calculations to be launched and results returned in just seconds. This might be an extremely valuable capability for a conservation biologist in the field who wants immediate information on the recent range of tracked animals. Finally, researchers are beginning to consider calculations involving multiple animals and overlaps in their space use. These applications are just a beginning, and end users will drive new use cases based on the enhanced capabilities enabled by HPC.

100Gb/s Interconnect Solutions for High Performance Compute and Storage Platforms

• 100Gb/s InfiniBand and Ethernet adapter providing a scalable, efficient, high-performance solution
• Supports all interconnect speeds – 10, 20, 25, 40, 50, 56 and 100Gb/s
• 150M msg/sec and 0.7µs latency
• RDMA and RoCEv2, GPUDirect RDMA, Coherent Accelerator Processor Interface (CAPI)
• EDR 100Gb/s InfiniBand throughput per port
• 36-port 1U InfiniBand switch provides 7.2Tb/s throughput
• <90ns port latency
• InfiniBand router

Designing Software Libraries and Middleware for Exascale Computing: Opportunities and Challenges
Dhabaleswar K. (DK) Panda, The Ohio State University

The high-end computing community is aiming to enter exascale-level computing during the next six to eight years. Such systems will consist of millions of processors and accelerators. This presentation will first focus on the architectural aspects of such exascale computing systems. Next, we will focus on challenges and opportunities in designing software libraries and middleware for such systems. Both HPC and Enterprise/Big Data systems will be targeted. For HPC systems, we will focus on multiple emerging trends: support for hybrid MPI+PGAS (OpenSHMEM and UPC) programming models, support for GPGPUs and Intel Xeon Phi, scalable collectives (multi-core-aware, topology-aware and power-aware), non-blocking collectives using an offload framework, and schemes for fault tolerance/fault resilience. For Enterprise/Big Data systems, we will focus on RDMA-enabled high-performance and scalable designs of Apache Hadoop (including HDFS, MapReduce, RPC and HBase), Apache Spark and Memcached. Schemes for supporting virtualization with high performance and RDMA-based WAN communication will also be presented.
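Why scalable collectives matter at exascale can be shown with a toy simulation (pure Python, not MVAPICH2 or any real MPI library): a recursive-doubling allreduce finishes in log2(p) communication steps instead of the p − 1 steps a naive ring would need, which is the difference between ~20 and ~1,000,000 steps on a million-rank machine.

```python
def allreduce_recursive_doubling(values):
    """Simulate a recursive-doubling allreduce (sum) over p ranks.

    `values` holds one value per simulated rank; p must be a power of
    two. Returns the final per-rank values (all equal to the global
    sum) and the number of communication steps, which grows as log2(p).
    """
    p = len(values)
    assert p & (p - 1) == 0, "power-of-two rank count assumed"
    vals = list(values)
    steps = 0
    dist = 1
    while dist < p:
        # In each step, rank r exchanges partial sums with its partner
        # rank r XOR dist, doubling the data each rank has accumulated.
        vals = [vals[r] + vals[r ^ dist] for r in range(p)]
        steps += 1
        dist *= 2
    return vals, steps
```

Real implementations layer topology-awareness and offload on top of this basic pattern, but the logarithmic step count is the starting point.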

Biography: Dhabaleswar K. (DK) Panda is a Professor of Computer Science and Engineering at the Ohio State University. His research interests include parallel computer architecture, high performance networking, InfiniBand, exascale computing, programming models, GPUs and accelerators, high performance file systems and storage, virtualization, cloud computing and Big Data. He has published over 350 papers in major journals and international conferences related to these research areas. Prof. Panda and his research group members have been doing extensive research on modern networking technologies including InfiniBand, High-Speed Ethernet and RDMA over Converged Enhanced Ethernet (RoCE). The MVAPICH2 (High Performance MPI over InfiniBand, iWARP and RoCE) and MVAPICH2-X software libraries, developed by his research group (http://mvapich.cse.ohio-state.edu), are currently being used by more than 2,225 organizations worldwide (in 73 countries). This software has enabled several InfiniBand clusters to get into the latest TOP500 ranking during the last decade. More than 224,000 downloads of this software have taken place from the project’s website alone. This software package is also available with the software stacks of many network and server vendors and Linux distributors. The new RDMA-enabled Apache Hadoop package, RDMA-enabled Memcached package, and OSU HiBD benchmarks (OHB) are publicly available from the High-Performance Big Data project site: http://hibd.cse.ohio-state.edu/. Prof. Panda’s research has been supported by funding from the US National Science Foundation, the US Department of Energy, and several companies including Intel, Cisco, Cray, Sun, Mellanox, QLogic, NVIDIA and NetApp. He is an IEEE Fellow and a member of ACM. More details about Prof. Panda are available at

http://www.cse.ohio-state.edu/~panda/.

HPC Technology to Drive Innovations – Infinite Design Exploration
Mr Detlef Schneider, Senior VP EMEA at Altair Engineering

HPC technology is becoming increasingly important to drive innovation in many industries, with great potential to increase the competitiveness of companies, organizations and countries. As massive compute capacity becomes available at reasonable cost to a broad group of users, simulation technology can suddenly be used much more intensively, and in non-classical ways, to explore rather than merely simulate. Of course, this opportunity comes with a number of challenges that need to be solved for the whole system end to end, including energy efficiency, availability across different user groups, application software scalability, robustness and accuracy, easy accessibility of the system for non-HPC specialists, and the handling of massive data, to name a few.

The presentation will highlight new possibilities that emerge from using HPC simulation and cloud technology in the specific domain of product design and development. With the massive and flexible availability of compute resources, engineers and scientists can leverage simulation technology in a very different way, moving away from merely replicating or replacing physical tests towards extensively exploring and optimizing designs, with the goal of making products more robust and better, as well as shortening development times. With examples from different industries, the presentation will highlight the benefits of HPC for product development organizations, the solutions available, and the challenges that still need to be addressed.


KEYNOTE ABSTRACTS

Predictive Science and Laboratory Data Management Software Solutions from Accelrys

Marc Meunier, 3DS BIOVIA

Dr Meunier is principal field application scientist and Science Council Fellow at 3DS BIOVIA, holds a master’s from Pierre et Marie Curie University (Fr) and received a doctorate in chemistry from Bangor University (UK). After completing his doctorate he worked as a postdoctoral research fellow at Imperial College, London. He joined BIOVIA in 2000 as a product specialist for materials modelling. Meunier’s research interests include the study of nanodielectrics, the simulation of polymeric materials used in membrane technology, pharmaceutical materials science and more recently the growing field of materials informatics. He is on the editorial board of the journal Molecular Simulation; his publications appear in Chemical Physics, Applied Physics and Polymer journals; and he has recently edited a book entitled “Industrial Applications of Molecular Simulations.”

“Interconnect Your Future”

Yossi Avni, Vice President of Sales, EMEA, Mellanox Technologies

Abstract: The exponential growth in data and the ever-growing demand for higher performance to serve the requirements of the leading scientific applications drive the need for Petascale systems and beyond, and for the ability to connect tens of thousands of compute and co-processor nodes in a very fast and efficient way. The interconnect has become the enabler of data and the enabler of efficient simulations. Beyond throughput and latency, the data center interconnect needs to be able to offload the processing units from the communications work in order to deliver the desired efficiency and scalability. Mellanox has already demonstrated 100Gb/s cable solutions in March 2014 and announced the world’s first 100Gb/s switch at the ISC’14 conference in June 2014. Furthermore, Mellanox has recently introduced the HPC-X software package, which provides a complete solution for MPI and PGAS/SHMEM/UPC environments with smart offloading techniques. The presentation will cover the latest technology and solutions from Mellanox that connect the world’s fastest supercomputers, and a roadmap for the next generation of InfiniBand speeds.

Lustre based Enterprise Storage – Tuning for optimal performance

Torben Kling Petersen, Seagate Technology

High-performance compute systems place an increasing demand on their primary connected storage systems. Modern file system implementations are no longer just “scratch” storage; data availability and data integrity are considered just as important as performance. In today’s marketplace, as well as in advanced research, the demands on a storage system increase along a number of fronts, including capacity, data integrity, system reliability and system manageability, in addition to an ever-increasing need for performance. Understanding the available tuning parameters, the enhancements in RAID reliability, and the possible tradeoffs between slightly opposing tuning models therefore becomes a critical skill. The Lustre file system has for many years been the most popular distributed file system for HPC. While the Lustre community to date has been partial to older Lustre server and client releases, a number of the new features desired by many users require moving to more modern versions. The major dilemma historically has been that client versions such as 1.8.x perform faster than the 2.x releases, but support for modern Linux kernels is only available with recent Lustre server releases, and newly implemented feature sets require moving to newer client versions.

This paper examines the client and server tuning models, hardware components, security features and the advent of new RAID schemes, along with their implications for performance. Specifically, when using benchmarking tools such as IOR, new testing models and parameter sets require storage I/O benchmarking to change and more closely mimic contemporary application I/O workloads.

Image-based cerebrovascular modeling for advanced diagnosis and interventional planning

Alejandro F. Frangi, CISTIB Centre for Computational Imaging & Simulation Technologies in Biomedicine, The University of Sheffield, Sheffield, UK

Current technological progress in multidimensional and multimodal acquisition of biomedical data enables detailed investigation of the individual health status that should underpin improved patient diagnosis and treatment outcome. However, the abundance of biomedical information has not always translated directly into improved healthcare. Rather, it increases the current information deluge and desperately calls for more holistic ways to analyse and assimilate patient data effectively.

The Virtual Physiological Human aims at developing the framework and tools that would ultimately enable such integrated investigation of the human body and rendering methods for personalized and predictive medicine.

This lecture will focus on and illustrate two specific aspects: a) how the integration of biomedical imaging and sensing, signal and image computing, and computational physiology are essential components in addressing this personalized, predictive and integrative healthcare challenge, and b) how such principles could be put to work to address specific clinical questions in the cardiovascular domain.

Finally, this lecture will also underline the important role of model validation as a key to translational success and how such validations span from technical validation of specific modeling components to clinical assessment of the effectiveness of the proposed tools. To conclude, the talk will outline some of the areas where current research efforts fall short in the VPH domain and that will possibly receive further investigation in the upcoming years.

IME: Towards very high performing, scalable and energy efficient storage solutions

Dr. James Coomer, DDN

DDN is currently developing novel software, referred to as Infinite Memory Engine™ (IME), intended to provide a much more space-, cost- and energy-efficient way to provision I/O performance to large-scale systems. IME creates a fast tier of non-volatile memory by virtualizing a number of distributed SSDs into a single pool that applications recognize as conventional storage. This approach enables the decoupling of the filesystem and storage from the application, delivering orders-of-magnitude greater acceleration in I/O performance. DDN is actively introducing this technology (as testbeds) to strategic customers and partners throughout 2014, with full commercial offerings in 2015. IME has a number of distinguishing features. Firstly, it presents standard interfaces such as MPI-IO and POSIX, so no application modifications are required. IME supports highly accelerated performance for both reads and writes, extremely fast coordination within the burst-buffer layer, and immediate access to just-written data for readers on the network. A key feature of IME is that, prior to data being staged to the underlying filesystem, data alignment, coalescence and other ordering schemes are applied to the I/O to complement the filesystem’s I/O characteristics, making the most efficient use of the filesystem and underlying storage architecture. IME is flexible and can be used in a wide variety of environments. It will work with many kinds of server hardware, SSD devices, interconnect fabrics and underlying filesystems, and can place a burst buffer into almost any architecture. Customers should expect to see a diverse, large peer community of customers running IME — not just customers using a single HPC vendor’s burst buffer with a single filesystem.
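The coalescence idea can be pictured with a small sketch (a hypothetical illustration, not DDN's implementation): overlapping or adjacent write extents buffered in the fast tier are merged before staging, so the backing filesystem sees fewer, larger, better-aligned I/O operations.

```python
def coalesce_extents(extents):
    """Merge overlapping or adjacent (offset, length) write extents.

    Models (very roughly) the coalescing a burst buffer might apply
    before staging data to the backing filesystem: many small writes
    become a few large sequential ones.
    """
    if not extents:
        return []
    merged = []
    # Sort by file offset, then sweep left to right, merging any extent
    # that starts at or before the end of the previous merged extent.
    for off, length in sorted(extents):
        if merged and off <= merged[-1][0] + merged[-1][1]:
            last_off, last_len = merged[-1]
            merged[-1] = (last_off, max(last_len, off + length - last_off))
        else:
            merged.append((off, length))
    return merged
```

For example, four 4 KB-scale writes at offsets 0, 4, 16 and 20 collapse into two contiguous extents, halving the number of operations the filesystem must absorb.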

Modelling of heliospheric modulation of cosmic rays by means of Stochastic Differential Equations

Andreas Kopp, Jan-Louis Raath, Du Toit Strauss and Marius Potgieter, North West University

The transport of energetic charged particles (such as cosmic rays) through the heliosphere is described by the Parker equation, a Fokker-Planck type equation describing the distribution function of these particles as a function of space, energy and time. The traditional way to solve this equation is based on finite-difference methods: the solution is computed for the entire heliosphere, resulting in limitations concerning spatial and time resolution and in large memory requirements. An alternative is to use Stochastic Differential Equations (SDEs), which solve the Parker equation via trajectories of so-called pseudo-particles (phase-space elements) that are binned accordingly in order to obtain the distribution function at a given point. This type of simulation is free of stability conditions, and only the solutions at the points of interest need to be computed. In order to obtain good statistics, large numbers of pseudo-particles must be launched, but everything can be parallelised very simply and efficiently. Here, we present our simulation results for the propagation of electrons of Jovian origin and protons of Galactic origin. For the latter, special attention has to be devoted to particle drifts in the non-homogeneous magnetic field. Several modifications that are needed to avoid unphysical behaviour at the poles and to include drifts along the heliospheric current sheet, where the magnetic field changes sign, are described in detail. We also briefly discuss problems concerning the numerical treatment of sharp gradients in the transport parameters.
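A minimal sketch of the pseudo-particle approach, reduced to a 1-D advection-diffusion toy rather than the full Parker equation (all parameter values here are invented for illustration): each pseudo-particle integrates an independent SDE, dx = v dt + sqrt(2 kappa dt) N(0,1), and the final positions are binned to approximate the distribution function. Because trajectories never interact, the particle loop parallelises trivially.

```python
import math
import random

def pseudo_particle_histogram(n_particles, x0=0.0, v=1.0, kappa=0.5,
                              t_end=1.0, dt=0.01, bins=20,
                              x_range=(-5.0, 5.0), seed=42):
    """Toy SDE solver for 1-D advection-diffusion, standing in for the
    pseudo-particle method used on the Parker equation.

    Integrates dx = v*dt + sqrt(2*kappa*dt)*N(0,1) for each particle,
    then bins final positions into a normalised histogram that
    approximates the distribution function at t_end.
    """
    rng = random.Random(seed)
    lo, hi = x_range
    width = (hi - lo) / bins
    counts = [0] * bins
    steps = int(t_end / dt)
    for _ in range(n_particles):
        x = x0
        for _ in range(steps):
            # Euler-Maruyama step: deterministic drift + random diffusion.
            x += v * dt + math.sqrt(2.0 * kappa * dt) * rng.gauss(0.0, 1.0)
        b = int((x - lo) / width)
        if 0 <= b < bins:
            counts[b] += 1
    return [c / n_particles for c in counts]
```

The resulting histogram should be centred near x = v * t_end, reflecting the advected-and-diffused pulse; the statistical noise shrinks as 1/sqrt(n_particles), which is why production runs launch very large particle counts.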

Platinum Sponsor of the CHPC National Meeting 2014

PBS Works: The Trusted Leader in HPC Workload Management

The PBS Works suite delivers:
• Market-leading job scheduling
• Web-based job submission and management
• Remote visualization of Big Data
• Productivity tools for private/public HPC clouds
• Detailed analytics and reporting


was investigated by PASS. The results suggest that chromane-containing glitazones are apoptotic agonists (activating p53 via the intrinsic pathway, leading to apoptosis) and that those which do not contain the chromane are devoid of this activity. In the case of hepatotoxicity by non-chromane glitazones and their metabolites, such as M-3, RM-3, rosiglitazone and pioglitazone, PASS suggests that these chemicals are not apoptotic agonists but are substrates for CYP enzymes (Phase I oxidative enzymes) and Phase II conjugating enzymes, interfering with bile acid metabolism and rendering bile acids more toxic (cholestasis). The unmetabolised bile salts further initiate apoptosis via the intrinsic and extrinsic pathways.

The docking study further reveals that chromane-containing glitazones show a key hydrophobic interaction with residue Val 93 (orange colour), which occupies a central position in the p53 binding pocket of MDM2, a peculiar topological feature of this protein cavity. Similarly, non-chromane glitazones do not show any such hydrophobic interaction with Val 93. This docking study supports our hypothesis that glitazones containing the chromane ring act as apoptotic agonists via the hydrophobic interaction with Val 93 and are responsible for the hepatotoxicity. Hence, for the development of new glitazones with minimal hepatotoxicity, replacement of the chromane by other heterocyclic moieties is crucial.

Region-free HPC – a case study of connecting medical image data in Boston, USA to the CHPC in Cape Town

Rudolph Pienaar, Boston Children’s Hospital

HPC offers considerable promise to reduce the computational time cost of numerous workflows. While the backend power of HPCs continues to improve steadily, the ease-of-use problem has remained largely unchanged since the mid-90s: i.e. end users are responsible for copying data into an HPC, managing their own analysis workflow engines, understanding the HPC scheduler, running and debugging their jobs, and on completion, copying data back out of the HPC. This complexity severely reduces the number of potential ad-hoc users of HPC resources, and consequently remains a stubborn bottleneck.

In this talk, I will demo a web-based workflow manager called CHRIS (Children’s Hospital Research Integration System) developed at Boston Children’s Hospital. CHRIS represents a technology to address and simplify the HPC access problem by presenting users with a familiar social web 2.0-inspired UI that allows easy access to medical image data within a hospital, and moreover connects these images to a library of image processing plugins. These plugins are executed on remote HPC resources. During the talk, I hope to log into a Boston CHRIS, access image data, and schedule a processing job of that Boston data in the Cape Town CHPC. On completion, the data is transparently copied back to Boston Hospital.

It is our hope that in situations where time of analysis is important, for example in clinical medical imaging where traumatic brain injury needs analysis in minutes rather than days, and where remote sites do not have access to internal computational power, a system such as CHRIS can connect this data to HPC hubs and bring the benefit of HPC


DFT study of Fischer-type metal carbenes

C.G.C.E. van Sittert, T.G.T. Mofokeng, J.I. du Toit, M. Landman

Fischer-type metal carbenes are used as catalysts in various organic synthesis reactions e.g. metathesis, cyclopropanation and benzannulation. In previous studies in the Catalysis and Synthesis Group at North West University, a modified molecular modelling method was developed to investigate the catalytic activity of metal carbenes for various reactions.

In this study the focus was on the application of the modified molecular modelling method to classify various Fischer-type metal carbene complexes with heteroaromatic groups (furan, bithiophene, N-methyl-thieno[3,2-b]pyrrole, 2-(2’-thienyl)furan and N-methyl-2-(2’-thienyl)pyrrole). The abovementioned complexes were synthesised by a research group at Pretoria University.

The following software packages were used for geometry optimization, orbital energy calculations and data analysis: Materials Studio 6.0 DMol3, Gaussian09, NBO, Solid-G and Statistica version 12. It was concluded from this study that compounds A3, B9 and C15 are suitable candidates for nucleophilic attack reactions, while compounds A4, C12, C13 and D23 are suitable for benzannulation and metathesis reactions. Compound B6 is suitable for both nucleophilic attack reactions and benzannulation.

Computational and Experimental Structural Studies of Selected Molybdenum(0) Monocarbene Complexes

Marilé Landman, Tamzyn Levell, Peet van Rooyen and Jeanet Conradie, University of Pretoria

A set of molybdenum Fischer carbene complexes, with different heteroatom substituents on the carbene carbon atom, was studied using density functional theory as well as single-crystal diffraction techniques. The complexes studied, [Mo(CO)5{C(X)2-furyl}], had their substituents varied systematically to yield novel complexes, with X = ethoxy (-OCH2CH3) or amino (-NH2, -NHCy) substituents as the heteroatom substituents. Reaction of the monocarbene pentacarbonyl complexes with PPh3 resulted in the formation of tetracarbonyl carbene complexes, where the PPh3 ligand can be found either cis or trans to the carbene ligand. Reaction of the monocarbene pentacarbonyl complexes with dppe yielded chelated tricarbonyl carbene complexes; both fac and mer isomers are possible. Changing the dihedral angle between the heteroatom of the heteroatom substituent and the heteroatom of the heteroaromatic substituent may result in syn and anti conformations. X-ray crystallography data were compared with DFT calculations to determine the minimum-energy conformation for each complex. Comparisons will be discussed.

A numerical modelling study of cosmic ray modulation in a global MHD heliosphere

Xi Luo, Marius Potgieter, Ming Zhang and du Toit Strauss, North-West University:

Parker’s transport equation forms the basis for modern computer simulations of the modulation of cosmic rays (charged particles with very high energies) in the heliosphere (the large region of space surrounding the Sun). The theory of stochastic differential equations provides a convenient way to solve this transport equation very effectively using parallel computing. Based on this stochastic method, we first incorporate the heliospheric geometry and features using an advanced MHD solution. We then construct a 3-D cosmic ray transport model using the MHD model’s output, which has a correct treatment of interstellar neutral atoms. Using this numerical package, we simulate the flux of cosmic rays and explore effects in heliospheric latitude and longitude.
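The core of the stochastic method is easy to sketch: the transport equation is replaced by an equivalent set of stochastic differential equations, and pseudo-particles are then integrated independently, which is what makes the approach parallelise so well. A toy one-dimensional Euler-Maruyama version follows; all parameter values are illustrative, not those of the study, and drifts and adiabatic energy losses are omitted.

```python
import numpy as np

# Toy 1-D radial SDE for Parker-type transport: each pseudo-particle obeys
#   dr = V_sw dt + sqrt(2 * kappa) dW
# (outward convection plus diffusion) and is traced until it crosses an
# outer boundary.  Units: AU and days; all values are illustrative only.
def trace_pseudo_particles(n_particles=500, r0=1.0, r_boundary=40.0,
                           v_sw=0.23, kappa=3.8, dt=0.05, seed=1):
    """Return the exit time (days) of each pseudo-particle released at r0."""
    rng = np.random.default_rng(seed)
    r = np.full(n_particles, r0)
    t = np.zeros(n_particles)
    active = np.ones(n_particles, dtype=bool)
    while active.any():
        n_act = int(active.sum())
        dW = rng.normal(0.0, np.sqrt(dt), size=n_act)       # Wiener increments
        r[active] += v_sw * dt + np.sqrt(2.0 * kappa) * dW  # Euler-Maruyama step
        r[active] = np.abs(r[active])                       # reflect at the origin
        t[active] += dt
        active &= r < r_boundary                            # retire escaped particles
    return t

exit_times = trace_pseudo_particles()
print(f"mean exit time: {exit_times.mean():.0f} days")
```

Because the trajectories are mutually independent, the workload distributes almost perfectly across cores or GPU threads, which is the property the abstract exploits.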

Bio-inspired FeS nano-catalyst for CO2 activation and conversion

N.Y. Dzade, A. Roldan and N.H. de Leeuw, Department of Chemistry, University College London:

Despite the high thermodynamic stability of CO2, biological systems (e.g. the carbon monoxide dehydrogenase (CODH) enzyme) are capable of both activating the molecule and converting it into a range of organic molecules, all under moderate conditions. If we were able to emulate nature and successfully convert CO2 into useful chemical intermediates without the need for extreme reaction conditions, the benefits would be enormous: one of the major gases responsible for climate change would become an important feedstock for the chemical and pharmaceutical industries. Iron sulfide membranes formed in the warm, alkaline springs on the Archaean ocean floor are increasingly considered to be the early catalysts for a series of chemical reactions leading to the emergence of life. The structural similarity of the cubane active centres in the CODH enzyme to the surfaces of present-day sulfide minerals such as greigite (Fe3S4) and mackinawite (FeS) offers a valuable route of enquiry. In fact, acetic acid has been synthesised on iron sulfide surfaces under conditions simulating the Earth before life.

In view of the importance of iron sulfide minerals as catalysts for prebiotic CO2 conversion, we have carried out in the present study a comprehensive computational investigation, based on density functional theory techniques, of the structures and chemical reactivity of the low-index surfaces of mackinawite (tetragonal FeS) towards CO2 activation and conversion. The interaction of these surfaces with water molecules, which could serve as the necessary hydrogen source in the CO2 reductions, has also been thoroughly investigated. The results are very promising: the FeS surfaces exhibit strong chemical reactivity towards the CO2 molecule, spontaneously activating it via charge transfer from the surface species to the adsorbed CO2, resulting in the formation of negatively charged, bent CO2^δ− species.

Hydrogenation reactions of the activated CO2 molecule show that products such as carbon monoxide (CO), formic acid (HCOOH), formaldehyde (CH2O) and methanol (CH3OH) are attainable under moderate conditions (favourable thermodynamics and kinetics).

An insight into energy-to-solution when running GPU-accelerated applications on Wilkes

Filippo Spiga, Stuart Rankin and Paul Calleja, High Performance Computing Service (HPCS), University of Cambridge (UK):

HPC systems nowadays, especially those which aim to break the Petaflop barrier, are expected to be power hungry. We are already witnessing the installation of HPC systems designed to be “energy efficient”; the University of Cambridge GPU system (Wilkes) is an example. The same concern for energy efficiency should also motivate application specialists to better exploit heterogeneous many-core architectures during the development phase (the concept of ‘co-design’). Exascale is probably still far away, but the problem is already in front of us now.

The aim of this talk is to explore and compare energy-to-solution and time-to-solution measurements from synthetic benchmarks and real scientific applications. Our main goal is to capture the energy profiles of these applications using standard tools embedded in SLURM, without building an ad-hoc, invasive external hardware infrastructure around the HPC system. We will present the framework we put in place for on-demand and transparent monitoring, discussing its limitations and future improvements.
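The energy-to-solution metric itself reduces to integrating sampled node power over a job's runtime. The sketch below illustrates the arithmetic only; the two runs and their numbers are invented, and in practice the samples would come from SLURM's energy accounting rather than hand-written lists.

```python
# Energy-to-solution is the time integral of the power drawn while a job
# runs.  Given power samples (watts) taken at a fixed interval, a simple
# Riemann sum recovers it in joules.  The two "runs" below are invented,
# purely to show how a faster but hungrier run can still win on energy.
def energy_to_solution(power_w, interval_s):
    """Integrate sampled power (W) over time (s) -> energy in joules."""
    return sum(power_w) * interval_s

cpu_run = [310.0] * 240   # 240 s at ~310 W (hypothetical CPU-only run)
gpu_run = [520.0] * 90    # 90 s at ~520 W (hypothetical GPU-accelerated run)

e_cpu = energy_to_solution(cpu_run, 1.0)  # 310 W * 240 s = 74 400 J
e_gpu = energy_to_solution(gpu_run, 1.0)  # 520 W * 90 s  = 46 800 J
print(f"CPU: {e_cpu/1e3:.1f} kJ, GPU: {e_gpu/1e3:.1f} kJ")
```

The example makes the talk's central point concrete: the GPU run draws more power at any instant, yet its shorter time-to-solution gives it the lower energy-to-solution.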

Structural Insight of Glitazones for Hepatotoxicity: Resolving the Mystery by PASS

Harun M. Patel and Rajshekhar Karpoormath

Dept. of Pharmaceutical Chemistry, University of KwaZulu-Natal:

Troglitazone causes severe hepatic injury in certain individuals, and multiple mechanisms related to this hepatotoxicity have been reported, creating confusion. In the present study, the mechanism of the hepatic injury caused by glitazones was investigated with PASS. The results suggest that chromane-containing glitazones are apoptosis agonists (activating p53 via the intrinsic pathway and thereby leading to apoptosis), while those which do not contain the chromane are devoid of this activity. In the case of hepatotoxicity by non-chromane glitazones and their metabolites, such as M-3, RM-3, rosiglitazone and pioglitazone, PASS suggests that these chemicals are not apoptosis agonists but substrates for CYP (phase-I oxidative) enzymes and phase-II conjugating enzymes, interfering with bile acid metabolism and rendering bile acids more toxic (cholestasis). The unmetabolised bile salts further initiate apoptosis via the intrinsic and extrinsic pathways.

The docking study further reveals that chromane-containing glitazones show a key hydrophobic interaction with residue Val 93, which occupies a central position in the p53-binding pocket of MDM2, a peculiar topological feature of this protein cavity. Non-chromane glitazones, in contrast, do not show any hydrophobic interaction with Val 93. This docking study supports our hypothesis that glitazones containing the chromane ring act as apoptosis agonists via the hydrophobic interaction with Val 93 and are responsible for the hepatotoxicity. Hence, for the development of new glitazones with minimal hepatotoxicity, replacement of the chromane by other heterocyclic moieties is crucial.

Region-free HPC -- a case study of connecting medical image data in Boston, USA to the CHPC in Cape Town.

Rudolph Pienaar, Boston Children’s Hospital:

HPC offers considerable promise to reduce the computational time cost of numerous workflows. While the backend power of HPC systems continues to improve steadily, the ease-of-use problem has remained largely unchanged since the mid-90s: end users are responsible for copying data into an HPC system, managing their own analysis workflow engines, understanding the HPC scheduler, running and debugging their jobs and, on completion, copying data back out. This complexity severely reduces the number of potential ad-hoc users of HPC resources and consequently remains a stubborn bottleneck.

In this talk, I will demo a web-based workflow manager called CHRIS (Children’s Hospital Research Integration System) developed at Boston Children’s Hospital. CHRIS represents a technology to address and simplify the HPC access problem by presenting users with a familiar social web 2.0-inspired UI that allows easy access to medical image data within a hospital, and moreover connects these images to a library of image processing plugins. These plugins are executed on remote HPC resources. During the talk, I hope to log into a Boston CHRIS, access image data, and schedule a processing job of that Boston data in the Cape Town CHPC. On completion, the data is transparently copied back to Boston Hospital.

It is our hope that, in situations where the time of analysis is important (for example in clinical medical imaging, where analysis of traumatic brain injury is needed in minutes rather than days) and where remote sites do not have access to internal computational power, a system such as CHRIS can connect this data to HPC hubs and bring the benefit of HPC directly to medical facilities wherever they may be.

Efficiency and application of the ONETEP Linear-Scaling Density Functional Theory on Modern High Performance Computing Platforms

Dr. Karl A. Wilkinson, School of Chemistry, University of Southampton:

The ONETEP (Order-N Electronic Total Energy Package) linear-scaling density functional theory code may be used to perform highly accurate electronic structure calculations on tens of thousands of atoms. The package has been ported for execution on a range of homogeneous and heterogeneous high performance computing (HPC) platforms. We discuss the efficiency of these implementations in terms of time-to-solution, parallel efficiency and energy consumption, and show that these developments permit meaningful calculations to be performed at this level of accuracy at an unprecedented scale.

The implementations utilize the MPI, OpenMP and OpenACC paradigms. Our work has focused on applying these paradigms to the routines which dominate the computational load: 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. Whilst the underlying numerical methods are unchanged, significantly different algorithms are used within ONETEP to efficiently distribute the workload across the various computational resources and exploit them efficiently.
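The first of those dominant routines, the FFT box operation, exploits the strict localization of ONETEP's support functions: transforms are taken on a small box around each function rather than on the full simulation cell. A NumPy sketch of the cost argument, with grid sizes invented purely for illustration:

```python
import numpy as np

# A strictly localized support function occupies only a small sub-grid
# ("FFT box") of the simulation cell, so its Fourier transforms cost
# O(b^3 log b) instead of O(n^3 log n) for the full cell.
n_cell, n_box = 96, 24                         # full-cell vs FFT-box grid points
x = np.indices((n_box,) * 3) - n_box // 2
orbital = np.exp(-0.1 * (x ** 2).sum(axis=0))  # localized trial function

G = np.fft.fftn(orbital)                       # small-box forward transform
roundtrip = np.fft.ifftn(G).real               # ...and back

# Rough operation-count ratio of a full-cell FFT to an FFT-box FFT:
speedup = (n_cell / n_box) ** 3 * np.log(n_cell) / np.log(n_box)
print(f"max round-trip error: {np.abs(roundtrip - orbital).max():.1e}; "
      f"~{speedup:.0f}x fewer FFT operations per transform")
```

Since many such small transforms are independent of one another, they map naturally onto the MPI, OpenMP and OpenACC parallelism the abstract describes.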

These developments result in a significantly shorter time-to-solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmarks that demonstrate performance and with chemically meaningful calculations that show the practical importance of these developments.

We investigate the performance of these implementations on different platforms and examine the power-to-solution of these approaches at different scales of parallelism.

Evaluating surface processes in the Southern Ocean in ocean models

Chang, N., Swart, S. and Monteiro, P.M.S.

Ocean Systems and Climate Research Group, CSIR:

The Southern Ocean is the most effective carbon sink of the world’s oceans. However, when compared to observations, air-sea carbon fluxes are poorly represented in many models, such as those used in climate change projections. Carbon exchange across the air-sea interface, as well as between the surface and deep ocean, is driven by biological and physical processes of the upper ocean, known as the biological and solubility pumps. The misrepresentation of these surface ocean processes in the models may drive the incorrect carbon exchange. The first step is to fully understand the mechanisms behind the biological and solubility pumps and how they are incorporated in the ocean models through parameterizations or increased resolution. This may lead to an improvement in models used in climate projection.

In this study, a suite of ocean model configurations was designed with varying domain size and resolution, ranging from a global coarse-resolution model (2º) to finer-scale regional ocean models (1/12º) and a localised ultra-high-resolution model (1/36º). Surface ocean properties of these models were statistically evaluated and compared to datasets of in situ and satellite observations. The effect of these processes on the surface ocean is addressed and the implications for future models discussed.

Fast Collision Detection on the GPU for particle simulations

Nicolin Govender, Daniel N. Wilke and Schalk Kok, Advanced Mathematical Modelling, CSIR, and Department of Mechanical and Aeronautical Engineering, University of Pretoria:

Particle simulations arise in many areas of engineering and science and involve simulating the individual trajectories of all particles in the system. A particle experiences forces as a result of its interaction with particles that are either within its local neighbourhood, as in molecular dynamics (MD), or in physical contact, as in granular media (GM) simulations.

This requires finding the nearest neighbours (NN), which has a computational complexity of O(N). This limits the size of particle systems that can be simulated in the time period required for many industrial and computational scenarios. Finding the nearest neighbours is well suited to the parallel nature of the graphics processing unit (GPU) and can result in a significant speed-up over CPU implementations. However, due to this parallelism the GPU cannot take advantage of force symmetry, giving a computational complexity of O(2N). Memory transactions on the GPU have been a limiting factor in exploiting force symmetry. However, hardware improvements to GPUs over the past few years (Kepler, 2012, onwards) can reduce the cost of memory transactions for suitable algorithms. In this paper we introduce an NN search algorithm with a computational complexity of O(N). The implementation of this algorithm on an NVIDIA GTX 780 GPU results in a 40% speed-up at the cost of only a 10% increase in memory overhead. We demonstrate this performance with various particle simulations using the discrete element method (DEM) based code BLAZE-DEM.
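The uniform-grid ("cell list") strategy that typically underlies such O(N) neighbour searches can be sketched serially; the version below is a hedged stand-in for a GPU implementation, and the function name is ours, not taken from BLAZE-DEM.

```python
import numpy as np
from collections import defaultdict

# Cell-list neighbour search: hash each particle to a cell whose edge
# length is at least the interaction cutoff, then test candidate pairs
# only within the 27 surrounding cells instead of all N^2 pairs.
def neighbour_pairs(pos, cutoff):
    """Return all index pairs (i < j) separated by less than `cutoff`."""
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[tuple((p // cutoff).astype(int))].append(i)
    pairs = set()
    for (cx, cy, cz), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                        for i in members:
                            if i < j and np.linalg.norm(pos[i] - pos[j]) < cutoff:
                                pairs.add((i, j))
    return pairs

rng = np.random.default_rng(0)
pts = rng.random((200, 3)) * 5.0   # 200 particles in a 5 x 5 x 5 box
print(f"{len(neighbour_pairs(pts, 0.3))} pairs within cutoff 0.3")
```

On a GPU the outer loops become one thread per particle or per cell, which is where the parallel speed-up discussed in the abstract comes from.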

Formation of structure in the Universe

Stefan Gottlöber, Leibniz-Institut für Astrophysik Potsdam:

In 1965 Arno Penzias and Robert Wilson detected the cosmic microwave background (CMB) radiation. This radiation was imprinted on the sky more than 13 billion years ago, only a few hundred thousand years after the Big Bang. In 1992 the COBE satellite detected anisotropies in the temperature of the CMB radiation. These temperature fluctuations have meanwhile been measured with very high precision by satellites (WMAP, Planck) as well as many ground-based observations. The measured temperature fluctuations tell us that shortly after the Big Bang the Universe was almost homogeneous, with tiny density fluctuations of the order of 10^−5. Comparing the power spectrum of the measured density fluctuations with models, cosmologists concluded that the Universe is spatially flat and at present consists of about 68% of some unknown Dark Energy, 27% of equally unknown Dark Matter and 5% of baryons.

In the evolved universe one can directly observe the distribution of baryons and indirectly (via gravitational lensing and velocity measurements) the distribution of Dark Matter. We see huge clusters of galaxies, with masses up to a few times 10^15 solar masses, in the knots of the cosmic web, which is built up by galaxies spanning a wide range of masses, from tiny dwarfs (10^9 solar masses) to massive ellipticals (10^13 solar masses). All these structures formed out of the tiny fluctuations generated during the early inflationary phase and measured in the CMB.

During the last two decades our understanding of the evolution of structure in the universe has grown substantially.

Due to the non-linear nature of gravitational dynamics and the complicated gas-astrophysical processes involved, numerical simulations on modern supercomputers have been the driving force behind much of this theoretical progress. Dark-matter-only simulations of the evolution of large cosmological volumes use thousands of cores of the largest supercomputers in parallel.

In the analysis of these simulations, assumptions must be made about the observable objects (galaxies) hosted by the dark matter halos. Gas-dynamical simulations allow the formation of stars to be included, but such simulations are much more demanding, both in computational resources and in the number of physical processes that must be considered in addition to gravitational clustering. These include radiative cooling of the gas, star formation and stellar feedback; magnetic fields, supermassive black holes and many other processes might also be important.

Cosmological simulations must cover a large dynamical and mass range. A representative volume of the universe should be large, but this comes at the expense of resolution. To overcome this problem, a new, almost orthogonal yet complementary approach to cosmological simulations has been introduced over the last few years: using observations of the nearby universe as constraints imposed on the initial conditions of the simulations. The resulting constrained simulations serve as a numerical laboratory of the nearby universe in which small-scale structures can be studied in detail.

Towards the IPCC 6th Assessment Report: building Africa’s first global model for climate change projections

N. Chang, Ocean Systems and Climate Research Group, CSIR:

Climate change is the most serious collective environmental challenge ever faced by humankind. It is a problem with global reach, but the research effort to address it is disproportionately concentrated in the northern hemisphere and in developed countries.

Southern hemispheric and African climate issues differ from those that drive the research and modelling effort in the north. In particular, oceans dominate the southern hemisphere and the land is largely occupied by arid systems and tropical forests. African terrestrial ecosystems and processes, Southern Ocean biogeochemistry and circulation dynamics and Southern Hemisphere atmospheric processes are under-studied and poorly represented in global models, despite being globally important contributors to earth system processes. Notably, of the roughly thirty currently existing coupled ocean-atmosphere global circulation models (CGCMs) and Earth System Models (ESMs) suitable for the projection of future climate change, only one had its genesis in the southern hemisphere. Towards addressing this disproportionality, and in alignment with the South African Department of Science & Technology’s Global Change Grand Challenge, the CSIR and partners are invested in building a Variable Resolution Earth System Model (VRESM), with the aim of contributing projections of future climate change to the Coupled Model Intercomparison Project Phase 6 (CMIP6) and Assessment Report 6 (AR6) of the IPCC. This first African-based Earth System Model (ESM) is being developed in close collaboration with the Commonwealth Scientific and Industrial Research Organisation in Australia, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) and the Centre National de la Recherche Scientifique - Institut Pierre Simon Laplace (CNRS-IPSL). The coupled model has as component models the variable-cubic atmospheric model (VCAM), a dynamic land-surface model (CABLE), the parallel cubic ocean model (PCOM), a dynamic ice model and an ocean biogeochemistry model (PISCES).

South African Student Cluster Competition

Eight teams are battling it out for the chance to once again defend South Africa’s dominance and represent the country in the International Student Cluster Competition, scheduled for June next year in Germany.

Teams began their work on Sunday, 30 November, and must build and configure cluster computers, then run a series of applications to see which team can get the best performance.

Winners will be announced at the award ceremony on Thursday evening, 4 December 2014.

CHPC WISHES TO THANK ALL ITS SPONSORS FOR THEIR GENEROUS SUPPORT.

Remember to save the date for the CHPC National Meeting 2015, Boardwalk Conference Centre, Port Elizabeth, 30 November - 4 December 2015