
Towards an efficient implementation of the Particle-Mesh-Ewald (PME) method on low-bandwidth Linux clusters

Workshop on Fast Algorithms for Long-Range Interactions, Forschungszentrum Jülich, 7 - 8 April 2005

Carsten Kutzner, [email protected]

Max-Planck-Institute for Biophysical Chemistry
Theoretical and Computational Biophysics Department
Göttingen


Goal of project
better performance of GROMACS on parallel machines

• David van der Spoel, Uppsala University (GMX developer)

• Erik Lindahl, Stockholm (GMX developer)

• Jakob Pichlmeier, IBM Munich (domain decomposition)

• Renate Dohmen, RZ Garching (load balancing)

• Carsten Kutzner (PME/PP node splitting)


Molecular dynamics simulations
• molecular dynamics (MD) simulations of proteins in water
• 1 000 – 1 000 000 atoms
• GROMACS 3.2.1 MD simulation package
• long-range electrostatic forces are evaluated with Particle-Mesh-Ewald (PME)
• Aquaporin-1, ≈ 80 000 atoms: protein (tetramer) embedded in a lipid bilayer membrane surrounded by water


Speedup of GROMACS 3.2.1
• N Intel Xeon CPUs, 3.06 GHz (Orca1)
• LAM 7.1.1 MPI
• Gigabit Ethernet
• timings via the high-resolution MPI_Wtime counter
• time step length variation typically ≈ 5 %
• 9-step average
• speedup = t1/tN, scaling = speedup/N
• Max. speedup: 2.2 @ 4 CPUs, scaling 0.55

[Plot: speedup (1–8) vs. number of CPUs (1–20)]
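The timing methodology of this slide can be written down compactly. A minimal sketch, assuming a hypothetical do_md_step() standing in for one GROMACS time step and an example single-CPU reference time t_1: each step is timed with the high-resolution MPI_Wtime counter, averaged over 9 steps, and converted to speedup t1/tN and scaling speedup/N.

```c
#include <mpi.h>
#include <stdio.h>

/* Hypothetical stand-in for one MD time step of the real code. */
static void do_md_step(void)
{
    volatile double x = 0.0;
    for (int i = 0; i < 1000000; i++)
        x += 1.0 / (i + 1);
}

int main(int argc, char **argv)
{
    const int nsteps = 9;                     /* average over 9 steps, as on the slide */
    int rank, nprocs;
    double t_sum = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (int i = 0; i < nsteps; i++) {
        MPI_Barrier(MPI_COMM_WORLD);          /* start all ranks together     */
        double t0 = MPI_Wtime();              /* high-resolution timer        */
        do_md_step();
        t_sum += MPI_Wtime() - t0;
    }

    if (rank == 0) {
        double t_N = t_sum / nsteps;          /* mean time per step on N CPUs */
        double t_1 = 1.23;                    /* single-CPU step time (example value) */
        printf("speedup = t1/tN = %.2f, scaling = speedup/N = %.2f\n",
               t_1 / t_N, t_1 / t_N / nprocs);
    }
    MPI_Finalize();
    return 0;
}
```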


Switch congestion
Default switch settings: switch congestion prevents good scaling.
With switch flow control: no lost messages any more.


Speedup of GROMACS 3.2.1
• default switch settings
• Max. speedup: 2.2 @ 4 CPUs
• scaling (4 CPUs): 0.55

[Plot: speedup (1–8) vs. number of CPUs (1–20), default switch settings]


Speedup of GROMACS 3.2.1
• with switch flow control
• Max. speedup: 2.4 @ 6 CPUs
• scaling (6 CPUs): 0.39

[Plot: speedup (1–8) vs. number of CPUs (1–20), with switch flow control]


Particle Mesh Ewald
Coulomb forces on N particles, charges q_i, positions r_i, box length L, periodic boundary conditions.

• electrostatic potential

  V = \frac{1}{2} \sum_{i,j=1}^{N} \sum_{\mathbf{n} \in \mathbb{Z}^3} \frac{q_i q_j}{|\mathbf{r}_{ij} + \mathbf{n}L|}

  straightforward summation is impracticable!

• Trick 1: split the problem into two parts with the help of

  \frac{1}{r} = \underbrace{\frac{f(r)}{r}}_{\text{short range}} + \underbrace{\frac{1 - f(r)}{r}}_{\text{long range}}

• V = \underbrace{V_\mathrm{dir}}_{\text{direct space}} + \underbrace{V_\mathrm{rec}}_{\text{Fourier space}}, where the smooth V_\mathrm{rec} needs only a few k vectors
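The slide leaves f(r) unspecified; the standard choice in Ewald summation is f(r) = erfc(βr), so that the short-range term decays quickly in real space while the remaining erf(βr)/r term is smooth. A minimal sketch of this split (the value of the splitting parameter beta is chosen only for illustration; compile with -lm):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double beta = 3.0;   /* Ewald splitting parameter, illustrative value (1/nm) */

    printf("%6s %12s %12s %12s\n", "r", "1/r", "short", "long");
    for (double r = 0.1; r <= 2.0; r += 0.1) {
        double v_short = erfc(beta * r) / r;   /* decays fast: handled by cutoff sum  */
        double v_long  = erf(beta * r)  / r;   /* smooth: handled in reciprocal space */
        printf("%6.2f %12.6f %12.6f %12.6f\n", r, 1.0 / r, v_short, v_long);
    }
    return 0;   /* note: v_short + v_long == 1/r for every r */
}
```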



Particle Mesh Ewald
V_\mathrm{rec} needs the Fourier transform of the charge density.

• Trick 2: discretize the charges → use a discrete FT
• each charge is spread on 64 = 4×4×4 neighbouring grid points, grid spacing 0.12 nm
• the mesh-based charge density approximates the sum of point charges at the atom positions
• Aquaporin-1: 80 000 atoms, grid size 90×88×80 = 633 600 points
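The 4×4×4 = 64 points per charge correspond to 4th-order (cubic) B-spline interpolation as used in smooth PME. A minimal one-dimensional sketch of the spreading weights follows; the grid size, atom position, and index convention are illustrative, and the real code applies the weights along x, y and z and multiplies them:

```c
#include <math.h>
#include <stdio.h>

#define NGRID   32           /* illustrative 1-D grid size          */
#define SPACING 0.12         /* grid spacing in nm, as on the slide */

/* Cubic (order-4) B-spline weights for fractional offset t in [0,1); they sum to 1. */
static void bspline4_weights(double t, double w[4])
{
    w[0] = (1.0 - t) * (1.0 - t) * (1.0 - t) / 6.0;
    w[1] = (3.0 * t * t * t - 6.0 * t * t + 4.0) / 6.0;
    w[2] = (-3.0 * t * t * t + 3.0 * t * t + 3.0 * t + 1.0) / 6.0;
    w[3] = t * t * t / 6.0;
}

int main(void)
{
    double grid[NGRID] = { 0.0 };
    double x = 1.234, q = -0.8;          /* made-up atom position (nm) and charge */

    double u  = x / SPACING;             /* position in grid units                */
    int    i0 = (int)floor(u) - 1;       /* leftmost of the 4 affected points     */
    double w[4];
    bspline4_weights(u - floor(u), w);

    for (int k = 0; k < 4; k++)
        grid[(i0 + k + NGRID) % NGRID] += q * w[k];   /* periodic wrap-around */

    for (int k = 0; k < 4; k++)
        printf("grid[%d] += %f\n", (i0 + k + NGRID) % NGRID, q * w[k]);
    return 0;
}
```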


Evaluation of Coulomb forces
1. Short-range part: F_{i,\mathrm{dir}}
2. Long-range part (PME):
   • put the charges on the grid
   • FFT of the charge grid, convolution, FFT back: yields the potential V_\mathrm{rec} at the grid points
   • interpolate from the grid to obtain the forces at the atom positions:

     F_{i,\mathrm{rec}} = -\frac{\partial V}{\partial \mathbf{r}_i}
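As a rough sketch of the "FFT, convolution, FFT back" step (not the GROMACS implementation): with FFTW, the real charge grid is transformed, every Fourier mode is multiplied by the Ewald influence function exp(-k²/4β²)/k², and the result is transformed back to give the reciprocal-space potential on the grid. A cubic box of length L is assumed, and the B-spline correction factors of smooth PME as well as constant prefactors are omitted:

```c
#include <math.h>
#include <fftw3.h>

/* Reciprocal-space convolution on an nx*ny*nz grid; grid holds charges on entry,
   V_rec (up to constant prefactors) on exit. */
void pme_recip_sketch(int nx, int ny, int nz, double L, double beta, double *grid)
{
    int nzc = nz / 2 + 1;                      /* r2c output size along z */
    fftw_complex *ghat = fftw_malloc(sizeof(fftw_complex) * nx * ny * nzc);
    fftw_plan fwd = fftw_plan_dft_r2c_3d(nx, ny, nz, grid, ghat, FFTW_ESTIMATE);
    fftw_plan bwd = fftw_plan_dft_c2r_3d(nx, ny, nz, ghat, grid, FFTW_ESTIMATE);

    fftw_execute(fwd);                         /* FFT of the charge grid */

    for (int ix = 0; ix < nx; ix++)
      for (int iy = 0; iy < ny; iy++)
        for (int iz = 0; iz < nzc; iz++) {
            /* wave-vector components; indices above n/2 map to negative k */
            double kx = 2.0 * M_PI / L * (ix <= nx / 2 ? ix : ix - nx);
            double ky = 2.0 * M_PI / L * (iy <= ny / 2 ? iy : iy - ny);
            double kz = 2.0 * M_PI / L * iz;
            double k2 = kx * kx + ky * ky + kz * kz;
            long   idx = ((long)ix * ny + iy) * nzc + iz;

            double fac = (k2 > 0.0)
                       ? exp(-k2 / (4.0 * beta * beta)) / k2 / (nx * (double)ny * nz)
                       : 0.0;                  /* k = 0 term dropped; 1/(nx*ny*nz)
                                                  undoes FFTW's unnormalized transforms */
            ghat[idx][0] *= fac;
            ghat[idx][1] *= fac;
        }

    fftw_execute(bwd);                         /* FFT back: grid now holds V_rec */
    fftw_destroy_plan(fwd);
    fftw_destroy_plan(bwd);
    fftw_free(ghat);
}
```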


GMX 3.2.1 parallel PME
Put charges on grid
Sum charge grid: MPI_Reduce
FFT
convolution
FFT back
Broadcast potential grid: MPI_Bcast
Interpolate forces
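A minimal sketch of just this communication pattern, using the Aquaporin-1 grid dimensions from the earlier slide (the compute steps are placeholders, and for brevity the transform is simply left to the master rank here, whereas the slides describe a parallel FFT):

```c
#include <mpi.h>
#include <stdlib.h>

#define NX 90
#define NY 88
#define NZ 80   /* Aquaporin-1 grid from the previous slide */

int main(int argc, char **argv)
{
    int rank;
    int n = NX * NY * NZ;
    double *local_grid = calloc(n, sizeof(double));  /* charges spread by this rank  */
    double *full_grid  = calloc(n, sizeof(double));  /* summed grid / potential grid */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* ... each rank spreads its own charges into local_grid ... */

    /* Sum the per-rank charge grids onto the master node. */
    MPI_Reduce(local_grid, full_grid, n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* ... FFT, convolution, FFT back on full_grid (see the sketch above) ... */
    }

    /* Send the resulting potential grid back to every node. */
    MPI_Bcast(full_grid, n, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* ... each rank interpolates forces for its own atoms from full_grid ... */

    free(local_grid);
    free(full_grid);
    MPI_Finalize();
    return 0;
}
```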


PME time step
GROMACS 3.2.1 time step on 4 CPUs (Orca1):
short-range Coulomb → spread charges → sum charge grid → FFT, convolution, FFT back → broadcast potential grid → interpolate forces

Sum charge grid: N × MPI_Reduce; broadcast potential grid: N × MPI_Bcast


Domain decomposition ↔ atom decomposition


PME time step with DD
GROMACS 3.2.1: short-range Coulomb → spread charges → sum charge grid → FFT, convolution, FFT back → broadcast potential grid → interpolate forces
With domain decomposition (DD): atom redistribution → short-range Coulomb → spread charges → exchange charge grid boundaries → FFT, convolution, FFT back → exchange potential grid boundaries → interpolate forces → force redistribution
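With domain decomposition, only the overlapping boundary planes of the charge grid need to be communicated to a neighbouring node instead of reducing and broadcasting the whole grid. A minimal sketch of such a boundary exchange between slab neighbours along x (the slab widths, the 4-plane overlap matching the spreading order, and the one-directional exchange are illustrative assumptions, not the GROMACS implementation; the exchange in the opposite direction is analogous):

```c
#include <mpi.h>
#include <stdlib.h>

#define NY 88
#define NZ 80
#define OVERLAP 4   /* spreading order 4: charges can leak 4 planes into the next slab */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int nx_local = 90 / nprocs + 2 * OVERLAP;     /* local slab plus halo planes (illustrative) */
    int plane = NY * NZ;
    double *slab = calloc((size_t)nx_local * plane, sizeof(double));
    double *recv = calloc((size_t)OVERLAP * plane, sizeof(double));

    int right = (rank + 1) % nprocs;              /* periodic neighbours in x */
    int left  = (rank - 1 + nprocs) % nprocs;

    /* Send the charge that leaked past our right boundary to the right neighbour,
       receive what leaked into our slab from the left neighbour, and add it in. */
    MPI_Sendrecv(slab + (size_t)(nx_local - OVERLAP) * plane, OVERLAP * plane,
                 MPI_DOUBLE, right, 0,
                 recv, OVERLAP * plane, MPI_DOUBLE, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    for (long i = 0; i < (long)OVERLAP * plane; i++)
        slab[(size_t)OVERLAP * plane + i] += recv[i];   /* accumulate into our first interior planes */

    free(slab);
    free(recv);
    MPI_Finalize();
    return 0;
}
```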


Domain decomposition speedups
• broken lines: standard switch settings
• solid lines: with switch flow control
• Max. speedup: 8.0 @ 18 CPUs (was: 2.4 @ 6 CPUs)
• scaling (4 CPUs): 0.74 (was: 0.54)

[Plot: speedup (1–8) vs. number of CPUs (1–20)]



PME/PP splitting
Separate a part of the CPUs to do PME only. Speed gains are expected in the FFT part (a sketch of one way to realize the split follows below):

1. because the parallel FFT is expensive on a high number of CPUs (all-to-all communication, latency)

2. because the x-size of the FFT grid must be a multiple of the number of CPUs:
   4 000 atoms → 30x30x21 = 18900 FFT grid points on 2 CPUs, but 40x40x21 = 33600 FFT grid points on 20 CPUs
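One way to realize such a node split with MPI (an illustration of the idea, not the actual GROMACS code) is to divide MPI_COMM_WORLD into a PP communicator and a PME communicator with MPI_Comm_split; how many CPUs to dedicate to PME is a tuning choice, and the one quarter used here is only an example:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Illustrative choice: dedicate one quarter of the CPUs to PME only. */
    int npme   = nprocs / 4 > 0 ? nprocs / 4 : 1;
    int is_pme = (rank >= nprocs - npme);   /* last npme ranks do reciprocal-space PME */

    /* Split into a PP communicator and a PME communicator. */
    MPI_Comm subcomm;
    MPI_Comm_split(MPI_COMM_WORLD, is_pme, rank, &subcomm);

    int subrank, subsize;
    MPI_Comm_rank(subcomm, &subrank);
    MPI_Comm_size(subcomm, &subsize);
    printf("world rank %d -> %s rank %d of %d\n",
           rank, is_pme ? "PME" : "PP", subrank, subsize);

    /* PP nodes would now send positions/charges to their PME partner and receive
       reciprocal-space forces back each step (point-to-point over MPI_COMM_WORLD);
       the PME nodes run the grid part on subcomm only. */

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}
```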


PME/PP splitting scheme


Time step AP-1 / 80k atoms
Domain decomposition in PME part (speedup = 2.9 @ 4 CPUs, scaling 0.74):
DD + node splitting (speedup = 2.9 @ 4 CPUs, scaling 0.74):


PME/PP splitting speedups
• Max. speedup: > 9.6 @ 24 CPUs
• scaling (4 CPUs): 0.74

[Plot: speedup (1–8) vs. number of CPUs (1–20)]


Time step Guanylin / 4k atoms
Domain decomposition in PME part (speedup = 2.8 @ 6 CPUs, scaling 0.47):
DD + node splitting (speedup = 3.6 @ 6 CPUs, scaling 0.60):


Summary
1. DD only: scaling @ 4 CPUs & 80 000 atoms improves from 0.54 to 0.74!
2. DD + node splitting: a speedup of 9.6+ becomes possible (was: 2.4)
3. Node splitting is needed for small systems and/or many CPUs

Outlook:
Optimize the splitting code for speed using non-blocking communications (a sketch of the pattern follows below).
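Non-blocking point-to-point calls let a PP node overlap the transfer of coordinates and charges to its PME partner with its own short-range work. A minimal sketch of that pattern (the even/odd pairing, buffer layout, and tags are placeholders; run with an even number of ranks):

```c
#include <mpi.h>
#include <stdlib.h>

#define NATOMS 4000                 /* e.g. the Guanylin system */

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Placeholder pairing: even ranks act as PP nodes, odd ranks as their PME partner. */
    int is_pp   = (rank % 2 == 0);
    int partner = is_pp ? rank + 1 : rank - 1;

    double *xq    = calloc(4 * NATOMS, sizeof(double));   /* positions + charges */
    double *f_rec = calloc(3 * NATOMS, sizeof(double));   /* reciprocal forces   */

    if (is_pp) {
        MPI_Request req[2];
        /* Start the transfers without waiting for them ...                 */
        MPI_Isend(xq,    4 * NATOMS, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, &req[0]);
        MPI_Irecv(f_rec, 3 * NATOMS, MPI_DOUBLE, partner, 1, MPI_COMM_WORLD, &req[1]);

        /* ... and overlap them with local short-range work.                */
        /* do_short_range_forces(); */

        /* Block only when the reciprocal-space forces are actually needed. */
        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
    } else {
        MPI_Recv(xq, 4 * NATOMS, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* ... reciprocal-space PME on the received coordinates ...         */
        MPI_Send(f_rec, 3 * NATOMS, MPI_DOUBLE, partner, 1, MPI_COMM_WORLD);
    }

    free(xq);
    free(f_rec);
    MPI_Finalize();
    return 0;
}
```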