PETSc and Neuronal Networks
Toby Isaac
VIGRE Seminar, Wednesday, November 15, 2006

Tips for general ODEs

Recall: have an input program to convert to PETSc binary format

e.g.: Vec for initial values, Mat for linear ODE, adjacency/connectivity Mat

PetscBinaryView for arrays of scalars (see exampleinput.c)
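A minimal sketch of such an input program, written against a recent PETSc for concreteness (the slide's PetscBinaryView and exampleinput.c are the 2006-era equivalents; file name, sizes, and values here are made up):

    /* inputsketch.c: write a Vec, a Mat, and a raw scalar array into one
       PETSc binary file.  Sizes, values, and the file name are hypothetical. */
    #include <petscmat.h>

    int main(int argc, char **argv)
    {
      Vec         u0;                             /* initial values           */
      Mat         A;                              /* connectivity matrix      */
      PetscScalar inj[4] = {0.0, 5.0, 0.0, 5.0};  /* scalar array, e.g. current */
      PetscViewer viewer;
      PetscInt    i, n = 8;

      PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
      PetscCall(VecCreate(PETSC_COMM_WORLD, &u0));
      PetscCall(VecSetSizes(u0, PETSC_DECIDE, n));
      PetscCall(VecSetFromOptions(u0));
      PetscCall(VecSet(u0, -65.0));               /* e.g. resting potential   */

      PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
      PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
      PetscCall(MatSetFromOptions(A));
      PetscCall(MatSetUp(A));
      for (i = 0; i < n; i++) PetscCall(MatSetValue(A, i, (i+1)%n, 1.0, INSERT_VALUES));
      PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

      PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "net.dat", FILE_MODE_WRITE, &viewer));
      PetscCall(VecView(u0, viewer));             /* PETSc objects first ...  */
      PetscCall(MatView(A, viewer));
      PetscCall(PetscViewerBinaryWrite(viewer, inj, 4, PETSC_SCALAR)); /* ... then the raw array */
      PetscCall(PetscViewerDestroy(&viewer));
      PetscCall(VecDestroy(&u0));
      PetscCall(MatDestroy(&A));
      PetscCall(PetscFinalize());
      return 0;
    }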

To keep scalar parameters organized (end time, dt, # cells, etc.) use a PetscBag:

Allows you to save a struct in binary and read it in on all processors

No need to keep track of order in which scalars are written/read
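A sketch of the PetscBag pattern, using current function names; the struct fields and defaults are hypothetical:

    #include <petscbag.h>

    typedef struct {
      PetscReal tfinal, dt;
      PetscInt  ncells;
    } Params;

    /* Register each field once; the bag can then be viewed to, or loaded
       from, a binary viewer as a single object, on every process. */
    static PetscErrorCode CreateParams(PetscBag *bag, Params **p)
    {
      PetscFunctionBeginUser;
      PetscCall(PetscBagCreate(PETSC_COMM_WORLD, sizeof(Params), bag));
      PetscCall(PetscBagGetData(*bag, (void**)p));
      PetscCall(PetscBagRegisterReal(*bag, &(*p)->tfinal, 100.0, "tfinal", "end time"));
      PetscCall(PetscBagRegisterReal(*bag, &(*p)->dt,     0.05,  "dt",     "time step"));
      PetscCall(PetscBagRegisterInt (*bag, &(*p)->ncells, 1000,  "ncells", "number of cells"));
      /* save: PetscBagView(bag, viewer);  restore: PetscBagLoad(viewer, bag) */
      PetscFunctionReturn(0);
    }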

Recall: with the “Load” functions, the parallel layout is specified at read-in…

Except for arrays: these go only to the first processor

Use MPI_Bcast to send those arrays to all processors

e.g.: a piecewise-constant injected current
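A sketch of that broadcast step, assuming a binary viewer already opened for reading and a hypothetical four-entry current table (PetscViewerBinaryGetDescriptor/PetscBinaryRead are the current names for the raw-array read):

    /* Raw arrays in the binary file land on rank 0 only; hand them out. */
    PetscScalar inj[4];                        /* piecewise-constant current table */
    PetscMPIInt rank;
    int         fd;

    MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
    PetscViewerBinaryGetDescriptor(viewer, &fd);
    if (rank == 0) PetscBinaryRead(fd, inj, 4, NULL, PETSC_SCALAR);
    MPI_Bcast(inj, 4, MPIU_SCALAR, 0, PETSC_COMM_WORLD);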

A TS object keeps track of the settings for time-stepping

Same old song: TSCreate and TSDestroy

TSSetType: forward Euler, backward Euler, “ode45” (adaptive Runge–Kutta), pseudo-timestepping

TSSetProblemType: linear, nonlinear
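A sketch of that setup using current type names (TSEULER, TSBEULER, TSRK, and TSPSEUDO are today's spellings of the options listed above):

    TS ts;

    TSCreate(PETSC_COMM_WORLD, &ts);
    TSSetProblemType(ts, TS_NONLINEAR);  /* or TS_LINEAR                      */
    TSSetType(ts, TSRK);                 /* the "ode45"-like Runge-Kutta type */
    TSSetFromOptions(ts);                /* allow -ts_type euler, etc.        */
    /* ... use it ... then TSDestroy(&ts) at the end */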

TSSetSolution: set initial conditions

TSSetRHSFunction/TSSetRHSMatrix: the specified function has the format rhsfunc(ts, t, u, du, void *additional_arguments)

Create a struct for passing additional arguments
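A sketch of the callback and its argument struct; in current PETSc the attach call is TSSetRHSFunction(ts, NULL, RHSFunction, &user), and the struct field here (a leak rate) is made up for illustration:

    typedef struct {
      PetscReal gleak;   /* example parameter: leak rate */
    } AppCtx;

    /* du/dt = f(t, u): the last Vec argument receives the derivative */
    static PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec u, Vec du, void *ctx)
    {
      AppCtx *user = (AppCtx*)ctx;

      PetscFunctionBeginUser;
      PetscCall(VecCopy(u, du));
      PetscCall(VecScale(du, -user->gleak));  /* pure leak, for illustration */
      PetscFunctionReturn(0);
    }

    /* attach it:  AppCtx user = {0.1};
                   TSSetRHSFunction(ts, NULL, RHSFunction, &user); */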

TSSetRHSJacobian, if method calls for it

TSSetInitialTimeStep (that is, initial time and initial time step)

TSSetDuration

TSRKSetTolerance: control the absolute error over the whole time of integration: a bit sketchy
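In current PETSc these three calls are spelled differently; a sketch of the equivalents (values hypothetical):

    TSSetTime(ts, 0.0);        /* TSSetInitialTimeStep: initial time ...  */
    TSSetTimeStep(ts, 0.05);   /* ... and initial step size               */
    TSSetMaxTime(ts, 500.0);   /* TSSetDuration: end time ...             */
    TSSetMaxSteps(ts, 100000); /* ... and maximum number of steps         */
    /* TSRKSetTolerance is gone; per-step error control is now
       TSSetTolerances(ts, abstol, NULL, reltol, NULL) */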

If only interested in final state, run TSStep to execute

If interested in progress along the way, you need a monitor function:

Runs after every time step, can output, plot, change parameters, change time-step etc.

Multiple monitor functions can run: e.g. one for parameter changes, one for output

Attention integrate-and-fire (IAF) modelers: you can change the state vector too!

Syntax: TSSetMonitor, monitor(ts, iter#, t, u, void *args)
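A monitor sketch; TSMonitorSet is the current name for TSSetMonitor, with the callback signature as on the slide:

    /* Runs after every accepted step; may write output or edit u in place. */
    static PetscErrorCode Monitor(TS ts, PetscInt step, PetscReal t, Vec u, void *ctx)
    {
      PetscFunctionBeginUser;
      PetscCall(PetscPrintf(PETSC_COMM_WORLD, "step %" PetscInt_FMT ", t = %g\n",
                            step, (double)t));
      /* IAF-style reset: VecGetArray(u, ...), threshold test, reset, restore */
      PetscFunctionReturn(0);
    }

    /* attach (repeat the call to add further monitors):
       TSMonitorSet(ts, Monitor, &user, NULL); */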

Tips for Homogeneous Nets

Most dependency occurs within a cell: bad to have one cell divided across processors

No guarantee that PETSC_DECIDE won’t split your vector this way

Have a vector y of length = # cells

PETSc evenly distributes this vector

nlocal = VecGetLocalSize(y)

VecCreateMPI(…, neqns*nlocalcells, PETSC_DETERMINE, &x);

VecSetBlockSize: set this to the number of equations per cell
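Put together, a sketch of the layout trick (ncells and neqns are hypothetical):

    Vec      y, x;
    PetscInt ncells = 1000, neqns = 4, nlocalcells;

    /* let PETSc distribute a length-ncells vector however it likes ...   */
    VecCreate(PETSC_COMM_WORLD, &y);
    VecSetSizes(y, PETSC_DECIDE, ncells);
    VecSetFromOptions(y);
    VecGetLocalSize(y, &nlocalcells);

    /* ... then give the state vector neqns entries per local cell, so a
       cell's equations always live on one process                        */
    VecCreateMPI(PETSC_COMM_WORLD, neqns*nlocalcells, PETSC_DETERMINE, &x);
    VecSetBlockSize(x, neqns);   /* one block per cell */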

VecStrideGather: send value from same index for each block to another vector

VecStrideScatter: send values from a vector to the same index for each block

Paradigm for ease/simplicity: gather like indices, make changes, scatter back

VecStrideGatherAll/VecStrideScatterAll: take the state vector, break it up into an array of vectors, one for each equivalent index

In RHSFunction: Vec U and Vec DU are inputs

Declare arrays Vec u[neqns], du[neqns]

VecStrideGatherAll at the start

Set du[i] in terms of u[] for each i

VecStrideScatterAll at the end
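A sketch of that pattern for a hypothetical four-state cell, with toy dynamics; the work vectors are created on the fly here, though in practice they could be made once and kept in the context struct:

    #define NEQNS 4

    static PetscErrorCode RHSFunctionStrided(TS ts, PetscReal t, Vec U, Vec DU, void *ctx)
    {
      Vec      u[NEQNS], du[NEQNS];
      PetscInt i, nloc;

      PetscFunctionBeginUser;
      PetscCall(VecGetLocalSize(U, &nloc));  /* divisible by NEQNS by construction */
      for (i = 0; i < NEQNS; i++) {          /* per-state vectors, length = #cells */
        PetscCall(VecCreateMPI(PetscObjectComm((PetscObject)U), nloc/NEQNS,
                               PETSC_DETERMINE, &u[i]));
        PetscCall(VecDuplicate(u[i], &du[i]));
      }
      PetscCall(VecStrideGatherAll(U, u, INSERT_VALUES));
      /* toy dynamics: state 0 relaxes toward state 1, the rest decay */
      PetscCall(VecWAXPY(du[0], -1.0, u[0], u[1]));   /* du[0] = u[1] - u[0] */
      for (i = 1; i < NEQNS; i++) {
        PetscCall(VecCopy(u[i], du[i]));
        PetscCall(VecScale(du[i], -1.0));
      }
      PetscCall(VecStrideScatterAll(du, DU, INSERT_VALUES));
      for (i = 0; i < NEQNS; i++) {
        PetscCall(VecDestroy(&u[i]));
        PetscCall(VecDestroy(&du[i]));
      }
      PetscFunctionReturn(0);
    }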

For very large networks, large number of processors: message passing will take its toll

Order the cells so that connections occur between nearby indices

MatGetOrdering, MATORDERING_RCM, MatPermute
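A sketch of the reordering (MATORDERINGRCM is the current spelling of MATORDERING_RCM); the connectivity matrix A is assumed already assembled:

    Mat A, Aperm;   /* A: assembled connectivity matrix */
    IS  rowperm, colperm;

    MatGetOrdering(A, MATORDERINGRCM, &rowperm, &colperm);  /* reverse Cuthill-McKee */
    MatPermute(A, rowperm, colperm, &Aperm);
    /* remember to permute the initial-value Vec the same way (VecPermute) */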

Tips for Inhomogeneous Nets

VecSetBlockSize no longer an option

Can be reproduced with more generic VecGather/VecScatter

Requires the creation of arrays of VecScatter objects: one for each state

A VecScatter is created from two IS index objects: one for the FROM Vec and one for the TO Vec
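A sketch with hypothetical index lists, assuming a global state vector x and a per-state work vector w from the surrounding context: pull one state's entries out, update them, and push them back:

    IS         isfrom, isto;
    VecScatter sct;
    PetscInt   idxfrom[3] = {0, 5, 9};   /* where this state sits in x (made up) */
    PetscInt   idxto[3]   = {0, 1, 2};   /* where it lands in w                  */

    ISCreateGeneral(PETSC_COMM_WORLD, 3, idxfrom, PETSC_COPY_VALUES, &isfrom);
    ISCreateGeneral(PETSC_COMM_WORLD, 3, idxto,   PETSC_COPY_VALUES, &isto);
    VecScatterCreate(x, isfrom, w, isto, &sct);  /* FROM Vec+IS, TO Vec+IS */
    VecScatterBegin(sct, x, w, INSERT_VALUES, SCATTER_FORWARD);
    VecScatterEnd(sct, x, w, INSERT_VALUES, SCATTER_FORWARD);
    /* ... update w ...; SCATTER_REVERSE (w -> x) sends the values back */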

Different types: gathered on separate processors or mixed?

Gathered is easiest to implement: specify which processor, treat like the homogeneous case

Mixed is faster: balance processor load, potentially less message passing

If there are ODEs for each connection, then the need for mixed distribution is greater

Ideally (if disparity in # eqns isn’t great):

Lump all cells together, RCM permute, equal # cells per processor

Tips for Adaptive Time Step

PETSc’s RK45 is weaker than MATLAB’s ode45

It will not integrate across discontinuities

For discontinuities you know of (e.g. a time-dependent forcing function): loop: TSSetDuration to the discontinuity, then TSStep
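A sketch of that loop in current PETSc, where TSSetMaxTime/TSSolve play the role of the slide's TSSetDuration/TSStep and the break times are hypothetical:

    PetscReal tbreak[3] = {10.0, 20.0, 50.0};  /* known discontinuity times */
    PetscInt  k;

    TSSetExactFinalTime(ts, TS_EXACTFINALTIME_MATCHSTEP);  /* land exactly on each break */
    for (k = 0; k < 3; k++) {
      TSSetMaxTime(ts, tbreak[k]);
      TSSolve(ts, u);       /* integrate up to the discontinuity */
      /* switch the forcing term here; the next pass resumes from tbreak[k] */
    }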