
Multiscale analysis of hybrid processes and reduction of stochastic neuron models.

Gilles Wainrib, joint work with:

Khashayar Pakdaman and Michele Thieullen

Institut J. Monod, CNRS, Univ. Paris 6, Paris 7 · Labo. Proba. et Modeles Aleatoires, Univ. Paris 6, Paris 7, CNRS · CREA, Ecole Polytechnique

January, 2010

Part I : Introduction

Deterministic neuron model

Hodgkin Huxley (HH) model (Hodgkin Huxley - J.Physiol. 1952):

Cm dV/dt = I − gL(V − VL) − gNa m³h(V − VNa) − gK n⁴(V − VK)

dm/dt = τm(V)⁻¹ (m∞(V) − m)

dh/dt = τh(V)⁻¹ (h∞(V) − h)

dn/dt = τn(V)⁻¹ (n∞(V) − n)

→ Conductance-based neuron model
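For concreteness, the HH system above can be integrated numerically. A minimal forward-Euler sketch in Python, using standard textbook rate functions and parameter values (these particular numbers are common 1952-style values, an assumption, not taken from the slides):

```python
import math

# Standard textbook HH parameters (assumed values, not from the slides)
Cm, I = 1.0, 0.0                      # uF/cm^2, uA/cm^2
gNa, gK, gL = 120.0, 36.0, 0.3        # mS/cm^2
VNa, VK, VL = 50.0, -77.0, -54.4      # mV

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate(V0=-65.0, T=50.0, dt=0.01):
    """Forward-Euler integration of the deterministic HH system."""
    V = V0
    # start each gate at its steady state x_inf(V0) = alpha / (alpha + beta)
    m = alpha_m(V) / (alpha_m(V) + beta_m(V))
    h = alpha_h(V) / (alpha_h(V) + beta_h(V))
    n = alpha_n(V) / (alpha_n(V) + beta_n(V))
    for _ in range(int(T / dt)):
        dV = (I - gL*(V - VL) - gNa*m**3*h*(V - VNa) - gK*n**4*(V - VK)) / Cm
        dm = alpha_m(V)*(1 - m) - beta_m(V)*m   # same as tau^-1 (m_inf - m)
        dh = alpha_h(V)*(1 - h) - beta_h(V)*h
        dn = alpha_n(V)*(1 - n) - beta_n(V)*n
        V, m, h, n = V + dt*dV, m + dt*dm, h + dt*dh, n + dt*dn
    return V, m, h, n

V_end, m_end, h_end, n_end = simulate()
```

With I = 0 the trajectory settles at the resting potential near −65 mV; sufficiently large I produces the familiar periodic spiking.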

Time-scale separation and reduction

Sodium activation dynamics are faster than the other variables: τm → 0, hence

m = m∞(V)

Three-dimensional reduced system:

Cm dV/dt = I − gL(V − VL) − gNa m∞(V)³h(V − VNa) − gK n⁴(V − VK)

dh/dt = τh(V)⁻¹ (h∞(V) − h)

dn/dt = τn(V)⁻¹ (n∞(V) − n)

Reduction of neuron models: a key step in theoretical (singular perturbations) and numerical analysis. Rinzel 1985, Kepler et al. 1992, Meunier 1992, Suckley et al. 2003, Rubin et al. 2007, ...


Modelling neurons with stochastic ion channels

Single ion channels stochasticity:

• Macromolecular devices: open and close through voltage-induced conformational changes

Potassium channel

• Stochasticity due to thermal noise

Channel noise: finite-size effects responsible for intrinsic variability and noise-induced phenomena (spontaneous activity, signal detection enhancement, ...)


Modelling neurons with stochastic ion channels

Deterministic model X = (V, u)

dV/dt = F(V, u)

du/dt = (1 − u)α(V) − uβ(V) = τu(V)⁻¹ (u∞(V) − u)

Modelling neurons with stochastic ion channels

Stochastic model XN = (VN , uN )

• Single ion channel i ∈ {1, ..., N} with voltage-dependent transition rates: independent jump Markov process ci(t)

• Proportion of open ion channels (empirical measure):

uN(t) = (1/N) Σ_{i=1..N} ci(t)

• Between the jumps, voltage dynamics:

dVN/dt = F(VN, uN)
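A minimal simulation sketch of this hybrid model. The vector field F and the opening/closing rates α, β below are hypothetical illustrative choices; each channel is flipped with probability rate·dt per step, a small-dt approximation of the jump process, while V follows an Euler step of the deterministic flow:

```python
import math, random

random.seed(0)

# Hypothetical illustrative choices (not from the slides): a leaky voltage
# field F and sigmoidal two-state channel rates alpha (closed -> open),
# beta (open -> closed).
def F(V, u):   return -(V + 65.0) + 30.0 * u
def alpha(V):  return 0.5 / (1.0 + math.exp(-(V + 40.0) / 5.0))
def beta(V):   return 0.5 / (1.0 + math.exp(+(V + 40.0) / 5.0))
def u_inf(V):  return alpha(V) / (alpha(V) + beta(V))

def simulate(N=200, T=100.0, dt=0.05, V0=-55.0):
    """Fixed-step hybrid scheme: each channel flips with probability
    rate*dt (small-dt approximation of the jump process), and V follows
    an Euler step of dV/dt = F(V, u_N) between flips."""
    channels = [0] * N            # 0 = closed, 1 = open
    V = V0
    for _ in range(int(T / dt)):
        a, b = alpha(V) * dt, beta(V) * dt
        for i in range(N):
            r = random.random()
            if channels[i] == 0 and r < a:
                channels[i] = 1
            elif channels[i] == 1 and r < b:
                channels[i] = 0
        u = sum(channels) / N     # empirical measure u_N(t)
        V += dt * F(V, u)
    return V, u

V_end, u_end = simulate()
```

Between jumps V follows the deterministic flow; the empirical measure u_N fluctuates around u∞(V) with amplitude of order 1/√N.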


Modelling neurons with stochastic ion channels

• Modelling framework: Neuron ⇐⇒ population of globally coupled independent ion channels

• Mathematical framework: Piecewise-deterministic Markov process (Davis, 1984) at the fluid limit (Kurtz, 1971)


Limit Theorems : Law of large numbers

Theorem. When N → ∞, XN converges to X in probability over finite time intervals [0, T].

For ∆ > 0, define

PN(T, ∆) := P[ sup_{t∈[0,T]} |XN(t) − X(t)|² > ∆ ]

Then

lim_{N→∞} PN(T, ∆) = 0

More precisely, there exist constants B, C > 0 such that:

lim sup_{N→∞} (1/N) log PN(T, ∆) ≤ − ∆ e^{−BT²} / (CT)

Pakdaman, Thieullen, W., "Fluid limit theorems for stochastic hybrid systems with application to neuron models" (2009), arXiv:1001.2474
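The convergence can be observed numerically on the simplest case: a population of two-state channels with voltage-frozen (constant) rates, compared against the same Euler discretization of the limit ODE so that only the stochastic fluctuation is measured. The rates below are illustrative assumptions:

```python
import random

random.seed(1)

ALPHA, BETA = 0.6, 0.4     # illustrative constant (voltage-frozen) rates

def sup_error(N, T=6.0, dt=0.02):
    """sup_t |u_N(t) - u(t)| for one realization, where u solves the
    limit ODE du/dt = (1-u)*ALPHA - u*BETA (same Euler discretization,
    so only the stochastic fluctuation is measured)."""
    k = 0                  # number of open channels, u_N = k / N
    u = 0.0                # deterministic limit
    worst = 0.0
    for _ in range(int(T / dt)):
        opened = sum(random.random() < ALPHA * dt for _ in range(N - k))
        closed = sum(random.random() < BETA * dt for _ in range(k))
        k += opened - closed
        u += dt * ((1.0 - u) * ALPHA - u * BETA)
        worst = max(worst, abs(k / N - u))
    return worst

def mean_sup_error(N, runs=10):
    return sum(sup_error(N) for _ in range(runs)) / runs

err_small = mean_sup_error(25)    # small population
err_large = mean_sup_error(400)   # larger population
```

Since the deviations are of order 1/√N, multiplying N by 16 should shrink the sup-error by roughly a factor of 4.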


Limit Theorems : Central limit

Theorem. Let

RN(t) := √N ( XN(t) − ∫₀ᵗ F(XN(s)) ds )

When N → ∞, RN converges in law to a diffusion process

R(t) = ∫₀ᵗ Σ(X(s)) dWs

Langevin approximation XN = (VN(t), uN(t)):

dVN(t) = F(VN(t), uN(t)) dt

duN(t) = b(VN(t), uN(t)) dt + (1/√N) Σ(VN(t), uN(t)) dWt

Further developments: strong approximation (pathwise CLT), Markov vs. Langevin, large deviations
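An Euler–Maruyama sketch of the Langevin approximation for a single two-state gating variable with voltage-frozen rates, where b(u) = α(1−u) − βu and the natural diffusion coefficient is Σ(u) = √(α(1−u) + βu) (the standard two-state specialization; all numerical values are assumptions):

```python
import math, random

random.seed(2)

ALPHA, BETA, N = 0.6, 0.4, 100      # illustrative voltage-frozen rates
U_INF = ALPHA / (ALPHA + BETA)      # deterministic fixed point

def drift(u): return ALPHA * (1.0 - u) - BETA * u
def sigma(u): return math.sqrt(max(ALPHA * (1.0 - u) + BETA * u, 0.0))

def euler_maruyama(T=200.0, dt=0.01):
    """Simulate du = b(u) dt + (1/sqrt(N)) Sigma(u) dW and return the
    time-average and empirical variance of u over the trajectory."""
    u = U_INF
    s1 = s2 = 0.0
    steps = int(T / dt)
    for _ in range(steps):
        dW = random.gauss(0.0, math.sqrt(dt))
        u += drift(u) * dt + sigma(u) / math.sqrt(N) * dW
        u = min(max(u, 0.0), 1.0)   # keep the proportion in [0, 1]
        s1 += u
        s2 += u * u
    mean = s1 / steps
    var = s2 / steps - mean * mean
    return mean, var

mean_u, var_u = euler_maruyama()
```

The stationary variance of u comes out close to u∞(1 − u∞)/N, which matches the binomial variance of the underlying jump model.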


Stochastic reduction ?

Part II : Mathematical analysis

Singular perturbations for jump Markov processes

Figure: Multiscale four-state model. Horizontal transitions are fast, whereas vertical transitions areslow.


Singular perturbations for jump Markov processes : general setting

Yin, Zhang, "Continuous-time Markov Chains and Applications: A Singular Perturbation Approach", 1998

Assumption. There exist n subsets of fast transitions:

E = E1 ∪ E2 ∪ ... ∪ En

• if i, j ∈ Ek, then αi,j is of order O(ε⁻¹),

• otherwise, if i ∈ Ek and j ∈ El with k ≠ l, then αi,j is of order O(1).

Singular perturbations for jump Markov processes : general setting

Constructing a reduced process:

• quasi-stationary distributions (ρᵢᵏ)_{i∈Ek} within fast subsets Ek, for k ∈ {1, ..., n}.

• aggregated process (X̄) on the state space Ē = {1, ..., n} with transition rates:

ᾱ_{k,l} = Σ_{i∈Ek} Σ_{j∈El} ρᵢᵏ αi,j, for k, l ∈ Ē
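For the four-state model pictured above (fast horizontal pairs E1 = {0, 1} and E2 = {2, 3}; slow vertical transitions), the construction can be checked numerically: the stationary law of the full chain for small ε should factor approximately as π_k · ρᵢᵏ, with π the stationary law of the aggregated two-state chain. All rate values below are illustrative assumptions:

```python
EPS = 0.01
# Illustrative rates (assumed): fast horizontal flips inside E1 = {0, 1}
# and E2 = {2, 3}; slow vertical transitions between the two subsets.
f1, g1 = 2.0, 1.0          # 0 <-> 1, scaled by 1/eps
f2, g2 = 1.0, 3.0          # 2 <-> 3, scaled by 1/eps
s02, s20 = 1.0, 2.0        # 0 <-> 2, order O(1)
s13, s31 = 2.0, 1.0        # 1 <-> 3, order O(1)

# Quasi-stationary distributions within each fast subset
rho1 = (g1 / (f1 + g1), f1 / (f1 + g1))
rho2 = (g2 / (f2 + g2), f2 / (f2 + g2))

# Aggregated rates: alpha_bar_{k,l} = sum_{i,j} rho^k_i * alpha_{i,j}
a12 = rho1[0] * s02 + rho1[1] * s13
a21 = rho2[0] * s20 + rho2[1] * s31
pi = (a21 / (a12 + a21), a12 / (a12 + a21))   # stationary law of aggregate

# Full 4-state generator (rows sum to zero); its stationary law is the
# fixed point of the forward-Euler iteration of dp/dt = p Q.
Q = [[0.0] * 4 for _ in range(4)]
Q[0][1], Q[1][0] = f1 / EPS, g1 / EPS
Q[2][3], Q[3][2] = f2 / EPS, g2 / EPS
Q[0][2], Q[2][0] = s02, s20
Q[1][3], Q[3][1] = s13, s31
for i in range(4):
    Q[i][i] = -sum(Q[i])

p = [1.0, 0.0, 0.0, 0.0]
dt = 0.4 / max(-Q[i][i] for i in range(4))
for _ in range(int(40.0 / dt)):
    p = [p[i] + dt * sum(p[j] * Q[j][i] for j in range(4)) for i in range(4)]

# Prediction of the singular-perturbation theory: p_i ~ pi_k * rho^k_i
predicted = [pi[0] * rho1[0], pi[0] * rho1[1],
             pi[1] * rho2[0], pi[1] * rho2[1]]
```

The discrepancy between `p` and `predicted` is of order ε.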

Singular perturbations for jump Markov processes : first-order

Theorem

• all-fast case: For all t > 0, the probability Pᵢ^ε(t) = P[X^ε(t) = xᵢ] converges as ε → 0 to the stationary distribution ρᵢ, for all i ∈ E.

• multiscale case: As ε → 0, the process (X^ε) is close to the reduced process (X̄). More precisely:

1. E[ ∫₀ᵀ (1_{X^ε(t)=xᵢᵏ} − ρᵢᵏ 1_{X̄^ε(t)=k}) Φ(xᵢᵏ) dt ]² = O(ε), for any function Φ : E → R, with k ∈ {1, ..., n} and i ∈ Ek.

2. The process X̄^ε converges in law to X̄.


Singular perturbations for jump Markov processes : second-order

Rescaled process

n^ε(t) = (1/√ε) ∫₀ᵗ (1_{X^ε(s)=xᵢ} − ρᵢ) Φ(xᵢ, s) ds

Theorem. The rescaled process n^ε(t) converges in law to the switching diffusion process

n(t) = ∫₀ᵗ σ(s) dWs

where W is a standard n-dimensional Brownian motion. The diffusion matrix A = σ(s)σ′(s) is given by:

A_{ij}(s) = Φ(xᵢ, s)Φ(xⱼ, s) [ρᵢ R(i, j) + ρⱼ R(j, i)]

where

R(i, j) = ∫₀^∞ (P(i, j, t) − ρⱼ) dt

Multiscale analysis of stochastic neuron models

Full model: X^ε_N = (V^ε_N, u^ε_N) with

• u^ε_N empirical measure for a population of multiscale jump processes

• dV^ε_N/dt = F(V^ε_N, u^ε_N)

Requires two extensions:

1. Population of jump processes

2. Piecewise-deterministic Markov process


Stationary distributions for populations of multiscale jump processes

Stationary distributions for the empirical measure → multinomial distributions

Ex: two-state model

ρ^(N)(k/N) = C(N,k) u∞^k (1 − u∞)^(N−k)
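This can be checked exactly: for a two-state model the number of open channels performs a birth-death walk on {0, ..., N} with birth rate (N−k)α and death rate kβ, and detailed balance gives precisely the binomial law with u∞ = α/(α+β). A sketch (the rates are illustrative):

```python
import math

ALPHA, BETA, N = 0.6, 0.4, 50       # illustrative voltage-frozen rates
U_INF = ALPHA / (ALPHA + BETA)

# Stationary law of the birth-death chain on k = 0..N with birth rate
# (N-k)*alpha and death rate k*beta, via detailed balance:
#   pi_{k+1} = pi_k * (N-k)*alpha / ((k+1)*beta)
pi = [1.0]
for k in range(N):
    pi.append(pi[k] * (N - k) * ALPHA / ((k + 1) * BETA))
total = sum(pi)
pi = [x / total for x in pi]

# The claimed binomial law rho^(N)(k/N) = C(N,k) u_inf^k (1-u_inf)^(N-k)
binom = [math.comb(N, k) * U_INF**k * (1 - U_INF)**(N - k)
         for k in range(N + 1)]

max_gap = max(abs(a - b) for a, b in zip(pi, binom))
```

The two distributions agree up to floating-point error, confirming the binomial form of the stationary empirical measure.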

Averaging method for PDMP

Ex (all-fast):

V^ε_N(t) = ∫₀ᵗ F(V^ε_N(s), u^ε_N(s)) ds, with u^ε_N fast

→ F̄_N(V_N) := ∫ F(V_N, u) ρ^(N)_stat(du)  (ergodic convergence)

• Theorem (general case): When ε → 0, the process (V^ε_N, u^ε_N) converges in law towards a coarse-grained hybrid process:

dV_N/dt = F̄_N(V_N, u_N)

with u_N a reduced jump process with averaged transition rates, functions of V. Faggionato, Gabrielli, Ribezzi-Crivellari 2009

• Central limit theorem (ongoing work) → diffusion approximation:

dV_N = F̄_N(V_N, u_N) dt + √ε σ_N(V_N, u_N) dWt
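The averaged field F̄_N is a finite binomial sum, so it is easy to tabulate. The sketch below uses a hypothetical nonlinear F containing a u³ gating term; because F is nonlinear in u, F̄_N(V) differs from the naive F(V, u∞(V)) by a term of order 1/N, which shrinks as N grows:

```python
import math

def u_inf(V):
    """Illustrative sigmoidal steady-state open probability (assumed)."""
    return 1.0 / (1.0 + math.exp(-(V + 40.0) / 5.0))

def F(V, u):
    """Illustrative nonlinear vector field (assumed): cubic gating term."""
    return -(V + 65.0) - 10.0 * u**3 * (V - 50.0)

def F_bar(V, N):
    """Averaged field: expectation of F(V, k/N) under the binomial
    stationary law with open probability u_inf(V)."""
    p = u_inf(V)
    return sum(math.comb(N, k) * p**k * (1 - p)**(N - k) * F(V, k / N)
               for k in range(N + 1))

V = -45.0
gap10 = abs(F_bar(V, 10) - F(V, u_inf(V)))     # finite-size correction, N = 10
gap100 = abs(F_bar(V, 100) - F(V, u_inf(V)))   # correction shrinks ~ 1/N
```

This O(1/N) gap is exactly the mechanism behind the supplementary K_N(V) terms in the Hodgkin-Huxley application.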


Part III : Application to Hodgkin-Huxley model

Application Hodgkin-Huxley model : reduced model (two-state)

Averaging "m³" with respect to the binomial stationary distribution ρ^(N)_m(k/N) = C(N,k) m∞^k (1 − m∞)^(N−k) yields:

Cm dV/dt = I − gL(V − VL) − gNa m∞(V)³h(V − VNa) − gK n⁴(V − VK) − gNa h(V − VNa) K_N(V)  (supplementary terms)

with

K_N(V) = (3/N) m∞(V)²(1 − m∞(V)) + (1/N²) m∞(V)(1 − m∞(V))(1 − 2m∞(V))

Important remark: the noise strength η := 1/N appears as a bifurcation parameter.
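K_N is the finite-N correction to the third moment of a binomial proportion: if k ~ Binomial(N, p), then E[(k/N)³] = p³ + 3p²(1−p)/N + p(1−p)(1−2p)/N². A quick exact check of this identity (p and N are arbitrary illustrative values):

```python
import math

def third_moment(p, N):
    """Exact E[(k/N)^3] for k ~ Binomial(N, p), by direct summation."""
    return sum(math.comb(N, k) * p**k * (1 - p)**(N - k) * (k / N)**3
               for k in range(N + 1))

def K(p, N):
    """Closed-form correction: E[(k/N)^3] = p^3 + K(p, N)."""
    return 3.0 / N * p**2 * (1 - p) + 1.0 / N**2 * p * (1 - p) * (1 - 2 * p)

p, N = 0.3, 40      # illustrative m_inf(V) value and channel number
gap = abs(third_moment(p, N) - (p**3 + K(p, N)))
```

The identity holds exactly (up to floating-point error), which is why the averaged m³ term splits into m∞³ plus the supplementary K_N(V) contribution.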

Application Hodgkin-Huxley model : bifurcations of the reduced model

Figure: Bifurcation diagram with η as parameter, for I = 0, of system (HHNTS).

Application Hodgkin-Huxley model : bifurcations of the reduced model

Figure: Two-parameter bifurcation diagram of system (HHNTS) with I and η as parameters.

Application Hodgkin-Huxley model : bifurcations of the reduced model

1. Below the double-cycle curve is a region with a unique stable equilibrium point: the ISI distribution should be approximately exponential, since a spike corresponds to a threshold crossing.

2. Between the double-cycle and Hopf curves is a bistable region: the ISI distribution should be bimodal, one peak corresponding to the escape from the stable equilibrium, and the other to the fluctuations around the limit cycle.

3. Above the Hopf curve is a region with a stable limit cycle and an unstable equilibrium point: the ISI distribution should be centered around the period of the limit cycle.


Application Hodgkin-Huxley model : stochastic simulations

Figure: A. With N = 30 (zone 3), noisy periodic trajectory. B. With N = 70 (zone 2), bimodality of ISIs. C. With N = 120, ISI statistics are closer to Poissonian behavior.

Application Hodgkin-Huxley model : stochastic simulations

Figure: Interspike Interval (ISI) distributions

Conclusions and perspectives

• Systematic method for reducing a large class of stochastic neuron models

• Based on recent mathematical developments of the averaging method

• Illustration on HH: enables a bifurcation analysis with noise strength as a parameter

• Other applications in neuroscience (synaptic models, networks, biochemical reactions)

• Open mathematical questions (link with stochastic bifurcations, scaling in the double limit N → ∞, ε → 0)


Singular perturbations for jump Markov processes : heuristics

Law evolution:

dP^ε/dt = ( Qs(t) + (1/ε) Qf(t) ) P^ε

with initial condition P^ε(0) = p0. We are looking for an expansion of P^ε(t) of the form

P^ε_r(t) = Σ_{i=0..r} εⁱ φᵢ(t) + Σ_{i=0..r} εⁱ ψᵢ(t/ε)

Singular perturbations for jump Markov processes : heuristics

Identifying powers of ε:

Qf(t) φ0(t) = 0

Qf(t) φ1(t) = dφ0(t)/dt − φ0(t) Qs(t)

...

Qf(t) φᵢ(t) = dφᵢ₋₁(t)/dt − φᵢ₋₁(t) Qs(t)

Error control:

1. |P^ε(t) − P^ε_r(t)| = O(ε^{r+1}) uniformly in t ∈ [0, T]

2. there exist K, k0 > 0 such that |ψᵢ(t)| < K e^{−k0 t}
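The leading term can be checked on a two-state chain with constant generators: once the boundary layer ψ0(t/ε) has died out, P^ε(t) should sit within O(ε) of φ0, the stationary law of Qf. A sketch with illustrative rates:

```python
# Illustrative two-state rates (assumed): fast generator Qf with rates
# a, b and slow generator Qs with rates c, d (time-independent here).
a, b = 2.0, 1.0
c, d = 1.0, 2.0
phi0 = (b / (a + b), a / (a + b))     # leading term: stationary law of Qf

def error_after_layer(eps, T=1.0):
    """Integrate dp/dt = p (Qs + Qf/eps) for the 2-state chain by forward
    Euler and return the distance of p(T) from phi0; the asymptotic
    expansion predicts a residual of order eps."""
    q01 = a / eps + c                 # total rate 0 -> 1
    q10 = b / eps + d                 # total rate 1 -> 0
    dt = 0.2 / max(q01, q10)          # small step for stability
    p0 = 1.0                          # start entirely in state 0
    for _ in range(int(T / dt)):
        p0 += dt * (-q01 * p0 + q10 * (1.0 - p0))
    return abs(p0 - phi0[0])

err_big = error_after_layer(0.1)
err_small = error_after_layer(0.01)
```

Dividing ε by 10 shrinks the residual by roughly a factor of 10, consistent with the first-order term of the expansion.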

Multiscale analysis of stochastic neuron models : summary

Second-order approximation for PDMP

Central limit theorem:

(1/√ε) ( V^ε_t − ∫₀ᵗ F(V^ε_s) ds ) → ∫₀ᵗ σ_F(Vs) dWs