Statistical analysis of spike trains in neuronal networks

Bruno Cessac
NeuroMathComp Team, INRIA Sophia Antipolis, France.
Mathematical Modeling and Statistical Analysis in Neuroscience, 02-07-14
Visual system
Multi-Electrodes Array

Figure: Multi-Electrodes Array.
Encoding a visual scene

Do ganglion cells act as independent encoders?
Or do their dynamical (spatio-temporal) correlations play a role in encoding a visual scene (population coding)?
Let us measure (instantaneous pairwise) correlations

E. Schneidman, M.J. Berry, R. Segev, and W. Bialek, "Weak pairwise correlations imply strongly correlated network states in a neural population", Nature, 440(7087):1007-1012, 2006.
Constructing a statistical model handling measured correlations

Assume stationarity.
Measure empirical correlations.
Select the probability distribution which maximizes the entropy and reproduces these correlations.
Spike events

Figure: spike state, spike pattern, spike block, raster plot/spike train.

Spike state: $\omega_k(n) \in \{0,1\}$.

Spike pattern: $\omega(n) = \left(\omega_k(n)\right)_{k=1}^{N}$, e.g. $\begin{pmatrix}1\\0\\0\\1\end{pmatrix}$.

Spike block: $\omega_m^n = \{\omega(m)\,\omega(m+1)\dots\omega(n)\}$, e.g. $\begin{pmatrix}1&1&0&1&0\\0&1&0&1&0\\0&0&0&1&0\\0&1&0&1&1\end{pmatrix}$.

Raster plot: $\omega \overset{def}{=} \omega_0^T$.
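To fix ideas, here is a minimal Python sketch of this notation (a hypothetical toy raster; numpy assumed): the raster is an $N \times (T+1)$ binary array, and states, patterns and blocks are entries, columns and column slices.

```python
import numpy as np

# Toy raster: N = 4 neurons, T + 1 = 5 time steps; omega[k, n] is the
# spike state of neuron k at time n (the 4x5 block shown above).
omega = np.array([[1, 1, 0, 1, 0],
                  [0, 1, 0, 1, 0],
                  [0, 0, 0, 1, 0],
                  [0, 1, 0, 1, 1]])

spike_state = omega[2, 3]      # omega_k(n) in {0, 1}
spike_pattern = omega[:, 3]    # omega(n): all neurons at time n
spike_block = omega[:, 1:4]    # omega_m^n: patterns from m = 1 to n = 3
print(spike_state, spike_pattern, spike_block.shape)
```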
Constructing a statistical model handling measured correlations

Let $\pi^{(T)}_\omega$ be the empirical measure:

$$\pi^{(T)}_\omega[f] = \frac{1}{T}\sum_{t=1}^{T} f \circ \sigma^t(\omega)$$

e.g. $\pi^{(T)}_\omega[\omega_i] = \frac{1}{T}\sum_{t=1}^{T}\omega_i(t)$: firing rate; $\pi^{(T)}_\omega[\omega_i\omega_j] = \frac{1}{T}\sum_{t=1}^{T}\omega_i(t)\,\omega_j(t)$.

Find the (stationary) probability distribution $\mu$ that maximizes statistical entropy under the constraints:

$$\pi^{(T)}_\omega[\omega_i] = \mu(\omega_i); \qquad \pi^{(T)}_\omega[\omega_i\omega_j] = \mu(\omega_i\omega_j)$$
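In practice these empirical averages are direct to compute from a binned raster. A minimal sketch on a synthetic raster (numpy assumed; the 5% firing probability is an arbitrary toy choice):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 10, 100_000
raster = (rng.random((N, T)) < 0.05).astype(int)  # synthetic binned raster

# pi^(T)[omega_i]: firing rates (time averages of single spike events)
rates = raster.mean(axis=1)

# pi^(T)[omega_i omega_j]: instantaneous pairwise correlations
pairwise = raster @ raster.T / T

print(rates[:3], pairwise[0, 1])
```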
Constructing a statistical model handling measured correlations

There is a unique probability distribution which satisfies these conditions. This is the Gibbs distribution with potential:

$$H(\omega(0)) = \sum_{i=1}^{N} h_i\,\omega_i(0) + \sum_{i,j=1}^{N} J_{ij}\,\omega_i(0)\,\omega_j(0)$$

(Ising model)
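For small $N$ the resulting Gibbs distribution can be enumerated exhaustively. A brute-force sketch with arbitrary toy parameters $h_i$, $J_{ij}$ (sign convention: $\mu \propto e^{H}$, matching the potential above):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
N = 5
h = rng.normal(0, 1, N)          # toy fields h_i
J = rng.normal(0, 0.3, (N, N))   # toy couplings J_ij

patterns = np.array(list(product([0, 1], repeat=N)))  # all 2^N patterns
# H(omega) = sum_i h_i omega_i + sum_ij J_ij omega_i omega_j
H = patterns @ h + np.einsum('pi,ij,pj->p', patterns, J, patterns)
Z = np.exp(H).sum()              # partition function
mu = np.exp(H) / Z               # Gibbs probability of each pattern
print(mu.sum(), mu.max())
```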
End of the story?

The Ising potential:

$$H(\omega(0)) = \sum_{i=1}^{N} h_i\,\omega_i(0) + \sum_{i,j=1}^{N} J_{ij}\,\omega_i(0)\,\omega_j(0)$$

does not consider time correlations between neurons. It is therefore bad at predicting spatio-temporal patterns!
Which correlations?

Spike correlations seem to play a role in spike coding, although this statement depends on several assumptions that could bias the statistics:

Stationarity;
Binning;
Stimulus dependence?

Modulo these remarks, maximum entropy seems to be a relevant setting to study the role of spatio-temporal spike correlations in retina coding.
OK. So let us consider spatio-temporal constraints.

Easy! Euh... in fact not so easy.

$$H\left(\omega_0^D\right) = \sum_{i=1}^{N} h_i\,\omega_i(0) + \sum_{i,j=1}^{N} J^{(0)}_{ij}\,\omega_i(0)\,\omega_j(0) + \sum_{i,j=1}^{N} J^{(1)}_{ij}\,\omega_i(0)\,\omega_j(1) + \sum_{i,j,k=1}^{N} J^{(2)}_{ijk}\,\omega_i(0)\,\omega_j(1)\,\omega_k(2) + \;????$$
Two "small" problems.
Handling temporality and memory.
Two "small" problems.
Handling temporality and memory.
Two "small" problems.
Handling temporality and memory.
Two "small" problems.
Handling temporality and memory.
Ising model considers successive times as independent
Two "small" problems.
Handling temporality and memory.
Probability of characteristic spatio-temporal patterns
The probability of a spike pattern ....depends on the network history (transition probabilities).
Two "small" problems.
Handling temporality and memory.
Probability of characteristic spatio-temporal patterns
The probability of a spike pattern ....depends on the network history (transition probabilities).
Two "small" problems.
Handling temporality and memory.
Probability of characteristic spatio-temporal patterns
The probability of a spike pattern ....depends on the network history (transition probabilities).
Two "small" problems.
Handling temporality and memory.
Probability of characteristic spatio-temporal patterns
The probability of a spike pattern ....depends on the network history (transition probabilities).
Two "small" problems.
Handling temporality and memory.
Probability of characteristic spatio-temporal patterns
The probability of a spike pattern ....depends on the network history (transition probabilities).
Two "small" problems.
Handling temporality and memory.
Probability of characteristic spatio-temporal patterns
The probability of a spike pattern ....depends on the network history (transition probabilities).
Two "small" problems.
Handling temporality and memory.
Probability of characteristic spatio-temporal patterns
The probability of a spike pattern ....depends on the network history (transition probabilities).
Two "small" problems.
Handling temporality and memory.
Probability of characteristic spatio-temporal patterns
The probability of a spike pattern ....depends on the network history (transition probabilities).
Two "small" problems.
Handling temporality and memory.
Probability of characteristic spatio-temporal patterns
The probability of a spike pattern ....depends on the network history (transition probabilities).
Two "small" problems.
Handling temporality and memory.
Probability of characteristic spatio-temporal patterns
The probability of a spike pattern ....depends on the network history (transition probabilities).
Two "small" problems.
Handling temporality and memory.
Probability of characteristic spatio-temporal patterns
Given a set of hypotheses on transition probabilities there exists amathematical framework to solve the problem.
Handling memory.
Markov chains
Variable length Markov chains
Chains with complete connections
: : :
Gibbs distributions.
Mathematical setting

Probability distribution on (bi-infinite) rasters: $\mu\left[\omega_m^n\right]$, $\forall m<n\in\mathbb{Z}$.

Conditional probabilities with memory depth $D$: $P_n\left(\omega(n)\,\middle|\,\omega_{n-D}^{n-1}\right)$, e.g. $\left(\begin{array}{cccc|c}1&1&0&1&0\\0&1&0&1&0\\0&0&0&1&0\\0&1&0&1&1\end{array}\right)$: the last pattern is conditioned on the preceding block.

Generating arbitrary depth-$D$ block probabilities:

$$\mu\left[\omega_m^{m+D}\right] = P_{m+D}\left(\omega(m+D)\,\middle|\,\omega_m^{m+D-1}\right)\,\mu\left[\omega_m^{m+D-1}\right]$$

$$\mu\left[\omega_m^n\right] = \prod_{l=m+D}^{n} P_l\left[\omega(l)\,\middle|\,\omega_{l-D}^{l-1}\right]\,\mu\left[\omega_m^{m+D-1}\right], \qquad \forall m<n\in\mathbb{Z}$$

(Chapman-Kolmogorov relation)
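A sketch of this construction for a toy homogeneous chain with memory depth $D=1$ and a single binary neuron (numpy assumed; the transition matrix and initial distribution are arbitrary): block probabilities follow by multiplying transition probabilities along the block, and the Chapman-Kolmogorov relation appears as a marginalization consistency.

```python
import numpy as np

# Memory depth D = 1, one binary neuron: P[omega(l) | omega(l-1)]
P = np.array([[0.9, 0.1],    # P[0|0], P[1|0]
              [0.6, 0.4]])   # P[0|1], P[1|1]
mu0 = np.array([6/7, 1/7])   # mu[omega_m^{m+D-1}]: here the invariant law

def block_probability(block):
    """mu[omega_m^n] = prod_l P[omega(l) | omega(l-1)] * mu[omega(m)]."""
    p = mu0[block[0]]
    for prev, cur in zip(block, block[1:]):
        p *= P[prev, cur]
    return p

# Chapman-Kolmogorov consistency: summing over the last symbol of a
# length-3 block recovers the probability of the length-2 block.
b = [0, 1]
print(block_probability(b),
      sum(block_probability(b + [s]) for s in (0, 1)))
```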
Mathematical setting

$$\mu\left[\omega_m^n\right] = \prod_{l=m+D}^{n} P_l\left[\omega(l)\,\middle|\,\omega_{l-D}^{l-1}\right]\,\mu\left[\omega_m^{m+D-1}\right], \qquad \forall m<n\in\mathbb{Z}$$

Setting

$$\phi_l\left(\omega_{l-D}^{l}\right) = \log P_l\left[\omega(l)\,\middle|\,\omega_{l-D}^{l-1}\right],$$

$$\mu\left[\omega_m^n\right] = \exp\left(\sum_{l=m+D}^{n}\phi_l\left(\omega_{l-D}^{l}\right)\right)\mu\left[\omega_m^{m+D-1}\right],$$

$$\mu\left[\omega_m^n\,\middle|\,\omega_m^{m+D-1}\right] = \exp\sum_{l=m+D}^{n}\phi_l\left(\omega_{l-D}^{l}\right).$$
Gibbs distribution

In statistical physics, the Gibbs distribution of a spin configuration $\{S\}$ in a finite domain $\Lambda$ with boundary condition $\partial\Lambda$ reads:

$$\forall \Lambda \subset \mathbb{Z}^d, \quad \mu\left(\{S\}\,\middle|\,\partial\Lambda\right) = \frac{1}{Z_{\Lambda,\partial\Lambda}}\,e^{-\beta H_{\Lambda,\partial\Lambda}(\{S\})}$$

$$f(\beta) = -\frac{1}{\beta}\lim_{\Lambda\uparrow\infty}\frac{1}{|\Lambda|}\log Z_{\Lambda,\partial\Lambda} \qquad \text{(free energy density)}$$
Gibbs distribution

$$\forall m,n, \quad \mu\left[\omega_m^n\,\middle|\,\omega_m^{m+D-1}\right] = \exp\sum_{l=m+D}^{n}\phi_l\left(\omega_{l-D}^{l}\right) \qquad \text{(normalized potential)}$$
Gibbs distribution

$$\forall m<n, \quad A < \frac{\mu\left[\omega_m^n\right]}{\exp\left(\sum_{l=m+D}^{n} H\left(\omega_{l-D}^{l}\right)\right)\exp\left(-(n-m)\,P(H)\right)} < B \qquad \text{(non-normalized potential)}$$
Gibbs distribution

$P(H)$ is called the "topological pressure" and is formally equivalent to the free energy density.

It does not require time-translation invariance (stationarity).

In the stationary case (+ assumptions) a Gibbs state is also an equilibrium state:

$$\sup_{\nu\in\mathcal{M}_{inv}}\left[h(\nu) + \nu(H)\right] = h(\mu) + \mu(H) = P(H).$$
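For a finite-range potential, $P(H)$ can be computed numerically as the logarithm of the leading eigenvalue of the associated transfer matrix. A sketch for a toy range-2 potential on a single binary neuron (numpy assumed; the values of $H$ are arbitrary):

```python
import numpy as np

# Toy range-2 potential H(omega(0), omega(1)) on binary states
H = np.array([[0.2, -0.1],
              [0.3,  0.5]])

# Transfer matrix L[s, s'] = exp(H(s, s')); by Perron-Frobenius its
# leading eigenvalue is real and positive, and P(H) = log(lambda_max).
L = np.exp(H)
eigenvalues = np.linalg.eigvals(L)
pressure = np.log(eigenvalues.real.max())
print(pressure)
```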
Gibbs distribution

This formalism allows one to handle the spatio-temporal case

$$H\left(\omega_0^D\right) = \sum_{i=1}^{N} h_i\,\omega_i(0) + \sum_{i,j=1}^{N} J^{(0)}_{ij}\,\omega_i(0)\,\omega_j(0) + \sum_{i,j=1}^{N} J^{(1)}_{ij}\,\omega_i(0)\,\omega_j(1) + \sum_{i,j,k=1}^{N} J^{(2)}_{ijk}\,\omega_i(0)\,\omega_j(1)\,\omega_k(2) + \dots$$

even numerically.

J.C. Vasquez, A. Palacios, O. Marre, M.J. Berry II, B. Cessac, J. Physiol. Paris, Vol. 106, Issues 3-4, 2012.
H. Nasser, O. Marre, and B. Cessac, J. Stat. Mech. (2013) P03006.
H. Nasser, B. Cessac, Entropy (2014), 16(4), 2244-2277.
Two "small" problems.

Problem 2: an exponential number of possible terms (see the counting sketch below).

Contrary to what usually happens in physics, we do not know what the right potential should be.
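A quick count makes the explosion concrete: a monomial $\omega_{i_1}(t_1)\cdots\omega_{i_k}(t_k)$ picks a subset of the $N(D+1)$ binary variables, so there are $2^{N(D+1)}-1$ possible non-empty spike-interaction terms (toy sizes below):

```python
# Number of distinct monomials omega_{i1}(t1)...omega_{ik}(tk) over
# N neurons and D+1 time steps: 2^(N(D+1)) - 1, excluding the empty one.
for N, D in [(5, 1), (10, 2), (40, 2)]:
    print(N, D, 2 ** (N * (D + 1)) - 1)
```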
Can we get a reasonable idea of what the spike statistics could be by studying a neural network model?
An Integrate-and-Fire neural network model with chemical and electric synapses

R. Cofré, B. Cessac, "Dynamics and spike trains statistics in conductance-based Integrate-and-Fire neural networks with chemical and electric synapses", Chaos, Solitons and Fractals, 2013.
An Integrate-and-Fire neural network model with chemical and electric synapses

Sub-threshold dynamics:

$$C_k\,\frac{dV_k}{dt} = -g_{L,k}\,(V_k - E_L) \;-\; \sum_j g_{kj}(t,\omega)\,(V_k - E_j) \;-\; \sum_j \bar g_{kj}\,(V_k - V_j) \;+\; i^{(ext)}_k(t) \;+\; \sigma_B\,\xi_k(t)$$

[Figure: post-synaptic potential (PSP) time course $g(x)$ as a function of $t$, for spike times $t = 1, 1.2, 1.6, 3$.]
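A minimal Euler-Maruyama sketch of these sub-threshold dynamics with fire-and-reset (all parameter values are toy choices, not the paper's; the chemical conductances $g_{kj}(t,\omega)$ are frozen to constants here, a strong simplification of their spike-history dependence):

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, T = 8, 0.001, 5000
C, gL, EL = 1.0, 0.5, 0.0          # toy capacitance, leak, leak reversal
theta, Vreset = 1.0, 0.0           # firing threshold and reset value
E = rng.choice([3.0, -2.0], N)     # toy reversal potentials E_j
g_chem = rng.random((N, N)) * 0.05 # toy chemical conductances g_kj
g_elec = rng.random((N, N)) * 0.01 # toy electric (gap junction) couplings
sigma_B, i_ext = 0.3, 0.6

V = np.zeros(N)
spikes = []
for n in range(T):
    leak = -gL * (V - EL)
    chem = -(g_chem * (V[:, None] - E[None, :])).sum(axis=1)
    elec = -(g_elec * (V[:, None] - V[None, :])).sum(axis=1)
    noise = sigma_B * np.sqrt(dt) * rng.normal(size=N)
    V += dt * (leak + chem + elec + i_ext) / C + noise / C
    fired = V >= theta             # spike states omega_k(n) on this bin
    V[fired] = Vreset
    spikes.append(fired.astype(int))

raster = np.array(spikes).T        # N x T binary raster omega
print(raster.mean(axis=1))         # empirical firing rates
```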
Sub-threshold regime

$$C\,\frac{dV}{dt} + \left(G(t,\omega) + \bar G\right)V = I(t,\omega),$$

$$G_{kl}(t,\omega) = \left[g_{L,k} + \sum_{j=1}^{N} g_{kj}(t,\omega)\right]\delta_{kl} \overset{def}{=} g_k(t,\omega)\,\delta_{kl}.$$

$$I(t,\omega) = I^{(cs)}(t,\omega) + I^{(ext)}(t) + I^{(B)}(t), \qquad I^{(cs)}_k(t,\omega) = \sum_j W_{kj}\,\alpha_{kj}(t,\omega), \quad W_{kj} \overset{def}{=} G_{kj}\,E_j.$$
Sub-threshold regime

$$\begin{cases} dV = \left(A(t,\omega)\,V + f(t,\omega)\right)dt + \frac{\sigma_B}{c}\,I_N\,dW(t),\\ V(t_0) = v, \end{cases}$$

$$A(t,\omega) = -C^{-1}\left(\bar G + G(t,\omega)\right), \qquad f(t,\omega) = C^{-1}I^{(cs)}(t,\omega) + C^{-1}I^{(ext)}(t).$$
Homogeneous Cauchy problem

$$\begin{cases} \frac{dV(t,\omega)}{dt} = A(t,\omega)\,V(t,\omega),\\ V(t_0) = v. \end{cases}$$

Theorem

Let $A(t,\omega)$ be a square matrix with bounded elements. Then the sequence

$$M_0(t_0,t,\omega) = I_N, \qquad M_k(t_0,t,\omega) = I_N + \int_{t_0}^{t} A(s,\omega)\,M_{k-1}(s,t)\,ds, \quad t\le t_1,$$

converges uniformly in $[t_0,t_1]$. (Brockett, R. W., "Finite Dimensional Linear Systems", John Wiley and Sons, 1970.)

Flow

$$\Gamma(t_0,t,\omega) \overset{def}{=} \lim_{k\to\infty} M_k(t_0,t,\omega)$$
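A sketch of this Picard iteration for a toy $2\times 2$ time-dependent matrix (dependence on $\omega$ dropped; integrals approximated by left-endpoint Riemann sums; the recursion is written in the standard form $M_k(t_0,t) = I_N + \int_{t_0}^{t} A(s)\,M_{k-1}(t_0,s)\,ds$, which is an assumption on how to read the slide's $M_{k-1}(s,t)$):

```python
import numpy as np

def A(t):
    # Toy bounded time-dependent generator (2x2)
    return np.array([[-1.0, 0.3 * np.sin(t)],
                     [0.2 * np.cos(t), -0.8]])

def picard_flow(t0, t1, n_steps=2000, n_iter=12):
    """Approximate Gamma(t0, t1) = lim_k M_k(t0, t1) by Picard iteration,
    with the running integral evaluated on a uniform grid."""
    ts = np.linspace(t0, t1, n_steps + 1)
    dt = ts[1] - ts[0]
    I = np.eye(2)
    M = np.array([I] * (n_steps + 1))   # M_0(t0, t) = I for all t
    for _ in range(n_iter):
        integrand = np.array([A(s) @ m for s, m in zip(ts, M)])
        cumulative = np.cumsum(integrand[:-1], axis=0) * dt
        M = np.concatenate(([I], I + cumulative), axis=0)
    return M[-1]

print(picard_flow(0.0, 1.0))
```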
Homogeneous Cauchy problem

If $A(t,\omega)$ and $A(s,\omega)$ commute,

$$\Gamma(t_0,t,\omega) = \sum_{k=0}^{\infty}\frac{1}{k!}\left(\int_{t_0}^{t} A(s,\omega)\,ds\right)^{k} = e^{\int_{t_0}^{t} A(s,\omega)\,ds}.$$

This holds only in two cases:

$\bar G = 0$, where $\Gamma(t_0,t,\omega) = e^{-\frac{1}{c}\int_{t_0}^{t} G(s,\omega)\,ds}$ (B. Cessac, J. Math. Neuroscience, 2011);

$G(t,\omega) = g(t,\omega)\,I_N$.
Homogeneous Cauchy problem

In general:

$$\Gamma(t_0,t,\omega) = I_N + \sum_{n=1}^{+\infty}\;\sum_{\substack{X_1\in\{B,\,A(s_1,\omega)\}\\ \dots\\ X_n\in\{B,\,A(s_n,\omega)\}}}\;\int_{t_0}^{t}\!\cdots\!\int_{t_0}^{s_{n-1}}\;\prod_{k=1}^{n} X_k\;ds_1\cdots ds_n,$$

$$B = C^{-1}\bar G, \qquad A(t,\omega) = -C^{-1}G(t,\omega).$$
Exponentially bounded flow

Definition: An exponentially bounded flow is a two-parameter $(t_0,t)$ family $\{\Gamma(t_0,t,\omega)\}_{t\ge t_0}$ of flows such that, $\forall\omega\in\Omega$:

1. $\Gamma(t_0,t_0,\omega) = I_N$ and $\Gamma(t_0,t,\omega)\,\Gamma(t,s,\omega) = \Gamma(t_0,s,\omega)$ whenever $t_0\le t\le s$;

2. for each $v\in\mathbb{R}^N$ and $\omega\in\Omega$, $(t_0,t)\mapsto\Gamma(t_0,t,\omega)\,v$ is continuous for $t_0\le t$;

3. there are $M>0$ and $m>0$ such that:

$$\|\Gamma(s,t,\omega)\| \le M\,e^{-m(t-s)}, \quad s\le t. \tag{1}$$

Proposition

Let $\lambda_1$ be the largest eigenvalue of $\bar G$. If $\lambda_1 < g_L$, then the flow $\Gamma$ in our model has the exponentially bounded flow property.

Remark: The typical electric conductance values are of order 1 nano-Siemens, while the leak conductance of retinal ganglion cells is of order 50 micro-Siemens. Therefore, this condition is compatible with the biophysical values of conductances in the retina.
Exponentially bounded flow

Theorem

If $\Gamma(t_0,t,\omega)$ is an exponentially bounded flow, there is a unique strong solution for $t\ge t_0$, given by:

$$V(t_0,t,\omega) = \Gamma(t_0,t,\omega)\,v + \int_{t_0}^{t}\Gamma(s,t,\omega)\,f(s,\omega)\,ds + \frac{\sigma_B}{c}\int_{t_0}^{t}\Gamma(s,t,\omega)\,dW(s).$$

R. Wooster, "Evolution systems of measures for non-autonomous stochastic differential equations with Levy noise", Communications on Stochastic Analysis, vol. 5, 353-370, 2011.
Membrane potential decomposition

$$V(t,\omega) = V^{(d)}(t,\omega) + V^{(noise)}(t,\omega), \qquad V^{(d)}(t,\omega) = V^{(cs)}(t,\omega) + V^{(ext)}(t,\omega),$$

$$V^{(cs)}(t,\omega) = \frac{1}{c}\int_{-\infty}^{t}\Gamma(s,t,\omega)\,I^{(cs)}(s,\omega)\,ds,$$

$$V^{(ext)}(t,\omega) = \frac{1}{c}\int_{-\infty}^{t}\Gamma(s,t,\omega)\,I^{(ext)}(s,\omega)\,ds,$$

$$V^{(noise)}(t,\omega) = \frac{\sigma_B}{c}\int_{\tau_k(t,\omega)}^{t}\Gamma(s,t,\omega)\,dW(s).$$
Transition probabilities

Problem: determine $P\left(\omega(n)\,\middle|\,\omega_{-\infty}^{n-1}\right)$.

Fix $\omega$, $n$ and $t<n$. Set:

$$\hat\theta_k(t,\omega) = \theta - V^{(d)}_k(t,\omega).$$

Neuron $k$ emits a spike at integer time $n$ ($\omega_k(n)=1$) if:

$$\exists\, t\in[n-1,n], \quad V^{(noise)}_k(t,\omega) = \hat\theta_k(t,\omega).$$

This is a "first passage" problem, in $N$ dimensions, with a time-dependent boundary $\hat\theta_k(t,\omega)$ (general form unknown).
Conditional probability

Without electric synapses the probability of $\omega(n)$ conditionally on $\omega_{-\infty}^{n-1}$ can be approximated by:

$$P\left(\omega(n)\,\middle|\,\omega_{-\infty}^{n-1}\right) = \prod_{k=1}^{N} P\left(\omega_k(n)\,\middle|\,\omega_{-\infty}^{n-1}\right),$$

with

$$P\left(\omega_k(n)\,\middle|\,\omega_{-\infty}^{n-1}\right) = \omega_k(n)\,\Pi\left(X_k(n-1,\omega)\right) + \left(1-\omega_k(n)\right)\left(1 - \Pi\left(X_k(n-1,\omega)\right)\right),$$

where

$$X_k(n-1,\omega) = \frac{\theta - V^{(det)}_k(n-1,\omega)}{\sigma_k(n-1,\omega)},$$

and

$$\Pi(x) = \frac{1}{\sqrt{2\pi}}\int_{x}^{+\infty} e^{-\frac{u^2}{2}}\,du.$$
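A sketch of this conditional spike probability for one neuron ($\theta$, $V^{(det)}_k$ and $\sigma_k$ are toy scalars here, whereas in the model they are explicit history-dependent quantities; $\Pi$ is the Gaussian tail, computable via erfc):

```python
import math

def Pi(x):
    # Gaussian tail: Pi(x) = (1/sqrt(2 pi)) int_x^inf exp(-u^2/2) du
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def p_spike_state(omega_k, theta, V_det, sigma_k):
    """P(omega_k(n) | history) = omega_k Pi(X_k) + (1 - omega_k)(1 - Pi(X_k)),
    with X_k = (theta - V_det) / sigma_k (toy scalar inputs)."""
    X_k = (theta - V_det) / sigma_k
    p1 = Pi(X_k)
    return omega_k * p1 + (1 - omega_k) * (1.0 - p1)

# Toy history summary: deterministic potential close to threshold
print(p_spike_state(1, theta=1.0, V_det=0.8, sigma_k=0.2),
      p_spike_state(0, theta=1.0, V_det=0.8, sigma_k=0.2))
```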
Conditional probability

$\phi(\omega) = \log P\left(\omega(n)\,\middle|\,\omega_{-\infty}^{n-1}\right)$ defines an (infinite range) normalized potential, defining a unique Gibbs distribution.

It depends explicitly on network parameters and external stimulus.

Its definition holds for a time-dependent stimulus (non-stationary case).

It is similar to the so-called Generalized Linear Model used for retina analysis, although with a more complex structure.

The general form (with electric synapses) is yet unknown.
Back to our second "small" problem

Is there a Maximum Entropy potential corresponding to $\phi$ (in the stationary case)?

One can make a Taylor expansion of $\phi(\omega)$. Using $\omega_i(n)^k = \omega_i(n)$, $k\ge 1$, one ends up with a potential of the form:

$$\phi(\omega) = \sum_{i=1}^{N} h_i\,\omega_i(0) + \sum_{i,j=1}^{N} J^{(0)}_{ij}\,\omega_i(0)\,\omega_j(0) + \dots$$

The expansion is infinite, although one can approximate the infinite-range potential $\phi$ by a finite-range approximation (finite memory), giving rise to a finite expansion.

The coefficients of the expansion are nonlinear functions of the network parameters and stimulus. They are therefore somewhat redundant.
Back to our second "small" problem

Rodrigo Cofré, Bruno Cessac, "Exact computation of the maximum-entropy potential of spiking neural-network models", Phys. Rev. E 89, 052117.

Given a set of stationary transition probabilities $P\left(\omega(D)\,\middle|\,\omega_0^{D-1}\right) > 0$, there is a unique (up to a constant) Maximum Entropy potential, written as a linear combination of spike interaction terms with a minimal number of terms (normal form). This potential can be explicitly (and algorithmically) computed.

Hints: Using a change of variables one can eliminate terms in the potential ("normal" form). The construction is based on the equivalence between Gibbs potentials (cohomology) and periodic orbit expansions.
Back to our second "small" problem

However, the number of terms still grows exponentially with the number of neurons and the memory depth, and these terms are generically non-zero.
Back to the retina

Neuromimetic models typically have $O(N^2)$ parameters, where $N$ is the number of neurons.

The equivalent MaxEnt potential generically has a number of parameters growing exponentially with $N$, which are nonlinear and redundant functions of the network parameters (synaptic weights, stimulus).

$\Rightarrow$ intractable determination of parameters; stimulus-dependent parameters; overfitting.

BUT real neural networks are not generic: the MaxEnt approach might be useful if there is some hidden law of nature/symmetry which cancels most terms in the expansion.
Acknowledgments

NeuroMathComp team
Rodrigo Cofré (PhD, September 2014)
Dora Karvouniari (M2)
Pierre Kornprobst (CR1 INRIA)
Slim Kraria (IR)
Gaia Lombardi (M2 → Paris)
Hassan Nasser (PhD → startup)
Daniela Pamplona (PostDoc)
Geoffrey Portelli (PostDoc)
Vivien Robinet (PostDoc → MCF Kourou)
Horacio Rostro (PhD → Docent Mexico)
Wahiba Taouali (PostDoc → PostDoc INT Marseille)
Juan-Carlos Vasquez (PhD → PostDoc Bogota)

Princeton University
Michael J. Berry II

ANR KEOPS
Maria-José Escobar (CN Valparaiso)
Adrian Palacios (CN Valparaiso)
Cesar Ravelo (CN Valparaiso)
Thierry Viéville (INRIA Mnemosyne)

Renvision FP7 project
Luca Berdondini (IIT Genova)
Matthias Hennig (Edinburgh)
Alessandro Maccione (IIT Genova)
Evelyne Sernagor (Newcastle)

Institut de la Vision
Olivier Marre
Serge Picaud
Can we hear the shape of a Maximum Entropy potential?

Two distinct potentials $H^{(1)}$, $H^{(2)}$ of range $R = D+1$ correspond to the same Gibbs distribution (are "equivalent") if and only if there exists a range-$D$ function $f$ such that (Chazottes-Keller (2009)):

$$H^{(2)}\left(\omega_0^D\right) = H^{(1)}\left(\omega_0^D\right) - f\left(\omega_0^{D-1}\right) + f\left(\omega_1^{D}\right) + \Delta, \tag{2}$$

where $\Delta = P\left(H^{(2)}\right) - P\left(H^{(1)}\right)$.
Can we hear the shape of a Maximum Entropy potential?

Summing over periodic orbits we get rid of the function $f$ (its terms telescope along a periodic orbit of period $R$):

$$\sum_{n=1}^{R}\phi\left(\sigma^n\omega\right) = \sum_{n=1}^{R} H\left(\sigma^n\omega\right) - R\,P(H). \tag{3}$$

We eliminate equivalent constraints.
Can we hear the shape of a Maximum Entropy potential?

Conclusion

Given a set of transition probabilities $P\left[\omega(D)\,\middle|\,\omega_0^{D-1}\right] > 0$, there is a unique, up to a constant, MaxEnt potential, written as a linear combination of constraints (averages of spike events) with a minimal number of terms. This potential can be explicitly (and algorithmically) computed.