
Page 1

Brief Notes on Theoretical Neuroscience

April 14, 2010
Gabriel Kreiman
http://klab.tch.harvard.edu

Page 2

Outline

1. Why theoretical neuroscience?
2. Single neuron models
3. Network models
4. Algorithms and methods for data analysis
5. A sample of a few computational studies (visual recognition)

Page 3

Further reading

This will NOT be an exhaustive presentation of work in Theoretical Neuroscience…

• Dayan and Abbott. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems [2001] (ISBN 0-262-04199-5). MIT Press.

• Koch. Biophysics of Computation [1999] (ISBN 0-19-510491-9). Oxford University Press.

• Hertz, Krogh, and Palmer. Introduction to the Theory of Neural Computation [1991] (ISBN 0-201-51560-1). Santa Fe Institute Studies in the Sciences of Complexity.

Page 4

Why bother with models?

-Quantitative models force us to think about and formalize hypotheses and assumptions

-Models can integrate and summarize observations across experiments, resolutions and laboratories

-A good model can lead to (non-intuitive) experimental predictions

-A quantitative model, implemented through simulations, can be useful from an engineering viewpoint (e.g. face recognition)

-A model can point to important missing data, critical information and decisive experiments

Page 5

Do I have to be a “professional theoretician” to build a model?

-No!

-There are plenty of excellent computational papers written by experimentalists…

A very brief sample:

- Laurent G (2002) Olfactory network dynamics and the coding of multidimensional signals. Nature Reviews Neuroscience 3:884-895.
- Carandini M, Heeger DJ, Movshon JA (1997) Linearity and normalization in simple cells of the macaque primary visual cortex. J Neurosci 17:8621-8644.
- Brincat S, Connor C (2006) Dynamic shape synthesis in posterior inferior temporal cortex. Neuron 49:17-24.
- Blake R (1989) A neural theory of binocular rivalry. Psychological Review 96:145-167.
- Hubel DH, Wiesel TN (1962) Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J Physiol 160:106-154.

-Scientists always build models (even if the models are not quantitative and are not implemented through computational simulations). There is no such thing as a “model-free” experiment…

Page 6

A model for orientation tuning in simple cells

A feed-forward model for orientation selectivity in V1

(by no means the only model)

Hubel and Wiesel. J. Physiology (1962)
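To make the feed-forward scheme concrete, here is a minimal numerical sketch (not from the original slides): a model simple cell sums center-surround, LGN-like subunits whose centers are aligned along a row, and the summed receptive field responds preferentially to a bar at the matching orientation. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the Hubel-Wiesel feed-forward model: a V1 simple cell
# built by summing LGN-like center-surround subunits aligned along a row.
# All parameter values here are illustrative assumptions.

def dog(size, center, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians (center-surround) receptive field."""
    y, x = np.mgrid[0:size, 0:size]
    d2 = (x - center[0]) ** 2 + (y - center[1]) ** 2
    return (np.exp(-d2 / (2 * sigma_c**2)) / sigma_c**2
            - np.exp(-d2 / (2 * sigma_s**2)) / sigma_s**2)

size = 32
# Align five LGN-like subunit centers along a horizontal row.
centers = [(size // 2 + k, size // 2) for k in (-8, -4, 0, 4, 8)]
rf = sum(dog(size, c) for c in centers)   # simple-cell receptive field

def bar(theta, size=32, width=2):
    """A light bar through the image center at angle theta (radians)."""
    y, x = np.mgrid[0:size, 0:size] - size // 2
    dist = np.abs(-x * np.sin(theta) + y * np.cos(theta))
    return (dist < width).astype(float)

# Rectified response as a function of bar orientation: peaks at 0 degrees,
# the orientation along which the subunits are aligned.
for deg in (0, 30, 60, 90):
    resp = max(0.0, np.sum(rf * bar(np.radians(deg))))
    print(f"{deg:3d} deg -> response {resp:.2f}")
```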

Page 7

More anatomical complexity – Similar math

Felleman and Van Essen 1991; Douglas and Martin 2004

Page 8

Outline

1. Why theoretical neuroscience?
2. Single neuron models
3. Network models
4. Algorithms and methods
5. A sample of a few computational studies

Page 9

A nested family of single neuron models

From simplest to most detailed:

1. Filter operations
2. Integrate-and-fire circuit
3. Hodgkin-Huxley units
4. Multi-compartmental models
5. Spines, channels

Moving down this hierarchy, biological accuracy increases, but analytical solutions are lost and computational complexity grows.

Page 10

Geometrically accurate models vs. spherical cows with point masses

A central question in Theoretical Neuroscience: what is the "right" level of abstraction?

Page 11

The leaky integrate-and-fire model

• Lapicque 1907
• Below threshold, the voltage is governed by:

$C \frac{dV(t)}{dt} = -\frac{V(t) - V_{rest}}{R} + I(t)$

• A spike is fired when V(t) > V_thr (and V(t) is reset)
• A refractory period t_ref is imposed after a spike
• Simple and fast
• Does not consider spike-rate adaptation, multiple compartments, sub-ms biophysics, neuronal geometry

Typical parameters: V_rest = -65 mV, V_th = -50 mV, τ_m = 10 ms, R_m = 10 MΩ.

(Figure: firing rate vs. injected current; line = integrate-and-fire model, circles = cortical neuron, shown for the first 2 spikes and for the adapted response.)
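As a concrete illustration, here is a minimal forward-Euler sketch of the model (not from the original slides), using the parameter values quoted above; the 2 ms refractory period, the time step, and the example input current are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of an Euler-method simulation of the leaky
# integrate-and-fire model. The refractory period (2 ms), time step,
# and example input current are illustrative assumptions.

def simulate_lif(I, dt=0.1e-3, V_rest=-65e-3, V_th=-50e-3,
                 tau_m=10e-3, R_m=10e6, t_ref=2e-3):
    """I: input current (A), one value per time step dt (s).
    Returns the voltage trace (V) and a list of spike times (s)."""
    V = np.full(len(I), V_rest)
    spikes = []
    last_spike = -np.inf
    for t in range(1, len(I)):
        if t * dt - last_spike < t_ref:   # refractory period: clamp at rest
            continue
        # Euler step of tau_m dV/dt = -(V - V_rest) + R_m I(t)
        dV = (-(V[t - 1] - V_rest) + R_m * I[t - 1]) * dt / tau_m
        V[t] = V[t - 1] + dV
        if V[t] > V_th:                   # threshold crossing -> spike
            spikes.append(t * dt)
            V[t] = V_rest                 # reset
            last_spike = t * dt
    return V, spikes

# Example: constant 2 nA current step for 200 ms
I = np.full(2000, 2e-9)
V, spike_times = simulate_lif(I)
print(f"{len(spike_times)} spikes")
```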

Page 12

The Hodgkin-Huxley Model

$C \frac{dV}{dt} = -i_m + I(t)$

$i_m = g_L (V - E_L) + g_K n^4 (V - E_K) + g_{Na} m^3 h (V - E_{Na})$

where:
i_m = membrane current
V = voltage
L = leak channel; K = potassium channel; Na = sodium channel
g = conductances (e.g. g_Na = 120 mS/cm², g_K = 36 mS/cm², g_L = 0.3 mS/cm²)
E = reversal potentials (e.g. E_Na = 115 mV, E_K = -12 mV, E_L = 10.6 mV)
n, m, h = "gating variables", n = n(t), m = m(t), h = h(t)

Hodgkin, A. L., and Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve.

Journal of Physiology 117, 500-544.
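For concreteness, a minimal forward-Euler sketch of the model is shown below (not from the original slides), using the conductances and reversal potentials quoted above and the standard Hodgkin-Huxley (1952) rate functions; the step current and time step are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a Hodgkin-Huxley simulation (forward Euler), in the
# original HH convention where V is in mV relative to rest. The step
# current in the example is an illustrative assumption.

C = 1.0                                     # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3           # conductances, mS/cm^2
E_Na, E_K, E_L = 115.0, -12.0, 10.6         # reversal potentials, mV

# Voltage-dependent opening/closing rates for the gating variables
def a_n(V): return 0.01 * (10 - V) / (np.exp((10 - V) / 10) - 1)
def b_n(V): return 0.125 * np.exp(-V / 80)
def a_m(V): return 0.1 * (25 - V) / (np.exp((25 - V) / 10) - 1)
def b_m(V): return 4.0 * np.exp(-V / 18)
def a_h(V): return 0.07 * np.exp(-V / 20)
def b_h(V): return 1.0 / (np.exp((30 - V) / 10) + 1)

dt = 0.01                                   # ms
T = int(50 / dt)                            # 50 ms of simulation
V, n, m, h = 0.0, 0.32, 0.05, 0.6           # approximate resting values
trace = np.empty(T)
for t in range(T):
    I = 10.0 if t * dt > 5 else 0.0         # 10 uA/cm^2 step at t = 5 ms
    # Membrane equation: C dV/dt = I - i_m
    dV = (I - g_L * (V - E_L) - g_K * n**4 * (V - E_K)
          - g_Na * m**3 * h * (V - E_Na)) / C
    V += dt * dV
    # First-order kinetics for each gating variable x: dx/dt = a(V)(1-x) - b(V)x
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    trace[t] = V

print(f"peak depolarization: {trace.max():.1f} mV")   # spikes of ~100 mV
```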

Page 13

Outline

1. Why theoretical neuroscience?
2. Single neuron models
3. Network models
4. Algorithms and methods
5. A sample of a few computational studies

Page 14

From neurons to circuits

• Single neurons can perform many interesting and important computations (e.g. Gabbiani et al (2002). Multiplicative computation in a visual neuron sensitive to looming. Nature 420, 320-324).

• Neurons are not isolated. They are part of circuits. A typical cortical neuron receives input from ~10^4 other neurons.

• It is not always trivial to predict circuit-level properties from single neuron properties. There could be interesting properties emerging at the network level.

Page 15

Circuits – some basic definitions

Notes:
1. Connectivity does not need to be all-to-all
2. There are excitatory neurons and inhibitory neurons (and many types of inhibitory neurons)
3. Most models assume balance between excitation and inhibition
4. Most models do not include layers or the anatomical separation of forward and back pathways
5. There are many more recurrent + feedback connections than feed-forward connections (the opposite is true of models…)

Page 16

Firing rate network models – A simple feedforward circuit

• Time scales > ~1 ms
• Analytic calculations possible in some cases
• Fewer free parameters than spiking models
• Easier/faster to simulate

$I_s(t) = \sum_{b=1}^{N} w_b \int_{-\infty}^{t} d\tau \, K_s(t - \tau)\, u_b(\tau)$

Equivalently, $\tau_s \frac{dI_s}{dt} = -I_s + \sum_{b=1}^{N} w_b u_b$

$K_s(t) = \frac{1}{\tau_s} \exp(-t/\tau_s)$

I_s = total synaptic current
N = total number of inputs
w_b = synaptic weights
K_s(t) = synaptic kernel
u_b = input firing rates

The output firing rate is $v = F(I_s)$. F can be a sigmoid function or a threshold-linear function: $F(I_s) = [I_s]_+$.
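A minimal sketch of this circuit (illustrative, not from the slides): integrate the synaptic-current ODE with forward Euler, which is equivalent to filtering the inputs with the exponential kernel K_s, and pass I_s through a threshold-linear F. The weights and input rates below are made-up values.

```python
import numpy as np

# Minimal sketch of the firing-rate model above:
# tau_s dI_s/dt = -I_s + sum_b w_b u_b(t), with output v = F(I_s) and a
# threshold-linear F. Weights and input rates are illustrative assumptions.

def simulate_rate_model(u, w, tau_s=10e-3, dt=1e-3):
    """u: inputs, shape (T, N); w: weights, shape (N,). Returns v, shape (T,)."""
    T = u.shape[0]
    I_s = 0.0
    v = np.empty(T)
    for t in range(T):
        drive = w @ u[t]                      # sum_b w_b u_b(t)
        I_s += dt / tau_s * (-I_s + drive)    # Euler step of the ODE
        v[t] = max(0.0, I_s)                  # threshold-linear F(I_s) = [I_s]_+
    return v

# Example: 3 inputs at constant rates; v relaxes to [w . u]_+ with time constant tau_s
u = np.tile([20.0, 10.0, 40.0], (200, 1))     # Hz, 200 ms at dt = 1 ms
w = np.array([0.5, -1.0, 0.25])
v = simulate_rate_model(u, w)
print(f"steady-state rate: {v[-1]:.1f} Hz")   # 0.5*20 - 1*10 + 0.25*40 = 10 Hz
```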

Page 17

Learning from examples – The perceptron

Imagine that we want to classify the inputs u into two groups “+1” and “-1”

Perceptron learning rule:

$w \rightarrow w + \frac{\epsilon}{2}\left(v^m - v(u^m)\right) u^m$

Training examples: $\{u^m, v^m\}$

Linear separability: can attain zero error
Cross-validation: use separate training and test data
There are several more sophisticated learning algorithms
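As a concrete illustration, the sketch below (not from the slides) trains a perceptron with exactly this rule on a linearly separable toy problem; the data generation, threshold γ = 0, and learning rate are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the perceptron: v(u) = sign(w . u), and after each
# training example w <- w + (eps/2) (v^m - v(u^m)) u^m. The toy data and
# learning rate are illustrative assumptions.

rng = np.random.default_rng(0)
N, M = 10, 100
u = rng.normal(size=(M, N))                    # M training inputs
w_true = rng.normal(size=N)
v_target = np.sign(u @ w_true)                 # labels in {+1, -1}: linearly separable

w = np.zeros(N)
eps = 1.0
for epoch in range(100):
    errors = 0
    for m in range(M):
        v_out = 1.0 if w @ u[m] > 0 else -1.0  # threshold at gamma = 0
        if v_out != v_target[m]:
            w += (eps / 2) * (v_target[m] - v_out) * u[m]
            errors += 1
    if errors == 0:                            # converged on the training set
        print(f"zero training error after {epoch + 1} epochs")
        break
```

In practice the generalization error would be assessed by cross-validation, i.e. on held-out test examples rather than the training set.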

Page 18

Learning from examples – Gradient descent

Now imagine that v is a real value (as opposed to binary)

$u = f(s)$

$v(s) = w \cdot u$

We want to choose the weights so that the output approximates some function h(s)

Move along the gradient of the error between the desired output and the current output
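A minimal sketch of this procedure (not from the slides): for v(s) = w·u with u = f(s), stochastic gradient descent on the squared error gives the "delta rule" update w ← w + ε (h(s^m) − v(s^m)) u(s^m). The input encoding f and target function h below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of gradient descent on E = (1/2) sum_m (h(s^m) - v(s^m))^2
# with v(s) = w . u(s). The encoding f(s) and target h(s) are illustrative.

rng = np.random.default_rng(1)
s = rng.uniform(-np.pi, np.pi, size=200)                        # stimulus values
u = np.stack([np.sin(s), np.cos(s), np.ones_like(s)], axis=1)   # u = f(s)
h = 2.0 * np.sin(s) + 0.5                                       # desired output h(s)

w = np.zeros(3)
eps = 0.05
for _ in range(100):                          # stochastic gradient descent
    for m in rng.permutation(len(s)):
        v = w @ u[m]
        w += eps * (h[m] - v) * u[m]          # move along -dE/dw for this example
print("learned weights:", np.round(w, 3))     # ~ [2.0, 0.0, 0.5]
```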

Page 19

Outline

1. Why theoretical neuroscience?
2. Single neuron models
3. Network models
4. Algorithms and methods
5. A sample of a few computational studies

Page 20

Some examples of computational algorithms and methods

• Different techniques for time-frequency analysis of neural signals (e.g. Pesaran et al 2002, Fries et al 2001)

• Spike sorting (e.g. Lewicki 1998, Quian Quiroga et al 2005)

• Machine learning approaches to decoding neuronal responses (e.g. Hung et al 2005, Wilson et al 1993, Musallam et al 2004)

• Information theory (e.g. Abbott et al 1996, Bialek et al 1991)

• Neural coding (e.g. Gabbiani et al 1998, Bialek et al 1991)

• Definition of spatio-temporal receptive fields, phenomenological models, measures of neuronal synchrony, spike train statistics

Page 21

Outline

1. Why theoretical neuroscience?
2. Single neuron models
3. Network models
4. Algorithms and methods
5. A sample of a few computational studies

Page 22

Predicting spikes in the retina

Keat J, Reinagel P, Reid RC, Meister M (2001) Predicting every spike: a model for the responses of visual neurons. Neuron 30:803-817.

$g(t) = s(t) * F(t)$

s(t) = visual stimulus
F(t) = linear filter
"*" = convolution
g(t) = "generator" potential
θ = threshold
r(t) = firing event (1 when g(t) > θ)

Page 23

Predicting spikes in the retina

Keat J, Reinagel P, Reid RC, Meister M (2001) Predicting every spike: a model for the responses of visual neurons. Neuron 30:803-817.

$h(t) = g(t) + P(t)$

$P(t) = B \exp(-t/\tau)$
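Putting the pieces together, the sketch below (illustrative, not the authors' code) implements the deterministic part of the model: convolve the stimulus with a linear filter to get g(t), fire whenever h(t) = g(t) + P(t) exceeds θ, and inject the after-spike feedback P(t) = B exp(−t/τ). The filter shape and the values of θ, B, and τ are assumptions.

```python
import numpy as np

# Minimal sketch of the deterministic part of the Keat et al. (2001) model.
# Filter shape, theta, B and tau are illustrative assumptions.

def predict_spikes(s, F, theta=1.0, B=-2.0, tau=20.0, dt=1.0):
    """s: stimulus; F: filter (arrays sampled at dt ms). Returns spike indices."""
    g = np.convolve(s, F)[:len(s)]           # g(t) = s(t) * F(t)
    P = np.zeros(len(s))
    spikes = []
    for t in range(len(s)):
        h = g[t] + P[t]                      # h(t) = g(t) + P(t)
        if h > theta:
            spikes.append(t)
            ts = np.arange(len(s) - t) * dt
            P[t:] += B * np.exp(-ts / tau)   # negative after-spike feedback
    return spikes

# Example: a biphasic linear filter and a white-noise stimulus
rng = np.random.default_rng(2)
t = np.arange(0, 100, 1.0)
F = t * np.exp(-t / 10) - 0.5 * t * np.exp(-t / 20)   # biphasic filter
s = rng.normal(size=1000)
print(f"{len(predict_spikes(s, F))} predicted spikes")
```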

Page 24

Predicting spikes in the retina

Keat J, Reinagel P, Reid RC, Meister M (2001) Predicting every spike: a model for the responses of visual neurons. Neuron 30:803-817.

Two “noise” sources are added to account for trial-to-trial variability:

a(t) = gaussian noise at the generator potential

b(t) = random fluctuations in the feedback potential P(t)

Page 25

The “blue brain” modeling project

-http://bluebrain.epfl.ch

- IBM's Blue Gene supercomputer

- “Reverse engineer” the brain in a “biologically accurate” way

- November 2007 milestone: 30 million synapses in “precise” locations to model a neocortical column

- Compartmental simulations for neurons

- Needs another supercomputer for visualization (10,000 neurons, high quality mesh, 1 billion triangles, 100 Gb)

QUESTION: What is the “right” level of abstraction needed to understand the function of cortical circuitry?

Page 26

An object can cast an infinite number of projections on the retina

Page 27

A brute force approach to object recognition

Task: Recognize the handwritten “A”

A "brute force" solution:
- Use templates for each letter
- Use multiple scales for each template
- Use multiple positions for each template
- Use multiple rotations for each template
- Etc.

Problems with this approach:
- Large amount of storage for each object
- No extrapolation, no intelligent learning
- Need to learn about each object under each condition
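To see the cost concretely, here is a minimal sketch (illustrative, not from the slides) of brute-force matching: every stored template is correlated with every image position, and a full system would additionally loop over scales and rotations, multiplying the number of comparisons.

```python
import numpy as np

# Minimal sketch of brute-force template matching: slide each stored
# template over every image position and report the best normalized
# correlation. Templates and image here are illustrative random arrays.

def best_match(image, templates):
    """Return (template index, position, score) of the best match."""
    best = (None, None, -np.inf)
    for k, T in enumerate(templates):
        th, tw = T.shape
        Tn = (T - T.mean()) / (T.std() + 1e-9)
        for i in range(image.shape[0] - th + 1):
            for j in range(image.shape[1] - tw + 1):
                patch = image[i:i + th, j:j + tw]
                Pn = (patch - patch.mean()) / (patch.std() + 1e-9)
                score = np.mean(Tn * Pn)      # normalized cross-correlation
                if score > best[2]:
                    best = (k, (i, j), score)
    return best

# Even this toy case needs |templates| x |positions| correlations; adding
# scales and rotations multiplies the count further.
rng = np.random.default_rng(3)
templates = [rng.normal(size=(8, 8)) for _ in range(26)]   # one per letter
image = rng.normal(size=(64, 64))
image[20:28, 30:38] = templates[0]           # embed the "A" template
k, pos, score = best_match(image, templates)
print(f"best template {k} at {pos}, score {score:.2f}")    # -> 0 at (20, 30)
```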

Page 28

A non-exhaustive list of biological object recognition models

- K. Fukushima. Neocognitron: a self organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 1980. 36: 193-202.
- Y. LeCun, L. Bottou, Y. Bengio and P. Haffner. Gradient-based learning applied to document recognition. Proc of the IEEE, 1998. 86: 2278-2324.
- G. Wallis and E.T. Rolls. Invariant face and object recognition in the visual system. Progress in Neurobiology, 1997. 51: 167-94.
- B. Mel. SEEMORE: Combining color, shape and texture histogramming in a neurally inspired approach to visual object recognition. Neural Computation, 1997. 9: 777.
- B.A. Olshausen, C.H. Anderson and D.C. Van Essen. A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. J Neurosci, 1993. 13: 4700-19.
- M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 1999. 2: 1019-1025.
- G. Deco and E.T. Rolls. A neurodynamical cortical model of visual attention and invariant object recognition. Vision Res, 2004. 44: 621-42.
- P. Foldiak. Learning Invariance from Transformation Sequences. Neural Computation, 1991. 3: 194-200.

Some common themes across multiple models:

• Hierarchical structure
• Increased "receptive field" size
• Increased complexity in shape preferences
• Increased invariance

Page 29

Neocognitron

Fukushima K. (1980) Neocognitron: a self organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics 36, 193-202

- Retinotopically arranged connections between layers
- Feature-extracting "S" cells
- "C" cells performing a local "OR" operation
- Increasing buildup of position tolerance
- Unsupervised learning in S layers
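The sketch below (illustrative, not Fukushima's implementation) shows one S/C stage of this architecture: S cells extract a feature by rectified correlation with a small kernel, and C cells take a local max (a local "OR") that buys tolerance to small shifts. The kernel and sizes are assumptions.

```python
import numpy as np

# Minimal sketch of one S/C stage of the Neocognitron. The edge-detector
# kernel, pooling size, and toy image are illustrative assumptions.

def s_layer(image, kernel):
    """Feature-extracting S cells: rectified correlation with the kernel."""
    kh, kw = kernel.shape
    H, W = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = max(0.0, np.sum(image[i:i + kh, j:j + kw] * kernel))
    return out

def c_layer(s_map, pool=3):
    """C cells: local OR (max) over pool x pool neighborhoods -> shift tolerance."""
    H, W = s_map.shape
    return np.array([[s_map[i:i + pool, j:j + pool].max()
                      for j in range(0, W - pool + 1, pool)]
                     for i in range(0, H - pool + 1, pool)])

# A diagonal-edge detector: the C response barely changes under small shifts
kernel = np.eye(3) - 1 / 3        # zero-sum kernel preferring a diagonal
image = np.zeros((12, 12))
for d in range(4, 9):
    image[d, d] = 1.0             # a short diagonal line
print(c_layer(s_layer(image, kernel)))
```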

Page 30

Recognizing objects by part decomposition

Biederman (1987) Psychological Review

Page 31

Object recognition by alignment to prototypes

Ullman (1996) High-level vision

(Figure: a prototype shape; alignment of 3 points to the prototype (black arrows). Note: some points may not align (red ellipses).)

Page 32

Invariance in visual object recognition

Poggio T, Edelman S (1990) A network that learns to recognize 3D objects. Nature 343:263-266.

Several computational models for rotation invariance rely on building 3D object models. Here, recognition is instead based on learning from a few perspective views, using generalized radial basis functions:

$f(x) = \sum_{i=1}^{K} c_i \, G(\lVert x - t_i \rVert)$

t_i = centers
c_i = coefficients
G = basis function (e.g. Gaussian)
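A minimal sketch of this scheme (not the authors' code): store a few example views t_i as centers, fit the coefficients c_i by least squares on those views, and evaluate f(x) on novel inputs. The Gaussian width and the toy "view" feature vectors are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the generalized RBF network f(x) = sum_i c_i G(||x - t_i||).
# The Gaussian width and the toy views are illustrative assumptions.

def G(r, sigma=1.0):
    return np.exp(-r**2 / (2 * sigma**2))     # Gaussian basis function

def design_matrix(X, centers, sigma=1.0):
    """Phi[j, i] = G(||x_j - t_i||)."""
    return G(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), sigma)

rng = np.random.default_rng(4)
# Toy "views": feature vectors for an object seen from K perspectives
K, D = 5, 6
centers = rng.normal(size=(K, D))             # t_i = stored example views
targets = np.ones(K)                          # desired output: "this object" = 1

Phi = design_matrix(centers, centers)
c = np.linalg.solve(Phi + 1e-6 * np.eye(K), targets)   # fit coefficients c_i

# A novel view near a stored view scores high; an unrelated vector does not
novel = centers[2] + 0.1 * rng.normal(size=D)
unrelated = rng.normal(size=D)
for name, x in [("novel view", novel), ("unrelated", unrelated)]:
    f = design_matrix(x[None, :], centers) @ c
    print(f"{name}: f = {f[0]:.2f}")
```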

Page 33

Invariance in visual object recognition

Poggio T, Edelman S (1990) A network that learns to recognize 3D objects. Nature 343:263-266.

Page 34

Computer vision

- Goal: “generic” object recognition

- Challenge: object transformations, intra-class variability for categorization

- e.g. Caltech 101 dataset, 30-800 exemplars/category

• Learning generative visual models. L. Fei-Fei, R. Fergus, and P. Perona. CVPR 2004.
• Shape Matching and Object Recognition using Low Distortion Correspondence. Alexander C. Berg, Tamara L. Berg, Jitendra Malik. CVPR 2005.
• The Pyramid Match Kernel: Discriminative Classification with Sets of Image Features. K. Grauman and T. Darrell. ICCV 2005.
• Combining Generative Models and Fisher Kernels. A.D. Holub, M. Welling, P. Perona. ICCV 2005.
• Exploiting Unlabelled Data for Hybrid Object Classification. A.D. Holub, M. Welling, P. Perona. NIPS 2005 Workshop on Inter-Class Transfer.
• Object Recognition with Features Inspired by Visual Cortex. T. Serre, L. Wolf and T. Poggio. CVPR 2005.

Wang et al CVPR 2006