From brains to machine learning and back again
David Rolnick, MIT Applied Math, Media Lab, CSAIL
UC Davis, October 18, 2016
Part I: From brains to machine learning
The brain
Deep learning and the visual cortex
[Figure: layers of simple cells feeding into complex cells]
● Simple cells are sensitive to different stimuli at different places.
● Complex cells pool the results of simple cells - they respond to different stimuli at *any* place.
● Convolutional neural net: alternates between convolutional layers (simple cells) and pooling layers (complex cells)
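The alternation above can be sketched in a few lines. A minimal 1-D sketch (assuming NumPy; the signal, kernel, and pooling width are illustrative): the convolution responds to a stimulus at each position, like simple cells, and max pooling responds if the stimulus occurs anywhere in its window, like complex cells.

```python
import numpy as np

def convolve1d(signal, kernel):
    """Valid-mode cross-correlation: one 'simple cell' response per position."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def max_pool(responses, width):
    """Pool over windows: a 'complex cell' fires if any position in its window does."""
    return np.array([responses[i:i + width].max()
                     for i in range(0, len(responses) - width + 1, width)])

signal = np.array([0, 0, 1, -1, 0, 0, 0, 0], dtype=float)
edge_kernel = np.array([1.0, -1.0])        # detects a falling edge
simple = convolve1d(signal, edge_kernel)   # strong response only where the edge is
complex_out = max_pool(simple, 3)          # responds if the edge is anywhere in the window
```

The pooled output is invariant to where in each window the edge occurred, which is exactly the simple-to-complex abstraction the slide describes.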
Concepts as attractor networks
[Figure: attractor network of related concepts (dog, cat, pet, bone, meow, skull, brain) activated by the word "dog"]
Hopfield model for attractor networks
Each vertex xᵢ takes values ±1 and updates according to:
xᵢ = sign( ∑ⱼ Wᵢⱼ xⱼ )
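A minimal sketch of these dynamics (assuming NumPy, and assuming the standard Hebbian storage rule W = (1/n) ∑ₖ pₖ pₖᵀ, which the slide does not specify): two random patterns are stored, one is corrupted, and iterating the update rule drives the state back toward the stored memory.

```python
import numpy as np

def hebbian_weights(patterns):
    """Standard Hebbian rule: W = (1/n) * sum_k p_k p_k^T, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)            # no self-connections
    return W

def recall(W, x, steps=20):
    """Iterate x_i = sign(sum_j W_ij x_j); synchronous updates for simplicity."""
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1                 # break ties toward +1
    return x

rng = np.random.default_rng(0)
memories = rng.choice([-1, 1], size=(2, 50))   # two random ±1 patterns
W = hebbian_weights(memories)

noisy = memories[0].astype(float)
flip = rng.choice(50, size=5, replace=False)
noisy[flip] *= -1                     # corrupt 5 of 50 bits
recovered = recall(W, noisy)
```

With only two stored patterns on fifty neurons, the corrupted state sits well inside the basin of attraction of Memory 1 and the network converges back to it.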
Hopfield networks
[Figure: network states converging to stored attractors, Memory 1 and Memory 2]
Simulating Markov chain dynamics
Joint work with Haim Sompolinsky, Ishita Dasgupta, and Jeremy Bernstein
[Figure: transitions among Patterns 1-3 with probabilities p = 0.5 and p = 1]
● Attractors can be deterministic sequences of patterns
● What about non-deterministic sequences?
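The target behavior is sampling from a Markov chain over patterns. A minimal sketch (the p = 0.5 and p = 1 values match the slide; the specific three-pattern chain is illustrative):

```python
import random

# transitions[state] = list of (next_state, probability)
transitions = {
    "Pattern1": [("Pattern2", 0.5), ("Pattern3", 0.5)],   # stochastic: p = 0.5 each
    "Pattern2": [("Pattern1", 1.0)],                      # deterministic: p = 1
    "Pattern3": [("Pattern1", 1.0)],
}

def sample_chain(start, steps, seed=0):
    """Sample a trajectory of patterns from the chain."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps):
        options = transitions[state]
        nxt = [s for s, _ in options]
        probs = [p for _, p in options]
        state = rng.choices(nxt, weights=probs)[0]
        path.append(state)
    return path

path = sample_chain("Pattern1", 10)
```

A deterministic attractor sequence can only reproduce one fixed trajectory; the point of the model that follows is to realize the stochastic branches as well.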
Motivation
[Figure: Bayesian network with nodes Weather, Rain, Sprinkler, Wet roof, Wet grass]
Model outline
[Figure: memory attractors and noise attractors feed, after a delay, into a mixed representation]
[Figure: deterministic transitions (p = 1) among states s1, s2, s3, and stochastic transitions (p = ⅓, p = ⅔) driven by noise attractors n1, n2, n3]
[Figure: binary (0/1) activity of the memory attractors and noise attractors in the mixed representation over time]
The network in action
Model demo
[Figure: Markov chain with transition probabilities P = 0.5 and P = 1, and the corresponding binary memory-attractor activity over time]
Applications of Hopfield networks
Joint work with Christopher Hillar, Felix Effenberger, and Sarah Marzen
Hopfield networks on n vertices can store...
● Cover (1965): ...at most O(n) randomly selected patterns.
● Hillar & Tran (2014): ...at least O(exp(√n)) nonrandom, nontrivial patterns.
● Theorem (Effenberger, Hillar, Marzen, R.): ...at least O(exp(n^(1−o(1)))) nonrandom, nontrivial patterns.
Image processing in the brain
[Figures: Hermann grid illusion; checker shadow illusion]
Image-processing with Hopfield networks
Data from the Tasovanis group at the German Center for Neurodegenerative Diseases (DZNE), Bonn
Denoising images
Continuous neuron dynamics
Joint work with Carina Curto, figures from Morrison et al.
● Threshold-linear networks
● Oscillating behavior observed by Morrison et al. (unproven); limit cycles as attractors
● Theorem (Curto & R.): oscillating behavior is indeed a stable state for certain simple network architectures.
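The slide omits the dynamics; a standard threshold-linear form (assumed here, as in the work of Curto, Morrison, and collaborators) is dxᵢ/dt = −xᵢ + [∑ⱼ Wᵢⱼ xⱼ + bᵢ]₊. A minimal Euler-integration sketch on a cyclic 3-neuron inhibitory architecture, the kind of simple architecture where oscillation appears:

```python
import numpy as np

def simulate_tln(W, b, x0, dt=0.01, steps=20000):
    """Euler integration of dx/dt = -x + [Wx + b]_+ (rectification [.]_+)."""
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, len(x0)))
    for t in range(steps):
        x = x + dt * (-x + np.maximum(W @ x + b, 0.0))
        traj[t] = x
    return traj

# Cycle 0 -> 1 -> 2 -> 0: inhibition along the cycle is weaker (-1 + eps)
# than against it (-1 - delta); parameter values are illustrative.
eps, delta = 0.25, 0.5
W = np.array([[0.0,        -1 - delta, -1 + eps],
              [-1 + eps,   0.0,        -1 - delta],
              [-1 - delta, -1 + eps,   0.0]])
b = np.ones(3)
traj = simulate_tln(W, b, x0=[0.4, 0.1, 0.1])
```

Starting off the symmetric state, activity settles into a limit cycle in which the three neurons take turns firing, rather than converging to a fixed point.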
Part II: From machine learning to brains
Connectomics
● Input: microscope images of slices of brain tissue.
● Slices aligned and stacked.
● Boundaries of neurons are predicted with deep learning.
● Neurons are filled in.
● Output: 3D segmentation.
Connectomics with context
● Neurons look like this:
● Standard methods use only local context
● Leads to mistakes like this:
Joint work with Nir Shavit, Yaron Meirovitch, and the MIT Computational Connectomics Group
Learning neuron morphologies
Problem: How to learn a distribution on embedded graphs?
One approach: Throw out outliers
● Learn common types of errors
Another approach: Similarity scores
● Compare, cluster graphs based on similarity
● Use a library of known graphs to evaluate plausibility of candidate graphs
Fixing merged neurons
● Neuron as graph embedded in ℝ³
● Smooth embedding, trim short branches
● Measure instantaneous direction and radius
● Check for coherent splits into subgraphs
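One step above, trimming short branches, can be sketched directly. A minimal sketch (illustrative data structure, not the authors' code): the skeleton is a graph whose edges carry lengths, and any leaf branch whose total length to the nearest branch point falls below a threshold is pruned.

```python
from collections import defaultdict

def trim_short_branches(edges, min_length):
    """edges: dict {(u, v): length}. Returns the surviving edge dict."""
    edges = dict(edges)
    while True:
        adj = defaultdict(list)
        for (u, v), _ in edges.items():
            adj[u].append(v)
            adj[v].append(u)
        removed = False
        for node, nbrs in list(adj.items()):
            if len(nbrs) != 1:
                continue                  # only prune starting from leaves
            # Walk from the leaf to the nearest branch point, summing length.
            path, length, prev, cur = [], 0.0, None, node
            while len(adj[cur]) <= 2:
                nxt = [w for w in adj[cur] if w != prev]
                if not nxt:
                    break
                e = (cur, nxt[0]) if (cur, nxt[0]) in edges else (nxt[0], cur)
                path.append(e)
                length += edges[e]
                prev, cur = cur, nxt[0]
            if path and length < min_length:
                for e in path:
                    edges.pop(e)          # remove the whole short branch
                removed = True
                break                     # rebuild adjacency after each prune
        if not removed:
            return edges

# Demo: a long trunk a-b-c with a tiny spur b-d; the spur is trimmed.
skeleton = {("a", "b"): 10.0, ("b", "c"): 10.0, ("b", "d"): 1.0}
pruned = trim_short_branches(skeleton, min_length=2.0)
```

Pruning spurious spurs like this leaves only the branches long enough to reflect real morphology, which is what the direction and radius measurements then operate on.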
Shape context
● Original algorithm (Belongie & Malik, 2000)
● Pick random sample points on each neuron
● Compute Euclidean distance and shortest-path distance between sample points
Example 2D histogram for one sample point (rows: fraction of other points at each shortest-path distance; columns: fraction of other points at each Euclidean distance):

            0-50   50-100  100-200  200-400  400-800
400-800     0.01   0.01    0.05     0.02     0
200-400     0.02   0       0.04     0.03     0
100-200     0.02   0.06    0.2      0.1      0.01
50-100      0.01   0.1     0.15     0        0
0-50        0.01   0.04    0.13     0.08     0.01
Joint work with Viren Jain and Google Research
Shape context
● Minimum-cost perfect matching (linear assignment) in a complete bipartite graph
● Pair sample points between two neurons
● Matching cost (edge weight) = χ²-distance between histograms
● Similarity score = normalized minimum cost
● Build a library of known neuron morphologies: sets of histograms
● Compare candidate morphologies against the library
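A minimal sketch of this scoring step (assuming SciPy's linear-assignment solver; each neuron is summarized by one histogram per sample point, as above, and the toy data here is illustrative):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def similarity_cost(hists_a, hists_b):
    """Normalized cost of a min-cost perfect matching between sample points."""
    cost = np.array([[chi2_distance(a, b) for b in hists_b] for a in hists_a])
    rows, cols = linear_sum_assignment(cost)   # min-cost bipartite matching
    return cost[rows, cols].mean()             # lower cost = more similar

# Toy data standing in for shape-context histograms (one row per sample point).
rng = np.random.default_rng(1)
def random_hists(k, bins=25):
    h = rng.random((k, bins))
    return h / h.sum(axis=1, keepdims=True)

neuron_a = random_hists(8)
perturbed = np.clip(neuron_a + rng.normal(0, 0.01, neuron_a.shape), 0, None)
unrelated = random_hists(8)
```

A slightly perturbed copy of a neuron's histograms scores a much lower matching cost against the original than an unrelated neuron does, which is what makes the score usable against a library of known morphologies.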
Shape context
[Figures: partial neuron A; neurons with high similarity to A; neurons with low similarity to A]
● Robust to differences in sample preparation
● Not sensitive to angles, small variations in spines, etc.
● Highly sensitive to erroneous connections
Score distributions, individual neurons
Shape context
● Each point represents a neuron
● t-SNE embedding, colors from k-medians clustering
Coming full circle
● Learning the structures of real neural nets with artificial neural nets
● Project neurons into 2D images for deep learning:
Thanks to all these people...
● Nir Shavit, Yaron Meirovitch, and the MIT Computational Connectomics group
● Ed Boyden and the MIT Synthetic Neurobiology group
● Viren Jain and Google Research
● Haim Sompolinsky, Ishita Dasgupta, and Jeremy Bernstein
● Carina Curto
● Christopher Hillar, Felix Effenberger, and Sarah Marzen
● This work was also supported by the Center for Brains, Minds, and Machines and the National Science Foundation (grant no. 1122374).
...and thank you!