Transcript of Biological Foundations for Deep Learning: Towards Decision Networks
Nathan R. Wilson, Ph.D. | CSIG Speaker Series | June 23, 2016
Biological Foundations for Deep Learning: Towards Decision Networks
Today
Neural Networks and Neuroscience
• Benefits of Cross-Pollination
• New Learning Rules and Emerging Analogies
• “Recommendations and Decision Support” as an Enriched Domain for Both
Benefits of Cross-Pollination
Overcoming Stereotypes
Neuroscientist (stereotype):
• Focused on insignificant details
• Loves chemicals, ice, rodents
• Has trouble seeing the big picture
Reality: has many of the same goals as DL researchers
Deep learning researcher (stereotype):
• Doesn’t pay any dues
• Disembodied from traditional fields
• Works out of coffee shops
Reality: establishing one of the most important disciplines of the 21st century
Neural networks and neuroscience share the same core orientation:
• Same Goal: an overarching understanding of a general algorithm
• Same Structure:
  • Connectionist, not von Neumann
  • Pathways, not rules
  • Connection weights are the key
  • Structure is function
• Same Puzzles:
  • What is the optimal transfer function?
  • Is the code distributed or localized?
  • How are sequences learned?
So why study nature? Why biological neural networks?
A Lesson from the 20th Century: Aviation
[Image: Otto Lilienthal, foundations of modern aviation]
“AI is to the brain as airplanes are to birds. The details are different, but the underlying principles are the same.” -- Yann LeCun, 2015
The Wright brothers spent a great deal of time observing birds in flight.
Common retort: “Of course, modern aircraft look nothing like birds.”
Stabilizing Similar Ideas Through Cross-Linking
Neuroscience: has spent a century mapping connectionist frameworks directly to cognitive, psychological, and social principles => where AI is headed.
Neural networks: are emerging as the dominant framework for machine learning, and will inherit / reconcile mappings from other adjacent fields of AI.
Why Now?
Neuroscience has: new tools to interact with cells at the “network” level (Zhang et al., 2010; Wilson et al., 2013, Nature Protocols) and uncover insights.
Neural networks have: rigorous frameworks and data sets for evaluating network intelligence, and software for identifying and optimizing key parameters of learning.
A difference of terminology, but not concepts.
What is the goal?
• Neuroscience: “maximize reward”
• Neural networks: “minimize loss”
What is the system doing?
• Neuroscience: “learning from local micro-successes” (Hebb)
• Neural networks: “globally optimizing a function” (backprop)
What are additional parameters for?
• Neuroscience: stabilizing firing rates
• Neural networks: regularization
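To make this mapping concrete, here is a minimal sketch (illustrative code, not from the talk) contrasting a local Hebbian update with a global gradient-descent update for a single linear unit; the learning rate and data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)  # weights of a single linear unit y = w . x

def hebbian_update(w, x, lr=0.01):
    """Local rule (Hebb): strengthen weights in proportion to the
    co-activity of pre-synaptic input x and post-synaptic output y."""
    y = w @ x
    return w + lr * y * x  # uses only locally available quantities

def gradient_update(w, x, target, lr=0.01):
    """Global rule (backprop, one-layer case): minimize the squared
    loss L = (y - target)^2 / 2 by following its gradient."""
    y = w @ x
    return w - lr * (y - target) * x  # uses an explicit error signal

x, target = np.array([1.0, 0.5, -0.2]), 1.0
w_hebb = hebbian_update(w, x)
w_grad = gradient_update(w, x, target)
```

The Hebbian rule requires no error signal at all, which is what makes the “local micro-successes” vs. “global optimization” contrast above more than a difference of vocabulary.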
Learning Rules and Analogies
Synaptic Plasticity and Backpropagation
“Feedforward” transmission is electrical and easy to measure in the brain; “retrograde” signals are more “invisible,” but candidates exist:
• Signals like NGF, BDNF, cannabinoids, NO
• Some are released in proportion to synapse strength
• They can travel back through vesicular uptake and cytoplasmic transport
Hebbian or STDP learning could provide the mechanics for gradient descent (Markram et al., 1997; Bi and Poo, 1998; Xie and Seung, 2003; Bengio et al., 2016); see the sketch below. A mismatch between pairs of neurons could be construed by the cells as a local error signal, which could then propagate further. Methods are emerging that will explain how/if backpropagation happens.
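As a minimal sketch of how pairwise spike timing could act as a local learning signal, here is a standard pair-based STDP window (illustrative; the amplitudes and time constant are hypothetical parameter choices):

```python
import numpy as np

def stdp_delta_w(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: dt = t_post - t_pre in milliseconds.
    Pre-before-post (dt > 0) potentiates the synapse;
    post-before-pre (dt < 0) depresses it."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

print(stdp_delta_w(5.0))   # pre leads post by 5 ms -> potentiation
print(stdp_delta_w(-5.0))  # post leads pre by 5 ms -> depression
```

The signed, timing-dependent weight change is exactly the kind of “mismatch between pairs of neurons” that could serve as a local error signal.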
Synaptic Plasticity and Regularization
Resisting “out of range” connections: neurons will “auto-tune” at different scales:
• Inter-synaptic competition (Fonseca, 2002)
• Single neuron within a network (Murthy, 2003)
• Trading strength for more partners (Wilson, 2007)
• Whole-network scaling (Turrigiano, 1998, 2008)
Neurons also undergo forms of “dropout”:
• Sparse coding and decorrelation (Olshausen, 2004)
• Stochastic firing; stochastic synapses (Zador, 1999; Abbott, 2004)
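Here is an illustrative sketch (not from the talk; target_total and p are hypothetical parameters) of the two analogies above: homeostatic synaptic scaling as a weight-normalization step, and stochastic silencing as a dropout mask:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))  # incoming weights for 4 neurons

def synaptic_scaling(w, target_total=1.0):
    """Whole-network scaling sketch (cf. Turrigiano): multiplicatively
    rescale each neuron's incoming weights toward a fixed total
    strength, keeping firing rates in range."""
    totals = np.abs(w).sum(axis=1, keepdims=True)
    return w * (target_total / totals)

def dropout_mask(shape, p=0.5):
    """Stochastic silencing sketch: drop units with probability p,
    rescaling survivors ("inverted dropout") so expected activity
    is unchanged."""
    return (rng.random(shape) >= p) / (1.0 - p)

w_scaled = synaptic_scaling(w)                 # regularized weights
activity = rng.random((4, 8)) * dropout_mask((4, 8))
```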
Negative Weights in Networks
Inhibitory connections: more than meets the eye (Denève et al., Nature Neuroscience, 2016).
• Cells seem to “want” an excitatory / inhibitory balance (Liu, Nature Neuroscience, 2004)
• Inhibition is the basis for network gain control (Wilson et al., Nature, 2012; Carandini et al., Nature Reviews Neuroscience, 2012)
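An illustrative sketch of inhibition-mediated gain control, in the spirit of the canonical normalization model reviewed by Carandini (the exponent and semi-saturation constant are hypothetical choices):

```python
import numpy as np

def divisive_normalization(drive, sigma=1.0, n=2.0):
    """Divisive normalization: each unit's response is its own drive
    divided by the pooled activity of the network, an operation
    attributed to inhibitory circuits."""
    num = drive ** n
    return num / (sigma ** n + num.sum())

drive = np.array([0.5, 1.0, 3.0])
print(divisive_normalization(drive))       # responses to weak input
print(divisive_normalization(10 * drive))  # responses saturate: gain control
```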
Recommendations / Decision Support: an Interesting Network Learning Problem
Interesting aspects of recommendations / decision support:
• As with games, it connects perception to cognition and action
• It remains an original commercial justification of machine learning
• Highly structured data sets and goals; rigorous arenas for success
[Diagram: games (“which move to make”) alongside recommendations / decision support]
Multi-level learning problems: our networks learn to “match” contexts to decisions:
• Travel: which spots should I visit when my plane lands?
• Entertainment: which movie is right for me and my friends?
• Medicine: which available doctor is right for this patient?
• Crime: which events could be related to this incident?
• Fraud: which recent behavior doesn’t match the others?
• Supply Chain: which component is exhibiting fault-predictive traits?
Evaluate many criteria, and “match” a recommended decision, as in the sketch below. Brain-like algorithms can construct networks of knowledge in any domain to power real-time recommendations and decisions.
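A toy sketch of “matching” a context to a recommended decision by scoring weighted criteria (purely illustrative; the names, features, and weights are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    features: dict  # feature name -> strength in [0, 1]

def match_score(context, candidate, weights):
    """Score a candidate decision against a context by summing the
    weighted overlap between required and offered features."""
    return sum(weights.get(f, 1.0) * candidate.features.get(f, 0.0)
               for f in context)

# Example: which available doctor is right for this patient?
context = {"cardiology", "speaks_spanish", "available_today"}
weights = {"cardiology": 2.0, "available_today": 1.5}
doctors = [
    Candidate("Dr. A", {"cardiology": 1.0, "available_today": 1.0}),
    Candidate("Dr. B", {"cardiology": 1.0, "speaks_spanish": 1.0}),
]
best = max(doctors, key=lambda d: match_score(context, d, weights))
print(best.name)  # "Dr. A": 2.0 + 1.5 = 3.5 beats 2.0 + 1.0 = 3.0
```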
Elucidating Pathways for Recommendations Through Network Learning
Learning Representations Using “Perceptual” vs. “Cognitive” Structures
[Image caption: “Mommy’s hair is melting”]
Challenge 1: recommendations need to cold start and generalize to new things, but then also to hyper-optimize.
• Bottom-up representations via sparse, “one-shot” learning => like PageRank (see the sketch below)
• Works in cold-start conditions
• Can trace back reasons for answers
Deep learning techniques can then further optimize recommendations when historical data is available to support them => see paper on this appearing soon.
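An illustrative sketch of the PageRank-style bottom-up ranking idea (standard power iteration over a hypothetical item-association graph; no usage history is required, which is what enables cold start):

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=50):
    """Rank items by power iteration over an association graph,
    where adj[i][j] = 1 if item i is associated with item j."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                    # guard against sink nodes
    transition = adj / out                 # row-normalized transitions
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * rank @ transition
    return rank

# Four items linked only by association; rankable with zero history.
adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
print(pagerank(adj))
```

Because the ranking follows explicit graph edges, the path to any answer can be traced back, matching the interpretability point above.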
Challenge 2: the number of nuanced features gathered is pivotal to your recommendation success, across all algorithms.
[Images: Liam Neeson, 1990s vs. 2010s]
[Figure: Andrew Ng on the importance of data; concept plots of connectivity vs. plot concept]
Challenge 3: effective learning stacks are multi-component and must be kept organized (a sketch of such a stack follows below).
III. Data Organization Engine
• Structured, unstructured
• Cross-source, un-harmonized
• Feature engineering pathways
II. Learning Platform
• Unsupervised
• Supervised
I. Interfaces and APIs
• Recommendations
• Profiles / Analytics
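A minimal sketch of the three-component stack above as typed interfaces (hypothetical method names; this is not Nara Logics’ actual API):

```python
from typing import Any, Protocol

class DataOrganizationEngine(Protocol):
    """III: ingest structured/unstructured, cross-source data and
    expose feature-engineering pathways."""
    def ingest(self, source: str, records: list[dict]) -> None: ...
    def features(self, entity_id: str) -> dict[str, float]: ...

class LearningPlatform(Protocol):
    """II: unsupervised and supervised learning over the features."""
    def fit_unsupervised(self, features: list[dict]) -> None: ...
    def fit_supervised(self, features: list[dict], labels: list[Any]) -> None: ...

class RecommendationAPI(Protocol):
    """I: recommendations plus profiles/analytics for inspection."""
    def recommend(self, context: dict) -> list[str]: ...
    def profile(self, entity_id: str) -> dict: ...
```

Keeping the layers behind explicit interfaces is one way to keep a multi-component learning stack “organized” in the sense above.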
Deep Learning for Recommendations Is Now an Important Part of Our General Process
Observations around deep learning at Nara Logics:
• General-purpose recommendations pushed us to create a general implementation that worked across many domains and contexts.
• Deep learning is compatible with our neuroscience-inspired association networks, and we continue to work on this convergence.
• Analytics and interfaces into these networks are as important as the learning itself, for maintaining and extending performance.
Special thanks: Sahil Zubair, Denise Ichinco, Raymond Plante, Jana Eggers
Summary
Neuroscience and deep learning research offer complementary insights that can be utilized in practice. For galvanizing this work, “recommendations” offers a particularly rich and well-structured domain for exploring the relationship between data and decisions.