September 7, 2010 Neural Networks Lecture 1: Motivation & History
Welcome to
CS 672 – Neural Networks
Fall 2010
Instructor: Marc Pomplun
Instructor – Marc Pomplun

Office: S-3-171
Lab: S-3-135
Office Hours: Tuesdays 14:30-16:00, Thursdays 19:00-20:30
Phone: 287-6443 (office), 287-6485 (lab)
E-Mail: [email protected]
The Visual Attention Lab

Cognitive research, esp. eye movements
Example: Distribution of Visual Attention
Selectivity in Complex Scenes
Artificial Intelligence
Modeling of Brain Functions
Biologically Motivated Computer Vision
Human-Computer Interfaces
Grading

For the assignments, exams, and your course grade, the following scheme will be used to convert percentages into letter grades:

≥ 95%: A     ≥ 86%: B+    ≥ 74%: C+    ≥ 62%: D+
≥ 90%: A-    ≥ 82%: B     ≥ 70%: C     ≥ 56%: D
             ≥ 78%: B-    ≥ 66%: C-    ≥ 50%: D-
                                       < 50%: F
Complaints about Grading

If you think that the grading of your assignment or exam was unfair,

• write down your complaint (handwriting is OK),
• attach it to the assignment or exam,
• and give it to me or put it in my mailbox.

I will re-grade the whole exam/assignment and return it to you in class.
Computers vs. Neural Networks

"Standard" Computers       Neural Networks
one CPU                    highly parallel processing
fast processing units      slow processing units
reliable units             unreliable units
static infrastructure      dynamic infrastructure
Why Artificial Neural Networks?

There are two basic reasons why we are interested in building artificial neural networks (ANNs):

• Technical viewpoint: Some problems, such as character recognition or the prediction of future states of a system, require massively parallel and adaptive processing.

• Biological viewpoint: ANNs can be used to replicate and simulate components of the human (or animal) brain, thereby giving us insight into natural information processing.
Why Artificial Neural Networks?

Why do we need a paradigm other than symbolic AI for building "intelligent" machines?

• Symbolic AI is well suited for representing explicit knowledge that can be appropriately formalized.

• However, learning in biological systems is mostly implicit – it is an adaptation process based on uncertain information and reasoning.

• ANNs are inherently parallel and work extremely efficiently if implemented in parallel hardware.
How do NNs and ANNs work?

• The "building blocks" of neural networks are the neurons.

• In technical systems, we also refer to them as units or nodes.

• Basically, each neuron
  – receives input from many other neurons,
  – changes its internal state (activation) based on the current input,
  – sends one output signal to many other neurons, possibly including its input neurons (recurrent network).
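The three-step behavior above (receive input, update activation, send one output) can be sketched as a minimal artificial unit. The sigmoid activation function and the specific weights are illustrative assumptions, not something specified on this slide:

```python
import math

def unit(inputs, weights, bias):
    """A single artificial unit: it receives input values from other
    units, computes its activation from the weighted sum, and sends
    one output signal onward."""
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))  # sigmoid activation, in (0, 1)

# Inputs from two hypothetical upstream neurons:
output = unit([1.0, 0.5], [0.8, -0.4], bias=0.1)
```

In a network, this output would in turn serve as one of the inputs to many other units; feeding it back to the unit's own input neurons is what makes a network recurrent.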
How do NNs and ANNs work?

• Information is transmitted as a series of electric impulses, so-called spikes.

• The frequency and phase of these spikes encode the information.

• In biological systems, one neuron can be connected to as many as 10,000 other neurons.

• Usually, a neuron receives its information from other neurons in a confined area, its so-called receptive field.
History of Artificial Neural Networks

1938  Rashevsky describes neural activation dynamics by means of differential equations

1943  McCulloch & Pitts propose the first mathematical model for biological neurons

1949  Hebb proposes his learning rule: repeated activation of one neuron by another strengthens their connection

1958  Rosenblatt invents the perceptron by basically adding a learning algorithm to the McCulloch & Pitts model
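As a rough illustration of the 1958 entry, the perceptron's learning rule can be sketched in a few lines: a McCulloch-Pitts-style threshold unit whose weights are nudged toward the correct answer after each mistake. The learning rate, epoch count, and the AND example are illustrative choices, not from the slides:

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    """Rosenblatt-style perceptron learning (sketch): whenever the
    thresholded output is wrong, adjust weights and bias by the error."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:            # target is 0 or 1
            y = 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0
            err = target - y                 # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND, a linearly separable problem:
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

Because AND is linearly separable, the rule converges to a separating weight vector; the 1969 result below concerns exactly the problems where no such separation exists.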
History of Artificial Neural Networks

1960  Widrow & Hoff introduce the Adaline, a simple network trained through gradient descent

1961  Rosenblatt proposes a scheme for training multilayer networks, but his algorithm is weak because of non-differentiable node functions

1962  Hubel & Wiesel discover properties of the visual cortex, motivating self-organizing neural network models

1963  Novikoff proves the Perceptron Convergence Theorem
History of Artificial Neural Networks

1964  Taylor builds the first winner-take-all neural circuit with inhibitions among output units

1969  Minsky & Papert show that perceptrons are not computationally universal; interest in neural network research decreases

1982  Hopfield develops his auto-association network

1982  Kohonen proposes the self-organizing map

1985  Ackley, Hinton & Sejnowski devise a stochastic network named the Boltzmann machine
History of Artificial Neural Networks

1986  Rumelhart, Hinton & Williams provide the backpropagation algorithm in its modern form, triggering new interest in the field

1987  Hecht-Nielsen develops the counterpropagation network

1988  Carpenter & Grossberg propose Adaptive Resonance Theory (ART)

Since then, research on artificial neural networks has remained active, leading to numerous new network types and variants, as well as hybrid algorithms and hardware for neural information processing.