The Evolution of Learning Algorithms for Artificial Neural Networks


Transcript of The Evolution of Learning Algorithms for Artificial Neural Networks

Page 1: The Evolution of Learning Algorithms for Artificial Neural Networks

The Evolution of Learning Algorithms for Artificial Neural Networks

Published 1992 in Complex Systems by Jonathan Baxter

Michael Tauraso

Page 2: The Evolution of Learning Algorithms for Artificial Neural Networks

Genetic Algorithm on NNs

Start with a population of neural networks.
Find the fitness of each for a particular task.
Weed out the low-fitness ones.
Breed the high-fitness ones to make a new population.
Repeat.
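A minimal sketch of this loop in Python over binary-string genomes (the crossover, mutation, and selection settings here are illustrative assumptions, not the paper's exact choices; a real fitness function would decode each genome into a network and train it):

```python
import random

def evolve(fitness, genome_len, pop_size=50, generations=100, keep_frac=0.5):
    """Generic GA loop over binary-string genomes: evaluate, cull, breed, repeat."""
    # Start with a population of random genomes (each would decode to a network).
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Find the fitness of each genome for the task.
        ranked = sorted(population, key=fitness, reverse=True)
        # Weed out the low-fitness ones.
        survivors = ranked[:max(2, int(keep_frac * pop_size))]
        # Breed the high-fitness ones to make a new population.
        population = list(survivors)
        while len(population) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)      # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(genome_len)] ^= 1   # point mutation
            population.append(child)
    return max(population, key=fitness)

# Toy usage: evolve genomes that maximize the number of 1 bits.
print(evolve(fitness=sum, genome_len=16))
```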

Page 3: The Evolution of Learning Algorithms for Artificial Neural Networks

Local Binary Neural Networks (LBNNs)

All weights, inputs, and outputs are binary.
The learning rule is a localized boolean function of two variables.
This vastly simplifies everything.
LBNNs are easy to encode as binary strings.
LBNNs are easy to use in genetic algorithms.
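Since each weight takes only three values, a whole network is easy to flatten into a short bit string; the two-bits-per-weight scheme below is an illustrative assumption, not necessarily the encoding used in the paper:

```python
# Hypothetical encoding: two bits per ternary weight (00 -> 0, 01 -> +1, 10 -> -1).
WEIGHT_TO_BITS = {0: "00", 1: "01", -1: "10"}
BITS_TO_WEIGHT = {bits: w for w, bits in WEIGHT_TO_BITS.items()}

def encode_weights(weights):
    """Flatten a sequence of ternary weights into a binary string for the GA."""
    return "".join(WEIGHT_TO_BITS[w] for w in weights)

def decode_weights(bits):
    """Recover the ternary weights from a binary string."""
    return [BITS_TO_WEIGHT[bits[i:i + 2]] for i in range(0, len(bits), 2)]

assert decode_weights(encode_weights([1, -1, 0, 1])) == [1, -1, 0, 1]
```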

Page 4: The Evolution of Learning Algorithms for Artificial Neural Networks

An LBNN

Page 5: The Evolution of Learning Algorithms for Artificial Neural Networks

Rules for LBNNs

Weights are +1, -1, or 0.
Nodes: a_i(t+1) = sign( Σ_j a_j(t) w_ji(t) )
Weights: w_ij(t+1) = f(a_i(t), a_j(t))
Weights are classified as fixed or learnable. Zero weights are fixed.
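A direct transcription of the two update rules; the sign(0) = +1 tie-break, the list-of-lists weight layout, and the learnable mask are assumptions made to keep the sketch self-contained:

```python
def sign(x):
    # Assumed tie-break: sign(0) = +1, so activations stay strictly +/-1.
    return 1 if x >= 0 else -1

def update_nodes(a, w):
    """Node rule: a_i(t+1) = sign(sum_j a_j(t) * w_ji(t)); w[j][i] holds w_ji."""
    n = len(a)
    return [sign(sum(a[j] * w[j][i] for j in range(n))) for i in range(n)]

def update_weights(a, w, f, learnable):
    """Weight rule: w_ij(t+1) = f(a_i(t), a_j(t)) for learnable weights;
    fixed weights (including all zero weights) keep their value."""
    n = len(a)
    return [[f(a[i], a[j]) if learnable[i][j] else w[i][j] for j in range(n)]
            for i in range(n)]
```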

Page 6: The Evolution of Learning Algorithms for Artificial Neural Networks

Training Rules

Training rules are boolean functions of two variables.
There are 16 possible varieties.
The analog of Hebb's rule is given by:
f(a_i(t), a_j(t)) = a_i(t) a_j(t)
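With ±1 activations, each candidate rule is just a choice of output for the four possible input pairs, which is where the count of 2^4 = 16 comes from; a small enumeration sketch (the ±1 truth-table representation is an assumption of the sketch):

```python
from itertools import product

PAIRS = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

# All 16 boolean functions of two +/-1 variables, as truth tables over PAIRS.
ALL_RULES = [dict(zip(PAIRS, outputs)) for outputs in product([1, -1], repeat=4)]

def hebb(ai, aj):
    """Analog of Hebb's rule: strengthen when the two activations agree."""
    return ai * aj

# Hebb's rule is one of the 16 candidates (truth table +1, -1, -1, +1).
assert any(all(rule[p] == hebb(*p) for p in PAIRS) for rule in ALL_RULES)
```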

Page 7: The Evolution of Learning Algorithms for Artificial Neural Networks

Training Goal

Learn the 4 boolean functions of one variable: Identity, Inverse, Always 1, Always 0.
Who wants to learn the boolean functions of one variable anyway?
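For concreteness, the four target functions could be written out as follows, assuming the usual ±1 coding in which boolean 0 is represented by -1 (that coding is an assumption of the sketch):

```python
# The four boolean functions of one variable, as +/-1-valued targets.
TARGETS = {
    "identity": lambda x: x,
    "inverse":  lambda x: -x,
    "always 1": lambda x: 1,
    "always 0": lambda x: -1,  # boolean 0 coded as -1 (assumed coding)
}

for name, f in TARGETS.items():
    print(name, [f(x) for x in (1, -1)])
```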

Page 8: The Evolution of Learning Algorithms for Artificial Neural Networks

Fitness Determination

Start with an LBNN from the sample population.
Clamp the output node to train for a particular boolean function.
Fitness is how well the network performs at calculating that boolean function after training.
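A sketch of that evaluation loop, assuming hypothetical helpers train_clamped (one training pass with the output node clamped) and run (a free-running forward pass); the number of training steps and the accuracy-based score are placeholder choices, not the paper's exact protocol:

```python
def fitness(network, target, train_clamped, run, inputs=(1, -1), train_steps=20):
    """Train with the output clamped to the target function, then score accuracy."""
    # Training phase: clamp the output node to target(x) while the
    # learning rule updates the learnable weights.
    for _ in range(train_steps):
        for x in inputs:
            train_clamped(network, x, target(x))
    # Test phase: fitness is the fraction of inputs computed correctly after training.
    correct = sum(run(network, x) == target(x) for x in inputs)
    return correct / len(inputs)
```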

Page 9: The Evolution of Learning Algorithms for Artificial Neural Networks

A Successful LBNN

Page 10: The Evolution of Learning Algorithms for Artificial Neural Networks

Findings

Hebb's rule is the most efficient learning rule.
LBNNs can be thought of as state machines.

Page 11: The Evolution of Learning Algorithms for Artificial Neural Networks

LBNNs as State Machines

Boolean functions are encoded as transitions between fixed points in the NN.
Other transitions seek to push the network toward the appropriate fixed point.
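A concrete way to see this: with ±1 activations an n-node LBNN has only 2^n possible states, so the node update defines a finite transition map whose fixed points can be enumerated directly (the tie-break and weight layout below are the same assumptions as in the earlier sketch):

```python
from itertools import product

def step(state, w):
    """One synchronous node update: a_i <- sign(sum_j a_j * w_ji), ties broken to +1."""
    n = len(state)
    return tuple(1 if sum(state[j] * w[j][i] for j in range(n)) >= 0 else -1
                 for i in range(n))

def transition_map(w):
    """Map every +/-1 activation state to its successor: the LBNN as a state machine."""
    n = len(w)
    return {s: step(s, w) for s in product([1, -1], repeat=n)}

def fixed_points(trans):
    """Self-mapping states; learned boolean functions live at these fixed points."""
    return [s for s, t in trans.items() if s == t]

# Toy usage: a 2-node net where node 1 copies node 0 (w_01 = +1, all else 0).
print(fixed_points(transition_map([[0, 1], [0, 0]])))
```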

Page 12: The Evolution of Learning Algorithms for Artificial Neural Networks

State Machine for an LBNN

Page 13: The Evolution of Learning Algorithms for Artificial Neural Networks

Questions?