Transcript of “Encoding Robotic Sensor States for Q-Learning using the Self-Organizing Map” by Gabriel J. Ferrer, Department of Computer Science, Hendrix College

Page 1:

Encoding Robotic Sensor States for Q-Learning using the Self-Organizing Map

Gabriel J. Ferrer
Department of Computer Science
Hendrix College

Page 2:

Outline

Statement of Problem

Q-Learning

Self-Organizing Maps

Experiments

Discussion

Page 3:

Statement of Problem

Goal

Make robots do what we want

Minimize/eliminate programming

Proposed Solution: Reinforcement Learning

Specify desired behavior using rewards

Express rewards in terms of sensor states

Use machine learning to induce desired actions

Target Platform

Lego Mindstorms NXT

Page 4:

Robotic Platform

Page 5:

Experimental Task

Drive forward

Avoid hitting things

Page 6:

Q-Learning

Table of expected rewards (“Q-values”)

Indexed by state and action

Algorithm steps:

Calculate state index from sensor values

Calculate the reward

Update previous Q-value

Select and perform an action

Q(s,a) = (1 - α) Q(s,a) + α (r + γ max_a' Q(s',a'))
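For concreteness, here is a minimal Java sketch of this tabular update. The class, field, and parameter names and the state/action encoding are illustrative assumptions, not the talk's LeJOS implementation:

```java
// Minimal tabular Q-learning update (hypothetical names, for illustration).
public class QTable {
    private final double[][] q;   // Q-values indexed by [state][action]
    private final double gamma;   // discount factor

    public QTable(int numStates, int numActions, double gamma) {
        this.q = new double[numStates][numActions];
        this.gamma = gamma;
    }

    // Q(s,a) <- (1 - alpha)*Q(s,a) + alpha*(r + gamma * max_a' Q(s',a'))
    public void update(int s, int a, double reward, int sNext, double alpha) {
        double best = q[sNext][0];
        for (int aNext = 1; aNext < q[sNext].length; aNext++) {
            best = Math.max(best, q[sNext][aNext]);
        }
        q[s][a] = (1 - alpha) * q[s][a] + alpha * (reward + gamma * best);
    }
}
```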

Page 7:

Q-Learning and Robots

Certain sensors provide continuous values:

Sonar

Motor encoders

Q-Learning requires discrete inputs:

Group continuous values into discrete “buckets”

[Mahadevan and Connell, 1992]

Q-Learning produces discrete actions:

Forward

Back-left/Back-right

Page 8:

Creating Discrete Inputs

Basic approach:

Discretize continuous values into sets

Combine each discretized tuple into a single index (one realization sketched below)
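A common way to realize the single-index step is a mixed-radix encoding of the discretized tuple. This is a hypothetical sketch, not the talk's code:

```java
// Hypothetical mixed-radix encoding: sensor i has numBuckets[i] possible
// discretized values; the tuple maps to one integer state index.
public final class StateIndex {
    public static int encode(int[] bucketValue, int[] numBuckets) {
        int index = 0;
        for (int i = 0; i < bucketValue.length; i++) {
            index = index * numBuckets[i] + bucketValue[i];
        }
        return index;
    }

    public static void main(String[] args) {
        // e.g., two bumpers (2 values each) and quantized sonar (3 values)
        System.out.println(encode(new int[]{1, 0, 2}, new int[]{2, 2, 3})); // 8
    }
}
```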

Another approach: Self-Organizing Map

Induces a discretization of continuous values

[Touzet 1997] [Smith 2002]

Page 9:

Self-Organizing Map (SOM)

2D grid of output nodes:

Each output corresponds to an ideal input value

Inputs can be anything with a distance function

Activating an output:

Present input to the network

Output with the closest ideal input is the “winner”

Page 10:

Applying the SOM

Each input is a vector of sensor values:

Sonar

Left/Right Bump Sensors

Left/Right Motor Speeds

Distance function is sum-of-squared-differences
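A small sketch of winner selection under this distance function; the array layout and names are assumptions for illustration:

```java
// Winner selection: the output node whose "ideal input" (weight vector)
// has the smallest sum-of-squared-differences to the input wins.
public final class SomWinner {
    // weights[node][j] holds component j of that node's ideal input
    public static int findWinner(double[][] weights, double[] input) {
        int winner = 0;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int node = 0; node < weights.length; node++) {
            double dist = 0;
            for (int j = 0; j < input.length; j++) {
                double diff = input[j] - weights[node][j];
                dist += diff * diff;   // sum-of-squared-differences
            }
            if (dist < bestDist) {
                bestDist = dist;
                winner = node;
            }
        }
        return winner;
    }
}
```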

Page 11:

SOM Unsupervised Learning

• Present an input to the network

• Find the winning output node

• Update ideal input for winner and neighbors

– weight_ij = weight_ij + (α * (input_j – weight_ij))

• Neighborhood function (scales the update for nodes near the winner): h(d) = e^(−d² / (2c²)), where d is the grid distance from the winning node and c is the neighborhood width
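Putting the pieces together, a sketch of one training step on a square grid, reusing findWinner from the earlier sketch. The row-major node layout and the Gaussian form reconstructed above are assumptions:

```java
// One SOM training step: every node moves toward the input, scaled by
// the learning rate and the Gaussian neighborhood around the winner.
public final class SomTrainStep {
    public static void train(double[][] weights, double[] input,
                             int width, double alpha, double c) {
        int winner = SomWinner.findWinner(weights, input);
        int winRow = winner / width, winCol = winner % width;
        for (int node = 0; node < weights.length; node++) {
            int row = node / width, col = node % width;
            double d2 = (row - winRow) * (row - winRow)
                      + (col - winCol) * (col - winCol);
            double h = Math.exp(-d2 / (2 * c * c));  // neighborhood factor
            for (int j = 0; j < input.length; j++) {
                weights[node][j] += alpha * h * (input[j] - weights[node][j]);
            }
        }
    }
}
```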

Page 12:

Experiments

Implemented in Java (LeJOS 0.85)

Each experiment: 240 seconds (800 Q-Learning iterations)

36 States

Three actions:

Both motors forward

Left motor backward, right motor stopped

Left motor stopped, right motor backward

Page 13:

Rewards

Either bump sensor pressed: 0.0

Base reward: 1.0 if both motors are going forward

0.5 otherwise

Multiplier: Sonar value greater than 20 cm: 1

Otherwise, (sonar value) / 20
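The reward rule above translates directly into code; a sketch, with illustrative parameter names (sonarCm is the sonar reading in centimeters):

```java
// Reward from the slide: bumps zero it out, forward motion earns the
// full base reward, and close sonar readings scale the reward down.
public final class Reward {
    public static double compute(boolean leftBump, boolean rightBump,
                                 boolean bothMotorsForward, double sonarCm) {
        if (leftBump || rightBump) {
            return 0.0;                               // any bump: no reward
        }
        double base = bothMotorsForward ? 1.0 : 0.5;  // prefer driving forward
        double multiplier = sonarCm > 20 ? 1.0 : sonarCm / 20.0;
        return base * multiplier;
    }
}
```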

Page 14:

Parameters

Discount (γ): 0.5

Learning rate (α): 1/(1 + (t/100)), t is the current iteration (time step)

Used for both SOM and Q-Learning [Smith 2002]

Exploration/Exploitation: ε = α/4

ε is the probability of a random action

Random action selected using a weighted distribution
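A sketch of this schedule and selection policy. Reading “weighted distribution” as choosing among actions with probability proportional to their Q-values is an assumption, as are all names:

```java
import java.util.Random;

// alpha = 1/(1 + t/100), epsilon = alpha/4; with probability epsilon a
// random action is drawn (weighted by Q-value, an assumed interpretation),
// otherwise the greedy action is taken.
public final class ActionSelector {
    private final Random rng = new Random();

    public double alpha(int t) {
        return 1.0 / (1.0 + t / 100.0);
    }

    public int select(double[] qRow, int t) {
        double epsilon = alpha(t) / 4.0;
        if (rng.nextDouble() < epsilon) {
            return weightedRandom(qRow);   // explore
        }
        int best = 0;                      // exploit: greedy action
        for (int a = 1; a < qRow.length; a++) {
            if (qRow[a] > qRow[best]) best = a;
        }
        return best;
    }

    private int weightedRandom(double[] qRow) {
        double total = 0;
        for (double q : qRow) total += q;
        if (total <= 0) return rng.nextInt(qRow.length);
        double pick = rng.nextDouble() * total;
        for (int a = 0; a < qRow.length; a++) {
            pick -= qRow[a];
            if (pick <= 0) return a;
        }
        return qRow.length - 1;
    }
}
```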

Page 15:

Experimental Controls

Q-Learning without SOM

Qa states:

Current action (1-3)

Current bumper states

Quantized sonar values (0-19 cm; 20-39; 40+)

Qb states:

Current bumper states

Quantized sonar values (9 buckets: 0-11 cm; …; 84-95; 96+)
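Both quantization schemes amount to simple bucket arithmetic; a sketch with illustrative method names, assuming Qb's buckets are an even 12 cm wide (which matches the endpoints on the slide):

```java
// Sonar quantization for the control experiments.
// Qa: three buckets (0-19 cm, 20-39 cm, 40+ cm).
// Qb: nine buckets of 12 cm (0-11, 12-23, ..., 84-95, 96+) -- assumed even width.
public final class SonarBuckets {
    public static int qa(int sonarCm) {
        return Math.min(sonarCm / 20, 2);   // 0, 1, or 2
    }

    public static int qb(int sonarCm) {
        return Math.min(sonarCm / 12, 8);   // 0 through 8
    }
}
```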

Page 16:

SOM Formulations

36 Output Nodes

Category “a”: Length-5 input vectors

Motor speeds, bumper values, sonar value

Category “b”: Length-3 input vectors

Bumper values, sonar value

All sensor values normalized to [0, 100]

Page 17:

SOM Formulations

QSOM: based on [Smith 2002]

Gaussian neighborhood

Neighborhood size is one-half SOM width

QT: based on [Touzet 1997]

Learning rate is fixed at 0.9

Neighborhood is immediate Manhattan neighbors

Neighbor learning rate is 0.4

Page 18:

Quantitative Results

          Qa       Qb       QSOMa    QSOMb    QTa      QTb
Mean      607.97   578.91   468.86   534.49   456.19   545.61
StDv      81.92    76.95    39.39    160.41   85.07    57.98
Median    608.75   667.5    485.11   587.64   442.62   560.77
Min       506.47   528.67   410.2    354.25   378.72   481.55
Max       723      540.55   495      661.59   547.22   594.5
Mean/It   0.76     0.72     0.59     0.67     0.57     0.68
StDv/It   0.1      0.1      0.05     0.2      0.11     0.07

(Total reward per run; the “/It” rows divide by the 800 iterations per experiment.)

Page 19:

Qualitative Results

QSOMa: Motor speeds ranged from 2% to 50%

Sonar values stuck between 90% and 94%

QSOMb: Sonar values ranged from 40% to 95%

Its best two runs were arguably the best of the bunch

Very smooth SOM values in both cases

Page 20:

Qualitative Results

QTa: Sonar values ranged from 10% to 100%

Still a weak performer on average

Best performer similar to QTb

QTb: Developed bump-sensor-oriented behavior

Made little use of sonar

Highly uneven SOM values in both cases

Page 21:

Experimental Area

Page 22:

First Movie

QSOMb

Strong performer (Reward: 661.89)

Minimum sonar value: 43.35% (110 cm)

Page 23:

Second Movie

Also QSOMb

Typical bad performer (Reward: 451.6)

Learns to avoid collisions by always driving backwards

Baseline “not-forward” reward: 400.0

Minimum sonar value: 57.51% (146 cm)

Hindered by small filming area

Page 24:

Discussion

Use of SOM on NXT can be effective

More research needed to address shortcomings

Heterogeneity of sensors is a problem

Need to try NXT experiments with multiple sonars

Previous work involved homogeneous sensors

Approachable by undergraduate students

Technique taught in junior/senior AI course