
Robotics applications of vision-based action selection

Master Project

Matteo de Giacomi

Contents

- Introduction
- Controller Architecture
- Webots Implementation
- Visual System
- Amphibot II Implementation
- Conclusion

Introduction

- Project Objectives
- Related Works
- Used Robots

Project Objectives

Use stereo vision to make a real robot reactively:

- Avoid obstacles
- Flee from predators
- Follow prey

Related Works

- Schema-based architecture [Arkin]
- Potential fields [Andrews] [Khatib]
- Steering behaviors [Reynolds]
- Subsumption architecture [Brooks]

Used Robots

Amphibot II: 8 body elements

Salamandra: body elements and leg elements

Speed is controlled through a Drive signal and direction through a Turn signal.

Controller Architecture

- Overview
- Behavioral Constants
- Obstacle Avoidance

[Diagram: action-selection flowchart. The controller checks in turn for obstacles, a predator, and a prey (yes/no branches), selects the correct behavior, and outputs the DRIVE and TURN signals. A memory block stores the turn direction, pred_pos, pred_dist and fear for the predator, and prey_pos, prey_dist and persistence for the prey.]
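A minimal sketch of the priority scheme suggested by the flowchart. The ordering of the checks and the fallback behavior are assumptions for illustration, not taken from the slides.

```python
# Hypothetical priority-based action selection: the first positive check
# selects the behavior that will produce the Drive and Turn signals.

def select_action(obstacle_close, predator_seen, prey_seen):
    """Return the name of the behavior to run in this control step."""
    if obstacle_close:
        return "avoid_obstacle"
    if predator_seen:
        return "flee_predator"
    if prey_seen:
        return "follow_prey"
    return "explore"  # assumed default when nothing is detected
```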

Controller Architecture

[Diagram: data flow. The visual input produces a disparity map (Disp_map), from which pred_pos, pred_dist, prey_pos and prey_dist are extracted and passed to the controller; motor feedback (motor position and error) closes the loop.]

Behavioral Constants

- Reactivity: minimum time between two different behaviors
- Panic: when stuck, time after which the robot starts moving randomly
- Confidence: minimum distance to an object before collision danger is triggered
- Daring: minimum distance at which the robot can approach the predator
- Fear: time spent in the fleeing state after losing eye contact with the predator
- Persistence: while a prey is lost, time spent in the search state before giving up
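A minimal sketch of these constants grouped as a single configuration record. The field names follow the list above; the default values are illustrative assumptions, not the ones used in the project.

```python
from dataclasses import dataclass

@dataclass
class BehavioralConstants:
    reactivity: float = 1.0    # s, min time between two different behaviors
    panic: float = 5.0         # s, time stuck before random motion starts
    confidence: float = 0.5    # m, min obstacle distance before danger is triggered
    daring: float = 0.4        # m, min distance the robot may approach the predator
    fear: float = 3.0          # s, time spent fleeing after losing sight of the predator
    persistence: float = 4.0   # s, time spent searching a lost prey before giving up
```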

Obstacle Avoidance (1)

- Avoid static obstacles
- Avoid sudden obstacles (e.g. a foot)
- Detect dead-ends (requiring the implementation of backward locomotion)

Two locomotion states, switched on the sign of Drive:

FORWARD: Turn = max(X), Drive = x_center (switch to BACKWARD when Drive <= 0)

BACKWARD: Turn = const., Drive = min(x_center, max(X \ {x_center})) (switch to FORWARD when Drive > 0)
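A minimal sketch of this FORWARD/BACKWARD switching, under the assumption (not stated on the slide) that X is the polar distance map, x_center its frontal sector, and that the BACKWARD drive expression reads min(x_center, max(X \ {x_center})).

```python
BACKWARD_TURN = 1.0  # constant turn while reversing (illustrative value)

def locomotion_step(distance_map, center_index, state):
    """Return (new_state, turn, drive) for one control step."""
    x_center = distance_map[center_index]
    others = [d for i, d in enumerate(distance_map) if i != center_index]

    if state == "FORWARD":
        turn = max(distance_map)   # Turn = max(X), as on the slide
        drive = x_center           # Drive = x_center: speed scales with frontal clearance
        if drive <= 0:             # no room ahead: reverse
            state = "BACKWARD"
    else:
        turn = BACKWARD_TURN       # Turn = const.
        drive = min(x_center, max(others))
        if drive > 0:              # clearance recovered: go forward again
            state = "FORWARD"
    return state, turn, drive
```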

Obstacle Avoidance (2)

Avoidance is triggered when an obstacle is too close (see Confidence).

In a cluttered environment, one tends to approach obstacles more closely than in an open space, so Confidence varies according to an estimate of the obstacle density.
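A hypothetical sketch (not from the slides) of adapting the Confidence threshold to the estimated obstacle density: the denser the surroundings, the closer the robot is allowed to get before avoidance is triggered. All threshold values are illustrative assumptions.

```python
def adapt_confidence(distance_map, open_confidence=0.8,
                     cluttered_confidence=0.3, range_max=2.0):
    """Return the distance below which avoidance is triggered."""
    # Fraction of sectors that already see something within sensing range.
    density = sum(1 for d in distance_map if d < range_max) / len(distance_map)
    # Linear interpolation between the open-space and cluttered thresholds.
    return open_confidence + density * (cluttered_confidence - open_confidence)

def avoidance_triggered(distance_map, center_index, **kwargs):
    return distance_map[center_index] < adapt_confidence(distance_map, **kwargs)
```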

Webots Implementation

- Action Selection
- Influence of Behavioral Constants

Interaction between behaviors

Video: obstacle avoidance, prey and predator action selection

Influence of Behavioral Constants

When both a prey and a predator are detected, Fear and Daring determine the robot's behavior.
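A hypothetical sketch (not from the slides) of how Daring and Fear could arbitrate between fleeing and chasing when both targets are involved; the exact arbitration rule used in the project is not described here.

```python
def select_behavior(pred_dist, prey_dist, time_since_pred_seen,
                    daring=0.5, fear=3.0):
    """Return the behavior name given the two target distances (None if unseen)."""
    if pred_dist is not None and pred_dist < daring:
        return "flee"      # predator closer than Daring allows
    if pred_dist is None and time_since_pred_seen < fear:
        return "flee"      # keep fleeing for Fear seconds after losing sight
    if prey_dist is not None:
        return "chase"
    return "explore"
```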

Visual System

- Distance Measures Analysis
- Prey and Predator Tracking

Input Mapping (1)

Input: an m x n distance grid.

Output: a polar distance map. Each sector's distance is estimated as the minimum over the cells of the corresponding column (pessimistic approach): min(col_1), ..., min(col_m).
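A minimal sketch of this column-wise minimum: each column of the m x n distance grid becomes one sector of the polar map, and the sector distance is the smallest (most pessimistic) value in that column.

```python
def grid_to_polar(distance_grid):
    """distance_grid: list of n rows, each with m distances. Returns m sector distances."""
    n_cols = len(distance_grid[0])
    return [min(row[col] for row in distance_grid) for col in range(n_cols)]

# Example: a 3 x 4 grid produces a 4-sector polar map.
grid = [
    [2.0, 1.5, 3.0, 2.5],
    [1.8, 0.9, 2.7, 2.6],
    [2.2, 1.2, 2.9, 0.8],
]
print(grid_to_polar(grid))  # [1.8, 0.9, 2.7, 0.8]
```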

Input Mapping (2)

Issue: the filmed area depends on the robot's head position.

Solution: knowing the camera angle and the angular speed (which depend on Turn and Drive), map the camera field onto the visual field.
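A hypothetical sketch (not from the slides) of such a mapping: the camera only covers a window of the full polar map, offset by the current head angle, so each camera sector is written into the world-frame sector it currently points at. The 60°/120° fields of view and the 19 sectors come from other slides; the function itself is an assumption.

```python
import math

def map_camera_to_visual_field(camera_sectors, head_angle,
                               camera_fov=math.radians(60),
                               visual_field=math.radians(120),
                               n_visual_sectors=19):
    """Return a visual-field polar map; uncovered sectors stay None."""
    visual_map = [None] * n_visual_sectors
    sector_width = visual_field / n_visual_sectors
    cam_sector_width = camera_fov / len(camera_sectors)
    for i, dist in enumerate(camera_sectors):
        # Angle of this camera sector in the robot's frame.
        angle = head_angle - camera_fov / 2 + (i + 0.5) * cam_sector_width
        j = int((angle + visual_field / 2) / sector_width)
        if 0 <= j < n_visual_sectors:
            # Keep the pessimistic (smallest) distance if two sectors collide.
            visual_map[j] = dist if visual_map[j] is None else min(visual_map[j], dist)
    return visual_map
```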

Input Mapping (3)

Video: example of depth Map generation

Prey and Predator Tracking (1)

Shape recognition:

Prey: a small circle. Turn so that the circle centre lies in front of the robot; stop when sufficiently close.

Predator: a big circle. Turn away as fast as possible.
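A hypothetical sketch (not from the slides) of the prey-following rule: turn proportionally to the horizontal offset of the detected circle centre, and stop once the estimated distance drops below a threshold. Gains and thresholds are illustrative assumptions.

```python
def prey_tracking_command(circle_x, image_width, prey_dist,
                          stop_dist=0.3, turn_gain=2.0, cruise_drive=1.0):
    """Return (turn, drive) steering the robot toward the prey circle."""
    if prey_dist <= stop_dist:
        return 0.0, 0.0                      # close enough: stop
    # Offset of the circle centre from the image centre, in [-1, 1].
    offset = (circle_x - image_width / 2) / (image_width / 2)
    return turn_gain * offset, cruise_drive
```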

Prey and Predator Tracking (2)

Circular Hough Transform

Left-Right Size check

Prey and Predator Tracking (3)

Evaluate the target's expected size according to its distance and compare it with the measured size.
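A minimal sketch of such a size check, assuming a pinhole camera model: the expected image radius of a circle of known physical size is focal_length * real_radius / distance, and a circle detection is accepted only if its measured radius is close to that value. The focal length and tolerance are assumed values.

```python
def expected_radius_px(real_radius_m, distance_m, focal_length_px):
    return focal_length_px * real_radius_m / distance_m

def size_check(measured_radius_px, real_radius_m, distance_m,
               focal_length_px=300.0, tolerance=0.3):
    """Accept a detected circle if measured and expected radii agree within 30%."""
    expected = expected_radius_px(real_radius_m, distance_m, focal_length_px)
    return abs(measured_radius_px - expected) / expected <= tolerance

# Example: a 5 cm prey circle seen at 0.5 m should appear ~30 px in radius.
print(size_check(28.0, real_radius_m=0.05, distance_m=0.5))  # True
```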

Amphibot II implementation

- Introduction
- Battery Charge Influence
- Obstacle Avoidance: Results

Introduction

Differences from Webots:

- Camera range: 60° instead of 120°
- Input: noisier
- Frame rate: lower
- Drive signal: its relation to amplitude and frequency critically depends on the environment and on the hardware used

Battery charge influence

Since the battery charge can be neither estimated nor measured, the world-rotation phase of the mapping must be skipped.

Results

Video: setup presentation, obstacle avoidance

Conclusion

- Results
- Further Work

Results

- Stereo-vision system: effective for both obstacle avoidance and target recognition
- Behavior: scalable (a joystick was added as a new behavior with minimal variations)
- Quick and memory-inexpensive
- "Natural" parameters: one architecture, many behaviors
- Several parameters to trim, following "aesthetic" criteria

Further Work

- Can the camera-to-world mapping be improved?
- How should parameter values be defined?
- Could a planner be added?
- How can the visual system cope with a water environment?
- Could the robot gait adapt to the type of surface?

THE END

Thank you! Any questions?

Amphibot‘s Input Mapping

- Polar map containing 19 sectors
- Robot kept in place while oscillating parallel to a wall

Obstacle Avoidance

Video: Dead-end detection

Prey Cornering Behavior

Video: an obstacle is ignored when a prey is present (behavior feedback)

Turning vs. Reactivity

Tracking in a Webots simulation:

- Low Reactivity produces an unnatural behavior
- High Reactivity makes the robot react too slowly

Turning Radius vs. Battery charge

Video: turning performance along time with constant drive and turn

Drive Signal vs. Amplitude and Frequency

Drive vs. Obstacle distance

Bonus: Hough Transform

Video: circle tracking