
Real-Time Vision on a Mobile Robot Platform

Mohan Sridharan
Joint work with Peter Stone

The University of Texas at Austin
smohan@ece.utexas.edu

Motivation

Computer vision is challenging: “state-of-the-art” approaches are often not applicable to real systems because of computational and/or memory constraints.

Focus: efficient algorithms that work in real-time on mobile robots.

Overview

Complete vision system developed on a mobile robot.

Challenges to address: color segmentation, object recognition, line detection, illumination invariance.

On-board processing – computational and memory constraints.

Test Platform – Sony Aibo ERS-7

20 degrees of freedom.

Primary sensor – CMOS camera.

IR, touch sensors, accelerometers.

Wireless LAN.

Soccer on a 4.5 x 3 m field – play humans by 2050!

The Aibo Vision System – I/O

Input: image pixels in the YCbCr color space. Frame rate: 30 fps. Resolution: 208 x 160.

Output: Distances and angles to objects.

Constraints: on-board processing (576 MHz processor); rapidly varying camera positions.

Robot’s view of the world…

Vision System – Flowchart…

Vision System – Phase 1: Segmentation.

Color Segmentation: Hand-label discrete colors. Intermediate color maps. Nearest-neighbor (NNr) weighted average – master color cube.

128x128x128 color map – 2 MB (run-time lookup sketched below).
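
A minimal sketch of the run-time side, assuming 8-bit YCbCr pixels with each channel halved to index the 128-bin cube; `colorCube` and `segmentImage` are illustrative names, not the original system's code:

```cpp
#include <cstdint>
#include <vector>

// One label per discretized (Y, Cb, Cr) cell; 128^3 bytes = 2 MB.
// Assumption: labels 0..N-1 denote the hand-defined colors (ball, field, ...).
std::vector<uint8_t> colorCube(128 * 128 * 128, 0);

// Map an 8-bit YCbCr pixel to a color label with a single table lookup.
inline uint8_t lookupLabel(uint8_t y, uint8_t cb, uint8_t cr) {
    // Halve each channel so that 0..255 indexes 0..127.
    return colorCube[((y >> 1) * 128 + (cb >> 1)) * 128 + (cr >> 1)];
}

// Segment a whole frame (208x160 on the ERS-7) in one pass.
void segmentImage(const uint8_t* ycbcr, uint8_t* labels, int numPixels) {
    for (int i = 0; i < numPixels; ++i)
        labels[i] = lookupLabel(ycbcr[3 * i], ycbcr[3 * i + 1], ycbcr[3 * i + 2]);
}
```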

Vision System – Phase 1: Segmentation.

Use a perceptually motivated color space – LAB.

Offline training in LAB – generate equivalent YCbCr cube.

Vision System – Phase 1: Segmentation.

Reduce the problem to a table lookup (offline map generation sketched below). Robust performance with shadows and highlights: segmentation accuracy YCbCr – 82%, LAB – 91%.
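
One way to realize the “train in LAB, look up in YCbCr” step, sketched under the assumption of two hypothetical helpers – a color-space conversion `ycbcrToLab` and a LAB-space classifier `labelInLab` trained from the hand-labeled data:

```cpp
#include <cstdint>
#include <vector>

struct Lab { float L, a, b; };

// Hypothetical helpers (declarations only): color-space conversion and
// the classifier trained offline in LAB from hand-labeled samples.
Lab ycbcrToLab(uint8_t y, uint8_t cb, uint8_t cr);
uint8_t labelInLab(const Lab& p);

// Offline: visit every cell of the 128^3 YCbCr cube once, convert its
// center to LAB, and store the LAB-space label. At run time the robot
// then pays only for the cheap YCbCr table lookup.
void buildCube(std::vector<uint8_t>& cube) {
    cube.assign(128 * 128 * 128, 0);
    for (int y = 0; y < 128; ++y)
        for (int cb = 0; cb < 128; ++cb)
            for (int cr = 0; cr < 128; ++cr) {
                Lab p = ycbcrToLab(uint8_t(2 * y + 1), uint8_t(2 * cb + 1),
                                   uint8_t(2 * cr + 1));
                cube[(y * 128 + cb) * 128 + cr] = labelInLab(p);
            }
}
```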

Sample Images – Color Segmentation.

Sample Video – Color Segmentation.

Some Problems…

Sensitive to illumination. Frequent re-training. Robot needs to detect and adapt to change.

Off-board color labeling – time consuming. Autonomous color learning possible…

Vision System – Phase 2: Blobs.

Run-Length encoding. Starting point, length in pixels.

Region Merging. Combine run-lengths of same color. Maintain properties: pixels, runs.

Bounding boxes. Abstract representation – four corners. Maintains properties for further analysis.
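
A compact sketch of the run-length step on one row of the segmented image; the `Run` structure is illustrative (the original system also tracks per-region properties such as pixel and run counts):

```cpp
#include <cstdint>
#include <vector>

struct Run {
    int row, start, length; // starting column and length in pixels
    uint8_t color;          // segmented color label
};

// Encode one row of segmented labels as maximal runs of identical color.
void encodeRow(const uint8_t* labels, int width, int row,
               std::vector<Run>& runs) {
    int start = 0;
    for (int x = 1; x <= width; ++x) {
        if (x == width || labels[x] != labels[start]) {
            runs.push_back({row, start, x - start, labels[start]});
            start = x;
        }
    }
}
```

Region merging then joins adjacent runs of the same color, and bounding boxes keep just the four corners of each merged region.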

Sample Images – Blob Detection.

Vision System – Phase 2: Objects.

Object Recognition. Heuristics on size, shape and color. Previously stored bounding box properties. Domain knowledge. Remove spurious blobs.

Distances and angles: known geometry.
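
The “known geometry” computation can be sketched as a pinhole-camera, similar-triangles estimate; the focal length and constants below are placeholders, not the ERS-7 calibration:

```cpp
#include <cmath>

// Assumed camera intrinsics (placeholders, not the real calibration).
const double kFocalPx  = 200.0; // focal length in pixels
const int    kImgWidth = 208;   // ERS-7 image width

// Distance from the known physical height of an object (domain
// knowledge) and its height in pixels: d / H = f / h.
double distanceMM(double knownHeightMM, double pixelHeight) {
    return kFocalPx * knownHeightMM / pixelHeight;
}

// Horizontal bearing to the object's bounding-box center.
double bearingRad(double centerX) {
    return std::atan2(centerX - kImgWidth / 2.0, kFocalPx);
}
```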

Sample Images – Objects.

Vision System – Phase 3: Lines.

Popular approaches – Hough transform, convolution kernels – are computationally expensive.

Domain knowledge: scan lines – green-white transitions give candidate edge pixels (sketched below).
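
A sketch of scanning one image column for green-to-white transitions; the label constants are illustrative:

```cpp
#include <cstdint>
#include <vector>

// Illustrative color labels from the segmentation phase.
const uint8_t GREEN = 1, WHITE = 2;

struct Pixel { int x, y; };

// Walk one column; a green pixel immediately followed by a white one
// marks a candidate field-line edge pixel.
void scanColumn(const uint8_t* labels, int width, int height, int x,
                std::vector<Pixel>& edges) {
    for (int y = 1; y < height; ++y) {
        if (labels[(y - 1) * width + x] == GREEN &&
            labels[y * width + x] == WHITE)
            edges.push_back({x, y});
    }
}
```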

Vision System – Phase 3: Lines.

Incremental least-squares fit for lines (sketched below). Efficient and easy to implement. Reasonably robust to noise.

Lines provide orientation information. Line intersections can be used as markers – inputs to localization. Ambiguity is removed through prior position knowledge.
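
The incremental fit can be sketched with running sums, so each new edge pixel costs O(1); this is the standard least-squares formulation, not necessarily the paper's exact code:

```cpp
// Incremental least-squares fit of y = m*x + c.
struct LineFit {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    int n = 0;

    // O(1) update with each new candidate edge pixel.
    void add(double x, double y) {
        sx += x; sy += y; sxx += x * x; sxy += x * y;
        ++n;
    }

    // Recover slope and intercept from the running sums.
    bool solve(double& m, double& c) const {
        double denom = n * sxx - sx * sx;
        if (n < 2 || denom == 0) return false; // degenerate / vertical line
        m = (n * sxy - sx * sy) / denom;
        c = (sy - m * sx) / n;
        return true;
    }
};
```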

Sample Images – Objects + Lines.

Some Problems…

System needs to be re-calibrated on illumination changes and natural light variations (day/night).

Re-calibration is very time consuming – more than an hour each time…

Cannot achieve the overall goal – play humans. That is not happening anytime soon, but still…

Illumination Sensitivity – Samples.

Trained under one illumination:

Under different illumination:

Illumination Sensitivity – Movie…

Illumination Invariance - Approach.

Three discrete illuminations – bright, intermediate, dark.

Training: performed offline. A color map for each illumination. Normalized RGB (rgb – use only r, g) sample distributions for each illumination (sketched below).
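
A sketch of accumulating the normalized-rg sample distribution for one illumination; the bin count is an assumption:

```cpp
#include <cstdint>
#include <vector>

const int BINS = 64; // assumed quantization per chromaticity axis

// Add one pixel to a 2D histogram over normalized (r, g); hist must
// have size BINS * BINS. Normalizing by R+G+B discards brightness,
// which is the quantity that varies most with illumination.
void addPixel(std::vector<double>& hist, uint8_t R, uint8_t G, uint8_t B) {
    double sum = double(R) + G + B;
    if (sum == 0) return;
    int r = int((R / sum) * (BINS - 1));
    int g = int((G / sum) * (BINS - 1));
    hist[r * BINS + g] += 1.0;
}
```

After accumulation, the histogram would be normalized to sum to one so that it can serve as a probability distribution for the test below.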

Illumination Invariance – Training.

Illumination: bright – color map

Illumination Invariance – Training.

Illumination: bright – map and distributions.

Illumination Invariance – Testing.

Testing: KL divergence as a distance measure – robust to artifacts. Performed on board the robot, about once a second (sketched below). Parameter estimation is described in the paper.

Works for conditions not trained for… the paper has numerical results.
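
The on-board test can be sketched as a smoothed KL divergence between the current frame's rg histogram and each stored training distribution; the smoothing epsilon is our assumption, and the actual parameter estimation is in the paper:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// KL(p || q) over two normalized histograms; the epsilon smoothing
// avoids log(0) on empty bins (our choice, not the paper's).
double klDivergence(const std::vector<double>& p,
                    const std::vector<double>& q) {
    const double eps = 1e-9;
    double d = 0.0;
    for (std::size_t i = 0; i < p.size(); ++i) {
        double pi = p[i] + eps, qi = q[i] + eps;
        d += pi * std::log(pi / qi);
    }
    return d;
}

// Pick the training illumination whose distribution is closest to the
// current frame; the caller switches color maps when the answer changes.
int closestIllumination(const std::vector<double>& current,
                        const std::vector<std::vector<double>>& trained) {
    int best = 0;
    double bestD = klDivergence(current, trained[0]);
    for (std::size_t k = 1; k < trained.size(); ++k) {
        double d = klDivergence(current, trained[k]);
        if (d < bestD) { bestD = d; best = int(k); }
    }
    return best;
}
```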

Adapting to Illumination changes – Video

Some Related Work…

CMU vision system: Basic implementation. James Bruce et al., IROS 2000

German Team vision system: Scan Lines. Röfer et al., RoboCup 2003

Mean-shift: Color Segmentation. Comaniciu and Meer, PAMI 2002

Conclusions

A complete real-time vision system – on-board processing.

Implemented new or modified versions of vision algorithms.

Good performance on challenging problems: segmentation, object recognition and illumination invariance.

Future Work…

Autonomous color learning. AAAI-05 paper available online.

Working in more general environments, outside the lab.

Automatic detection of and adaptation to illumination changes.

Still a long way to go to play humans.

Autonomous Color Learning – Video

More videos online www.cs.utexas.edu/~AustinVilla/

THAT’S ALL FOLKS

www.cs.utexas.edu/~AustinVilla/

Question – 1: So, what is new??

Robust color space for segmentation. Domain-specific object recognition + line detection. Towards illumination invariance. Complete vision system – closed loop.

Accept – cannot compare directly with other teams, but overall performance was good at competitions…

Vision – 1: Why LAB??

Robust color space for segmentation. Perceptually motivated. Tackles minor changes – shadows, highlights. Also used in robot rescue…

Vision – 2: Edge pixels + Least Squares??

Conventional approaches are time consuming. Scan lines are faster: they reduce the number of colors needing bounding boxes. Least squares is easier to implement – and fast too.

Accept – have not compared with any other method…

Vision – 3: Normalized RGB ??

YCbCr separates luminance – but does not work well in practice on the Aibo.

Normalized RGB (rgb): reduces the number of dimensions (less storage); more robust to minor variations.

Accept – compared with YCbCr alone; LAB also works, but needs more storage and computation…
