Perception for Robot Detection


Page 1: Perception for Robot Detection

Perception for Robot Detection

2011/12/08

Page 2: Perception for Robot Detection

Robot Detection

• Better Localization and Tracking

• No Collisions with Others

Page 3: Perception for Robot Detection

Goal

Robust Robot Detection
• Long Range
• Short Range

Page 4: Perception for Robot Detection

Long Range
Current Method: Heuristic Color-Based

Non-line white segments are clustered, and an extracted cluster is classified as a NAO robot if the following three criteria are satisfied:
• the number of segments in the cluster is larger than 3;
• the width-to-height ratio of the cluster is larger than 0.2;
• the highest point of the cluster is within 10 pixels of the field border line, since an observed robot should intersect the field border in the camera view when both the observing and the observed robots are standing on the field.
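A minimal C++ sketch of this three-criteria check (the Cluster struct and its packaging are assumptions for illustration; only the thresholds come from the slide, not from the B-Human code release):

#include <cstdlib>

struct Cluster {
  int numSegments;    // non-line white segments in this cluster
  int width, height;  // bounding box in pixels
  int topY;           // y coordinate of the cluster's highest point
};

// borderY: y coordinate of the field border line in the image.
bool isNaoRobot(const Cluster& c, int borderY) {
  if (c.numSegments <= 3)
    return false;                                       // criterion 1
  if (c.height == 0 || (float)c.width / c.height <= 0.2f)
    return false;                                       // criterion 2
  return std::abs(c.topY - borderY) <= 10;              // criterion 3
}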

Page 5: Perception for Robot Detection

Long Range
Improvement: Feature-Based

Considerations
• Scale Invariant
• Affine Invariant
• Complexity

Possible Solutions
1. SIFT (Scale-Invariant Feature Transform)
2. SURF (Speeded-Up Robust Features)
3. MSER (Maximally Stable Extremal Regions)
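As a rough illustration of what a feature-based pipeline would involve, here is a hedged OpenCV 2.x sketch of SURF detection and matching; the Hessian threshold of 400 is illustrative, and depending on the OpenCV 2.x version SURF lives in features2d or in the nonfree module:

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>  // OpenCV 2.4+: SURF lives here

// model and scene are expected to be 8-bit grayscale images.
int countSurfMatches(const cv::Mat& model, const cv::Mat& scene) {
  cv::SurfFeatureDetector detector(400);   // Hessian threshold
  std::vector<cv::KeyPoint> kpModel, kpScene;
  detector.detect(model, kpModel);
  detector.detect(scene, kpScene);

  cv::SurfDescriptorExtractor extractor;
  cv::Mat descModel, descScene;
  extractor.compute(model, kpModel, descModel);
  extractor.compute(scene, kpScene, descScene);

  cv::FlannBasedMatcher matcher;           // approximate nearest neighbour
  std::vector<cv::DMatch> matches;
  matcher.match(descModel, descScene, matches);
  return (int)matches.size();
}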

Page 6: Perception for Robot Detection

Long Range
Improvement: Feature-Based

(Pipeline diagram)
• Offline: Feature Detection -> Predefined Models
• Online: Feature Detection -> Object Recognition using the Predefined Models

Page 7: Perception for Robot Detection

Short Range
Sonar and Vision

• Two-Stage:
1. Sonar
2. Active Vision: Feet Detection (a sufficiently large white spot)
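A minimal sketch of the two-stage idea; the 0.5 m sonar gate and the white-spot thresholds are illustrative assumptions, not values from the slides:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// sonarDistanceM: distance reported by the sonar, in meters.
// lowerCamGray:   8-bit grayscale image from the lower camera.
bool robotAtShortRange(float sonarDistanceM, const cv::Mat& lowerCamGray) {
  // Stage 1: sonar reports an obstacle close in front.
  if (sonarDistanceM > 0.5f)
    return false;

  // Stage 2: active vision confirms the feet as a sufficiently large
  // white spot in the lower camera image.
  cv::Mat white;
  cv::threshold(lowerCamGray, white, 200, 255, cv::THRESH_BINARY);
  return cv::countNonZero(white) > 500;  // "sufficiently large"
}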

Page 8: Perception for Robot Detection

New NAO
Possible Improvement

• Using two cameras
– One for the ball, the other for localization
– One for feet detection, the other for localization
– ……

• No Downsampling
– 320 x 240 -> 640 x 480

Page 9: Perception for Robot Detection

References
• bhuman11_coderelease
• SIFT: http://www.cs.ubc.ca/~lowe/keypoints/
• SURF paper: Speeded-Up Robust Features (SURF)
• MSER tracking paper: Efficient Maximally Stable Extremal Region (MSER) Tracking

Page 10: Perception for Robot Detection

Perception for Robot Detection

2011/12/22

Page 11: Perception for Robot Detection

COLOR-BASED SUCCESSFUL CASES

Page 12: Perception for Robot Detection

Front View

SURF: 209.318 ms, 73 -> 201
Affine SIFT: 7 s, 11655 -> 14237

Page 13: Perception for Robot Detection

Back View

SURF: 125.309 ms, 74 -> 186
ASIFT: 6 s, 11377 -> 13930

Page 14: Perception for Robot Detection

Side View

SURF: 18 matches
ASIFT: 7 s, 7763 -> 13060

Page 15: Perception for Robot Detection

COLOR-BASED FAILED CASES

Page 16: Perception for Robot Detection

False Alarm

SURF / ASIFT. Misclassifications of the field lines.

Page 17: Perception for Robot Detection

< 100cm front

SURF / ASIFT

92 matches

Page 18: Perception for Robot Detection

< 100cm back

SURF / ASIFT

Page 19: Perception for Robot Detection

< 100cm side

SURF / ASIFT

40 matches

Page 20: Perception for Robot Detection

300cm front

SURF / ASIFT

21 matches

Page 21: Perception for Robot Detection

300cm side

ASIFT

Page 22: Perception for Robot Detection

350cm front

ASIFT

Page 23: Perception for Robot Detection

Conclusion

• Performance is not significantly better
• Processing time is an issue

Page 24: Perception for Robot Detection

Perception for Robot Detection

2012/1/5

Page 25: Perception for Robot Detection

ROBOT DETECTION USING ADABOOST WITH SIFT

Page 26: Perception for Robot Detection

Multi-Class Training Stage: Using AdaBoost

• Classes = (different viewpoints of NAO robots) x (different scales of NAO robots) x (different illuminations)

• Input: for each class, training images (I_1, l_1), …, (I_n, l_n), where l_i = 0 for negative and l_i = 1 for positive examples.

• Output: a strong classifier (a set of weak classifiers) for each class, as sketched below.
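For intuition, a hedged sketch of how such a strong classifier is evaluated at run time: a weighted vote of weak classifiers with the standard boosted decision rule. The WeakClassifier layout (a decision stump over a feature response) is hypothetical, and the training loop that produces the alpha weights is omitted:

#include <vector>

struct WeakClassifier {
  int featureIndex;  // which feature response to test
  float threshold;   // stump threshold
  int polarity;      // +1 or -1
  float alpha;       // vote weight learned by AdaBoost
};

// Boosted decision rule: the weighted vote of the weak classifiers
// must reach half of the total vote weight.
bool strongClassify(const std::vector<WeakClassifier>& strong,
                    const std::vector<float>& features) {
  float score = 0.f, alphaSum = 0.f;
  for (size_t i = 0; i < strong.size(); ++i) {
    const WeakClassifier& w = strong[i];
    int h = (w.polarity * features[w.featureIndex] <
             w.polarity * w.threshold) ? 1 : 0;
    score += w.alpha * h;
    alphaSum += w.alpha;
  }
  return score >= 0.5f * alphaSum;
}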

Page 27: Perception for Robot Detection

Issues In Training Stage

• Number of Classes: depends on the limits of SIFT features (the angular range of view invariance, the range of scale invariance, and the degree of illumination invariance)

Page 28: Perception for Robot Detection

Detection Stage

• Input: an image from the NAO camera
• Output:
1. The number of robots in the image
2. The class each robot belongs to => rough distance and facing direction of the detected robot

Page 29: Perception for Robot Detection

Issues in Detection Stage

• Speed: use sharing and non-sharing features to speed up detection, as in the flowchart below.

Flowchart: Input Image -> SIFT Feature Extraction -> Extracted Features -> Detection using Sharing Features (from Training Stage) -> yes: Detection using Non-sharing Features (from Training Stage) -> Class 1 / Class 2 / Class 3; NO: reject.
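A sketch of the control flow in this flowchart; the two stage functions are hypothetical stand-ins for the boosted classifiers produced in the training stage:

#include <vector>

// Placeholders: real versions would wrap strongClassify-style tests.
static bool sharedStage(const std::vector<float>& feats) {
  return !feats.empty();  // test features shared by all classes
}
static int perClassStage(const std::vector<float>& feats) {
  (void)feats;
  return 0;               // run non-sharing classifiers, pick a class
}

// Shared features first; per-class features only if the shared stage
// fires. -1 means "no robot" (the NO branch of the flowchart).
int detectRobotClass(const std::vector<float>& siftFeatures) {
  if (!sharedStage(siftFeatures))
    return -1;
  return perClassStage(siftFeatures);
}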

Page 30: Perception for Robot Detection

References

• Hand Posture Recognition Using Adaboost with SIFT for Human Robot Interaction

• Sharing features: efficient boosting procedures for multiclass object detection

Page 31: Perception for Robot Detection

Aldebaran SDK

2012/3/16

Page 32: Perception for Robot Detection

NAOqi Framework

NAOqi is a process that acts like a module look-up server.

Page 33: Perception for Robot Detection

Aldebaran Modules

• Local Modules:
1. Compiled as a library (xxxx.so); can only be used on the robot.
2. More efficient than a remote module.
3. Launched in the same process; they speak to each other through only ONE broker and can share variables and call each other's methods without serialization or networking.

• Remote Modules:
1. Compiled as an executable file (xxxx); can be run outside the robot.
2. Lower performance in terms of speed and memory usage.
3. Modules communicate with each other over the network.
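For concreteness, a hedged skeleton of a NAOqi 1.12-era module class; the same class can be built as a local module (library) or a remote executable, only the build target differs. The RobotDetector name and its detect method are illustrative, not part of the SDK:

#include <string>
#include <boost/shared_ptr.hpp>
#include <alcommon/almodule.h>
#include <alcommon/albroker.h>

class RobotDetector : public AL::ALModule {
public:
  RobotDetector(boost::shared_ptr<AL::ALBroker> broker,
                const std::string& name)
    : AL::ALModule(broker, name) {
    setModuleDescription("Detects other NAO robots.");
    // Expose detect() so other modules can call it through the broker.
    functionName("detect", getName(), "Run one detection step.");
    BIND_METHOD(RobotDetector::detect);
  }

  void detect() {
    // ... grab an image via ALVideoDevice and run the detector ...
  }
};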

Page 34: Perception for Robot Detection

BHuman

Lib-bhuman is an Aldebaran module that manages the NAO's hardware-related memory (joints, sensor data).

Page 35: Perception for Robot Detection

C++ SDK 1.12 Installation
• Installation Guide: http://www.aldebaran-robotics.com/documentation/dev/cpp/install_guide.html
• Related Files: on the lab server at /usr/home/markcsie/AldebaranSDK

Requirements:
1. Linux (Ubuntu 10.04)
2. gcc > 4.4
3. CMake 2.8 (used by qibuild)
4. qibuild-1.12
5. naoqi-sdk-1.12-linux32.tar.gz
6. nao-geode-cross-toolchain-1.12.0.tar.gz (for NAO 3.3)
7. nao-atom-cross-toolchain-1.12.0.tar.gz (for NAO 4.0)
8. nao-flasher-1.12.1.3-linux32.tar.gz (flasher)
9. opennao-geode-system-image-1.12.gz (OS for NAO 3.3)
10. opennao-atom-system-image-1.12.opn (OS for NAO 4.0)
11. IDE: QtCreator (optional)

Page 36: Perception for Robot Detection

Installation

1. Edit ~/.bashrc:
   export LD_LIBRARY_PATH=[path to sdk]/lib
   export PATH=${PATH}:~/.local/bin:~/bin
2. $ [path to qibuild]/install-qibuild.sh
3. $ cd [Programming Workspace]
   $ qibuild init --interactive (choose UNIX Makefiles)
4. $ qitoolchain create [toolchain name] [path to sdk]/toolchain.xml --default

Page 37: Perception for Robot Detection

Create and Build a Project

1. $ qibuild create [project name]
2. $ qibuild configure [project name] -c [toolchain name] (--release)
3. $ qibuild make [project name] -c [toolchain name] (--release)
4. $ qibuild open [project name]
• Step 3 is equivalent to running the generated Makefile.

Page 38: Perception for Robot Detection

Cross Compile(Local Module)

$ qitoolchain create opennao-geode [path to cross toolchain]/toolchain.xml --default
$ qibuild configure [project name] -c opennao-geode
$ qibuild make [project name] -c opennao-geode

Page 39: Perception for Robot Detection

Get Image Example

// Assumed includes for this snippet (NAOqi 1.12 C++ SDK + OpenCV):
#include <string>
#include <alproxies/alvideodeviceproxy.h>
#include <alvision/alvisiondefinitions.h>
#include <alvalue/alvalue.h>
#include <opencv/cv.h>

using namespace AL;

void getImageExample(const std::string& robotIp) {
  /** Create a proxy to ALVideoDevice on the robot. */
  ALVideoDeviceProxy camProxy(robotIp, 9559);

  /** Subscribe a client image requiring 320*240 and BGR colorspace. */
  const std::string clientName =
      camProxy.subscribe("test", kQVGA, kBGRColorSpace, 30);

  /** Create an IplImage header to wrap into an OpenCV image. */
  IplImage* imgHeader = cvCreateImageHeader(cvSize(320, 240), 8, 3);

  /** Retrieve an image from the camera. The image is returned as a
   * container object with the following fields:
   * 0 = width
   * 1 = height
   * 2 = number of layers
   * 3 = color space index (see alvisiondefinitions.h)
   * 4 = time stamp (seconds)
   * 5 = time stamp (microseconds)
   * 6 = image buffer (size = width * height * number of layers)
   */
  ALValue img = camProxy.getImageRemote(clientName);

  /** Access the image buffer (field 6) and assign it to the OpenCV
   * image container. */
  imgHeader->imageData = (char*) img[6].GetBinary();
}

There will be a compilation error due to OpenCV; see http://users.aldebaran-robotics.com/index.php?option=com_kunena&Itemid=14&func=view&catid=68&id=8133
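When the client is finished, a likely teardown matching the subscribe and cvCreateImageHeader calls above is:

/** Release the OpenCV header (the pixel buffer is owned by the ALValue)
 * and unsubscribe the client from ALVideoDevice. */
cvReleaseImageHeader(&imgHeader);
camProxy.unsubscribe(clientName);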

Page 41: Perception for Robot Detection

Connect to NAO

• Wired Connection (Windows only): plug in the Ethernet cable, then press the chest button; NAO will speak its IP address. Connect to NAO using a web browser.
• Wireless Connection: http://www.aldebaran-robotics.com/documentation/nao/nao-connecting.html

Page 42: Perception for Robot Detection

Software, Documentation and Forum

http://users.aldebaran-robotics.com/

Account: nturobotpal
Password: xxxxxxxxxx
