A Multi-Paradigm Object Tracker for Robot Navigation Assisted by External Computer Vision

Marcel-Titus Marginean and Chao Lu

Computer and Information Sciences, Towson University, 8000 York Rd, Towson, MD 21252, USA

[email protected] [email protected]

System Overview

Paper: A Distributed Processing Architecture for Vision Based Domestic Robot Navigation

Performing CPU-intensive tasks on the Base Station

Aid robot navigation with external information from fixed cameras

Software Overview

CM – Camera Module

SAM – Situation Awareness Module

RM – Robot Module

ARM – Autonomous Robot Module

Camera Module Overview

One Camera Module for each Fixed Camera

Capture images via HTTP at a fixed frame rate

Run Object Tracking Algorithm

Stream to SAM a Data Vector with Basic Tracking Information for each tracked object (an illustrative sketch of such a vector follows this list)

Act as a server providing information upon request

Answer queries from SAM about Extended Information for a tracked object, images around a tracked object, or full captured frames
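
As an illustration of the per-object Basic Tracking Information a Camera Module could stream to SAM, here is a minimal Python sketch; every field name is an assumption for illustration, not the system's actual wire format.

    from dataclasses import dataclass

    # Hypothetical per-object record streamed by a Camera Module each frame;
    # field names are illustrative assumptions, not the paper's data format.
    @dataclass
    class TrackVector:
        camera_id: int      # which fixed camera produced the measurement
        target_id: int      # stable ID assigned by the tracker
        frame_no: int       # frame index at the configured capture rate
        x: float            # estimated position in image coordinates
        y: float
        vx: float           # estimated velocity from the Kalman filter
        vy: float
        area: float         # area of the associated motion blob
        confidence: float   # confidence of the chosen hypothesis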

MP-Tracker Highlights

Employs a novel combination of well-known computer vision algorithms to fulfill the following requirements:

Detection and tracking of multiple moving objects

Able to cope with temporary occlusions caused by motion

Able to cope with sudden changes in direction of movement

Efficient use of CPU allowing multiple trackers to run on the same computer

Low CPU usage when idle, allowing other lower-priority tasks to share the system

Related Work

Multiple Hypothesis Tracking (MHT), introduced by Donald Reid for radar tracking in his seminal 1979 paper

MHT adapted for Computer Vision by Antunes, de Matos and Gaspar

Background subtraction and segmentation have been used for tracking vehicles on highways by Jun, Aggarwal and Gokmen

The raw Lucas-Kanade method has been used for tracking by Bissacco and Ghiasi, taking advantage of specialized hardware acceleration to achieve real-time operation

An application of histograms to tracking has been presented by Benfold and Reid

Method Overview

Multi-stage approach to tracking. The first step is motion blob detection

Generate a set of “One-to-Many” association hypotheses between already-tracked targets and newly detected motion blobs

Refine the hypotheses in subsequent steps by employing various computer vision techniques

After each step we extract the hypotheses that can be picked unambiguously, easing the load for the following steps

The motion blobs that were not associated with any target are candidates to be checked by MHT against previous leftover blobs to create new targets – Target Initialization

The leftovers after this step are put in the history for the next frame (a structural sketch of this per-frame loop follows the list)
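
The per-frame control flow described above can be condensed into the following Python sketch; every callable is a placeholder supplied by the caller, so the function encodes only the ordering of the stages, not their implementation.

    # Sketch of one frame of the multi-stage tracker; all stage functions are
    # injected, so this captures only the control flow described on this slide.
    def process_frame(frame, targets, leftover_history,
                      detect_blobs, make_hypotheses, refine_steps,
                      pick_unambiguous, pick_ambiguous, init_targets_mht):
        blobs = detect_blobs(frame)                    # step 1: motion blob detection
        hyps = make_hypotheses(targets, blobs)         # one-to-many target/blob hypotheses
        for refine in refine_steps:                    # histogram, area, Lucas-Kanade, ...
            refine(hyps, frame)                        # update hypothesis confidences
            pick_unambiguous(hyps)                     # commit clear associations early
        pick_ambiguous(hyps)                           # final, threshold-based picking
        leftovers = [b for b in blobs if not b.associated]
        new_targets = init_targets_mht(leftovers, leftover_history)  # MHT initialization
        leftover_history.append(leftovers)             # keep leftovers for the next frames
        return targets + new_targets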

Target Modelling

Each Target contains a Kalman filter modelling the 2D equations of motion without a control signal, the known previous trajectory, the area, and information about the last blob associated with it

The Kalman filter is used in a predict-update cycle (a minimal sketch follows)
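
A minimal sketch of such a per-target filter, assuming a constant-velocity 2D model and OpenCV's KalmanFilter API; the noise covariances are illustrative tuning guesses, not the values used in the paper.

    import numpy as np
    import cv2

    # Constant-velocity 2D Kalman filter: state [x, y, vx, vy], measurement [x, y],
    # no control input.
    def make_kalman(dt=1.0):
        kf = cv2.KalmanFilter(4, 2)
        kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                        [0, 1, 0, dt],
                                        [0, 0, 1,  0],
                                        [0, 0, 0,  1]], np.float32)
        kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                         [0, 1, 0, 0]], np.float32)
        kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2     # illustrative values
        kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
        return kf

    # Predict-update cycle: predict the position for this frame, then correct it
    # with the centroid of the blob that was associated with the target.
    kf = make_kalman()
    predicted = kf.predict()
    measurement = np.array([[120.0], [80.0]], np.float32)  # example blob centroid
    corrected = kf.correct(measurement)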

Hypothesis Management

Each Hypothesis is a triplet {TargetID, set<BlobID>, confidence}

The initial confidence value is inversely proportional to the size of the Uncertainty Rectangle

Ambiguity arises when two or more EBRs intersect the same Blob

The Ambiguity Resolving Algorithm updates each Hypothesis' confidence at every step, based on the Score calculated at that step, with the formula: Conf = (1 - α)*Conf + α*Score

The coefficient α depends on the level of trust in the accuracy of a given algorithm; this trust level is assigned initially by experiment and is adjusted based on the particular situation detected in the current frame (a short sketch of the update follows).
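
A short sketch of that update rule; the α values in the usage examples are purely illustrative, not the experimentally assigned trust levels.

    # Conf = (1 - α)*Conf + α*Score: exponential blend of the previous confidence
    # with the score produced by the current refinement step.
    def update_confidence(conf, score, alpha):
        return (1.0 - alpha) * conf + alpha * score

    # A step that is trusted strongly pulls the confidence quickly toward its score,
    conf = update_confidence(conf=0.4, score=0.9, alpha=0.7)   # -> 0.75
    # while a step whose trust was lowered for this frame barely moves it.
    conf = update_confidence(conf=conf, score=0.2, alpha=0.1)  # -> 0.695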

Two methods of extracting a hypothesis: Unambiguous Picking and Ambiguous Picking

Refining steps attempt to allocate the blob sets unambiguously and update the overall confidence in each hypothesis

Unambiguous Picking – pick any hypothesis with confidence over an Unambiguous Threshold if its Blob Set is disjoint from every other Hypothesis' set. Attempted after each refining step.

Ambiguous Picking is attempted once, after all refining steps. It picks any hypothesis over a High Threshold if every other hypothesis claiming the same Blob has a confidence level below a very Low Threshold (both picking rules are sketched after this list)

High Threshold > Unambiguous Threshold > Low Threshold
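
A sketch of the two picking rules under the triplet representation above; the Hypothesis class and the threshold values are placeholders, not the experimentally chosen ones.

    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        target_id: int
        blob_ids: set
        confidence: float

    HIGH_T, UNAMBIG_T, LOW_T = 0.85, 0.60, 0.20   # High > Unambiguous > Low

    # After each refining step: commit hypotheses whose blob set nobody else claims.
    def pick_unambiguous(hypotheses, picked):
        for h in list(hypotheses):
            others = [o for o in hypotheses if o is not h]
            if h.confidence > UNAMBIG_T and all(h.blob_ids.isdisjoint(o.blob_ids) for o in others):
                picked.append(h)
                hypotheses.remove(h)

    # Once, after all refining steps: commit a strong hypothesis when every rival
    # claiming one of its blobs is very weak.
    def pick_ambiguous(hypotheses, picked):
        for h in list(hypotheses):
            rivals = [o for o in hypotheses
                      if o is not h and not h.blob_ids.isdisjoint(o.blob_ids)]
            if h.confidence > HIGH_T and all(o.confidence < LOW_T for o in rivals):
                picked.append(h)
                hypotheses.remove(h)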

Fuzzy Histogram

The first computer vision method used in hypothesis refinement. Used both for Target/Blob association and for new Target initialization from Leftover Blobs

The FH is calculated over the image pixels located on top of a Motion Blob

Three-dimensional histogram in RGB space with a small number (4 to 6) of sampling points

Classic trapezoidal membership function

Comparison between two histograms is done by calculating a matching score (an illustrative sketch follows the figure note below)

Figures on the slide: the classic trapezoidal membership function and the score function used to compare two Fuzzy Histograms.
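
One way such a fuzzy RGB histogram with trapezoidal membership could be computed over the blob pixels is sketched below; the bin count, the trapezoid width, and the intersection-based matching score are assumptions for illustration, not the paper's exact definitions.

    import numpy as np

    N_BINS = 4                                  # sampling points per channel (paper uses 4 to 6)
    CENTERS = np.linspace(0, 255, N_BINS)
    HALF_TOP = 16.0                             # half-width of the trapezoid's flat top

    # Trapezoidal membership: 1 near the sampling point, falling linearly to 0.
    def trapezoid(values, center, spread):
        d = np.abs(values - center)
        return np.clip((spread - d) / (spread - HALF_TOP), 0.0, 1.0)

    # 3D fuzzy histogram over the pixels lying on top of a motion blob.
    def fuzzy_histogram(pixels, mask):
        rgb = pixels[mask].astype(np.float32)   # pixels: HxWx3 uint8, mask: HxW bool
        spread = CENTERS[1] - CENTERS[0]        # neighbouring bins overlap
        hist = np.zeros((N_BINS, N_BINS, N_BINS), np.float32)
        for i, ci in enumerate(CENTERS):
            mi = trapezoid(rgb[:, 0], ci, spread)
            for j, cj in enumerate(CENTERS):
                mj = trapezoid(rgb[:, 1], cj, spread)
                for k, ck in enumerate(CENTERS):
                    hist[i, j, k] = np.sum(mi * mj * trapezoid(rgb[:, 2], ck, spread))
        return hist / max(float(hist.sum()), 1e-6)   # normalise for different blob sizes

    # Illustrative matching score in [0, 1]: fuzzy histogram intersection.
    def match_score(h1, h2):
        return float(np.minimum(h1, h2).sum())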

Area Matching

The algorithm assumes that most of the time the object is not occluded

In that case, changes in the area of the blobs are relatively small from frame to frame

An area matching score is calculated as the ratio between the smaller and the larger area of the Motion Blob in two consecutive frames

An occlusion detection algorithm attempts to adjust the weight associated with this test whenever an occlusion is suspected

An occlusion is suspected when the blobs undergo larger-than-expected changes in size along the direction of motion (a small sketch of the score follows)
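
A small sketch of the area matching score as the ratio of the smaller to the larger blob area in consecutive frames.

    # Close to 1 for an unoccluded object, dropping toward 0 on large size changes.
    def area_score(prev_area, curr_area):
        small, large = sorted((prev_area, curr_area))
        return small / large if large > 0 else 0.0

    print(area_score(400, 440))   # ~0.91: mild change, likely the same unoccluded object
    print(area_score(400, 150))   # ~0.38: large change, the test's weight would be reduced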

Lucas-Kanade Matching

A sparse optical flow method that detects matching points in two consecutive images

CPU-intensive, so it is not run on the whole image but only on cropped images around the Blobs of interest

Shi-Tomasi corners are calculated on the rectangle of interest and filtered to keep only those located on or at the boundary of the Motion Blobs.

The filtered features are passed to the LK optical flow function for matching

Requires the Motion Blobs to have a certain minimum size, otherwise it cannot work (a sketch using the standard OpenCV calls follows)
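
A sketch of this step using the standard OpenCV calls (cv2.goodFeaturesToTrack for Shi-Tomasi corners and cv2.calcOpticalFlowPyrLK for sparse optical flow); passing the blob mask directly to the corner detector is an assumed simplification of the filtering described above.

    import numpy as np
    import cv2

    def lk_match(prev_gray, curr_gray, rect, blob_mask):
        x, y, w, h = rect                              # rectangle of interest around the blob
        prev_roi = prev_gray[y:y + h, x:x + w]
        curr_roi = curr_gray[y:y + h, x:x + w]
        mask_roi = blob_mask[y:y + h, x:x + w].astype(np.uint8) * 255
        # Shi-Tomasi corners restricted to pixels on the motion blob
        corners = cv2.goodFeaturesToTrack(prev_roi, maxCorners=50, qualityLevel=0.01,
                                          minDistance=5, mask=mask_roi)
        if corners is None:                            # blob too small or featureless
            return None
        # Sparse pyramidal Lucas-Kanade flow between the two crops
        new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_roi, curr_roi, corners, None)
        good_old = corners[status.flatten() == 1]
        good_new = new_pts[status.flatten() == 1]
        return good_old, good_new                      # matched point pairs for scoring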

End of Frame Processing

The Second Chance Algorithm uses MSER segmentation to try to disambiguate objects so close that their motion blobs have merged; Lucas-Kanade matching is then restarted, filtered by the segmented regions (see the sketch after this list)

Target Initialization checks the current unassociated blobs against the recent history of unassociated blobs, using MHT to initialize new Targets

Any leftover blob is added to the recent history list to be checked in subsequent frames

Targets that have been “Lost” for too long are removed from the system. The removal time is shorter for Targets on a trajectory that takes them out of the field of view
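
A minimal sketch of the MSER part of the Second Chance pass using OpenCV's MSER detector; the merged-blob rectangle is assumed to come from the earlier stages, and the restarted Lucas-Kanade matching is omitted.

    import cv2

    # Segment the crop around a merged motion blob into stable regions; these
    # regions would then be used to re-filter the Lucas-Kanade features.
    def second_chance_regions(gray_frame, merged_blob_rect):
        x, y, w, h = merged_blob_rect
        crop = gray_frame[y:y + h, x:x + w]
        mser = cv2.MSER_create()
        regions, boxes = mser.detectRegions(crop)
        return regions, boxes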

Conclusions

Successfully tracked two RC vehicles and one person using a 640x480 IP camera mounted on the wall near the ceiling, overlooking the room

A comparison of average tracking times showed MP-Tracker outperforming raw Lucas-Kanade matching of the whole image by a factor of about 3 (avg: 39.5 ms vs. 121.5 ms) on a Pentium E5200 @ 2.5 GHz

Having an average time below 50 ms/frame makes MP-Tracker suitable for robot tracking

On unoccluded videos, MP-Tracker is able to track two RC vehicles using Kalman Filter, Histogram and Area matching alone, without LK or segmentation, in over 95% of the frames.

The performance degrades significantly when the person walks close to the camera, occluding most of the field of view.

Exploring alternatives for the Second Chance Algorithm, such as contour matching or less CPU-intensive segmentation methods

Stop the tracking after a hard real-time limit and broadcast predicted positions with a very low confidence level

SAM will then rely more on the tracking from the other camera or do its own prediction of the 3D trajectory

The high performance in “average” situations makes MP-Tracker a very promising algorithm for robot operations assisted by external CV once these issues are solved.

Questions and Thank you!
