
Transcript
Page 1

Multiple Instance Learning (MIL)

Overview

• Goal: simultaneously train a classifier and bring the data into correspondence.

• Training data consists of bags: $\{x_{i1}, \ldots, x_{im}\}$.

Multiple Pose Learning (MPL)

Overview

• Goal: simultaneously group the positive data, and train classifiers for each of the K groups.

• Training data is given in standard form.

Simultaneous Learning and Alignment: Multi-Instance and Multi-Pose Learning

Boris Babenko^1   Piotr Dollár^2   Zhuowen Tu^3   Serge Belongie^{1,2}

1 Computer Science and Engineering

University of California, San Diego

{bbabenko,sjb}@cs.ucsd.edu

2 Electrical Engineering,
California Institute of Technology
[email protected]

3 Lab of Neuro Imaging
University of California, Los Angeles
[email protected]

Abstract


In object recognition in general and in face detection in particular, data alignment is necessary to achieve good classification results with certain statistical learning approaches such as Viola-Jones. Data can be aligned in one of two ways: (1) by separating the data into coherent groups and training separate classifiers for each; (2) by adjusting training samples so they lie in correspondence. If done manually, both procedures are labor intensive and can significantly add to the cost of labeling. In this paper we present a unified boosting framework for simultaneous learning and alignment. We present a novel boosting algorithm for Multiple Pose Learning (mpl), where the goal is to simultaneously split data into groups and train classifiers for each. We also

bag labels: $y_i \in \{0, 1\}$

• Bag labels are defined as $y_i = \max_j y_{ij}$, but the instance labels $y_{ij}$ are unknown during training (latent variables).

MIL-BOOST

• Define bag probability as a softmax of instance probabilities: $p_i = g_j(p_{ij})$, where $p_{ij} = \sigma\big(H(x_{ij})\big)$.

• Derivative of the loss function gives us the instance weights: $w_{ij} = \frac{\partial \mathcal{L}}{\partial H(x_{ij})}$.

• Algorithm Summary:
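The weight computation above can be sketched in a few lines, assuming the noisy-OR (NOR) choice of softmax; the function name and input layout are illustrative, not from the poster. For NOR, the derivative of the log likelihood works out to $w_{ij} = (y_i - p_i)\, p_{ij} / p_i$.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def milboost_weights(H, y):
    """Instance weights for one round of MIL-Boost (NOR softmax sketch).

    H : list of 1-D arrays; H[i][j] is the strong-classifier score for
        instance j of bag i (hypothetical input layout).
    y : array of bag labels in {0, 1}.
    """
    weights = []
    for Hi, yi in zip(H, y):
        p_ij = sigmoid(Hi)                  # instance probabilities
        p_i = 1.0 - np.prod(1.0 - p_ij)     # NOR bag probability
        weights.append((yi - p_i) * p_ij / p_i)
    return weights
```

The resulting weights would then drive an ordinary boosting round: instances in positive bags that the model already explains get small weight, and instances in negative bags get negative weight.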

• Instance labels are defined as $y_{ik} = y_i \wedge z_{ik}$, where $z_{ik}$ is a latent (unknown) variable, which is positive if example $i$ is in group $k$.

MPL-BOOST

• Define probability as a softmax of the probabilities determined by each classifier: $p_i = g_k(p_{ik})$, where $p_{ik} = \sigma\big(H_k(x_i)\big)$.

• Derivative of the loss function gives us the instance weights for each classifier: $w_{ik} = \frac{\partial \mathcal{L}}{\partial H_k(x_i)}$.

• Algorithm Summary:
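The per-classifier weights can be sketched the same way as the MIL case, again assuming the NOR softmax (now taken over the K classifiers rather than over instances); names and input layout are illustrative:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def mplboost_weights(scores, y):
    """Per-classifier example weights for one round of MPL-Boost (sketch).

    scores : (n, K) array, scores[i, k] = H_k(x_i) for the K pose
             classifiers (hypothetical input layout).
    y      : (n,) array of example labels in {0, 1}.
    """
    p_ik = sigmoid(scores)                    # (n, K) per-classifier probs
    p_i = 1.0 - np.prod(1.0 - p_ik, axis=1)   # (n,) NOR over classifiers
    # w_ik = (y_i - p_i) * p_ik / p_i, broadcast across the K columns
    return (y - p_i)[:, None] * p_ik / p_i[:, None]
```

Each column of the returned matrix weights the examples for one of the K classifiers, so the classifier that best explains a positive example ends up claiming it.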

Gradient Boosting Review

• Boosting trains a strong classifier of the form $H(x) = \sum_t \alpha_t h_t(x)$.

• For a given loss function $\mathcal{L}$, perform gradient descent in function space. Each step is one weak classifier.

• Let $p_i = \sigma\big(H(x_i)\big)$, and $w_i = \frac{\partial \mathcal{L}}{\partial H(x_i)}$.

• At step $t$ we want a weak classifier that is close to the gradient: $h_t = \arg\max_h \sum_i h(x_i)\, w_i$.

[Figure: a gradient step in function space, showing the current strong classifier, the chosen weak classifier, and the other candidate weak classifiers.]

[ Re-derivation of Viola et al. 2005 ]
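The review can be condensed into a single functional-gradient step; this is a minimal sketch, where the stump set, the fixed step size `alpha`, and all function names are illustrative rather than taken from the poster. For the log-likelihood loss, the functional gradient is simply $w_i = y_i - p_i$.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def boost_round(X, y, H, stumps, alpha=0.5):
    """One gradient-boosting step with the log-likelihood loss (sketch).

    X      : (n, d) data;  y : (n,) labels in {0, 1}.
    H      : (n,) current strong-classifier scores on the data.
    stumps : candidate weak classifiers, each mapping X -> {-1, +1}^n.
    """
    p = sigmoid(H)
    w = y - p                                   # functional gradient dL/dH
    # pick the weak classifier best aligned with the gradient
    best = max(stumps, key=lambda h: np.dot(h(X), w))
    return H + alpha * best(X), best
```

Repeating this step accumulates the weighted sum $H(x) = \sum_t \alpha_t h_t(x)$ described above.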

review Multiple Instance Learning (mil), and in particular mil-boost, and describe how to use it to simultaneously train a classifier and bring data into correspondence. We show results on variations of LFW and MNIST, demonstrating the potential of these approaches.

• Log likelihood is a standard loss function that we will use: $\mathcal{L}(H) = \sum_i \big( y_i \log p_i + (1 - y_i) \log (1 - p_i) \big)$, where $p_i = \sigma\big(H(x_i)\big)$.

Experiments

Approximating Max


[Figures: experimental results on LFW and MNIST; MILBoost detections on LFW and MNIST.]

Softmax models $g$ for approximating the max:

• LSE: $g(v) = \frac{1}{r} \log \big( \frac{1}{m} \sum_j e^{r v_j} \big)$  ( Boyd et al. )

• NOR: $g(v) = 1 - \prod_j (1 - v_j)$  ( Viola et al. )

• ISR: $g(v) = \frac{\sum_j v'_j}{1 + \sum_j v'_j}$, where $v'_j = \frac{v_j}{1 - v_j}$  ( Keeler et al. , Viola et al. )
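A quick numeric check that these softmax models track the max; this is a sketch, and the sharpness parameter `r` of LSE is an illustrative choice (larger `r` approximates the max more tightly):

```python
import numpy as np

def nor(v):
    """Noisy-OR: 1 - prod_j (1 - v_j)."""
    return 1.0 - np.prod(1.0 - v)

def lse(v, r=20.0):
    """Log-sum-exp: (1/r) * log( mean_j exp(r * v_j) )."""
    return np.log(np.mean(np.exp(r * v))) / r

def isr(v):
    """ISR: sum(v') / (1 + sum(v')) with v'_j = v_j / (1 - v_j)."""
    u = v / (1.0 - v)
    return u.sum() / (1.0 + u.sum())

v = np.array([0.1, 0.2, 0.7])
# all three stay close to max(v) = 0.7 while remaining differentiable
```

Differentiability is the point: the true max would pass gradient only to one instance, while these models spread the instance weights smoothly.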