
Solving Markov Random Fields using

Dynamic Graph Cuts & Second Order Cone Programming

Relaxations

M. Pawan Kumar, Pushmeet Kohli

Philip Torr

Talk Outline

• Dynamic Graph Cuts
 – Fast re-estimation of the cut
 – Useful for video
 – Object-specific segmentation

• Estimation of non-submodular MRFs
 – Relaxations beyond linear!!

Example: Video Segmentation

Model Based Segmentation

[Figures: image segmentation and pose estimate]

[Images courtesy: M. Black, L. Sigal]

Min-Marginals

[Figure: image, MAP solution, and foreground belief at low, moderate, and high smoothness; colour scale from 0 to 1]

Uses of Min-Marginals

• Estimate of the true marginals (uncertainty)

• Parameter learning

• Get the best n solutions easily

Dynamic Graph Cuts

[Diagram: solving problem PA to obtain solution SA is a computationally expensive operation. When A and B are similar, the differences between A and B define a simpler problem PB*, and solving it to obtain SB is a cheaper operation.]

Our Algorithm

[Diagram: max-flow is computed on the graph Ga of the first segmentation problem, giving the MAP solution and a residual graph Gr. The difference between Ga and Gb (the second segmentation problem) is applied to Gr, giving the updated residual graph G`, from which the second problem is solved.]

Computing the st-mincut from Max-flow algorithms

• The max-flow problem
 – Edge capacity and flow balance constraints

• Notation
 – Residual capacity (edge capacity – current flow)
 – Augmenting path

• Simple augmenting-path-based algorithms
 – Repeatedly find augmenting paths and push flow.
 – Saturated edges constitute the st-mincut. [Ford-Fulkerson theorem]
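The augmenting-path scheme above can be sketched in a few lines. This is a minimal Edmonds-Karp variant (BFS finds shortest augmenting paths); the graph, node names, and capacities below are illustrative, not from the talk:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Repeatedly find augmenting paths via BFS and push flow until
    no s-t path remains in the residual graph."""
    # residual[u][v] holds the remaining capacity on edge (u, v)
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)  # reverse edges
    flow = 0
    while True:
        # BFS for an augmenting path with positive residual capacity
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # saturated edges now define the st-mincut
        # collect the path, find its bottleneck, update residual capacities
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Illustrative graph: s->a (3), s->b (2), a->b (1), a->t (2), b->t (3)
caps = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
print(max_flow(caps, 's', 't'))  # -> 5
```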

Key Observation

[Figure: two-node graph (a1, a2) between source (0) and sink (1); the t-edge capacities 9 and 4 of node a2 become 9 + α and 4 + α]

Adding a constant α to both the t-edges of a node does not change the edges constituting the st-mincut.

Reparametrization

[Figure: two-node graph (a1, a2) between source (0) and sink (1); t-edge capacities 2, 5, 9, 4 and n-edge capacities 2, 1; both t-edges of a2 increased by α]

E(a1, a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2

E*(a1, a2) = E(a1, a2) + α(a2 + ā2)
           = E(a1, a2) + α        [since a2 + ā2 = 1]

All reparametrizations of the graph are sums of these two types.

Reparametrization, second type

[Figure: two-node graph (a1, a2) between source (0) and sink (1); capacities 2, 5 + α, 2 + α, 1 − α]

Both types maintain the solution and add a constant α to the energy.

Reparametrization

• Nice result (easy to prove)

• All other reparametrizations can be viewed in terms of these two basic operations.

• Proof in Hammer, and also in one of Vlad’s recent papers.

Graph Re-parameterization

[Figure: original graph G with source s, sink t, and nodes xi, xj; each edge labelled flow/residual capacity: 0/9, 0/7, 0/5, 0/2, 0/4, 0/1]

Compute max-flow

[Figure: residual graph Gr after max-flow, with edges labelled flow/residual capacity: 0/12, 5/2, 3/2, 1/0, 2/0, 4/0; the saturated edges are cut, giving the st-mincut]

Update t-edge Capacities

[Figure: residual graph Gr; a t-edge of node xi carries 5 units of flow when its capacity changes]

• Capacity changes from 7 to 4
 – Edge capacity constraint violated! (flow > capacity)

• Excess flow e = flow – new capacity = 5 – 4 = 1

• Add e to both t-edges connected to node xi (a reparametrization), giving the updated residual graph G`

Update n-edge Capacities

[Figure: residual graph; the n-edge (xi, xj) carries 3 units of flow when its capacity changes, leaving an excess at one node and a deficiency at the other]

• Capacity changes from 5 to 2
 – Edge capacity constraint violated!

• Reduce flow to satisfy the constraint
 – Causes flow imbalance: an excess at one node, a deficiency at the other

• Push excess flow to/from the terminals

• Create capacity by adding α = excess to both t-edges

Complexity Analysis of MRF Update Operations

MRF Energy Operation         Graph Operation                                          Complexity
modifying a unary term       updating a t-edge capacity                               O(1)
modifying a pairwise term    updating an n-edge capacity                              O(1)
adding a latent variable     adding a node                                            O(1)
deleting a latent variable   setting the capacities of all edges of a node to zero    O(k)*

*Requires k edge update operations, where k is the degree of the node.
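The O(1) t-edge update above can be sketched as follows. The per-node dictionaries `t_cap` and `t_flow` are an assumed data layout for illustration, not the authors' implementation:

```python
def update_t_edge(node, new_cap, which, t_cap, t_flow):
    """O(1) t-edge capacity update on the residual graph.

    t_cap[node] / t_flow[node] are dicts {'s': ..., 't': ...} holding the
    capacity of and flow through the node's source- and sink-edges.
    If the new capacity is below the current flow, the edge-capacity
    constraint is violated; adding the excess e = flow - new_cap to BOTH
    t-edges is a reparametrization, so the st-mincut is unchanged.
    """
    t_cap[node][which] = new_cap
    excess = t_flow[node][which] - new_cap
    if excess > 0:
        t_cap[node]['s'] += excess
        t_cap[node]['t'] += excess

# Example from the slides: the sink-edge capacity of node i drops from
# 7 to 4 while 5 units of flow pass through it; excess e = 5 - 4 = 1.
t_cap = {'i': {'s': 2, 't': 7}}
t_flow = {'i': {'s': 2, 't': 5}}
update_t_edge('i', 4, 't', t_cap, t_flow)
print(t_cap['i'])  # both t-edges grew by the excess 1: {'s': 3, 't': 5}
```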

Improving the Algorithm

• Finding augmenting paths is time consuming.

• Dual-tree max-flow algorithm [Boykov & Kolmogorov, PAMI 2004]
 – Reuses search trees after each augmentation.
 – Empirically shown to be substantially faster.

• Our idea
 – Reuse search trees from the previous graph cut computation.
 – Saves search-tree creation time [O(#edges)].
 – Search trees have to be modified to make them consistent with the new graph.
 – Constrain the search for augmenting paths: new paths must contain at least one updated edge.

Reusing Search Trees

• c’ = measure of change in the energy

• Running time
 – Dynamic algorithm: c’ + time to re-create the search tree
 – Improved dynamic algorithm: c’

• Video segmentation example
 – For duplicate image frames, no update time is needed.

Dynamic Graph Cut vs Active Cuts

• Our method: flow recycling

• Active Cuts (AC): cut recycling

• Both methods: tree recycling

Experimental Analysis

MRF consisting of 2×10^5 latent variables connected in a 4-neighbourhood.

[Plot: running time of the dynamic algorithm]

Part II: SOCP Relaxations for MRFs

Aim

• Accurate MAP estimation of pairwise Markov random fields

[Figure: four variables V1–V4, each taking label ‘−1’ or ‘1’, with unary costs at the nodes and pairwise costs on the edges]

Random variables V = {V1, ..., V4}
Label set L = {−1, 1}
Labelling m = {1, −1, −1, 1}

Summing the unary cost of each assigned label and the pairwise cost of each edge, the cost of the labelling accumulates term by term:

Cost(m) = 2 + 1 + 2 + 1 + 3 + 1 + 3 = 13

Minimum cost labelling = MAP estimate

Pr(m) ∝ exp(−Cost(m))
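The cost accumulation above can be sketched for a small MRF. The chain structure, unary table, and Potts penalty below are hypothetical stand-ins for the slide's figure values:

```python
from itertools import product

def cost(labelling, unary, pairwise):
    """Energy of a labelling: sum of unary costs plus pairwise costs
    over neighbouring pairs (a chain V1-V2-V3 here)."""
    c = sum(unary[i][m] for i, m in enumerate(labelling))
    c += sum(pairwise[i][(labelling[i], labelling[i + 1])]
             for i in range(len(labelling) - 1))
    return c

# Hypothetical costs for a 3-variable chain with labels {-1, +1}
unary = [{-1: 5, 1: 2}, {-1: 2, 1: 4}, {-1: 6, 1: 3}]
potts = {(-1, -1): 0, (1, 1): 0, (-1, 1): 3, (1, -1): 3}  # Potts model
pairwise = [potts, potts]

# MAP estimate = minimum-cost labelling, since Pr(m) ∝ exp(-Cost(m))
best = min(product([-1, 1], repeat=3), key=lambda m: cost(m, unary, pairwise))
print(best, cost(best, unary, pairwise))  # -> (1, 1, 1) 9
```

Brute force is exponential in the number of variables, which is exactly why the talk turns to convex relaxations.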

Objectives

• Applicable to all types of neighbourhood relationships

• Applicable to all forms of pairwise costs

• Guaranteed to converge (convex approximation)

Motivation: Subgraph Matching [Torr 2003, Schellewald et al. 2005]

[Figure: graphs G1 and G2; an MRF with variables V1, V2, V3, each with candidate label set {A, B, C, D}]

• Unary costs are uniform

• Pairwise costs form a Potts model: a pair of matches is valid (low cost) when |d(mi, mj) − d(Vi, Vj)| < ε, and invalid (high cost) otherwise


Motivation: Matching Pictorial Structures [Felzenszwalb et al. 2001]

[Figure: part likelihoods (outline and texture) and spatial prior; an image with parts P1, P2, P3, each parameterised by a pose (x, y, θ, σ), forming an MRF]

• Unary potentials are negative log likelihoods, e.g. Pr(Cow | Image)

• Pairwise costs form a Potts model: valid pairwise configurations have low cost

Outline

• Integer Programming Formulation

• Previous Work

• Our Approach
 – Second Order Cone Programming (SOCP)
 – SOCP Relaxation
 – Robust Truncated Model

• Applications
 – Subgraph Matching
 – Pictorial Structures

Integer Programming Formulation

[Figure: two variables V1, V2 with labels ‘−1’ and ‘1’; unary costs 5, 2 on V1 and 2, 4 on V2]

Unary cost vector u = [5 2 ; 2 4]T

Labelling m = {1, −1}

Label vector x = [−1 1 ; 1 −1]T
(the entry for V1 = −1 is −1 and the entry for V1 = 1 is 1, since V1 takes label 1)

Recall that the aim is to find the optimal x.

Sum of unary costs = (1/2) ∑i ui (1 + xi)

Pairwise Cost

• Cost of V1 = −1 together with V1 = −1 is 0 (within-variable entries are zero)

• Cost of V1 = −1 together with V2 = −1 is 0; cost of V1 = −1 together with V2 = 1 is 1; and so on

[4 × 4 pairwise cost matrix P: within-variable blocks are zero; the symmetric V1–V2 block holds the edge costs 0, 1, 3, 0]

Sum of pairwise costs = (1/4) ∑ij Pij (1 + xi)(1 + xj)

(1/4) ∑ij Pij (1 + xi)(1 + xj) = (1/4) ∑ij Pij (1 + xi + xj + xixj)
                               = (1/4) ∑ij Pij (1 + xi + xj + Xij)

where X = x xT, i.e. Xij = xi xj.

Integer Programming Formulation: Constraints

• Each variable should be assigned a unique label:
 ∑(i ∈ a) xi = 2 − |L|   ∀ Va

• Marginalization constraint:
 ∑(j ∈ b) Xij = (2 − |L|) xi   ∀ Vb

Integer Programming Formulation [Chekuri et al., SODA 2001]

x* = argmin (1/2) ∑ ui (1 + xi) + (1/4) ∑ Pij (1 + xi + xj + Xij)

 ∑(i ∈ a) xi = 2 − |L|   ∀ Va
 ∑(j ∈ b) Xij = (2 − |L|) xi   ∀ Vb

 xi ∈ {−1, 1}
 X = x xT

The objective and the two linear constraints are convex; the integrality constraint and X = x xT are non-convex.

Key Point

• In modern optimization the issue is not linear vs. nonlinear, but convex vs. non-convex.

• We want to find a good convex relaxation of the integer program.


Linear Programming Formulation [Chekuri et al., SODA 2001]

Retain the convex part, relax the non-convex constraints:

x* = argmin (1/2) ∑ ui (1 + xi) + (1/4) ∑ Pij (1 + xi + xj + Xij)

 ∑(i ∈ a) xi = 2 − |L|   ∀ Va
 ∑(j ∈ b) Xij = (2 − |L|) xi   ∀ Vb

 xi ∈ {−1, 1}  →  xi ∈ [−1, 1]
 X = x xT      →  dropped; X becomes a variable to be optimized

Feasible region for X:

• Feasible region (IP): x ∈ {−1, 1}, X = x²
• Feasible region (Relaxation 1): x ∈ [−1, 1], X = x²
• Feasible region (Relaxation 2): x ∈ [−1, 1], X unconstrained

[Figure: the three feasible regions in the (x, X) plane]

Linear Programming Formulation

• Bounded algorithms proposed by Chekuri et al., SODA 2001

• α-expansion - Komodakis and Tziritas, ICCV 2005

• TRW - Wainwright et al., NIPS 2002

• TRW-S - Kolmogorov, AISTATS 2005

• Efficient, because it uses Linear Programming

• Not accurate

Semidefinite Programming Formulation [Lovász and Schrijver, SIAM Optimization, 1990]

Retain the convex part, relax the non-convex constraints:

x* = argmin (1/2) ∑ ui (1 + xi) + (1/4) ∑ Pij (1 + xi + xj + Xij)

 ∑(i ∈ a) xi = 2 − |L|   ∀ Va
 ∑(j ∈ b) Xij = (2 − |L|) xi   ∀ Vb

 xi ∈ {−1, 1}  →  xi ∈ [−1, 1]
 X = x xT      →  to be relaxed

Semidefinite Programming Formulation

[1 xT ; x X] = [1 ; x] [1 x1 x2 ... xn]

• Rank = 1: non-convex
• Xii = 1: convex
• Positive semidefinite: convex

Dropping the non-convex rank-1 constraint leaves a convex feasible set with Xii = 1 and [1 xT ; x X] ⪰ 0.

Schur’s Complement

[A B ; BT C] = [I 0 ; BT A⁻¹ I] [A 0 ; 0 C − BT A⁻¹ B] [I A⁻¹B ; 0 I]

A ⪰ 0 and C − BT A⁻¹ B ⪰ 0  ⟹  [A B ; BT C] ⪰ 0

Semidefinite Programming Formulation

By Schur’s complement,

[1 xT ; x X] = [1 0 ; x I] [1 0 ; 0 X − x xT] [1 xT ; 0 I]

so [1 xT ; x X] ⪰ 0  ⟺  X − x xT ⪰ 0.
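In one dimension the equivalence above says [[1, x], [x, X]] ⪰ 0 iff X − x² ≥ 0, which can be checked with the trace/determinant test for 2×2 matrices. The helper names and sample values are illustrative:

```python
def psd_2x2(a, b, c, d):
    """A symmetric 2x2 matrix [[a, b], [c, d]] is positive semidefinite
    iff its diagonal entries and determinant are non-negative."""
    assert b == c, "matrix must be symmetric"
    return a >= 0 and d >= 0 and a * d - b * c >= 0

def relaxed_ok(x, X):
    """1-D Schur complement: [[1, x], [x, X]] >= 0  iff  X - x*x >= 0."""
    return psd_2x2(1.0, x, x, X)

print(relaxed_ok(0.5, 0.25))  # True: X - x^2 = 0, on the boundary
print(relaxed_ok(0.5, 0.10))  # False: X < x^2 violates X - x x^T >= 0
print(relaxed_ok(0.5, 0.80))  # True: the relaxation allows X > x^2
```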

x* = argmin (1/2) ∑ ui (1 + xi) + (1/4) ∑ Pij (1 + xi + xj + Xij)

 ∑(i ∈ a) xi = 2 − |L|   ∀ Va
 ∑(j ∈ b) Xij = (2 − |L|) xi   ∀ Vb

 xi ∈ [−1, 1]
 Xii = 1,  X − x xT ⪰ 0

[Lovász and Schrijver, SIAM Optimization, 1990]

Feasible region for X:

• Feasible region (IP): x ∈ {−1, 1}, X = x²
• Feasible region (Relaxation 1): x ∈ [−1, 1], X = x²
• Feasible region (Relaxation 2): x ∈ [−1, 1], X ≥ x²

[Figure: the three feasible regions in the (x, X) plane]

Semidefinite Programming Formulation

• Formulated by Lovász and Schrijver, 1990

• Finds a full X matrix

• Max-cut - Goemans and Williamson, JACM 1995

• Max-k-cut - de Klerk et al., 2000

• Labelling problem - Torr, AISTATS 2003 (TR 2002)

• Accurate, but not efficient, as semidefinite programming algorithms are slow

Previous Work - Overview

             LP                    SDP
Examples     TRW-S, α-expansion    Max-k-Cut, Torr 2003
Accuracy     Low                   High
Efficiency   High                  Low

Is there a middle path?


Second Order Cone Programming

Second order cone: ||v|| ≤ t, or equivalently ||v||² ≤ st (e.g. x² + y² ≤ z²)

Minimize  fT x
subject to  ||Ai x + bi|| ≤ ciT x + di,   i = 1, ..., L

• Linear objective function

• Each constraint is an affine mapping of a second order cone (SOC) of ni dimensions

• Feasible regions are intersections of conic regions

Second Order Cone Programming

||v|| ≤ t  ⟺  [tI v ; vT t] ⪰ 0

(by Schur’s complement, the block matrix is PSD exactly when t ≥ 0 and t² − vTv ≥ 0)

Hence LP ⊆ SOCP ⊆ SDP.


First, a quick definition: the matrix dot product

A • B = ∑ij Aij Bij

[A11 A12 ; A21 A22] • [B11 B12 ; B21 B22] = A11B11 + A12B12 + A21B21 + A22B22

SDP Relaxation

x* = argmin (1/2) ∑ ui (1 + xi) + (1/4) ∑ Pij (1 + xi + xj + Xij)

 ∑(i ∈ a) xi = 2 − |L|   ∀ Va
 ∑(j ∈ b) Xij = (2 − |L|) xi   ∀ Vb

 xi ∈ [−1, 1]
 Xii = 1,  X − x xT ⪰ 0   ← to be further relaxed

We will derive the SOCP relaxation from the SDP relaxation.

1-D Example

X − x xT ⪰ 0  ⟹  X − x² ≥ 0

For two semidefinite matrices, the dot product is non-negative:
 A ⪰ 0, (X − x xT) ⪰ 0  ⟹  (X − x xT) • A ≥ 0

So x² ≤ X, an SOC of the form ||v||² ≤ st with s a scalar constant.

Feasible regions:
• IP: x ∈ {−1, 1}, X = x²
• Relaxation 1: x ∈ [−1, 1], X = x²
• Relaxation 2: x ∈ [−1, 1], X ≥ x²

For 1-D, the SOCP relaxation is the same as the SDP formulation.

2-D Example

X = [X11 X12 ; X21 X22] = [1 X12 ; X12 1]

x xT = [x1x1 x1x2 ; x2x1 x2x2] = [x1² x1x2 ; x1x2 x2²]

2-D Example

(X − x xT) = [1 − x1²  X12 − x1x2 ; X12 − x1x2  1 − x2²] ⪰ 0, so (X − x xT) • C ≥ 0 for any C ⪰ 0:

• C1 = [1 0 ; 0 0] ⪰ 0  ⟹  1 − x1² ≥ 0, i.e. −1 ≤ x1 ≤ 1  (the LP relaxation)

• C2 = [0 0 ; 0 1] ⪰ 0  ⟹  1 − x2² ≥ 0, i.e. −1 ≤ x2 ≤ 1  (the LP relaxation)

• C3 = [1 1 ; 1 1] ⪰ 0  ⟹  (x1 + x2)² ≤ 2 + 2X12, an SOC of the form ||v||² ≤ st

• C4 = [1 −1 ; −1 1] ⪰ 0  ⟹  (x1 − x2)² ≤ 2 − 2X12, an SOC of the form ||v||² ≤ st

General form of SOC constraints [Kim and Kojima, 2000]

Consider a matrix C1 = U UT ⪰ 0. Then (X − x xT) • C1 ≥ 0 gives

 ||UT x||² ≤ X • C1

an SOC of the form ||v||² ≤ st. Continue for C2, C3, ..., Cn.
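A small numeric check of this construction for 2×2 matrices; the vectors x, u and the matrix X below are illustrative values, not from the talk:

```python
def dot(A, B):
    """Matrix dot (Frobenius) product: sum of elementwise products."""
    return sum(A[i][j] * B[i][j] for i in range(2) for j in range(2))

def soc_constraint(x, X, u):
    """SOC constraint derived from C = u u^T: ||u^T x||^2 <= X . C."""
    C = [[u[0] * u[0], u[0] * u[1]], [u[1] * u[0], u[1] * u[1]]]
    lhs = (u[0] * x[0] + u[1] * x[1]) ** 2
    return lhs <= dot(X, C)

# With u = (1, 1) and X11 = X22 = 1 this reads (x1 + x2)^2 <= 2 + 2*X12;
# u = (1, -1) gives (x1 - x2)^2 <= 2 - 2*X12, matching the 2-D example.
x = (0.5, -0.5)
X = [[1.0, -0.25], [-0.25, 1.0]]
print(soc_constraint(x, X, (1, 1)))   # True: 0.0 <= 1.5
print(soc_constraint(x, X, (1, -1)))  # True: 1.0 <= 2.5
```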

SOCP Relaxation

How many constraints are needed for SOCP = SDP? Infinitely many: one for every C ⪰ 0.

We specify constraints similar to the 2-D example.

SOCP Relaxation [Muramatsu and Suzuki, 2001]

The constraints hold for the semidefinite matrices

[1 0 ; 0 0],  [0 0 ; 0 1],  [1 1 ; 1 1],  [1 −1 ; −1 1]

SOCP Relaxation [Muramatsu and Suzuki, 2001]

a [1 0 ; 0 0] + b [0 0 ; 0 1] + c [1 1 ; 1 1] + d [1 −1 ; −1 1],   a, b, c, d ≥ 0

The constraints hold for any such non-negative linear combination.

SOCP Relaxation [Muramatsu and Suzuki, 2001]

[a + c + d   c − d ; c − d   b + c + d],   a, b, c, d ≥ 0

This includes all semidefinite matrices whose diagonal elements dominate the off-diagonal elements.

SOCP Relaxation - A

x* = argmin (1/2) ∑ ui (1 + xi) + (1/4) ∑ Pij (1 + xi + xj + Xij)

 ∑(i ∈ a) xi = 2 − |L|   ∀ Va
 ∑(j ∈ b) Xij = (2 − |L|) xi   ∀ Vb

 xi ∈ [−1, 1]

Replace Xii = 1, X − x xT ⪰ 0 with the SOC constraints

 (xi + xj)² ≤ 2 + 2Xij    (xi − xj)² ≤ 2 − 2Xij

specified only when Pij ≠ 0, i.e. sparse!

Triangular Inequality

• At least two of xi, xj and xk have the same sign

• So at least one of Xij, Xjk, Xik is equal to one

 Xij + Xjk + Xik ≥ −1
 Xij − Xjk − Xik ≥ −1
 −Xij − Xjk + Xik ≥ −1
 −Xij + Xjk − Xik ≥ −1

• SOCP-B = SOCP-A + triangular inequalities
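Since Xij = xi·xj for an integer labelling, the four inequalities can be verified exhaustively, confirming they never cut off a feasible point of the integer program:

```python
from itertools import product

def triangle_ok(Xij, Xjk, Xik):
    """The four triangular inequalities added in SOCP-B."""
    return (Xij + Xjk + Xik >= -1 and
            Xij - Xjk - Xik >= -1 and
            -Xij - Xjk + Xik >= -1 and
            -Xij + Xjk - Xik >= -1)

# Every labelling of (xi, xj, xk) in {-1, 1}^3 satisfies them: at least
# two variables share a sign, so one of Xij, Xjk, Xik equals 1.
for xi, xj, xk in product([-1, 1], repeat=3):
    assert triangle_ok(xi * xj, xj * xk, xi * xk)
print("all 8 labellings satisfy the triangular inequalities")
```

A relaxed point such as Xij = Xjk = Xik = −1 violates the first inequality, which is exactly the kind of point the extra constraints exclude.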

Outline• Integer Programming Formulation

• Previous Work

• Our Approach– Second Order Cone Programming (SOCP)– SOCP Relaxation– Robust Truncated Model

• Applications– Subgraph Matching– Pictorial Structures

Robust Truncated Model

• Pairwise cost of incompatible labels is truncated
 – Potts model, truncated linear model, truncated quadratic model

• Robust to noise

• Widely used in computer vision: segmentation, stereo
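Minimal sketches of the three pairwise-cost families named above; the slopes, weights, and truncation caps are illustrative parameters, not values from the talk:

```python
def potts(mi, mj, penalty=1.0):
    """Potts model: constant penalty for any label disagreement."""
    return 0.0 if mi == mj else penalty

def truncated_linear(mi, mj, slope=1.0, cap=3.0):
    """Cost linear in the label difference, truncated at `cap`."""
    return min(slope * abs(mi - mj), cap)

def truncated_quadratic(mi, mj, weight=1.0, cap=4.0):
    """Cost quadratic in the label difference, truncated at `cap`."""
    return min(weight * (mi - mj) ** 2, cap)

# Truncation caps the penalty for very different (incompatible) labels,
# which is what makes these models robust to noise and outliers.
print(potts(2, 5), truncated_linear(2, 5), truncated_quadratic(2, 5))
# -> 1.0 3.0 4.0
```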

Robust Truncated Model

• The pairwise cost matrix can be made sparse by reparameterization:

 P = [0.5 0.5 0.3 0.3 0.5]  →  Q = [0 0 −0.2 −0.2 0]

• A sparse Q matrix means fewer constraints

Compatibility constraint: Q(ma, mb) < 0 for variables Va and Vb

Relaxation: ∑ Qij (1 + xi + xj + Xij) < 0

SOCP-C = SOCP-B + compatibility constraints

SOCP Relaxation

• More accurate than LP

• More efficient than SDP

• Time complexity O(|V|³ |L|³), the same as LP

• Approximate algorithms exist for the LP relaxation

• We use |V| ≈ 10 and |L| ≈ 200


Subgraph Matching [Torr 2003, Schellewald et al. 2005]

[Figure: graphs G1 and G2; an MRF with variables V1, V2, V3, each with candidate label set {A, B, C, D}]

• Unary costs are uniform

• Pairwise costs form a Potts model

Subgraph Matching

• 1000 pairs of graphs G1 and G2

• #vertices in G2: between 20 and 30

• #vertices in G1: 0.25 × #vertices in G2

• 5% noise added to the positions of the vertices

• An NP-hard problem

Subgraph Matching

Method   Time (sec)   Accuracy (%)
LP       0.85         6.64
LBP      0.2          78.6
GBP      1.5          85.2
SDP-A    35.0         93.11
SOCP-A   3.0          92.01
SOCP-B   4.5          94.79
SOCP-C   4.8          96.18


Pictorial Structures [Felzenszwalb et al. 2001]

[Figure: part likelihoods (outline and texture); an image with parts P1, P2, P3, each parameterised by a pose (x, y, θ, σ), forming an MRF]

• Unary costs are negative log likelihoods

• Pairwise costs form a Potts model

• |V| = 10, |L| = 200

Pictorial Structures

[Figure: ROC curves for 450 positive and 2400 negative images]

Conclusions

• We presented an SOCP relaxation for solving MRFs

• More efficient than SDP

• More accurate than LP, LBP, GBP

• The number of variables can be reduced for the robust truncated model

• Provides excellent results for subgraph matching and pictorial structures

Future Work

• Quality of solution
 – Additive bounds exist
 – Multiplicative bounds for special cases?
 – What are good C matrices?

• Message passing algorithm?
 – Similar to TRW-S or α-expansion
 – To handle image-sized MRFs