
Bundle Adjustment in the Large

Sameer Agarwal (Google), Noah Snavely (Cornell University), Steven M. Seitz (Google & University of Washington), Richard Szeliski (Microsoft Research)

Exact Step Levenberg Marquardt

Until convergence:
    Solve  min_{∆x_k} ‖J(x_k)∆x_k + f(x_k)‖² + µ‖D(x_k)∆x_k‖²
    if ‖f(x_k + ∆x_k)‖² < ‖J(x_k)∆x_k + f(x_k)‖²
        x_{k+1} ← x_k + ∆x_k,  µ ← µ/2
    else
        x_{k+1} ← x_k,  µ ← 2µ
    k ← k + 1

Normal Equations (Hessian Approximation):

    [Jᵀ(x)J(x) + µDᵀ(x)D(x)] ∆x = −Jᵀ(x)f(x),  i.e.,  Hµ∆x = −g
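As an illustration of the loop above, here is a minimal dense NumPy sketch of an exact-step Levenberg Marquardt iteration. The callables f, J and D and the starting damping value are hypothetical stand-ins for the poster's notation; a real bundle adjuster exploits the sparsity discussed next rather than a dense solve.

    import numpy as np

    def lm_exact_step(f, J, D, x, mu=1e-4, max_iters=50, tol=1e-8):
        """Exact-step Levenberg Marquardt loop (dense, illustrative only)."""
        for _ in range(max_iters):
            r, Jx, Dx = f(x), J(x), D(x)
            # Damped normal equations: [J^T J + mu D^T D] dx = -J^T f
            H_mu = Jx.T @ Jx + mu * (Dx.T @ Dx)
            g = Jx.T @ r
            if np.linalg.norm(g) < tol:
                break
            dx = np.linalg.solve(H_mu, -g)
            # Accept the step if the new cost beats the linearized model cost
            if np.sum(f(x + dx) ** 2) < np.sum((Jx @ dx + r) ** 2):
                x, mu = x + dx, mu / 2
            else:
                mu = 2 * mu
        return x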

Calculating the LM Step

The Hessian approximation Hµ has a block structure induced by the cameras and the points:

    [ B   E ] [ ∆y ]   [ v ]
    [ Eᵀ  C ] [ ∆z ] = [ w ]

Eliminating the point block gives the Schur complement system

    [B − EC⁻¹Eᵀ] ∆y = v − EC⁻¹w
    ∆z = C⁻¹(w − Eᵀ∆y)

The Schur complement (reduced camera matrix)

    S = B − EC⁻¹Eᵀ

is
1. Expensive to compute and store.
2. Extremely expensive to factorize.
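For small problems the reduced system can be formed and solved explicitly. Below is a minimal sketch, assuming dense NumPy blocks B, E, C and right-hand sides v, w that follow the notation above (hypothetical names, not the authors' code); in real bundle adjustment C is block-diagonal over the points, so the solves against C are cheap.

    import numpy as np

    def lm_step_via_schur(B, E, C, v, w):
        """Explicitly eliminate the points and back-substitute (illustrative)."""
        C_inv_Et = np.linalg.solve(C, E.T)         # C^{-1} E^T (cheap when C is block-diagonal)
        C_inv_w = np.linalg.solve(C, w)            # C^{-1} w
        S = B - E @ C_inv_Et                       # reduced camera matrix S = B - E C^{-1} E^T
        dy = np.linalg.solve(S, v - E @ C_inv_w)   # camera update ∆y
        dz = np.linalg.solve(C, w - E.T @ dy)      # point update ∆z by back-substitution
        return dy, dz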

[Plots: solver performance at tolerances τ = 0.01, 0.001 and 0.0001 for explicit-direct, explicit-sparse, explicit-jacobi, implicit-jacobi, implicit-ssor and normal-jacobi.]

Experiments

Code and data on our website http://grail.cs.washington.edu/projects/bal

Inexact Step Levenberg Marquardt

Replace the exact solution to

    min_{∆x_k} ‖J(x_k)∆x_k + f(x_k)‖² + µ‖D(x_k)∆x_k‖²

with an approximate solution satisfying

    ‖Hµ(x_k)∆x_k + g(x_k)‖ ≤ η_k ‖g(x_k)‖

where η_k is a forcing sequence that controls the quality of the LM step. Use Preconditioned Conjugate Gradients.
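A minimal sketch of such an inexact step, assuming Hµ is available as a matrix or SciPy LinearOperator and using a fixed forcing term eta_k (both names are illustrative, not the authors' implementation):

    from scipy.sparse.linalg import cg

    def inexact_lm_step(H_mu, g, eta_k=0.1):
        """Approximately solve H_mu dx = -g to the forcing tolerance eta_k."""
        # CG stops once ||H_mu dx + g|| <= eta_k * ||g||, which is exactly the
        # inexact-step condition above ('rtol' is called 'tol' in older SciPy).
        dx, info = cg(H_mu, -g, rtol=eta_k)
        return dx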

Calculating the Inexact Step

But which linear system?

    Hµ∆x = −g

Large linear system. Easy to evaluate matrix-vector products.

    [B − EC⁻¹Eᵀ] ∆y = v − EC⁻¹w

Much smaller linear system. Expensive to compute and store explicitly, but its matrix-vector products can also be evaluated cheaply:

    x₂ = Eᵀ∆y,  x₃ = C⁻¹x₂,  x₄ = Ex₃,  x₅ = B∆y
    [B − EC⁻¹Eᵀ] ∆y = x₅ − x₄

i.e., we can run PCG implicitly on the reduced camera matrix at the same cost as the normal equations!
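A minimal sketch of this implicit evaluation, assuming B, E and C are SciPy sparse matrices (with C block-diagonal so its solves are cheap); the helper names and the generic splu factorization are illustrative stand-ins, not the authors' code.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg, splu

    def implicit_schur_solve(B, E, C, v, w, eta_k=0.1):
        """Run CG on the reduced camera matrix without forming S = B - E C^{-1} E^T."""
        C_solve = splu(C.tocsc()).solve          # stand-in for the cheap block-diagonal solve with C

        def S_matvec(dy):
            x2 = E.T @ dy                        # x2 = E^T ∆y
            x3 = C_solve(x2)                     # x3 = C^{-1} x2
            x4 = E @ x3                          # x4 = E x3
            x5 = B @ dy                          # x5 = B ∆y
            return x5 - x4                       # S ∆y = x5 - x4

        n = B.shape[0]
        S = LinearOperator((n, n), matvec=S_matvec)
        rhs = v - E @ C_solve(w)                 # reduced right-hand side v - E C^{-1} w
        dy, info = cg(S, rhs, rtol=eta_k)        # forcing tolerance as in the inexact step
        dz = C_solve(w - E.T @ dy)               # back-substitute for the point update
        return dy, dz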

Lemma (Saad 2003): CG on the reduced camera matrix with a preconditioner P is the same as CG on the normal equations with the SSOR preconditioner

    M(P) = [ P   E ] [ P⁻¹  0  ] [ P   0 ]
           [ 0   C ] [ 0   C⁻¹ ] [ Eᵀ  C ]
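By this lemma, preconditioning the implicit reduced-camera CG only requires a preconditioner P for S. A simple choice is a block-Jacobi preconditioner assembled from the diagonal camera blocks of B, sketched below; the block size of 9 parameters per camera and the function names are assumptions for illustration, and this may differ in detail from the poster's implicit-jacobi variant.

    from scipy.sparse import block_diag
    from scipy.sparse.linalg import LinearOperator, splu

    def block_jacobi_preconditioner(B, block_size=9):
        """Block-diagonal preconditioner P from the camera blocks of B (illustrative).

        Assumes B is a SciPy sparse matrix whose diagonal consists of
        block_size x block_size camera blocks (9 is just an example)."""
        n = B.shape[0]
        blocks = [B[i:i + block_size, i:i + block_size].toarray()
                  for i in range(0, n, block_size)]
        P = block_diag(blocks, format="csc")
        P_solve = splu(P).solve
        return LinearOperator((n, n), matvec=P_solve)

With this, the CG call in the previous sketch would become, e.g., cg(S, rhs, M=block_jacobi_preconditioner(B), rtol=eta_k).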

Run time

Solution Accuracy

Current bundle adjustment algorithms do not scale beyond 1-2K images.

Our Algorithm
1. 14k images, 4.5M points in less than an hour.
2. Inexact step Levenberg Marquardt algorithm.
3. Predictable and minimal memory usage.
4. No need for high performance BLAS/LAPACK.
5. Easily parallelizable (shared and distributed memory).
6. Simple preconditioners give state-of-the-art performance.

Dataset

1. Ladybug: Images captured from a vehicle driving down a street.
2. Skeletal: Sparse incremental reconstruction of Flickr images.
3. Final: Reconstructions from the final stage of the skeletal sets algorithm on Flickr images.

[Figure: Schur complement sparsity vs. number of images for the Ladybug, Skeletal and Final datasets (annotated Small/Large and Sparse/Dense).]

[Plots: relative decrease in RMS error vs. time (seconds) on Ladybug (1197 images), Trafalgar Skeletal (257 images), Venice Skeletal (1936 images) and Venice Final (13696 images), comparing explicit-direct, explicit-sparse, normal-jacobi, explicit-jacobi, implicit-jacobi and implicit-ssor.]

1. The first few steps don't need to be very accurate; the initial decrease can be quite fast.
2. The quality of the preconditioner determines performance in later iterations.
3. Simpler preconditioners give crude solutions fast and then stall.
4. Small problems: SBA / direct factorization.
5. Large problems: PCG with SSOR preconditioning.