EE263 Autumn 2007-08 Stephen Boyd

Lecture 7
Regularized least-squares and Gauss-Newton method

• multi-objective least-squares

• regularized least-squares

• nonlinear least-squares

• Gauss-Newton method


Multi-objective least-squares

in many problems we have two (or more) objectives

• we want J1 = ‖Ax − y‖2 small

• and also J2 = ‖Fx − g‖2 small

(x ∈ Rn is the variable)

• usually the objectives are competing

• we can make one smaller, at the expense of making the other larger

common example: F = I, g = 0; we want ‖Ax − y‖ small, with small x


Plot of achievable objective pairs

plot (J2, J1) for every x:

[plot: shaded region of achievable (J2, J1) pairs, with three points labeled x(1), x(2), x(3)]

note that x ∈ Rn, but this plot is in R2; the point labeled x(1) is really (J2(x(1)), J1(x(1)))


• shaded area shows (J2, J1) achieved by some x ∈ Rn

• clear area shows (J2, J1) not achieved by any x ∈ Rn

• boundary of region is called optimal trade-off curve

• corresponding x are called Pareto optimal

(for the two objectives ‖Ax − y‖2, ‖Fx − g‖2)

three example choices of x: x(1), x(2), x(3)

• x(3) is worse than x(2) on both counts (J2 and J1)

• x(1) is better than x(2) in J2, but worse in J1


Weighted-sum objective

• to find Pareto optimal points, i.e., x's on the optimal trade-off curve, we minimize the weighted-sum objective

J1 + µJ2 = ‖Ax − y‖2 + µ‖Fx − g‖2

• parameter µ ≥ 0 gives relative weight between J1 and J2

• points where weighted sum is constant, J1 + µJ2 = α, correspond to a line with slope −µ on the (J2, J1) plot


[plot: achievable (J2, J1) region with the line J1 + µJ2 = α touching the trade-off curve at x(2); x(1) and x(3) shown for comparison]

• x(2) minimizes weighted-sum objective for µ shown

• by varying µ from 0 to +∞, we can sweep out the entire optimal trade-off curve


Minimizing weighted-sum objective

can express weighted-sum objective as ordinary least-squares objective:

‖Ax − y‖2 + µ‖Fx − g‖2 = ‖ [ A ; √µ F ] x − [ y ; √µ g ] ‖2 = ‖Ãx − ỹ‖2

where

Ã = [ A ; √µ F ],  ỹ = [ y ; √µ g ]

(i.e., √µ F stacked below A, and √µ g stacked below y)

hence solution is (assuming Ã full rank)

x = (ÃTÃ)−1ÃTỹ = (ATA + µFTF)−1(ATy + µFTg)
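A minimal NumPy sketch of this stacked-matrix computation (the random instance below, with F = I and g = 0, is purely illustrative and not from the lecture):

```python
import numpy as np

def weighted_ls(A, y, F, g, mu):
    """Minimize ||A x - y||^2 + mu ||F x - g||^2 via the stacked LS problem."""
    A_tilde = np.vstack([A, np.sqrt(mu) * F])       # [A; sqrt(mu) F]
    y_tilde = np.concatenate([y, np.sqrt(mu) * g])  # [y; sqrt(mu) g]
    x, *_ = np.linalg.lstsq(A_tilde, y_tilde, rcond=None)
    return x

# illustrative instance
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5)); y = rng.standard_normal(20)
x = weighted_ls(A, y, F=np.eye(5), g=np.zeros(5), mu=0.1)
```

Stacking and calling an ordinary LS solver is numerically preferable to forming ATA + µFTF explicitly, since the normal equations square the condition number.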


Example


• unit mass at rest subject to forces xi for i − 1 < t ≤ i, i = 1, . . . , 10

• y ∈ R is position at t = 10; y = aTx where a ∈ R10

• J1 = (y − 1)2 (final position error squared)

• J2 = ‖x‖2 (sum of squares of forces)

weighted-sum objective: (aTx − 1)2 + µ‖x‖2

optimal x:

x = (aaT + µI)−1a
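A short sketch of this example, sweeping µ to trace the trade-off curve (the entries ai = 10.5 − i are an assumed derivation from integrating the unit-mass dynamics, not stated explicitly on the slide):

```python
import numpy as np

n = 10
# assumed: force x_i applied over (i-1, i] contributes a_i = 10.5 - i
# to the position of the unit mass at t = 10
a = 10.5 - np.arange(1, n + 1)

for mu in [1e-6, 1e-4, 1e-2]:
    x = np.linalg.solve(np.outer(a, a) + mu * np.eye(n), a)
    J1 = (a @ x - 1) ** 2   # final position error squared
    J2 = x @ x              # sum of squares of forces
    print(f"mu = {mu:g}: J1 = {J1:.2e}, J2 = {J2:.2e}")
```

Each µ yields one Pareto optimal point (J2, J1); plotting them traces the curve on the next slide.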


optimal trade-off curve:

[plot: J1 = (y − 1)2 (vertical axis, 0 to 1) versus J2 = ‖x‖2 (horizontal axis, 0 to 3.5 × 10−3)]

• upper left corner of optimal trade-off curve corresponds to x = 0

• bottom right corresponds to input that yields y = 1, i.e., J1 = 0


Regularized least-squares

when F = I, g = 0 the objectives are

J1 = ‖Ax − y‖2, J2 = ‖x‖2

minimizer of weighted-sum objective,

x = (ATA + µI)−1ATy,

is called regularized least-squares (approximate) solution of Ax ≈ y

• also called Tikhonov regularization

• for µ > 0, works for any A (no restrictions on shape, rank . . . )
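A tiny sketch of that last point, using a deliberately rank-deficient A (illustrative values, not from the lecture):

```python
import numpy as np

A = np.outer(np.ones(4), np.ones(3))   # rank 1, so A^T A is singular
y = np.arange(4.0)
mu = 0.1
# A^T A + mu I is positive definite for any A when mu > 0,
# so the regularized solution always exists and is unique
x_reg = np.linalg.solve(A.T @ A + mu * np.eye(3), A.T @ y)
```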


estimation/inversion application:

• Ax − y is sensor residual

• prior information: x small

• or, model only accurate for x small

• regularized solution trades off sensor fit, size of x


Nonlinear least-squares

nonlinear least-squares (NLLS) problem: find x ∈ Rn that minimizes

‖r(x)‖2 = ∑i=1…m ri(x)2,

where r : Rn → Rm

• r(x) is a vector of ‘residuals’

• reduces to (linear) least-squares if r(x) = Ax − y


Position estimation from ranges

estimate position x ∈ R2 from approximate distances to beacons at locations b1, . . . , bm ∈ R2, without linearizing

• we measure ρi = ‖x − bi‖ + vi

(vi is range error, unknown but assumed small)

• NLLS estimate: choose x to minimize

∑i=1…m ri(x)2 = ∑i=1…m (ρi − ‖x − bi‖)2
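A small sketch of this residual vector (the beacon locations, noise, and true position below are illustrative placeholders, loosely following the example later in the lecture):

```python
import numpy as np

def residuals(x, beacons, rho):
    """r_i(x) = rho_i - ||x - b_i|| for each beacon location b_i."""
    return rho - np.linalg.norm(x - beacons, axis=1)

# illustrative setup: 10 random beacons, range errors uniform on [-0.5, 0.5]
rng = np.random.default_rng(1)
beacons = rng.uniform(-5, 5, size=(10, 2))
x_true = np.array([-3.6, 3.2])
rho = np.linalg.norm(x_true - beacons, axis=1) + rng.uniform(-0.5, 0.5, 10)
```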


Gauss-Newton method for NLLS

NLLS: find x ∈ Rn that minimizes ‖r(x)‖2 = ∑i=1…m ri(x)2, where r : Rn → Rm

• in general, very hard to solve exactly

• many good heuristics to compute locally optimal solution

Gauss-Newton method:

given starting guess for x
repeat
    linearize r near current guess
    new guess is linear LS solution, using linearized r
until convergence


Gauss-Newton method (more detail):

• linearize r near current iterate x(k):

r(x) ≈ r(x(k)) + Dr(x(k))(x − x(k))

where Dr is the Jacobian: (Dr)ij = ∂ri/∂xj

• write linearized approximation as

r(x(k)) + Dr(x(k))(x − x(k)) = A(k)x − b(k)

A(k) = Dr(x(k)), b(k) = Dr(x(k))x(k) − r(x(k))

• at kth iteration, we approximate NLLS problem by linear LS problem:

‖r(x)‖2 ≈ ‖A(k)x − b(k)‖2


• next iterate solves this linearized LS problem:

x(k+1) = (A(k)TA(k))−1A(k)Tb(k)

• repeat until convergence (which isn’t guaranteed)
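A minimal Gauss-Newton loop for the range-estimation problem, reusing the hypothetical beacons/rho setup sketched earlier (the analytic Jacobian, (Dr)ij = −(x − bi)j/‖x − bi‖, follows from differentiating ri):

```python
import numpy as np

def gauss_newton(x0, beacons, rho, iters=10):
    """Gauss-Newton for minimizing sum_i (rho_i - ||x - b_i||)^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = x - beacons                    # displacements x - b_i, shape (m, 2)
        dist = np.linalg.norm(d, axis=1)   # ||x - b_i||
        r = rho - dist                     # residuals at current iterate
        Dr = -d / dist[:, None]            # Jacobian of r at current iterate
        # linearized LS: minimize ||r + Dr (x_new - x)||^2 over x_new
        step, *_ = np.linalg.lstsq(Dr, -r, rcond=None)
        x = x + step
    return x

# e.g., starting from the initial guess used in the example below:
# x_hat = gauss_newton([1.2, -1.2], beacons, rho)
```

The step solves the same linearized LS problem as the slide's formula, written in terms of the increment x_new − x rather than x itself.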


Gauss-Newton example

• 10 beacons

• + true position (−3.6, 3.2); ♦ initial guess (1.2, −1.2)

• range estimates accurate to ±0.5

[plot: beacon locations, true position (+), and initial guess (♦) in the region [−5, 5] × [−5, 5]]


NLLS objective ‖r(x)‖2 versus x:

[surface plot: ‖r(x)‖2 over x ∈ [−5, 5] × [−5, 5]]

• for a linear LS problem, objective would be nice quadratic ‘bowl’

• bumps in objective due to strong nonlinearity of r


objective of Gauss-Newton iterates:

[plot: ‖r(x(k))‖2 versus iteration k = 1, …, 10]

• x(k) converges to (in this case, global) minimum of ‖r(x)‖2

• convergence takes only five or so steps


• final estimate is x̂ = (−3.3, 3.3)

• estimation error is ‖x̂ − x‖ = 0.31 (substantially smaller than range accuracy!)


convergence of Gauss-Newton iterates:

[plot: Gauss-Newton iterates x(1), …, x(6) in the region [−5, 5] × [−5, 5], converging to the final estimate]


useful variation on Gauss-Newton: add regularization term

‖A(k)x − b(k)‖2 + µ‖x − x(k)‖2

so that the next iterate is not too far from the previous one (hence, the linearized model is still pretty accurate); a sketch of the resulting update follows below
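A sketch of this damped update, under the same assumed setup as the earlier Gauss-Newton sketch (this is the Levenberg-Marquardt-style modification of the plain Gauss-Newton step):

```python
import numpy as np

def damped_gn_step(Dr, r, x_k, mu):
    """Minimize ||Dr (x - x_k) + r||^2 + mu ||x - x_k||^2 over x."""
    n = Dr.shape[1]
    step = np.linalg.solve(Dr.T @ Dr + mu * np.eye(n), -Dr.T @ r)
    return x_k + step
```

Large µ gives small, cautious steps for which the linearization remains accurate; µ → 0 recovers the plain Gauss-Newton update.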
