
L. Vandenberghe ECE133A (Fall 2019)

11. Constrained least squares

• least norm problem

• least squares with equality constraints

• linear quadratic control

11.1


Least norm problem

minimize   ‖x‖²
subject to Cx = d

• C is a p × n matrix, d is a p-vector

• in most applications p < n and the equation Cx = d is underdetermined

• the goal is to find the solution of the equation Cx = d with the smallest norm

we will assume that C has linearly independent rows

• the equation Cx = d has at least one solution for every d

• C is wide or square (p ≤ n)

• if p < n there are infinitely many solutions

Constrained least squares 11.2


Example

example of page 1.23

[Figure: piecewise-constant force F(t), taking the values x1, . . . , x10 on consecutive unit intervals of [0, 10]]

• unit mass, with zero initial position and velocity

• piecewise-constant force F(t) = xj for t ∈ [j − 1, j), j = 1, . . . , 10

• position and velocity at t = 10 are given by y = Cx where

C = [ 19/2  17/2  15/2  · · ·  1/2
       1     1     1    · · ·   1  ]

Constrained least squares 11.3


Example

forces that move the mass over a unit distance with zero final velocity satisfy

[ 19/2  17/2  15/2  · · ·  1/2 ]       [ 1 ]
[  1     1     1    · · ·   1  ] x  =  [ 0 ]

some interesting solutions:

• solutions with only two nonzero elements:

x = (1,−1,0, . . . ,0), x = (0,1,−1, . . . ,0), . . .

• least norm solution: minimizes

∫₀¹⁰ F(t)² dt = x1² + x2² + · · · + x10²
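The least norm force profile can be computed with the pseudo-inverse; a short NumPy sketch (variable names are ours), where entry j of the first row of C is 10 − j + 1/2:

```python
import numpy as np

# reconstruct the 2x10 matrix C of this example
j = np.arange(1, 11)
C = np.vstack([10 - j + 0.5, np.ones(10)])   # rows: position, velocity coefficients
d = np.array([1.0, 0.0])                     # unit final position, zero final velocity

x_ln = np.linalg.pinv(C) @ d                 # least norm solution
x_two = np.zeros(10)
x_two[:2] = [1.0, -1.0]                      # two-nonzero solution x = (1, -1, 0, ..., 0)
```

Both vectors satisfy Cx = d; comparing norms confirms the least norm force is smaller than the two-impulse solution.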

Constrained least squares 11.4


Example

[Figure: force and position versus time, for the solution x = (1, −1, 0, . . . , 0) and for the least norm solution; the least norm force is much smaller in magnitude]

Constrained least squares 11.5


Least distance solution

as a variation, we can minimize the distance to a given point a ≠ 0:

minimize   ‖x − a‖²
subject to Cx = d

• reduces to least norm problem by a change of variables y = x − a

minimize   ‖y‖²
subject to Cy = d − Ca

• from the least norm solution ŷ, we obtain the solution x̂ = ŷ + a of the first problem
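The change of variables translates directly into code; a minimal NumPy sketch (the helper name and the small example matrix are ours):

```python
import numpy as np

def least_distance(C, d, a):
    # minimize ||x - a||^2 subject to Cx = d, via the change of
    # variables y = x - a: y is the least norm solution of
    # Cy = d - Ca, i.e. y = pinv(C) @ (d - Ca)
    y = np.linalg.pinv(C) @ (d - C @ a)
    return a + y

# small example: a wide matrix with linearly independent rows
C = np.array([[1.0, -1.0, 1.0, 1.0],
              [1.0, 0.0, 0.5, 0.5]])
d = np.array([0.0, 1.0])
a = np.array([1.0, 2.0, 3.0, 4.0])
x = least_distance(C, d, a)
```

With a = 0 this reduces to the plain least norm solution, which makes a convenient sanity check.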

Constrained least squares 11.6


Solution of least norm problem

if C has linearly independent rows (is right-invertible), then

x̂ = Cᵀ(CCᵀ)⁻¹d = C†d

is the unique solution of the least norm problem

minimize   ‖x‖²
subject to Cx = d

• in other words, if Cx = d and x ≠ x̂, then ‖x‖ > ‖x̂‖

• recall from page 4.26 that

Cᵀ(CCᵀ)⁻¹ = C†

is the pseudo-inverse of a right-invertible matrix C
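A quick numerical sanity check of this identity, assuming NumPy (a random wide matrix has linearly independent rows with probability one):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((3, 6))      # wide, right-invertible
M = C.T @ np.linalg.inv(C @ C.T)     # C^T (C C^T)^{-1}
```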

Constrained least squares 11.7


Proof

1. we first verify that x̂ satisfies the equation:

Cx̂ = CCᵀ(CCᵀ)⁻¹d = d

2. next we show that ‖x‖ > ‖x̂‖ if Cx = d and x ≠ x̂:

‖x‖² = ‖x̂ + x − x̂‖²
     = ‖x̂‖² + 2x̂ᵀ(x − x̂) + ‖x − x̂‖²
     = ‖x̂‖² + ‖x − x̂‖²
     ≥ ‖x̂‖²

with equality only if x = x̂

on line 3 we use Cx = Cx̂ = d in

x̂ᵀ(x − x̂) = dᵀ(CCᵀ)⁻¹C(x − x̂) = 0

Constrained least squares 11.8


QR factorization method

use the QR factorization Cᵀ = QR of the matrix Cᵀ:

x̂ = Cᵀ(CCᵀ)⁻¹d
  = QR(RᵀQᵀQR)⁻¹d
  = QR(RᵀR)⁻¹d
  = QR⁻ᵀd

Algorithm

1. compute QR factorization Cᵀ = QR (2p²n flops)

2. solve Rᵀz = d by forward substitution (p² flops)

3. matrix-vector product x̂ = Qz (2pn flops)

complexity: 2p²n flops
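The three steps translate directly into NumPy; a sketch (a dedicated triangular solver would exploit the structure of R, but `np.linalg.solve` keeps it dependency-free), checked against the worked example on page 11.10:

```python
import numpy as np

def least_norm_qr(C, d):
    # least norm solution by the algorithm above
    Q, R = np.linalg.qr(C.T)          # 1. QR factorization C^T = QR
    z = np.linalg.solve(R.T, d)       # 2. forward substitution R^T z = d
    return Q @ z                      # 3. x = Qz

C = np.array([[1.0, -1.0, 1.0, 1.0],
              [1.0, 0.0, 0.5, 0.5]])
d = np.array([0.0, 1.0])
x = least_norm_qr(C, d)               # -> (1, 1, 0, 0)
```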

Constrained least squares 11.9


Example

C = [ 1  −1   1    1
      1   0  1/2  1/2 ],   d = [ 0
                                 1 ]

• QR factorization Cᵀ = QR with

Q = [  1/2   1/√2
      −1/2   1/√2
       1/2    0
       1/2    0  ],   R = [ 2    1
                            0  1/√2 ]

• solve Rᵀz = d:

[ 2    0   ] [ z1 ]     [ 0 ]
[ 1  1/√2  ] [ z2 ]  =  [ 1 ]      ⟹   z1 = 0,  z2 = √2

• evaluate x̂ = Qz = (1, 1, 0, 0)

Constrained least squares 11.10


Outline

• least norm problem

• least squares with equality constraints

• linear quadratic control


Constrained least squares

minimize   ‖Ax − b‖²
subject to Cx = d

• A is an m × n matrix, C is a p × n matrix, b is an m-vector, d is a p-vector

• in most applications p < n, so equations are underdetermined

• the goal is to find the solution of Cx = d with the smallest value of ‖Ax − b‖²

• we make no assumptions about the shape of A

Special cases

• least squares problem is a special case with p = 0 (no constraints)

• least norm problem is a special case with A = I and b = 0

Constrained least squares 11.11


Piecewise-polynomial fitting

• fit two polynomials f (x), g(x) to points (x1, y1), . . . , (xN, yN)

f (xi) ≈ yi for points xi ≤ a, g(xi) ≈ yi for points xi > a

• make values and derivatives continuous at point a: f (a) = g(a), f ′(a) = g′(a)

[Figure: two polynomials of degree 4; f(x) is fitted to the points with xi ≤ a, g(x) to the points with xi > a]

Constrained least squares 11.12


Constrained least squares formulation

• assume points are numbered so that x1, . . . , xM ≤ a and xM+1, . . . , xN > a:

minimize   Σ_{i=1}^M (f(xi) − yi)² + Σ_{i=M+1}^N (g(xi) − yi)²
subject to f(a) = g(a),  f′(a) = g′(a)

• for polynomials f(x) = θ1 + θ2x + · · · + θd x^{d−1} and g(x) = θ_{d+1} + θ_{d+2}x + · · · + θ_{2d} x^{d−1}:

A = [ 1  x1   · · ·  x1^{d−1}    0  0      · · ·  0
      ⋮  ⋮           ⋮           ⋮  ⋮             ⋮
      1  xM   · · ·  xM^{d−1}    0  0      · · ·  0
      0  0    · · ·  0           1  xM+1   · · ·  xM+1^{d−1}
      ⋮  ⋮           ⋮           ⋮  ⋮             ⋮
      0  0    · · ·  0           1  xN     · · ·  xN^{d−1} ],   b = [ y1
                                                                      ⋮
                                                                      yN ]

C = [ 1  a  · · ·  a^{d−1}         −1  −a  · · ·  −a^{d−1}
      0  1  · · ·  (d−1)a^{d−2}     0  −1  · · ·  −(d−1)a^{d−2} ],   d = [ 0
                                                                           0 ]

Constrained least squares 11.13
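Building these matrices is mechanical; a NumPy sketch (the helper name, `deg` for the number of coefficients, and the sample data are ours), solved through the KKT system introduced on page 11.15:

```python
import numpy as np

def pwpoly_matrices(x, y, a, deg, M):
    # build A, b, C, d above for two polynomials with deg coefficients
    # each (degree deg - 1); assumes x[:M] <= a < x[M:]
    A = np.zeros((len(x), 2 * deg))
    A[:M, :deg] = np.vander(x[:M], deg, increasing=True)  # rows (1, xi, ..., xi^{deg-1})
    A[M:, deg:] = np.vander(x[M:], deg, increasing=True)
    k = np.arange(deg)
    val = a ** k                              # (1, a, ..., a^{deg-1})
    der = k * a ** np.maximum(k - 1, 0)       # (0, 1, 2a, ..., (deg-1)a^{deg-2})
    Cc = np.vstack([np.concatenate([val, -val]),
                    np.concatenate([der, -der])])
    return A, np.asarray(y, float), Cc, np.zeros(2)

# fit two quadratics to noiseless samples of sin(x), knot at a = 1.75
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
y = np.sin(x)
A, b, Cc, dd = pwpoly_matrices(x, y, a=1.75, deg=3, M=4)
n, p = A.shape[1], Cc.shape[0]
K = np.block([[A.T @ A, Cc.T], [Cc, np.zeros((p, p))]])
theta = np.linalg.solve(K, np.concatenate([A.T @ b, dd]))[:n]
```

The continuity constraints can then be verified by evaluating both polynomials and their derivatives at the knot.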


Assumptions

minimize   ‖Ax − b‖²
subject to Cx = d

we will make two assumptions:

1. the stacked (m + p) × n matrix

[ A
  C ]

has linearly independent columns (is left-invertible)

2. C has linearly independent rows (is right-invertible)

• note that assumption 1 is a weaker condition than left invertibility of A

• assumptions imply that p ≤ n ≤ m + p

Constrained least squares 11.14


Optimality conditions

x̂ solves the constrained LS problem if and only if there exists a ẑ such that

[ AᵀA  Cᵀ ] [ x̂ ]     [ Aᵀb ]
[ C    0  ] [ ẑ ]  =  [ d   ]

(proof on next page)

• this is a set of n + p linear equations in n + p variables

• we’ll see that the matrix on the left-hand side is nonsingular

• equations are also known as Karush–Kuhn–Tucker (KKT) equations

Special cases

• least squares: when p = 0, reduces to the normal equations AᵀAx̂ = Aᵀb

• least norm: when A = I, b = 0, reduces to Cx̂ = d and x̂ + Cᵀẑ = 0

Constrained least squares 11.15


Proof

suppose x satisfies Cx = d, and (x̂, ẑ) satisfies the equation on page 11.15:

‖Ax − b‖² = ‖A(x − x̂) + Ax̂ − b‖²
          = ‖A(x − x̂)‖² + ‖Ax̂ − b‖² + 2(x − x̂)ᵀAᵀ(Ax̂ − b)
          = ‖A(x − x̂)‖² + ‖Ax̂ − b‖² − 2(x − x̂)ᵀCᵀẑ
          = ‖A(x − x̂)‖² + ‖Ax̂ − b‖²
          ≥ ‖Ax̂ − b‖²

• on line 3 we use AᵀAx̂ + Cᵀẑ = Aᵀb; on line 4, Cx = Cx̂ = d

• the inequality shows that x̂ is optimal

• x̂ is the unique optimum because equality holds only if

A(x − x̂) = 0,  C(x − x̂) = 0   ⟹   x = x̂

by the first assumption on page 11.14

Constrained least squares 11.16


Nonsingularity

if the two assumptions hold, then the matrix

[ AᵀA  Cᵀ ]
[ C    0  ]

is nonsingular

Proof:

[ AᵀA  Cᵀ ] [ x ]
[ C    0  ] [ z ]  =  0   ⟹   xᵀ(AᵀAx + Cᵀz) = 0,  Cx = 0

                          ⟹   ‖Ax‖² = 0,  Cx = 0

                          ⟹   Ax = 0,  Cx = 0

                          ⟹   x = 0 by assumption 1

if x = 0, we have Cᵀz = −AᵀAx = 0; hence also z = 0 by assumption 2

Constrained least squares 11.17


Nonsingularity

if the assumptions do not hold, then the matrix

[ AᵀA  Cᵀ ]
[ C    0  ]

is singular

• if assumption 1 does not hold, there exists x ≠ 0 with Ax = 0, Cx = 0; then

[ AᵀA  Cᵀ ] [ x ]
[ C    0  ] [ 0 ]  =  0

• if assumption 2 does not hold, there exists a z ≠ 0 with Cᵀz = 0; then

[ AᵀA  Cᵀ ] [ 0 ]
[ C    0  ] [ z ]  =  0

in both cases, this shows that the matrix is singular

Constrained least squares 11.18


Solution by LU factorization

[ AᵀA  Cᵀ ] [ x̂ ]     [ Aᵀb ]
[ C    0  ] [ ẑ ]  =  [ d   ]

Algorithm

1. compute H = AᵀA (mn² flops)

2. compute c = Aᵀb (2mn flops)

3. solve the linear equation

[ H  Cᵀ ] [ x̂ ]     [ c ]
[ C  0  ] [ ẑ ]  =  [ d ]

by the LU factorization ((2/3)(p + n)³ flops)

complexity: mn² + (2/3)(p + n)³ flops
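The algorithm is a few lines of NumPy; a sketch (the helper name is ours; `np.linalg.solve` factors the KKT matrix by LU internally), sanity-checked on the least norm special case A = I, b = 0:

```python
import numpy as np

def cls_solve(A, b, C, d):
    # constrained least squares via the KKT system above
    n, p = A.shape[1], C.shape[0]
    H = A.T @ A                                       # 1. Gram matrix
    c = A.T @ b                                       # 2. c = A^T b
    K = np.block([[H, C.T], [C, np.zeros((p, p))]])
    sol = np.linalg.solve(K, np.concatenate([c, d]))  # 3. LU solve
    return sol[:n], sol[n:]                           # x and multiplier z

# least norm special case: should reproduce x = (1, 1, 0, 0) of page 11.10
A = np.eye(4)
b = np.zeros(4)
C = np.array([[1.0, -1.0, 1.0, 1.0],
              [1.0, 0.0, 0.5, 0.5]])
d = np.array([0.0, 1.0])
x, z = cls_solve(A, b, C, d)
```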

Constrained least squares 11.19


Solution by QR factorization

we derive one of several possible methods based on the QR factorization

[ AᵀA  Cᵀ ] [ x̂ ]     [ Aᵀb ]
[ C    0  ] [ ẑ ]  =  [ d   ]

• if we make a change of variables w = ẑ − d, the equation becomes

[ AᵀA + CᵀC  Cᵀ ] [ x̂ ]     [ Aᵀb ]
[ C          0  ] [ w ]  =  [ d   ]

• assumption 1 guarantees that AᵀA + CᵀC is nonsingular (see page 4.22)

• assumption 1 guarantees that the following QR factorization exists:

[ A ]           [ Q1 ]
[ C ]  =  QR =  [ Q2 ] R

Constrained least squares 11.20


Solution by QR factorization

substituting the QR factorization gives the equation

[ RᵀR   RᵀQ2ᵀ ] [ x̂ ]     [ RᵀQ1ᵀb ]
[ Q2R   0     ] [ w ]  =  [ d      ]

• multiply the first equation with R⁻ᵀ and make the change of variables y = Rx̂:

[ I    Q2ᵀ ] [ y ]     [ Q1ᵀb ]
[ Q2   0   ] [ w ]  =  [ d    ]

• next we note that the matrix Q2 = CR⁻¹ has linearly independent rows:

Q2ᵀu = R⁻ᵀCᵀu = 0   ⟹   Cᵀu = 0   ⟹   u = 0

because C has linearly independent rows (assumption 2)

Constrained least squares 11.21


Solution by QR factorization

we use the QR factorization of Q2ᵀ to solve

[ I    Q2ᵀ ] [ y ]     [ Q1ᵀb ]
[ Q2   0   ] [ w ]  =  [ d    ]

• from the 1st block row, y = Q1ᵀb − Q2ᵀw; substitute this in the 2nd row:

Q2Q2ᵀw = Q2Q1ᵀb − d

• we solve this equation for w using the QR factorization Q2ᵀ = QR:

RᵀRw = RᵀQᵀQ1ᵀb − d

which can be simplified to

Rw = QᵀQ1ᵀb − R⁻ᵀd

Constrained least squares 11.22


Summary of QR factorization method

[ AᵀA + CᵀC  Cᵀ ] [ x̂ ]     [ Aᵀb ]
[ C          0  ] [ w ]  =  [ d   ]

Algorithm

1. compute the two QR factorizations

[ A ]     [ Q1 ]
[ C ]  =  [ Q2 ] R,      Q2ᵀ = QR

2. solve Rᵀu = d by forward substitution and compute c = QᵀQ1ᵀb − u

3. solve Rw = c by back substitution and compute y = Q1ᵀb − Q2ᵀw

4. solve Rx̂ = y by back substitution

complexity: 2(p + m)n² + 2np² flops for the QR factorizations
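The four steps above can be sketched in NumPy (the helper name is ours; `np.linalg.solve` stands in for the triangular forward and back substitutions), checked against a direct KKT solve:

```python
import numpy as np

def cls_qr(A, b, C, d):
    # QR factorization method summarized above
    m = A.shape[0]
    Q, R = np.linalg.qr(np.vstack([A, C]))      # 1a. [A; C] = [Q1; Q2] R
    Q1, Q2 = Q[:m], Q[m:]
    Qt, Rt = np.linalg.qr(Q2.T)                 # 1b. second factorization Q2^T = QR
    u = np.linalg.solve(Rt.T, d)                # 2. forward substitution R^T u = d
    c = Qt.T @ (Q1.T @ b) - u
    w = np.linalg.solve(Rt, c)                  # 3. back substitution R w = c
    y = Q1.T @ b - Q2.T @ w
    return np.linalg.solve(R, y)                # 4. back substitution R x = y

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])
C = np.array([[1.0, 1.0, 1.0]])
d = np.array([1.0])
x = cls_qr(A, b, C, d)
```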

Constrained least squares 11.23


Comparison of the two methods

Complexity: roughly the same

• LU factorization: mn² + (2/3)(p + n)³ ≤ mn² + (16/3)n³ flops

• QR factorization: 2(p + m)n² + 2np² ≤ 2mn² + 4n³ flops

upper bounds follow from p ≤ n (assumption 2)

Stability: 2nd method avoids calculation of Gram matrix AT A

Constrained least squares 11.24


Outline

• least norm problem

• least squares with equality constraints

• linear quadratic control


Linear quadratic control

Linear dynamical system

xt+1 = At xt + Btut, yt = Ct xt, t = 1,2, . . .

• n-vector xt is system state at time t

• m-vector ut is system input

• p-vector yt is system output

• xt, ut, yt often represent deviations from a standard operating condition

Objective: choose inputs u1, . . . , uT−1 that minimize Joutput + ρJinput with

Joutput = ‖y1‖² + · · · + ‖yT‖²,   Jinput = ‖u1‖² + · · · + ‖uT−1‖²

State constraints: initial state and (possibly) the final state are specified

x1 = xinit, xT = xdes

Constrained least squares 11.25


Linear quadratic control problem

minimize   ‖C1x1‖² + · · · + ‖CT xT‖² + ρ(‖u1‖² + · · · + ‖uT−1‖²)
subject to xt+1 = At xt + Btut,  t = 1, . . . , T − 1
           x1 = xinit,  xT = xdes

variables: x1, . . . , xT , u1, . . . , uT−1

Constrained least squares formulation

minimize   ‖Ãz − b̃‖²
subject to C̃z = d̃

variables: the (nT + m(T − 1))-vector

z = (x1, . . . , xT,u1, . . . ,uT−1)

Constrained least squares 11.26


Linear quadratic control problem

Objective function: ‖Ãz − b̃‖² with

Ã = [ C1   · · ·  0    0     · · ·  0
      ⋮    ⋱     ⋮    ⋮      ⋱     ⋮
      0    · · ·  CT   0     · · ·  0
      0    · · ·  0    √ρ I  · · ·  0
      ⋮    ⋱     ⋮    ⋮      ⋱     ⋮
      0    · · ·  0    0     · · ·  √ρ I ],   b̃ = 0

Constraints: C̃z = d̃ with

C̃ = [ A1  −I   0   · · ·  0     0    B1  0   · · ·  0
      0    A2  −I  · · ·  0     0    0   B2  · · ·  0
      ⋮    ⋮   ⋮          ⋮     ⋮    ⋮   ⋮   ⋱     ⋮
      0    0   0   · · ·  AT−1  −I   0   0   · · ·  BT−1
      I    0   0   · · ·  0     0    0   0   · · ·  0
      0    0   0   · · ·  0     I    0   0   · · ·  0 ],   d̃ = [ 0
                                                                  ⋮
                                                                  0
                                                                  xinit
                                                                  xdes ]

Constrained least squares 11.27
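The block matrices above can be assembled programmatically; a NumPy sketch for a time-invariant system (all names are ours), solved through the KKT system of page 11.15 and checked against the constraints:

```python
import numpy as np

def lqc_data(Asys, Bsys, Csys, T, rho, x_init, x_des):
    # build the data above for A_t = Asys, B_t = Bsys, C_t = Csys;
    # variable ordering z = (x_1, ..., x_T, u_1, ..., u_{T-1})
    n, m, p = Asys.shape[0], Bsys.shape[1], Csys.shape[0]
    nz = n * T + m * (T - 1)
    At = np.zeros((p * T + m * (T - 1), nz))
    for t in range(T):
        At[p*t:p*(t+1), n*t:n*(t+1)] = Csys              # C_t x_t rows
    At[p*T:, n*T:] = np.sqrt(rho) * np.eye(m * (T - 1))  # sqrt(rho) I rows
    bt = np.zeros(At.shape[0])
    Ct = np.zeros((n * (T + 1), nz))     # n(T-1) dynamics rows + 2n state rows
    dt = np.zeros(n * (T + 1))
    for t in range(T - 1):
        r = n * t
        Ct[r:r+n, n*t:n*(t+1)] = Asys                    # A_t x_t
        Ct[r:r+n, n*(t+1):n*(t+2)] = -np.eye(n)          # -x_{t+1}
        Ct[r:r+n, n*T + m*t:n*T + m*(t+1)] = Bsys        # B_t u_t
    Ct[n*(T-1):n*T, :n] = np.eye(n)                      # x_1 = x_init
    dt[n*(T-1):n*T] = x_init
    Ct[n*T:, n*(T-1):n*T] = np.eye(n)                    # x_T = x_des
    dt[n*T:] = x_des
    return At, bt, Ct, dt

# small instance: double integrator, scalar input and output
Asys = np.array([[1.0, 1.0], [0.0, 1.0]])
Bsys = np.array([[0.0], [1.0]])
Csys = np.array([[1.0, 0.0]])
T, rho = 6, 1.0
At, bt, Ct, dt = lqc_data(Asys, Bsys, Csys, T, rho,
                          x_init=np.array([1.0, 0.0]),
                          x_des=np.zeros(2))
q = Ct.shape[0]
K = np.block([[At.T @ At, Ct.T], [Ct, np.zeros((q, q))]])
z = np.linalg.solve(K, np.concatenate([At.T @ bt, dt]))[:At.shape[1]]
xs, us = z[:2*T].reshape(T, 2), z[2*T:].reshape(T - 1, 1)
```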


Example

• a system with three states, one input, one output

• system is time-invariant (matrices At = A, Bt = B, and Ct = C are constant)

• figure shows the “open-loop” output CA^{t−1}xinit

[Figure: open-loop output versus t, for 0 ≤ t ≤ 100]

• we minimize Joutput + ρJinput with final state constraint xdes = 0 at T = 100

Constrained least squares 11.28


Optimal trade-off curve

[Figure: optimal trade-off curve of Jout versus Jin, with the points ρ = 0.05, ρ = 0.2, and ρ = 1 marked]

Constrained least squares 11.29


Three solutions on the trade-off curve

[Figure: input ut and output yt versus t for ρ = 0.05, ρ = 0.2, and ρ = 1]

Constrained least squares 11.30


Linear state feedback control

Linear state feedback

• linear state feedback control uses the input

ut = K xt, t = 1,2, . . .

• K is the state feedback gain matrix

• widely used, especially when xt should converge to zero and T is not specified

One possible choice for K

• solve the linear quadratic control problem with xdes = 0

• solution ut is a linear function of xinit, hence u1 can be written as u1 = K xinit

• columns of K can be found by computing u1 for xinit = e1, . . . , en

• use this K as state feedback gain matrix
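The steps above can be sketched directly; a self-contained NumPy example that rebuilds the constrained LS formulation of pages 11.26–11.27 with x_des = 0 (all helper names are ours):

```python
import numpy as np

def lqc_u1(Asys, Bsys, Csys, T, rho, x_init):
    # first input u_1 of the linear quadratic control solution with
    # x_des = 0, obtained by solving the KKT system of the
    # constrained LS formulation
    n, m, p = Asys.shape[0], Bsys.shape[1], Csys.shape[0]
    nz = n * T + m * (T - 1)
    At = np.zeros((p * T + m * (T - 1), nz))
    for t in range(T):
        At[p*t:p*(t+1), n*t:n*(t+1)] = Csys
    At[p*T:, n*T:] = np.sqrt(rho) * np.eye(m * (T - 1))
    Ct = np.zeros((n * (T + 1), nz))
    dt = np.zeros(n * (T + 1))
    for t in range(T - 1):
        r = n * t
        Ct[r:r+n, n*t:n*(t+1)] = Asys
        Ct[r:r+n, n*(t+1):n*(t+2)] = -np.eye(n)
        Ct[r:r+n, n*T + m*t:n*T + m*(t+1)] = Bsys
    Ct[n*(T-1):n*T, :n] = np.eye(n)
    dt[n*(T-1):n*T] = x_init
    Ct[n*T:, n*(T-1):n*T] = np.eye(n)        # x_T = x_des = 0
    q = Ct.shape[0]
    KKT = np.block([[At.T @ At, Ct.T], [Ct, np.zeros((q, q))]])
    z = np.linalg.solve(KKT, np.concatenate([np.zeros(nz), dt]))[:nz]
    return z[n*T:n*T + m]                    # u_1

def feedback_gain(Asys, Bsys, Csys, T, rho):
    # column i of the gain matrix K is u_1 computed for x_init = e_i
    n = Asys.shape[0]
    return np.column_stack([lqc_u1(Asys, Bsys, Csys, T, rho, e)
                            for e in np.eye(n)])
```

Since the KKT solution is linear in the right-hand side, u_1 is indeed linear in xinit, and K reproduces lqc_u1 for any initial state.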

Constrained least squares 11.31


Example

[Figure: input ut and output yt versus t, comparing state feedback with the optimal linear quadratic inputs]

• system matrices of previous example

• blue curve uses optimal linear quadratic control for T = 100

• red curve uses simple linear state feedback ut = K xt

Constrained least squares 11.32