221 Homework



EE221A Linear System Theory

Problem Set 1

Professor C. Tomlin

Department of Electrical Engineering and Computer Sciences, UC Berkeley

Fall 2011

Issued 9/1; Due 9/8

Problem 1: Functions. Consider f : R³ → R³, defined as f(x) = Ax, with

A = [ 1 0 1
      0 0 0
      0 1 1 ],  x ∈ R³

Is f a function? Is it injective? Is it surjective? Justify your answers.

Problem 2: Fields.

(a) Use the axioms of the field to show that, in any field, the additive identity and the multiplicative identity are unique.

(b) Is GLn, the set of all n × n nonsingular matrices, a field? Justify your answer.

Problem 3: Vector Spaces.

(a) Show that (Rⁿ, R), the set of all ordered n-tuples of elements from the field of real numbers R, is a vector space.

(b) Show that the set of all polynomials in s of degree k or less with real coefficients is a vector space over the field R. Find a basis. What is the dimension of the vector space?

Problem 4: Subspaces.

Suppose U1, U2, . . . , Um are subspaces of a vector space V. The sum of U1, U2, . . . , Um, denoted U1 + U2 + . . . + Um, is defined to be the set of all possible sums of elements of U1, U2, . . . , Um:

U1 + U2 + . . . + Um = {u1 + u2 + . . . + um : u1 ∈ U1, . . . , um ∈ Um}

(a) Is U1 + U2 + . . . + Um a subspace of V ?

(b) Prove or give a counterexample: if U1, U2, W are subspaces of V such that U1 + W = U2 + W, then U1 = U2.

Problem 5: Subspaces. Consider the space F of all functions f : R+ → R which have a Laplace transform

f̂(s) = ∫_0^∞ f(t) e^{−st} dt

defined for all Re(s) > 0. For some fixed s0 in the right half plane, is {f | f̂(s0) = 0} a subspace of F?

Problem 6: Linear Independence.

Let V be the set of 2-tuples whose entries are complex-valued rational functions. Consider two vectors in V:

v1 = [ 1/(s + 1)
       1/(s + 2) ],   v2 = [ (s + 2)/((s + 1)(s + 3))
                             1/(s + 3) ]

Is the set {v1, v2} linearly independent over the field of rational functions? Is it linearly independent over the field of real numbers?

Problem 7: Bases. Let U be the subspace of R⁵ defined by

U = {[x1, x2, . . . , x5]ᵀ ∈ R⁵ : x1 = 3x2 and x3 = 7x4}


Find a basis for U .

Problem 8: Bases. Prove that if {v1, v2, . . . , vn} is linearly independent in V, then so is the set {v1 − v2, v2 − v3, . . . , vn−1 − vn, vn}.


EE221A Problem Set 1 Solutions - Fall 2011

Note: these solutions are somewhat more terse than what we expect you to turn in, though the important thing is that you communicate the main idea of the solution.

Problem 1. Functions. It is a function; matrix multiplication is well defined. Not injective: it is easy to find a counterexample where f(x1) = f(x2) but x1 ≠ x2. Not surjective: suppose x = (x1, x2, x3)ᵀ. Then f(x) = (x1 + x3, 0, x2 + x3)ᵀ, so the range of f is not the whole codomain.

Problem 2. Fields. a) Suppose 0′ and 0 are both additive identities. Then x + 0′ = x = x + 0; adding the additive inverse of x to both sides gives 0′ = 0. Suppose 1 and 1′ are both multiplicative identities. Consider, for x ≠ 0, x · 1 = x = x · 1′; premultiply by x⁻¹ to see that 1 = 1′.
b) We are not given what the operations + and · are, but we can assume at least that + is componentwise addition. The identity matrix I is nonsingular, so I ∈ GLn. But I + (−I) = 0 is singular, so GLn is not closed under addition and cannot be a field.

Problem 3. Vector Spaces. a) This is the most familiar kind of vector space; all the vector space axioms can be trivially shown.
b) First write a general vector as x(s) = a_k s^k + a_{k−1} s^{k−1} + · · · + a_1 s + a_0. It is easy to show associativity and commutativity (just look at the operations coefficientwise). The additive identity is the zero polynomial (a_0 = a_1 = · · · = a_k = 0) and the additive inverse just has each coefficient negated. The axioms of scalar multiplication are similarly trivial to show, as are the distributive laws. A natural basis is B := {1, s, s², . . . , s^k}. It spans the space (we can write a general x(s) as a linear combination of the basis elements), and the basis elements are linearly independent, since only a_0 = a_1 = · · · = a_k = 0 solves a_k s^k + a_{k−1} s^{k−1} + · · · + a_1 s + a_0 = 0 identically. The dimension of the vector space is thus the cardinality of B, which is k + 1.

Problem 4. Subspaces. a) Yes, it is a subspace. First, U1 + · · · + Um is a subset of V, since its elements are sums of vectors in subspaces (hence also subsets) of V, and since V is a vector space those sums are also in V. Also, a linear combination of two elements of the sum has the form

α(u1¹ + · · · + um¹) + β(u1² + · · · + um²) = w1 + · · · + wm ∈ U1 + · · · + Um,

where wk := α uk¹ + β uk² ∈ Uk, since each Uk is a subspace.
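As a quick numerical sanity check of the Problem 1 answer, here is a minimal pure-Python sketch (the witness vectors x1, x2 below are hypothetical choices, not part of the original solution):

```python
# The matrix A from Problem 1.
A = [[1, 0, 1],
     [0, 0, 0],
     [0, 1, 1]]

def f(x):
    """f(x) = Ax, computed row by row."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Not injective: two distinct inputs with the same image.
x1, x2 = [1, 0, 0], [0, -1, 1]
assert x1 != x2 and f(x1) == f(x2)

# Not surjective: f(x) = (x1 + x3, 0, x2 + x3), so the second output
# component is always 0 and e.g. (0, 1, 0) is never attained.
assert all(f([a, b, c])[1] == 0 for a in (-1, 0, 2) for b in (0, 1) for c in (3, 5))
```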

b) Counterexample: U1 = {0}, U2 = W ≠ U1. Then U1 + W = W = U2 + W.

Problem 5. Subspaces. If we assume that S = {f | f̂(s0) = 0} is a subset of F, then all that must be shown is closure under linear combinations. Let f, g ∈ S and α, β ∈ R. Then

L(αf + βg) = ∫_0^∞ [αf(t) + βg(t)] e^{−st} dt = α ∫_0^∞ f(t) e^{−st} dt + β ∫_0^∞ g(t) e^{−st} dt = αf̂(s) + βĝ(s),

and thus we have closure, since αf̂(s0) + βĝ(s0) = α · 0 + β · 0 = 0. If on the other hand we do not assume S ⊂ F, then one could construct a counterexample: a transfer function with a zero at s0 and a pole somewhere else in the RHP will be in S but not in F. f(t) := e^{s0 t} cos bt is one such counterexample.

Problem 6. Linear Independence. a) Linearly dependent: take α = (s + 3)/(s + 2); then v1 = αv2. b) Linearly independent: let α, β ∈ R. Then αv1 + βv2 = 0 requires α = −β(s + 2)(s + 3)⁻¹ for all s, which forces α = β = 0.

Problem 7. Bases. B := {b1, b2, b3} = {[1, 1/3, 0, 0, 0]ᵀ, [0, 0, 1, 1/7, 0]ᵀ, [0, 0, 0, 0, 1]ᵀ} is a basis. The vectors are linearly independent by inspection, and they span U, since for every u ∈ U we can find a1, a2, a3 such that u = a1 b1 + a2 b2 + a3 b3.

Problem 8. Bases. Form the usual linear combination equalling zero:

α1(v1 − v2) + α2(v2 − v3) + · · · + αn−1(vn−1 − vn) + αn vn = 0
⟺ α1 v1 + (α2 − α1)v2 + · · · + (αn−1 − αn−2)vn−1 + (αn − αn−1)vn = 0

Now, since {v1, . . . , vn} is linearly independent, this requires α1 = 0, then α2 − α1 = α2 = 0, . . . , and finally αn − αn−1 = αn = 0. Thus the new set is also linearly independent.
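The Problem 6 answer can be spot-checked with exact rational arithmetic: evaluate both vectors at a few arbitrarily chosen sample values of s and compare against the rational coefficient α(s) = (s + 3)/(s + 2). A sketch, with Fraction evaluation standing in for the field of rational functions:

```python
from fractions import Fraction as F

# Entries of v1 and v2 as functions of s, evaluated exactly.
v1 = lambda s: (1 / (s + 1), 1 / (s + 2))
v2 = lambda s: ((s + 2) / ((s + 1) * (s + 3)), 1 / (s + 3))
alpha = lambda s: (s + 3) / (s + 2)   # claimed coefficient with v1 = alpha * v2

# Dependent over the rational functions: v1 = alpha * v2 at every sample point.
for s in [F(1), F(2), F(5, 2), F(7)]:
    assert v1(s) == (alpha(s) * v2(s)[0], alpha(s) * v2(s)[1])

# But alpha is not a constant, so no single real scalar works:
# the set is independent over R.
assert alpha(F(1)) != alpha(F(2))
```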


EE221A Linear System Theory

Problem Set 2

Professor C. Tomlin

Department of Electrical Engineering and Computer Sciences, UC Berkeley

Fall 2011

Issued 9/8; Due 9/16

All answers must be justified.

Problem 1: Linearity. Are the following maps A linear?

(a) A(u(t)) = u(−t) for u(t) a scalar function of time

(b) How about y(t) = A(u(t)) = ∫_0^t e^{−σ} u(t − σ) dσ?

(c) How about the map A : as² + bs + c ↦ ∫_0^s (bt + a) dt, from the space of polynomials with real coefficients to itself?

Problem 2: Nullspace of linear maps. Consider a linear map A. Prove that N(A) is a subspace.

Problem 3: Linearity. Given A, B, C, X ∈ Cⁿˣⁿ, determine if the following maps (involving matrix multiplication) from Cⁿˣⁿ to Cⁿˣⁿ are linear.

1. X ↦ AX + XB

2. X ↦ AX + BXC

3. X ↦ AX + XBX

Problem 4: Solutions to linear equations (this was part of Professor El Ghaoui’s prelim question last year). Consider the set S = {x : Ax = b}, where A ∈ Rᵐˣⁿ, b ∈ Rᵐ are given. What is the dimension of S? Does it depend on b?

Problem 5: Rank-Nullity Theorem. Let A be a linear map from U to V with dim U = n and dim V = m. Show that

dim R(A) + dim N(A) = n

Problem 6: Representation of a Linear Map. Let A : (U, F) → (V, F), with dim U = n and dim V = m, be a linear map with rank(A) = k. Show that there exist bases (u_i)_{i=1}^n of U and (v_j)_{j=1}^m of V such that with respect to these bases A is represented by the block diagonal matrix

A = [ I 0
      0 0 ]

What are the dimensions of the different blocks?

Problem 7: Sylvester’s Inequality. In class, we’ve discussed the range of a linear map, defining the rank of the map as the dimension of its range. Since all linear maps between finite dimensional vector spaces can be represented as matrix multiplication, the rank of such a linear map is the same as the rank of its matrix representation.

Given A ∈ Rᵐˣⁿ and B ∈ Rⁿˣᵖ, show that

rank(A) + rank(B) − n ≤ rank(AB) ≤ min[rank(A), rank(B)]


EE221A Problem Set 2 Solutions - Fall 2011

Problem 1. Linearity.
a) Linear: A(u(t) + v(t)) = u(−t) + v(−t) = A(u(t)) + A(v(t))
b) Linear:

A(u(t) + v(t)) = ∫_0^t e^{−σ}(u(t − σ) + v(t − σ)) dσ = ∫_0^t e^{−σ} u(t − σ) dσ + ∫_0^t e^{−σ} v(t − σ) dσ = A(u(t)) + A(v(t))

c) Linear:

A(a1s² + b1s + c1 + a2s² + b2s + c2) = ∫_0^s ((b1 + b2)t + (a1 + a2)) dt
= ∫_0^s (b1t + a1) dt + ∫_0^s (b2t + a2) dt
= A(a1s² + b1s + c1) + A(a2s² + b2s + c2)

Problem 2. Nullspace of linear maps. Assume that A : U → V and that U is a vector space over the field F. N(A) := {x ∈ U : A(x) = θ_V}. So by definition N(A) ⊆ U. Let x, y ∈ N(A) and α, β ∈ F. Then A(αx + βy) = αA(x) + βA(y) = α · θ_V + β · θ_V = θ_V. So N(A) is closed under linear combinations and is a subset of U, therefore it is a subspace of U.

Problem 3. Linearity. Call the map A in each example for clarity.
i) Linear: A(X + Y) = A(X + Y) + (X + Y)B = AX + AY + XB + YB = (AX + XB) + (AY + YB) = A(X) + A(Y)
ii) Linear: A(X + Y) = A(X + Y) + B(X + Y)C = AX + AY + BXC + BYC = (AX + BXC) + (AY + BYC) = A(X) + A(Y)
iii) Nonlinear:

A(X + Y) = A(X + Y) + (X + Y)B(X + Y)
= AX + AY + XBX + XBY + YBX + YBY
= (AX + XBX) + (AY + YBY) + XBY + YBX
= A(X) + A(Y) + XBY + YBX
≠ A(X) + A(Y)

Problem 4. Solutions to linear equations. If b ∉ R(A), then there are no solutions: S = ∅ (dim S = 0, −1, or undefined depending on convention, though 0 is somewhat less preferable since it makes sense to reserve zero for the dimension of a singleton set). If b ∈ R(A), then A(x + z) = b for any x ∈ S, z ∈ N(A), so dim S = dim N(A); this does not depend on b as long as b ∈ R(A).

Lemma. Let A : U → V be linear with dim U = n, let {u_j}_{j=k+1}^n be a basis for N(A), and extend it to a basis {u_j}_{j=1}^n for U (using the theorem of the incomplete basis). Then S = {A(u_j)}_{j=1}^k is a basis for R(A).

Proof. R(A) = {A(u) : u ∈ U} = {A(∑_{j=1}^n a_j u_j) : a_j ∈ F} = {∑_{j=1}^k a_j A(u_j) : a_j ∈ F}, so S spans R(A). Now suppose S were not linearly independent, so a_1 A(u_1) + · · · + a_k A(u_k) = 0 with a_j ≠ 0 for some j. Then by linearity A(a_1 u_1 + · · · + a_k u_k) = 0, so a_1 u_1 + · · · + a_k u_k ∈ N(A). Since {u_j}_{j=1}^n is a basis for U and {u_j}_{j=k+1}^n is a basis for N(A), we must have a_1 u_1 + · · · + a_k u_k = 0, a contradiction. Thus S is linearly independent and spans R(A), so it is a basis for R(A).

Problem 5. Rank-Nullity Theorem. The theorem follows directly from the above lemma.


Problem 6. Representation of a Linear Map. We have from the rank-nullity theorem that dim N(A) = n − k. Let {u_i}_{i=k+1}^n be a basis of N(A). Then A(u_i) = θ_V for all i = k + 1, . . . , n. Since the zero vector has all its coordinates zero in any basis, this implies that the last n − k columns of A are zero. Now it remains to show that we can complete the basis for U and choose a basis for V such that the first k columns are as desired. But the lemma above gives us what we need. The form of the matrix A tells us that we want the i-th basis vector of V to be A(u_i), for i = 1, . . . , k. So let the basis for U be B_U = {u_i}_{i=1}^n (where the last n − k basis vectors are a basis for N(A) and the first k are arbitrarily chosen to complete the basis), and the basis for V be B_V = {v_i}_{i=1}^m, where the first k basis vectors are defined by v_i = A(u_i) and the remaining m − k are arbitrarily chosen (but we know we can find them by the theorem of the incomplete basis). Thus the block sizes are as follows:

A = [ I_{k×k}       0_{k×(n−k)}
      0_{(m−k)×k}   0_{(m−k)×(n−k)} ]

Problem 7. Sylvester’s Inequality. Let U = Rᵖ, V = Rⁿ, W = Rᵐ, so B : U → V and A : V → W. Define A|_{R(B)} : R(B) → W : v ↦ Av, “A restricted in domain to the range of B”. Clearly R(AB) = R(A|_{R(B)}). Rank-nullity gives dim R(A|_{R(B)}) + dim N(A|_{R(B)}) = dim R(B), so dim R(AB) ≤ dim R(B). Now R(A|_{R(B)}) ⊆ R(A) implies dim R(AB) = dim R(A|_{R(B)}) ≤ dim R(A). We now have one of the inequalities: dim R(AB) ≤ min{dim R(A), dim R(B)}. Clearly N(A|_{R(B)}) ⊆ N(A), so dim N(A|_{R(B)}) ≤ dim N(A), and by rank-nullity dim R(A|_{R(B)}) + dim N(A) ≥ dim R(B) = rank(B). Finally, by rank-nullity again, dim N(A) = n − rank(A). So rank(AB) + n − rank(A) ≥ rank(B); rearranging gives the other inequality we are looking for.
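Sylvester’s inequality is easy to test on examples. Below is a sketch on hypothetical matrices; the small exact rank routine (Gaussian elimination over the rationals) is my own helper, not part of the course material:

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over the rationals (exact arithmetic)."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            fac = M[i][c] / M[r][c]
            M[i] = [a - fac * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 0, 1], [0, 0, 0], [0, 1, 1]]   # m x n with n = 3
B = [[1, 2], [0, 1], [1, 0]]            # n x p
rA, rB, rAB = rank(A), rank(B), rank(matmul(A, B))
# rank(A) + rank(B) - n <= rank(AB) <= min(rank(A), rank(B))
assert rA + rB - 3 <= rAB <= min(rA, rB)
```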


EE221A Linear System Theory

Problem Set 3

Professor C. Tomlin

Department of Electrical Engineering and Computer Sciences, UC Berkeley

Fall 2011

Issued 9/22; Due 9/30

Problem 1. Let A : R³ → R³ be a linear map. Consider two bases for R³: E = {e1, e2, e3}, the standard basis for R³, and

B = { (1, 0, 2)ᵀ, (2, 0, 1)ᵀ, (0, 5, 1)ᵀ }

Now suppose that:

A(e1) = (2, −1, 0)ᵀ,  A(e2) = (0, 0, 0)ᵀ,  A(e3) = (0, 4, 2)ᵀ

Write down the matrix representation of A with respect to (a) E and (b) B.

Problem 2: Representation of a Linear Map. Let A be a linear map of the n-dimensional linear space (V, F) onto itself. Assume that for some λ ∈ F and basis (v_i)_{i=1}^n we have

A v_k = λ v_k + v_{k+1},  k = 1, . . . , n − 1,

and

A v_n = λ v_n.

Obtain a representation of A with respect to this basis.

Problem 3: Norms. Show that for x ∈ Rⁿ, (1/√n)‖x‖₁ ≤ ‖x‖₂ ≤ ‖x‖₁.

Problem 4. Prove that the induced matrix norm satisfies ‖A‖_{1,i} = max_{j∈{1,...,n}} ∑_{i=1}^m |a_{ij}|.

Problem 5. Consider an inner product space V, with x, y ∈ V. Show, using properties of the inner product, that

‖x + y‖² + ‖x − y‖² = 2‖x‖² + 2‖y‖²

where ‖ · ‖ is the norm induced by the inner product.

Problem 6. Consider an inner product space (Cⁿ, C), equipped with the standard inner product in Cⁿ, and a map A : Cⁿ → Cⁿ which consists of matrix multiplication by an n × n matrix A. Find the adjoint of A.

Problem 7: Continuity and Linearity. Show that any linear map between finite dimensional vector spaces is continuous.


EE221A Problem Set 3 Solutions - Fall 2011

Problem 1.
a) A w.r.t. the standard basis is, by inspection,

A_E = [ 2  0 0
        −1 0 4
        0  0 2 ].

b) Now consider the diagram from LN3, p. 8. We are dealing with exactly this situation: we have one matrix representation and two bases, but we are using them in both the domain and the codomain, so we have all the ingredients. The matrices P and Q for the similarity transform in this case are

P = [e1 e2 e3]⁻¹ [b1 b2 b3] = [b1 b2 b3],

since the matrix formed from the E basis vectors is just the identity; and

Q = [b1 b2 b3]⁻¹ [e1 e2 e3] = [b1 b2 b3]⁻¹ = P⁻¹.

Let A_B be the matrix representation of A w.r.t. B. From the diagram, we have

A_B = Q A_E P = P⁻¹ A_E P = [ 1 2 0
                              0 0 5
                              2 1 1 ]⁻¹ [ 2  0 0
                                          −1 0 4
                                          0  0 2 ] [ 1 2 0
                                                     0 0 5
                                                     2 1 1 ]
    = (1/15) [ 16 −4 12
               7  32 −6
               21 6  12 ]
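The arithmetic can be verified without inverting anything: A_B = P⁻¹ A_E P is equivalent to P A_B = A_E P. A pure-Python check with exact fractions:

```python
from fractions import Fraction as F

def mm(X, Y):
    """Exact matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

P   = [[1, 2, 0], [0, 0, 5], [2, 1, 1]]      # columns are b1, b2, b3
A_E = [[2, 0, 0], [-1, 0, 4], [0, 0, 2]]
A_B = [[F(x, 15) for x in row] for row in
       [[16, -4, 12], [7, 32, -6], [21, 6, 12]]]

# Similarity transform check: P * A_B == A_E * P.
assert mm(P, A_B) == mm(A_E, P)
```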

Problem 2. Representation of a linear map. This is straightforward from the definition of matrix representation:

A = [ λ 0 · · · 0
      1 λ        ⋮
      0 1 ⋱
      ⋮    ⋱ λ  0
      0 · · · 1 λ ]

i.e., λ on the diagonal and 1 on the first subdiagonal.

Problem 3. Norms.

Proof. First inequality: consider the Cauchy–Schwarz inequality, (∑_{i=1}^n x_i y_i)² ≤ (∑_{i=1}^n x_i²)(∑_{i=1}^n y_i²). Now let y = 1 (the vector of all ones). Applying this to the entries |x_i| gives ‖x‖₁² ≤ n‖x‖₂², which is equivalent to the first inequality.

Second inequality: note that ‖x‖₂ ≤ ‖x‖₁ ⟺ ‖x‖₂² ≤ ‖x‖₁². Consider that

‖x‖₂² = |x₁|² + · · · + |x_n|²,

while

‖x‖₁² = (|x₁| + · · · + |x_n|)²
= |x₁|² + |x₁||x₂| + · · · + |x₁||x_n| + |x₂|² + |x₂||x₁| + · · · + |x_n||x_{n−1}| + |x_n|²
= ‖x‖₂² + (nonnegative cross terms),

showing the second inequality.
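Both inequalities can be sanity-checked numerically on a handful of sample vectors (the samples are hypothetical test data):

```python
import math

def norm1(x): return sum(abs(v) for v in x)
def norm2(x): return math.sqrt(sum(v * v for v in x))

# (1/sqrt(n)) * ||x||_1 <= ||x||_2 <= ||x||_1 on each sample.
samples = [[3.0, -4.0], [1.0, 1.0, 1.0], [0.5, -2.0, 7.0, 0.0], [-1.0] * 9]
for x in samples:
    n = len(x)
    assert norm1(x) / math.sqrt(n) <= norm2(x) + 1e-12   # equality when all |x_i| equal
    assert norm2(x) <= norm1(x) + 1e-12                  # equality for one-hot vectors
```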


Problem 4.

Proof. First note that the problem implies that A ∈ Fᵐˣⁿ. By definition,

‖A‖_{1,i} = sup_{u≠0} ‖Au‖₁ / ‖u‖₁.

Consider ‖Au‖₁ = ‖∑_{j=1}^n A_j u_j‖₁, where A_j and u_j denote the j-th column of A and the j-th component of u, respectively. Then ‖Au‖₁ ≤ ∑_{j=1}^n ‖A_j‖₁ |u_j|. Let A_max be the maximum column 1-norm of A; that is,

A_max = max_{j∈{1,...,n}} ∑_{i=1}^m |a_{ij}|.

Then ‖Au‖₁ ≤ ∑_{j=1}^n A_max |u_j| = A_max ∑_{j=1}^n |u_j| = A_max ‖u‖₁. So we have that

‖Au‖₁ / ‖u‖₁ ≤ A_max.

Now it remains to find a u for which equality holds. Choose u = (0, . . . , 1, . . . , 0)ᵀ, where the 1 is in the k-th component, chosen so that Au pulls out a column of A having the maximum 1-norm. Note that ‖u‖₁ = 1, and we then see that

‖Au‖₁ / ‖u‖₁ = A_max.

Thus in this case the supremum is achieved and we have the desired result.

Problem 5.

Proof. Straightforward; we simply use properties of the inner product at each step:

‖x + y‖² + ‖x − y‖² = ⟨x + y, x + y⟩ + ⟨x − y, x − y⟩
= ⟨x, x⟩ + ⟨x, y⟩ + ⟨y, x⟩ + ⟨y, y⟩ + ⟨x, x⟩ − ⟨x, y⟩ − ⟨y, x⟩ + ⟨y, y⟩
= 2‖x‖² + 2‖y‖²

Problem 6. We will show that the adjoint map is matrix multiplication by the complex conjugate transpose of A. Initially we will write Aᵃ for the matrix representation of the adjoint of A and reserve the notation v∗ for the complex conjugate transpose of v. First, we know that we can represent A (w.r.t. the standard basis of Cⁿ) by a matrix in Cⁿˣⁿ; call this matrix A. Then we can use the defining property of the adjoint to write

⟨Au, v⟩ = ⟨u, Aᵃv⟩, i.e. (Au)∗v = u∗A∗v = u∗Aᵃv.

Now, this must hold for all u, v ∈ Cⁿ. Choose u = e_i, v = e_j (where e_k is the vector that is all zeros except for a 1 in the k-th entry). This gives

(A∗)_{ij} = (Aᵃ)_{ij},

for all i, j ∈ {1, . . . , n}. Thus Aᵃ = A∗; it is no accident that we use the ∗ notation for both the adjoint and the complex conjugate transpose.


Problem 7. Continuity and Linearity.

Proof. Let A : (U, F) → (V, F), with dim U = n and dim V = m, be a linear map. Let x, y ∈ U, x ≠ y, and z = x − y. Since A is a linear map between finite dimensional vector spaces, we can represent it by a matrix A. Now, the induced norm

‖A‖_i := sup_{z∈U, z≠0} ‖Az‖ / ‖z‖  implies  ‖Az‖ ≤ ‖A‖_i ‖z‖.

Given some ε > 0, let δ = ε / ‖A‖_i. So

‖x − y‖ = ‖z‖ < δ ⟹ ‖Az‖ ≤ ‖A‖_i ‖z‖ < ‖A‖_i δ = ε,

and we have continuity. Alternatively, we can also use the induced matrix norm to show Lipschitz continuity:

∀x, y ∈ U, ‖Ax − Ay‖ < K‖x − y‖,

where K > ‖A‖_i, which shows that the map is Lipschitz continuous and thus continuous (Lipschitz continuity implies continuity; note that the reverse implication is not true!).


EE221A Linear System Theory

Problem Set 4

Professor C. Tomlin

Department of Electrical Engineering and Computer Sciences, UC Berkeley

Fall 2011

Issued 9/30; Due 10/7

Problem 1: Existence and uniqueness of solutions to differential equations.

Consider the following two systems of differential equations:

ẋ1 = −x1 + eᵗ cos(x1 − x2)
ẋ2 = −x2 + 15 sin(x1 − x2)

and

ẋ1 = −x1 + x1x2
ẋ2 = −x2

(a) Do they satisfy a global Lipschitz condition?

(b) For the second system, your friend asserts that the solutions are uniquely defined for all possible initial conditions and that they all tend to zero. Do you agree or disagree?

Problem 2: Existence and uniqueness of solutions to linear differential equations.

Let A(t) and B(t) be, respectively, n × n and n × n_i matrices whose elements are real (or complex) valued piecewise continuous functions on R+. Let u(·) be a piecewise continuous function from R+ to R^{n_i}. Show that for any fixed u(·), the differential equation

ẋ(t) = A(t)x(t) + B(t)u(t)   (1)

satisfies the conditions of the Fundamental Theorem.

Problem 3: Local or global Lipschitz condition. Consider the pendulum equation with friction and constant input torque:

ẋ1 = x2
ẋ2 = −(g/l) sin x1 − (k/m) x2 + T/(ml²)   (2)

where x1 is the angle that the pendulum makes with the vertical, x2 is the angular rate of change, m is the mass of the bob, l is the length of the pendulum, k is the friction coefficient, and T is a constant torque. Let Br = {x ∈ R² : ‖x‖ < r}. For this system (represented as ẋ = f(x)), determine whether f is locally Lipschitz in x on Br for sufficiently small r, locally Lipschitz in x on Br for any finite r, or globally Lipschitz in x (i.e., Lipschitz for all x ∈ R²).

Problem 4: Local or global Lipschitz condition. Consider the scalar differential equation ẋ = x² for x ∈ R, with x(t0) = x0 = c, where c is a constant.

(a) Is this system locally or globally Lipschitz?

(b) Solve this scalar differential equation directly (using methods from undergraduate calculus) and discuss the existence of this solution (for all t ∈ R, and for c both non-zero and zero).


Problem 5: Perturbed nonlinear systems.

Suppose that some physical system obeys the differential equation

ẋ = p(x, t), x(t0) = x0, ∀t ≥ t0

where p(·, ·) obeys the conditions of the fundamental theorem. Suppose that as a result of some perturbation the equation becomes

ż = p(z, t) + f(t), z(t0) = x0 + δx0, ∀t ≥ t0

Given that for t ∈ [t0, t0 + T], ‖f(t)‖ ≤ ε1 and ‖δx0‖ ≤ ε0, find a bound on ‖x(t) − z(t)‖ valid on [t0, t0 + T].


EE221A Problem Set 4 Solutions - Fall 2011

Problem 1. Existence and uniqueness of solutions to differential equations.
Call the first system ẋ = f(x, t) and the second ẋ = g(x), with x = [x1, x2]ᵀ.
a) Construct the Jacobians:

D1f(x, t) = [ −1 − eᵗ sin(x1 − x2)   eᵗ sin(x1 − x2)
              15 cos(x1 − x2)        −1 − 15 cos(x1 − x2) ],

Dg(x) = [ −1 + x2   x1
          0         −1 ].

D1f(x, t) is bounded in x for each t (with a piecewise continuous bound in t), and f(x, t) is continuous in x, so f is globally Lipschitz continuous in x. But while g(x) is continuous, Dg(x) is unbounded (consider the (1,1) entry as x2 → ∞, or the (1,2) entry as x1 → ∞), so g is not globally Lipschitz continuous.

b) Agree. Note that ẋ2 does not depend on x1; it satisfies the conditions of the Fundamental Theorem, and one can directly find the (unique by the FT) solution x2(t) = x2(0)e⁻ᵗ → 0 as t → ∞. This solution for x2 can be substituted into the first equation to get

ẋ1 = −x1 + x1 x2(0)e⁻ᵗ = x1 (x2(0)e⁻ᵗ − 1),

which again satisfies the conditions of the Fundamental Theorem, and can be solved to find the unique solution

x1(t) = x1(0) exp((1 − e⁻ᵗ) x2(0) − t),

which also tends to zero as t → ∞, for any x1(0), x2(0).

Problem 2. Existence and uniqueness of solutions to linear differential equations.
The FT requires:
i) a differential equation ẋ = f(x, t);
ii) an initial condition x(t0) = x0;
iii) f(x, t) piecewise continuous (PC) in t;
iv) f(x, t) Lipschitz continuous (LC) in x.
We clearly have i), f(x, t) = A(t)x(t) + B(t)u(t), and any initial condition will do for ii). We are given that A(t), B(t), u(t) are PC in t, so clearly f is also. It remains to be shown that f is LC in x. This is easily shown:

‖f(x, t) − f(y, t)‖ = ‖A(t)(x − y)‖ ≤ ‖A(t)‖_i ‖x − y‖.

Let k(t) := ‖A(t)‖_i. Since A(t) is PC and norms are continuous, k(t) is PC. Thus f is LC in x, so all the conditions of the FT are satisfied.

Problem 3. Local or global Lipschitz condition.
Construct the Jacobian:

Df = [ 0                1
       −(g/l) cos x1    −k/m ].

This is bounded for all x, so the system is globally Lipschitz continuous in x.

Problem 4. Local or global Lipschitz condition.
a) It is only locally Lipschitz, since the derivative 2x is unbounded for x ∈ R.
b) The equation is solved by x(t) = c/(1 − c(t − t0)) for c ≠ 0. (For c = 0, the solution is simply x(t) ≡ 0, defined on all of R.) We can see that x(t0) = c (the initial condition is satisfied) and ẋ(t) = c²/(1 − c(t − t0))² = (x(t))² (the differential equation is satisfied). However, for c ≠ 0 this solution is not defined on all of R; consider the solution value as t → t0 + 1/c (finite escape time).

[Figure: sketch of the solution x(t) = c/(1 − c(t − t0)), which blows up at the vertical asymptote t = t0 + 1/c.]

Problem 5. Perturbed nonlinear systems.
Let φ be the solution of ẋ = p(x, t), x(t0) = x0, and ψ the solution of ż = p(z, t) + f(t), z(t0) = x0 + δx0. Then we have

φ(t) = x0 + ∫_{t0}^t p(φ(σ), σ) dσ,
ψ(t) = x0 + δx0 + ∫_{t0}^t [p(ψ(σ), σ) + f(σ)] dσ,

so

‖φ(t) − ψ(t)‖ = ‖δx0 + ∫_{t0}^t [p(φ(σ), σ) − p(ψ(σ), σ) − f(σ)] dσ‖
≤ ‖δx0‖ + ∫_{t0}^t (ε1 + ‖p(φ(σ), σ) − p(ψ(σ), σ)‖) dσ
≤ ε0 + ∫_{t0}^t (ε1 + K(σ)‖φ(σ) − ψ(σ)‖) dσ
= ε0 + ε1(t − t0) + ∫_{t0}^t K(σ)‖φ(σ) − ψ(σ)‖ dσ,

where K(·) is the Lipschitz function of p from the fundamental theorem. Now identify u(t) := ‖φ(t) − ψ(t)‖, k(t) := K(t), c1 := ε0 + ε1(t − t0), and apply the Bellman–Gronwall lemma to get

‖φ(t) − ψ(t)‖ ≤ (ε0 + ε1(t − t0)) exp(∫_{t0}^t K(σ) dσ).

Now take K̄ := sup_{σ∈[t0, t0+T]} K(σ); then

‖φ(t) − ψ(t)‖ ≤ (ε0 + ε1(t − t0)) exp(∫_{t0}^t K̄ dσ) = (ε0 + ε1(t − t0)) exp(K̄(t − t0)),

valid on [t0, t0 + T].


EE221A Linear System Theory

Problem Set 5

Professor C. Tomlin

Department of Electrical Engineering and Computer Sciences, UC Berkeley

Fall 2011

Issued 10/18; Due 10/27

Problem 1: Dynamical systems, time invariance.

Suppose that the output of a system is represented by

y(t) = ∫_{−∞}^t e^{−(t−τ)} u(τ) dτ

Show that it is (i) a dynamical system, and that it is (ii) time invariant. You may select the input space U to be the set of bounded, piecewise continuous, real-valued functions defined on (−∞, ∞).

Problem 2: Jacobian Linearization I. Consider the now familiar pendulum equation with friction and constant input torque:

ẋ1 = x2
ẋ2 = −(g/l) sin x1 − (k/m) x2 + T/(ml²)   (1)

where x1 is the angle that the pendulum makes with the vertical, x2 is the angular rate of change, m is the mass of the bob, l is the length of the pendulum, k is the friction coefficient, and T is a constant torque. Considering T as the input to this system, derive the Jacobian linearized system which represents an approximate model for small angular motion about the vertical.

Problem 3: Satellite Problem, linearization, state space model.

Model the earth and a satellite as particles. The normalized equations of motion, in an earth-fixed inertial frame, simplified to 2 dimensions (from Lagrange’s equations of motion, with Lagrangian L = T − V = (1/2)ṙ² + (1/2)r²θ̇² + k/r):

r̈ = rθ̇² − k/r² + u1
θ̈ = −2θ̇ṙ/r + (1/r)u2

with u1, u2 representing the radial and tangential forces due to thrusters. The reference orbit with u1 = u2 = 0 is circular, with r(t) ≡ p and θ(t) = ωt. From the first equation it follows that p³ω² = k. Obtain the linearized equations about this orbit. (How many state variables are there?)

Problem 4: Solution of a matrix differential equation.

Let A1(·), A2(·), and F(·) be known piecewise continuous n × n matrices. Let Φi be the transition matrix of ẋ = Ai(t)x, for i = 1, 2. Show that the solution of the matrix differential equation

Ẋ(t) = A1(t)X(t) + X(t)A2′(t) + F(t),  X(t0) = X0,

is

X(t) = Φ1(t, t0) X0 Φ2′(t, t0) + ∫_{t0}^t Φ1(t, τ) F(τ) Φ2′(t, τ) dτ

Problem 5: State Transition Matrix, calculations.


Calculate the state transition matrix for ẋ(t) = A(t)x(t), with the following A(t):

(a) A(t) = [ −1 0
             2 −3 ];   (b) A(t) = [ −2t 0
                                    1   −1 ];   (c) A(t) = [ 0     ω(t)
                                                             −ω(t) 0 ]

Hint: for part (c) above, let Ω(t) = ∫_0^t ω(t′) dt′, and consider the matrix

[ cos Ω(t)    sin Ω(t)
  −sin Ω(t)   cos Ω(t) ]

Problem 6: State transition matrix is invertible.

Consider the matrix differential equation Ẋ(t) = A(t)X(t). Show that if there exists a t0 such that det X(t0) ≠ 0, then det X(t) ≠ 0 for all t ≥ t0.

HINT: One way to do this is by contradiction. Assume that there exists some t* for which det X(t*) = 0, find a non-zero vector k in N(X(t*)), and consider the solution x(t) := X(t)k to the vector differential equation ẋ(t) = A(t)x(t).


EE221A Problem Set 5 Solutions - Fall 2011

Problem 1. Dynamical systems, time invariance.
i) To show that this is a dynamical system we have to identify all the ingredients. First we need a differential equation of the form ẋ = f(x, u, t): let x(t) = y(t) (so h(x, u, t) = f(x, u, t)) and differentiate (using Leibniz’s rule) the given integral equation to get

(d/dt) x(t) = −x(t) + u(t).

This is a linear time-invariant dynamical system by inspection (it is of the form ẋ(t) = Ax(t) + Bu(t)), but we can show the axioms. First let us call the system D = (U, Σ, Y, s, r). The time domain is T = R. The input space U is as specified in the problem; the state space Σ and output space Y are identical and equal to R. The state transition function is

s(t, t0, x0, u) = x(t) = e^{−(t−t0)} x0 + ∫_{t0}^t e^{−(t−τ)} u(τ) dτ

and the readout function is

r(t, x(t), u(t)) = y(t) = x(t).

Now to show the axioms. The state transition axiom is easy to prove, since u(·) enters the state transition function only through the integral, where it is evaluated only on [t0, t] (the limits of the integral). For the semigroup axiom, let s(t1, t0, x0, u) = x(t1) be as defined above. Then

s(t2, t1, s(t1, t0, x0, u), u) = e^{−(t2−t1)} ( e^{−(t1−t0)} x0 + ∫_{t0}^{t1} e^{−(t1−τ)} u(τ) dτ ) + ∫_{t1}^{t2} e^{−(t2−τ)} u(τ) dτ
= e^{−(t2−t0)} x0 + ∫_{t0}^{t1} e^{−(t2−τ)} u(τ) dτ + ∫_{t1}^{t2} e^{−(t2−τ)} u(τ) dτ
= e^{−(t2−t0)} x0 + ∫_{t0}^{t2} e^{−(t2−τ)} u(τ) dτ
= s(t2, t0, x0, u),

for all t0 ≤ t1 ≤ t2, as required.

ii) To show that this dynamical system is time invariant, we need the space of inputs to be closed under the time shift operator T_τ; it is (clearly, if u(t) ∈ U then u(t − τ) ∈ U). Then we need to check that

ρ(t1, t0, x0, u) = e^{−(t1−t0)} x0 + ∫_{t0}^{t1} e^{−(t1−σ)} u(σ) dσ
= e^{−(t1+τ−(t0+τ))} x0 + ∫_{t0+τ}^{t1+τ} e^{−(t1+τ−σ)} u(σ − τ) dσ
= ρ(t1 + τ, t0 + τ, x0, T_τ u).
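The semigroup identity can also be checked numerically for a sample input, approximating the integral with the trapezoidal rule (the input u = cos and the time instants are arbitrary choices):

```python
import math

def s(t, t0, x0, u, n=4000):
    """x(t) = e^{-(t-t0)} x0 + integral_{t0}^{t} e^{-(t-tau)} u(tau) dtau,
    with the integral approximated by the trapezoidal rule."""
    h = (t - t0) / n
    integ = 0.0
    for k in range(n + 1):
        tau = t0 + k * h
        w = 0.5 if k in (0, n) else 1.0
        integ += w * math.exp(-(t - tau)) * u(tau)
    return math.exp(-(t - t0)) * x0 + integ * h

u = math.cos                       # a sample bounded, continuous input
t0, t1, t2, x0 = 0.0, 0.7, 1.5, 2.0
direct  = s(t2, t0, x0, u)                       # s(t2, t0, x0, u)
chained = s(t2, t1, s(t1, t0, x0, u), u)         # s(t2, t1, s(t1, t0, x0, u), u)
assert abs(direct - chained) < 1e-6              # semigroup axiom (up to quadrature error)
```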

Problem 2. Jacobian Linearization I.
Let x := [x1, x2]ᵀ. We are given

(d/dt) x = f(x, u) = [ x2
                       −(g/l) sin x1 − (k/m) x2 ] + [ 0
                                                      1/(ml²) ] u.

Note that at the desired equilibrium, the equation for ẋ2 implies that the nominal torque input is zero, so u0 = 0. The Jacobian (w.r.t. x) evaluated at x0 = [0, 0]ᵀ is

D1f(x, u)|_{x0,u0} = [ 0               1
                       −(g/l) cos x1   −k/m ]|_{x=x0, u=u0} = [ 0     1
                                                                −g/l  −k/m ].

We can see by inspection that

D2f(x, u) = D2f(x, u)|_{x=x0, u=u0} = [ 0
                                        1/(ml²) ].

So the linearized system is

δẋ(t) = [ 0     1
          −g/l  −k/m ] δx + [ 0
                              1/(ml²) ] δu
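The Jacobians can be cross-checked against centered finite differences of the pendulum vector field (the parameter values below are hypothetical):

```python
import math

# Hypothetical pendulum parameters: g = 9.8, l = 1.0, k = 0.5, m = 2.0.
g, l, k, m = 9.8, 1.0, 0.5, 2.0

def f(x1, x2, T):
    """Pendulum vector field (dx1/dt, dx2/dt) with torque input T."""
    return (x2, -(g / l) * math.sin(x1) - (k / m) * x2 + T / (m * l * l))

# Analytic Jacobians at the equilibrium x = (0, 0), u0 = 0:
A = [[0.0, 1.0], [-g / l, -k / m]]
B = [0.0, 1.0 / (m * l * l)]

h = 1e-6
fd = lambda hi, lo: [(a - b) / (2 * h) for a, b in zip(hi, lo)]
col1 = fd(f(h, 0, 0), f(-h, 0, 0))   # partial derivative w.r.t. x1 at the origin
col2 = fd(f(0, h, 0), f(0, -h, 0))   # partial derivative w.r.t. x2
colu = fd(f(0, 0, h), f(0, 0, -h))   # partial derivative w.r.t. T
for got, want in zip(col1 + col2 + colu,
                     [A[0][0], A[1][0], A[0][1], A[1][1], B[0], B[1]]):
    assert abs(got - want) < 1e-6
```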

(Note: if you assumed based on the wording of the question that the torque was held constant for the linearized system, i.e. δu ≡ 0, then this will also be accepted.)

Problem 3. Satellite Problem, linearization, state space model.

Write the system in first-order form: x1 = r, x2 = ṙ, x3 = θ, x4 = θ̇. In these variables the equations of motion are

(d/dt) [x1, x2, x3, x4]ᵀ = f(x, u) = [ x2
                                       x1x4² − k/x1² + u1
                                       x4
                                       −2x2x4/x1 + (1/x1)u2 ].

(So there are four state variables.) The reference orbit has x1 = p, x2 = 0, x3 = ωt, x4 = ω, with u1 = u2 = 0; i.e., x0 = [p, 0, ωt, ω]ᵀ, u0 = [0, 0]ᵀ. Let u = u0 + δu, which produces the trajectory x = x0 + δx, and take δx(t0) = 0. So

ẋ = ẋ0 + δẋ = f(x0 + δx, u0 + δu).

We can write this in a Taylor series approximation:

ẋ0 + δẋ = f(x0 + δx, u0 + δu) = f(x0, u0) + D1f(x, u)|_{x0,u0} · δx + D2f(x, u)|_{x0,u0} · δu + h.o.t.
δẋ = D1f(x, u)|_{x0,u0} · δx + D2f(x, u)|_{x0,u0} · δu

D1f(x, u)|_{x0,u0} = [ 0                         1         0  0
                       x4² + 2k x1⁻³              0         0  2x1x4
                       0                         0         0  1
                       2x2x4 x1⁻² − x1⁻² u2       −2x4/x1   0  −2x2/x1 ]|_{x0,u0}
                   = [ 0    1      0  0
                       3ω²  0      0  2pω
                       0    0      0  1
                       0    −2ω/p  0  0 ]

D2f(x, u)|_{x0,u0} = [ 0  0
                       1  0
                       0  0
                       0  1/x1 ]|_{x0,u0} = [ 0  0
                                              1  0
                                              0  0
                                              0  1/p ]

Problem 4. Solution of a matrix differential equation.

Proof. First check that the initial condition is satisfied:

X(t0) = Φ1(t0, t0) X0 Φ2′(t0, t0) + ∫_{t0}^{t0} Φ1(t0, τ) F(τ) Φ2′(t0, τ) dτ = X0,

since Φ1(t0, t0) = Φ2′(t0, t0) = I and the integral from t0 to t0 vanishes.

Now check that the differential equation is satisfied (taking appropriate care of differentiation under the integral sign):

(d/dt) X(t) = A1(t) Φ1(t, t0) X0 Φ2′(t, t0) + Φ1(t, t0) X0 Φ2′(t, t0) A2′(t) + (d/dt) ∫_{t0}^{t} Φ1(t, τ) F(τ) Φ2′(t, τ) dτ

= A1(t) Φ1(t, t0) X0 Φ2′(t, t0) + Φ1(t, t0) X0 Φ2′(t, t0) A2′(t) + Φ1(t, t) F(t) Φ2′(t, t) + ∫_{t0}^{t} (d/dt)( Φ1(t, τ) F(τ) Φ2′(t, τ) ) dτ

= A1(t) Φ1(t, t0) X0 Φ2′(t, t0) + Φ1(t, t0) X0 Φ2′(t, t0) A2′(t) + F(t) + ∫_{t0}^{t} ( A1(t) Φ1(t, τ) F(τ) Φ2′(t, τ) + Φ1(t, τ) F(τ) Φ2′(t, τ) A2′(t) ) dτ

= A1(t) ( Φ1(t, t0) X0 Φ2′(t, t0) + ∫_{t0}^{t} Φ1(t, τ) F(τ) Φ2′(t, τ) dτ ) + ( Φ1(t, t0) X0 Φ2′(t, t0) + ∫_{t0}^{t} Φ1(t, τ) F(τ) Φ2′(t, τ) dτ ) A2′(t) + F(t)

= A1(t) X(t) + X(t) A2′(t) + F(t)

Problem 5. State Transition Matrix, calculations.

(a)

Φ(t, 0) = e^{At} = L⁻¹[ (sI − A)⁻¹ ]
= L⁻¹ [ s+1 , 0 ; −2 , s+3 ]⁻¹
= L⁻¹ ( (1/((s+1)(s+3))) [ s+3 , 0 ; 2 , s+1 ] )
= [ e^{−t} , 0 ; e^{−t} − e^{−3t} , e^{−3t} ]

Thus,

Φ(t, t0) = Φ(t − t0, 0) = [ e^{−(t−t0)} , 0 ; e^{−(t−t0)} − e^{−3(t−t0)} , e^{−3(t−t0)} ]

(b) Here our approach will be to directly solve the system of equations. Let x(t) = [x1(t), x2(t)]^T. Then we have ẋ1(t) = −2t x1(t). Recall from undergrad (or if not, from section 8) that the solution to the linear homogeneous equation ẋ(t) = a(t)x(t) with initial condition x(t0) is x(t) = e^{∫_{t0}^{t} a(s) ds} x(t0). In this case that gives

x1(t) = x1(t0) e^{∫_{t0}^{t} −2s ds} = x1(t0) exp( −s² |_{t0}^{t} ) = x1(t0) exp( −t² + t0² ) = e^{−(t²−t0²)} x1(t0).

We also have ẋ2(t) = x1(t) − x2(t) = x1(t0) e^{−(t²−t0²)} − x2(t). This can be considered a linear time-invariant system (d/dt) x2(t) = −x2(t) + u(t), with state x2 and input u(t) = x1(t0) e^{−(t²−t0²)}, with solution x2(t) = e^{−(t−t0)} x2(t0) + x1(t0) ∫_{t0}^{t} e^{−(t−τ)} e^{−(τ²−t0²)} dτ. We can now write down the s.t.m.,

Φ(t, t0) = [ e^{−(t²−t0²)} , 0 ; ∫_{t0}^{t} e^{−(t−τ)} e^{−(τ²−t0²)} dτ , e^{−(t−t0)} ]

(c) Let Ω(t, t0) = ∫_{t0}^{t} ω(τ) dτ. Guess that

Φ(t, t0) = [ cos Ω(t, t0) , sin Ω(t, t0) ; −sin Ω(t, t0) , cos Ω(t, t0) ].

This is the s.t.m. if it satisfies the matrix d.e. Ẋ(t) = A(t)X(t) with X(t0) = I. Note that Ω(t0, t0) = 0, so X(t0) = Φ(t0, t0) = I. First notice (d/dt) Ω(t, t0) = (d/dt) ∫_{t0}^{t} ω(τ) dτ = ω(t). Now look at the derivative,

(d/dt) Φ(t, t0) = [ −sin Ω(t, t0) ω(t) , cos Ω(t, t0) ω(t) ; −cos Ω(t, t0) ω(t) , −sin Ω(t, t0) ω(t) ]
= ω(t) [ −sin Ω(t, t0) , cos Ω(t, t0) ; −cos Ω(t, t0) , −sin Ω(t, t0) ]
= [ 0 , ω(t) ; −ω(t) , 0 ] [ cos Ω(t, t0) , sin Ω(t, t0) ; −sin Ω(t, t0) , cos Ω(t, t0) ]
= A(t) Φ(t, t0)
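The guess can also be checked numerically. Below is a small sketch (an addition to the solution; the particular ω(t) is an arbitrary choice) that integrates the matrix d.e. and compares the result against the rotation-matrix form:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# sample time-varying rate; any continuous omega(t) works (assumption for illustration)
w = lambda t: 1.0 + 0.5 * np.sin(t)

def rhs(t, x):
    # matrix d.e. dX/dt = A(t) X, with X flattened into a vector
    A = np.array([[0.0, w(t)], [-w(t), 0.0]])
    return (A @ x.reshape(2, 2)).ravel()

t0, t1 = 0.0, 2.0
sol = solve_ivp(rhs, (t0, t1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
X_num = sol.y[:, -1].reshape(2, 2)

Om, _ = quad(w, t0, t1)  # Omega(t1, t0)
Phi = np.array([[np.cos(Om), np.sin(Om)], [-np.sin(Om), np.cos(Om)]])
assert np.allclose(X_num, Phi, atol=1e-6)
```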

Problem 6. State transition matrix is invertible.

Proof. By contradiction: suppose that there exists t* such that X(t*) is singular; this means that there exists k ≠ θ with X(t*)k = θ. Now let x(t) := X(t)k. Then we have ẋ(t) = Ẋ(t)k = A(t)X(t)k = A(t)x(t), and x(t*) = X(t*)k = θ. This has the unique solution x(t) ≡ θ, for all t. But in particular this implies that x(t0) = X(t0)k = θ, which implies that X(t0) is singular, i.e. det X(t0) = 0, giving our contradiction.


EE221A Linear System Theory

Problem Set 6

Professor C. Tomlin

Department of Electrical Engineering and Computer Sciences, UC Berkeley

Fall 2011

Issued 10/27; Due 11/4

Problem 1: Linear systems. Using the definitions of linearity and time-invariance discussed in class, show that:

(a) ẋ = A(t)x + B(t)u, y = C(t)x + D(t)u, x(t0) = x0 is linear;

(b) ẋ = Ax + Bu, y = Cx + Du, x(0) = x0 is time invariant (it's clearly linear, from the above).

Here, the matrices in the above are as defined in class for multiple input multiple output systems.

Problem 2: A linear time-invariant system.

Consider a single-input, single-output, time invariant linear state equation

ẋ(t) = Ax(t) + bu(t), x(0) = x0 (1)

y(t) = cx(t) (2)

If the nominal input is a non-zero constant, u(t) = u, under what conditions does there exist a constant nominal solution x(t) = x0, for some x0?

Under what conditions is the corresponding nominal output zero?

Under what conditions do there exist constant nominal solutions that satisfy y = u for all u?

Problem 3: Sampled Data System

You are given a linear, time-invariant system

ẋ = Ax + Bu (3)

which is sampled every T seconds. Denote x(kT ) by x(k). Further, the input u is held constant between kT

and (k+1)T, that is, u(t) = u(k) for t ∈ [kT, (k+1)T]. Derive the state equation for the sampled data system, that is, give a formula for x(k + 1) in terms of x(k) and u(k).

Problem 4: Discrete time linear system solution.

Consider the discrete time linear system:

x(k + 1) = Ax(k) + Bu(k) (4)

y(k) = Cx(k) + Du(k) (5)

Here, k ∈ ℕ, A ∈ R^{n×n}, B ∈ R^{n×ni}, C ∈ R^{no×n}, D ∈ R^{no×ni}. Use induction to obtain formulae for y(k), x(k) in terms of x(k0) and the input sequence (u_{k0}, . . . , u_k).

Problem 5: Linear Quadratic Regulator. Consider the system described by the equations ẋ = Ax + Bu, y = Cx, where

A = [ 0 , 1 ; 0 , 0 ], B = [ 0 ; 1 ], C = [ 1 0 ]


(a) Determine the optimal control u*(t) = F*x(t), t ≥ 0 which minimizes the performance index J = ∫_0^∞ (y²(t) + ρu²(t)) dt, where ρ is positive and real.

(b) Observe how the eigenvalues of the dynamic matrix of the resulting closed loop system change as a function of ρ. Can you comment on the results?

Problem 6. Preservation of Eigenvalues under Similarity Transform.

Consider a matrix A ∈ R^{n×n}, and a non-singular matrix P ∈ R^{n×n}. Show that the eigenvalues of Ā = PAP⁻¹ are the same as those of A.

Remark: This important fact in linear algebra is the basis for the similarity transform: a redefinition of the state (to a new set of state variables in which the equations above may have a simpler representation) does not affect the eigenvalues of the A matrix, and thus the stability of the system. We will use this similarity transform in our analysis of linear systems.

Problem 7. Using the dyadic expansion discussed in class (Lecture Notes 12), determine e^{At} for square, diagonalizable A (and show your work).


EE221A Problem Set 6 Solutions - Fall 2011

Problem 1. Linear systems.

a) Call this dynamical system L = (U, Σ, Y, s, ρ), where U = R^{ni}, Σ = R^n, Y = R^{no}. So clearly U, Σ, Y are all linear spaces over the same field (R). We also have the response map

ρ(t, t0, x0, u) = y(t) = C(t)x(t) + D(t)u(t)

and the state transition function

s(t, t0, x0, u) = x(t) = Φ(t, t0) x0 + ∫_{t0}^{t} Φ(t, τ) B(τ) u(τ) dτ

We need to check the linearity of the response map; we have that, ∀t ≥ t0, t ∈ R+:

ρ(t, t0, α1x1 + α2x2, α1u1 + α2u2) = C(t) ( Φ(t, t0)(α1x1 + α2x2) + ∫_{t0}^{t} Φ(t, τ)B(τ)(α1u1(τ) + α2u2(τ)) dτ ) + D(t)(α1u1(t) + α2u2(t))

= α1 ( C(t)Φ(t, t0)x1 + C(t) ∫_{t0}^{t} Φ(t, τ)B(τ)u1(τ) dτ + D(t)u1(t) ) + α2 ( C(t)Φ(t, t0)x2 + C(t) ∫_{t0}^{t} Φ(t, τ)B(τ)u2(τ) dτ + D(t)u2(t) )

= α1 ρ(t, t0, x1, u1) + α2 ρ(t, t0, x2, u2)

b) Using the definition of time-invariance for dynamical systems, check:

ρ(t1 + τ, t0 + τ, x0, Tτu) = C x(t1 + τ) + D u((t1 + τ) − τ)

= C ( e^{A(t1+τ−(t0+τ))} x0 + ∫_{t0+τ}^{t1+τ} e^{A(t1+τ−σ)} B u(σ − τ) dσ ) + D u(t1)

= C e^{A(t1−t0)} x0 + C ∫_{t0}^{t1} e^{A(t1−s)} B u(s) ds + D u(t1)

= ρ(t1, t0, x0, u)

Problem 2. A linear time-invariant system.

a) The solution is constant exactly when ẋ(t) = 0, so 0 = Ax0 + bu ⟺ Ax0 = −bu. Such an x0 exists iff −bu ∈ R(A) ⟺ b ∈ R(A) (since u ≠ 0).

b) For the output to be zero, we also need y(t) = c x0 = 0. We can write both conditions as

[ A ; c ] x0 = [ −bu ; 0 ] = −u [ b ; 0 ],

which is equivalent to [ b ; 0 ] ∈ R([ A ; c ]).

c) Now we must have u = c x0. Similar to the above analysis, this leads to

[ A ; c ] x0 = [ −bu ; u ] = u [ −b ; 1 ],

and such an x0 will exist whenever [ −b ; 1 ] ∈ R([ A ; c ]).

Problem 3. Sampled Data System.

To prevent confusion between the continuous time system and its discretization, we will use the notation x[k] := x(kT), u[k] := u(kT) in the following:


x[k + 1] = x((k + 1)T) = e^{A((k+1)T − kT)} x(kT) + ∫_{kT}^{(k+1)T} e^{A((k+1)T − τ)} B u(τ) dτ

= e^{AT} x[k] + ∫_{kT}^{(k+1)T} e^{A((k+1)T − τ)} dτ B u[k]

Now, make the change of variables σ = (k + 1)T − τ in the integral, to get

x[k + 1] = e^{AT} x[k] + ∫_0^T e^{Aσ} dσ B u[k] = A_d x[k] + B_d u[k],

where

A_d := e^{AT},  B_d := ∫_0^T e^{Aσ} dσ B.

Remark. This is known as the 'exact discretization' of the original continuous-time system. If A is invertible, then consider (with the usual disclaimer about 'proceeding formally' where the infinite series is concerned),

∫_0^T e^{Aσ} dσ = ∫_0^T ( I + Aσ + (1/2)A²σ² + ··· ) dσ

= I ∫_0^T dσ + A ∫_0^T σ dσ + (1/2)A² ∫_0^T σ² dσ + ···

= IT + (1/2)AT² + (1/(3·2))A²T³ + ···

= A⁻¹ ( AT + (1/2)A²T² + (1/3!)A³T³ + ··· )

= A⁻¹ ( e^{AT} − I )

So in this case we have A_d = e^{AT}, B_d = A⁻¹(e^{AT} − I)B.
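As a quick numerical sketch (an addition to the solution; the example A, B, and T are arbitrary choices), the integral definition of B_d and the closed form for invertible A agree:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

# arbitrary invertible example system (assumption for illustration)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
T = 0.1

Ad = expm(A * T)
# Bd via the integral definition
Bd_int = quad_vec(lambda s: expm(A * s), 0.0, T)[0] @ B
# Bd via the closed form A^{-1}(e^{AT} - I)B, valid since A is invertible
Bd_inv = np.linalg.solve(A, Ad - np.eye(2)) @ B
assert np.allclose(Bd_int, Bd_inv)
```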

Problem 4. Discrete time linear system solution.

Assume k > k0, and let N = k − k0 (not to be confused with the N in the problem statement, which might have better been printed as ℕ). Then,


x(k0 + 1) = A x(k0) + B u_{k0}

x(k0 + 2) = A(A x(k0) + B u_{k0}) + B u_{k0+1} = A² x(k0) + AB u_{k0} + B u_{k0+1}

x(k0 + 3) = A(A² x(k0) + AB u_{k0} + B u_{k0+1}) + B u_{k0+2} = A³ x(k0) + A²B u_{k0} + AB u_{k0+1} + B u_{k0+2}

...

x(k) = x(k0 + N) = A^N x(k0) + A^{N−1}B u_{k0} + A^{N−2}B u_{k0+1} + ··· + AB u_{k−2} + B u_{k−1}

= A^N x(k0) + [ A^{N−1}B  A^{N−2}B  ···  AB  B ] [ u_{k0} ; u_{k0+1} ; ··· ; u_{k−2} ; u_{k−1} ]

= A^N x(k0) + Σ_{i=1}^{N} A^{N−i} B u_{k0+i−1}

= A^{k−k0} x(k0) + Σ_{i=k0}^{k−1} A^{k−1−i} B u_i   (1)

= A^{k−k0} x(k0) + Σ_{i=1}^{k−k0} A^{k−k0−i} B u_{k0+i−1}   (alternate form)

= A^{k−k0} x(k0) + Σ_{i=0}^{k−k0−1} A^{i} B u_{k−i−1}   (alternate form)

Thus,

y(k) = C x(k) + D u(k) = C A^{k−k0} x(k0) + C Σ_{i=k0}^{k−1} A^{k−1−i} B u_i + D u(k)

Remark. Note the similarity between the form of (1) and the usual form of the analogous continuous time case,

x(t) = e^{A(t−t0)} x(t0) + ∫_{t0}^{t} e^{A(t−τ)} B u(τ) dτ.
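The closed form (1) can be spot-checked against direct iteration of the recursion; the dimensions and random data below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# arbitrary example dimensions (assumption for illustration)
n, ni = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, ni))
x0 = rng.standard_normal(n)
k0, k = 2, 7
us = [rng.standard_normal(ni) for _ in range(k0, k)]

# iterate the recursion x(j+1) = A x(j) + B u_j
x = x0.copy()
for u in us:
    x = A @ x + B @ u

# closed form (1): A^{k-k0} x(k0) + sum_{i=k0}^{k-1} A^{k-1-i} B u_i
x_cf = np.linalg.matrix_power(A, k - k0) @ x0 + sum(
    np.linalg.matrix_power(A, k - 1 - i) @ B @ us[i - k0] for i in range(k0, k))
assert np.allclose(x, x_cf)
```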

Problem 5. Linear Quadratic Regulator.

a) We have a cost function of the form

J = ∫_0^∞ ( y^T Q y + u^T R u ) dt,

where in this case Q = 1, R = ρ. In LN11 we have a proof that the optimal control is

u* = −F* x(t) = −R⁻¹ B^T P x(t) = −ρ⁻¹ B^T P x(t),

where P is the unique positive definite solution to the (algebraic) Riccati equation

P A + A^T P − P B R⁻¹ B^T P + C^T Q C = 0


In this case the sparsity of A, B, C suggests that we may be able to determine the solution to the ARE by hand:

[ p11 , p12 ; p21 , p22 ][ 0 , 1 ; 0 , 0 ] + [ 0 , 0 ; 1 , 0 ][ p11 , p12 ; p21 , p22 ] − (1/ρ)[ p11 , p12 ; p21 , p22 ][ 0 , 0 ; 0 , 1 ][ p11 , p12 ; p21 , p22 ] + [ 1 , 0 ; 0 , 0 ] = [ 0 , 0 ; 0 , 0 ]

⟹ p11 = √2 ρ^{1/4}, p12 = p21 = √ρ, p22 = √2 ρ^{3/4}

⟹ P = [ √2 ρ^{1/4} , √ρ ; √ρ , √2 ρ^{3/4} ]

Thus,

u*(t) = −(1/ρ) [ 0 , 1 ] [ √2 ρ^{1/4} , √ρ ; √ρ , √2 ρ^{3/4} ] x(t) = [ −ρ^{−1/2} , −√2 ρ^{−1/4} ] x(t) = −F* x(t)

b) The closed loop system is

ẋ(t) = A x(t) − B F* x(t) = (A − BF*) x(t)

= ( [ 0 , 1 ; 0 , 0 ] + [ 0 ; 1 ][ −ρ^{−1/2} , −√2 ρ^{−1/4} ] ) x(t)

= [ 0 , 1 ; −ρ^{−1/2} , −√2 ρ^{−1/4} ] x(t),

and the closed loop dynamics A_CL = A − BF* has eigenvalues

(√2/2) ρ^{−1/4} (−1 ± j),

so the poles lie on 45-degree lines from the origin in the left half plane. Since ρ appears in the denominator, small values of ρ correspond to poles far away from the origin; the system response will be faster than for larger values of ρ. However, in all cases the damping ratio ζ = √2/2 will be the same.
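The hand solution of the ARE and the closed-loop characteristic polynomial can be cross-checked numerically; ρ = 0.5 below is an arbitrary positive choice:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rho = 0.5  # any positive value works (chosen arbitrarily for illustration)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# ARE: PA + A^T P - P B R^{-1} B^T P + C^T Q C = 0, with Q = 1, R = rho
P = solve_continuous_are(A, B, C.T @ C, np.array([[rho]]))
P_hand = np.array([[np.sqrt(2) * rho**0.25, np.sqrt(rho)],
                   [np.sqrt(rho),           np.sqrt(2) * rho**0.75]])
assert np.allclose(P, P_hand)

# closed-loop characteristic polynomial s^2 + sqrt(2) rho^{-1/4} s + rho^{-1/2}
F = (B.T @ P) / rho
assert np.allclose(np.poly(A - B @ F), [1.0, np.sqrt(2) * rho**-0.25, rho**-0.5])
```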

Problem 6. Preservation of Eigenvalues under Similarity Transform.

Recall the property of determinants that det AB = det A det B. Then,

det(sI − Ā) = det(sI − PAP⁻¹)
= det(sPP⁻¹ − PAP⁻¹)
= det( P (sI − A) P⁻¹ )
= det P det(sI − A) det P⁻¹
= det P det P⁻¹ det(sI − A)
= det(sI − A)

Thus the characteristic polynomials of Ā and A are identical, and so are their eigenvalues.
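A quick numerical illustration (an addition; the random matrices are arbitrary) that similarity preserves the characteristic polynomial, and hence the eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))   # generically non-singular
Abar = P @ A @ np.linalg.inv(P)

# identical characteristic polynomials, hence identical eigenvalues
assert np.allclose(np.poly(Abar), np.poly(A))
```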

Problem 7.

First, consider (At)^n = A^n t^n = ( Σ_{i=1}^n λi e_i v_i^T )^n t^n = Σ_{i=1}^n λi^n t^n e_i v_i^T (using the same argument as the n = 2 case in the lecture notes). Recall that Σ_{i=1}^n e_i v_i^T = I. Then,

e^{At} = I + At + (t²/2!) A² + (t³/3!) A³ + ···

= Σ_{i=1}^n e_i v_i^T + t ( Σ_{i=1}^n λi e_i v_i^T ) + (t²/2!) ( Σ_{i=1}^n λi² e_i v_i^T ) + (t³/3!) ( Σ_{i=1}^n λi³ e_i v_i^T ) + ···

= Σ_{i=1}^n ( 1 + λi t + (t²/2!) λi² + (t³/3!) λi³ + ··· ) e_i v_i^T

= Σ_{i=1}^n e^{λi t} e_i v_i^T,

where we are treating the infinite series representation of the exponential 'formally'.
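A numerical sketch of the dyadic expansion (an addition; the example matrix is an arbitrary diagonalizable choice):

```python
import numpy as np
from scipy.linalg import expm

# arbitrary diagonalizable example matrix (assumption for illustration)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
lam, E = np.linalg.eig(A)   # columns of E are the eigenvectors e_i
V = np.linalg.inv(E)        # rows of V are the dual vectors v_i^T

t = 0.5
# dyadic expansion: e^{At} = sum_i e^{lambda_i t} e_i v_i^T
eAt = sum(np.exp(lam[i] * t) * np.outer(E[:, i], V[i, :]) for i in range(len(lam)))
assert np.allclose(eAt, expm(A * t))
```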


EE221A Linear System Theory

Problem Set 7

Professor C. Tomlin

Department of Electrical Engineering and Computer Sciences, UC Berkeley

Fall 2011

Issued 11/3; Due 11/10

Problem 1.

A has characteristic polynomial (s − λ1)⁵ (s − λ2)³; it has four linearly independent eigenvectors; the largest Jordan block associated to λ1 is of dimension 2, and the largest Jordan block associated to λ2 is of dimension 3. Write down the Jordan form J of this matrix and write down cos(e^A) explicitly.

Problem 2.

A matrix A ∈ R^{6×6} has minimal polynomial s³. Give bounds on the rank of A.

Problem 3: Jordan Canonical Form.

Given

A =
[ −3  1  0  0  0  0  0
   0 −3  1  0  0  0  0
   0  0 −3  0  0  0  0
   0  0  0 −4  1  0  0
   0  0  0  0 −4  0  0
   0  0  0  0  0  0  0
   0  0  0  0  0  0  0 ]

(a) What are the eigenvalues of A? How many linearly independent eigenvectors does A have?

How many generalized eigenvectors?

(b) What are the eigenvalues of eAt?

(c) Suppose this matrix A were the dynamic matrix of an LTI system. What happens to the state trajectory over time (magnitude grows, decays, remains bounded...)?

Problem 4.

You are told that A : Rⁿ → Rⁿ and that R(A) ⊂ N(A). Can you determine A up to a change of basis? Why or why not?

Problem 6.

Let A ∈ R^{n×n} be non-singular. True or false: the nullspace of cos(log(A)) is an A-invariant subspace?

Problem 7.

Consider A ∈ R^{n×n}, b ∈ Rⁿ. Show that span{b, Ab, . . . , A^{n−1}b} is an A-invariant subspace.


EE221A Problem Set 7 Solutions - Fall 2011

Problem 1.

With the given information, we can determine the Jordan form J = TAT⁻¹ of A to be the block-diagonal matrix

J = diag( [ λ1 , 1 ; 0 , λ1 ], [ λ1 , 1 ; 0 , λ1 ], [ λ1 ], [ λ2 , 1 , 0 ; 0 , λ2 , 1 ; 0 , 0 , λ2 ] ).

Thus,

cos(e^J) = diag( [ cos e^{λ1} , −e^{λ1} sin e^{λ1} ; 0 , cos e^{λ1} ], [ cos e^{λ1} , −e^{λ1} sin e^{λ1} ; 0 , cos e^{λ1} ], [ cos e^{λ1} ], [ cos e^{λ2} , −e^{λ2} sin e^{λ2} , −(1/2)( e^{λ2} sin e^{λ2} + e^{2λ2} cos e^{λ2} ) ; 0 , cos e^{λ2} , −e^{λ2} sin e^{λ2} ; 0 , 0 , cos e^{λ2} ] ),

and cos(e^A) = T⁻¹ cos(e^J) T.

Problem 2.

We know that there is a single eigenvalue λ = 0 with multiplicity 6, and that the size of the largest Jordan block is 3. We know that rank(A) = rank(T⁻¹JT) = rank(J), since T is full rank (apply Sylvester's inequality). Then J must have rank at least 2, arising from the 1's on the superdiagonal of the Jordan block of size 3. If all the other Jordan blocks were size 1, then there would be no additional 1's on the superdiagonal, so the lower bound on rank(A) is 2. Now, the most 1's on the superdiagonal that this matrix could have is 4, which would be the case if there were two Jordan blocks of size 3. So rank(A) ≤ 4. Thus the bounds are

2 ≤ rank(A) ≤ 4.

Problem 3. Jordan Canonical Form.

a) Since this matrix is upper triangular (indeed, already in Jordan form) we can read the eigenvalues from the diagonal elements: σ(A) = {−3, −4, 0}. Since there are 4 Jordan blocks, there are also 4 linearly independent eigenvectors, and 3 generalized eigenvectors (2 associated with the eigenvalue −3 and 1 with the eigenvalue −4).

b) By the spectral mapping theorem,

σ(e^{At}) = e^{σ(A)t} = { e^{−3t}, e^{−4t}, 1 }

c) Since σ(A) has an eigenvalue not in the open left half plane, it is not (internally) asymptotically stable. (Note, however, that it is (internally) stable since the questionable eigenvalues are on the jω-axis and have Jordan blocks of size 1.) In particular, the first 5 states will decay to zero asymptotically (indeed, exponentially), and the last two will remain bounded (indeed, constant).

Problem 4.

No. The given property R(A) ⊂ N(A) is equivalent to A²v = θ, ∀v ∈ Rⁿ. Clearly A0 = 0_{n×n} has this property, but so does, e.g., the matrix with a single 1 in the (1, 2) entry and zeros elsewhere,

A1 = [ 0 , 1 , 0 , ··· , 0 ; 0 , 0 , 0 , ··· , 0 ; ⋮ ; 0 , ··· , 0 , 0 ],


and since A0, A1 are both in Jordan form, but are not the same (even with block reordering), this means that A cannot be determined up to a change of basis.

Problem 6.

True.

Proof. Let f(x) := cos(log(x)). We can write A = T⁻¹JT. So f(A) = f(T⁻¹JT) = T⁻¹f(J)T. Now consider

N(f(A)) = N(T⁻¹f(J)T)

Now if x ∈ N(T⁻¹f(J)T), then T⁻¹f(J)Tx = θ ⟺ f(J)Tx = θ ⟺ Tx ∈ N(f(J)). We need to show that f(A)Ax = θ ⟺ T⁻¹f(J)TT⁻¹JTx = θ ⟺ f(J)JTx = θ. This will be true if J and f(J) commute, because if so, then f(J)JTx = Jf(J)Tx = θ, since we have shown that Tx ∈ N(f(J)) whenever x ∈ N(f(A)).

Note that the block structure of f(J) and J leads to f(J)J and Jf(J) having the same block structure, and we only need to check whether Ji and f(Ji) commute, where Ji is the i-th Jordan block. Write Ji = λiI + S where S is an "upper shift" matrix (all zeros except for 1's on the superdiagonal).

So we want to know if (λiI + S)f(Ji) = λif(Ji) + Sf(Ji) = f(Ji)λi + f(Ji)S; in other words, does Sf(Ji) = f(Ji)S? Note that when S pre-multiplies a matrix, the result is the original matrix with its entries shifted up and the last row filled with zeros; when S post-multiplies a matrix, the result is the original matrix with its entries shifted to the right and the first column filled with zeros. Since f(Ji) is an upper-triangular, banded (Toeplitz) matrix, the result is the same in either case, and so f(J) and J commute.

So indeed, the nullspace of cos (log (A)) is an A-invariant subspace.

Alternate proof: Let f(x) := cos(log(x)). By the spectral mapping theorem, σ(f(A)) = f(σ(A)); since we are interested in the nullspace of f(A), this means we want to consider eigenvectors associated with eigenvalues at zero of f(A). So these are the values of x that make cos(log(x)) = 0: x = e^{π/2}, e^{3π/2}, and so on. We have seen that for any eigenvalue λ of A, the space N(A − λI) spanned by the eigenvectors associated with that eigenvalue is A-invariant. The nullspace of f(A) is thus the direct sum of such subspaces and is hence also A-invariant. (Thanks to Roy Dong for this proof.)

Another alternate proof: Since f(x) := cos(log(x)) is analytic for x ≠ 0 and A nonsingular means 0 ∉ σ(A), f(A) = p(A) for some polynomial p of finite degree. Then A-invariance of the nullspace is easy to check. Let v ∈ N(f(A)), so f(A)v = θ. Then, since any polynomial in A commutes with A,

f(A)Av = (c0 I + c1 A + ··· + c_{n−1} A^{n−1}) Av = A (c0 I + c1 A + ··· + c_{n−1} A^{n−1}) v = A f(A) v = θ,

so Av ∈ N(f(A)).

Problem 7.

Proof. Let v ∈ Ω := span{b, Ab, A²b, . . . , A^{n−1}b}. Then v = α0 b + α1 Ab + α2 A²b + ··· + α_{n−1} A^{n−1} b.

Now consider Av = α0 Ab + α1 A²b + α2 A³b + ··· + α_{n−2} A^{n−1}b + α_{n−1} Aⁿb. Apply the Cayley-Hamilton theorem:

Aⁿ = β0 I + β1 A + ··· + β_{n−1} A^{n−1},

so we have

Av = (α_{n−1}β0) b + (α0 + α_{n−1}β1) Ab + (α1 + α_{n−1}β2) A²b + ··· + (α_{n−2} + α_{n−1}β_{n−1}) A^{n−1}b,

and so Av ∈ Ω.


EE221A Linear System Theory

Problem Set 8

Professor C. Tomlin

Department of Electrical Engineering and Computer Sciences, UC Berkeley

Fall 2011

Issued 11/10; Due 11/18

Problem 1: BIBO Stability.

Figure 1: A simple heat exchanger, for Problem 1.

Consider the simple heat exchanger shown in Figure 1, in which fC and fH are the flows (assumed constant) of cold and hot water, TH and TC represent the temperatures in the hot and cold compartments, respectively, THi and TCi denote the temperatures of the hot and cold inflow, respectively, and VH and VC are the volumes of hot and cold water. The temperatures in both compartments evolve according to:

VC dTC/dt = fC (TCi − TC) + β (TH − TC)  (1)

VH dTH/dt = fH (THi − TH) − β (TH − TC)  (2)

Let the inputs to this system be u1 = TCi, u2 = THi, the outputs y1 = TC and y2 = TH, and assume that fC = fH = 0.1 (m³/min), β = 0.2 (m³/min), and VH = VC = 1 (m³).

(a) Write the state space and output equations for this system in modal form.

(b) In the absence of any input, determine y1(t) and y2(t).

(c) Is the system BIBO stable? Show why or why not.

Problem 2: BIBO Stability

Consider a single input single output LTI system with transfer function G(s) = 1/(s² + 1). Is this system BIBO stable?


Problem 3: Exponential stability of LTI systems.

Prove that if the A matrix of the LTI system ẋ = Ax has all of its eigenvalues in the open left half plane, then the equilibrium xe = 0 is asymptotically stable.

Problem 4: Characterization of Internal (State Space) Stability for LTI systems.

(a) Show that the system ẋ = Ax is internally stable if all of the eigenvalues of A are in the closed left half of the complex plane (closed means that the jω-axis is included), and each of the jω-axis eigenvalues has a Jordan block of size 1.

(b) Given

A =
[ −3  1  0  0  0  0  0
   0 −3  1  0  0  0  0
   0  0 −3  0  0  0  0
   0  0  0 −4  1  0  0
   0  0  0  0 −4  0  0
   0  0  0  0  0  0  0
   0  0  0  0  0  0  0 ]

Is the system ẋ = Ax exponentially stable? Is it stable?


EE221A Problem Set 8 Solutions - Fall 2011

Problem 1. BIBO Stability.

a) First write this LTI system in state space form,

ẋ = Ax + Bu = [ −(β+fC)/VC , β/VC ; β/VH , −(β+fH)/VH ] x + [ fC/VC , 0 ; 0 , fH/VH ] u = [ −0.3 , 0.2 ; 0.2 , −0.3 ] x + [ 0.1 , 0 ; 0 , 0.1 ] u

y = Cx = [ 1 , 0 ; 0 , 1 ] x

where x := (TC, TH)^T, u := (TCi, THi)^T. This has two distinct eigenvalues (so we know it can be diagonalized): λ1 = −0.5 with eigenvector e1 = (1, −1) and λ2 = −0.1 with eigenvector e2 = (1, 1). Let T⁻¹ = [ e1 e2 ], so

T = (1/2) [ 1 , −1 ; 1 , 1 ]

and the modal form is

ż = Āz + B̄u,
y = C̄z,

where Ā = TAT⁻¹ = [ −0.5 , 0 ; 0 , −0.1 ], B̄ = TB = [ 0.05 , −0.05 ; 0.05 , 0.05 ], C̄ = CT⁻¹ = [ 1 , 1 ; −1 , 1 ].

b)

y(t) = C̄ z(t) = C̄ e^{Āt} z(0) = C̄ e^{Āt} T x0

= (1/2) [ 1 , 1 ; −1 , 1 ] [ e^{−0.5t} , 0 ; 0 , e^{−0.1t} ] [ 1 , −1 ; 1 , 1 ] [ x0,1 ; x0,2 ]

= (1/2) [ 1 , 1 ; −1 , 1 ] [ e^{−0.5t} , −e^{−0.5t} ; e^{−0.1t} , e^{−0.1t} ] [ x0,1 ; x0,2 ]

= (1/2) [ e^{−0.5t} + e^{−0.1t} , −e^{−0.5t} + e^{−0.1t} ; −e^{−0.5t} + e^{−0.1t} , e^{−0.5t} + e^{−0.1t} ] [ x0,1 ; x0,2 ]

⟹ y1(t) = (1/2) e^{−0.5t} (x0,1 − x0,2) + (1/2) e^{−0.1t} (x0,1 + x0,2)

y2(t) = (1/2) e^{−0.5t} (x0,2 − x0,1) + (1/2) e^{−0.1t} (x0,1 + x0,2)

c) Since all the eigenvalues are in the open left half plane, the system is (internally) exponentially stable, and since we have a minimal realization ((A, B) completely controllable and (A, C) completely observable; clear by inspection since B and C are both full rank), it is thus BIBO stable.
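As a numerical cross-check of the zero-input response derived in part (b) (an addition; the initial temperatures are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.3, 0.2], [0.2, -0.3]])
x0 = np.array([1.0, 2.0])   # arbitrary initial temperatures (an assumption)
t = 3.0

y_num = expm(A * t) @ x0    # zero-input response; y = x here
y1 = 0.5*np.exp(-0.5*t)*(x0[0] - x0[1]) + 0.5*np.exp(-0.1*t)*(x0[0] + x0[1])
y2 = 0.5*np.exp(-0.5*t)*(x0[1] - x0[0]) + 0.5*np.exp(-0.1*t)*(x0[0] + x0[1])
assert np.allclose(y_num, [y1, y2])
```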

Problem 2. BIBO Stability.

The transfer function has poles at ±j; thus there are poles that are not in C°₋ (the open left half plane), therefore the system cannot be BIBO stable.

Consider for example the bounded input u(t) = sin t. Then û(s) = 1/(s² + 1) and

ŷ(s) = G(s)û(s) = 1/(s² + 1)² = (1/2) [ 1/(s² + 1) − (s² − 1)/(s² + 1)² ]

⟹ y(t) = (1/2)[ sin t − t cos t ],

which will clearly grow without bound as t → ∞.
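This resonance can also be seen numerically by simulating a state-space realization of G(s) (an addition to the solution):

```python
import numpy as np
from scipy.integrate import solve_ivp

# simulate y'' + y = sin(t), zero initial conditions (a realization of G(s) = 1/(s^2+1))
rhs = lambda t, x: [x[1], -x[0] + np.sin(t)]
ts = np.linspace(0.0, 40.0, 401)
sol = solve_ivp(rhs, (0.0, 40.0), [0.0, 0.0], t_eval=ts, rtol=1e-10, atol=1e-12)

y_closed = 0.5 * (np.sin(ts) - ts * np.cos(ts))
assert np.allclose(sol.y[0], y_closed, atol=1e-5)
assert np.max(np.abs(sol.y[0])) > 10.0   # bounded input, growing output
```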


Problem 3. Exponential stability of LTI systems.

We have seen that for an LTI system, Φ(t, t0) = e^{A(t−t0)}. By the spectral mapping theorem, σ(e^{A(t−t0)}) = f(σ(A)) where f(x) = e^{x(t−t0)}; thus f′(x) = (t−t0) e^{x(t−t0)}, f″(x) = (t−t0)² e^{x(t−t0)}, ..., f^{(k)}(x) = (t−t0)^k e^{x(t−t0)}.

Note that the Jordan form of e^{A(t−t0)} will be comprised solely of entries of this form (scaled by 1/(k−1)! ≤ 1), i.e. products of polynomials in (t−t0) and e^{λi(t−t0)}. When Re(λi) < 0, all these entries will go to zero as t → ∞, since any decaying exponential eventually dominates any growing polynomial. So the magnitude of the state must also go to zero, and by continuity of these polynomial-exponential products the state remains bounded along the way. This implies that xe = 0 is asymptotically stable.

This is developed a bit more formally in LN15, p. 5, but the idea is the same; and we don't need all the mechanics of that proof since we aren't trying to show that the state goes to zero exponentially fast.

Problem 4. Characterization of Internal (State Space) Stability for LTI systems.

(a) For internal stability we simply need the state to be bounded for all t ≥ t0. This implies that ‖e^{Jt}‖ must be bounded, where J is the Jordan form of A. By the analysis in Problem 3, this is clearly true for the subspaces of the state space corresponding to eigenvalues in the open left half plane. For subspaces corresponding to an eigenvalue λi = jω on the imaginary axis, note that the corresponding Jordan block Ji with block size 1 leads to simply e^{Jit} = e^{λit} = e^{jωt} = cos ωt + j sin ωt, hence |e^{Jit}| = 1.

(b) This system is in Jordan form; the eigenvalues either have negative real part, or they are on the imaginary axis and have Jordan block size 1, so by the result of part (a) the system is (internally) stable. However, because of the eigenvalues at zero, the system is not exponentially stable.


EE221A Linear System Theory

Problem Set 9

Professor C. Tomlin

Department of Electrical Engineering and Computer Sciences, UC Berkeley

Fall 2011

Issued 11/21; Due 12/1

Problem 1: Lyapunov Equation.

(a) Consider the linear map L : R^{n×n} → R^{n×n} defined by L(P) = A^T P + PA. Show that if λi + λj ≠ 0, ∀λi, λj ∈ σ(A), the equation

A^T P + PA = Q

has a unique symmetric solution for given symmetric Q.

(b) Show that if σ(A) ⊂ C₋ then for given Q > 0, there exists a unique positive definite P solving

A^T P + PA = −Q

(Hint: try P = ∫_0^∞ e^{A^T t} Q e^{At} dt)

Problem 2: Asymptotic and exponential stability.

True or False: If a linear time-varying system is asymptotically stable, it is also exponentially stable. If true, prove it; if false, give a counterexample.

Problem 3: State observation problem.

Consider the linear time varying system:

ẋ(t) = A(t)x(t)

y(t) = C(t)x(t)

This system is not necessarily observable. The initial condition at time 0 is x0.

(a) Suppose the output y(t) is observed over the interval [0, T]. Under what conditions can the initial state x0 be determined? How would you determine it?

(b) Now suppose the output is subject to some error or measurement noise. Determine the "best" estimate of x0 given y(·) and the system model.

(c) Consider all initial conditions x0 such that ||x0|| = 1. Defining the energy in the output signal as ⟨y(·), y(·)⟩, is it possible for the energy of the output signal to be zero?

Problem 4: State vs. Output Feedback.

Consider a dynamical system described by:

ẋ = Ax + Bu (1)

y = Cx (2)

where

A = [ 0 , 1 ; 7 , −4 ], B = [ 1 ; 2 ], C = [ 1 3 ] (3)

For each of cases (a) and (b) below, derive a state space representation of the resulting closed loop system, and determine the characteristic equation of the resulting closed loop "A" matrix (called the closed loop characteristic equation): (a) u = −[f1 f2]x, and (b) u = −ky.

Problem 5: Controllable canonical form.


Consider the linear time invariant system with state equation:

[ ẋ1 ; ẋ2 ; ẋ3 ] = [ 0 , 1 , 0 ; 0 , 0 , 1 ; −α3 , −α2 , −α1 ] [ x1 ; x2 ; x3 ] + [ 0 ; 0 ; 1 ] u (4)

Insert state feedback: the input to the overall closed loop system is v and u = v − kx, where k is a constant row vector. Show that given any polynomial p(s) = Σ_{k=0}^{3} a_k s^{3−k} with a0 = 1, there is a row vector k such that the closed loop system has p(s) as its characteristic equation. (This naturally extends to n dimensions, and implies that any system with a representation that can be put into the form above can be stabilized by state feedback.)


EE221A Linear System Theory

Problem Set 10

Professor C. Tomlin

Department of Electrical Engineering and Computer Sciences, UC Berkeley

Fall 2011

Issued 12/2; Due 12/9

Problem 1: Feedback control design by eigenvalue placement. Consider the dynamic system:

d⁴θ/dt⁴ + α1 d³θ/dt³ + α2 d²θ/dt² + α3 dθ/dt + α4 θ = u

where u represents an input force and the αi are real scalars. Assuming that d³θ/dt³, d²θ/dt², dθ/dt, and θ can all be measured, design a state feedback control scheme which places the closed-loop eigenvalues at s1 = −1, s2 = −1, s3 = −1 + j1, s4 = −1 − j1.

Problem 2: Controllability of Jordan Forms.

Given the Jordan Canonical Form of Problem Set 7:

A =
[ −3  1  0  0  0  0  0
   0 −3  1  0  0  0  0
   0  0 −3  0  0  0  0
   0  0  0 −4  1  0  0
   0  0  0  0 −4  0  0
   0  0  0  0  0  0  0
   0  0  0  0  0  0  0 ]

Suppose this matrix A were the dynamic matrix of a system to be controlled. What is the minimum number of inputs needed for the system to be controllable?

Problem 3: Observer design.

Figure 1: Velocity Observation System.

Figure 1 shows a velocity observation system where x1 is the velocity to be observed. An observer is to be constructed to track x1, using u and x2 as inputs. The variable x2 is obtained from x1 through a sensor having the known transfer function

(2 − s)/(2 + s) (1)

as shown in Figure 1.

(a) Derive a set of state-space equations for the system with state variables x1 and x2, input u, and output x2.

(b) Design an observer with states z1 and z2 to track x1 and x2 respectively. Choose both observer eigenvalues to be at −4. Write out the state space equations for the observer.

(c) Derive the combined state equation for the system plus observer. Take as state variables x1, x2, e1 = x1 − z1, and e2 = x2 − z2. Take u as input and z1 as the output. Is this system controllable and/or observable? Give physical reasons for any states being uncontrollable or unobservable.

(d) What is the transfer function relating u to z1? Explain your result.

Problem 4: Observer-controller for a nonlinear system.

The simplified dynamics of a magnetically suspended steel ball are given by:

m ÿ = mg − c u²/y²

where the input u represents the current supplied to the electromagnet, y is the vertical position of the ball, which may be measured by a position sensor, g is gravitational acceleration, m is the mass of the ball, and c is a positive constant such that the force on the ball due to the electromagnet is c u²/y². Assume a normalization such that m = g = c = 1.

(a) Using the states x1 = y and x2 = ẏ, write down a nonlinear state space description of this system.

(b) What equilibrium control input ue must be applied to suspend the ball at y = 1 m?

(c) Write the linearized state space equations for state and input variables representing perturbations away from the equilibrium of part (b).

(d) Is the linearized model stable? What can you conclude about the stability of the nonlinear system close to the equilibrium point xe?

(e) Is the linearized model controllable? Observable?

(f) Design a state feedback controller for the linearized system, to place the closed loop eigenvalues at −1, −1.

(g) Design a full order observer, so that the state estimate error dynamics has eigenvalues at −5, −5.

(h) Now, suppose that you applied this controller to the original nonlinear system; discuss how you would expect the system to behave. How would the behavior change if you had chosen controller eigenvalues at −5, −5, and observer eigenvalues at −20, −20?

Problem 5. Given a linear time varying system R(·), show that if R(·) is completely controllable on [t0, t1], then R is completely controllable on any [t0′, t1′], where t0′ ≤ t0 < t1 ≤ t1′. Show that this is no longer true when the interval [t0, t1] is not a subset of [t0′, t1′].


EE221A Problem Set 10 Solutions - Fall 2011

Problem 1. Feedback control design by eigenvalue placement.

First write the system in state space form, with x = [ θ ; θ̇ ; θ̈ ; θ⃛ ]:

ẋ = [ 0 , 1 , 0 , 0 ; 0 , 0 , 1 , 0 ; 0 , 0 , 0 , 1 ; −α4 , −α3 , −α2 , −α1 ] x + [ 0 ; 0 ; 0 ; 1 ] u = Ax + Bu

y = x

We can check the controllability by considering

Q = [ sI − A  B ] =
[ s   −1   0   0       0
  0    s  −1   0       0
  0    0   s  −1       0
  α4   α3  α2  s + α1  1 ]

which clearly has rank 4 for any s ∈ C (moreover, by inspection (A, B) is in controllable canonical form), so (A, B) is completely controllable. Now, let

u = −f^T x = −[ f1 f2 f3 f4 ] x

The closed loop system is then

ẋ = [ 0 , 1 , 0 , 0 ; 0 , 0 , 1 , 0 ; 0 , 0 , 0 , 1 ; −α4 , −α3 , −α2 , −α1 ] x − [ 0 ; 0 ; 0 ; 1 ] [ f1 f2 f3 f4 ] x

= [ 0 , 1 , 0 , 0 ; 0 , 0 , 1 , 0 ; 0 , 0 , 0 , 1 ; −α4 − f1 , −α3 − f2 , −α2 − f3 , −α1 − f4 ] x = A_CL x

We can compute the characteristic polynomial of the closed loop system,

χ_{A_CL}(s) = det(sI − A_CL) = s⁴ + (α1 + f4)s³ + (α2 + f3)s² + (α3 + f2)s + α4 + f1

while our desired characteristic polynomial is

χ_des(s) = (s + 1)² (s + 1 + j)(s + 1 − j) = s⁴ + 4s³ + 7s² + 6s + 2,

and by matching terms we conclude that

f1 = 2 − α4
f2 = 6 − α3
f3 = 7 − α2
f4 = 4 − α1
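The gain formulas can be spot-checked numerically; the plant coefficients below are arbitrary sample values:

```python
import numpy as np

# sample plant coefficients alpha_1..alpha_4 (chosen arbitrarily for illustration)
a1, a2, a3, a4 = 1.0, -2.0, 0.5, 3.0
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [-a4, -a3, -a2, -a1]], dtype=float)
B = np.array([[0.0], [0.0], [0.0], [1.0]])

f = np.array([[2 - a4, 6 - a3, 7 - a2, 4 - a1]])   # gains from the matching above
# closed-loop characteristic polynomial should be s^4 + 4s^3 + 7s^2 + 6s + 2
assert np.allclose(np.poly(A - B @ f), [1, 4, 7, 6, 2])
```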

Problem 2. Controllability of Jordan Forms.

A minimum of two inputs is needed. Proof: the PBH test shows that no B matrix with a single column can provide complete controllability (the two Jordan blocks associated with the eigenvalue 0 would require two linearly independent rows of B at s = 0); it is easy to find a two-column B matrix that does, for example

B =
[ 0  0
  0  0
  1  0
  0  0
  1  0
  1  0
  0  1 ]

Problem 3. Observer design.

a) We have x1(s) = (1/s) u(s) ⟹ s x1(s) = u(s) ⟹ ẋ1(t) = u(t). Also (2 + s) x2(s) = (2 − s) x1(s) ⟹ ẋ2(t) = 2x1(t) − 2x2(t) − ẋ1(t) = 2x1(t) − 2x2(t) − u(t). So the system in state-space form is,

[ ẋ1 ; ẋ2 ] = [ 0 , 0 ; 2 , −2 ] [ x1 ; x2 ] + [ 1 ; −1 ] u

y = [ 0 1 ] [ x1 ; x2 ]

b) We want to place the eigenvalues of A − TC, where T = [t1  t2]ᵀ and C = [0  1], so that A − TC = $\begin{bmatrix} 0 & -t_1 \\ 2 & -2-t_2 \end{bmatrix}$. The characteristic polynomial of A − TC is

$$\det(sI - (A - TC)) = \det\begin{bmatrix} s & t_1 \\ -2 & s + 2 + t_2 \end{bmatrix} = s^2 + (2 + t_2)s + 2t_1,$$

and we want it to equal the desired characteristic polynomial, (s + 4)² = s² + 8s + 16. Thus 2 + t2 = 8 ⟹ t2 = 6 and 2t1 = 16 ⟹ t1 = 8. The observer state space equations are therefore

$$\dot{z} = \begin{bmatrix} 0 & -8 \\ 2 & -8 \end{bmatrix} z + \begin{bmatrix} 1 \\ -1 \end{bmatrix} u + \begin{bmatrix} 8 \\ 6 \end{bmatrix} y$$
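A short numerical check of the observer design (a sketch, using the matrices derived above): the error dynamics A − TC should have both poles at −4.

```python
import numpy as np

# Plant matrices and observer gain T from parts (a) and (b).
A = np.array([[0.0, 0.0], [2.0, -2.0]])
C = np.array([[0.0, 1.0]])
T = np.array([[8.0], [6.0]])

# Observer error dynamics e_dot = (A - TC)e should have both poles at -4.
print(np.poly(A - T @ C))  # ≈ [1, 8, 16] = (s+4)^2
```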

c) The overall dynamics are described by

$$\begin{bmatrix} \dot{x} \\ \dot{e} \end{bmatrix} = \begin{bmatrix} A & 0 \\ 0 & A - TC \end{bmatrix}\begin{bmatrix} x \\ e \end{bmatrix} + \begin{bmatrix} B \\ 0 \end{bmatrix} u = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 2 & -2 & 0 & 0 \\ 0 & 0 & 0 & -8 \\ 0 & 0 & 2 & -8 \end{bmatrix}\begin{bmatrix} x \\ e \end{bmatrix} + \begin{bmatrix} 1 \\ -1 \\ 0 \\ 0 \end{bmatrix} u$$

$$y = \begin{bmatrix} 1 & 0 & -1 & 0 \end{bmatrix}\begin{bmatrix} x \\ e \end{bmatrix}$$

where x = [x1  x2]ᵀ, e = [e1  e2]ᵀ, B = [1  −1]ᵀ. The overall system is neither completely controllable nor completely observable; the controllability matrix Q = [B | AB | A²B | A³B] is

$$Q = \begin{bmatrix} 1 & 0 & 0 & 0 \\ -1 & 4 & -8 & 16 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$


which has rank 2, and the observability matrix is

$$O = \begin{bmatrix} C \\ CA \\ CA^2 \\ CA^3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 8 \\ 0 & 0 & 16 & -64 \\ 0 & 0 & -128 & 384 \end{bmatrix}$$

which has rank 3. The error states are not controllable because the observer is designed such that the error converges to zero. Intuitively, it also makes sense that one should not be able to control the state estimates separately from the states that are being estimated! The state x2 is not observable. This is because the system is designed to ensure z1 → x1 independently of u, and one does not wish variations in the controlled variable x2 to affect the estimate of x1.
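These ranks are easy to confirm numerically (a quick sketch using the augmented matrices derived above):

```python
import numpy as np

# Augmented plant/error dynamics from part (c).
A = np.array([[0, 0, 0, 0],
              [2, -2, 0, 0],
              [0, 0, 0, -8],
              [0, 0, 2, -8]], dtype=float)
B = np.array([[1], [-1], [0], [0]], dtype=float)
C = np.array([[1, 0, -1, 0]], dtype=float)

# Controllability matrix [B | AB | A^2 B | A^3 B] and observability matrix.
Q = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(4)])
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(4)])

print(np.linalg.matrix_rank(Q), np.linalg.matrix_rank(O))  # 2 3
```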

d)

$$C(sI - A)^{-1}B = \begin{bmatrix} 1 & 0 & -1 & 0 \end{bmatrix}\begin{bmatrix} s & 0 & 0 & 0 \\ -2 & s+2 & 0 & 0 \\ 0 & 0 & s & 8 \\ 0 & 0 & -2 & s+8 \end{bmatrix}^{-1}\begin{bmatrix} 1 \\ -1 \\ 0 \\ 0 \end{bmatrix}$$

$$= \begin{bmatrix} 1 & 0 & -1 & 0 \end{bmatrix}\begin{bmatrix} \frac{s+2}{s(s+2)} & 0 & 0 & 0 \\ \frac{2}{s(s+2)} & \frac{s}{s(s+2)} & 0 & 0 \\ 0 & 0 & \spadesuit & \spadesuit \\ 0 & 0 & \spadesuit & \spadesuit \end{bmatrix}\begin{bmatrix} 1 \\ -1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} \frac{1}{s} & 0 & \spadesuit & \spadesuit \end{bmatrix}\begin{bmatrix} 1 \\ -1 \\ 0 \\ 0 \end{bmatrix} = \frac{1}{s}$$

where ♠ denotes terms that don't matter since they will be multiplied by zero. The observer is essentially inverting the dynamics of the sensor, so that the transfer function from the input to
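The 1/s transfer function can be verified numerically by evaluating C(sI − A)⁻¹B at a few sample points of s (a sketch; the test frequencies are arbitrary):

```python
import numpy as np

A = np.array([[0, 0, 0, 0],
              [2, -2, 0, 0],
              [0, 0, 0, -8],
              [0, 0, 2, -8]], dtype=float)
B = np.array([[1], [-1], [0], [0]], dtype=float)
C = np.array([[1, 0, -1, 0]], dtype=float)

# Evaluate C (sI - A)^{-1} B at a few test points; it should match 1/s.
errs = [abs((C @ np.linalg.inv(s * np.eye(4) - A) @ B)[0, 0] - 1 / s)
        for s in [1.0 + 1.0j, 2.0, 0.5 - 3.0j]]
print(errs)  # all ≈ 0
```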

Problem 4. Observer-controller for a nonlinear system.

a)

$$\dot{x} := \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} x_2 \\ 1 - \dfrac{u^2}{x_1^2} \end{bmatrix} = f(x, u), \qquad y = \begin{bmatrix} 1 & 0 \end{bmatrix} x = Cx$$

b) When u = 1 and x = (1, 0), we get ẋ = 0, so this input will keep the system in equilibrium at y = x1 = 1 m.

c) Let A := Dₓf|₍ₓ₀,ᵤ₀₎, B := Dᵤf|₍ₓ₀,ᵤ₀₎, with x0 := (1, 0), u0 = 1. Then

$$A = \begin{bmatrix} 0 & 1 \\ 2 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ -2 \end{bmatrix}.$$

d) The eigenvalues of A are ±√2, so the equilibrium x0 is unstable in the linearized system. The same equilibrium will consequently also be unstable in the nonlinear system.

e)

$$\mathcal{C} = \begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} 0 & -2 \\ -2 & 0 \end{bmatrix} \implies \text{controllable}$$

$$\mathcal{O} = \begin{bmatrix} C \\ CA \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \implies \text{observable}$$

f) Let the feedback be u = −Fx, so the closed loop dynamics are ẋ = (A − BF)x. By comparing with the desired characteristic polynomial we can determine that F = [−3/2  −1] gives the desired closed loop eigenvalues.
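As a sanity check (a sketch; the desired controller eigenvalues are taken here to be −1, −1, which is what this F produces):

```python
import numpy as np

# Linearization from part (c) with the gain from part (f).
A = np.array([[0.0, 1.0], [2.0, 0.0]])
B = np.array([[0.0], [-2.0]])
F = np.array([[-1.5, -1.0]])

# With F = [-3/2, -1] the closed-loop polynomial is (s+1)^2, i.e. both poles at -1.
print(np.poly(A - B @ F))  # ≈ [1, 2, 1]
```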


g) Let the observer gain matrix be T = [t1  t2]ᵀ. Then

$$\det\left[sI - (A - TC)\right] = \det\begin{bmatrix} s + t_1 & -1 \\ -2 + t_2 & s \end{bmatrix} = s^2 + t_1 s - 2 + t_2,$$

while the desired characteristic polynomial is

$$(s+5)^2 = s^2 + 10s + 25.$$

Thus T = [10  27]ᵀ gives the desired spectrum for the observer dynamics A − TC.
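Again a quick numerical check of the gain (a sketch using the matrices above):

```python
import numpy as np

# Observer gain from part (g) on the linearized plant.
A = np.array([[0.0, 1.0], [2.0, 0.0]])
C = np.array([[1.0, 0.0]])
T = np.array([[10.0], [27.0]])

print(np.poly(A - T @ C))  # ≈ [1, 10, 25] = (s+5)^2
```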

h) In principle, near the equilibrium this observer-controller will both control and observe the nonlinear system. More aggressive eigenvalue placement leads to higher gains in the controller, potentially degrading performance (especially in the presence of measurement noise, actuator saturation, signal digitization, unmodeled disturbances, etc.).

Problem 5.

a) Let (x′0, t′0) be the initial phase and (x′1, t′1) be an arbitrary final phase. Construct a control u(·) piecewise such that u(t) = 0 for t ∈ [t′0, t0) ∪ (t1, t′1]. Then we have that x(t0) = Φ(t0, t′0)x′0, and x(t′1) = Φ(t′1, t1)x(t1) ⟺ x(t1) = Φ(t1, t′1)x(t′1). But since R(·) is completely controllable on [t0, t1], we know there exists a control ū on [t0, t1] that will transfer any (x0, t0) to any (x1, t1). So let u(t) = ū(t) for t ∈ [t0, t1].

b) Counterexample: Consider a system R(·) = (A(·), B(·), C(·), D(·)), where t0 = t′0 < t′1 < t1 and

$$B(t) = \begin{cases} 0_{n\times n}, & t_0 \le t \le t'_1 \\ I_{n\times n}, & t'_1 < t \le t_1 \end{cases}$$

Then clearly R(·) is completely controllable on [t0, t1], but not on [t′0, t′1].


EE221A Problem Set 9 Solutions - Fall 2011

Problem 1. Lyapunov Equation.

(a) We want to show that L(P) = Q has a unique symmetric solution. So we are interested in whether L : P ↦ AᵀP + PA is injective (for uniqueness) and surjective (a solution exists for any given symmetric Q). Thus we want to know if L is bijective, or equivalently (since L maps Rⁿˣⁿ to itself), if N(L) = θ.

A sketch of the proof is as follows: we use the (ordinary and generalized) eigenvalues and eigenvectors of A, and the property that sums of eigenvalues cannot be zero, to show that Pv = 0 for each (ordinary or generalized) eigenvector v of A. Since the set of all (ordinary and generalized) eigenvectors is a basis for Rⁿ, the only P that satisfies this is P = 0, hence N(L) = θ as desired.

Let e be an eigenvector of A with eigenvalue λ. Then

$$L(P) = 0 \implies A^T P e + P A e = 0 \implies A^T P e = -\lambda P e,$$

and since σ(A) = σ(Aᵀ), this means that either: i) −λ is an eigenvalue of A, with left eigenvector eᵀP, or ii) Pe = 0. But the first case is precluded by the given property on the eigenvalues of A. So we have shown that for every eigenvector e of A, Pe = 0. If A happens to be diagonalizable (i.e. it has a complete set of n linearly independent eigenvectors), then we are done. However, we can't assume this. Thus, consider also a generalized eigenvector v of A of degree 1 (so Av = λv + e, where e is some eigenvector of A). Then

$$L(P) = 0 \implies A^T P v + P A v = 0 \implies A^T P v = -\lambda P v - P e \implies A^T P v = -\lambda P v,$$

where we recall that we have already shown that Pe = 0. By the same reasoning as before, we now have that Pv = 0 for all generalized eigenvectors of degree 1. One can continue this until all of the eigenvectors and generalized eigenvectors of A have been exhausted, with the result that P maps every eigenvector and generalized eigenvector of A to zero. But since the eigenvectors and generalized eigenvectors of A form a basis for Rⁿ, this implies that L(P) = 0 ⟹ P = 0. So we have that L(P) = Q has a unique solution. Now, to show that any solution is symmetric:

$$Q = Q^T \implies A^T P + PA = P^T A + A^T P^T \implies L(P) = L(P^T) \implies P = P^T$$

(b) Note that σ(A) ⊂ ℂ⁻ implies the property in part (a), so by that result we have existence of a unique, symmetric solution. Check that the hinted P is this solution:

$$A^T P + PA = A^T \int_0^\infty e^{A^T t} Q e^{At}\,dt + \int_0^\infty e^{A^T t} Q e^{At}\,dt\; A$$

$$= \int_0^\infty \left(\frac{d}{dt}e^{At}\right)^{\!T} Q\, e^{At}\,dt + \int_0^\infty e^{A^T t}\, Q \left(\frac{d}{dt}e^{At}\right)dt$$

$$= \int_0^\infty \frac{d}{dt}\left(e^{A^T t} Q e^{At}\right)dt = \left. e^{A^T t} Q e^{At}\,\right|_{t=0}^{\infty} = -Q$$

And P is clearly positive definite because e^{At} is invertible and Q is positive definite.
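The integral identity is easy to confirm numerically (a sketch: a hypothetical stable A and Q = I, with the improper integral truncated and approximated by a Riemann sum):

```python
import numpy as np

# Hypothetical stable symmetric A (eigenvalues -1 and -3) and Q = I.
A = np.array([[-2.0, 1.0], [1.0, -2.0]])
Q = np.eye(2)

# For symmetric A, e^{At} = V diag(e^{wt}) V^T where A = V diag(w) V^T.
w, V = np.linalg.eigh(A)
expm = lambda t: V @ np.diag(np.exp(w * t)) @ V.T

# Riemann-sum approximation of P = int_0^inf e^{A^T t} Q e^{At} dt.
dt = 1e-3
P = sum(expm(t).T @ Q @ expm(t) * dt for t in np.arange(0.0, 20.0, dt))

print(A.T @ P + P @ A)  # ≈ -Q
```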

Problem 2. Asymptotic and exponential stability.

False. Counterexample: Consider the system ẋ = −x/(1+t). This has solution x(t) = ((1+t0)/(1+t)) x0, i.e. Φ(t, t0) = (1+t0)/(1+t). So Φ(t, 0) → 0 as t → ∞; therefore xₑ = 0 is asymptotically stable. But note that for any α > 0, |x(t)| exp[α(t − t0)] → ∞ as t → ∞, so we can never satisfy the requirements of exponential stability.
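A tiny numeric illustration of the gap (a sketch; the decay rate α = 0.01 is an arbitrary choice):

```python
import numpy as np

# Transition function of the scalar system xdot = -x/(1+t).
phi = lambda t, t0=0.0: (1.0 + t0) / (1.0 + t)

print(phi(1e6))  # -> 0: the origin is asymptotically stable
# But phi decays only like 1/t, so exp(alpha*t) * phi(t) eventually blows up
# for ANY alpha > 0 (alpha = 0.01 here):
for t in [1e2, 1e3, 1e4]:
    print(phi(t) * np.exp(0.01 * t))
```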


Problem 3. State observation problem.

(a) We have L₀x0 = y(t) = C(t)Φ(t, 0)x0. So of course it is necessary that y ∈ R(L₀); however, this should be guaranteed if y is the output of our system and there are no unmodeled dynamics or noise. Since Φ(t, 0) is invertible for all t, a sufficient condition would be the existence of some t ∈ [0, T] for which C⁻¹(t) exists. Generally, however, we don't have such a simple case. But we know that if the observability Gramian

$$W_o[0, T] = \int_0^T \Phi^*(\tau, 0)\, C^*(\tau)\, C(\tau)\, \Phi(\tau, 0)\, d\tau$$

is full rank, then the system is completely observable on [0, T]; in other words, we can determine x0 exactly from the output. We could determine it as in the derivation of the continuous time Kalman filter from lecture notes 18:

$$x_0 = (L_o^* L_o)^{-1} L_o^* y = W_o^{-1}[0, T] \int_0^T \Phi^*(\tau, 0)\, C^*(\tau)\, y(\tau)\, d\tau.$$
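This recovery formula can be illustrated on a small example (a sketch with a hypothetical observable LTI system and a noise-free sampled output; the integrals are approximated by Riemann sums):

```python
import numpy as np

# Hypothetical system xdot = Ax, y = Cx, sampled finely on [0, T].
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2
C = np.array([[1.0, 0.0]])
x0 = np.array([0.7, -1.3])

w, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)
Phi = lambda t: np.real(V @ np.diag(np.exp(w * t)) @ Vinv)  # e^{At}

T, dt = 5.0, 1e-3
Wo = np.zeros((2, 2))      # discretized observability Gramian
b = np.zeros(2)            # discretized int Phi* C* y dtau
for t in np.arange(0.0, T, dt):
    Ct = C @ Phi(t)
    y = Ct @ x0            # the "measured" output y(t)
    Wo += Ct.T @ Ct * dt
    b += Ct.T @ y * dt

x0_hat = np.linalg.solve(Wo, b)
print(x0_hat)  # ≈ [0.7, -1.3]
```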

(b) In this case we are not guaranteed that y ∈ R(L₀). But we can look for the least-norm approximate solution to L₀x0 = y. Let y = y_R + y_N, where y_R ∈ R(L₀) and y_N ∈ R(L₀)⊥ = N(L₀*). Note then that y_R is the orthogonal projection of y onto the range of L₀: it is the vector in R(L₀) that is closest to y in the L² norm sense. So we are looking for those x0 such that

$$L_o x_0 = y_R + y_N \implies L_o^* L_o x_0 = L_o^*(y_R + y_N) = L_o^* y_R.$$

Now, as we have seen, if N(L₀) = N(L₀*L₀) = θ, i.e. L₀ is injective, then L₀*L₀ is invertible and we can recover a unique x0: the initial condition that, with no noise, would produce the output closest (in the sense we have described) to the observed output. Now consider the other cases. Note that L₀ cannot be surjective, since it maps to an infinite-dimensional vector space. If L₀ is not injective, then at best we can define a set of possible initial conditions that would all result in the output y_R, X := {x | L₀x = y_R}. The x0 obtained via the Moore-Penrose pseudoinverse,

$$x_0 = V_1 \Sigma_r^{-1} U_1^* L_o^* y = V_1 \Sigma_r^{-1} U_1^* \int_0^T \Phi^*(\tau, 0)\, C^*(\tau)\, y(\tau)\, d\tau,$$

would be the solution of least (L²) norm (here, the SVD is L₀*L₀ =: U₁Σ_rV₁*).

(c) Yes; in the case that L₀ is not injective, N(L₀) is nontrivial and there exist x0 ∈ N(L₀) with unit norm; then for any such x0, ⟨y, y⟩ = ⟨L₀x0, L₀x0⟩ = ⟨θ, θ⟩ = 0.

Problem 4. State vs. Output Feedback.

(a) We have

$$\dot{x} = Ax + B\left(-\begin{bmatrix} f_1 & f_2 \end{bmatrix}x\right) = \left(A - B\begin{bmatrix} f_1 & f_2 \end{bmatrix}\right)x = A_{cl}\, x,$$

where

$$A_{cl} = \begin{bmatrix} 0 & 1 \\ 7 & -4 \end{bmatrix} - \begin{bmatrix} 1 \\ 2 \end{bmatrix}\begin{bmatrix} f_1 & f_2 \end{bmatrix} = \begin{bmatrix} -f_1 & 1 - f_2 \\ 7 - 2f_1 & -4 - 2f_2 \end{bmatrix},$$

with characteristic equation

$$\chi_{A_{cl}}(s) = (s + f_1)(s + 4 + 2f_2) - (1 - f_2)(7 - 2f_1)$$
$$= s^2 + 4s + 2f_2 s + f_1 s + 4f_1 + 2f_1 f_2 - 7 + 2f_1 + 7f_2 - 2f_1 f_2$$
$$= s^2 + (4 + 2f_2 + f_1)s + 6f_1 + 7f_2 - 7$$
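The closed-form coefficients can be checked against a numerical characteristic polynomial (a sketch; the gain values are arbitrary):

```python
import numpy as np

# Arbitrary gains; compare numpy's characteristic polynomial with the
# closed-form coefficients derived above.
f1, f2 = 0.8, 2.5
A = np.array([[0.0, 1.0], [7.0, -4.0]])
B = np.array([[1.0], [2.0]])
Acl = A - B @ np.array([[f1, f2]])

print(np.poly(Acl))                                 # numeric
print([1.0, 4 + 2 * f2 + f1, 6 * f1 + 7 * f2 - 7])  # formula
```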

(b) We have

$$\dot{x} = Ax + B(-ky) = Ax - kBCx = (A - kBC)x = A_{cl}\, x,$$


where

$$A_{cl} = \begin{bmatrix} 0 & 1 \\ 7 & -4 \end{bmatrix} - k\begin{bmatrix} 1 \\ 2 \end{bmatrix}\begin{bmatrix} 1 & 3 \end{bmatrix} = \begin{bmatrix} -k & 1 - 3k \\ 7 - 2k & -4 - 6k \end{bmatrix},$$

with characteristic equation

$$\chi_{A_{cl}}(s) = s^2 + (7k + 4)s + 27k - 7$$
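The same kind of numerical check works here (a sketch; the gain k is arbitrary):

```python
import numpy as np

# Output feedback u = -ky with an arbitrary gain k.
k = 1.2
A = np.array([[0.0, 1.0], [7.0, -4.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[1.0, 3.0]])
Acl = A - k * B @ C

print(np.poly(Acl))                  # numeric
print([1.0, 7 * k + 4, 27 * k - 7])  # formula
```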

Problem 5. Controllable canonical form.

The closed loop system is

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -\alpha_3 & -\alpha_2 & -\alpha_1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\left(v - \begin{bmatrix} k_1 & k_2 & k_3 \end{bmatrix}x\right) = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -\alpha_3 - k_1 & -\alpha_2 - k_2 & -\alpha_1 - k_3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} v$$

So

$$\chi_{A_{CL}}(s) = s^3 + (\alpha_1 + k_3)\, s^2 + (\alpha_2 + k_2)\, s + (\alpha_3 + k_1)$$

The desired characteristic polynomial is

$$p(s) = s^3 + a_1 s^2 + a_2 s + a_3,$$

so setting k such that

$$k_1 = a_3 - \alpha_3, \quad k_2 = a_2 - \alpha_2, \quad k_3 = a_1 - \alpha_1$$

gives the desired characteristic polynomial.
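A final numerical sanity check (a sketch; both the open-loop αᵢ and the desired polynomial, (s+1)(s+2)(s+3) = s³ + 6s² + 11s + 6, are arbitrary choices for illustration):

```python
import numpy as np

# Hypothetical open-loop alphas and desired coefficients a_i.
al1, al2, al3 = 0.5, -1.0, 2.0
a1, a2, a3 = 6.0, 11.0, 6.0

# Gains from the matching above.
k1, k2, k3 = a3 - al3, a2 - al2, a1 - al1

Acl = np.array([[0, 1, 0],
                [0, 0, 1],
                [-al3 - k1, -al2 - k2, -al1 - k3]])
print(np.poly(Acl))  # ≈ [1, 6, 11, 6]
```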