MIT System Theory Solutions


MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Department of Electrical Engineering and Computer Science

6.241: Dynamic Systems, Spring 2011
Homework 1 Solutions

Exercise 1.1 a) Given square matrices A1 and A4, we know that A is square as well:

A = [A1, A2; 0, A4] = [A1, A2; 0, I] [I, 0; 0, A4].

Note that

det([I, 0; 0, A4]) = det(I) det(A4) = det(A4),

which can be verified by recursively computing the principal minors. Also, by elementary row operations, we have

det([A1, A2; 0, I]) = det([A1, 0; 0, I]) = det(A1).

Finally, note that when A and B are square, we have that det(AB) = det(A) det(B). Thus we have

det(A) = det(A1) det(A4).

b) Assume A1^-1 and A4^-1 exist. Then

A A^-1 = [A1, A2; 0, A4] [B1, B2; B3, B4] = [I, 0; 0, I],

which yields four matrix equations:

1. A1 B1 + A2 B3 = I,
2. A1 B2 + A2 B4 = 0,
3. A4 B3 = 0,
4. A4 B4 = I.

From Eqn (4), B4 = A4^-1, with which Eqn (2) yields B2 = -A1^-1 A2 A4^-1. Also, from Eqn (3), B3 = 0, with which Eqn (1) gives B1 = A1^-1. Therefore,

A^-1 = [A1^-1, -A1^-1 A2 A4^-1; 0, A4^-1].
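The block inverse and determinant formulas above can be sanity-checked numerically. The following sketch (not part of the original solutions; the block sizes and random entries are arbitrary choices) uses numpy:

```python
import numpy as np

# Check Exercise 1.1: for A = [[A1, A2], [0, A4]] with invertible A1, A4,
# A^-1 = [[A1^-1, -A1^-1 A2 A4^-1], [0, A4^-1]] and det(A) = det(A1) det(A4).
rng = np.random.default_rng(0)
A1 = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # shifted to be well conditioned
A4 = rng.standard_normal((2, 2)) + 3 * np.eye(2)
A2 = rng.standard_normal((3, 2))

A = np.block([[A1, A2], [np.zeros((2, 3)), A4]])

A1i = np.linalg.inv(A1)
A4i = np.linalg.inv(A4)
Ainv_formula = np.block([[A1i, -A1i @ A2 @ A4i],
                         [np.zeros((2, 3)), A4i]])

err = np.linalg.norm(Ainv_formula - np.linalg.inv(A))
det_err = abs(np.linalg.det(A) - np.linalg.det(A1) * np.linalg.det(A4))
```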


Exercise 1.2 a)

[0, I; I, 0] [A1, A2; A3, A4] = [A3, A4; A1, A2].

b) Let us find

B = [B1, B2; B3, B4]

such that

B A = [A1, A2; 0, A4 - A3 A1^-1 A2].

The above equation implies four equations for the submatrices:

1. B1 A1 + B2 A3 = A1,
2. B1 A2 + B2 A4 = A2,
3. B3 A1 + B4 A3 = 0,
4. B3 A2 + B4 A4 = A4 - A3 A1^-1 A2.

The first two equations yield B1 = I and B2 = 0. Express B3 from the third equation as B3 = -B4 A3 A1^-1 and plug it into the fourth. After gathering the terms we get B4 A4 - B4 A3 A1^-1 A2 = A4 - A3 A1^-1 A2, which turns into an identity if we set B4 = I. Therefore

B = [I, 0; -A3 A1^-1, I].

c) Using linear operations on rows we see that det(B) = 1. Then,

det(A) = det(B) det(A) = det(BA) = det(A1) det(A4 - A3 A1^-1 A2).

Note that A4 - A3 A1^-1 A2 does not have to be invertible for the proof.
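The Schur-complement determinant identity from part c) is easy to verify numerically on a random instance (an assumed example, not data from the problem):

```python
import numpy as np

# Check Exercise 1.2(c): det(A) = det(A1) * det(A4 - A3 A1^-1 A2)
# for A = [[A1, A2], [A3, A4]] with A1 invertible.
rng = np.random.default_rng(1)
n1, n2 = 3, 2
A1 = rng.standard_normal((n1, n1)) + 3 * np.eye(n1)   # invertible block
A2 = rng.standard_normal((n1, n2))
A3 = rng.standard_normal((n2, n1))
A4 = rng.standard_normal((n2, n2))

A = np.block([[A1, A2], [A3, A4]])
schur = A4 - A3 @ np.linalg.inv(A1) @ A2    # Schur complement of A1 in A
gap = abs(np.linalg.det(A) - np.linalg.det(A1) * np.linalg.det(schur))
```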

Exercise 1.3 We have to prove that det(I - AB) = det(I - BA).

Proof: Since I and I - BA are square,

det(I - BA) = det([I, 0; B, I - BA]) = det([I, A; B, I] [I, -A; 0, I]) = det([I, A; B, I]) det([I, -A; 0, I]),

yet, from Exercise 1.1, we have

det([I, -A; 0, I]) = det(I) det(I) = 1.

Thus,

det(I - BA) = det([I, A; B, I]).

Now,

det([I, A; B, I]) = det([I - AB, 0; B, I]) = det(I - AB).

Therefore,

det(I - BA) = det(I - AB).

Note that (I - BA) is a q x q matrix while (I - AB) is a p x p matrix. Thus, when one wants to compute the determinant of (I - AB) or (I - BA), s/he can compare p and q to pick the product (AB or BA) with the smaller size.

b) We have to show that (I - AB)^-1 A = A (I - BA)^-1.

Proof: Assume that (I - BA)^-1 and (I - AB)^-1 exist. Then,

A = A I = A (I - BA)(I - BA)^-1
  = (A - ABA)(I - BA)^-1
  = (I - AB) A (I - BA)^-1,

so (I - AB)^-1 A = A (I - BA)^-1. This completes the proof.

Exercise 1.6 a) The safest way to find the (element-wise) derivative is by its definition in terms of limits, i.e.,

d/dt (A(t)B(t)) = lim_{Δt→0} [A(t + Δt)B(t + Δt) - A(t)B(t)] / Δt.

We substitute first order Taylor series expansions

A(t + Δt) = A(t) + Δt dA(t)/dt + o(Δt)
B(t + Δt) = B(t) + Δt dB(t)/dt + o(Δt)

to obtain

d/dt (A(t)B(t)) = lim_{Δt→0} (1/Δt) [A(t)B(t) + Δt (dA(t)/dt) B(t) + Δt A(t) (dB(t)/dt) + h.o.t. - A(t)B(t)].

Here h.o.t. stands for the terms

h.o.t. = [A(t) + Δt dA(t)/dt] o(Δt) + o(Δt) [B(t) + Δt dB(t)/dt] + Δt^2 (dA(t)/dt)(dB(t)/dt),

a matrix quantity, where lim_{Δt→0} h.o.t./Δt = 0 (verify). Reducing the expression and taking the limit, we obtain

d/dt [A(t)B(t)] = (dA(t)/dt) B(t) + A(t) (dB(t)/dt).

b) For this part we write the identity A^-1(t) A(t) = I. Taking the derivative on both sides, we have

d/dt [A^-1(t) A(t)] = (d A^-1(t)/dt) A(t) + A^-1(t) (dA(t)/dt) = 0.

Rearranging and multiplying on the right by A^-1(t), we obtain

d A^-1(t)/dt = -A^-1(t) (dA(t)/dt) A^-1(t).
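The determinant identity of Exercise 1.3 is worth a quick numerical check, since the two determinants are taken over identity matrices of different sizes (random rectangular factors are an assumed example):

```python
import numpy as np

# Check Exercise 1.3: det(I_p - AB) = det(I_q - BA) for A (p x q), B (q x p).
rng = np.random.default_rng(2)
p, q = 5, 2
A = rng.standard_normal((p, q))
B = rng.standard_normal((q, p))

d_big = np.linalg.det(np.eye(p) - A @ B)     # p x p determinant
d_small = np.linalg.det(np.eye(q) - B @ A)   # q x q determinant (cheaper when q < p)
gap = abs(d_big - d_small)
```

This is exactly the size comparison the text mentions: when p and q differ a lot, evaluating the smaller of the two products is much cheaper.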

Exercise 1.8 Let X = {g(x) = α0 + α1 x + α2 x^2 + ... + αM x^M | αi ∈ C}.

a) We have to show that the set B = {1, x, ..., x^M} is a basis for X.

Proof:

1. First, let's show that the elements of B are linearly independent. It is clear that no element of B can be written as a linear combination of the others. More formally,

c1 (1) + c2 (x) + ... + c_{M+1} (x^M) = 0 implies ci = 0 for all i.

Thus, the elements of B are linearly independent.

2. Then, let's show that the elements of B span the space X. Every polynomial of order less than or equal to M looks like

p(x) = sum_{i=0}^{M} αi x^i

for some set of αi's. Therefore, {1, x, ..., x^M} span X.

b) T : X → X and T(g(x)) = d/dx g(x).

1. Show that T is linear.

Proof:

T(a g1(x) + b g2(x)) = d/dx (a g1(x) + b g2(x)) = a d/dx g1 + b d/dx g2 = a T(g1) + b T(g2).

Thus, T is linear.

2. g(x) = α0 + α1 x + α2 x^2 + ... + αM x^M, so T(g(x)) = α1 + 2 α2 x + ... + M αM x^{M-1}. Thus it can be written as follows:

[0, 1, 0, ..., 0; 0, 0, 2, ..., 0; ...; 0, 0, 0, ..., M; 0, 0, 0, ..., 0] [α0; α1; α2; ...; αM] = [α1; 2 α2; 3 α3; ...; M αM; 0].

The big matrix, M, is a matrix representation of T with respect to the basis B. The column vector on the left is a representation of g(x) with respect to B. The column vector on the right is T(g) with respect to the basis B.

3. Since the matrix M is upper triangular with zeros along the diagonal (in fact M is Hessenberg), the eigenvalues are all 0: λi = 0 for i = 1, ..., M+1.

4. One eigenvector of M for λ1 = 0 must satisfy M v1 = λ1 v1 = 0, so

v1 = [1; 0; ...; 0]

is one eigenvector. Since the λi's are not distinct, the eigenvectors are not necessarily independent. Thus, in order to compute the M others, one uses the generalized eigenvector formula.
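The differentiation matrix of part b) can be built and inspected directly; the sketch below (an illustration, with M = 4 chosen arbitrarily) confirms that it acts as d/dx on coefficient vectors, that all its eigenvalues are zero, and that it is nilpotent:

```python
import numpy as np

# Matrix of T = d/dx in the basis {1, x, ..., x^M}: superdiagonal 1, 2, ..., M.
M = 4
T = np.diag(np.arange(1, M + 1), k=1).astype(float)   # (M+1) x (M+1)

# Apply T to the coefficients of g(x) = 1 + x + x^2 + x^3 + x^4:
g = np.ones(M + 1)
dg = T @ g          # coefficients of g'(x) = 1 + 2x + 3x^2 + 4x^3

eigs = np.linalg.eigvals(T)                            # all zero
nilpotent = np.allclose(np.linalg.matrix_power(T, M + 1), 0)
```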


MIT OpenCourseWare
http://ocw.mit.edu

6.241J / 16.338J Dynamic Systems and Control
Spring 2011

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Department of Electrical Engineering and Computer Science

6.241: Dynamic Systems, Fall 2007
Homework 2 Solutions

Exercise 1.4 a) First define all the spaces:

R(A) = {y ∈ C^m | there exists x ∈ C^n such that y = Ax}
R(A)^⊥ = {z ∈ C^m | y'z = z'y = 0 for all y ∈ R(A)}
R(A') = {p ∈ C^n | there exists v ∈ C^m such that p = A'v}
N(A) = {x ∈ C^n | Ax = 0}
N(A') = {q ∈ C^m | A'q = 0}

i) Prove that R(A)^⊥ = N(A').

Proof: Let z ∈ R(A)^⊥. Then y'z = 0 for all y ∈ R(A), i.e., x'A'z = 0 for all x ∈ C^n, so A'z = 0 and z ∈ N(A'). Hence R(A)^⊥ ⊆ N(A').

Now let q ∈ N(A'). Then A'q = 0, so x'A'q = 0 for all x ∈ C^n, i.e., y'q = 0 for all y ∈ R(A), so q ∈ R(A)^⊥. Hence N(A') ⊆ R(A)^⊥.

Therefore R(A)^⊥ = N(A').

ii) Prove that N(A)^⊥ = R(A').

Proof: From i) we know that N(A) = R(A')^⊥ by switching A with A'. That implies that

N(A)^⊥ = {R(A')^⊥}^⊥ = R(A').

b) Show that rank(A) + rank(B) - n ≤ rank(AB) ≤ min{rank(A), rank(B)}.

Proof: i) Show that rank(AB) ≤ min{rank(A), rank(B)}. It can be proved as follows. Each column of AB is a combination of the columns of A, which implies that R(AB) ⊆ R(A). Hence dim(R(AB)) ≤ dim(R(A)), or equivalently, rank(AB) ≤ rank(A). Each row of AB is a combination of the rows of B, so rowspace(AB) ⊆ rowspace(B); but the dimension of the rowspace equals the dimension of the column space, which equals the rank, so that rank(AB) ≤ rank(B). Therefore,


rank(AB) ≤ min{rank(A), rank(B)}.

ii) Show that rank(A) + rank(B) - n ≤ rank(AB). Let

rB = rank(B)
rA = rank(A)

where A ∈ C^{m x n}, B ∈ C^{n x p}. Now, let {v1, ..., v_rB} be a basis set of R(B), and add n - rB linearly independent vectors {w1, ..., w_{n-rB}} to this basis to span all of C^n: {v1, v2, ..., v_rB, w1, ..., w_{n-rB}}. Let

M = [v1 | v2 | ... | v_rB | w1 | ... | w_{n-rB}] = [V W].

Suppose x ∈ C^n; then x = M α for some α ∈ C^n.

1. R(A) = R(AM) = R([AV | AW]).

Proof: i) Let x ∈ R(A). Then Ay = x for some y ∈ C^n. But y can be written as a linear combination of the basis vectors of C^n, so y = M α for some α ∈ C^n. Then Ay = AM α = x, so x ∈ R(AM); hence R(A) ⊆ R(AM).

ii) Let x ∈ R(AM). Then AM y = x for some y ∈ C^n. But M y = z ∈ C^n, so Az = x and x ∈ R(A); hence R(AM) ⊆ R(A).

Therefore, R(A) = R(AM) = R([AV | AW]).

2. R(AB) = R(AV).

Proof: i) Let x ∈ R(AV). Then AV y = x for some y ∈ C^rB. Yet V y = B γ for some γ ∈ C^p, since the columns of V and B span the same space. That implies that AV y = AB γ = x, so x ∈ R(AB); hence R(AV) ⊆ R(AB).

ii) Let x ∈ R(AB). Then (AB) y = x for some y ∈ C^p. Yet, again, By = V γ for some γ ∈ C^rB, so AB y = AV γ = x and x ∈ R(AV); hence R(AB) ⊆ R(AV).

Therefore, R(AV) = R(AB).

Using fact 1, we see that the number of linearly independent columns of A is less than or equal to the number of linearly independent columns of AV plus the number of linearly independent columns of AW, which means that

rank(A) ≤ rank(AV) + rank(AW).

Using fact 2, we see that

rank(AV) = rank(AB), so rank(A) ≤ rank(AB) + rank(AW),

yet there are only n - rB columns in AW. Thus,

rank(AW) ≤ n - rB
rank(A) - rank(AB) ≤ rank(AW) ≤ n - rB
rA - (n - rB) ≤ rAB.

This completes the proof.
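Both rank inequalities can be illustrated on random low-rank factors (an assumed example; the ranks 3 and 4 are arbitrary):

```python
import numpy as np

# Exercise 1.4(b): rank(A) + rank(B) - n <= rank(AB) <= min(rank(A), rank(B)).
rng = np.random.default_rng(3)
m, n, p = 6, 5, 7
A = rng.standard_normal((m, 3)) @ rng.standard_normal((3, n))   # rank 3
B = rng.standard_normal((n, 4)) @ rng.standard_normal((4, p))   # rank 4

rA = np.linalg.matrix_rank(A)
rB = np.linalg.matrix_rank(B)
rAB = np.linalg.matrix_rank(A @ B)

lower = rA + rB - n          # Sylvester's lower bound
upper = min(rA, rB)
```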

Exercise 2.2 a) For the 2nd order polynomial p2(t) = a0 + a1 t + a2 t^2, we have f(ti) = p2(ti) + ei, i = 1, ..., 16, and ti ∈ T. We can express the relationship between the yi and the polynomial as follows:

[y1; ...; y16] = [1, t1, t1^2; ...; 1, t16, t16^2] [a0; a1; a2] + [e1; ...; e16].

The coefficients a0, a1, and a2 are determined by the least squares solution to this (overconstrained) problem, aLS = (A'A)^-1 A'y, where aLS = [a0 a1 a2]'. Numerically, the values of the coefficients are:

aLS = [0.5296; 0.2061; 0.375].

For the 15th order polynomial, by similar reasoning we can express the relation between the data points yi and the polynomial as follows:

[y1; ...; y16] = [1, t1, t1^2, ..., t1^15; ...; 1, t16, t16^2, ..., t16^15] [a0; ...; a15] + [e1; ...; e16].

This can be rewritten as y = A a + e. Observe that the matrix A is invertible for distinct ti's. So the coefficients ai of the polynomial are aexact = A^-1 y, where aexact = [a0 a1 ... a15]'. The resulting error in fitting the data is e = 0; thus we have a perfect fit at these particular time instants. Numerically, the values of the coefficients are:

aexact = [0.49999998876521; 0.39999826604650; 0.16013119161635; 0.04457531385982; 0.00699544100513; 0.00976690595462; 0.02110628552919; 0.02986537283027; 0.03799813521505; 0.00337725219202; 0.00252507772183; 0.00072658523695; 0.00021752221402; 0.00009045014791; 0.00015170733465; 0.00001343734075].
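The two fits can be reproduced approximately with a short numpy sketch. This assumes f(t) = e^{0.8t}/2 and 16 equispaced sample times on [0, 2]; the exercise's exact sample set T is not restated here, so the fitted numbers are only indicative:

```python
import numpy as np

# Exercise 2.2(a) sketch: quadratic LS fit vs. exact 15th-order interpolation.
t = np.linspace(0.0, 2.0, 16)
y = 0.5 * np.exp(0.8 * t)

V2 = np.vander(t, 3, increasing=True)             # columns 1, t, t^2
a_ls, *_ = np.linalg.lstsq(V2, y, rcond=None)     # (A'A)^-1 A'y

V16 = np.vander(t, 16, increasing=True)           # square, invertible for distinct t_i
a_exact = np.linalg.solve(V16, y)                 # interpolates all 16 samples

r2 = np.linalg.norm(V2 @ a_ls - y)                # quadratic fit residual
r15 = np.linalg.norm(V16 @ a_exact - y)           # essentially zero (perfect fit)
```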


The function f(t) as well as the approximating polynomials p15(t) and p2(t) are plotted in Figure 2.2a. Note that while both polynomials are a good fit, the fifteenth order polynomial is a better approximation, as expected.

b) Now we have measurements affected by some noise. The corrupted data is

yi = f(ti) + e(ti), i = 1, ..., 16, ti ∈ T,

where the noise e(ti) is generated by the command randn in Matlab. Following the reasoning in part (a), we can express the relation between the noisy data points yi and the polynomial as follows:

y = A a + e.

The solution procedure is the same as in part (a), with y replaced by the noisy measurements. Numerically, the values of the coefficients are:

aexact = [0.00001497214861; 0.00089442543781; 0.01844588716755; 0.14764397515270; 0.63231582484352; 1.62190727992829; 2.61484909708492; 2.67459894145774; 1.67594757924772; 0.56666848864500; 0.06211921500456; 0.00219622725954; 0.01911248745682; 0.01085690854235; 0.00207893294346; 0.00010788458590]

[Figure 2.2a: f(t) with the 2nd-degree (dashed) and 15th-degree (dash-dotted) approximations.]


[Figure 2.2b: the noisy data with the 2nd-degree (dashed) and 15th-degree (dash-dotted) approximations.]

and

aLS = [1.2239; 0.1089; 0.3219].

The function f(t) as well as the approximating polynomials p15(t) and p2(t) are plotted in Figure 2.2b. The second order polynomial does much better in this case, as the fifteenth order polynomial ends up fitting the noise. Overfitting is a common problem encountered when trying to fit a finite data set corrupted by noise using a class of models that is too rich.

Additional Comments: A stochastic derivation shows that the minimum variance unbiased estimator for a is a = argmin ||y - Aa||_W^2, where W = Rn^-1 and Rn is the covariance matrix of the random variable e. So,

a = (A'WA)^-1 A'W y.

Roughly speaking, this is saying that measurements with more noise are given less weight in the estimate of a. In our problem, Rn = I because the ei's are independent, zero mean, and have unit variance. That is, each of the measurements is equally noisy, or treated as equally reliable.

c) p2(t) can be written as p2(t) = a0 + a1 t + a2 t^2. In order to minimize the approximation error in the least squares sense, the optimal p2(t) must be such that the error, f - p2, is orthogonal to the span of {1, t, t^2}:

< f - p2, 1 > = 0, i.e., < f, 1 > = < p2, 1 >
< f - p2, t > = 0, i.e., < f, t > = < p2, t >
< f - p2, t^2 > = 0, i.e., < f, t^2 > = < p2, t^2 >.

[Figure 2.2c: f(t) with the optimal 2nd-degree approximation (dashed).]

We have that f(t) = e^{0.8t}/2 for t ∈ [0, 2]. So,

< f, 1 > = ∫_0^2 (1/2) e^{0.8t} dt = (5/8) e^{8/5} - 5/8
< f, t > = ∫_0^2 (t/2) e^{0.8t} dt = (15/32) e^{8/5} + 25/32
< f, t^2 > = ∫_0^2 (t^2/2) e^{0.8t} dt = (85/64) e^{8/5} - 125/64.

And,

< p2, 1 > = 2 a0 + 2 a1 + (8/3) a2
< p2, t > = 2 a0 + (8/3) a1 + 4 a2
< p2, t^2 > = (8/3) a0 + 4 a1 + (32/5) a2.

Therefore the problem reduces to solving another set of linear equations:

[2, 2, 8/3; 2, 8/3, 4; 8/3, 4, 32/5] [a0; a1; a2] = [< f, 1 >; < f, t >; < f, t^2 >].

Numerically, the values of the coefficients are:

a = [0.5353; 0.2032; 0.3727].
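The 3x3 system can be solved directly; the sketch below takes f(t) = e^{0.8t}/2 on [0, 2], with the Gram matrix of {1, t, t^2} and the inner products < f, t^k > entered in closed form:

```python
import numpy as np

# Exercise 2.2(c): normal equations for the continuous LS fit of f by p2.
G = np.array([[2.0, 2.0, 8 / 3],
              [2.0, 8 / 3, 4.0],
              [8 / 3, 4.0, 32 / 5]])    # <t^i, t^j> over [0, 2]
e85 = np.exp(8 / 5)
b = np.array([(5 / 8) * e85 - 5 / 8,        # <f, 1>
              (15 / 32) * e85 + 25 / 32,    # <f, t>
              (85 / 64) * e85 - 125 / 64])  # <f, t^2>
a = np.linalg.solve(G, b)    # optimal coefficients (a0, a1, a2)
```

Solving reproduces the coefficients quoted in the text to four decimals.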

The function f(t) and the approximating polynomial p2(t) are plotted in Figure 2.2c. Here we use a different notion for the closeness of the approximating polynomial, p2(t), to the original function, f. Roughly speaking, in parts (a) and (b), the optimal polynomial will be the one for which there is the smallest discrepancy between f(ti) and p2(ti) for all ti, i.e., the polynomial that will come closest to passing through all the sample points, f(ti). All that matters is the 16 sample points, f(ti). In this part, however, all the points of f matter.

Exercise 2.3 Let

y = [y1; y2], A = [C1; C2], e = [e1; e2], and S = [S1, 0; 0, S2].

Note that A has full column rank because C1 has full column rank. Also note that S is symmetric positive definite since both S1 and S2 are symmetric positive definite. Therefore, we know that x = argmin e'Se exists, is unique, and is given by x = (A'SA)^-1 A'Sy. Thus by direct substitution of terms, we have:

x = (C1'S1C1 + C2'S2C2)^-1 (C1'S1y1 + C2'S2y2).

Recall that x1 = (C1'S1C1)^-1 C1'S1y1 and that x2 = (C2'S2C2)^-1 C2'S2y2. Hence, with Q1 = C1'S1C1 and Q2 = C2'S2C2, x can be rewritten as:

x = (Q1 + Q2)^-1 (Q1 x1 + Q2 x2).
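The combination formula can be checked on a random instance (the sizes and weights below are assumptions for illustration): the estimate from the stacked weighted problem coincides with the Q-weighted average of the two individual estimates.

```python
import numpy as np

# Exercise 2.3: combined WLS estimate equals (Q1 + Q2)^-1 (Q1 x1 + Q2 x2).
rng = np.random.default_rng(4)
C1 = rng.standard_normal((5, 3))
C2 = rng.standard_normal((4, 3))
y1 = rng.standard_normal(5)
y2 = rng.standard_normal(4)
S1 = 2.0 * np.eye(5)     # any symmetric positive definite weights
S2 = 0.5 * np.eye(4)

Q1 = C1.T @ S1 @ C1
Q2 = C2.T @ S2 @ C2
x1 = np.linalg.solve(Q1, C1.T @ S1 @ y1)   # estimate from data set 1 alone
x2 = np.linalg.solve(Q2, C2.T @ S2 @ y2)   # estimate from data set 2 alone

A = np.vstack([C1, C2])                    # stacked problem, S = blkdiag(S1, S2)
x_joint = np.linalg.solve(Q1 + Q2, A.T @ np.concatenate([S1 @ y1, S2 @ y2]))
x_comb = np.linalg.solve(Q1 + Q2, Q1 @ x1 + Q2 @ x2)
gap = np.linalg.norm(x_joint - x_comb)
```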

Exercise 2.8 We can think of the two data sets as sequentially available data sets. x is the least squares solution to y ≈ Ax, corresponding to minimizing the euclidean norm of e1 = y - Ax. The updated estimate x* is the least squares solution to

[y; z] ≈ [A; D] x,

corresponding to minimizing e1'e1 + e2'Se2, where e2 = z - Dx and S is a symmetric (hermitian) positive definite matrix of weights. By the recursion formula, we have:

x* = x + (A'A + D'SD)^-1 D'S (z - Dx).

This can be rewritten as:

x* = x + (I + (A'A)^-1 D'SD)^-1 (A'A)^-1 D'S (z - Dx)
   = x + (A'A)^-1 D' (I + SD(A'A)^-1 D')^-1 S (z - Dx).

This step follows from the result in Problem 1.3(b). Hence

x* = x + (A'A)^-1 D' (SS^-1 + SD(A'A)^-1 D')^-1 S (z - Dx)
   = x + (A'A)^-1 D' (S^-1 + D(A'A)^-1 D')^-1 S^-1 S (z - Dx)
   = x + (A'A)^-1 D' (S^-1 + D(A'A)^-1 D')^-1 (z - Dx).

In order to ensure that the constraint z = Dx is satisfied exactly, we need to penalize the corresponding error term heavily (S → ∞). Since D has full row rank, we know there exists at least one value of x that satisfies the equation z = Dx exactly, hence the optimization problem we are setting up does indeed have a solution. Taking the limiting case as S → ∞, hence as S^-1 → 0, we get the desired expression:

x* = x + (A'A)^-1 D' (D(A'A)^-1 D')^-1 (z - Dx).


In the trivial case where D is a square (hence non-singular) matrix, the set of values of x over which we seek to minimize the cost function consists of a single element, D^-1 z. Thus, the constrained estimate in this case is simply D^-1 z. It is easy to verify that the expression we obtained does in fact reduce to this when D is invertible.
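The limiting formula can be exercised numerically (random A, D, y, z are assumptions for illustration); the corrected estimate satisfies the constraint to machine precision:

```python
import numpy as np

# Exercise 2.8: x* = x + (A'A)^-1 D' (D (A'A)^-1 D')^-1 (z - D x)
# should satisfy D x* = z exactly when D has full row rank.
rng = np.random.default_rng(5)
A = rng.standard_normal((8, 4))
y = rng.standard_normal(8)
D = rng.standard_normal((2, 4))     # full row rank (generically)
z = rng.standard_normal(2)

AtA = A.T @ A
x_hat = np.linalg.solve(AtA, A.T @ y)            # unconstrained LS estimate
P = np.linalg.inv(AtA)
gain = P @ D.T @ np.linalg.inv(D @ P @ D.T)
x_con = x_hat + gain @ (z - D @ x_hat)           # constrained update

constraint_err = np.linalg.norm(D @ x_con - z)
```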

Exercise 3.1 The first and the third facts given in the problem are the keys to solving this problem, in addition to the fact that:

U'A = [R; 0].

Here note that R is a nonsingular, upper-triangular matrix, so it can be inverted. Now the problem reduces to showing that

x = argmin_x ||y - Ax||_2^2 = argmin_x (y - Ax)'(y - Ax)

is indeed equal to x = R^-1 y1, where [y1; y2] = U'y.

Let's transform the problem into the familiar form. We introduce an error e such that

y = Ax + e,

and we would like to minimize ||e||_2, which is equivalent to minimizing ||y - Ax||_2. Using the property of an orthogonal matrix, we have that ||e||_2 = ||U'e||_2. Thus, with e = y - Ax,

||U'e||_2^2 = e'UU'e = (U'(y - Ax))'(U'(y - Ax)) = ||U'y - U'Ax||_2^2
            = || [y1; y2] - [R; 0] x ||_2^2 = (y1 - Rx)'(y1 - Rx) + y2'y2.

Since ||y2||_2^2 = y2'y2 is just a constant, it does not play any role in this minimization. Thus we would like to have y1 - Rx = 0, and because R is an invertible matrix, x = R^-1 y1.
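The QR route can be compared against a generic least squares solver (a random instance is assumed); numpy's reduced QR gives Q1 and R directly, with Q1'y playing the role of y1:

```python
import numpy as np

# Exercise 3.1: LS solution via QR, x = R^-1 (Q1' y), matches lstsq.
rng = np.random.default_rng(6)
A = rng.standard_normal((7, 3))
y = rng.standard_normal(7)

Q1, R = np.linalg.qr(A)              # reduced QR: Q1 is 7x3, R is 3x3
x_qr = np.linalg.solve(R, Q1.T @ y)  # R^-1 y1

x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
gap = np.linalg.norm(x_qr - x_ls)
```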


MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Department of Electrical Engineering and Computer Science

6.241: Dynamic Systems, Fall 2007
Homework 3 Solutions

Exercise 3.2 i) We would like to minimize the 2-norm of u, i.e., ||u||_2^2. Since yn is given as

yn = sum_{i=1}^{n} hi u_{n-i},

we can rewrite this equality as

yn = [h1, h2, ..., hn] [u_{n-1}; u_{n-2}; ...; u_0].

We want to find the u with the smallest 2-norm such that

yn = A u,

where we assume that A has full rank (i.e., hi ≠ 0 for some i, 1 ≤ i ≤ n). Then the solution reduces to the familiar form:

u = A'(AA')^-1 yn.

By noting that AA' = sum_{i=1}^{n} hi^2, we can obtain u_j as follows:

u_j = h_{n-j} yn / ( sum_{i=1}^{n} hi^2 ), for j = 0, 1, ..., n-1.

ii) a) Let's introduce e as an error such that yn = y - e; it can also be written as y - yn = e. Then the quantity we would like to minimize can be written as

r (y - yn)^2 + u_0^2 + ... + u_{n-1}^2,

where r is a positive weighting parameter. The problem becomes to solve the following minimization problem:

u = argmin_u ( sum_i ui^2 + r e^2 ) = argmin_u ( ||u||_2^2 + r ||e||_2^2 ),

from which we see that r is a weight that characterizes the tradeoff between the size of the final error, y - yn, and the energy of the input signal, u.

In order to reduce the problem into the familiar form, i.e., y ≈ Ax, let's augment sqrt(r) e at the bottom of u so that the new augmented vector, u~, is

u~ = [u; sqrt(r) e].

This choice of u~ follows from the observation that this is the u~ that would have ||u~||_2^2 = ||u||_2^2 + r e^2, the quantity we aim to minimize. Now we can write y as follows:

y = [A, 1/sqrt(r)] u~ = A~ u~ = Au + e = yn + e.

Now, u~ can be obtained using the augmented A, A~, as

u~ = A~'(A~A~')^-1 y = A~' ( AA' + 1/r )^-1 y.

By noting that

A~A~' = sum_{i=1}^{n} hi^2 + 1/r,

we can obtain u_j as follows:

u_j = h_{n-j} y / ( sum_{i=1}^{n} hi^2 + 1/r ), for j = 0, ..., n-1.

ii) b) When r = 0, it can be interpreted that the error can be anything, but we would like to minimize the input energy. Thus we expect that the solution will have all the ui's equal to zero; in fact, the expression obtained in ii) a) will be zero as r → 0. On the other hand, the other situation is an interesting case: we put a weight of ∞ on the final state error, and then the expression from ii) a) gives the same expression as in i) as r → ∞.

Exercise 3.3 This problem is similar to Example 3.4, except now we require that p'(T) = 0. We can derive, from p''(t) = x(t), that p(t) = x(t) * t u(t) = ∫_0^t (t - τ) x(τ) dτ, where * denotes convolution and u(t) is the unit step, defined as 1 when t > 0 and 0 when t ≤ 0. Here < g, f > = ∫_0^T g(t) f(t) dt is an inner product on the space of continuous functions on [0, T], denoted by C[0, T], in which we are searching for x(t). So, we have that y = p(T) = < T - t, x(t) > and 0 = p'(T) = < 1, x(t) >. In matrix form,

[y; 0] = < [T - t; 1], x(t) >,

where < ., . > denotes the Grammian, as defined in chapter 2. Now, in chapter 3, it was shown that the minimum length solution to y = < A, x > is x = A < A, A >^-1 y. So, for our problem,

x = [T - t, 1] < [T - t; 1], [T - t; 1] >^-1 [y; 0],

where, using the definition of the Grammian, we have that:

< [T - t; 1], [T - t; 1] > = [< T - t, T - t >, < T - t, 1 >; < 1, T - t >, < 1, 1 >].

Now, we can use the definition of the inner product to find the individual entries: < T - t, T - t > = ∫_0^T (T - t)^2 dt = T^3/3, < T - t, 1 > = ∫_0^T (T - t) dt = T^2/2, and < 1, 1 > = T. Plugging these in, one can simplify the expression for x and obtain

x(t) = (12 y / T^2) (1/2 - t/T), for t ∈ [0, T].

Alternatively, we have that p''(t) = x(t). Integrating both sides and taking into account that p(0) = 0 and p'(0) = 0, we have

p(t) = ∫_0^t ( ∫_0^{t1} x(τ) dτ ) dt1 = ∫_0^t f(t1) dt1.

Now, we use the integration by parts formula, ∫_0^t u dv = uv|_0^t - ∫_0^t v du, with u = f(t1) = ∫_0^{t1} x(τ) dτ and dv = dt1; hence du = df(t1) = x(t1) dt1 and v = t1. Plugging in and simplifying, we get that

p(t) = ∫_0^t (t - τ) x(τ) dτ.

Thus, y = p(T) = ∫_0^T (T - τ) x(τ) dτ = < T - t, x(t) >. In addition, we have that

0 = p'(T) = ∫_0^T x(τ) dτ = < 1, x(t) >.

That is, we seek to find the minimum length x(t) such that

y = < T - t, x(t) >
0 = < 1, x(t) >.

Recall that the minimum length solution x(t) must be a linear combination of T - t and 1, i.e., x(t) = a1 (T - t) + a2. So,

y = < T - t, a1 (T - t) + a2 > = a1 ∫_0^T (T - t)^2 dt + a2 ∫_0^T (T - t) dt = a1 T^3/3 + a2 T^2/2
0 = < 1, a1 (T - t) + a2 > = ∫_0^T ( a1 (T - t) + a2 ) dt = a1 T^2/2 + a2 T.

This is a system of two equations in two unknowns, which we can rewrite in matrix form:

[y; 0] = [T^3/3, T^2/2; T^2/2, T] [a1; a2],

so

[a1; a2] = [T^3/3, T^2/2; T^2/2, T]^-1 [y; 0].
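The closed-form input can be verified by quadrature (T and y below are arbitrary assumed values): it reaches p(T) = y with p'(T) = 0.

```python
import numpy as np

# Exercise 3.3: x(t) = (12 y / T^2)(1/2 - t/T) should give
# y = int_0^T (T - s) x(s) ds  and  0 = int_0^T x(s) ds.
T, y = 2.0, 1.5
t = np.linspace(0.0, T, 20001)
x = (12 * y / T**2) * (0.5 - t / T)

def trap(f, tt):
    # composite trapezoidal rule for samples f on the grid tt
    return float(np.sum((f[:-1] + f[1:]) * np.diff(tt)) * 0.5)

pT = trap((T - t) * x, t)   # reached position p(T)
vT = trap(x, t)             # reached velocity p'(T)
```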

Exercise 4.1 Note that for any v ∈ C^m (show this!),

||v||_∞ ≤ ||v||_2 ≤ sqrt(m) ||v||_∞.   (1)

Therefore, for A ∈ C^{m x n} with x ∈ C^n,

||Ax||_2 ≤ sqrt(m) ||Ax||_∞, so ||Ax||_2 / ||x||_2 ≤ sqrt(m) ||Ax||_∞ / ||x||_2 for x ≠ 0.

But, from equation (1), we also know that 1/||x||_2 ≤ 1/||x||_∞. Thus,

||Ax||_2 / ||x||_2 ≤ sqrt(m) ||Ax||_∞ / ||x||_2 ≤ sqrt(m) ||Ax||_∞ / ||x||_∞ ≤ sqrt(m) ||A||_∞.   (2)

Equation (2) must hold for all x ≠ 0, therefore

max_{x≠0} ||Ax||_2 / ||x||_2 = ||A||_2 ≤ sqrt(m) ||A||_∞.

To prove the lower bound (1/sqrt(n)) ||A||_∞ ≤ ||A||_2, reconsider equation (1):

||Ax||_∞ ≤ ||Ax||_2, so ||Ax||_∞ / ||x||_∞ ≤ ||Ax||_2 / ||x||_∞ for x ≠ 0.   (3)

But, from equation (1) for x ∈ C^n, 1/||x||_∞ ≤ sqrt(n)/||x||_2. So,

||Ax||_∞ / ||x||_∞ ≤ sqrt(n) ||Ax||_2 / ||x||_2 ≤ sqrt(n) ||A||_2

for all x ≠ 0, including the x that maximizes ||Ax||_∞ / ||x||_∞. So,

max_{x≠0} ||Ax||_∞ / ||x||_∞ = ||A||_∞ ≤ sqrt(n) ||A||_2,

or equivalently, (1/sqrt(n)) ||A||_∞ ≤ ||A||_2.
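The two bounds can be spot-checked numerically (a random A is an assumption for illustration):

```python
import numpy as np

# Exercise 4.1: (1/sqrt(n)) ||A||_inf <= ||A||_2 <= sqrt(m) ||A||_inf,
# where ||A||_inf is the induced infinity norm (max absolute row sum).
rng = np.random.default_rng(8)
m, n = 5, 3
A = rng.standard_normal((m, n))

norm2 = np.linalg.norm(A, 2)          # largest singular value
norminf = np.linalg.norm(A, np.inf)   # max absolute row sum

lower_ok = norminf / np.sqrt(n) <= norm2 + 1e-12
upper_ok = norm2 <= np.sqrt(m) * norminf + 1e-12
```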

Exercise 4.5 Any m x n matrix A can be expressed as

A = U [Σ, 0; 0, 0] V',

where U and V are unitary matrices. The Moore-Penrose inverse, or pseudo-inverse, of A, denoted by A+, is then defined as the n x m matrix

A+ = V [Σ^-1, 0; 0, 0] U'.

a) Now we have to show that A+A and AA+ are symmetric, and that AA+A = A and A+AA+ = A+. Suppose that Σ is a diagonal invertible matrix of dimension r x r. Using the given definitions as well as the fact that for a unitary matrix U, UU' = U'U = I, we have

AA+ = U [Σ, 0; 0, 0] V' V [Σ^-1, 0; 0, 0] U' = U [Ir, 0; 0, 0] U',

which is symmetric. Similarly,

A+A = V [Σ^-1, 0; 0, 0] U' U [Σ, 0; 0, 0] V' = V [Ir, 0; 0, 0] V',

which is again symmetric. The facts derived above can be used to show the other two:

AA+A = (AA+)A = U [Ir, 0; 0, 0] U' U [Σ, 0; 0, 0] V' = U [Σ, 0; 0, 0] V' = A.

Also,

A+AA+ = (A+A)A+ = V [Ir, 0; 0, 0] V' V [Σ^-1, 0; 0, 0] U' = V [Σ^-1, 0; 0, 0] U' = A+.

b) We have to show that when A has full column rank then A+ = (A'A)^-1 A', and that when A has full row rank then A+ = A'(AA')^-1. If A has full column rank, then we know that m ≥ n, rank(A) = n, and

A = U [Σn; 0] V',

with Σn an n x n invertible diagonal matrix. Also, as shown in chapter 2, when A has full column rank, (A'A)^-1 exists. Hence

(A'A)^-1 A' = ( V [Σn, 0] U' U [Σn; 0] V' )^-1 V [Σn, 0] U'
            = ( V Σn^2 V' )^-1 V [Σn, 0] U'
            = V Σn^-2 V' V [Σn, 0] U'
            = V Σn^-2 [Σn, 0] U'
            = V [Σn^-1, 0] U'
            = A+.

Similarly, if A has full row rank, then n ≥ m, rank(A) = m, and

A = U [Σm, 0] V'.

It can be proved that when A has full row rank, (AA')^-1 exists. Hence,

A'(AA')^-1 = V [Σm; 0] U' ( U [Σm, 0] V' V [Σm; 0] U' )^-1
           = V [Σm; 0] U' ( U Σm^2 U' )^-1
           = V [Σm; 0] U' U Σm^-2 U'
           = V [Σm; 0] Σm^-2 U'
           = V [Σm^-1; 0] U'
           = A+.

c) Show that, of all x that minimize ||y - Ax||_2, the one with the smallest length ||x||_2 is given by x = A+y. If A has full row rank, we have shown in chapter 3 that the solution with the smallest length is given by

x = A'(AA')^-1 y,

and from part (b), A'(AA')^-1 = A+. Therefore x = A+y. Similarly, it can be shown that the pseudo-inverse is the solution for the case when the matrix A has full column rank (compare the results in chapter 2 with the expression you found in part (b) for A+ when A has full column rank).

Now, let's consider the case when the matrix A is rank deficient, i.e., rank(A) = r < min(m, n). Writing x = Vz and c = U'y, and using the fact that multiplication by a unitary matrix does not change the 2-norm, ||y - Ax||_2 = || c - [Σ, 0; 0, 0] z ||_2, which is minimized by taking zi = ci/σi for i = 1, ..., r, while the zi for i > r do not affect the error. The solution with the minimum norm can therefore be achieved when zi = 0 for i = r+1, ..., n. Thus, we can write this z as

z = Σ1 c, where Σ1 = [Σr^-1, 0; 0, 0]

and Σr is a square matrix with the nonzero singular values on its diagonal in decreasing order. This value of z also yields the value of x of minimal 2-norm, since V is a unitary matrix. Thus the solution to this problem is

x = Vz = V Σ1 c = V Σ1 U' y = A+ y.

It can easily be shown that this choice of A+ satisfies all the conditions, or definitions, of the pseudo-inverse in a).
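All four defining conditions, and the minimum-norm least squares property, can be confirmed numerically on a rank-deficient matrix (a random instance is assumed):

```python
import numpy as np

# Exercise 4.5: numpy's SVD-based pinv satisfies the four Moore-Penrose
# conditions and returns the minimum-norm least squares solution.
rng = np.random.default_rng(9)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))   # rank 2
y = rng.standard_normal(5)

Ap = np.linalg.pinv(A)
c1 = np.allclose(A @ Ap @ A, A)
c2 = np.allclose(Ap @ A @ Ap, Ap)
c3 = np.allclose((A @ Ap).T, A @ Ap)   # AA+ symmetric
c4 = np.allclose((Ap @ A).T, Ap @ A)   # A+A symmetric

x = Ap @ y
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)   # lstsq also gives the min-norm solution
gap = np.linalg.norm(x - x_ls)
```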

Exercise 4.6 a) Suppose A ∈ C^{m x n} has full column rank. Then a QR factorization for A can be easily constructed from the SVD:

A = U [Σn; 0] V',

where Σn is an n x n diagonal matrix with the singular values on the diagonal. Let Q = U and R = Σn V', so that A = Q [R; 0], and we get the QR factorization. Since Q is an orthogonal matrix, we can represent any Y ∈ C^{m x p} as

Y = Q [Y1; Y2].

Next,

||Y - AX||_F^2 = || Q [Y1; Y2] - Q [R; 0] X ||_F^2 = || Q [Y1 - RX; Y2] ||_F^2.

Denote

D = [Y1 - RX; Y2]

and note that multiplication by an orthogonal matrix does not change the Frobenius norm of a matrix:

||QD||_F^2 = tr(D'Q'QD) = tr(D'D) = ||D||_F^2.

Since the Frobenius norm squared is equal to the sum of the squares of all elements, the square of the Frobenius norm of a block matrix is equal to the sum of the squares of the Frobenius norms of the blocks:

|| [Y1 - RX; Y2] ||_F^2 = ||Y1 - RX||_F^2 + ||Y2||_F^2.

Since the Y2 block cannot be affected by the choice of the matrix X, the problem reduces to the minimization of ||Y1 - RX||_F^2. Recalling that R is invertible (because A has full column rank), the solution is

X = R^-1 Y1.


b) Evaluate the expression with the pseudoinverse using the representations of A and Y from part a):

(A^* A)^{-1} A^* Y = ( [ R^*  0 ] Q^* Q [ R ; 0 ] )^{-1} [ R^*  0 ] Q^* Q [ Y_1 ; Y_2 ] = (R^* R)^{-1} R^* Y_1 = R^{-1} Y_1.

From 4.5 b) we know that if a matrix has full column rank, A^+ = (A^* A)^{-1} A^*, therefore both expressions give the same solution.

c)

|| [ Y ; Z ] − [ A ; B ] X ||_F^2 = ||Y − AX||_F^2 + ||Z − BX||_F^2.

Since A has full column rank, [ A ; B ] also has full column rank, therefore we can apply the results from parts a) and b) to conclude that

X = ( [ A ; B ]^* [ A ; B ] )^{-1} [ A ; B ]^* [ Y ; Z ] = (A^* A + B^* B)^{-1} (A^* Y + B^* Z).
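The closed form from part c) can be checked against solving the stacked least-squares problem directly (a sketch assuming NumPy; sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((7, 3))
B = rng.standard_normal((5, 3))
Y = rng.standard_normal((7, 2))
Z = rng.standard_normal((5, 2))

# Closed form from part c): X = (A^T A + B^T B)^{-1} (A^T Y + B^T Z).
X = np.linalg.solve(A.T @ A + B.T @ B, A.T @ Y + B.T @ Z)

# Same as minimizing ||[Y; Z] - [A; B] X||_F directly.
X_stacked, *_ = np.linalg.lstsq(np.vstack([A, B]), np.vstack([Y, Z]), rcond=None)
assert np.allclose(X, X_stacked)
```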


MIT OpenCourseWare
http://ocw.mit.edu

6.241J / 16.338J Dynamic Systems and Control
Spring 2011

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Department of Electrical Engineering and Computer Science

6.241: Dynamic Systems, Spring 2011

Homework 4 Solutions

Exercise 4.7 Given a complex square matrix A, the definition of the structured singular value function is as follows:

μ_Δ(A) = ( min_{Δ ∈ Δ} { σ_max(Δ) : det(I − ΔA) = 0 } )^{-1},

where Δ is some set of matrices.

a) If Δ = {αI : α ∈ C}, then det(I − ΔA) = det(I − αA). Here det(I − αA) = 0 implies that there exists an x ≠ 0 such that (I − αA)x = 0. Expanding the left hand side of the equation yields

x = αAx  ⟹  (1/α) x = Ax.

Therefore 1/α is an eigenvalue of A. Since σ_max(αI) = |α|,

min { σ_max(Δ) : det(I − ΔA) = 0 } = |α| = 1 / |λ_max(A)|.

Therefore,

μ_Δ(A) = |λ_max(A)|.

b) If Δ = C^{n×n}, then following a similar argument as in a), there exists an x ≠ 0 such that (I − ΔA)x = 0. That implies that

x = ΔAx  ⟹  ||x||_2 = ||ΔAx||_2 ≤ σ_max(Δ) σ_max(A) ||x||_2  ⟹  σ_max(Δ) ≥ 1/σ_max(A).

Then, we show that the lower bound can be achieved. Since Δ = C^{n×n}, we can choose Δ such that

Δ = V diag(1/σ_max(A), 0, ..., 0) U^*,

where U and V are from the SVD of A, A = U Σ V^*. Note that this choice results in

I − ΔA = I − V diag(1, 0, ..., 0) V^* = V diag(0, 1, ..., 1) V^*,

which is singular, as required. Also, from the construction of Δ, σ_max(Δ) = 1/σ_max(A). Therefore,

μ_Δ(A) = σ_max(A).
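The construction in part b) can be verified numerically (a sketch assuming NumPy; the size and seed are arbitrary). The smallest destabilizing full perturbation is built from the dominant singular pair:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# For unstructured Delta, mu(A) = sigma_max(A). The smallest Delta making
# I - Delta A singular is Delta = (1/sigma_1) v1 u1^H from the SVD of A.
U, s, Vh = np.linalg.svd(A)
Delta = (1.0 / s[0]) * np.outer(Vh[0].conj(), U[:, 0].conj())

# I - Delta A is singular and sigma_max(Delta) = 1/sigma_max(A).
assert abs(np.linalg.det(np.eye(n) - Delta @ A)) < 1e-8
assert np.isclose(np.linalg.svd(Delta, compute_uv=False)[0], 1.0 / s[0])
```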


c) If Δ = {diag(δ_1, ..., δ_n) : δ_i ∈ C} and D ∈ {diag(d_1, ..., d_n) : d_i > 0}, we first note that D^{-1} exists. Thus:

det(I − Δ D^{-1} A D) = det(I − D^{-1} Δ A D)
 = det((D^{-1} − D^{-1} Δ A) D)
 = det(D^{-1} (I − ΔA)) det(D)
 = det(D^{-1}) det(I − ΔA) det(D)
 = det(I − ΔA),

where the first equality follows because Δ and D^{-1} are diagonal (so they commute) and the last equality holds because det(D^{-1}) = 1/det(D). Thus,

μ_Δ(A) = μ_Δ(D^{-1} A D).

Now let us show the left inequality first. Since Δ_1 ⊂ Δ_2, with Δ_1 = {αI : α ∈ C} and Δ_2 = {diag(δ_1, ..., δ_n)}, we have that

min_{Δ ∈ Δ_1} { σ_max(Δ) : det(I − ΔA) = 0 } ≥ min_{Δ ∈ Δ_2} { σ_max(Δ) : det(I − ΔA) = 0 },

which implies that μ_{Δ_1}(A) ≤ μ_{Δ_2}(A). But from part (a), μ_{Δ_1}(A) = |λ_max(A)|, so

|λ_max(A)| ≤ μ_{Δ_2}(A).

Now we have to show the right side of the inequality. With Δ_3 = C^{n×n} we have Δ_2 ⊂ Δ_3. Thus, by following a similar argument as above, we have

min_{Δ ∈ Δ_2} { σ_max(Δ) : det(I − ΔA) = 0 } ≥ min_{Δ ∈ Δ_3} { σ_max(Δ) : det(I − ΔA) = 0 }.

Hence,

μ_{Δ_2}(A) = μ_{Δ_2}(D^{-1} A D) ≤ μ_{Δ_3}(D^{-1} A D) = σ_max(D^{-1} A D).

Exercise 4.8 We are given a complex square matrix A with rank(A) = 1. According to the SVD of A we can write A = u v^*, where u, v are complex vectors of dimension n. To simplify computations we are asked to minimize the Frobenius norm of Δ in the definition of μ(A). So

μ(A)^{-1} = min_{Δ ∈ Δ} { ||Δ||_F : det(I − ΔA) = 0 },

where Δ is the set of diagonal matrices with complex entries, Δ = {diag(δ_1, ..., δ_n) : δ_i ∈ C}. Introduce the column vector δ = (δ_1, ..., δ_n)^T and the row vector B = (u_1 v_1^*, ..., u_n v_n^*); then the original problem can be reformulated, after some algebraic manipulations, as

μ(A)^{-1} = min_{δ ∈ C^n} { ||δ||_2 : Bδ = 1 }.


To see this, we use the fact that A = u v^* and (from Exercise 1.3(a))

det(I − ΔA) = det(I − Δ u v^*) = det(1 − v^* Δ u) = 1 − v^* Δ u.

Thus det(I − ΔA) = 0 implies that 1 − v^* Δ u = 0. Then we have

1 = v^* Δ u = (v_1^*, ..., v_n^*) diag(δ_1, ..., δ_n) (u_1, ..., u_n)^T = (u_1 v_1^*, ..., u_n v_n^*) (δ_1, ..., δ_n)^T = Bδ.

Hence, computing μ(A) reduces to a least squares problem, i.e.,

min { ||Δ||_F : det(I − ΔA) = 0 } = min { ||δ||_2 : Bδ = 1 }.

We are dealing with an underdetermined system of equations and we are seeking a minimum norm solution. Using the projection theorem, the optimal δ is given by δ_o = B^* (B B^*)^{-1}. Substituting in the expression of the structured singular value function we obtain:

μ(A) = ( Σ_{i=1}^n |u_i v_i^*|^2 )^{1/2}.

In the second part of this exercise we define Δ to be the set of diagonal matrices with real entries, Δ = {diag(δ_1, ..., δ_n) : δ_i ∈ R}. The idea remains the same; we just have to alter the constraint equation, namely Bδ = 1 + 0j. Equivalently, one can write Dδ = d, where

D = [ Re(B) ; Im(B) ]  and  d = [ 1 ; 0 ].

Again the optimal δ is obtained by use of the projection theorem, δ_o = D^T (D D^T)^{-1} d. Substituting in the expression of the structured singular value function we obtain:

μ(A)^{-1} = ( d^T (D D^T)^{-1} d )^{1/2}.
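The complex-diagonal case reduces to the minimum-norm solution above, which is easy to verify numerically (a sketch assuming NumPy; the size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = np.outer(u, v.conj())                  # rank-1 matrix A = u v*

# Complex diagonal perturbations: mu(A) = ||B||_2 with B_i = u_i v_i*.
B = u * v.conj()
mu = np.linalg.norm(B)

# Minimum-norm delta satisfying B delta = 1 (projection theorem).
delta = B.conj() / np.linalg.norm(B) ** 2
assert np.isclose(B @ delta, 1.0)
assert np.isclose(np.linalg.norm(delta), 1.0 / mu)
# det(I - diag(delta) A) = 1 - v* diag(delta) u = 0.
assert abs(1 - v.conj() @ (delta * u)) < 1e-10
```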

Exercise 5.1 Suppose that A ∈ C^{m×n} is perturbed by the matrix E ∈ C^{m×n}.

1. Show that

|σ_max(A + E) − σ_max(A)| ≤ σ_max(E).


Also find an E that achieves the upper bound.

Note that

A = A + E − E  ⟹  ||A|| = ||A + E − E|| ≤ ||A + E|| + ||E||  ⟹  ||A|| − ||A + E|| ≤ ||E||.

Also,

||A + E|| ≤ ||A|| + ||E||  ⟹  ||A + E|| − ||A|| ≤ ||E||.

Thus, putting the two inequalities above together, we get that

| ||A + E|| − ||A|| | ≤ ||E||.

Note that the norm can be any matrix norm, thus the above inequality holds for the 2-induced norm, which gives us

|σ_max(A + E) − σ_max(A)| ≤ σ_max(E).

A matrix E that achieves the upper bound is

E = −U Σ V^* = −A,

where U, Σ and V form the SVD of A. Here A + E = 0, thus σ_max(A + E) = 0, and

|0 − σ_max(A)| = σ_max(E)

is achieved.
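Both the bound and the matrix achieving it can be checked directly (a sketch assuming NumPy; sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 4))
E = rng.standard_normal((6, 4))

smax = lambda M: np.linalg.svd(M, compute_uv=False)[0]

# Perturbation bound from part 1.
assert abs(smax(A + E) - smax(A)) <= smax(E) + 1e-12

# E = -A achieves it: sigma_max(A + E) = 0 and sigma_max(E) = sigma_max(A).
assert np.isclose(abs(smax(A - A) - smax(A)), smax(-A))
```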

2. Suppose that A has less than full column rank, i.e., rank(A) < n, but A + E has full column rank. Show that

σ_min(A + E) ≤ σ_max(E).

Since A does not have full column rank, there exists an x ≠ 0 such that Ax = 0. Then (A + E)x = Ex, and

||(A + E)x||_2 / ||x||_2 = ||Ex||_2 / ||x||_2 ≤ ||E||_2 = σ_max(E).

But

||(A + E)x||_2 / ||x||_2 ≥ σ_min(A + E),

as shown in Chapter 4 (please refer to the proof in the lecture notes!). Thus

σ_min(A + E) ≤ σ_max(E).

Finally, a matrix E that results in A + E having full column rank and that achieves the upper bound is

E = U diag(0, ..., 0, σ_{r+1}, ..., σ_{r+1}) V^*,

for

A = U diag(σ_1, ..., σ_r, 0, ..., 0) V^*.

Note that A has rank r < n, but that A + E has rank n:

A + E = U diag(σ_1, ..., σ_r, σ_{r+1}, ..., σ_{r+1}) V^*.

It is easy to see that σ_min(A + E) = σ_{r+1}, and that σ_max(E) = σ_{r+1}.

The result in part 2, and some extensions to it, give rise to the following procedure (which is widely used in practice) for estimating the rank of an unknown matrix A from a known matrix A + E, where ||E||_2 is known as well. Essentially, the SVD of A + E is computed, and the rank of A is then estimated to be the number of singular values of A + E that are larger than ||E||_2.
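The rank-estimation procedure just described takes only a few lines (a sketch assuming NumPy; sizes, noise level, and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, r = 10, 6, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank r
E = 1e-6 * rng.standard_normal((m, n))                          # small noise

# Estimate rank(A) from the noisy matrix: count singular values of A + E
# that exceed ||E||_2.
tol = np.linalg.svd(E, compute_uv=False)[0]
s = np.linalg.svd(A + E, compute_uv=False)
est_rank = int(np.sum(s > tol))
assert est_rank == r
```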


Thus, equation (2) becomes

||(A − E)z||_2 ≥ σ_{r+1}.

Finally, ||(A − E)z||_2 ≤ ||A − E||_2 for all z such that ||z||_2 = 1. Hence

||A − E||_2 ≥ σ_{r+1}.

To show that the lower bound can be achieved, choose

E = U diag(σ_1, ..., σ_r, 0, ..., 0) V^*.

E has rank r,

A − E = U diag(0, ..., 0, σ_{r+1}, ..., σ_k, 0, ..., 0) V^*,

and ||A − E||_2 = σ_{r+1}.

Exercise 6.1 The model is linear; one needs to note that the integration operator is a linear operator. Formally one writes

(S(u_1 + u_2))(t) = ∫_0^∞ e^{-(t-s)} (u_1(s) + u_2(s)) ds = ∫_0^∞ e^{-(t-s)} u_1(s) ds + ∫_0^∞ e^{-(t-s)} u_2(s) ds = (S u_1)(t) + (S u_2)(t).

It is non-causal, since future inputs are needed in order to determine the current value of y. Formally one writes

(P_T S u)(t) = (P_T S P_T u)(t) + P_T ∫_T^∞ e^{-(t-s)} u(s) ds.

It is not memoryless, since the current output depends on the integration of past inputs. It is also time-varying, since

(S σ_T u)(t) = (σ_T S u)(t) + ∫_0^T e^{-(t-s)} u(s) ds,

although one can argue that if the only valid input signals are those where u(t) = 0 for t ≤ 0, the system can be called time-invariant.


Exercise 6.4
(i) linear, time varying, causal, not memoryless
(ii) nonlinear (affine, translated linear), time varying, causal, not memoryless
(iii) nonlinear, time invariant, causal, memoryless
(iv) linear, time varying, causal, not memoryless
(i), (ii) can be called time invariant under the additional requirement that u(t) = 0 for t ≤ 0.

(a) For γ > 0 there are two more equilibrium points, (±√γ, 0); in particular, this is the case for γ in the interval 0 < γ ≤ 1.

(b) Linearizing the system around (0, 0) we get the Jacobian:

A = [ 0  1 ; 2γ  0 ].

The characteristic polynomial of the system is det(A − λI) = λ^2 − 2γ. If γ > 0 there is an unstable root, hence the linearized model is unstable around (0, 0). Linearizing around the other equilibrium points (for γ > 0) we get the Jacobian:

A = [ 0  1 ; −4γ  0 ].

The characteristic polynomial for the system is det(A − λI) = λ^2 + 4γ. The complex conjugate roots lie on the jω axis and the linearized system is marginally stable.

Exercise 13.2 a) Notice that the input-output differential equation can be written as

ÿ = d/dt (u − a_1 y) + (u − a_2 y − c y^2),

and we can use the observability-like realization employed for the discrete-time system of Exercise 7.1(c). The differential equations for the states are

ẋ_1 = −a_1 x_1 + x_2 + u
ẋ_2 = −a_2 x_1 − c x_1^2 + u,

and the output equation is y = x_1. You can check that it is indeed a correct realization by differentiating the first state equation and plugging in the expression for ẋ_2 from the second equation.


b) Let us consider the system with zero input and a_1 = 3, a_2 = 2 and c = 2:

ẋ_1 = −3x_1 + x_2
ẋ_2 = −2x_1 − 2x_1^2.

The linearized system's matrix is

A = [ −3  1 ; −2  0 ],

with the characteristic equation λ^2 + 3λ + 2 = 0, which has the roots λ_1 = −1 and λ_2 = −2. Therefore the linearized system is asymptotically stable around the origin, which also means that the original nonlinear system is a.s. around the origin (see the lecture notes for the relevant theorem). You can also verify by linearization that the other equilibrium point (−1, −3) is unstable.

c) Let us find a Lyapunov function for the linear system, and then find a region where its derivative is negative definite along the trajectories of the original nonlinear system. Since the linear system is asymptotically stable, for any symmetric positive definite matrix Q there exists a unique positive definite matrix P such that A^T P + P A = −Q. Let us choose Q = I. Solving the system of linear equations imposed by the matrix relation, we obtain

P = [ 1/2  −1/2 ; −1/2  1 ],

which gives us a quadratic Lyapunov function

V(x) = x^T P x = (1/2) x_1^2 − x_1 x_2 + x_2^2.

Taking the derivative of V(x) along the trajectory using the chain rule, we get:

V̇(x) = −x_1^2 − x_2^2 − 2x_1^2 (2x_2 − x_1) = −x_1^2 (1 + 2(2x_2 − x_1)) − x_2^2.

The contour lines of the found Lyapunov function are

(1/2) x_1^2 − x_1 x_2 + x_2^2 = C

for various constants C. Let us find such a C that if the point (x_1, x_2) is within the boundary, then V̇(x) < 0.
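The Lyapunov equation above can be solved by rewriting it as a linear system via Kronecker products (a sketch assuming NumPy; the Kronecker route is one of several equivalent ways to solve A^T P + P A = −Q):

```python
import numpy as np

# Linearization of the system at the origin (a1 = 3, a2 = 2, c = 2, u = 0).
A = np.array([[-3.0, 1.0],
              [-2.0, 0.0]])
Q = np.eye(2)

# Solve A^T P + P A = -Q as a linear system in vec(P).
n = A.shape[0]
M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)

assert np.allclose(A.T @ P + P @ A, -Q)
assert np.allclose(P, [[0.5, -0.5], [-0.5, 1.0]])
assert np.all(np.linalg.eigvalsh(P) > 0)      # P is positive definite
```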


Exercise 14.2 (a) The system is asymptotically stable if all the roots of the characteristic polynomial lie in the left half of the complex plane. Note that the characteristic polynomial for a matrix A in control canonical form is given by

det(A − λI) = λ^N + a_0 λ^{N-1} + ... + a_{N-1}.

b) Use a continuity argument to prove that the destabilizing perturbation with the smallest Frobenius norm will place an eigenvalue of A + Δ on the imaginary axis. Suppose that the minimum perturbation is Δ, and assume that A + Δ has an eigenvalue in the right half plane. Consider a perturbation of the form cΔ, where 0 ≤ c ≤ 1. As c changes from 0 to 1, at least one eigenvalue had to cross the jω axis, and the resulting perturbation has a smaller Frobenius norm than Δ. This contradicts the original assumption that A + Δ has an eigenvalue in the right half plane.

c) The characteristic polynomial for the perturbed matrix is

det(A − λI) = λ^N + (a_0 + δ_0) λ^{N-1} + ... + (a_{N-1} + δ_{N-1}).

We know that there exists a root λ = jω, where ω is real. If we plug this solution in, and assemble the real and imaginary parts and set them equal to zero, we will get two linear equations in δ with coefficients dependent on the a_k and powers of ω. For example, for a 4th order polynomial,

(jω)^4 + (a_0 + δ_0)(jω)^3 + (a_1 + δ_1)(jω)^2 + (a_2 + δ_2)(jω) + a_3 + δ_3 = 0

results in the following two equations:

ω^4 − (a_1 + δ_1) ω^2 + a_3 + δ_3 = 0
−(a_0 + δ_0) ω^3 + (a_2 + δ_2) ω = 0.

These equations can be written in matrix form as follows:

[ 0  −ω^2  0  1 ; −ω^3  0  ω  0 ] (δ_0, δ_1, δ_2, δ_3)^T = ( a_1 ω^2 − ω^4 − a_3 ; a_0 ω^3 − a_2 ω ),

or A(ω) δ = B(ω). Therefore the problem can be formulated as finding a minimal-norm solution to an underdetermined system of equations:

min ||δ||  such that  A(ω) δ = B(ω).

By inspection we can see that the matrix A(ω) has full row rank for any value of ω unequal to zero. If ω = 0 the solution is δ_3 = −a_3, with the rest of the δ_k equal to zero. For all other values of ω the solution can be expressed as a function of ω:

δ(ω) = A(ω)^T ( A(ω) A(ω)^T )^{-1} B(ω).

Note that the matrix A(ω)A(ω)^T is diagonal, and can be easily inverted. By minimizing the norm of this expression over ω we can find the ω̄ that corresponds to the minimizing perturbation. Then plug this ω̄ into the previous equation to compute the minimizing perturbation δ(ω̄) explicitly. Compare the norms of the solutions corresponding to ω = ω̄ and ω = 0 (i.e., compare ||δ(ω̄)|| and |a_3|), and choose the


minimum as the solution. This way we converted the problem to the minimization of a function of a single variable, which can be easily solved.

d) In case N = 2 the characteristic polynomial of the perturbed matrix is

λ^2 + (a_0 + δ_0) λ + (a_1 + δ_1) = 0,

where λ = jω. For ω = 0 the minimizing solution is δ_1 = −a_1, δ_0 = 0. If ω ≠ 0, plug in λ = jω, and the resulting system of equations is

δ_1 = ω^2 − a_1
δ_0 = −a_0.

This is a proper system (the number of equations is equal to the number of unknowns), and its solution is given directly by the equations. To minimize the norm of the solution we set ω^2 = a_1. Note that stability of the original matrix A requires that a_1 > 0, a_0 > 0 (in fact positivity of all coefficients is always a necessary condition, but not sufficient for N > 2; use the Routh criterion for a test in that case!). Next, we have to compare |a_1| and |a_0|, and choose the smallest of them to null with δ_1 or δ_0. In our problem a_0 = a_1 = a, therefore there are two solutions: (0, −a) and (−a, 0) for the set of δs.
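The single-variable minimization in part c) can be carried out on a grid (a sketch assuming NumPy; the 4th-order coefficients below are hypothetical example values, not from the problem):

```python
import numpy as np

# Hypothetical coefficients of a monic polynomial s^4 + a0 s^3 + a1 s^2 + a2 s + a3.
a0, a1, a2, a3 = 2.0, 3.0, 2.0, 0.5

def delta_min(w):
    """Minimum-norm coefficient perturbation placing a root at s = j*w."""
    if w == 0.0:
        return np.array([0.0, 0.0, 0.0, -a3])
    A = np.array([[0.0, -w**2, 0.0, 1.0],
                  [-w**3, 0.0, w, 0.0]])
    B = np.array([a1 * w**2 - w**4 - a3, a0 * w**3 - a2 * w])
    return A.T @ np.linalg.solve(A @ A.T, B)   # least-norm solution

ws = np.linspace(0.0, 3.0, 3001)
norms = [np.linalg.norm(delta_min(w)) for w in ws]
d = delta_min(ws[int(np.argmin(norms))])

# The perturbed polynomial indeed has a root (numerically) on the j-axis.
roots = np.roots([1.0, a0 + d[0], a1 + d[1], a2 + d[2], a3 + d[3]])
assert min(abs(roots.real)) < 1e-6
```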

Exercise 14.5 a) Consider a nonlinear time-invariant system

ẋ = f(x, u),

and its linearization around an equilibrium point (x̄, ū) is

ẋ = Ax + Bu.

1) Since the feedback u = Kx asymptotically stabilizes the linearized system, all the eigenvalues of the matrix A + BK are in the OLHP.

2) Without loss of generality, we can take (x̄, ū) = (0, 0) as the point the nonlinear system is linearized around. Then

f(x, u) = (∂f/∂x)|_{(0,0)} x + (∂f/∂u)|_{(0,0)} u + η(x, u),

where η(x, u) is of the order of ||x||^2. If u = Kx, the above equation can be written as follows:

f(x, u) = Ax + Bu + η(x, Kx),

where ||η(x, Kx)|| / ||x|| → 0 as ||x|| → 0. Thus the original nonlinear system is locally stabilized with the control law u = Kx.

b) Consider the system S1:

ÿ + ẏ^4 + ẏ^2 u + y^3 = 0,

where u is the input.

1) Let x_1 = y and x_2 = ẏ; then a state space representation of the system S1 is


ẋ_1 = x_2
ẋ_2 = −x_1^3 − x_2^4 − x_2^2 u.

Thus the unique equilibrium point x̄ is found to be x̄ = (0, 0), which is independent of u.

2) Choose ū = 0. Then the linearized system is

ẋ = [ 0  1 ; 0  0 ] x + [ 0 ; 0 ] u = Ax + Bu.

Since the eigenvalues of the matrix A are at 0, and the u term does not enter the linearized system, the linearization cannot conclude the local stability of the nonlinear system S1.

c) Let u = c − x_2^2, where c is a function of x_1 and x_2. Then

ẋ_1 = x_2
ẋ_2 = −x_1^3 − x_2^4 − x_2^2 (c − x_2^2) = −x_1^3 − c x_2^2.

So it can be considered that this system has a nonlinear spring. Now choose a Lyapunov function candidate of the form V(x) = x_1^4 + x_2^2. Then

V̇(x) = 4x_1^3 ẋ_1 + 2x_2 ẋ_2 = 4x_1^3 x_2 − 2x_2 (x_1^3 + c x_2^2) = 2x_2 (x_1^3 − c x_2^2).

In order to make V̇(x) negative definite, we would like to set

x_1^3 − c x_2^2 = −x_2 (x_1^2 + x_2^2)  ⟹  c = ( x_1^3 + x_2 (x_1^2 + x_2^2) ) / x_2^2.

This choice of c makes V̇(x) be

V̇(x) = −2 x_2^2 (x_1^2 + x_2^2).


Exercise 14.7 (a) The matrix A of the linearized system is:

A = [ −1  0 ; 0  −1 ].

A has repeated eigenvalues at −1. Thus, the nonlinear system is locally asymptotically stable about the origin.

(b) The matrix A of the linearized system is:

A = [ 0  1 ; 1  −1 ].

The characteristic polynomial is λ(1 + λ) − 1 = 0. The eigenvalues are thus λ_{1,2} = (−1 ± √5)/2 (one of the eigenvalues is in the right half plane), and the nonlinear system is unstable about the origin.

(c) The matrix A of the linearized system is:

A = [ −1  1 ; 0  −1 ].

A has repeated eigenvalues at −1. Thus, the nonlinear system is locally asymptotically stable about the origin.

(d) The matrix A of the linearized (discrete-time) system is:

A = [ 2  0 ; 1  1 ].

A has eigenvalues λ_1 = 2 and λ_2 = 1. Since one of the eigenvalues is outside the unit disk, the nonlinear system is unstable about the origin.

(e) The matrix A of the linearized (discrete-time) system is:

A = [ 0  0 ; 1  2 ].

A has eigenvalues λ_1 = 0 and λ_2 = 2. Since one of the eigenvalues is outside the unit disk, the nonlinear system is unstable about the origin.
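The eigenvalue tests used above (open left half plane for continuous time, open unit disk for discrete time) are one-liners to check (a sketch assuming NumPy, using the five Jacobians just computed):

```python
import numpy as np

# Jacobians from parts (a)-(e); (a)-(c) are continuous-time, (d)-(e) discrete-time.
ct = {"a": [[-1, 0], [0, -1]],
      "b": [[0, 1], [1, -1]],
      "c": [[-1, 1], [0, -1]]}
dt = {"d": [[2, 0], [1, 1]],
      "e": [[0, 0], [1, 2]]}

ct_stable = {k: bool(np.all(np.linalg.eigvals(np.array(A, float)).real < 0))
             for k, A in ct.items()}
dt_stable = {k: bool(np.all(np.abs(np.linalg.eigvals(np.array(A, float))) < 1))
             for k, A in dt.items()}

assert ct_stable == {"a": True, "b": False, "c": True}
assert dt_stable == {"d": False, "e": False}
```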



MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Department of Electrical Engineering and Computer Science

6.241: Dynamic Systems, Spring 2011

Homework 7 Solutions

Exercise 15.1 (a) The system is causal if the impulse response is right-sided. Consider the signal e^{-at} u[t], where u[t] is a unit step: u[t] = 1 for t ≥ 0, and zero otherwise. The Laplace transform of this signal converges if Re(s) > −a, and is equal to

∫ e^{-st} e^{-at} u[t] dt = 1/(s + a),  ROC: Re(s) > −a.

Therefore, for a system represented by a first-order transfer function to be causal, the ROC has to be to the right of the pole (in fact this is true for a multiple pole as well). Since a rational function can be represented by a partial fraction expansion, and the region of convergence is defined by the intersection of the individual regions of convergence, the ROC of the system has to lie to the right of the rightmost pole for the system to be causal. In the case of a rational transfer function this is also a sufficient condition. Note that if an LTI system has a rational transfer function, its impulse response consists of diverging or decaying exponentials (maybe multiplied by powers of t), therefore all concepts of p-stability are equivalent. For BIBO stability the impulse response has to be absolutely integrable, which is equivalent to the existence of the Fourier transform. The Fourier transform is the Laplace transform evaluated on the jω axis. Therefore for stability the ROC has to include the jω axis. Using these two rules we can see that the system

G(s) = (s + 2) / ((s − 2)(s + 1))

(i) is neither causal nor stable for the ROC given by Re(s) < −1.


f(t) = 1 for 0 < t ≤ 1: for t > 1 the output is

y(t) = ∫ ( (4/3) e^{2(t−τ)} − (1/3) e^{−(t−τ)} ) f(τ) dτ = (2/3) e^{2t} (1 − e^{−2}) − (1/3) e^{−t} (e − 1).

Clearly this function grows unbounded and has an infinite p-norm. However, the input f(t) has p-norm equal to 1 for any p, including ∞. In case (i) we can use f(t) = 1 for −1 < t ≤ 0.

For every ε > 0 there exists a bound such that the remainder term is at most ε||u||_p, which implies that ||y||_p ≤ (C + ε)||u||_p. That concludes the p-stability, with ||u||_p < ∞.

c) When g(x) is a saturation function with a scale of 1, the system is p-stable for p ≥ 1. Proof: again, since the system from u to z is p-stable, there exists a constant C such that ||z||_p ≤ C ||u||_p. So, for all u with ||u||_p ≤ δ, if we take δ to be 1/C, then we have

||z||_p ≤ C ||u||_p ≤ 1.

Since

g(z) = z for |z| ≤ 1,  |g(z)| = 1 for |z| > 1,

for |z| ≤ 1 we have ||y||_p = ||z||_p ≤ C ||u||_p ≤ 1. Therefore this system is p-stable for all p ≥ 1.


Exercise 16.1 a) Since u ∈ X, we can express u as

u(t) = Σ_{i=1}^N u_i e^{jω_i t},  where u_i ∈ R^n, ω_i ∈ R.

With

u(t)^* u(t) = ( Σ_i u_i^T e^{-jω_i t} )( Σ_k u_k e^{jω_k t} ) = Σ_i Σ_k e^{j(ω_k − ω_i) t} u_i^T u_k,

we can compute P_u as follows:

P_u = lim_{L→∞} (1/2L) ∫_{-L}^{L} u(t)^* u(t) dt = Σ_i Σ_k u_i^T u_k lim_{L→∞} (1/2L) ∫_{-L}^{L} e^{j(ω_k − ω_i) t} dt.

Note that as L → ∞, because of the orthonormality of complex exponentials,

lim_{L→∞} (1/2L) ∫_{-L}^{L} e^{j(ω_k − ω_i) t} dt = 0 if i ≠ k, and 1 if i = k.

Thus

P_u = Σ_{i=1}^N ||u_i||^2.

b) The output of the system can be expressed as y = h ∗ u in the time domain, or Y(s) = H(s)U(s) in the frequency domain. For a CT LTI system, we have y = H(jω_i) u_i e^{jω_i t} if u = u_i e^{jω_i t}. Thus

y(t) = Σ_{i=1}^N H(jω_i) u_i e^{jω_i t}.

Following the similar method taken in a), we have

y(t)^* y(t) = ( Σ_i u_i^* H(jω_i)^* e^{-jω_i t} )( Σ_k H(jω_k) u_k e^{jω_k t} ) = Σ_i Σ_k e^{j(ω_k − ω_i) t} u_i^* H(jω_i)^* H(jω_k) u_k.


Thus P_y can be computed as follows:

P_y = Σ_i Σ_k u_i^* H(jω_i)^* H(jω_k) u_k lim_{L→∞} (1/2L) ∫_{-L}^{L} e^{j(ω_k − ω_i) t} dt = Σ_{i=1}^N ||H(jω_i) u_i||^2.

c) Using the fact shown in b),

P_y = Σ_{i=1}^N ||H(jω_i) u_i||^2 ≤ Σ_{i=1}^N σ_max^2(H(jω_i)) ||u_i||^2 ≤ max_i σ_max^2(H(jω_i)) Σ_{i=1}^N ||u_i||^2 = max_i σ_max^2(H(jω_i)) P_u,

so

P_y ≤ sup_ω σ_max^2(H(jω)) P_u,  and hence  sup_{P_u = 1} P_y = ||H||_∞^2,

where the equality is established in part d).

d) Now we have to find an input u ∈ X such that P_y = ||H||_∞^2 P_u. Consider an SVD of H(jω_0):

H(jω_0) = U Σ V^* = [ u_1 ... u_n ] diag(σ_1, ..., σ_n) [ v_1 ... v_n ]^*.

Let u = v_1 e^{jω_0 t}, where ω_0 is such that ||H||_∞ = σ_max(H(jω_0)). Then

P_y = ||H(jω_0) v_1||^2 = σ_1^2 ||u_1||^2 = σ_1^2 = σ_max^2(H(jω_0)).

Indeed, the equality can be achieved by the choice of u = v_1 e^{jω_0 t}.
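The single-frequency power gain in part d) is easy to verify for a fixed frequency-response matrix (a sketch assuming NumPy; the 2×2 matrix below is a hypothetical H(jω_0), not from the problem):

```python
import numpy as np

# Hypothetical 2x2 frequency-response matrix at some w0.
H = np.array([[1.0, 2.0],
              [0.5, 1.0 + 1.0j]])

U, s, Vh = np.linalg.svd(H)
v1 = Vh[0].conj()                      # right singular vector, unit norm
Pu = np.linalg.norm(v1) ** 2           # power of the input v1 * exp(j*w0*t)
Py = np.linalg.norm(H @ v1) ** 2       # power of the corresponding output

assert np.isclose(Pu, 1.0)
assert np.isclose(Py, s[0] ** 2)       # gain is sigma_max(H(j*w0))^2
```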


Exercise 16.3 We can restrict our attention to the SISO case, since one can prove the MIMO case with similar arguments and use of the SVD.

i.) Input ℓ∞, Output ℓ∞: this case was treated in Chapter 16.2 of the notes.

ii.) Input ℓ2, Output ℓ2: this case was treated in Chapter 16.3 of the notes.

iii.) Input Power, Output Power: this case was treated in Exercise 16.1. Please note that P_y = ||H||_∞^2 P_u; the given entry in the table corresponds to the rms values.

iv.) Input Power, Output ℓ2: a finite power input normally produces a finite power output (unless the gain of the system is zero at all frequencies), and in that case the 2-norm of the output is infinite.

v.) Input ℓ2, Output Power: this is now the reversed situation, but with the same reasoning. A finite energy input produces a finite energy output, which has zero power.

vi.) Input Power, Output ℓ∞: here the idea is that a finite power input can produce a finite power output whose ∞-norm is unbounded. Thinking along the lines of Example 15.2, consider the signal u = Σ_{m=1}^∞ v_m(t), where v_m(t) = m if m < t < m + m^{-3} and 0 otherwise. This signal has finite power and becomes unbounded over time. Take that signal as the input to an LTI system that is just a constant gain.

vii.) Input ℓ2, Output ℓ∞:

|y(t)| = | ∫ h(t − s) u(s) ds | = | ⟨h(t − s), u(s)⟩ | ≤ ||h||_2 ||u||_2.

The last step comes from the application of the Cauchy-Schwarz inequality. Taking the sup on the left hand side gives ||y||_∞ ≤ ||h||_2 ||u||_2; now, to achieve the bound, apply the input u(t) = h(−t)/||h||_2.

viii.) Input ℓ∞, Output ℓ2: apply a sinusoidal input of unit amplitude such that jω is not a zero of the frequency response of the transfer function H(jω).

ix.) Input ℓ∞, Output Power: note that {u : ||u||_∞ ≤ 1} is a subset of {u : P_u ≤ 1}. Therefore

sup { P_y : ||u||_∞ ≤ 1 } ≤ sup { P_y : P_u ≤ 1 };

we use the lower bound from case iii.). Note that the entry in the table corresponds to rms and not to power.



MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Department of Electrical Engineering and Computer Science

6.241: Dynamic Systems, Spring 2011

Homework 8 Solutions

Exercise 17.4 1) First, in order for the closed loop system to be stable, the transfer function from (w_1 w_2)^T to (y u)^T has to be stable. The transfer function from w_1 to y is given by (I + PK)^{-1}P and is called the system response function. The transfer function from w_1 to u is given by (I + KP)^{-1} and is called the input sensitivity function. The transfer function from w_2 to y is (I + PK)^{-1}PK and is called the complementary sensitivity function. The transfer function from w_2 to u is given by (I + KP)^{-1}K. Therefore, we have the following:

[ y ; u ] = [ (I + PK)^{-1}P   (I + PK)^{-1}PK ; (I + KP)^{-1}   (I + KP)^{-1}K ] [ w_1 ; w_2 ].

So, if K is given as

K = Q(I − PQ)^{-1} = (I − QP)^{-1}Q,

then

(I + PK)^{-1}P = (I + PQ(I − PQ)^{-1})^{-1} P = ( ((I − PQ) + PQ)(I − PQ)^{-1} )^{-1} P = (I − PQ)P

(I + PK)^{-1}PK = (I − PQ) PQ (I − PQ)^{-1} = P(I − QP) Q (I − PQ)^{-1} = PQ (I − PQ)(I − PQ)^{-1} = PQ

(I + KP)^{-1} = (I + (I − QP)^{-1}QP)^{-1} = ( (I − QP + QP)(I − QP)^{-1} )^{-1} = I − QP

(I + KP)^{-1}K = (I − QP)(I − QP)^{-1}Q = Q.

Thus, the closed loop transfer function can now be written as follows:

[ y ; u ] = [ (I − PQ)P   PQ ; I − QP   Q ] [ w_1 ; w_2 ].

In order for the closed loop system to be stable, all the transfer functions in the matrix above must be stable as well:

(I − PQ)P = P − PQP,  PQ,  I − QP,  Q.


Since P and Q are stable by assumption, each of these is a sum and product of stable systems, so all the transfer functions are stable. Therefore the closed loop system is stable if K = Q(I − PQ)^{-1} = (I − QP)^{-1}Q.

2) From 1), we can express Q in terms of P and K in the following manner:

K = Q(I − PQ)^{-1}  ⟹  K(I − PQ) = Q  ⟹  K − KPQ = Q  ⟹  K = (I + KP)Q  ⟹  Q = (I + KP)^{-1}K = K(I + PK)^{-1},

by the push-through rule.

For some stable Q, the closed loop is stable for a stable P with the stabilizing controller K = Q(I − PQ)^{-1}. Yet not all stable Q can be used for this formulation, because of the well-posedness of the closed loop. In the state space descriptions of P and Q, in order for the interconnected system, in this case K(s), to be well-posed, we have to have the condition (17.4) in the lecture notes, i.e., (I − D_P D_Q) must be invertible.

3) Suppose P is SISO, w_1 is a step, and w_2 = 0. Since the Laplace transform of the unit step is 1/s, the closed-loop expressions above give

$$\begin{bmatrix} Y(s) \\ U(s) \end{bmatrix} = \begin{bmatrix} (I - PQ)P \\ I - QP \end{bmatrix}\frac{1}{s},$$

so in particular

$$U(s) = \big(1 - Q(s)P(s)\big)\frac{1}{s}.$$

Then, using the final value theorem, in order to have the steady-state value of u(∞) be zero, we need

$$u(\infty) = \lim_{s \to 0}\, s\big(1 - Q(s)P(s)\big)\frac{1}{s} = 1 - Q(0)P(0) = 0,$$

hence

$$Q(0) = 1/P(0).$$

Therefore, Q(0) must be nonzero and equal to 1/P(0). Note that this condition implies that P cannot have a zero at s = 0, because then Q would have a pole at s = 0, which contradicts the stability of Q.
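A quick numeric illustration of the final-value argument, with a hypothetical stable plant P(s) = 2/(s+1) (not from the problem; chosen only so P(0) = 2 is easy to check) and the constant choice Q = 1/P(0):

```python
# Hypothetical example: P(s) = 2/(s+1), so P(0) = 2, and Q = 1/P(0) = 0.5.
# Then s*U(s) = 1 - Q P(s) -> 1 - Q(0) P(0) = 0 as s -> 0, so u(inf) = 0.
def P(s):
    return 2.0 / (s + 1.0)

Q0 = 1.0 / P(0.0)                       # Q(0) = 1/P(0)
for s in [1e-2, 1e-4, 1e-6]:
    # s*U(s) = 1 - Q0*P(s) = s/(1+s), which vanishes linearly in s
    assert abs(1.0 - Q0 * P(s)) < 10.0 * s
print("u(inf) = 0 when Q(0) = 1/P(0)")
```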

Exercise 17.5 a) Let l(s) be the signal at the output of Q(s); then we have

$$\begin{aligned}
l &= Q\big(r - (P - P_0)l\big) \\
\big(I + Q(P - P_0)\big)l &= Qr \\
l &= \big(I + Q(P - P_0)\big)^{-1}Qr.
\end{aligned}$$


Since we can write y = Pl, and with P(s) = 2/(s-1), P_0(s) = 1/(s-1), and Q = 2, the transfer function from r to y can be calculated as follows:

$$\begin{aligned}
Y(s) &= P(s)L(s) \\
&= P\big(I + Q(P - P_0)\big)^{-1}Q\,R(s) \\
&= \frac{2}{s-1}\left(1 + 2\left(\frac{2}{s-1} - \frac{1}{s-1}\right)\right)^{-1} 2\,R(s) \\
&= \frac{4}{s-1}\left(\frac{s+1}{s-1}\right)^{-1} R(s) \\
&= \frac{4}{s-1}\,\frac{s-1}{s+1}\,R(s), \\
\frac{Y(s)}{R(s)} &= \frac{4}{s+1}.
\end{aligned}$$
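The closed-form answer can be spot-checked by evaluating the raw expression P(1 + Q(P - P_0))^{-1}Q at a few test frequencies:

```python
# Evaluate P*(1 + Q*(P - P0))^{-1}*Q with P = 2/(s-1), P0 = 1/(s-1),
# Q = 2, and compare with the closed form 4/(s+1).
for s in [1j, 2.0 + 0.5j, 10.0]:
    P, P0, Q = 2 / (s - 1), 1 / (s - 1), 2
    assert abs(P * Q / (1 + Q * (P - P0)) - 4 / (s + 1)) < 1e-12
print("Y(s)/R(s) = 4/(s+1) confirmed")
```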

b) There is an unstable pole/zero cancellation, so the system is not internally stable.

c) Suppose P(s) = P_0(s) = H(s) for some H(s). Then, using part of the equation in a), we have

$$\begin{aligned}
Y(s) &= H(s)\big(I + Q(s)(H(s) - H(s))\big)^{-1}Q(s)R(s) \\
&= H(s)\,I^{-1}Q(s)R(s) \\
&= H(s)Q(s)R(s), \\
\frac{Y(s)}{R(s)} &= H(s)Q(s).
\end{aligned}$$

Therefore, in order for the system to be internally stable for any stable Q(s), H(s) has to be stable.

Exercise 19.2 The characteristic polynomial for the closed-loop system is given by

$$s(s+2)(s+a) + 1 = 0.$$

Computing the locus of the closed-loop poles as a function of a can be done numerically. The closed-loop system is stable if a > 0.225 (approximately). This bound can also be derived by means of root-locus techniques or by evaluating the Routh-Hurwitz criterion. Another way of deriving bounds for the value of a is by casting this parametric uncertainty problem as an additive or multiplicative perturbation problem; see also Exercise 19.5. One can expect that the bounds derived in such a case would be rather conservative.
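The numeric bound can be reproduced by bisecting on the largest real part of the closed-loop poles of s^3 + (2+a)s^2 + 2as + 1; the Routh-Hurwitz condition 2a(2+a) > 1 gives the exact boundary a = (sqrt(6) - 2)/2 ≈ 0.2247:

```python
import numpy as np

# Bisect on the stability boundary of s(s+2)(s+a) + 1 = 0,
# i.e. s^3 + (2+a)s^2 + 2a*s + 1 = 0.
def max_real_part(a):
    return np.roots([1.0, 2.0 + a, 2.0 * a, 1.0]).real.max()

lo, hi = 0.0, 1.0          # unstable at a=0, stable at a=1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if max_real_part(mid) > 0:
        lo = mid           # still unstable: boundary lies above
    else:
        hi = mid           # stable: boundary lies below

a_star = 0.5 * (lo + hi)
assert abs(a_star - (np.sqrt(6) - 2) / 2) < 1e-4   # Routh-Hurwitz value
print(f"stability boundary a ~= {a_star:.4f}")
```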

Exercise 19.4 We can represent the uncertainty in a feedback configuration, as shown below. Note that the plant is SISO, and we consider the Δ and W blocks to be SISO systems as well, so we can commute them. The transfer function seen by the Δ block can be derived as follows:

$$\begin{aligned}
z &= P_0(Ww - Kz) \\
z &= (I + P_0K)^{-1}P_0Ww \\
M &= (I + P_0K)^{-1}P_0W.
\end{aligned}$$

Applying the small gain theorem, we obtain the condition for stability robustness of the closed-loop system as follows:


Figure 19.4

$$\sup_{\omega}\left|\frac{W(j\omega)P_0(j\omega)}{1 + P_0(j\omega)K(j\omega)}\right| < 1.$$

Let us calculate the determinant in question:

$$\det(I - M\Delta) = \det\begin{bmatrix} 1 & \dfrac{W_{21}K_1\Delta_1}{1 + K_{11}P_{11}} \\[1.5ex] \dfrac{W_{12}K_2\Delta_2}{1 + K_{22}P_{22}} & 1 \end{bmatrix} = 1 - \frac{W_{12}W_{21}K_1K_2\Delta_1\Delta_2}{(1 + K_{11}P_{11})(1 + K_{22}P_{22})}.$$

To have a stable perturbed system for arbitrary |Δ_1| ≤ 1 and |Δ_2| ≤ 1, it is necessary and sufficient to impose

$$\left|\frac{W_{12}W_{21}K_1K_2}{(1 + K_{11}P_{11})(1 + K_{22}P_{22})}\right| < 1.$$

$$e^{At} = I + At = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}, \quad \text{thus} \quad e^{At}b = \begin{bmatrix} t \\ 1 \end{bmatrix}.$$

b) The reachability matrix is:

$$\begin{bmatrix} b & Ab \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.$$

The reachability matrix has rank 2; therefore the system is reachable. Now, we compute the reachability Grammian over an interval of length 1:

$$G = \int_0^1 e^{A(T-\tau)}bb^Te^{A^T(T-\tau)}\,d\tau = \begin{bmatrix} \frac{1}{3} & \frac{1}{2} \\[0.5ex] \frac{1}{2} & 1 \end{bmatrix}.$$

The system is reachable, thus the Grammian is invertible, so given any final state x_f we can always find η such that x_f = Gη. In particular,

$$\eta = G^{-1}x_f = \begin{bmatrix} 18 \\ -10 \end{bmatrix}.$$

c) Following 23.5, define F^T(t) = e^{A(1-t)}b. Then u(t) = F(t)η is a control input that produces a trajectory satisfying the terminal constraint x_f. The control effort is given as

$$\int_0^T u^2\,d\tau = \eta^TG\eta.$$

In fact, this input corresponds to the minimum energy input required to reach x_f in 1 second. This can be verified by solving the corresponding underconstrained least squares problem by means of the tools we learned in Chapter 3.
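A numeric sketch of the computation above, assuming A = [[0,1],[0,0]] and b = [0,1]^T (consistent with e^{At} = I + At and the reachability matrix shown earlier); the target x_f = [1, -1]^T is an assumption inferred from η, since the problem statement is not in this excerpt:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])

# Reachability Grammian over [0, 1] by a fine Riemann sum,
# using e^{A(1-tau)} b = [1-tau, 1]^T.
taus = np.linspace(0.0, 1.0, 20001)
G = np.zeros((2, 2))
for t in taus:
    eb = (np.eye(2) + A * (1.0 - t)) @ b
    G += eb @ eb.T
G *= taus[1] - taus[0]
assert np.allclose(G, [[1/3, 1/2], [1/2, 1.0]], atol=1e-3)

# Assumed target state (hypothetical): x_f = [1, -1]^T gives eta = [18, -10]^T.
x_f = np.array([[1.0], [-1.0]])
eta = np.linalg.solve(G, x_f)
assert np.allclose(eta, [[18.0], [-10.0]], atol=0.1)

# Minimum energy of u(t) = b^T e^{A^T(1-t)} eta equals eta^T G eta.
energy = (eta.T @ G @ eta).item()
print(f"min energy ~= {energy:.2f}")
```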


b) NO. The explanation is as follows. With the control suggested, the closed-loop dynamics is now

$$\dot{x} = Ax + (b + \delta)u, \quad u = f^Tx + v,$$

$$\dot{x} = \big(A + (b + \delta)f^T\big)x + (b + \delta)v.$$

Suppose that w_i was the minimizing eigenvector of unity norm in part a). Then it is also an eigenvector of the matrix A + (b + δ)f^T, since w_i is orthogonal to b + δ. Therefore feedback does not improve reachability.

Exercise 24.5 a) The given system, for all t ≥ 0 with u(k) = 0 for k ≥ 0, has the following expression for the output:

$$y(t) = \sum_{k=-\infty}^{-1} CA^{t-k-1}Bu(k) = CA^t\sum_{k=-\infty}^{-1} A^{-k-1}Bu(k),$$

since matrix A is stable. Note that because of the stability of A, all of its eigenvalues are strictly within the unit circle, and from the Jordan decomposition we can see that

$$\lim_{k\to\infty}\|A^k\|_2 = 0,$$

therefore x(-∞) does not influence x(0). Thus the above equation can be used in order to find x(0) as follows:

$$x(0) = \sum_{k=-\infty}^{-1} A^{-k-1}Bu(k).$$

b) Since the system is reachable, any α ∈ R^n can be achieved by some choice of an input of the above form. Also, since the system is reachable, the reachability matrix R has full row rank. As a consequence, (RR^T)^{-1} exists. Thus, in order to minimize the input energy, we have to solve the following familiar least squares problem:

$$\min \|u\|_2 \quad \text{s.t.} \quad \alpha = \sum_{k=-\infty}^{-1} A^{-k-1}Bu(k).$$

Then the solution can be written in terms of the reachability matrix as follows:

$$u_{\min} = R^T(RR^T)^{-1}\alpha,$$

so that its squared norm can be expressed as

$$\|u_{\min}\|_2^2 = u_{\min}^Tu_{\min} = \big((RR^T)^{-1}\alpha\big)^T(RR^T)(RR^T)^{-1}\alpha = \alpha^T(RR^T)^{-1}\alpha.$$
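A finite-horizon analogue of this formula can be checked against numpy's pseudoinverse; A, B, and the horizon N below are arbitrary illustrative choices, with a finite reachability matrix standing in for the infinite sum:

```python
import numpy as np

# u_min = R^T (R R^T)^{-1} alpha is the minimum-norm solution of R u = alpha
# when R has full row rank.
rng = np.random.default_rng(1)
n, m, N = 3, 2, 10
A = 0.5 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

# R = [B, AB, A^2 B, ...]: maps the stacked input vector to the state.
R = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(N)])
alpha = rng.standard_normal(n)

u_min = R.T @ np.linalg.solve(R @ R.T, alpha)
assert np.allclose(u_min, np.linalg.pinv(R) @ alpha)   # same minimum-norm answer
assert np.allclose(R @ u_min, alpha)                   # constraint satisfied
# Energy equals alpha^T (R R^T)^{-1} alpha, as derived above.
assert np.isclose(u_min @ u_min, alpha @ np.linalg.solve(R @ R.T, alpha))
print("u_min = R^T (R R^T)^{-1} alpha verified")
```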


$$\begin{aligned}
&= \max\Big\{\sum_{t=0}^{\infty}\|y(t)\|_2^2 \;\Big|\; \sum_{t=-\infty}^{-1}\|u(t)\|_2^2 \le 1,\; u(k) = 0 \;\forall k \ge 0\Big\} \\
&= \max\Big\{\alpha^TQ\alpha \;\Big|\; \exists u \text{ s.t. } \alpha = x(0),\; \sum_{t=-\infty}^{-1}\|u(t)\|_2^2 \le 1,\; u(k) = 0 \;\forall k \ge 0\Big\} \\
&= \max\big\{\alpha^TQ\alpha \;\big|\; \min_u\|u\|_2^2 \le 1\big\} \\
&= \max\big\{\alpha^TQ\alpha \;\big|\; \alpha^TP^{-1}\alpha \le 1\big\}.
\end{aligned}$$

e) Now, using the fact shown in d), and noting that P^{-1} = M^TM, where M is the Hermitian square root matrix (which is invertible), we can compute:

$$\begin{aligned}
&\max\big\{\alpha^TQ\alpha \;\big|\; \alpha^TP^{-1}\alpha \le 1\big\} \\
&= \max\big\{\|O\alpha\|_2^2 \;\big|\; \|M\alpha\|_2^2 \le 1\big\} \qquad (\text{set } l = M\alpha) \\
&= \max\big\{(M^{-1}l)^TO^TOM^{-1}l \;\big|\; \|l\|_2^2 \le 1\big\} \\
&= \sigma_{\max}^2(OM^{-1}) = \lambda_{\max}\big((M^{-1})^TO^TOM^{-1}\big) = \lambda_{\max}\big((M^{-1})^TQM^{-1}\big) \\
&= \lambda_{\max}\big(QM^{-1}(M^{-1})^T\big) \\
&= \lambda_{\max}(QP).
\end{aligned}$$
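The identity max{α^T Q α : α^T P^{-1} α ≤ 1} = λ_max(QP) can be verified numerically; P and Q below are random symmetric positive definite matrices, illustrative stand-ins rather than the problem's Gramians:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
Mp = rng.standard_normal((n, n))
Mq = rng.standard_normal((n, n))
P = Mp @ Mp.T + np.eye(n)      # symmetric positive definite
Q = Mq @ Mq.T + np.eye(n)

# With P = L L^T (Cholesky), a = L v maps the unit sphere ||v|| = 1 onto the
# ellipsoid a^T P^{-1} a = 1, reducing the problem to an eigenvalue one:
# max = lambda_max(L^T Q L) = lambda_max(QP).
L = np.linalg.cholesky(P)
val_eig = np.linalg.eigvalsh(L.T @ Q @ L).max()
val_qp = np.linalg.eigvals(Q @ P).real.max()
assert np.isclose(val_eig, val_qp)

# Sampled points on the ellipsoid never exceed the maximum.
for _ in range(200):
    v = rng.standard_normal(n)
    a = L @ (v / np.linalg.norm(v))
    assert a @ Q @ a <= val_eig + 1e-9
print(f"lambda_max(QP) = {val_eig:.4f}")
```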

Exercise 25.2 a) Given:

$$H_1(s) = \frac{s+f}{(s+4)^3} = \frac{s+f}{s^3 + 12s^2 + 48s + 64}, \qquad H_2(s) = \frac{1}{s^2}.$$

Thus the state-space realizations in controller canonical form for H_1(s) and H_2(s) are:

$$A_1 = \begin{bmatrix} -12 & -48 & -64 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad B_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad C_1 = \begin{bmatrix} 0 & 1 & f \end{bmatrix}, \quad D_1 = 0,$$

and

$$A_2 = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad C_2 = \begin{bmatrix} 0 & 1 \end{bmatrix}, \quad D_2 = 0.$$
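The realization of H_1(s) can be verified by comparing C_1(sI - A_1)^{-1}B_1 with the transfer function at a few test points; f = 2 below is an arbitrary illustrative value:

```python
import numpy as np

# Controller canonical form: C (sI - A)^{-1} B reproduces (s+f)/(s+4)^3.
f = 2.0
A1 = np.array([[-12.0, -48.0, -64.0],
               [  1.0,   0.0,   0.0],
               [  0.0,   1.0,   0.0]])
B1 = np.array([[1.0], [0.0], [0.0]])
C1 = np.array([[0.0, 1.0, f]])

for s in [1.0 + 1.0j, 3.0 + 0.0j, -2.0 + 0.5j]:
    H = (C1 @ np.linalg.solve(s * np.eye(3) - A1, B1)).item()
    assert abs(H - (s + f) / (s + 4.0) ** 3) < 1e-10
print("C1 (sI - A1)^{-1} B1 = (s+f)/(s+4)^3 verified")
```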

Since f does not enter the controllability matrix for H_1(s) with this realization, controllability (which is equivalent to reachability for the CT case) is independent of the value of f. Thus, check the rank of the controllability matrix:


MIT OpenCourseWare
http://ocw.mit.edu

6.241J / 16.338J Dynamic Systems and Control
Spring 2011

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
