Lecture Notes: MIT System Identification


System identification as used in control engineering for the treatment of stochastic systems.


  • System Identification (6.435), Set 11

    Computation, Levinson Algorithm, Recursive Estimation

    Munther A. Dahleh


  • Computation

    Least squares via QR factorization: factor the regression matrix as
    Phi = Q R, with Q orthogonal (hence invertible and norm preserving) and R
    upper triangular. The estimate is obtained by solving the triangular system
    formed by the top block of R against Q^T Y, and the remaining components of
    Q^T Y give the residual error directly. A minimal numerical sketch follows
    below.

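    A minimal sketch of the QR-based least-squares solve (Python/NumPy; the
    variable names Phi, Y and the simulated first-order example are
    illustrative assumptions, not the notes' notation):

      # Hedged sketch: least-squares estimate via a (reduced) QR factorization.
      import numpy as np

      def lsq_via_qr(Phi, Y):
          """Solve min_theta ||Y - Phi @ theta||_2 via Phi = Q R."""
          Q, R = np.linalg.qr(Phi)              # Q has orthonormal columns
          theta = np.linalg.solve(R, Q.T @ Y)   # triangular system R theta = Q^T Y
          residual = Y - Phi @ theta
          return theta, float(residual @ residual)

      # Example: fit y(t) = a*y(t-1) + b*u(t-1) + e(t) from simulated data.
      rng = np.random.default_rng(0)
      N = 200
      u = rng.standard_normal(N)
      y = np.zeros(N)
      for t in range(1, N):
          y[t] = 0.7 * y[t - 1] + 0.5 * u[t - 1] + 0.1 * rng.standard_normal()
      Phi = np.column_stack([y[:-1], u[:-1]])
      theta_hat, err = lsq_via_qr(Phi, y[1:])
      print(theta_hat)  # approximately [0.7, 0.5]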

  • Initial Conditions:


  • What about initial conditions?

    Solution 1: start the sum at an appropriately shifted time index, so that
    every regressor uses only data that are actually available on the recorded
    interval (covariance method).

    Solution 2: assume the data are zero before and after the recorded
    interval, and augment the sum over the correspondingly extended range
    (autocorrelation method). A sketch contrasting the two constructions is
    given below.

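    A minimal sketch of the two constructions for an AR(n) fit (Python/NumPy;
    the function names and the zero-padding convention are illustrative
    assumptions):

      # Hedged sketch: covariance vs. autocorrelation formulations of an AR(n) fit.
      import numpy as np

      def ar_covariance_method(y, n):
          """Use only observed samples: the sum runs over t = n, ..., N-1."""
          N = len(y)
          Phi = np.column_stack([y[n - k - 1:N - k - 1] for k in range(n)])
          theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
          return theta

      def ar_autocorrelation_method(y, n):
          """Assume y = 0 outside the record and augment the sum accordingly."""
          ypad = np.concatenate([np.zeros(n), y, np.zeros(n)])
          N = len(ypad)
          Phi = np.column_stack([ypad[n - k - 1:N - k - 1] for k in range(n)])
          theta, *_ = np.linalg.lstsq(Phi, ypad[n:], rcond=None)
          return theta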

  • In the 2nd case (autocorrelation method), the (i, j) entry of the
    normal-equation matrix only depends on the lag difference i - j, i.e. on
    the sample autocorrelations of the data.


  • Block Toeplitz.

    This structure allows for fast computations: for an AR model of order n,
    the resulting Toeplitz system can be solved in O(n^2) operations rather
    than the O(n^3) of a general solver, e.g. as sketched below.

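    A minimal sketch that exploits the Toeplitz structure (Python/SciPy;
    scipy.linalg.solve_toeplitz uses a Levinson-type recursion internally, and
    the biased autocorrelation estimate below is an illustrative assumption):

      # Hedged sketch: Toeplitz normal equations for an AR(n) fit.
      import numpy as np
      from scipy.linalg import solve_toeplitz

      def ar_yule_walker(y, n):
          """Solve the Toeplitz system R a = r built from sample autocorrelations."""
          N = len(y)
          r = np.array([np.dot(y[:N - k], y[k:]) / N for k in range(n + 1)])
          # first column of R is (r[0], ..., r[n-1]); right-hand side is r[1..n]
          return solve_toeplitz(r[:-1], r[1:])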

  • Levinson Algorithm

    Definition: solve the order-n Toeplitz normal equations recursively in the
    model order, obtaining the order-(k+1) solution from the order-k solution
    at O(k) cost per step.


  • Order-update step (derivation sketch): start from the definition of the
    order-k normal equations, form the flipped (time-reversed) version of the
    same equations, and add a suitable multiple of the 2nd to the 1st; choosing
    the multiplier (the reflection coefficient) to cancel the extra term yields
    the order-(k+1) equations.

  • Flip: for a symmetric Toeplitz matrix, reversing (flipping) the entries of
    a vector commutes with the matrix up to the same reversal, so if R a = b
    then R (flip a) = flip b. Clearly the flipped order-k solution therefore
    solves the time-reversed system, which is what the order-update step
    exploits.


  • Initial conditions: the recursion starts from the trivial order-0 solution,
    and each order update costs O(k) operations, so reaching order n costs
    O(n^2) in total; a good reduction. A runnable sketch of the recursion
    follows below.

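    A minimal Levinson-Durbin recursion for the AR normal equations (Python;
    the sign conventions and variable names are illustrative assumptions, and
    the result should agree with ar_yule_walker above):

      # Hedged sketch: Levinson-Durbin recursion for R a = r, with R the
      # symmetric Toeplitz matrix of autocorrelations r[0..n-1] and
      # right-hand side r[1..n].
      import numpy as np

      def levinson_durbin(r, n):
          """r: autocorrelations r[0..n]; returns AR coefficients and final error."""
          r = np.asarray(r, dtype=float)
          a = np.zeros(n)          # a[k-1] multiplies y(t-k) in the predictor
          err = r[0]               # order-0 prediction-error variance
          for k in range(n):
              # reflection coefficient from the unmatched correlation term
              acc = r[k + 1] - np.dot(a[:k], r[k:0:-1])
              kappa = acc / err
              # order update: new = old - kappa * flip(old), plus one new entry
              a[:k] = a[:k] - kappa * a[:k][::-1]
              a[k] = kappa
              err *= (1.0 - kappa ** 2)
          return a, err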

  • Numerical Methods

    Both formulations have no analytical solutions in general, so the criterion
    is minimized iteratively.

    General procedure: update the current estimate by moving a step along a
    search direction, theta(k+1) = theta(k) + mu_k * f(k), where mu_k is the
    step size and the direction of the search f(k) depends on the derivative
    information that is used; the choices of f are listed next and summarized
    after that.


  • Different Methods

    f depends only on criterion values (derivative-free search);
    f depends on the gradient of the criterion (gradient / steepest descent);
    f depends on the gradient and the Hessian (Newton's method).

    Newton's method uses the exact Hessian; quasi-Newton methods approximate
    the Hessian (or its inverse) from gradient information. The update rules
    are summarized below.

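    As a summary (a hedged reconstruction of the standard update rules in
    generic notation, with V(theta) the criterion being minimized; the notes'
    exact symbols are not preserved):

      % general damped iteration: step size \mu_k, search direction f^{(k)}
      \theta^{(k+1)} = \theta^{(k)} + \mu_k f^{(k)}

      % gradient (steepest descent):
      f^{(k)} = -\nabla V(\theta^{(k)})

      % Newton:
      f^{(k)} = -\big[\nabla^2 V(\theta^{(k)})\big]^{-1} \nabla V(\theta^{(k)})

      % quasi-Newton: H_k \approx \nabla^2 V(\theta^{(k)}) built from gradients
      f^{(k)} = -H_k^{-1} \nabla V(\theta^{(k)})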

  • Special Schemes: Nonlinear Least Squares

    When the criterion is a sum of squared prediction errors, a family of
    algorithms is obtained by scaling the gradient direction with different
    (positive definite) matrices, with the step size chosen so that the
    criterion decreases at every iteration. See the Gauss-Newton sketch after
    the next slide.


  • Newton's method for the sum-of-squares criterion: the Hessian splits into a
    term built from the gradients of the prediction errors and a term weighted
    by the prediction errors themselves; the second term is negligible around
    the minimum, and dropping it gives the Gauss-Newton direction.

    Common variants: Gauss-Newton, Newton-Raphson, and gradient (steepest
    descent). A minimal Gauss-Newton sketch follows below.
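    A minimal Gauss-Newton iteration for a sum-of-squares criterion (Python;
    residuals(theta) and jacobian(theta) are user-supplied placeholders, and
    the step-halving rule is an illustrative choice):

      # Hedged sketch: generic Gauss-Newton for V(theta) = 0.5 * ||eps(theta)||^2.
      import numpy as np

      def gauss_newton(residuals, jacobian, theta0, n_iter=20):
          theta = np.asarray(theta0, dtype=float)
          for _ in range(n_iter):
              eps = residuals(theta)            # prediction errors
              J = jacobian(theta)               # d eps / d theta
              grad = J.T @ eps                  # gradient of the criterion
              H = J.T @ J                       # Gauss-Newton Hessian approximation
              step = np.linalg.solve(H, grad)
              mu, V = 1.0, 0.5 * eps @ eps
              # halve the step until the criterion decreases (crude line search)
              while 0.5 * np.sum(residuals(theta - mu * step) ** 2) > V and mu > 1e-8:
                  mu *= 0.5
              theta = theta - mu * step
          return theta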

  • For the instrumental-variable method, the defining correlation equations
    can likewise be solved by a Newton-Raphson iteration.

    Computing the gradient: for the ARMAX model
    A(q) y(t) = B(q) u(t) + C(q) e(t), write the one-step-ahead predictor and
    differentiate it with respect to the A-coefficients first.


  • Then differentiate with respect to the B-coefficients and with respect to
    the C-coefficients; recall the definition of the prediction error
    eps(t) = y(t) - yhat(t | theta), which enters the C-derivatives.


  • Re-arrange: the collected derivatives equal the regressor vector filtered
    by 1/C(q). The filter depends on the current parameter estimate; however,
    C(q) is assumed stable, so the filtering is well defined.

    Of course, the ARX model is a special case: there C(q) = 1 and the gradient
    reduces to the regressor itself.

    Exercise: repeat the computation for the Box-Jenkins black-box model and
    for a state-space model.

    A sketch of the filtered-gradient computation is given below.

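    A minimal sketch of the filtered-gradient computation for ARMAX
    (Python/SciPy; the coefficient ordering, the zero initial conditions, and
    the sign conventions are illustrative assumptions):

      # Hedged sketch: psi(t) = d yhat(t|theta)/d theta obtained by filtering
      # the regressor through 1/C(q), assuming C(q) is stable.
      import numpy as np
      from scipy.signal import lfilter

      def armax_gradient(y, u, eps, a, b, c):
          """a, b, c: coefficients of A(q) = 1 + a1 q^-1 + ..., B(q), C(q)."""
          na, nb, nc, N = len(a), len(b), len(c), len(y)

          def lagged(x, n, sign=1.0):
              # columns sign*x(t-1), ..., sign*x(t-n), zero-padded at the start
              return np.column_stack([sign * np.r_[np.zeros(k + 1), x[:N - k - 1]]
                                      for k in range(n)])

          phi = np.hstack([lagged(y, na, -1.0), lagged(u, nb), lagged(eps, nc)])
          c_poly = np.r_[1.0, c]                  # C(q) = 1 + c1 q^-1 + ...
          return lfilter([1.0], c_poly, phi, axis=0)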

  • Recursive Methods

    Computational advantages: the estimate is updated as each new data point
    arrives, and the covariance-type matrix is carried along with the estimate,
    so the raw data need not be stored.

    General form: the new estimate is the old estimate plus a gain times the
    latest prediction error. Specific forms differ in how the gain is computed;
    see the update equations below.

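    A hedged reconstruction of the general and specific update forms (generic
    notation; the gain K(t) and regressor phi(t) symbols are assumptions):

      % general form: correct the old estimate by a gain times the innovation
      \hat\theta(t) = \hat\theta(t-1) + K(t)\,\big(y(t) - \hat y(t \mid t-1)\big)

      % specific form for the linear regression y(t) = \varphi^T(t)\theta + e(t)
      \hat\theta(t) = \hat\theta(t-1) + K(t)\,\big(y(t) - \varphi^T(t)\,\hat\theta(t-1)\big)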

  • Least-squares estimate as a recursive estimate

    Start from the off-line (batch) least-squares formula and rewrite the sums
    at time t in terms of the corresponding sums at time t - 1.


  • Recursive Identification (LS)

    Assume the least-squares criterion weights past data. Example: exponential
    weights lambda^(t-k) with a forgetting factor lambda <= 1, which in general
    means that older data are discounted and the estimator can track slowly
    varying parameters.



  • A recursive algorithm

    Normalized gain estimate: the gain is the current regressor scaled by the
    inverse of a running (normalized) information matrix.


  • Recursive Algorithm: the resulting update equations for the estimate and
    the information matrix are summarized below.

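    A hedged reconstruction of the standard recursive least-squares recursions
    with exponential forgetting (generic symbols; lambda = 1 recovers the
    unweighted case):

      \varepsilon(t) = y(t) - \varphi^T(t)\,\hat\theta(t-1)
      R(t) = \lambda\,R(t-1) + \varphi(t)\,\varphi^T(t)
      \hat\theta(t) = \hat\theta(t-1) + R^{-1}(t)\,\varphi(t)\,\varepsilon(t)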

  • Properties of Recursive Algorithms

    You only need to store the current estimate and the matrix R(t) (or its
    inverse P(t)) to compute the next estimate; the raw data can be discarded.

    P(t) is (an estimate of) the covariance of the parameter estimate and hence
    gives an estimate of its accuracy. [Recall that the least-squares
    covariance is proportional to the inverse of the information matrix.]

    The matrix is symmetric, so you only need to store the lower part of it.

    (Save on memory)


  • Recursive Algorithms with Efficient Matrix Inversion

    Define P(t) = R^{-1}(t). The matrix inversion lemma lets P(t) be updated
    directly from P(t-1), without inverting R(t) at every step: consider the
    rank-one update of R(t) by phi(t) phi^T(t) and apply the inversion formula
    stated below.

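    The inversion formula referred to is the matrix inversion lemma
    (Sherman-Morrison-Woodbury), stated here in a generic form with symbols
    chosen for illustration:

      (A + B C D)^{-1} = A^{-1} - A^{-1} B \big(C^{-1} + D A^{-1} B\big)^{-1} D A^{-1}

      % rank-one special case used in the recursion (forgetting factor \lambda):
      \big(\lambda R + \varphi\varphi^T\big)^{-1}
        = \frac{1}{\lambda}\Big(R^{-1}
          - \frac{R^{-1}\varphi\,\varphi^T R^{-1}}{\lambda + \varphi^T R^{-1}\varphi}\Big)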


  • Recursive Algorithm (P-form):

    Advantage: no need to compute R^{-1}(t) at each iteration; P(t) = R^{-1}(t)
    is iterated directly!

    Kalman filter interpretation: the update has the structure of a Kalman
    filter for a constant parameter, the vector multiplying the prediction
    error is the gain, and P(t) is the solution of the corresponding Riccati
    equation. A runnable sketch of the P-form follows below.

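    A minimal P-form recursive least-squares sketch with exponential forgetting
    (Python; the class name, the initialization P(0) = p0*I, and the forgetting
    factor are illustrative assumptions):

      # Hedged sketch: RLS with P = R^{-1} iterated directly via the
      # rank-one matrix inversion lemma.
      import numpy as np

      class RecursiveLS:
          def __init__(self, dim, lam=1.0, p0=1e4):
              self.theta = np.zeros(dim)      # parameter estimate
              self.P = p0 * np.eye(dim)       # "covariance" matrix P = R^{-1}
              self.lam = lam                  # forgetting factor (1.0 = none)

          def update(self, phi, y):
              phi = np.asarray(phi, dtype=float)
              Pphi = self.P @ phi
              gain = Pphi / (self.lam + phi @ Pphi)   # Kalman-like gain
              eps = y - phi @ self.theta              # prediction error
              self.theta = self.theta + gain * eps
              self.P = (self.P - np.outer(gain, Pphi)) / self.lam
              return self.theta

      # usage sketch: feed (phi(t), y(t)) pairs one at a time
      # rls = RecursiveLS(dim=2, lam=0.99)
      # for t in range(1, N):
      #     rls.update([y[t - 1], u[t - 1]], y[t])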

  • Recursive Instrumental Variable Method

    For a fixed instrument that is not model dependent, the
    instrumental-variable estimate admits the same kind of recursion as least
    squares, with the instrument replacing the regressor in the gain and in the
    matrix update; see the recursions below.

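    A hedged reconstruction of the recursive instrumental-variable recursions
    for a fixed instrument z(t) (generic symbols, unweighted case):

      \varepsilon(t) = y(t) - \varphi^T(t)\,\hat\theta(t-1)
      R(t) = R(t-1) + z(t)\,\varphi^T(t)
      \hat\theta(t) = \hat\theta(t-1) + R^{-1}(t)\,z(t)\,\varepsilon(t)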

  • You can show that the recursive estimate converges to the limit that
    satisfies the instrumental-variable correlation equations, which by
    assumption have a unique solution.


  • Adaptive Control

    Block diagram (figure not reproduced): plant P in feedback with a
    controller whose parameters are supplied by a recursive estimator.

    r is a bounded input.

    Estimator: standard (recursive) least squares.

    Controller: chosen so that the resulting closed loop is a stable
    time-varying system; the output is then bounded for any bounded r.
    A minimal certainty-equivalence sketch follows below.

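    A minimal certainty-equivalence adaptive control sketch (Python; the
    first-order plant, the reference signal, and the one-step-ahead control law
    are illustrative assumptions chosen to show the estimator/controller
    interplay, not the notes' specific scheme):

      # Hedged sketch: recursive least squares + certainty-equivalence control
      # of an unknown first-order plant y(t+1) = a*y(t) + b*u(t) + e(t).
      import numpy as np

      rng = np.random.default_rng(1)
      a_true, b_true = 0.8, 1.0
      N = 300
      y, u = np.zeros(N), np.zeros(N)
      r = np.sign(np.sin(0.05 * np.arange(N)))   # bounded reference input

      theta = np.array([0.1, 0.5])               # initial guess [a_hat, b_hat]
      P = 100.0 * np.eye(2)                      # RLS "covariance"
      lam = 0.99                                 # forgetting factor

      for t in range(N - 1):
          a_hat, b_hat = theta
          b_safe = b_hat if abs(b_hat) > 1e-3 else 1e-3
          # certainty-equivalence control: aim y(t+1) at the reference r(t+1)
          u[t] = (r[t + 1] - a_hat * y[t]) / b_safe
          y[t + 1] = a_true * y[t] + b_true * u[t] + 0.05 * rng.standard_normal()
          # recursive least-squares update with the new data point
          phi = np.array([y[t], u[t]])
          Pphi = P @ phi
          gain = Pphi / (lam + phi @ Pphi)
          theta = theta + gain * (y[t + 1] - phi @ theta)
          P = (P - np.outer(gain, Pphi)) / lam

      print(theta)   # should approach [0.8, 1.0] under sufficient excitation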
