
G. Marro

Controlled and Conditioned Invariants

in Linear System Theory

Volume 2:

New Applications and Improved Software ∗

∗ The material in this monograph is in part deduced from the slides “Linear Control Theory in Geometric Terms” presented at the CIRA Summer School “Antonio Ruberti”, Bertinoro, July 15-20, 2002, by G. Marro, L. Ntogramatzidis, D. Prattichizzo, and E. Zattoni.

The description of the software comes in part from “Multivariable Feedback with Geometric Tools” by G. Marro, in A Tribute to Antonio Lepschy, G. Picci and M.E. Valcher, Eds., Edizioni Libreria Progetto, Padova, 2007.


Notation

A, H            sets or vector spaces
∈               belonging to
⊆               contained in
∀               for all
∪               union
∩               intersection
∅               the empty set
R               the set of all the real numbers
C               the set of all the complex numbers
Cg              the subset of all the complex numbers related to stable modes
                (with real part negative or absolute value less than one)
R^n             the set of all the n-tuples of real numbers
C^n             the set of all the n-tuples of complex numbers
a, x            elements of sets or vectors
{xi}            the set whose elements are xi
×               cartesian product
⊕               direct sum
A, X            matrices or linear transformations
O               a null matrix
I               an identity matrix
In              the n×n identity matrix
Aᵀ              the transpose of matrix A
                (the conjugate transpose if A is complex)
imA             the image of A
kerA            the kernel of A
σ(A)            the spectrum of matrix A
tr(A)           the trace of matrix A
A−1             the inverse of matrix A
A+              the pseudoinverse of matrix A
〈x, y〉          the scalar product of vectors x and y
                (equivalent to xᵀy in R^n or C^n)
Af, Xf          function spaces
[t0, t1]        a closed interval
[t0, t1)        a right-open interval
f(·)            a time function
ḟ(·)            the first derivative of f(·)
f(t)            the value of f(·) at t
f|[t0,t1]       a segment of f(·)
δ(t)            the Dirac impulse
δ(k)            the unit impulse for discrete-time systems
j               the imaginary unit
z∗              the conjugate of the complex number z
sign x          the signum function (x real)
|z|             the absolute value of the complex number z
arg z           the argument of the complex number z
‖x‖2            the Euclidean norm of the vector x
‖A‖2            the Euclidean norm of the linear transformation A
A|J             the restriction of the linear map A to the A-invariant J
A|X/J           the linear map induced by A on the quotient space X/J
H⊥              the orthogonal complement of H
AH              the image of H in the linear transformation A
A−1 H           the inverse map of H in the linear transformation A
J               a generic invariant
V               a generic controlled invariant
S               a generic conditioned invariant
maxJ(A, E)      the maximal A-invariant contained in E
minJ(A, D)      the minimal A-invariant containing D
maxV(A, B, E)   the maximal (A,B)-controlled invariant contained in E
minS(A, C, D)   the minimal (A,C)-conditioned invariant containing D
Σ               a generic linear time-invariant (LTI) system
V∗(Σ)           the maximum output-nulling controlled invariant of Σ
S∗(Σ)           the minimum input-containing conditioned invariant of Σ
Z(Σ)            the set of all the invariant zeros of Σ
Z(V)            the set of all the internal unassignable eigenvalues of V
‖Σ‖2            the H2-norm of Σ
□               end of discussion

Contents

1 Introduction
1.1 State space models
1.1.1 Connection with transfer function representations
1.1.2 Computational aspects
1.1.3 A few words on notation
1.1.4 Finite delays and FIR systems
1.1.5 Duality
1.2 Objectives of our investigation

2 Geometric Tools and System Properties
2.1 Basic operations and relations
2.1.1 Matlab commands referring to Section 2.1
2.2 Invariant subspaces
2.2.1 Internal and external stability of an invariant
2.2.2 Lattices of invariants
2.2.3 Controllability and observability
2.2.4 Changes of basis and equivalent systems
2.2.5 The Kalman canonical decomposition
2.2.6 Pole assignment
2.2.7 Observers
2.2.8 Matlab commands referring to Section 2.2
2.3 Controlled and conditioned invariant subspaces
2.3.1 The reachable subspace on a controlled invariant
2.3.2 Internal and external stabilizability of controlled and conditioned invariants
2.3.3 Extension to quadruples
2.3.4 Matlab commands referring to Section 2.3
2.4 System properties stated in geometric terms
2.4.1 Left invertibility, right invertibility, and relative degree
2.4.2 Invariant zeros
2.4.3 Matlab commands referring to Section 2.4

3 Disturbance Decoupling and Unknown-Input State Observation
3.1 Disturbance decoupling
3.1.1 Inaccessible disturbance decoupling by state feedback
3.1.2 Measurable signal decoupling and unknown-input state observation
3.1.3 Previewed signal decoupling and delayed state estimation
3.1.4 Disturbance decoupling by dynamic output feedback
3.1.5 Matlab commands referring to Section 3.1
3.2 Model following
3.2.1 Matlab commands referring to Section 3.2
3.3 The multivariable regulator problem
3.3.1 Matlab commands referring to Section 3.3
3.4 Noninteraction and fault detection and identification
3.4.1 Matlab commands referring to Section 3.4

4 Geometric approach to H2-optimal regulation and filtering
4.1 Disturbance decoupling in minimal H2-norm
4.1.1 The Kalman regulator
4.1.2 The Kalman dual filter and the Kalman filter
4.1.3 Other H2-optimal control and filtering problems
4.1.4 Matlab commands referring to Section 4.1

A Some basic Matlab commands
A.1 Vectors, matrices and polynomials
A.2 Interaction with the Command Window
A.3 Cell arrays
A.4 Binary logic
A.5 Conditional execution of a block of commands
A.6 Further commands
A.7 Linear Time-Invariant (LTI) systems
A.8 A few commands of the GA Toolbox
A.9 M-files and functions
A.10 Some system connections

References

Index

1 Introduction

The main object of this monograph is a survey of geometric approach techniques to solve linear multivariable system regulation and observation problems. As far as feedback regulation problems are concerned, let us consider some basic notation and terminology. Consider the following very standard block diagram, which shows the feedback connection of a controlled system (plant) Σ and a controller Σr. Fig. 1.2 shows a possible detailed feedback regulation layout.

Fig. 1.1. A standard reference block diagram.

This fits the concise block diagram shown in Fig. 1.1 with the assumptions w := {r, d1, d2, d3}, η := {e, y1}, y := {e, d1, y2} (with feedthrough on d1).

Fig. 1.2. A feedback regulation scheme.

The more detailed scheme shown in Fig. 1.3 includes a controlled system (plant) Σ and a controller Σr, with a feedback part Σc and a feedforward part Σf. This is more complete than the previous one and it is what we need for a correct exposition of regulation theory.

Fig. 1.3. A feedforward/feedback regulation scheme.

• rp  previewed reference
• r   reference
• y1  controlled output
• y2  informative output
• e   error variable
• u   manipulated input
• d1  measurable disturbance
• d2  non-measurable disturbance
• d3  non-measurable disturbance

The block diagram in Fig. 1.3 can be concisely represented by the one shown in Fig. 1.4, whose differences from that in Fig. 1.1 are basic.

Fig. 1.4. A more comprehensive block diagram.

In the above figure d := {d1, d2, d3}, y := {y1, y2, d1}. All the symbols in the figure denote signals, represented by real vectors varying in time. Both the plant and the controller are assumed to be linear (zero state and superposition property). The blocks represent oriented systems (inputs, outputs), that are assumed to be causal. The plant Σ is given and the controller Σr is to be designed to (possibly) maintain e(·) = 0.

1.1 State space models

The state space model for a continuous-time Linear Time-Invariant (LTI) system Σ is

ẋ(t) = A x(t) + B u(t) ,   x(0) = x0
y(t) = C x(t) + D u(t)                                               (1.1)

with the state x ∈ X = R^n, the input u ∈ U = R^p, the output y ∈ Y = R^q and A, B, C, D real matrices of suitable dimensions. If p = q = 1, Σ is said to be a SISO (single-input-single-output) system, otherwise it is said to be a MIMO (multi-input-multi-output) system. Unless otherwise specified, matrices [Bᵀ Dᵀ] and [C D] are assumed to be full-rank. The parameter x0 ∈ R^n denotes the initial state. The system will be referred to as the quadruple (A,B,C,D) or, if D = O, as the triple (A,B,C). If D ≠ O, Σ is said to be non-purely dynamic and the corresponding term in (1.1) is referred to as the feedthrough matrix. Most of the theory will be derived referring to triples since conditions and algorithms are simpler and the subsequent extension to quadruples is straightforward.

The state space model for a discrete-time LTI system Σd is

x(k+1) = Ad x(k) + Bd u(k) ,   x(0) = x0
y(k) = Cd x(k) + Dd u(k)                                             (1.2)

The solution of the former equation in (1.1), that provides the state evolution versus time, is¹

x(t) = e^{At} x0 + ∫_0^t e^{A(t−τ)} B u(τ) dτ                        (1.3)

while that of the overall system, that provides the output evolution versus time, is

y(t) = C e^{At} x0 + C ∫_0^t e^{A(t−τ)} B u(τ) dτ + D u(t)           (1.4)

Likewise, for system (1.2) the state and output functions versus time are provided by

¹ Let us recall that the matrix exponential e^{At} is defined through the power series expansion

e^{At} = Σ_{i=0}^{∞} A^i t^i / i! = I + A t + A²t²/2! + A³t³/3! + ...

that can be proven to converge in norm for all t.

x(k) = Ad^k x0 + Σ_{h=0}^{k−1} Ad^{k−h−1} Bd u(h)                    (1.5)

and

y(k) = Cd Ad^k x0 + Cd Σ_{h=0}^{k−1} Ad^{k−h−1} Bd u(h) + Dd u(k)    (1.6)

Recall that g(t) = C e^{At} B + D δ(t) is the impulse response² of system (1.1), while the sequence gd(k) = Cd Ad^k Bd + Dd is that of (1.2).

Also recall that a system is internally asymptotically stable if all the eigenvalues of A or Ad belong to Cg. We herein denote by Cg, C0 and Cb the left open half-plane, the imaginary axis and the right open half-plane in the continuous-time case, and the open unit disk, the unit circle and the part of the complex plane not belonging to the closed unit disk in the discrete-time case.

1.1.1 Connection with transfer function representations

By taking the Laplace transform of (1.1) or the Z transform of (1.2) we obtain the transfer function models

Y(s) = G(s) U(s)   with   L(Σ) = G(s) = C (sI − A)−1 B + D           (1.7)

and

Y(z) = Gd(z) U(z)   with   Z(Σd) = Gd(z) = Cd (zI − Ad)−1 Bd + Dd    (1.8)
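As a quick numerical check of (1.7), and assuming the Control System Toolbox is available, the transfer function of a state space model can be computed directly; the data below are purely illustrative.

A = [0 1; -2 -3]; B = [0; 1]; C = [1 0]; D = 0;   % illustrative quadruple (1.1)
Sigma = ss(A,B,C,D);                              % state space model
G = tf(Sigma);                                    % G(s) = C (sI-A)^(-1) B + D
% the same value at a single complex frequency, computed from the formula:
s = 2i;  Gval = C*((s*eye(2)-A)\B) + D;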

1.1.2 Computational aspects

Let us briefly consider some computational aspects related to the solution of the continuous-time equations (1.4). These can easily be extended to the discrete-time case embodied by (1.6).

Using an exosystem

If in (1.1) the input u(t) is absent, an autonomous continuous-time system is obtained. With the symbols suitably changed, this can be represented by the equations

² The symbol δ(t) is used herein to denote a diagonal matrix of Dirac impulses, namely a signal such that ∫_{−ε}^{ε} δ(t) dt = I for any ε > 0.

v̇(t) = W v(t) ,   v(0) = v0
u(t) = L v(t)                                                        (1.9)

whose solution is provided in terms of the matrix exponential of W as

u(t) = L e^{Wt} v0                                                   (1.10)

If the output u(t) of (1.9) is used as the input in (1.1), the connection shown in Fig. 1.5 is obtained, where the system Σe is a part, called exosystem, of an autonomous overall system Σ̂ described by

x̂ = [x; v] ,   x̂0 = [x0; v0] ,   Ŵ = [A  BL; O  W] ,   L̂ = [C  DL]

Hence the response of Σ to the input signal u(·) generated by the exosystem in the presence of the initial state x0 is obtained by computing a matrix exponential as in (1.10). Exosystems are used to generate particular input signals,

Fig. 1.5. Connection with an exosystem.

like suitable test signals used to analyze the system behavior. Examples are the step, the ramp and the sinusoid shown in Fig. 1.6. In these cases the generated signals are scalar, but multiple exosystems can be connected in parallel to provide multicomponent inputs.

Fig. 1.6. Test signals: step, ramp and sinusoid.

Step

Assume W = 0 (scalar), L = 1, v0 = 1. It follows that e^{Wt} = 1 and u(t) = L e^{Wt} v0 is a unit step.

Ramp

Assume

W = [0  1; 0  0] ,   L = [1  0] ,   v0 = [0; 1]

Hence

e^{Wt} = [1  t; 0  1] ,   u(t) = L e^{Wt} v0 = t

Sinusoid

Assume

W = [0  ω; −ω  0] ,   L = [cos ϕ   sin ϕ] ,   v0 = [0; 1]

Hence

e^{Wt} = [cos ωt   sin ωt; −sin ωt   cos ωt]

u(t) = L e^{Wt} v0 = sin ωt cos ϕ + cos ωt sin ϕ = sin(ωt + ϕ)
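The above expressions are easily checked numerically with the matrix exponential; the following sketch uses illustrative values of ω and ϕ and is not part of the GA toolbox.

omega = 2; phi = pi/6;                    % illustrative values
W  = [0 omega; -omega 0];
L  = [cos(phi) sin(phi)];
v0 = [0; 1];
t  = 0:0.01:5;
u  = zeros(size(t));
for k = 1:numel(t)
    u(k) = L*expm(W*t(k))*v0;             % u(t) = L e^{Wt} v0
end
err = max(abs(u - sin(omega*t + phi)));   % negligible up to rounding errors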

Converting from continuous to discrete time

Very often the input signal u(·) for the continuous-time system (1.1) is recorded as a sequence of samples u(k), k = 0, 1, ..., corresponding to a sampling time T, in general very small. In this case u(t) can be approximated by the piecewise-constant function shown in Fig. 1.7.

Fig. 1.7. Approximation with a piecewise-constant function.

Let the state and output of the continuous-time system be accordingly sampled at the time instants kT, k = 0, 1, .... It can easily be shown that these

samples can be computed as the responses (1.5) and (1.6) of the discrete-time system (1.2) with the following assumptions:

Ad = e^{AT}
Bd = (∫_0^T e^{A(T−τ)} dτ) B = −(∫_T^0 e^{Aρ} dρ) B = (∫_0^T e^{Aτ} dτ) B
Cd = C ,   Dd = D                                                    (1.11)

Matrices Ad and Bd can be computed as submatrices of a suitable matrix exponential, the same as that used for the computation of the time response to a step generated by an exosystem. Consider the extended autonomous system

d x̂(t)/dt = Â x̂(t) ,   x̂(0) = x̂0                                    (1.12)

with

x̂ = [x; v] ,   Â = [A  B; O  O] ,   x̂0 = [O; Ip]

whose evolution in time is

e^{ÂT} = [e^{AT}   ∫_0^T e^{A(T−τ)} B dτ ;  O   Ip] = [Ad  Bd; O  Ip]          (1.13)

Hence Ad and Bd can be computed as submatrices of a suitable matrix exponential.
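The partition (1.13) translates directly into a few Matlab statements; the following is only a sketch with illustrative data, and the result can be compared with the c2d function of the Control System Toolbox when that toolbox is available.

A = [0 1; -2 -3];  B = [0; 1];  T = 0.1;   % illustrative data
[n,p] = size(B);
Ahat = [A B; zeros(p,n+p)];                % extended matrix of (1.12)
E = expm(Ahat*T);                          % exponential (1.13)
Ad = E(1:n,1:n);                           % upper left block
Bd = E(1:n,n+1:n+p);                       % upper right block
% with the Control System Toolbox: sysd = c2d(ss(A,B,eye(n),zeros(n,p)),T,'zoh')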

How to obtain a better approximation

In Fig. 1.8 a different approximation of the input signal u(·) is shown. It is obtained by summing a step and a ramp at every discrete time instant.

Fig. 1.8. Approximation with a piecewise-linear function.

Starting from the zero state, the value of the state at T caused by a unit step is given by the matrix Bd defined in (1.11), while that caused by a unit-slope ramp is

B′d = ∫_0^T e^{A(T−τ)} B τ dτ

The sampled state of system (1.1) is given by

x((k+1)T) = Ad x(kT) + Bd u(kT) + B′d (1/T) (u((k+1)T) − u(kT)) ,   k = 0, 1, ...

where the matrices on the right can be derived again by means of a matrix exponential referring to an autonomous system like that shown in Fig. 1.5, but with the exosystem previously presented to generate a ramp, with two stages of integrators in cascade rather than one. Let us denote by r and v the corresponding states: the overall system can still be represented with equation (1.12), but with the state, the matrix Â, and the initial state defined as

x̂ = [x; v; r] ,   Â = [ A  B  O
                        O  O  Ip
                        O  O  O ] ,   x̂0 = [x0; v0; r0]

The exponential of ÂT can be partitioned as

e^{ÂT} = [ Ad  Bd  B′d
           O   Ip  T Ip
           O   O   Ip  ]

thus suggesting a convenient procedure to compute Ad, Bd and B′d.

1.1.3 A few words on notation

Let us consider the following multivariable system, with a non-manipulable input h, a manipulable input u, a regulated output e and an informative output y.

Fig. 1.9. A five-map system.

The system equations are

ẋ(t) = A x(t) + H h(t) + B u(t)
e(t) = E x(t) + D1 u(t)
y(t) = C x(t) + D2 h(t)

or

[ ẋ(t) ]   [ A  H   B  ] [ x(t) ]
[ e(t) ] = [ E  0   D1 ] [ h(t) ]
[ y(t) ]   [ C  D2  0  ] [ u(t) ]

or else, by using a notation that is very popular today, we can denote the system with

Σ = [ A  H   B
      E  0   D1
      C  D2  0  ]

This has the advantage of pointing out very clearly the existence of feedthrough terms.

1.1.4 Finite delays and FIR systems

The regulator Σr shown in Fig. 1.4 may be a continuous-time or a discrete-time system ruled by

ż(t) = N z(t) + M y(t)
u(t) = L z(t) + K y(t)                                               (1.14)

or

z(k+1) = Nd z(k) + Md y(k)
u(k) = Ld z(k) + Kd y(k)                                             (1.15)

but to solve some basic control theory problems, like perfect or almost perfect decoupling, perfect or almost perfect tracking and their duals, some other signal processing techniques are necessary. These are the finite delay and the convolutor or finite impulse response (FIR) system.

Fig. 1.10. Further components of regulators.

The mathematical models of the delay and of the convolutor or Finite Impulse Response (FIR) system are set as follows.

Continuous-time:

y(t) = u(t − t0)                                                     (1.16)

y(t) = ∫_0^{tf} W(τ) u(t − τ) dτ                                     (1.17)

where W(τ), τ ∈ [0, tf], is a q × p real matrix of continuous time functions, referred to as the gain of the FIR system, while [0, tf] is called the window of the FIR system.

The corresponding transfer functions are

G(s) = I e^{−t0 s}

where I denotes an identity matrix of suitable dimension, and

G(s) = L(W̄(t))

where W̄(t) is equal to W(t) in [0, tf] and 0 in (tf, ∞).

Discrete-time:

y(k) = u(k − k0)                                                     (1.18)

y(k) = Σ_{l=0}^{kf} W(l) u(k − l)                                    (1.19)

where W(k), k ∈ [0, kf], is a q × p real matrix of discrete time functions, referred to as the gain of the FIR system, while [0, kf] is called the window of the FIR system.

The corresponding transfer functions are

G(z) = I z^{−k0}

where I is defined as above, and

G(z) = Z(W̄(k))

where W̄(k) is equal to W(k) in [0, kf] and 0 in [kf+1, ∞).
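A discrete-time FIR system (1.19) is just a finite convolution of the input samples with the gain sequence; the following sketch, with an arbitrary illustrative gain W(l) and input, computes its response.

p = 1; q = 1; kf = 4;                        % illustrative dimensions and window
Wg = rand(q,p,kf+1);                         % gain W(l), l = 0,...,kf (arbitrary here)
u  = [ones(1,10) zeros(1,10)];               % input sequence u(k)
N  = numel(u);
y  = zeros(q,N);
for k = 1:N
    for l = 0:min(kf,k-1)
        y(:,k) = y(:,k) + Wg(:,:,l+1)*u(:,k-l);   % y(k) = sum_l W(l) u(k-l)
    end
end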

1.1.5 Duality

Definition 1.1. (dual system) Given an LTI system Σ : (A, B, C, D), its dual system is defined as Σᵀ : (Aᵀ, Cᵀ, Bᵀ, Dᵀ). Given an FIR system Σ : W(τ), its dual system is defined as Σᵀ : Wᵀ(τ). The dual system of a finite delay is the same delay.

Consider the interconnection of systems shown in Fig. 1.11: the overall dual system is obtained by reversing the order of serially connected systems and interchanging branching points with summing junctions and vice versa. In fact, referring to Fig. 1.11, we have:

Σ = [ A1      0       0    B1
      0       A2      0    B2
      B3,1C1  B3,2C2  A3   0
      0       0       C3   0  ]

Σᵀ = [ A1ᵀ   0     C1ᵀB3,1ᵀ   0
       0     A2ᵀ   C2ᵀB3,2ᵀ   0
       0     0     A3ᵀ        C3ᵀ
       B1ᵀ   B2ᵀ   0          0   ]

Fig. 1.11. The dual of an interconnection.
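In Matlab terms taking the dual simply amounts to transposing the four matrices and exchanging the roles of B and C; a minimal sketch with arbitrary illustrative data follows, where the transfer function of the dual is the transpose of the original one.

A = [0 1; -2 -3];  B = [0; 1];  C = [1 0];  D = 0;   % illustrative quadruple
At = A';  Bt = C';  Ct = B';  Dt = D';               % dual system (A',C',B',D')
s  = 1i;                                             % any complex frequency
G  = C *((s*eye(2)-A )\B ) + D;                      % G(s)
Gt = Ct*((s*eye(2)-At)\Bt) + Dt;                     % transfer function of the dual = G(s).'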

1.2 Objectives of our investigation

1 - The seven properties of linear time-invariant systems

• Controllability
• Observability
• Internal and External Stability
• Right Invertibility
• Left Invertibility
• Relative Degree
• Phase Minimality

These will all be expressed in geometric terms, i.e., in terms of invariants, controlled invariants and conditioned invariants.

2 - Some basic regulation problems

• Signal Decoupling and its dual
• Disturbance Decoupling with Output Feedback
• Perfect Tracking and its dual
• Feedforward Model Matching and its dual
• Feedback Model Matching
• Regulation with Internal Model
• Noninteraction
• Fault Detection and Isolation

Furthermore, disturbance decoupling with state or output feedback, measurable signal decoupling, perfect tracking, model matching and their duals can also be geometrically solved in the minimal H2-norm sense by referring to the corresponding Hamiltonian systems.

2 Geometric Tools and System Properties

The geometric approach is a control theory for multivariable linear systems based on

• vector spaces and subspaces
• linear transformations

The geometric approach consists of

• an algebraic part (theoretical)
• an algorithmic part (computational)

Most of the mathematical support is developed in coordinate-free form. This choice leads to simpler and more elegant results, which facilitate insight into the actual meaning of statements and procedures. The computational aspects are considered independently of the theory and handled by means of the standard methods of matrix algebra, once a suitable coordinate system is defined¹.

2.1 Basic operations and relations

Let A : V → W be a linear map between the vector spaces V and W. Let X, Y, Z be subspaces (of V or W).

Basic operations:

• sum: Z = X + Y
• linear transformation: Y = A X
• orthogonal complementation: Y = X⊥
• intersection: Z = X ∩ Y
• inverse linear transformation: X = A−1 Y

¹ A very concise primer on the basic mathematical tools is provided by Appendix A - Mathematical Background - in Vol. 1.

Basic relations:

X ∩ (Y + Z) ⊇ (X ∩ Y) + (X ∩ Z) (2.1)

X + (Y ∩ Z) ⊆ (X + Y) ∩ (X + Z) (2.2)

(X⊥)⊥ = X (2.3)

(X + Y)⊥ = X⊥ ∩ Y⊥ (2.4)

(X ∩ Y)⊥ = X⊥ + Y⊥ (2.5)

A (X ∩ Y) ⊆ AX ∩ AY (2.6)

A (X + Y) = AX + AY (2.7)

A−1 (X ∩ Y) = A−1 X ∩ A−1 Y (2.8)

A−1 (X + Y) ⊇ A−1 X + A−1 Y (2.9)

AX ⊆ Y ⇔ AT Y⊥ ⊆ X⊥ (2.10)

(A−1 Y)⊥ = AT Y⊥ (2.11)

Relations (2.10) and (2.11) are proven in Vol. 1 as Property 3.1.2 and Property 3.1.3.

Property 2.1. (modular rule) Relations (2.1) and (2.2) hold with the equality sign if one of the involved subspaces X, Y, Z is contained in any of the others.

Proof. Refer to Vol. 1, Property 3.1.1. □

From the above properties it follows that the set of all the subspaces of a given vector space X is a non-distributive modular lattice² Gr(X) with respect to the binary operations +, ∩ and the partial ordering ⊆, whose universal bounds are X and {0}. It is represented by the Hasse diagram shown in Fig. 2.1.

Fig. 2.1. Hasse diagram of the lattice Gr(X).

² For the definitions of lattice and Hasse diagram, refer to Vol. 1, Section A.1.2. The notation Gr(X) stands for Grassmann's manifold of X.

Property 2.2. (Grassmann's rule)

dim(X + Y) + dim(X ∩ Y) = dim X + dim Y

Definition 2.3. (Grassmann's manifold) Let X be a vector space. Gr(X) (Grassmann's manifold in X) is defined as the set of all the subspaces in X.

Property 2.4. Gr(X) is a non-distributive modular lattice, whose universal bounds are X and {0}.

This means that, for every pair X1, X2,

1. X1 + X2 is the smallest subspace of X containing both X1 and X2.
2. X1 ∩ X2 is the largest subspace of X contained in both X1 and X2.

2.1.1 Matlab commands referring to Section 2.1

The subspaces are numerically expressed through orthonormal basis matrices. Operations on subspaces are performed by using standard routines whose workings are briefly recalled in this section. The overall numerical robustness is held up by the first routine, ima, where the crucial decision on linear independence of vectors is taken. Most of the routines herein considered are contained in the Matlab toolbox GA³.

>> Q=ima(A[,fl]); performs the orthonormalization of a set of vectors given as the columns of the matrix A and returns them as the columns of matrix Q. The flag fl refers to the possible re-ordering of vectors during the orthonormalization process: if it is absent or fl=1 re-ordering is allowed, if fl=0 it is not.

>> Q=ortco(A); computes the orthogonal complement of imA as

[ma,na]=size(ima(A,0));
X=ima([A,eye(ma)],0); Q=X(:,na+1:ma);

>> Q=sums(A,B); computes the sum of imA and imB as

Q=ima([A,B]);

>> Q=ints(A,B); using (2.5) computes the intersection of imA and imB as

Q=ortco(sums(ortco(A),ortco(B)));

>> Q=invt(A,X); computes the subspace A−1 X, the inverse transform of X = imX with respect to the linear transformation expressed by matrix A. It reflects relation (2.11) by the statement

Q=ortco(A'*ortco(X));

>> Q=ker(A); computes the nullspace of matrix A through

Q=ortco(A');

³ Software can be freely downloaded from:
http://www3.deis.unibo.it/Staff/FullProf/GiovanniMarro/geometric.htm
For a review of the main computational methods, see Appendix B (Computational Background) in Vol. 1.
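As a usage example, assuming the GA toolbox is on the Matlab path, the routines above can be combined to verify Grassmann's rule (Property 2.2) on two randomly generated subspaces:

X1 = ima(rand(6,3),0);  X2 = ima(rand(6,2),0);   % two random subspaces of R^6
S  = sums(X1,X2);                                % basis matrix of X1 + X2
Z  = ints(X1,X2);                                % basis matrix of X1 ∩ X2 (generically {0})
% Grassmann's rule: dim(X1+X2) + dim(X1∩X2) = dim X1 + dim X2
ok = size(S,2) + size(Z,2) == size(X1,2) + size(X2,2);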

2.2 Invariant subspaces

Definition 2.5. (invariant subspace) Given a linear map A : X → X, a subspace J ⊆ X is an A-invariant if

A J ⊆ J

Consider the continuous-time free LTI system

ẋ(t) = A x(t) ,   x(0) = x0                                          (2.12)

or the discrete-time free LTI system

x(k+1) = A x(k) ,   x(0) = x0                                        (2.13)

Theorem 2.6. Let J be a basis matrix of the subspace J ⊆ X; the following statements are equivalent:

1. J is an A-invariant;
2. a matrix X exists such that A J = J X;
3. J is a locus of state trajectories of the system (2.12) or (2.13).

Proof. Refer to Vol. 1, Property 3.2.1 and Theorem 3.2.4. □

Fig. 2.2 illustrates the meaning of statement 3 in the continuous-time case.

Fig. 2.2. J as a locus of free trajectories.

2.2.1 Internal and external stability of an invariant

Lemma 2.7. Let J be a basis matrix of the A-invariant J. Let T be a basis for the whole space X adapted to J, i.e., T = [J  T2], with T2 such that T is nonsingular. The linear transformation A with respect to this basis is expressed by the new matrix

A′ = T−1 A T = [ A′11  A′12
                 O     A′22 ]

Proof. Refer to Vol. 1, Theorem 3.2.1. □

Consider a partition, according to the basis defined in the above lemma, of the state vector whose evolution in time is defined by equation (2.12) or (2.13). We thus obtain the following system:

ẋ1(t) = A′11 x1(t) + A′12 x2(t) ,   x1(0) = x10
ẋ2(t) = A′22 x2(t) ,   x2(0) = x20

Definition 2.8. (invariant internally stable) The A-invariant J is internally stable if every state-trajectory that originates on it lies completely on it and converges to the origin as t approaches infinity (see Fig. 2.3).

Fig. 2.3. Internal stability of an invariant.

According to the basis defined in Lemma 2.7, if x20 = 0 (x(0) ∈ J), then x2(t) = 0 ∀t: the motion on J is described only by A′11:

ẋ1(t) = A′11 x1(t) ,   x1(0) = x10

Since the motion on the invariant J is completely described by the eigenvalues of A′11, J is internally stable if and only if σ(A′11) ∈ Cg.

Definition 2.9. (invariant externally stable) The A-invariant J is externally stable if every state-trajectory of system (2.12) that originates out of it converges to J as t approaches infinity (see Fig. 2.4).

Fig. 2.4. External stability of an invariant.

According to the basis considered in Lemma 2.7, if x20 ≠ 0 (x(0) ∉ J), the state trajectory converges to J if and only if the submatrix A′22 is stable:

ẋ2(t) = A′22 x2(t) ,   x2(0) = x20 ≠ 0

Since the dynamics of the second component of the state depend only on the eigenvalues of A′22, J is externally stable if and only if σ(A′22) ∈ Cg.

Definition 2.10. (quotient space) Let W be a subspace of X. We define the quotient space X/W as the set of all the linear varieties parallel to W, and we denote the canonical projection on the quotient space X/W by

Π : X → X/W ,   x → {x} + W

Submatrix A′11 represents A|J, i.e., the restriction of A to the subspace J, while submatrix A′22 represents the map induced by A on the quotient space X/J.

Definition 2.11. (subspace of the stable modes) Refer to the free system (2.12) or (2.13). The subspace of the stable modes of matrix A is the sum of all the internally stable A-invariants.

Definition 2.12. (invariant complementable) An A-invariant J ⊆ X is said to be complementable if an A-invariant Jc exists such that J ⊕ Jc = X; if so, Jc is called a complement of J.

Theorem 2.13. Let us consider again the change of basis defined in Lemma 2.7. J is complementable if and only if the Sylvester equation

A′11 X − X A′22 = −A′12                                              (2.14)

admits a solution. If so, a basis matrix of Jc is given by Jc = J X + T2.

Proof. Refer to Vol. 1, Theorem 3.2.2. □

Property 2.14. A necessary and sufficient condition for equation (2.14) to admit a unique solution is that matrices A′11 and A′22 have no common eigenvalues.

Proof. Refer to Vol. 1, Theorem 2.5.10. □

2.2.2 Lattices of invariants

Let C be a subspace of X. We denote by J↑(A, C) the set of all the A-invariants contained in C.

Property 2.15. J↑(A, C) is a non-distributive modular lattice with the binary operations +, ∩ and the partial ordering ⊆, whose minimum is {0} and whose maximum, here denoted by maxJ(A, C), is provided by the following algorithm.

Algorithm 2.16. (computation of maxJ(A, C)) Consider the sequence of subspaces

Z1 = C
Zi = C ∩ A−1 Zi−1 ,   i = 2, 3, ...

maxJ(A, C) is obtained when the sequence stops (i.e., when Zi+1 = Zi).

Proof. Refer to Vol. 1, Algorithm 3.2.2. □
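With the subspace routines of Section 2.1.1 Algorithm 2.16 becomes a short loop; the GA toolbox function maxinv, recalled in Section 2.2.8, performs essentially this computation. The sketch below assumes those routines are available and that C is a basis matrix of C.

function Z = maxinv_sketch(A,C)
% maximal A-invariant contained in im C (Algorithm 2.16)
Z = ima(C,0);                        % Z1 = C
while true
    Znew = ints(C, invt(A,Z));       % Zi = C ∩ A^(-1) Z_(i-1)
    if size(Znew,2) == size(Z,2)     % the sequence has stopped
        break
    end
    Z = Znew;
end
end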

Let B be a subspace of X. We denote by J↓(A, B) the set of all the A-invariants containing B.

Property 2.17. J↓(A, B) is a non-distributive modular lattice with the binary operations +, ∩ and the partial ordering ⊆, whose maximum is X and whose minimum, here denoted by minJ(A, B), is provided by the following algorithm.

Algorithm 2.18. (computation of minJ(A, B)) Consider the sequence of subspaces

Z1 = B
Zi = B + A Zi−1 ,   i = 2, 3, ...

minJ(A, B) is obtained when the sequence stops (i.e., when Zi+1 = Zi).

Proof. Refer to Vol. 1, Algorithm 3.2.1. □

The Venn diagrams of the lattices of all J↑(A, C) and of all J↓(A, B) are shown in Fig. 2.5.

Fig. 2.5. The lattices of all J↑(A, C) and of all J↓(A, B).

Property 2.19. The following dualities hold:

maxJ(A, C) = (minJ(Aᵀ, C⊥))⊥
minJ(A, B) = (maxJ(Aᵀ, B⊥))⊥

Let C and B be subspaces of X. We denote by J(A, B, C) the set of all the A-invariants contained in C and containing B.

Property 2.20. J(A, B, C) is a non-distributive modular lattice with the binary operations +, ∩ and the partial ordering ⊆. It is non-empty if and only if B ⊆ maxJ(A, C) or minJ(A, B) ⊆ C.

The corresponding Venn diagram is shown in Fig. 2.6.

Fig. 2.6. The lattice of all J(A, B, C).

2.2.3 Controllability and observability

Refer to the continuous-time, purely dynamic LTI system

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t)                                                        (2.15)

Theorem 2.21. Consider the first equation in (2.15). The reachable subspace of (A, B), i.e., the set of all the states that can be reached from the origin in any finite time by means of control actions, is the minimum A-invariant containing imB, or R = minJ(A, imB).

Proof. Refer to Vol. 1, Theorem 3.3.1. □

Hence the reachable subspace R can be computed with Algorithm 2.18. The number of steps of the corresponding sequence is called the controllability index.

• If R = X, the pair (A, B) is said to be completely controllable.
• If R ≠ X but R is externally stable, the pair (A, B) is said to be stabilizable.
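Since minJ(A, imB) coincides with the image of the controllability matrix, the reachable subspace can also be computed directly in a few lines of plain Matlab; the data below are illustrative.

A = [0 1 0; 0 0 1; -1 -3 -3];  B = [0; 0; 1];   % illustrative pair (A,B)
n = size(A,1);
Ctr = B;
for i = 1:n-1
    Ctr = [Ctr, A^i*B];                  % [B  AB ... A^(n-1)B]
end
R = orth(Ctr);                           % orthonormal basis of the reachable subspace
completely_controllable = (size(R,2) == n);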

Theorem 2.22. Consider equations (2.15) with u(·) = 0. The unobservable subspace of (A, C), i.e., the set of all the initial states that cannot be recognized from the output function in any finite time interval, is Q = maxJ(A, kerC).

Proof. Refer to Vol. 1, Corollary 3.3.1. □

Hence the unobservable subspace Q can be computed with Algorithm 2.16. The number of steps of the corresponding sequence is called the observability index.

• If Q = {0}, the pair (A, C) is said to be completely observable.
• If Q ≠ {0} but Q is internally stable, the pair (A, C) is said to be detectable.

In the discrete-time case, i.e., referring to the purely dynamic LTI system

x(k+1) = A x(k) + B u(k)
y(k) = C x(k)                                                        (2.16)

the same results hold. Note that in this case Algorithm 2.18 simply yields as Zi the subspace of all states reachable in i steps. Hence the controllability index for a discrete-time system is the minimum number of steps required to reach the whole reachable subspace R.

Duality of controllability and observability

Let us refer to Fig. 2.7 to show that the concepts of controllability and observability are dual to each other.

Lemma 2.23. If the impulse response of an LTI system is zero at a given time tf, the impulse response of the dual system is also zero at tf.

In Fig. 2.7a Σ is the purely dynamic triple (A, B, I), whose output coincides with the state, while Σc is an FIR system of the type (1.17) whose convolution profile W(τ) is a p × n matrix of time functions driving the state of Σ from zero to In in the time interval [0, tf). Hence these time functions are generated when the FIR system is subject to a unit Dirac impulse δ(t). By linearity, the generic impulse xf δ(t) produces the state transition from zero to xf in the same time interval, making the overall output at the summing junction on the right be zero at t = tf.

In Fig. 2.7b Σᵀ is the purely dynamic triple (Aᵀ, I, Bᵀ), which is given an initial state x0 by the Dirac impulse x0 δ(t). The corresponding output is processed by the dual FIR system Σcᵀ, which yields an estimate of x0 at t = tf. In fact, if g(t), the impulse response of the overall system in Fig. 2.7a, is null at t = tf, also gᵀ(t), the impulse response of the overall system in Fig. 2.7b, is null at t = tf.

Fig. 2.7. Controlling to a given final state and observing the initial state.

2.2.4 Changes of basis and equivalent systems

The state space representations (1.1) and (1.2) of LTI systems are not unique as far as the input-state or the input-output behavior is concerned.

Definition 2.24. (equivalent systems) Two LTI systems (A, B, C, D) and (A′, B′, C′, D′) are said to be equivalent if D′ = D and a nonsingular transformation T exists such that

A′ = T−1 A T ,   B′ = T−1 B ,   C′ = C T ,   D′ = D
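In Matlab a change of basis is a one-line operation; the sketch below builds an equivalent system from arbitrary illustrative data and checks that the spectrum is preserved, in agreement with the property that follows.

A = [0 1; -2 -3];  B = [0; 1];  C = [1 0];  D = 0;   % illustrative quadruple
T = [1 1; 0 1];                                      % any nonsingular transformation
A2 = T\(A*T);  B2 = T\B;  C2 = C*T;  D2 = D;         % equivalent system
same_spectrum = norm(sort(eig(A)) - sort(eig(A2))) < 1e-10;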

Property 2.25. Equivalent systems have the same properties of stability, controllability and observability.

Proof. The system matrices A′ and A have the same eigenvalues. In fact,

det(λI − A′) = det(λI − T−1 A T)
             = det(λ T−1 I T − T−1 A T)
             = det(T−1 (λI − A) T)
             = det T−1 det(λI − A) det T .

Since det T−1 and det T are different from zero, the equations det(λI − A′) = 0 and det(λI − A) = 0 are identical. About controllability and observability, it follows that

R′ = im[B′  A′B′  ...  (A′)^{n−1}B′]
   = im[T−1B  T−1AT T−1B  ...  T−1A^{n−1}T T−1B]
   = im(T−1 [B  AB  ...  A^{n−1}B]) = T−1 R ;

Q′ = ker[C′; C′A′; ... ; C′(A′)^{n−1}]
   = ker[CT; CT T−1AT; ... ; CT T−1A^{n−1}T]
   = ker([C; CA; ... ; CA^{n−1}] T) = T−1 Q ;

since T and T−1 are nonsingular, R′ and Q′ have the same dimensions as R and Q, respectively. □

2.2.5 The Kalman canonical decomposition

Invariance in connection with controllability and observability properties plays a key role in deriving the Kalman canonical decomposition, which provides a relevant insight into the structure of LTI systems.

Theorem 2.26. (Kalman canonical decomposition) A generic quadruple (A, B, C, D) is equivalent to a quadruple (A′, B′, C′, D), where the matrices A′, B′, and C′ have the structures

A′ = [ A′11  A′12  A′13  A′14
       O     A′22  O     A′24
       O     O     A′33  A′34
       O     O     O     A′44 ]

B′ = [ B′1
       B′2
       O
       O   ]

C′ = [ O  C′2  O  C′4 ]                                              (2.17)

Proof. Refer to Vol. 1, Property 3.3.1. □

Consider the system expressed in the new basis, i.e.

ż(t) = A′ z(t) + B′ u(t)                                             (2.18)
y(t) = C′ z(t) + D u(t)                                              (2.19)


Because of the particular structure of matrices A′, B′, C′ the system can be decomposed into one purely algebraic and four purely dynamic subsystems, interconnected as shown in Fig. 2.8.

Fig. 2.8. The Kalman canonical decomposition.

Subsystem Σ2 is the sole completely controllable and observable one, and the only one which, with the memoryless system D, determines the input-output correspondence, i.e., the zero-state response of the overall system.

The Kalman decomposition is a state space representation of linear time-invariant systems that provides complete information about controllability and observability: in particular, if the system is completely controllable and observable, subsystems Σ1, Σ3, and Σ4 are not present (the corresponding matrices in (2.17) have zero dimensions)⁴.

Definition 2.27. (minimal realization) Refer to the Kalman canonical decomposition (2.17): the subsystem (A′22, B′2, C′2, D) is a minimal realization of (A, B, C, D).

2.2.6 Pole assignment

The connections shown in Fig. 2.9 are called state feedback and output injection, respectively.

The overall system in Fig. 2.9a is described by

⁴ Furthermore, the system is stabilizable and detectable, respectively (see Subsection 2.2.3), if and only if A′33, A′44 are stable and if and only if A′11, A′33 are stable.

Fig. 2.9. State feedback and output injection.

ẋ(t) = (A + BF) x(t) + B v(t)
y(t) = C x(t)                                                        (2.20)

while that in Fig. 2.9b is described by

ẋ(t) = (A + GC) x(t) + B u(t)
y(t) = C x(t)                                                        (2.21)

Theorem 2.28. (pole assignment) The eigenvalues of A + BF are arbitrarily assignable by a suitable choice of F if and only if the system is completely controllable and those of A + GC are arbitrarily assignable by a suitable choice of G if and only if the system is completely observable.

Proof. Refer to Vol. 1, Theorem 3.4.2 and Theorem 3.4.3. □

2.2.7 Observers

The state and the output of a system can be recovered through an observer. Consider the layout shown in Fig. 2.10a, where a dual observer Σc (A + BF, H, C) is used to obtain a given input-output behavior in the system Σ (η(·) is identically zero for any h(·)), and the dual layout shown in Fig. 2.10b, where an observer Σcᵀ is used to estimate any linear function of the state (possibly the whole state - η(·) is identically zero for any u(·)).

Fig. 2.10. Dual observer and observer.

The overall system and its dual system are, in this case,

Σ = [ A   BF     H
      O   A+BF   H
      C   −C     O ]

Σᵀ = [ Aᵀ     O          Cᵀ
       FᵀBᵀ   Aᵀ+FᵀBᵀ    −Cᵀ
       Hᵀ     Hᵀ         O   ]

   = [ Aᵀ      O          Cᵀ
       −FᵀBᵀ   Aᵀ+FᵀBᵀ    Cᵀ
       Hᵀ      −Hᵀ        O   ]

Note that the signs of both the input and the output of the observer have been changed with respect to the strict dual, to obtain the input-output equivalent layout shown in the figure at right.

By slightly modifying the connections in Fig. 2.10 and suitably renaming matrices and signals, the new layouts shown in Fig. 2.11, dual to each other, are obtained.

Fig. 2.11. Dynamic pre-compensator and observer.

The corresponding overall matrices are

Σ1 = [ A   BF     B
       O   A+BF   B
       C   O      O
       O   I      O ]

Σ2 = [ A     O      B
       −GC   A+GC   B
       C     O      O
       O     I      O ]

Let us consider now the connection shown in Fig. 2.12, where the state feedback is provided from the observer instead of the system. The corresponding overall system is

Σ = [ A     BF        B
      −GC   A+BF+GC   B
      C     O         O
      O     I         O ]

Fig. 2.12. Pole assignment with an observer.

Theorem 2.29. (separation theorem) The eigenvalues of the overall system shown in Fig. 2.12 are the union of those of A + BF and those of A + GC, hence completely assignable if and only if the triple (A, B, C) is completely controllable and observable.

Proof. Refer to Vol. 1, Theorem 3.4.6. □
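With the Control System Toolbox, the separation theorem can be exploited directly: F and G are obtained from two independent calls to place, following the sign convention used in Section 2.2.8 below. The data and eigenvalue choices in this sketch are illustrative only.

A = [0 1; 0 0];  B = [0; 1];  C = [1 0];          % illustrative triple (double integrator)
F = -place(A, B, [-2 -3]);                        % state feedback: eig(A+BF) = {-2,-3}
G = -place(A', C', [-8 -9])';                     % output injection: eig(A+GC) = {-8,-9}
% state matrix of the overall scheme of Fig. 2.12 (plant and observer states):
Acl = [A     B*F
       -G*C  A+B*F+G*C];
closed_loop_eigs = eig(Acl);                      % union of {-2,-3} and {-8,-9}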

2.2.8 Matlab commands referring to Section 2.2

>> F=-place(A,B,p); if (A,B) is a controllable pair, computes a state feedback matrix F such that the eigenvalues of (A+BF) are equal to the elements of the vector p.

>> Q=mininv(A,D); computes a basis matrix Q of minJ(A, imD), the minimum A-invariant containing imD. It uses Algorithm 2.18.

>> Q=maxinv(A,E); computes a basis matrix Q of maxJ(A, imE), the maximum A-invariant contained in imE. It uses Algorithm 2.16.

>> [P,Q]=stabi(A,J); computes as P and Q the matrices for the internal and external stability of the A-invariant imJ.

>> [As,Au,Ao]=subsplit(A[,1]); gives as As a basis matrix for the subspace of strictly stable modes of A, as Au a basis matrix for the subspace of strictly unstable modes of A and as Ao a basis matrix for the subspace of modes on the boundary (imaginary axis or unit circumference). The two-argument call refers to the discrete-time case.

2.3 Controlled and conditioned invariant subspaces

Definition 2.30. (controlled invariant subspace) Given a linear map A : X → X and a subspace B ⊆ X, a subspace V ⊆ X is an (A, B)-controlled invariant if

A V ⊆ V + B

Consider the continuous-time LTI system

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t)                                                        (2.22)

or the discrete-time LTI system

x(k+1) = A x(k) + B u(k)
y(k) = C x(k)                                                        (2.23)

Theorem 2.31. Let V be a basis matrix of the subspace V and B be a basis matrix of B; the following statements are equivalent:

1. V is an (A, B)-controlled invariant;
2. a matrix F (called a friend of V) exists such that (A + BF) V ⊆ V;
3. matrices X and U exist such that A V = V X + B U;
4. V is a locus of suitably controlled state trajectories of the system (2.22) or (2.23).

Proof. Refer to Vol. 1, Theorem 4.1.1, Property 4.1.4 and Theorem 4.1.2. □

Fig. 2.13 shows the geometric meaning of property 4 in Theorem 2.31.

Fig. 2.13. The controlled invariant V as a locus of trajectories.

The sum of any two controlled invariants is a controlled invariant, while the intersection in general is not. We denote by V(A, B, C) the set of all the (A, B)-controlled invariants contained in a given subspace C ⊆ X.

Property 2.32. V(A, B, C) is a non-distributive modular upper semilattice with the binary operation + and the partial ordering ⊆.

Let us denote by maxV(A, B, C) the maximal (A, B)-controlled invariant contained in C. A Venn diagram of the lattice is shown in Fig. 2.14.

Fig. 2.14. The upper semilattice V(A, B, C).

Algorithm 2.33. (computation of maxV(A, B, C)) Consider the sequence of subspaces

V1 = C
Vi = C ∩ A−1 (Vi−1 + B) ,   i = 2, 3, ...

maxV(A, B, C) is obtained when the sequence stops (i.e., when Vi+1 = Vi).

Proof. Refer to Vol. 1, Algorithm 4.1.2. □
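Like Algorithm 2.16, Algorithm 2.33 maps directly onto the subspace routines of Section 2.1.1 (the GA toolbox contains a ready-made function for this computation); the sketch below assumes those routines are available and that B and C are basis matrices of B and C.

function V = maxv_sketch(A,B,C)
% maximal (A,B)-controlled invariant contained in im C (Algorithm 2.33)
V = ima(C,0);                             % V1 = C
while true
    Vnew = ints(C, invt(A, sums(V,B)));   % Vi = C ∩ A^(-1)(V_(i-1) + B)
    if size(Vnew,2) == size(V,2)          % the sequence has stopped
        break
    end
    V = Vnew;
end
end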

Definition 2.34. (V∗, the maximal output-nulling subspace) Refer to the continuous-time system (2.22) or the discrete-time system (2.23). The symbol V∗ will be used for the subspace maxV(A, imB, kerC), called the maximal output-nulling⁵ subspace of the system, that is the maximal locus of controlled state trajectories such that the output is identically zero.

Definition 2.35. (conditioned invariant subspace) Given a linear map A : X → X and a subspace C ⊆ X, a subspace S ⊆ X is an (A, C)-conditioned invariant if

A (S ∩ C) ⊆ S

Theorem 2.36. Let S be a basis matrix of the subspace S and C be a basis matrix of C; the following statements are equivalent:

⁵ The term output-nulling was first used by Anderson [1] and will be used herein both for triples and quadruples. Following a notation due to Hautus and Silverman [2], in other textbooks (e.g., [3]) V∗ for quadruples is called the weakly unobservable subspace.

1. S is an (A, C)-conditioned invariant;
2. a matrix G exists such that (A + GC) S ⊆ S.

The intersection of any two conditioned invariants is a conditioned invariant, while the sum in general is not. We denote by S(A, C, B) the set of all the (A, C)-conditioned invariants containing the subspace B ⊆ X.

Property 2.37. S(A, C, B) is a non-distributive modular lower semilattice with the binary operation ∩ and the partial ordering ⊆.

Let us denote by minS(A, C, B) the minimal (A, C)-conditioned invariant containing B. A Venn diagram of the lattice is shown in Fig. 2.15.

Fig. 2.15. The lower semilattice S(A, C, B).

Algorithm 2.38. (computation of minS(A, C, B)) Consider the sequence of subspaces

S1 = B
Si = B + A (Si−1 ∩ C) ,   i = 2, 3, ...

minS(A, C, B) is obtained when the sequence stops (i.e., when Si+1 = Si).

Proof. Refer to Vol. 1, Algorithm 4.1.1. □

Definition 2.39. (S∗, the minimal input-containing subspace) Refer to the continuous-time system (2.22) or the discrete-time system (2.23). The symbol S∗ will be used for the subspace minS(A, kerC, imB), called the minimal input-containing⁶ subspace of the system.

Meaning of S∗: Refer to the discrete-time system (2.23): Algorithm 2.38 at the generic i-th step provides the set of all states reachable from the origin through trajectories having all states but the last one belonging to

⁶ The term input-containing clearly refers to triples, but will be used herein both for triples and quadruples. In other textbooks (e.g., [3]) S∗ is called the strongly reachable subspace.

Fig. 2.16. A trajectory leaving the origin on a conditioned invariant S.

kerC, hence invisible at the output (see Fig. 2.16). Hence S∗ is the maximum subspace of X reachable from the origin through this type of trajectories in ρ steps, where ρ is the number of iterations required for the sequence Si to converge to S∗.

Property 2.40. (dualities) Given a subspace B ⊆ X, the orthogonal complement of an (A, B)-controlled invariant is an (Aᵀ, B⊥)-conditioned invariant (and vice versa).

As a consequence of Property 2.40 the following relations hold:

(maxV(A, B, C))⊥ = minS(Aᵀ, B⊥, C⊥)
(minS(A, C, B))⊥ = maxV(Aᵀ, C⊥, B⊥)

When a triple (A, B, C) is considered, under the settings B = imB, C = kerC it follows that

(V∗(Σ))⊥ = S∗(Σᵀ)                                                    (2.24)
(S∗(Σ))⊥ = V∗(Σᵀ)                                                    (2.25)

It will be shown in Subsection 2.3.3 that the above relations also hold for quadruples (A, B, C, D).

Algorithm 2.41. (computation of a friend of a controlled invariant) Let V be a basis matrix of the (A, imB)-controlled invariant V. From the equation

A V = V X + B U                                                      (2.26)

(see Theorem 2.31) it is possible to derive a matrix F such that (A + BF) V ⊆ V as follows:

[X; U] = [V  B]+ A V + Γ α                                           (2.27)

where the superscript + denotes the pseudoinverse, Γ is a basis matrix of ker[V  B] and α is an arbitrary vector of suitable dimensions. A “structural” F, i.e., a friend that does not consider any possible eigenvalue assignment for (A + BF), is computed by taking α = 0 as

F = −U V+                                                            (2.28)

In fact, equation (2.26) can also be written as

(A − B U V+) V = V X   or   (A + BF) V = V X

which proves that V is an (A + BF)-invariant with internal dynamics expressed by matrix X. If the system is not left invertible, i.e., if V∗ ∩ B ≠ {0}, the matrices on the left of (2.27) and (2.28) are non-unique. The degree of freedom expressed by α makes it possible to assign the internal and external assignable eigenstructure of V. If the system is left invertible (V∗ ∩ S∗ = {0}), then Γ = 0 and no internal eigenvalue can be assigned, while if (A, B) is completely reachable (R = X) the external eigenvalues are all assignable (see Vol. 1, Property 4.1.13).
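The structural friend (2.28) is a direct pseudoinverse computation; a minimal sketch with a small illustrative example in which V = im[1 0]ᵀ is an (A, imB)-controlled invariant:

A = [1 1; 1 1];  B = [0; 1];  V = [1; 0];   % illustrative data
XU = pinv([V B]) * (A*V);                   % stacked [X; U] from (2.27) with alpha = 0
nV = size(V,2);
X = XU(1:nV,:);  U = XU(nV+1:end,:);
F = -U * pinv(V);                           % structural friend (2.28)
check = norm((A+B*F)*V - V*X);              % should be (numerically) zero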

2.3.1 The reachable subspace on a controlled invariant

Given the triple (A,B,C), the (A, imB)-controlled invariant V and the input function u(t) = F x(t) + v(t), the system behavior is described by

ẋ(t) = (A + BF) x(t) + B v(t) ,  x0 ∈ V
y(t) = C x(t)

The state trajectory belongs to V if and only if (A + BF)V ⊆ V and v(t) ∈ B−1V, t > 0.

Fig. 2.17. State feedback for motion on a controlled invariant.

We denote by RV the reachable subspace from the origin by state trajectories constrained to belong to V:

RV = minJ (A+BF, V ∩ imB)

Being an (A+BF)-invariant, RV is an (A, imB)-controlled invariant itself 7.

7 Other equivalent definitions of RV are


Theorem 2.42. Refer to the continuous-time system (2.22) or the discrete-time system (2.23). The following equality holds:

RV∗ = minJ (A+ BF,V∗ ∩ imB)

= V∗ ∩ S∗

Proof. Refer to Vol. 1, Theorem 4.1.4. □

2.3.2 Internal and external stabilizability of controlled and conditioned invariants

Let V be an (A,B)-controlled invariant and F such that (A+BF)V ⊆ V. Denote by R = minJ(A,B) (the reachable subspace of the overall system). The spectrum of (A+BF) can be partitioned as in Fig. 2.18a, with respect to inclusions of some controlled invariants. V is said to be:

• internally stabilizable: if ∀x0 ∈ V the state trajectory can be maintained on V converging to the origin by a suitable control action; this happens if and only if (A+BF)|V is stable, i.e., if and only if (A+BF)|V/RV is stable (since the eigenvalues of (A+BF)|RV can be arbitrarily placed);
• externally stabilizable: if ∀x0 ∉ V the state trajectory converges to V by a suitable control action; this happens if and only if (A+BF)|X/V is stable, i.e., if and only if (A+BF)|X/(V+R) is stable. Since V+R is an A-invariant containing R, all the (A,B)-controlled invariants are externally stabilizable if and only if (A,B) is stabilizable (see Vol. 1, Property 4.1.16).

Similar statements for conditioned invariants are easily derived by duality as follows, and the spectrum of (A+GC) is partitioned as shown in Fig. 2.18b. Let S be an (A, C)-conditioned invariant and G such that (A+GC)S ⊆ S. Denote by Q = maxJ(A, C) (the unobservable subspace of the overall system). S is said to be:

• externally stabilizable: if ∀x0 ∉ S the state trajectory can be maintained out of S converging to S by a suitable control action; this happens if and only if (A+GC)|X/S is stable, i.e., if and only if (A+GC)|QS/S is stable (since the eigenvalues of (A+GC)|X/QS can be arbitrarily placed);

RV = V ∩ minS(A, V, B)

and as the last term of the sequence

R1 = {0}
Ri = V ∩ (A Ri−1 + B) ,  i = 2, 3, ...

called the supremal controllability subspace algorithm by Wonham [4].


Fig. 2.18. Location of internal and external eigenvalues of V and S: a) chain {0} ⊆ RV ⊆ V ⊆ V+R ⊆ X, with σ(A+BF)|RV free, σ(A+BF)|V/RV fixed, σ(A+BF)|(V+R)/V free, σ(A+BF)|X/(V+R) fixed; b) chain {0} ⊆ S∩Q ⊆ S ⊆ QS ⊆ X, with σ(A+GC)|S∩Q fixed, σ(A+GC)|S/(S∩Q) free, σ(A+GC)|QS/S fixed, σ(A+GC)|X/QS free.

• internally stabilizable: if ∀x0 ∈ S the state trajectory can be maintained on S converging to the origin by a suitable control action; this happens if and only if (A+GC)|S is stable, i.e., if and only if (A+GC)|S∩Q is stable (since the eigenvalues of (A+GC)|S/(S∩Q) can be arbitrarily placed). Since S∩Q is an A-invariant contained in Q, all the (A, C)-conditioned invariants are internally stabilizable if and only if (A,C) is detectable (see Vol. 1, Property 4.1.19).

2.3.3 Extension to quadruples

Extension of the above definitions and properties from triples like (A,B,C) to quadruples like (A,B,C,D) can be obtained through simple contrivances.

Fig. 2.19. An artifice to reduce a quadruple to a triple.

For the triple (A,B,C) the maximum output-nulling subspace V∗ is defined as the maximum subspace V such that for a suitable F

(A+BF)V ⊆ V ,  V ⊆ kerC    (2.29)

while for the quadruple (A,B,C,D) it is the maximum subspace V such that for a suitable F

(A+BF)V ⊆ V ,  V ⊆ ker(C+DF)    (2.30)


Algorithm 2.43. (computation of V∗ and a friend for a quadruple) Refer to the overall system Σ̂ shown in Fig. 2.19, where Σe is a set of integrators in the continuous-time case or a set of unit delays in the discrete-time case. It is described by

ẋ̂(t) = Â x̂(t) + B̂ v(t)
ŷ(t) = Ĉ x̂(t)    (2.31)

with

x̂ = [x; z] ,  Â = [A O; C O] ,  B̂ = [B; D] ,  Ĉ = [O Iq]    (2.32)

Let us compute with Algorithm 2.33 the corresponding output-nulling subspace V̂∗ with basis matrix V̂ and with Algorithm 2.41 a corresponding friend F̂, and denote with

V̂ = [V1; V2] ,  F̂ = [F1 F2]

their partitions according to (2.32). Owing to the structure of Ĉ, it turns out that V2 = O and F2 = O. Thus the maximum output-nulling subspace of the quadruple (A,B,C,D) is V∗ = imV1 and F1 is a corresponding friend.
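The extended matrices (2.32) are easy to build explicitly; the following Matlab fragment is a sketch of this construction (the variable names Ahat, Bhat, Chat are merely illustrative, matching the convention used later for vstar).

[n, p] = size(B);   q = size(C, 1);
Ahat = [A, zeros(n, q); C, zeros(q, q)];    % A^ = [A O; C O]
Bhat = [B; D];                              % B^ = [B; D]
Chat = [zeros(q, n), eye(q)];               % C^ = [O Iq]
% V* and a friend of the quadruple are recovered from the triple (Ahat, Bhat, Chat):
% the first n rows of the computed basis matrix give V1, the first n columns of
% the computed friend give F1.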

Remark. Computation of S∗ and a corresponding friend G can be done by using the duality (2.25) and G = FT.

A second contrivance for dealing with quadruples is derived by duality.

Fig. 2.20. The dual of the system in Fig. 2.19.

For the triple (A,B,C) the minimum input-containing subspace S∗ is defined as the minimum subspace S such that for a suitable G

(A+GC)S ⊆ S ,  S ⊇ B    (2.33)

while for the quadruple (A,B,C,D) it is the minimum subspace S such that for a suitable G

(A+GC)S ⊆ S ,  S ⊇ (B + GD)    (2.34)


Algorithm 2.44. (computation of S∗ and a friend for a quadruple) Refer to the overall system Σ̂ shown in Fig. 2.20, where Σe is a set of integrators in the continuous-time case or a set of unit delays in the discrete-time case. It is described by

ẋ̂(t) = Â x̂(t) + B̂ v(t)
ŷ(t) = Ĉ x̂(t)    (2.35)

with

x̂ = [x; u] ,  Â = [A B; O O] ,  B̂ = [O; Ip] ,  Ĉ = [C D]    (2.36)

Let us compute with Algorithm 2.38 the corresponding input-containing subspace Ŝ∗ with basis matrix Ŝ and with Algorithm 2.41 a corresponding friend Ĝ, and denote with

Ŝ = [S1 S2; O Ip] ,  Ĝ = [G1; G2]

their partitions according to (2.36). Owing to the structure of B̂, it turns out that S2 = O and G2 = O. Thus the minimum input-containing subspace of the quadruple (A,B,C,D) is S∗ = imS1 and G1 is a corresponding friend.

2.3.4 Matlab commands referring to Section 2.3

>> S=miinco(A,C,B); computes a basis matrix S of the minimum (A, imC)-conditioned invariant containing imB. It operates through the sequence of subspaces defined in Algorithm 2.38.

>> V=mainco(A,B,C); computes a basis matrix V of the maximum (A, imB)-controlled invariant contained in imC. It utilizes the duality expressed by

Q = ortco(miinco(A',ortco(B),ortco(C)));

>> F=effe(A,B,V); provides a matrix F such that (A+BF) imV ⊆ imV by using Algorithm 2.41. An error message is generated if imV is not an (A, imB)-controlled invariant. If the system is not left-invertible, i.e., if V∗ ∩ B ≠ {0}, matrix F is not unique. In fact, in this case the internal dynamics of RV∗ = V∗ ∩ S∗ is completely assignable through a suitable choice of F 8.

>> [V,F]=vstar(A,B,C[,D]); (or [V,F]=vstar(sys);) provides as V a basis matrix of V∗, the maximum output-nulling subspace of the LTI system sys=ss(A,B,C,D), and as F a corresponding friend.
- Computation of V: if D is absent or D = O, the routine uses V=mainco(A,B,ker(C)), while, if D ≠ O, the computation is referred to the extended system (2.32) and matrix V is derived through

8 This can be done by using the routine effesta (see Subsection 2.4.3).


V1=mainco(Ahat,Bhat,ker(Chat)); V=V1(1:size(A,2),:);

- Computation of F: this is done by using the above routine effe, possibly referred to the extended system (2.32) if D ≠ O and suitably reduced.

>> [S,G]=sstar(A,B,C[,D]); (or [S,G]=sstar(sys);) provides as S a basis matrix of S∗, the minimum input-containing subspace of the LTI system sys=ss(A,B,C,D), and as G an output injection matrix such that (A+GC)S∗ ⊆ S∗. By duality, the computation is simply done through the statements

[V,F] = vstar(A',C',B',D'); S=ortco(V); G=F';

>> R=rvstar(A,B,C[,D]); (or R=rvstar(sys);) provides as R a basis matrix of RV∗ = V∗ ∩ S∗, the reachable subspace in V∗.

2.4 System properties stated in geometric terms

In this section the definitions of some properties of multivariable systems and basic applications of the geometric tools are presented.

2.4.1 Left invertibility, right invertibility, and relative degree

Let us consider the standard continuous-time triple (A,B,C) described by equations (1.1) with D = O or the standard discrete-time triple (Ad, Bd, Cd) described by equations (1.2) with Dd = O (we consider triples since they provide a better insight and extension to quadruples is straightforward, obtainable with a suitable state extension). These systems with x(0) = 0 define linear maps Tf : Uf → Yf from the space Uf of the admissible input functions to the functional space Yf of the zero-state responses. These maps are defined by the convolution integral and the convolution summation

y(t) = C ∫_0^t e^{A(t−τ)} B u(τ) dτ    (2.37)

y(k) = Cd Σ_{h=0}^{k−1} Ad^{(k−h−1)} Bd u(h)    (2.38)

The admissible input functions are

• piecewise continuous and bounded functions of time t in (2.37);
• bounded functions of the discrete time k in (2.38).

In the following definitions matrices B, C or Bd, Cd are assumed to be of maximum rank.


Definition 2.45. (left invertibility of a continuous-time triple) System (A,B,C) is said to be left-invertible if, given any output function y(t), t ∈ [0, t1], t1 > 0, belonging to imTf, there exists a unique input function u(t), t ∈ [0, t1), such that (2.37) holds.

Definition 2.46. (left invertibility of a discrete-time triple) System (Ad, Bd, Cd) is said to be left-invertible if, given any output function y(k), k ∈ [0, k1], k1 ≥ n, belonging to imTf, there exists a unique input function u(k), k ∈ [0, k1 − 1], such that (2.38) holds.

Definition 2.47. (right invertibility of a continuous-time triple) System (A,B,C) is said to be right-invertible or functionally controllable if there exists an integer ρ ≥ 1 such that, given any output function y(t), t ∈ [0, t1], t1 > 0, with ρ-th derivative piecewise continuous and such that y(0) = 0, . . . , y^(ρ−1)(0) = 0, there exists at least one input function u(t), t ∈ [0, t1), such that (2.37) holds. The minimum value of ρ satisfying the above statement is called the relative degree of the system.

Definition 2.48. (right invertibility of a discrete-time triple) System (Ad, Bd, Cd) is said to be right-invertible or functionally controllable if there exists an integer ρ ≥ 1 such that, given an output function y(k), k ∈ [0, k1], k1 ≥ ρ, such that y(k) = 0, k ∈ [0, ρ−1], there exists at least one input function u(k), k ∈ [0, k1 − 1], such that (2.38) holds. The minimum value of ρ satisfying the above statement is called the relative degree of the system.

Theorem 2.49. System (A,B,C) or (Ad, Bd, Cd) is left-invertible if and only if

V∗ ∩ S∗ = {0}    (2.39)

or, equivalently,

V∗ ∩ B = {0}    (2.40)

Proof. Refer to Vol. 1, Property 4.3.6. □

Corollary 2.50. Another condition for left invertibility, equivalent to (2.40), is

B−1V∗ = {0}    (2.41)

Theorem 2.51. System (A,B,C) or (Ad, Bd, Cd) is right-invertible if and only if

V∗ + S∗ = X    (2.42)

or, equivalently,

S∗ + C = X    (2.43)


Proof. Refer to Vol. 1, Theorem 4.3.2. □

Property 2.52. If (A,B,C) or (Ad, Bd, Cd) is left (right) invertible, the dual system (AT, CT, BT) or (AdT, CdT, BdT) is right (left) invertible.

Algorithm 2.53. (computation of the relative degree) For a right-invertible system without feedthrough the relative degree is the minimal value of ρ such that dim(V∗ + Sρ) = dim(V∗ + S∗), where Si (i = 1, 2, . . .) is the sequence defined in Algorithm 2.38. For systems with feedthrough this still holds but referred to a suitably extended system, like that shown in Fig. 2.19 (note that in this case the computed relative degree has to be reduced by one).
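Assuming that a basis matrix Vstar of V∗ and the elements S{i} of the sequence of Algorithm 2.38 (with Sstar its last element) are already available, Algorithm 2.53 reduces to a dimension check; the following is only a sketch with hypothetical variable names.

rho = 1;
while rank([Vstar, S{rho}]) < rank([Vstar, Sstar])
    rho = rho + 1;            % first index with dim(V* + S_rho) = dim(V* + S*)
end
% For a system with feedthrough, apply the same check to the extended system
% of Fig. 2.19 and subtract one from the resulting value.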

Corollary 2.54. Relations equivalent to (2.43) are

C S∗ = Rq   and   C Sρ = Rq    (2.44)

respectively.

Fig. 2.21. Connections for right and left inversion.

Remark. Definitions 2.45–2.48 and the corresponding Theorems 2.49, 2.51 are structural conditions, independent of any stability problem. In Fig. 2.21 Σf denotes a suitable relative-degree filter in the continuous-time case or a relative-degree delay in the discrete-time case. This may be simply implemented as a diagonal matrix of transfer functions such as 1/(1+τs)^ρ or a diagonal matrix of delays such as 1/z^ρ. The figure shows that if it is possible to define an inverse system Σi designed to null the error e for any input function, thus solving the right inversion problem, the left inversion problem is also solved by duality.


Property 2.55. Refer to a right-invertible continuous-time LTI system (A,B,C). The corresponding discrete-time system (Ad, Bd, C) defined by (1.11), obtained through sampling and ZOH (zero-order hold) equivalence 9, has, in general, relative degree equal to one.

The following theorem states more precise necessary and sufficient conditions.

Theorem 2.56. (relative degree after sampling) Define the matrices

Ĉ = [C O] ,  Â = [A B; O O] ,  B̂ = [O; Ip]

Let [0, t2] be a time interval of possible choices of the sampling time T. Property 2.55 holds if and only if the matrix

M(t) = Ĉ e^{Ât} B̂    (2.45)

has maximum rank almost everywhere in (0, t2].

Proof. Relation (2.44) with ρ = 1 becomes simply C Bd = Rq, i.e., C Bd of maximum rank. Let us consider equation (1.13). Note that C Bd = M(T) for any sampling time T, hence there exists a sampling period T ∈ (0, t2] such that the triple (Ad, Bd, C) has relative degree 1 if and only if M(t) has maximum rank at least for one t ∈ (0, t2]. Since M(t) is analytic, this condition must hold almost everywhere in (0, t2]. □

Remark. A very special case where the conditions of Theorem 2.56 are not satisfied is provided by the following continuous-time triple:

A = [0 0 0 0; 1 0 3 0; 0 0 0 1; 0 0 0 0] ,  B = [1 0; 0 0; 0 0; 0 1] ,  C = [1 0 2 0; 0 1 0 0]

In this case the rank of the 2×2 matrix M(t) is identically one. This example is due to Grizzle and Shor [5].
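The claim can be verified numerically: the following Matlab fragment builds the extended matrices of Theorem 2.56 for this triple and evaluates the rank of M(t) on a grid of sampling times (a simple check, not part of the toolbox).

A = [0 0 0 0; 1 0 3 0; 0 0 0 1; 0 0 0 0];
B = [1 0; 0 0; 0 0; 0 1];
C = [1 0 2 0; 0 1 0 0];
Ahat = [A, B; zeros(2, 6)];        % extended matrices of Theorem 2.56
Bhat = [zeros(4, 2); eye(2)];
Chat = [C, zeros(2, 2)];
for t = 0.1:0.1:1.0
    M = Chat * expm(Ahat * t) * Bhat;
    fprintf('t = %4.1f   rank M(t) = %d\n', t, rank(M));   % prints 1 for every t
end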

9 The ZOH equivalence refers to a digital-to-analog converter, applied at the input, that maintains the converted value constant between sampling times, like in Fig. 1.7, and an analog-to-digital converter that simply samples the output signal.


2.4.2 Invariant zeros

Roughly speaking, an invariant zero corresponds to a mode that, if suitably injected at the input of a dynamic system, can be nulled at the output by a suitable choice of the initial state.

Definition 2.57. (invariant zeros and invariant zero structure) The invariant zeros and the invariant zero structure of (A,B,C) are the eigenvalues and the eigenstructure of the linear map (A+BF)|V∗/R∗, where F denotes any friend of V∗, or the eigenvalues and the eigenstructure of the linear map (A+GC)|U∗/S∗, where G denotes any friend of S∗ and U∗ is the subspace V∗ + S∗.

Fig. 2.22. Decomposition of the map (A+BF)|V∗.

Algorithm 2.58. (computation of the invariant zeros) Let us refer to a triple (A,B,C). A matrix P representing the map (A+BF)|V∗/RV∗ up to an isomorphism is derived as follows. Let us consider the similarity transformation T = [T1 T2 T3 T4], with imT1 = RV∗, im[T1 T2] = V∗, im[T1 T3] = S∗ and T4 such that T is nonsingular. In the new basis the matrices A+BF and B are expressed by

A′ = T−1(A+BF)T = [A′11 A′12 A′13 A′14; O A′22 A′23 A′24; O O A′33 A′34; O O A′43 A′44] ,  B′ = T−1B = [B′1; O; B′3; O]    (2.46)

The requested matrix is P = A′22.

Proof. The zero submatrices in the first and second column of A′ depend on RV∗ and V∗ being (A+BF)-invariant subspaces, while the structure of B′ depends on S∗ containing B. Clearly matrix A′22 cannot be affected by any


further state feedback. □
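A direct Matlab sketch of Algorithm 2.58 is reported below; it assumes that basis matrices R of RV∗, Vs of V∗, Ss of S∗ and a friend F of V∗ are already available (e.g. from rvstar, vstar, sstar and effe), and the variable names used here are purely illustrative.

T1 = orth(R);                              % im T1 = RV*
T2 = orth(Vs - T1*(T1'*Vs));               % im [T1 T2] = V*
T3 = orth(Ss - T1*(T1'*Ss));               % im [T1 T3] = S*
T4 = null([T1, T2, T3]');                  % completion to a nonsingular T
T  = [T1, T2, T3, T4];
Ap = T \ ((A + B*F) * T);                  % A' of (2.46) in the new basis
nR = size(T1, 2);   nV2 = size(T2, 2);
P  = Ap(nR+1:nR+nV2, nR+1:nR+nV2);         % P = A'22
invariant_zeros = eig(P);                  % invariant zeros of (A,B,C)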

Remark. Algorithm 2.58 can also be used for a quadruple (A,B,C,D) by using the state extension defined in (2.32).

An LTI system Σ is said to be minimum phase if all its invariant zeros are stable, i.e., with negative real parts in the continuous-time case or with absolute value less than one in the discrete-time case. Hence the invariant zeros of Σ are the internal unassignable eigenvalues of V∗ and will be referred to with the symbol Z(V∗) or Z(Σ). By extension, we will denote with Z(V) the internal unassignable eigenvalues of any controlled invariant V.

Let us get some further insight into the structure of Σ.

Theorem 2.59. Let W be a real m×m matrix having the invariant zero structure of (A,B,C) as eigenstructure. A real p×m matrix L and a real n×m matrix X exist, with (W,X) observable, such that by applying to (A,B,C) the input function

u(t) = L eWt v0    (2.47)

where v0 ∈ Rm denotes an arbitrary column vector, and starting from the initial state x0 = X v0, the output y(·) is identically zero, while the state evolution (on kerC) is described by

x(t) = X eWt v0    (2.48)

Fig. 2.23. The meaning of Theorem 2.59.

Proof. Substitution of (2.47), (2.48) in the differential equation of the system – the former of (1.1) – yields

X W eWt v0 = A X eWt v0 + B L eWt v0

i.e.,

A X − X W = −B L

Let us refine the change of basis (2.46). Since the eigenvalues of A′11 can be arbitrarily placed by a suitable choice of matrix F, they can be assumed all different from those of A′22 without any loss of generality. On this assumption RV∗ is an (A+BF)-invariant complementable with respect to V∗, i.e., an


(A,B)-controlled invariant V exists such that RV∗ ⊕ V = V∗, (A+BF)V ⊆ V. Consider the change of basis defined by the transformation T := [T1 T2 T3 T4], with imT1 = RV∗, imT2 = V, im[T1 T3] = S∗, and T4 such that T is nonsingular. With respect to the new basis the transformed matrices A′ = T−1(A+BF)T, B′ = T−1B, C′ = CT have the structures

A′ = [A′11 O A′13 A′14; O A′22 A′23 A′24; O O A′33 A′34; O O A′43 A′44] ,  B′ = [B′1; O; B′3; O] ,  C′ = [O O C′3 C′4]    (2.49)

The statement follows by assuming

W = A′22 , X = T2 , L = F T2

In fact, it will be shown that the above matrices are such that

1. (X,W) is observable;
2. C X = O;
3. A X − X W = −B L.

The above property 1 is due to the rank of X being maximal and equal to the dimension of W, relation 2 follows from imX = V ⊆ V∗, while 3 is equivalent to

A T2 − T2 A′22 = −B F T2

i.e.,

(A + BF) T2 = T2 A′22

(imT2 is an (A+BF)-invariant with A′22 as internal eigenstructure), which directly follows from the change of basis (2.49). □

Remark. In the discrete-time case equations (2.47) and (2.48) are replaced by u(k) = L W^k v0 and x(k) = X W^k v0, respectively.

2.4.3 Matlab commands referring to Section 2.4

>> [z,X]=gazero(A,B,C[,D]); (or [z,X]=gazero(sys);) gives as z the column vector of the invariant zeros and as X the matrix of the invariant zero structure of the LTI system sys=ss(A,B,C,D) 10. The computational procedure is the following:

a) determine R∗ = V∗ ∩ S∗ and denote by R and V the basis matrices obtained for R∗ and V∗, and by nR, nV their numbers of columns;

10 To check whether a given controlled invariant V = imV is internally stabilizable, the command z=gazero(A,B,ortco(V)') can be used.


b) use V=ima([R,V],0) to obtain a suitably ordered basis of V∗;

c) use (2.27) and denote by X22 the (nV − nR)×(nV − nR) submatrix extracted from the bottom/right corner of X. The invariant zeros are the eigenvalues of X22 and the invariant zero structure is represented by X22 itself.

>> F=effesta(A,B,V,[1]); for any (A,B)-controlled invariant V = imV a state feedback matrix F is given such that (A+BF)V ⊆ V and the eigenvalues of (A+BF)|R∗ are assigned in interactive mode. With the three-argument call the computational procedure, similar to that of gazero, is

a) determine R∗ = V∗ ∩ S∗ and denote by R and V the basis matrices obtained for R∗ and V∗, and by nR the number of columns of R;

b) use V=ima([R,V],0) to obtain a suitably ordered basis of V∗;

c) use (2.27) and

K = ker [X; U]

and denote by A1 the nR×nR submatrix extracted from the top/left corner of the matrix on the left of (2.27) and by B1 the matrix consisting of the first nR rows of K.
d) since (A1, B1) is a controllable pair, it is possible to define a matrix F such that M = A1 + B1 F has arbitrary eigenvalues, defined in interactive mode;
e) replace the nR×nR submatrix at the top/left corner of the matrix on the left of (2.27) with M and use (2.28), thus deriving a state feedback matrix F1 achieving (A+B F1)-invariance and internal stability of imV;
f) set F = F1 and exit (three-argument call).

With the four-argument call the routine performs the following further computations.
g) compute Re=mininv(A,B);
h) perform the change of basis T=ima([V,Re],0), Ap = T−1(A+B F1)T, Bp = T−1 B, Fp = F1 T;
i) denote by nRe the number of columns of ima(Re);
l) denote by As the nRe×nRe matrix on the bottom-right of Ap and by Bs the matrix formed by the last nRe rows of Bp; since (As, Bs) is a controllable pair, it is possible to define a matrix F2 such that (As + Bs F2) has arbitrary eigenvalues;
m) replace the last nRe columns of Fp with F2 and compute the final state feedback matrix F = Fp T−1.


>> [V,F]=vstarg(A,B,C,D[,-1]); (or [V,F]=vstarg(sys);) computes a basis matrix V of V∗g, the maximum internally stable output-nulling controlled invariant subspace of the LTI system sys=ss(A,B,C,D[,-1]). The optional fifth argument “-1” refers to a discrete-time system. The computational procedure is strongly related to that of gazero: when matrix X22 has been determined at step c of the algorithm, use the utility subsplit 11 to derive the subspace Ws of the stable modes of X22. Through a suitable change of basis relating to V (the basis matrix of V∗), make Ws the image of the first nWs columns of X22 and assume as the first output matrix the first nR + nWs columns of the new V. The friend F is computed like in vstar.

>> r=reldeg(A,B,C[,D]); (or r=reldeg(sys);) provides the relative degree of the LTI system sys=ss(A,B,C,D). Let us momentarily assume that D = O. In this case the relative degree is the minimum value of i in the sequence provided by Algorithm 2.38 such that dim(V∗ + Si) = dim(V∗ + S∗). If D ≠ O, the relative degree is computed for the auxiliary system (2.31), (2.32) and lowered by one.

>> [F,T,nR,nV,sys1]=zbasis(sys); for a continuous or discrete-time purely dynamic LTI system sys=ss(A,B,C,0) or sys=ss(A,B,C,0,-1) provides the matrices F and T corresponding to the new basis (2.49), as nR and nV the dimensions of R∗ and of V∗, and as sys1 the corresponding system, equivalent to sys, in the new basis.

11 The routine [As,Au]=subsplit(A[,-1]) computes basis matrices As, Au of the subspaces of stable and unstable modes of a real square matrix A by using the Schur form.


3 Disturbance Decoupling and Unknown-Input State Observation

In this chapter we will present some leading applications of geometric control theory, which date back to the early seventies. At that time research on system inversion and disturbance decoupling came into fashion, but most of the solutions proposed then were incomplete, since an important feature of the overall system in view of practical application, internal stability, was in general neglected.

In fact, as a rule any satisfactory solvability condition of these problems consists of

• a structural necessary and sufficient condition
• a stabilizability necessary and sufficient condition

and the latter requires a subtle use of the concept of invariant zero, whose interpretation in geometric terms has been given before.

The statements will be strictly in terms of the geometric tools presented in Chapter 2. For every problem efficient computational tools, directly related to the proofs of the sufficiency conditions, are also available within the geometric approach software facilities.

3.1 Disturbance decoupling

When the output of a system must be made completely insensitive to an input signal we must distinguish three cases:

1. (inaccessible) disturbance decoupling
2. measurable signal decoupling
3. previewed signal decoupling

whose difference is basic for a correct statement and solution of the corresponding control problem. Feedback is strictly required only in the first case, since the second and third cases are solvable with feedforward, possibly applied to a system stabilized and/or strengthened with feedback.


Both the continuous-time case and the discrete-time case are herein considered, since the solutions may be significantly different 1. Hence the mathematical models referred to, with two inputs, are the following

ẋ(t) = A x(t) + B u(t) + H h(t)
y(t) = C x(t)    (3.1)

or

x(k+1) = A x(k) + B u(k) + H h(k)
y(k) = C x(k)    (3.2)

where u denotes the manipulable input, h the disturbance input. Let B = imB, H = imH, C = kerC.

3.1.1 Inaccessible disturbance decoupling by state feedback

Consider the continuous-time or discrete-time LTI system Σ with state feedback shown in Fig. 3.1.

Problem 3.1. (inaccessible disturbance decoupling with stability) Refer to the two-input system modeled by equations (3.1) or (3.2). Determine, if possible, a state feedback matrix F such that the disturbance h has no influence on output y and the overall system is internally stable.

Fig. 3.1. Inaccessible disturbance decoupling.

In spite of its apparent simplicity, the disturbance decoupling problem was not completely solved at the outset. The system with state feedback is described by

ẋ(t) = (A + BF) x(t) + H h(t)
y(t) = C x(t)

or

1 This is due to the decision to exclude distributions in signals, not always shared in the literature.


x(k+1) = (A+ BF ) x(k) +H h(k)

y(k) = C x(k)

It behaves as requested if and only if its reachable set by h, i.e., the minimum (A+BF)-invariant containing H, is contained in C. Denote by V∗(B,C) the maximum output-nulling (A,B)-controlled invariant contained in C. Since any (A+BF)-invariant is an (A,B)-controlled invariant, the inaccessible disturbance decoupling problem has a solution if and only if

H ⊆ V∗(B,C)    (3.3)

Condition (3.3) is a necessary and sufficient structural condition and does not ensure internal stability. If stability is requested, we have the inaccessible disturbance decoupling problem with stability. Stability is conveniently handled by using self-bounded controlled invariants.
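The structural condition (3.3) is a simple subspace inclusion and can be checked numerically with a rank test; the following one-line sketch assumes V is a basis matrix of V∗(B,C) (e.g. obtained from mainco(A,B,ker(C))).

% H ⊆ im V if and only if appending the columns of H does not increase the rank
structurally_solvable = (rank([V, H]) == rank(V));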

Definition 3.2. (self-bounded controlled invariant) Let B, C be subspaces of X. Let V be an (A,B)-controlled invariant contained in C, hence belonging to the semilattice V(A,B, C); V is said to be self-bounded with respect to C if

V ⊇ V∗ ∩ B

We define

Φ(B,C) = {V : AV ⊆ V + B , V ⊆ C , V ⊇ V∗ ∩ B}

If V ∈Φ(B, C), then V cannot be exited by means of any trajectory on C.

Property 3.3. Let F be a matrix such that (A+BF)V∗ ⊆ V∗, with V∗ = maxV(A,B, C). Any controlled invariant V self-bounded with respect to C satisfies (A+BF)V ⊆ V.

Proof. Refer to Vol. 1, Property 4.1.7. □

Property 3.4. The intersection of any two (A,B)-controlled invariants self-bounded with respect to C is an (A,B)-controlled invariant self-bounded with respect to C.

Proof. Refer to Vol. 1, Property 4.1.8. □

Theorem 3.5. Φ(B,C) is a non-distributive modular lattice with the binary operations +, ∩ and the partial ordering ⊆, whose supremum is V∗ and whose infimum is RV∗ = V∗ ∩ S∗.


Proof. Refer to Vol. 1, Theorem 4.1.4. □

Since H is a subspace of X, if H ⊆ V∗(B,C) it can be proven that V∗(B,C) = V∗(B+H,C), so that

Φ(B+H, C) = {V : AV ⊆ V + B + H , V ⊆ C , V ⊇ V∗ ∩ (B + H)}

is the lattice of (A, B+H)-controlled invariants with forcing action in B+H, self-bounded with respect to C, whose maximum is V∗(B,C) and whose minimum is

Vm = V∗(B,C) ∩ S∗(C,B+H)    (3.4)

Theorem 3.6. Let H ⊆ V∗(B,C). There exists at least one internally stabilizable (A,B)-controlled invariant V such that H ⊆ V ⊆ C if and only if Vm is internally stabilizable.

Proof. Refer to Vol. 1, Lemma 4.2.1. □

The change of basis shown in (2.49) also holds for Vm instead of V∗, with the stabilizability property of (A′33, B′3) still valid if (A,B,C) is stabilizable. Hence there is a state feedback Fm achieving disturbance decoupling and making the overall system stable if and only if Vm is internally stabilizable. This is stated in the following corollary.

Corollary 3.7. The disturbance decoupling problem with stability admits a solution if and only if

H ⊆ V∗(B,C)
Vm is internally stabilizable    (3.5)

From the structure pointed out by (2.49) it follows that the second condition in (3.5) is equivalent to

Z(Vm) ⊆ Cg (3.6)

Since Z(Vm) is a part of Z(V∗(B,C)), minimality of phase ensures stabilizability.

3.1.2 Measurable signal decoupling and unknown-input state observation

When the signal h is accessible for measurement the statement of the problem is different.

Problem 3.8. (measurable signal decoupling with stability) Refer to the two-input system modeled by equations (3.1). Determine, if possible, an algebraic or dynamic compensator such that the measurable signal h has no influence on output y and the overall system is internally stable.


This problem appears as a slight extension of the previous inaccessible disturbance decoupling problem, but indeed it is very different and opens out to many types of solutions. The first solution considered in the literature, based on state feedback and algebraic feedforward, is represented in Fig. 3.2. The structural necessary and sufficient condition for its solvability is

H ⊆ V∗(B,C) + B    (3.7)

In fact, if (3.7) holds, by assuming as matrix G in Fig. 3.2 a projection of H on V∗(B,C) along B, the action of input h is driven on V∗(B,C), hence it is invisible at the output.

Fig. 3.2. Measurable signal decoupling with state feedback.
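The projection of H on V∗(B,C) along B used above can be computed with the same pseudoinverse device as (2.27); the fragment below is only a sketch, valid when the decomposition is unique (left-invertible case), with V a basis matrix of V∗(B,C).

W  = pinv([V, B]) * H;         % solve [V B] * W = H (exactly, when H ⊆ V* + B)
nV = size(V, 2);
Hv = V * W(1:nV, :);           % component of H lying on V*
G  = -W(nV+1:end, :);          % feedforward gain of Fig. 3.2: H + B*G lies on V*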

It can be proven that the stabilizability condition for this problem is again expressed by (3.5) or (3.6), provided (3.7) is satisfied. However, the algebraic layout shown in Fig. 3.2 is not the most convenient for achieving decoupling of a measurable signal, since it requires accessibility of the state. Let us consider instead the feedforward connection shown in Fig. 3.3, where Σc could be a replica of the system with feedback shown in Fig. 3.2. It is worth noting that the dynamics of Σc can be restricted to Vm, which is an internally stable (A+BFm)-invariant containing the projection of H on Vm along B. Referring to the layout in Fig. 3.3 we can state the following result, which is again a consequence of Theorem 3.6.

Fig. 3.3. Measurable signal decoupling with a feedforward compensator.

Corollary 3.9. The measurable signal decoupling problem with stability admits a solution if and only if

H ⊆ V∗(B,C) + B
Vm is internally stabilizable    (3.8)


However, in this case Σ must be stable. But, if the system is unstable, how can we avoid feedback? This can be done, very easily, as shown in Fig. 3.4. If Σ is stabilizable from u and detectable from a suitable measurable output

Fig. 3.4. Using both a compensator and a stabilizer.

ym (possibly coinciding with y), it can be stabilized with a feedback unit Σs, which can be maintained at zero by the pre-compensator since this reproduces the state evolution (hence the output) of Σ, restricted to Vm. If ym = y this connection is not necessary. The stabilizer is standard, based on state feedback Fs such that A+BFs is strictly stable and output injection Gs such that A+GsC1 is strictly stable, i.e., Σs : (A + BFs + GsC1, −Gs, Fs). Input matrix −Gs is referred to both inputs. Note that the output of the stabilizer is identically zero, since its inputs due to the action of h on Σ and Σc cancel each other.

Fig. 3.5. Unknown-input observer of a linear function of the state.

Consider the continuous-time LTI system Σ with two outputs, shown in Fig. 3.5, described by

ẋ(t) = A x(t) + B u(t)
e(t) = E x(t)
y(t) = C x(t)    (3.9)

or

x(k+1) = A x(k) + B u(k)
e(k) = E x(k)
y(k) = C x(k)    (3.10)

The unknown-input observation problem of a linear function of the state (possibly the whole state) is stated as follows: design a stable feedforward unit that, connected to the output y, provides an exact estimate of the output e.


Fig. 3.6. Unknown-input observer with a stabilizer.

This problem was the object of very early investigation. More recently, owing to its connection with the fault detection problem, it has been the object of hundreds of non-geometric scientific papers based on appalling manipulations of matrices, but duality with the measurable signal decoupling problem was never recognized and pointed out. Its formal statement is the following.

Problem 3.10. (unknown-input observation) Refer to the two-output system modeled by equations (3.9). Determine, if possible, an observer such that the input u has no influence on output η.

The overall system in Fig. 3.5 is clearly the dual of measurable signal decoupling, considered in Fig. 3.3 (the summing junction on the right is only explanatory), so the design of an unknown-input observer can be considered a very standard problem in the geometric approach context. Of course the necessary and sufficient conditions to build a stable unknown-input observer Σo are still (3.3), (3.5) for the dual of system (3.9), obtained with the substitutions AT → A, BT → C, CT → B, ET → H. However, if an equivalent set of geometric conditions directly referred to the system matrices in (3.9) is sought after, these are stated as follows. First, define the dual of (3.4) as

SM = S∗(C,B) + V∗(B, C∩E)    (3.11)

and the dual of Corollary 3.7 as

Corollary 3.11. The unknown-input observation problem with stability admits a solution if and only if

S∗(C,B) ∩ C ⊆ E
SM is externally stabilizable    (3.12)

Also in this case, a stabilizer can be used if the system is unstable, as shown in Fig. 3.6. By duality, the output η is independent of both u and d (any disturbance acting on the stabilizer).

3.1.3 Previewed signal decoupling and delayed state estimation

It has previously been pointed out that a sufficient condition for stability in the above disturbance and measurable signal decoupling problems is minimality of phase of Σ. If Σ is not minimum-phase, however, it is still possible


to obtain decoupling if signal h is known in advance by a certain amount of time (several times the maximum time constant of the unstable zeros). In the continuous-time case the problem is stated as follows.

Problem 3.12. (previewed signal decoupling with stability) Refer to the two-input system modeled by equations (3.1). Determine, if possible, an FIR system and a dynamic compensator such that the measurable signal h has no influence on output y and the overall system is internally stable.

In this case the following result, still related to Theorem 3.6, can be stated.

Corollary 3.13. The measurable and previewed signal decoupling problem with stability admits a solution if and only if

H ⊆ V∗(B,C) + B
Z(Vm) ∩ C0 = ∅    (3.13)

where C0 denotes the imaginary axis of the complex plane.

Fig. 3.7. Previewed decoupling and delayed unknown-input observation.

Fig. 3.8. Preaction along the unstable zeros.

In Fig. 3.7.a h denotes the previewed signal, hp its value t0 seconds in advance, so that hp(t) = h(t + t0). The feedforward unit Σc includes a convolutor, also called a finite impulse response (FIR) system. Refer to Fig. 3.8 and suppose that a single impulse is applied at input h, i.e., assume h(t) = h̄ δ(t),


causing an initial state H h̄ at time zero. Since H ⊆ Vm + B owing to (3.7), this initial state can be projected on Vm along B, and decomposed into three components: a component on R∗(B,C), a component on the subspace of Vm corresponding to strictly stable zeros, and a component on the subspace of Vm corresponding to strictly unstable zeros. While the former two (state α) can be driven to the origin along stable trajectories on Vm, the latter, which corresponds to unstable motions on Vm, can be nulled by a preaction on u prior to its occurrence (in the time interval [−t0, 0]), thus cancelling it at t = 0 (state −α). This is obtained by means of the above-mentioned FIR system, where the convolution profile corresponding to the control action along the unstable zeros, computed backward in time once and for all, is suitably stored.

The overall system in Fig. 3.7.a is dualized as shown in Fig. 3.7.b, thus obtaining an unknown-input observer with delay. The FIR system included in Σc to steer the system along the unstable zeros is simply dualized by transposing its convolution profile for all t.

Discrete-time previewed control and delayed estimation

The conditions for previewed signal decoupling and delayed state estimation are somewhat less restrictive in the discrete-time case. Let us refer to the discrete-time systems (3.2) and (3.10) and consider the meaning of S∗ pointed out in Fig. 2.16, as the subspace reachable from the origin in a certain number of steps along a state trajectory invisible at the output at all the states except the last one. Vector Bu in Fig. 3.8 can be replaced by a vector belonging to S∗(C,B), hence the following results hold.

Corollary 3.14. (relative degree preview) The relative-degree previewed signal decoupling problem with stability admits a solution if and only if

H ⊆ V∗(B,C) + S∗(C,B)
Vm is internally stabilizable    (3.14)

where Vm is defined again by (3.4).

Note that the first condition in (3.14) is satisfied if Σ is right-invertible and the second is satisfied if it is minimum-phase.

Property 3.15. (large preview) The “largely” previewed signal decoupling problem with stability admits a solution if and only if

H ⊆ V∗(B,E) + S∗(C,B)
Vm has no unassignable eigenvalue on the unit circle    (3.15)


Fig. 3.9. Input sequence for decoupling an impulse at time ρ (preaction, dead-beat, postaction).

Proof. Suppose that an impulse is scheduled at input h at time ρ. It can be decoupled with an input signal u of the type shown in Fig. 3.9, with preaction concerning the unstable zeros and postaction the stable zeros. Denote by ρ the least integer such that H ⊆ V∗(B,E) + Sρ. Let us recall that Vm is a locus of initial states in C corresponding to trajectories controllable indefinitely in C, while Sρ is the maximum set of states that can be reached from the origin in ρ steps with all the states in C except the last one. Suppose that an impulse is applied at input h at the time instant ρ, producing an initial state xh ∈ H, decomposable as xh = xh,s + xh,v, with xh,s ∈ Sρ and xh,v ∈ Vm. Let us apply the control sequence that drives the state from the origin to −xh,s along a trajectory in Sρ, thus nulling the first component. The second component can be maintained on Vm by a suitable control action in the time interval ρ ≤ k < ∞ while avoiding divergence of the state if all the internal unassignable modes of Vm are stable or stabilizable. If not, it can be further decomposed as xh,v = x′h,v + x′′h,v, with x′h,v belonging to the subspace of the stable or stabilizable internal modes of Vm and x′′h,v to that of the unstable modes. The former component can be maintained on Vm as before, while the latter can be nulled by reaching −x′′h,v with a control action in the time interval −∞ < k ≤ ρ − 1 corresponding to a trajectory in Vm from the origin.

3.1.4 Disturbance decoupling by dynamic output feedback

This is an extension of the inaccessible disturbance decoupling problem with state feedback, considered in Subsection 3.1.1.

Consider the continuous-time LTI system Σ with two inputs and two outputs, shown in Fig. 3.10, described by

ẋ(t) = A x(t) + B u(t) + H h(t)
y(t) = C x(t)
e(t) = E x(t)    (3.16)

The inputs u and h are the manipulable input and the disturbance input, respectively, while outputs y and e are the measured output and the controlled


Fig. 3.10. Disturbance decoupling by dynamic output feedback.

output, respectively. Let B = imB, H = imH, C = kerC, E = kerE. Σc denotes a feedback dynamic compensator, described by

ż(t) = N z(t) + M y(t)
u(t) = L z(t) + K y(t)    (3.17)

Problem 3.16. (disturbance decoupling problem by output feedback) Design, if possible, a dynamic compensator (N,M,L,K) such that the disturbance h has no influence on the regulated output e and the overall system is stable.

It has been shown in Subsection 2.2.6 that output dynamic feedback of the type shown in Fig. 3.10 enables stabilization of the overall system provided that (A,B) is stabilizable and (A,C) detectable. Since overall system stability is required, these conditions on (A,B) and (A,C) are still necessary.

The overall system is described by

ẋ̂(t) = Â x̂(t) + Ĥ h(t)
e(t) = Ê x̂(t)    (3.18)

with

x̂ = [x; z] ,  Â = [A+BKC BL; MC N] ,  Ĥ = [H; 0] ,  Ê = [E 0]    (3.19)

i.e., it can be described by a unique triple (Â, Ĥ, Ê).

Fig. 3.11. The overall system.
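Given a candidate compensator (N,M,L,K), the extended matrices (3.19) are assembled directly; the following Matlab fragment is a sketch of this bookkeeping (all variable names illustrative).

Ahat = [A + B*K*C, B*L; M*C, N];              % closed-loop extended dynamics
Hhat = [H; zeros(size(N,1), size(H,2))];      % disturbance enters the plant only
Ehat = [E, zeros(size(E,1), size(N,1))];      % regulated output reads the plant state
% Decoupling with stability then amounts to finding an Ahat-invariant between
% im Hhat and ker Ehat that is internally and externally stable, cf. (3.20).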


Output e is decoupled from input h if and only if minJ(Â, imĤ) (the reachable subspace of the pair (Â, Ĥ)) is contained in kerÊ or, equivalently, imĤ is contained in maxJ(Â, kerÊ). Furthermore, in order for the stability requirement to be satisfied, Â must be a stable matrix or minJ(Â, imĤ) and maxJ(Â, kerÊ) must be both internally and externally stable.

Stated in very simple terms, disturbance decoupling is achieved if and only if the overall system (Â, Ĥ, Ê) exhibits at least one Â-invariant Ŵ such that

imĤ ⊆ Ŵ ⊆ kerÊ
Ŵ is internally and externally stable    (3.20)

Necessary and sufficient conditions for solvability of our problem are stated in the following theorem.

Theorem 3.17. The dynamic measurement feedback disturbance decoupling problem with stability admits at least one solution if and only if there exist an (A,B)-controlled invariant V and an (A, C)-conditioned invariant S such that:

H ⊆ S ⊆ V ⊆ E
S is externally stabilizable
V is internally stabilizable    (3.21)

Proof. A short outline of the “only if” part of the proof. Define the following operations on subspaces of the extended state space X̂:

Projection:
P(Ŵ) = {x : [x; z] ∈ Ŵ for some z}    (3.22)

Intersection:
I(Ŵ) = {x : [x; 0] ∈ Ŵ}    (3.23)

Clearly, I(Ŵ) ⊆ P(Ŵ), H = I(imĤ) = P(imĤ), E = P(kerÊ) = I(kerÊ). The “only if” part of the proof of Theorem 3.17 follows from (3.20) and the following lemmas.

Lemma 3.18. Subspace Ŵ is an internally and/or externally stable Â-invariant only if P(Ŵ) is an internally and/or externally stabilizable (A,B)-controlled invariant.

Lemma 3.19. Subspace Ŵ is an internally and/or externally stable Â-invariant only if I(Ŵ) is an internally and/or externally stabilizable (A, C)-conditioned invariant.


The “if” part of the proof is constructive, i.e., if a resolvent pair (S,V) is given, it directly provides a compensator (N,M,L,K) satisfying all the requirements in the statement of the problem. This consists of a special type of state observer fed by the measured output y plus a special feedback connection from the observer state to the manipulable input u. □

A more constructive set of necessary and sufficient conditions, based on the dual lattice structures of self-bounded controlled invariants and their duals, providing a convenient resolvent pair, is stated in the following theorem.

Theorem 3.20. Consider the subspaces Vm and SM defined in (3.4) and (3.11). The dynamic measurement feedback disturbance decoupling problem with stability admits at least one solution if and only if

S∗(C,H) ⊆ V∗(B,E)
SM is externally stabilizable
VM = Vm + SM is internally stabilizable    (3.24)

If Theorem 3.20 holds, (SM, VM) is a convenient resolvent pair. Similarly, define Sm = Vm ∩ SM. It can easily be proven that (Sm, Vm) is also a convenient resolvent pair.

Note that conditions (3.24) consist of a structural condition ensuring feasibility of disturbance decoupling without internal stability and two stabilizability conditions ensuring internal stability of the overall system.

The layout of the possible resolvent pairs in the dual lattice structure is shown in Fig. 3.12, which also points out the correspondences between any self-bounded controlled invariant belonging to the first lattice and an element of the second and vice versa. This makes it possible to derive other resolvent pairs satisfying Theorem 3.17.

Fig. 3.12. The resolvents with minimum fixed poles (lattices of V∗(B,E), VM, Vm and of SM, Sm, S∗(C,H), related by the operations +Vm and ∩SM).


3.1.5 Matlab commands referring to Section 3.1

>> [Ac,Bc,Cc,Dc]=hud(A,B,C,H[,D,G]); (or [sysc]=hud(sys,H[,G]);) computes a feedforward decoupling compensator solving Problem 3.8. The first four matrices in the call list are those appearing in equations (3.1), while D and G refer to possible feedthroughs from u and h to y. First, assume that these feedthrough terms are absent. Assume that the system is left invertible since, if not, a preliminary squaring down can be used. First, the former of conditions (3.8) in Corollary 3.9 is checked and, if it is satisfied, a basis matrix V of the (A,B)-controlled invariant Vm defined in (3.4) is computed. Then use (2.27) and set Ac = X, Cc = −U. Owing to the left-invertibility assumption, the projections of H on Vm and imB are unique, so that matrices Bc and Dc can be derived as

[Bc; −Dc] = [V B]# H

If the matrices D and G are present and nonzero, substitute the quadruple (A,B,C,H) with (A1, B1, C1, H1) defined by

A1 = [A O; C O] ,  B1 = [B; D] ,  C1 = [O I] ,  H1 = [H; G]    (3.25)

>> [Am,Bm,Cm,Dm,Fs,Us]=extendf(A,B,C,D); given an LTI system described by the quadruple (A,B,C,D) that is non-left invertible (i.e., typically with more inputs than outputs), computes Fs and Us such that the new system sysm=ss(Am,Bm,Cm,Dm) is left-invertible and, besides the zeros of (A,B,C,D), has a number of new zeros equal to dimR∗, defined in interactive mode.

Fig. 3.13. Squaring down of a non-left invertible system.

The new system is defined by Am = A + B Fs; Bm = B Us; Cm = C + D Fs; Dm = D Us. When a state feedback matrix Fm has been derived for (Am, Bm, Cm, Dm), the solution referred to (A,B,C,D) is recovered by


using F=Us*Fm+Fs. This procedure is called squaring down and is used to deal with non-left invertible systems when synthesizing controllers with computational processes that require left invertibility 2. If D = O, Fs is computed as Fs=effesta(A,B,Rv), where Rv denotes a basis matrix of R∗ = V∗ ∩ S∗ 3, while Bm is defined as Bm = B Us, where Us is a basis matrix of (B−1 V∗)⊥, computable with the Matlab command Bm=B*ortco(invt(B,vstar(A,B,C))). If D ≠ O, the above procedure is still valid if used for the triple (A1, B1, C1) defined in (2.32).

3.2 Model following

The model following problem has been deeply investigated by numerous authors. In the geometric approach context, it was first approached by Morse in 1973. Model following is a particular case of measurable signal decoupling, and its solution in this context is straightforward and appealing. Using self-bounded controlled invariants and the geometric interpretation of invariant zeros makes the most recent contributions very complete. In fact, they consider both feedforward and feedback and the nonminimum-phase case.

Fig. 3.14. Model following.

Refer to Fig. 3.14. The model following problem is stated as follows.

Problem 3.21. (model following) Determine, if possible, a dynamic feedforward compensator Σc such that the output of the system Σ strictly follows (is equal to) the output of a given model Σm.

Definition 3.22. (minimum delay) The minimum delay γ of a triple (A,B,C) is the minimum value of i such that C A^i B is nonzero.

Theorem 3.23. Let Σ and Σm have no common poles and zeros and be assumed to be stable, square, left and right invertible. Problem 3.21 has a solution if

2 Typical examples are the Matlab routines care and dare to solve the infinite-horizon LQR problem for continuous and discrete-time systems, respectively.
3 Recall that R∗ ≠ {0} if and only if V∗ ∩ imB ≠ {0}.


ρ(Σ) ≤ γ(Σm) (3.26)

Z(Σ) ⊆ Cg (3.27)

where ρ and γ denote the relative degree and the minimum delay, respectively.

Proof. The block diagram shown in Fig. 3.14 clearly repeats that in Fig. 3.3, provided the model is considered a part of the controlled system.

Assume that Σ is described by the triple (A,B,C) and Σm by the triple (Am, Bm, Cm). The overall system Σ̂ is described by

Â = [A 0; 0 Am] ,  B̂ = [B; 0] ,  Ĥ = [0; Bm] ,  Ĉ = [C −Cm]    (3.28)

The structural condition expressed by inclusion (3.7) is satisfied if condition (3.26) holds. This is a straightforward consequence of Definition 2.47. Hence the structural condition is satisfied if a model is chosen with a sufficiently high minimum delay. Consider the stabilizability condition (3.6). Assuming that Σ and Σm have no equal invariant zeros, it can be shown that the internal eigenvalues of Vm are the union of the invariant zeros of Σ and the eigenvalues of Am, so that in general model following with stability is not achievable if Σ is nonminimum-phase. □

Condition (3.27) can be evaded by properly replicating in Σm the unstable zeros of Σ. This can be achieved, for instance, by assuming a model Σm consisting of q independent single-input single-output systems all having as zeros the unstable invariant zeros of Σ, so that these are cancelled as internal eigenvalues of Vm. This makes it possible to achieve both input-output decoupling and internal stability, but restricts the model choice. If Σ is nonminimum phase, perfect or almost perfect following of a minimum phase model may also be achieved if h is previewed by a significant time interval t0, as pointed out in Subsection 3.1.3.

Fig. 3.15. Model following with feedback.

In the geometric approach context the model following problem with feedback, corresponding to the block diagram shown in Fig. 3.15, is also easily


solvable. Like in the feedforward case, both Σ and Σm are assumed to be stable and Σm to have at least the same relative degree as Σ.

Fig. 3.16. A structurally equivalent connection.

Fig. 3.17. Another structurally equivalent connection.

Replacing the feedback connection with that shown in Fig. 3.16 does not affect the structural properties of the system. However, it may affect stability. The new block diagram represents a feedforward model following problem. In fact, note that h is obtained as the difference of r (applied to the input of the model) and ym (the output of the model). This corresponds to the parallel connection of Σm and a diagonal algebraic system with gain −1, which is invertible, having zero relative degree. Its inverse is Σm with a feedback connection through the identity matrix, as shown in Fig. 3.17. Let the model consist of q independent single-input single-output systems all having as zeros the unstable invariant zeros of Σ. Since the invariant zeros of a system are preserved under any feedback connection, a feedforward model following compensator designed with reference to the block diagram in Fig. 3.17 does not include them as poles. It is also possible to include multiple internal models in the feedback connection shown in the figure (this is well known in the single input/output case), which are repeated in the compensator, so that both Σ′m and the compensator may be unstable systems. In fact, zero output in the modified system may be obtained as the difference of diverging signals.


However, stability is recovered when going back to the original feedback connection represented in Fig. 3.15.

3.2.1 Matlab commands referring to Section 3.2

>> r=rhomin(A,B,C[,D]); (or r=rhomin(sys);) provides the minimum delay of the LTI system sys=ss(A,B,C,D). When D=O the minimum delay is computed as the minimum value of i such that C A^i B ≠ O. When D ≠ O the minimum delay is computed referring to the auxiliary system (2.32) and lowered by one.

3.3 The multivariable regulator problem

The multivariable regulator problem was first approached and solved by Francis in 1977. Its reformulation using the geometric approach tools is reported in the Basile and Marro book.

Fig. 3.18. The multivariable regulator with internal model.

A convenient reference block diagram for the multivariable regulator with internal model is shown in Fig. 3.18. It extends to the multivariable case the design philosophy usually adopted in the standard approach to single-input single-output control system design. The exosystem Σe generates the exogenous signals, r and d, applied to the regulation loop including the plant Σ and a regulator Σr to be determined. For instance, the exogenous signals may be steps, ramps, or sinusoids. The eigenvalues of Σe are assumed to belong to the imaginary axis of the complex plane. The overall system considered, including the exosystem, is described by a linear homogeneous set of differential equations, whose initial state is the only variable affecting the evolution in time. The plant and the exosystem are modelled as a unique regulated system which is not completely controllable or stabilizable (the exosystem is not controllable). The corresponding equations are

ẋ(t) = A x(t) + B u(t)
e(t) = E x(t)      (3.29)


with

x = [ x1 ; x2 ],  A = [ A1  A3 ; 0  A2 ],  B = [ B1 ; 0 ],  E = [ E1  E2 ]

In (3.29) the plant corresponds to the triple (A1, B1, E1). Note that the exosystem state x2 influences both the plant, through matrix A3, and the error e, through matrix E2. The pair (A1, B1) is assumed to be stabilizable and (A,E) detectable. The regulator is modelled by equations (3.17), like in the disturbance decoupling problem by dynamic output feedback.
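For illustration, the plant-plus-exosystem model (3.29) can be assembled as follows, assuming the plant (A1,B1,E1), the exosystem matrix A2 and the coupling matrices A3, E2 are given (the variable names are only illustrative).

% Regulated system (3.29): plant plus exosystem (sketch).
n1 = size(A1,1);  n2 = size(A2,1);
A = [A1, A3; zeros(n2,n1), A2];        % the exosystem is not controllable from u
B = [B1; zeros(n2, size(B1,2))];
E = [E1, E2];                          % error e = E1*x1 + E2*x2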

The overall system is referred to as the autonomous extended system

x̂̇(t) = Â x̂(t)
e(t) = Ê x̂(t)      (3.30)

with

x̂ = [ x1 ; x2 ; z ],  Â = [ A1 + B1KE1   A3 + B1KE2   B1L ; 0   A2   0 ; ME1   ME2   N ],  Ê = [ E1  E2  0 ]

Problem 3.24. (autonomous regulator problem) Derive, if possible, a regulator Σr : (N,M,L,K) such that the closed-loop system with the exosystem disconnected is stable and lim_{t→∞} e(t) = 0 for all the initial states of the autonomous extended system.

Let x1 ∈ R^n1, x2 ∈ R^n2, z ∈ R^m. If the internal model principle is used to design the regulator, the autonomous extended system is characterized by an unobservability subspace containing the exosystem modes (reproduced by the internal model), which are all not strictly stable by assumption. In geometric terms, an Â-invariant W ⊆ ker Ê having dimension n2 exists, which is internally not strictly stable, but externally strictly stable. In fact, since the eigenvalues of Â are clearly those of A2 plus those of the regulation loop, which are strictly stable, W is externally strictly stable. Hence Â|W has the eigenstructure of A2 (n2 eigenvalues) and Â|X̂/W that of the control loop (n1 + m eigenvalues). Hence in geometric terms Problem 3.24 is re-stated as follows.

Problem 3.25. (autonomous regulator problem in geometric terms) Refer to the extended system (3.30) and let E = ker Ê. Given the mathematical model of the plant and the exosystem, determine, if possible, a regulator (N,M,L,K) such that an Â-invariant W exists satisfying

W ⊆ E
σ(Â|X̂/W) ⊆ Cg      (3.31)


In the extended state space X̂ with dimension n1 + n2 + m, define the Â-invariant extended plant P̂ as

P̂ = { x̂ : x2 = 0 } = im [ In1  0 ; 0  0 ; 0  Im ]      (3.32)

By a dimensionality argument, the Â-invariant W, besides (3.31), must satisfy

W ⊕ P̂ = X̂      (3.33)

The main theorem on asymptotic regulation simply translates the extended state space conditions (3.31) and (3.33) into the plant plus exosystem state space, where the matrices A, B and E are defined. Define the A-invariant plant P through

P = { x : x2 = 0 } = im [ In1 ; 0 ]      (3.34)

Theorem 3.26. Let E = ker E. The autonomous regulator problem admits a solution if and only if an (A,B)-controlled invariant V exists such that

V ⊆ E
V ⊕ P = X      (3.35)

The “only if” part of the proof derives from (3.31) and (3.33), while the “if” part provides a quadruple (N,M,L,K) that solves the problem. Unfortunately, the necessary and sufficient conditions stated in Theorem 3.26 are nonconstructive. The following theorem provides constructive sufficient and almost necessary⁴ conditions in terms of the invariant zeros of the plant.

Theorem 3.27. Let us define V∗ = maxV(A, B, E). The autonomous regulator problem admits a solution if

V∗ + P = X
Z(A1, B1, E1) ∩ σ(A2) = ∅      (3.36)

Proof. Let F be a matrix such that (A + BF) V∗ ⊆ V∗. Introduce the similarity transformation T = [T1 T2 T3], with im T1 = V∗ ∩ P, im [T1 T2] = V∗ and T3 such that im [T1 T3] = P. In the new basis the linear transformation A + BF has the structure

⁴ The conditions become necessary if boundedness of the control variable u is required. This is possible also when the output y is unbounded, if a part of the internal model is contained in the plant.


A' = T^{-1} (A + BF) T = [ A'11  A'12  A'13 ; O  A'22  O ; O  O  A'33 ]      (3.37)

Recall that P is an A-invariant and note that, owing to the particular structure of B, it is also an (A + BF)-invariant for any F.

By a dimensionality argument the eigenvalues of the exosystem are those of A'22, while the invariant zeros of (A1, B1, E1) are a subset of σ(A'11), since RV∗ is contained in V∗ ∩ P. All the other elements of σ(A'11) are arbitrarily assignable with F. Hence, owing to (3.36), the Sylvester equation

A'11 X − X A'22 = −A'12      (3.38)

admits a unique solution. The matrix

V = T1 X + T2

is a basis matrix of an (A,B)-controlled invariant V satisfying the solvability conditions (3.35). □
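The computational core of the proof is the Sylvester equation (3.38). A hedged Matlab sketch, assuming the blocks A11p, A12p, A22p of (3.37) and the submatrices T1, T2 of the change of basis have already been computed (the names are illustrative), is

% Resolvent computation sketch for Theorem 3.27.
% lyap(A,B,C) solves A*X + X*B + C = 0, so the call below solves (3.38):
% A11p*X - X*A22p = -A12p.
X = lyap(A11p, -A22p, A12p);
V = T1*X + T2;          % basis matrix of a resolvent satisfying (3.35)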

Remarks.

• If the plant is invertible and conditions (3.36) are satisfied, a unique (A,B)-controlled invariant V satisfying conditions (3.35) exists.
• The proof of Theorem 3.27 provides the computational framework to derive a resolvent when the sufficient conditions stated (which are also necessary if boundedness of the plant input is required) are satisfied.
• Relations (3.36) are respectively a structural condition and a stabilizability condition in terms of invariant zeros; they are easily checkable by means of the algorithms previously described.
• The stability condition is very mild in this case, since it only requires that the plant has no invariant zeros equal to eigenvalues of the exosystem.
• When a resolvent has been determined by means of the computational procedure described in the proof of Theorem 3.27, it can be used to derive a regulator with the procedure outlined in the “if” part of the proof of Theorem 3.26.
• The order of the obtained regulator is n (that of the plant plus that of the exosystem), with the corresponding 2n1 + n2 closed-loop eigenvalues completely assignable under the assumption that (A1, B1) is controllable and (E, A) observable.
• The internal model principle is satisfied since, from the proof of the “if” part of Theorem 3.26, it follows that the eigenstructure of the regulator system matrix N contains that of A2.
• It is necessary to repeat an exosystem for every regulated output to achieve independent steady-state regulation (different internal models are obtained in the regulator).


• The autonomous regulator problem may also be solvable if the plant is nonminimum phase; the minimum-phase property is only required for perfect tracking, not for asymptotic tracking.

3.3.1 Matlab commands referring to Section 3.3

>> [L,M,N,K]=regtr(A,B,C,J,P1,P2,[infor,infor1]); regulator design with the geometric approach tools.
A,B,C  the matrices of the plant
J      the Jordan block of the internal model
P1     the eigenvalues to be assigned by state feedback (nP1 = nA)
P2     the eigenvalues to be assigned by output injection (nP2 = nA + mE*nJ)
infor  if present and equal to 1, some information on the design is displayed
infor1 if present and equal to 1, some data are saved in the file regdata.mat.
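As a hedged usage example (the numerical values and the choice of distinct eigenvalues are arbitrary, not prescribed by the routine), a regulator rejecting step disturbances and tracking step references can be requested with a one-dimensional internal model per regulated output:

% Illustrative call of regtr for step exogenous signals (internal model in s = 0).
nA = size(A,1);  mE = size(C,1);  nJ = 1;
J  = 0;                                % 1x1 Jordan block of the internal model
P1 = -(1:nA);                          % nA eigenvalues assigned by state feedback
P2 = -(1:(nA + mE*nJ)) - 0.5;          % nA + mE*nJ eigenvalues assigned by output injection
[L,M,N,K] = regtr(A,B,C,J,P1,P2);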

3.4 Noninteraction and fault detection and identification

Another problem that, in the geometric approach context, was initially approached with state feedback, even though it involves measurable exogenous signals, is noninteracting control. The problem is stated as follows.

Problem 3.28. (noninteracting control) Given an LTI system Σ whose output is partitioned by blocks (y1, y2, . . .), derive a controller with as many inputs (α1, α2, . . .) such that input αi allows complete reachability of output yi while maintaining all the other outputs at zero.

Only two output blocks y1 and y2 will herein be considered for the sake of simplicity. Therefore Σ is described by

ẋ(t) = A x(t) + B u(t)
y1(t) = C1 x(t)
y2(t) = C2 x(t)      (3.39)

with x ∈ R^n, u ∈ R^p, y1 ∈ R^q1, y2 ∈ R^q2.

Fig. 3.19. Noninteracting control with state feedback: a) with algebraic feedforward units G1, G2 and state feedback F; b) with a state extension.


Fig. 3.19.a shows the solution proposed by Wonham and Morse in their first paper on the geometric approach, based on state feedback and algebraic feedforward units. This solution was probably suggested by the measurable signal decoupling layout with algebraic feedforward and feedback shown in Fig. 3.2. The completely algebraic solution shown in Fig. 3.19.a includes two feedforward units G1, G2 and state feedback F. Achieving the most complete noninteraction with this technique is more restrictive than with other methods since, in general, the same state feedback cannot transform any two controlled invariants into simple (A+BF)-invariants. This drawback can be overcome by extending the state with a suitable bank of integrators, as proposed by Wonham and Morse in their second paper and shown in Fig. 3.19.b.

Fig. 3.20. Noninteracting control with dynamic feedforward units.

An alternative solution, inspired by the measurable signal decoupling by means of a dynamic feedforward unit, whose layout is shown in Fig. 3.3, was proposed by Basile and Marro.

Theorem 3.29. Let C1 = ker C1, C2 = ker C2, B = im B. Problem 3.28 is solvable if and only if

C1 R2 = R^q1
C2 R1 = R^q2      (3.40)

where Ri is the constrained reachability subspace on Ci (i = 1, 2).

Proof. Refer to Fig. 3.20 and suppose for a while that the controlled system Σ is stable. Let yi ∈ R^qi, Ci = ker Ci (i = 1, 2). The maximum subspace that can be reached from the origin while being invisible at output y2 is R∗(B,C2), and the maximum subspace that can be reached from the origin while being invisible at output y1 is R∗(B,C1). These subspaces are internally stabilizable (or, more exactly, pole assignable) controlled invariants whose dynamics can be reproduced in the feedforward units Σ1 and Σ2. Hence noninteraction is possible if and only if conditions (3.40) hold. □

Equivalent expressions for conditions (3.40) are

C1 + (V∗2 ∩ S∗2) = R^n
C2 + (V∗1 ∩ S∗1) = R^n      (3.41)
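A hedged sketch of the solvability test (3.40), assuming that basis matrices R1 and R2 of the constrained reachability subspaces on ker C1 and ker C2 have already been computed (for instance with the geometric routines recalled earlier), is

% Conditions (3.40): C1*R2 and C2*R1 must span the corresponding output spaces.
q1 = size(C1,1);  q2 = size(C2,1);
solvable = (rank(C1*R2) == q1) && (rank(C2*R1) == q2);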


Remark. The above conditions imply that system Σ is not left-invertible with respect to any output yi (i = 1, 2).

There is a certain degree of freedom in the choice of the inputs α1 and α2. They can be assumed of full dimension, i.e., corresponding to input matrices spanning the whole subspaces R∗(B,C2) and R∗(B,C1). On the other hand, since only dynamic reachability is required, owing to a well-known lemma by Heymann they can also be assumed to be scalar without affecting problem solvability. It is not necessary that the controlled system Σ is stable: similarly to the measurable disturbance decoupling problem, only stabilizability and detectability are required. The overall block diagram for this case, similar to that in Fig. 3.4, is shown in Fig. 3.21.

Fig. 3.21. Using a stabilizer in feedforward noninteracting control.

Fig. 3.22. Nulling a measurable output.

Note that the stabilizer shown in Fig. 3.21 is not influenced by the inputs α1 and α2, since the measured output ym (from which Σ is detectable) can be nulled by a signal −ym generated in the feedforward units, where a replica of the state evolution produced by α1 and α2 is available. This detail is pointed out in Fig. 3.22.

Fig. 3.23. A block diagram for fault detection and identification.


Fig. 3.24. A more general layout for the FDI problem.

Let us now consider the block diagram in Fig. 3.23, which clearly is the dual of that in Fig. 3.22. The mathematical model of Σ is

ẋ(t) = A x(t) + Bm um(t) + B1 u1(t) + B2 u2(t)
y(t) = C x(t)      (3.42)

where um refers to a measurable input, while both u1 and u2 are assumed to be inaccessible. The fault detection and identification problem is stated as follows.

Problem 3.30. (fault detection and identification) Given an LTI system Σ having an inaccessible input partitioned by blocks (u1, u2, . . .), derive an observer with as many scalar outputs (α1, α2, . . .) such that output αi is different from zero if any component of ui is different from zero, while all the other outputs are maintained at zero.

A geometric approach to this problem was first proposed by Massoumnia and restated in improved terms by Massoumnia, Verghese and Willsky.

A more general layout for the fault detection and identification problem herein considered is shown in Fig. 3.24. System Σ is described by

ẋ(t) = A x(t) + B um(t) + H d(t) + ∑_{i=1}^{h} Bi ui(t)      (3.43)

y(t) = C x(t) [ + ∑_{j=1}^{k} Dj vj(t) ]      (3.44)

where (A,B,C) is the nominal system (without any fault or disturbance input) with input um (accessible for measurement), and

• ui (i = 1, . . . , h) the actuator fault inputs,
• Bi = (∆B)i the corresponding signatures,
• vj (j = 1, . . . , k) the sensor fault inputs,
• Dj = (∆C)j the corresponding signatures,
• d the disturbance input.


The fault detecting units provide the outputs

• αℓ (ℓ = 1, . . . , h + k), called the residuals. The residuals are herein assumed to be scalar.

The fault inputs are assumed to be

• white noises in on-line processes,
• impulses in single-throw processes or batch processes.

They are usually applied through filters or exosystems whose dynamics is included in matrix A, so that the actual faults are modelled as colored noise or a specific transient, respectively. Hence the sensor faults are often taken into account with further terms of the type Bi ui. This is the reason for the square brackets in equation (3.44). Thus the overall system, whose layout is shown in Fig. 3.24, is led back to that in Fig. 3.23, provided that the disturbance d is considered as a fault to be rejected by all the detecting units.

3.4.1 Matlab commands referring to Section 3.4


4 Geometric approach to H2-optimal regulation and filtering

The study of the Kalman linear-quadratic regulator (LQR) is the central topic of most courses and treatises on advanced control systems. See, for instance, the books by Kwakernaak and Sivan (1973), Anderson and Moore (1989), Syrmos and Lewis (1995). More recently, in the nineties, a certain attention was given to H2 optimal control, which is substantially a restatement of the LQR with some standard and well-settled problems of the geometric approach (for instance, disturbance decoupling with output feedback). Feedthrough is not present in general, so that the standard Riccati equation-based solutions are not implementable and the existence of an optimal solution is not ensured. Books on this subject are by Stoorvogel (1992) and by Saberi, Sannuti and Chen (1995). The computational methods they use to solve the H2-optimal problem are linear matrix inequalities (LMIs), supported by a “special coordinate basis” that points out the geometric features of the systems dealt with.

An alternative approach, which will be briefly presented in the following, is to treat the singular and cheap problems, where feedthrough is not present, by directly referring to the LTI system obtained by differentiating the Hamiltonian function, which can be considered as a generic dynamic system with all the previously described features.

4.1 Disturbance decoupling in minimal H2-norm

Let us recall that the H2-norm of a continuous-time LTI system Σ represented by the equations (1.1) is defined as

‖Σ‖2 = ( tr ( ∫_0^∞ g^T(t) g(t) dt ) )^{1/2}      (4.1)

where “tr” denotes the trace of a matrix, namely the sum of the elements on the main diagonal, and g(t) denotes the impulse response of the system, namely the matrix of time functions whose columns are the solutions of the autonomous system ẋ(t) = A x(t), y(t) = C x(t) with initial conditions corresponding to the columns of B.
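For a stable, strictly proper system the integral in (4.1) can be evaluated through the controllability Gramian; a minimal Matlab sketch, assuming sys = ss(A,B,C,0) is internally stable, is

% H2 norm of a stable triple (A,B,C) via the controllability Gramian.
Wc  = lyap(A, B*B');           % solves A*Wc + Wc*A' + B*B' = 0
nH2 = sqrt(trace(C*Wc*C'));    % same value returned by norm(ss(A,B,C,0), 2)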

Likewise, for a discrete-time LTI system Σd represented by the difference equations (1.2), the H2-norm is

‖Σd‖2 = ( tr ( ∑_{k=0}^{∞} gd^T(k) gd(k) ) )^{1/2}      (4.2)

where the impulse response gd(k) is defined like in the continuous-time case, but referring to the autonomous system x(k+1) = A x(k), y(k) = C x(k).

The Parseval theorem states that equivalent expressions, referred to the transfer function models (1.7) and (1.8), are

‖Σ‖2 = ( (1/2π) tr ( ∫_{−∞}^{∞} G^T(jω) G(jω) dω ) )^{1/2}      (4.3)

and

‖Σd‖2 = ( (1/2π) tr ( ∫_{−π}^{π} Gd^T(e^{jω}) Gd(e^{jω}) dω ) )^{1/2}      (4.4)

where G(jω) and Gd(e^{jω}) denote the frequency responses of Σ and Σd, respectively¹. The continuous-time case will be primarily considered herein, and only the main distinguishing differences will be added for the discrete-time case.

The standard linear quadratic regulator (LQR) problem is stated as follows: given a stabilizable LTI system whose state evolution is described by

ẋ(t) = A x(t) + B u(t),  x(0) = x0      (4.5)

determine a control function u(·) such that the corresponding state trajectory minimizes the performance index

J = ∫_0^∞ ( x(t)^T Q x(t) + u(t)^T R u(t) + 2 x(t)^T S u(t) ) dt      (4.6)

where Q, R and S are symmetric positive semidefinite matrices of suitable sizes. If R > 0 (positive definite) the problem is said to be regular and its solution is standard; if R ≥ 0 (positive semidefinite) the problem is said to be singular, while if R = 0 the problem is said to be cheap.

It is well known that the regular LQR problem is solved by a state feedback F, independent of the initial state x0. The layout of the solution is shown in Fig. 4.1.
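In the regular case this state feedback coincides with the classical Riccati-based solution, so the geometric procedure presented below can be cross-checked, for instance, against the Control System Toolbox routine lqr (a sketch; Q, R, S are the weights of (4.6)):

% Regular LQR (R > 0): u = -F*x minimizes (4.6).
[F, P] = lqr(A, B, Q, R, S);   % P is the stabilizing Riccati solution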

¹ Recall that G^T denotes herein the conjugate transpose of G if G is complex.


Fig. 4.1. The standard LQR problem.

The equivalence of the minimum H2-norm disturbance rejection problem and the classical Kalman regulator problem is a simple consequence of the expression of the H2-norm in terms of the impulse response of the triple (A,B,C). In fact, since Q, R and S are symmetric and nonnegative semidefinite, there exist matrices C and D such that

[ C  D ]^T [ C  D ] = [ Q  S ; S^T  R ]      (4.7)
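A factorization (4.7) can be obtained numerically from any full-rank factor of the composite weight matrix; the following sketch uses the illustrative names Cq and Dq and assumes [Q S; S' R] is symmetric positive semidefinite.

% One possible factorization (4.7): [Cq Dq]'*[Cq Dq] = [Q S; S' R].
M = [Q, S; S', R];
[U, Sig] = svd((M+M')/2);            % symmetric PSD: M = U*Sig*U'
r = rank(Sig);
N = sqrt(Sig(1:r,1:r)) * U(:,1:r)';  % N'*N = M
Cq = N(:, 1:size(Q,1));              % weight on the state
Dq = N(:, size(Q,1)+1:end);          % weight on the input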

Consider the two-input system (3.1) with also a possible feedthrough, i.e.,

ẋ(t) = A x(t) + B u(t) + H h(t)
y(t) = C x(t) + D u(t)      (4.8)

where C and D are defined in (4.7), and refer to Fig. 3.1. The system (4.8) is assumed to be left-invertible, not necessarily stabilizable, and with no zeros on the imaginary axis. It is easily seen that in our case the H2-norm is the square root of (4.6) for the system (A+BF, H, C) described by

ẋ(t) = (A + BF) x(t),  x(0) = H
y(t) = (C + DF) x(t)      (4.9)

where state and output are now matrices instead of vectors, so that it is minimized for any H by the Kalman feedback matrix F.

The LQR problem is solvable with the standard geometric tools. According to the classical optimal control approach, consider the Hamiltonian function

M(t) = x(t)^T Q x(t) + u(t)^T R u(t) + 2 x(t)^T S u(t) + p(t)^T ( A x(t) + B u(t) )

and set the state equation, the costate equation and the stationarity condition as

ẋ(t) = ∂M(t)/∂p(t),   ṗ(t) = −∂M(t)/∂x(t),   0 = ∂M(t)/∂u(t)

We derive the following Hamiltonian system

ẋ(t) = A x(t) + B u(t),  x(0) = hi
ṗ(t) = −2 Q x(t) − A^T p(t) − 2 S u(t)      (4.10)
0 = 2 S^T x(t) + B^T p(t) + 2 R u(t)


where hi denotes a generic column of H, or, in more compact form,

x̂̇(t) = Â x̂(t) + B̂ u(t) + Ĥ h(t)
0 = Ĉ x̂(t) + D̂ u(t)      (4.11)

with

x̂ = [ x ; p ],  Ĥ = [ H ; 0 ]

Â = [ A  0 ; −2Q  −A^T ],  B̂ = [ B ; −2S ],  Ĉ = [ 2S^T  B^T ],  D̂ = 2R
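For reference, the matrices of (4.11) can be assembled as follows (a sketch; Ah, Bh, Ch, Dh, Hh are illustrative names for the hatted matrices, and Q, R, S, H are those of (4.6) and (4.8)).

% Hamiltonian quadruple (4.11) of the continuous-time LQR problem.
n  = size(A,1);
Ah = [A, zeros(n); -2*Q, -A'];
Bh = [B; -2*S];
Hh = [H; zeros(n, size(H,2))];
Ch = [2*S', B'];
Dh = 2*R;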

The set of equations (4.11) can be considered as referring to an LTI dynamic system whose output is constrained to be zero. It follows that minimizing the H2 norm of system (4.9) is equivalent to the perfect decoupling problem for the quadruple (Â, B̂, Ĉ, D̂), which admits a solution if and only if there is an internally stable (Â, B̂)-controlled invariant V̂∗, output-nulling for the overall extended system, whose projection on the state space of the original system, defined as

P V̂∗ = { x : [ x ; p ] ∈ V̂∗ }      (4.12)

contains the image of the matrix initial state H. It can be proven that the internal unassignable eigenvalues of V̂∗ having nonzero real parts are stable-unstable by pairs. Hence a solution of the LQR problem is obtained as follows:

1. compute V̂∗;
2. compute a matrix F̂ such that (Â + B̂F̂) V̂∗ ⊆ V̂∗;
3. compute V̂s, the maximum internally stable (Â + B̂F̂)-invariant contained in V̂∗ (this is a standard eigenvalue-eigenvector problem) and define V∗H2 as P V̂s;
4. if H ∈ V∗H2, the problem admits a solution F, that is easily computable directly from V∗H2, which is an (A,B)-controlled invariant.

The above procedure provides a state feedback matrix F corresponding to the minimum H2 norm of the LTI system with input h and output y. This immediately follows from the previously recalled expression of the H2 norm in terms of the impulse response. In fact, the impulse response corresponds to the set of initial states defined by the column vectors of matrix H.
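Step 4 of the procedure is a simple containment test; assuming a basis matrix V of V∗H2 has been obtained (for instance with the routine vstargh2 described in Subsection 4.1.4), it can be sketched as

% The H2 decoupling problem for the disturbance matrix H is solvable
% if im H is contained in V*H2 (V is an assumed basis matrix of V*H2).
solvable = (rank([V, H]) == rank(V));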

Let us briefly consider the extension to the discrete-time case, corresponding to the two-input system

x(k+1) = A x(k) + B u(k) + H h(k)
y(k) = C x(k) + D u(k)      (4.13)


In this case the Hamiltonian function is

M(k) = x(k)^T Q x(k) + u(k)^T R u(k) + 2 x(k)^T S u(k) + p(k+1)^T ( A x(k) + B u(k) )

and the state equation, the costate equation and the stationarity condition are

x(k+1) = ∂M(k)/∂p(k+1),   p(k) = ∂M(k)/∂x(k),   0 = ∂M(k)/∂u(k)

i.e.,

x(k+1) = A x(k) + B u(k),  x(0) = hi
p(k) = 2 Q x(k) + A^T p(k+1) + 2 S u(k)
0 = 2 S^T x(k) + B^T p(k+1) + 2 R u(k)

Like in the continuous-time case, it is convenient to express this system in the compact form

x̂(k+1) = Â x̂(k) + B̂ u(k) + Ĥ h(k)
0 = Ĉ x̂(k) + D̂ u(k)      (4.14)

with

x̂ = [ x ; p ],  Ĥ = [ H ; 0 ]

Â = [ A  0 ; −2A^{−T} Q  −A^{−T} ],  B̂ = [ B ; −2A^{−T} S ]

Ĉ = [ −2B^T A^{−T} Q + 2S^T   B^T A^{−T} ],  D̂ = 2R − 2B^T A^{−T} S

The drawback due to the term A^{−T} when A is singular can be overcome by using a stabilizing state feedback, to be subtracted from the final state feedback solving the problem. The solution is obtained again with a geometric procedure but, unlike the continuous-time case, a dead-beat-like motion is also feasible and V∗H2 covers the whole state space of system (4.13) if (A,B) is stabilizable. Hence the problem of minimizing the H2 norm from h to y is always solvable in the discrete-time case. An interesting geometric interpretation of the dead-beat part of the solution in terms of S∗, which influences the state trajectory as shown in Fig. 2.16.a, is given in Marro 2002.


Fig. 4.2. Minimal H2 norm decoupling (Kalman regulator).

4.1.1 The Kalman regulator

The Kalman regulator is the minimum H2 norm extension of the exact disturbance decoupling problem with stability considered in Subsection 3.1.1.

Referring to Fig. 4.2, recall that the H2 norm of a system Σ : (A,B,C) is the mean power of the output signal when the input is white noise with zero mean and unitary variance. The problem of minimizing the H2 norm from h to y has a solution if and only if

H ⊆ V∗H2      (4.15)

Condition (4.15) replaces (3.3) and (3.6) for a minimal norm solution. It always holds in the regular case, since V∗H2 = R^n in this case, but also applies to the singular and cheap cases, where the dimension of V∗H2 is always reduced. V∗H2 is an internally stabilizable controlled invariant but, for synthesis purposes, it may be replaced with the minimum controlled invariant contained in it and containing H, obtained by using (3.4) with V∗H2 instead of C, to reduce the number of fixed modes.

4.1.2 The Kalman dual filter and the Kalman filter

Refer now to the problems considered in Subsection 3.1.2, measurable signal decoupling and unknown-input observation of a linear function of the state, and consider their minimum H2 norm extensions, which are represented by the block diagrams shown in Fig. 4.3.

The necessary and sufficient condition for the solution of the minimal H2 norm problem shown in Fig. 4.3.a is

H ⊆ V∗H2 + B      (4.16)

similar to condition (3.7) for the exact decoupling. It directly ensures stability and also holds in the singular and cheap cases. The block diagram in Fig. 4.3.b refers to the Kalman filter, which is here deduced by duality. The transpose of the matrix on the right of (4.7) represents the covariance matrix of a global white noise injected on the state and the output of Σ. The singular and cheap cases correspond to incomplete or absent measurement noise (noise injected at the output).


Fig. 4.3. Minimal H2 norm decoupling problems, primal and dual: a) measurable signal H2-optimal decoupling (Kalman dual filter); b) unknown-input H2-optimal state observation (Kalman filter).

4.1.3 Other H2-optimal control and filtering problems

The problems considered in Subsection 3.1.3 can also be revisited in the minimal H2 norm context. The corresponding layouts are shown in Fig. 4.4.

Fig. 4.4. Other problems solved in minimal H2 norm: a) previewed signal H2-optimal decoupling (Kalman dual smoother); b) unknown-input H2-optimal state observation with delay (Kalman smoother).

The solution of the minimum H2 norm decoupling of a previewed signal with a feedforward unit, as shown in Fig. 4.4.a, is related to the solution of the finite-horizon LQR problem, which has been the object of numerous contributions in the literature. It has been profitably revisited by Ferrante, Marro and Ntogramatzidis (2005), who introduced a significant computational shortcut for the stable and antistable invariant subspaces of the Hamiltonian system, thus making it possible to include also this problem in the geometric approach context. The solution is restricted to the regular case for continuous-time systems, but is rather general in the discrete-time case. Duality yields the optimal smoother shown in Fig. 4.4.b.

Fig. 4.5 refers to H2 optimal decoupling with dynamic output feedback. This is another problem already deeply investigated in the literature, but for which the geometric approach through the Hamiltonian system may give a significant insight, in particular into the features of the singular and cheap cases, and provide new solutions. It has been investigated with geometric tools in the regular continuous-time case by Saberi, Sannuti, Stoorvogel and Chen (1995) and in the book by Trentelman, Hautus and Stoorvogel (2001).


Fig. 4.5. H2 optimal decoupling with dynamic output feedback.

Fig. 4.6. Block diagram of the modfol application: regulator Σc, controlled system Σ, reduced-order observer Σo, and model Σm (possibly consisting of q SISO systems).

4.1.4 Matlab commands referring to Section 4.1

>> [V,F,X]=vstargh2(A,B,C,D); (or [V,F,X]=vstargh2(sys);) computes the maximum internally stabilizable controlled invariant minimizing the output ℓ2 norm of the continuous-time LTI system sys=ss(A,B,C,D). Matrix F is a friend of V and matrix X is used to compute the cost as shown below. The algorithm is based on a generalization of the geometric approach applied to the Hamiltonian system described in Subsection 4.1. The quadruple (A,B,C,D) is assumed to be left invertible and stabilizable. Let us refer to the Hamiltonian system (4.11) with Q=C^T C, R=D^T D, S=C^T D. By using vstarg, the routine computes matrices V and F and partitions them according to the state partition of (4.11), i.e., V = [ V1 ; P1 ] and F = [ F1  F2 ]. Let us assume V = V1. It is a basis matrix of the subspace of the admissible trajectories of system (4.11) relative to x, which are expressible as x(t) = V α(t), with α(t) satisfying the differential equation

α̇(t) = V^# (A + BF) V α(t),  α(0) = α0      (4.17)

where F is defined as F = F1 + F2 P1 V1^#, while X = V1^T P1 is the matrix of the cost, which is computable as c[0,∞) = α0^T X α0.

>> modfol; feedforward or feedback model following. System and model are *.mat files with matrices A,B,C,D and sampling time Tc (Tc=0 in the continuous-time case). Their names are requested in interactive mode. If the system is non-minimum phase, the tracking error is minimized in norm.

>> [C1,D1]=stoor(A,B,C,D); (or sys1=stoor(sys); with sys1=ss(A,B,C1,D1)) computes C1 and D1 such that the exact disturbance decoupling problem for (A,B,C1,D1) is equivalent to the minimum H2 norm decoupling for the original system (A,B,C,D). Left invertibility is requested. It also works in the discrete-time case with the calls [C1,D1]=stoor(A,B,C,D,1) or sys1=stoor(sys) with sys defined as a discrete-time LTI system; in this case left invertibility is not required.

Remark 4.1. Properties of application stoor

1. the exact disturbance decoupling problem with state feedback for any input disturbance matrix H such that im H ⊆ V∗1, where V∗1 is referred to Σ1, corresponds to a minimal H2-norm solution for Σ;
2. the number of outputs of Σ1 is equal to that of Σ;
3. Σ1 is minimum-phase;
4. Σ1 has the same global relative degree as Σ;
5. the steady-state gain of Σ1 is equal to that of Σ.


A Some basic Matlab commands

This is a primer for the most frequently used commands for m-files in Matlab, with specific emphasis on dynamic system representations. They refer to version 5.3 or subsequent. Use the command help program to obtain further information on the particular function program.

A.1 Vectors, matrices and polynomials

>> u=[1 4 2 7]; defines u as a row vector;

>> v=[1;4;2;7]; defines v as a column vector;

>> t=0:.1:10; defines t as a vector with equidistant elements (typically atime axis); if the middle value (step) is left out, it is assumed equal to one.

>> n=length(u); gives in n the length (number of elements) of vector u;

>> x=u(2); defines as x the second element of u.

>> x=u(2:4); defines as x the vector consisting of the elements of u fromthe second to the fourth;

>> x=u([1 3]); defines as x the vector consisting of the first and the thirdelement of u;

>> x=u(length(u):-1:1); defines as x the vector obtained from u by reversing the order of the elements;

>> n=norm(u); defines as n the Euclidean norm of u.

>> A=[1 2.5 -4; 5 7.8 9; 10 .1 3; 5 -4 0]; defines a generic matrixA by rows;

>> A=zeros(2,3); B=zeros(2); defines A as the 2×3 matrix with all elements zero, B as the 2×2 matrix with all elements zero;

>> A=ones(2,3); B=ones(2); defines A as the 2×3 matrix with all elements equal to one, B as the 2×2 matrix with all elements equal to one;


>> A=eye(3); defines A as the 3× 3 identity matrix;

>> A=rand(3,4); defines A as a 3× 4 matrix whose elements are randomreal numbers with uniform distribution in [0, 1];

>> A=randn(3,4); defines A as a 3× 4 matrix whose elements are randomreal numbers with Gaussian distribution and unit variance;

>> A=[]; defines A as an empty matrix;

>> [m,n]=size(A); gives m as the number of rows and n as the number ofcolumns of matrix A;

>> A1=A(1:3,2:3); defines A1 as the submatrix of A with the specifiedintervals of rows and columns;

>> A=[A1 A2; A3]; defines matrix A as the composition of the submatricesA1, A2, A3 with suitable dimensions;

>> u=A(2,:); defines vector u as the second row of matrix A;

>> v=A(:,1); defines vector v as the first column of matrix A;

>> B=A’; defines B as the transpose of matrix A (the conjugate transposeif A is complex);

>> B=inv(A); defines B as the inverse of matrix A;

>> B=pinv(A); defines B as the pseudo-inverse of matrix A;

>> p=eig(A); defines p as the column vector of the eigenvalues of matrix A;

>> [T,D]=eig(A); provides as T the matrix of the eigenvectors of matrix A and as D the diagonal matrix of the corresponding eigenvalues (recall that if the eigenvalues are multiple the eigenvectors may not be linearly independent, while if A is symmetric or Hermitian its eigenvalues are all real and its eigenvectors are linearly independent);

>> n=norm(A); defines as n the norm of matrix A (square root of thegreatest eigenvalue of A′ A, where A′ is the conjugate transpose of A);

>> p=poly(A); gives in the row vector p the coefficients of the characteristicpolynomial of matrix A, starting from that of the greatest power;

>> r=roots(p); gives in the column vector r the roots of the polynomialwhose coefficients are the elements of vector p;

>> p=poly(r); gives in the row vector p the coefficients of the polynomialwhose roots are the elements of vector r;

>> B=expm(A); defines B as the exponential of matrix A.

A.2 Interaction with the Command Window

>> a=’message’; defines a as the alpha-numeric string “message”;


>> disp(’message’) displays “message” in the Command Window;

>> fprintf(’message’) displays “message” without carriage return;

>> fprintf(’message\n’) displays “message” with carriage return;

>> fprintf(’\nmessage\n\n’) the same as above, but with one carriage return before and two after the display;

>> a=input(’a : ’); enables the definition from the keyboard of the value of a (constant, vector or matrix) while displaying “ a : ”;

>> fprintf(’ a = %.4g’,a) displays the value of a in a suitable format with the message “ a = ”;

>> fprintf(’\na = %.4g’,a) also displays the value of a, but performing a carriage return before.

A.3 Cell arrays

>> a=cell(3,2); defines a as the 3× 2 cell array of empty matrices, i.e.,

a = [] []

[] []

[] []

>> a{2,1}=b; inserts in the cell with location 2,1 of the array a the element b (vector, matrix or string)¹;

>> b=a{2,1}; takes out from the cell array a the element with location 2,1 (vector, matrix or string) and defines it as b.

A.4 Binary logic

>> p=a<b; sets p equal to one if a is less than b, equal to zero if not²;

>> p=a>b; p=a<=b; p=a>=b; p=a==b; p=a~=b; statements similar to the previous one, but referring to “greater”, “less or equal”, “greater or equal”, “equal”, “not equal”. They can also be used to compare matrices or vectors with equal dimensions element by element, and provide matrices or vectors of binary variables (zeros and ones);

>> p=a|b; gives in p the outcome of the operation OR of the binary variables a and b;

¹ Note the use of curly brackets.
² According to the rule followed in Matlab, in logical operations any number different from zero is interpreted as “true” and zero as “false”. The value corresponding to “true” as the outcome of a logical operation is one.


>> p=a&b; gives in p the outcome of the operation AND of the binary variables a and b;

>> p=~a; gives in p the value of NOT a (complement of a);

>> p=any(v); sets p equal to one if at least one of the elements of vector v is true (different from zero);

>> p=all(v); sets p equal to one if all the elements of vector v are true (different from zero);

>> i=find(v); gives in vector i the indices of the elements of the vector v of binary variables whose values are different from zero;

>> [m,i]=max(v); [m,i]=min(v); gives as m the value of the maximum(minimum) of vector v and as i that of the corresponding index.

A.5 Conditional execution of a block of commands

>> for k=1:n, ... , end repeats the block of commands up to “end” for k from 1 to n; it is possible to exit the loop with the command “break”, in general subject to an “if ... end”;

>> if m, ... , end executes the block of commands up to “end” if the binary variable m is nonzero;

>> while m, ... , end repeats the block of commands up to “end” while the binary variable m is nonzero; it is possible to exit the loop with the command “break”, in general subject to an “if ... end”.

A.6 Further commands

>> pause stops the program execution until the enter key is pressed;

>> shg, pause shows the current figure and stops the program executionuntil the enter key is pressed; such a command sequence is used to displaya figure during the execution of a program;

>> pause(3) stops the program execution for three seconds;

>> return quits the program (m-file or function) and goes back to theCommand Window or to the program from which it was called;

>> figure defines an (empty) new figure, that is automatically numberedin sequence;

>> figure(n) makes the figure n to be the current figure;

>> plot(t,y),shg,pause plots the function y versus the time vector t;

>> stairs(t,y),shg,pause plots the sequence y versus the time vector tas a piecewise constant function;


>> [t,y]=stairs(t,y); computes the previous piecewise constant functionwithout any plot; the plot can subsequently be obtained with the previouscommand plot(t,y);

>> subplot(m,n,1), ..., subplot(m,n,2), ... subdivides a figure inm×n (matrix) subplots (axes) that are subsequently entered by rows asshown;

>> title(’message’) gives the current figure the title “message”;

>> v=axis; defines the 4-element row vector v with the ends of the currentaxes;

>> axis(v) gives the current axes the ends defined in the row vector v;

>> grid grids the current figure;

>> hold on refers the next plots to the same axes as the last one;

>> hold off cancels “hold on”;

>> clear clears all the variables from the workspace of the Command Window or of a function;

>> clear a b M clears the variables a, b and M from the workspace of the Command Window or of a function;

>> save file a b M saves in the file file.mat the variables a, b and M;

>> load file loads the file file.mat, thus importing in the current workspace all the variables saved in it;

>> close all closes all the graphic windows (figures).

A.7 Linear Time-Invariant (LTI) systems

>> sys1=ss(A,B,C,D); defines sys1 as the continuous-time system whose state-space description is provided by the four matrices A,B,C,D;

>> sys1=ss(A,B,C,D,T); defines sys1 as the discrete-time system described by the matrices A,B,C,D with sampling time T. If knowledge of the sampling time is not necessary, just set T = −1;

>> sys1=ss([],[],[],D); or simply sys1=ss(D); defines sys1 as a purely algebraic system (A, B, C are empty matrices and there is no difference between continuous and discrete time);

>> [A,B,C,D]=ssdata(sys1); extracts the state-space matrices of sys1 ;

>> sys2=tf(sys1); defines sys2 as the transfer function form of sys1 ;

>> [num,den]=tfdata(sys2); provides the coefficients of the numerator and denominator polynomials of the transfer matrix of sys2 (in the multivariable case as cell arrays);

>> sys3=ss(sys2); defines sys3 as the state-space form of sys2;


>> sys4=zpk(sys1); defines sys4 as the pole-zero-gain form of sys1;

>> [z,p,k]=zpkdata(sys4); provides the vectors of zeros and poles andthe gain matrix of the elements of the transfer matrix of sys4 (in themultivariable case zeros and poles are provided as cell arrays, while thegain is a real matrix);

>> sys4=minreal(sys3); defines sys4 as a minimal realization of sys3 ;

>> sys2=c2d(sys1,T); defines sys2 as the discrete-time system obtainedfrom sys1 (continuous-time) according to the zero-order hold equivalenceand sampling time T ;

>> M=dcgain(sys1); defines M as the steady-state gain matrix of sys1 ;

>> impulse(sys1,tf),shg,pause plots the impulse response of sys1 in thetime interval [0, tf ] with automatic choice of the sampling instants; thiscan also be imposed by specifying, instead of the extreme tf , the wholetime scale 0 : dt : tf ;

>> step(sys1,tf),shg,pause plots the step response of sys1 with thesame convention as impulse about the time scale;

>> initial(sys1,x0,tf),shg,pause plots the free response of sys1 fromthe initial state x0 with the same convention as impulse about the timescale;

>> [y,t]=impulse(sys1,tf); provides the impulse response of sys1 without any plot; y is a three-dimensional matrix of the type y(v,h,k), where y(:,h,k) is the column vector of the values of yh (h-th component of the output) produced by an impulse at uk (k-th component of the input), while t is the column vector of the corresponding time instants;

>> [y,t]=step(sys1,tf); [y,t]=initial(sys1,x0,tf); provide the step response and the initial state response of sys1 without any plot;

>> lsim(sys1,u,t,[x0]),shg,pause plots the response of sys1 to the input signal u(t) given by samples (by rows), with the corresponding column vector of the time instants t, from the initial state zero or, if specified, x0; if sys1 is discrete-time, t may be omitted or defined as an empty matrix;

>> [y,t]=lsim(sys1,u,t,[x0]); provides the response of sys1 to theinput signal u(t) without any plot; output y is a three-dimensional vectorlike in impulse;

>> F=place(A,B,p); given a controllable pair (A,B), computes a statefeedback matrix F such that A−BF has the eigenvalues assigned in vectorp (real or complex conjugate by pairs);

>> n=norm(sys1); provides as n the H2-norm of system sys1 .


A.8 A few commands of the GA Toolbox

>> Q=ima(A,p); orthonormalization of the columns of matrix A (with p=1 a preliminary exchange of columns is performed, with p=0 it is not);

>> Q=ortco(A); orthogonal complement of a subspace;

>> Q=sums(A,B); sum of subspaces;

>> Q=ints(A,B); intersection of subspaces;

>> Q=invt(A,X); inverse image in A of the subspace imX;

>> Q=ker(A); kernel (null space) of the matrix A;

>> Q=mininv(A,X); minimal A-invariant containing imX;

>> Q=maxinv(A,X); maximal A-invariant contained in imX;

>> [P,Q]=stabi(A,X); matrices for the internal and external stability ofthe A-invariant imX.

A.9 M-files and functions

The above commands can be used in a text file that will herein be given the generic name routine.m. The extension “m” is mandatory: the name “m-file” comes from it. By entering routine from the Command Window, the commands in the file are executed in sequence. Execution continues until the command pause or input is found. The defined variables remain in the workspace of the Command Window, in the sense that they remain accessible from the keyboard.
If the statement function routine is introduced in the first row of the m-file, all the variables defined in it are no longer accessible from the Command Window, since the function has a separate workspace. It is possible to exchange variables between the Command Window and the function routine by stating in the first row of the file function [Y1,Y2,...]=routine(X1,X2,...), where the symbols represent variables used (the Xi) and defined (the Yi) in the function. The statement [B1,B2]=routine(A1,A2) (note that the use of different symbols is possible) enables the exchange of data from/to the Command Window, or any m-file or any other function, and the function routine.


A.10 Some system connections

Let us consider the systems Σ1 and Σ2 described by

ẋ1(t) = A1 x1(t) + B1 u1(t),      (A.1)
y1(t) = C1 x1(t) + D1 u1(t),      (A.2)
ẋ2(t) = A2 x2(t) + B2 u2(t),      (A.3)
y2(t) = C2 x2(t) + D2 u2(t).      (A.4)

First, refer to the cascade connection shown in Fig. A.1.

Fig. A.1. Cascade connection.

By considering in (A.1) the connection equation u1 = y2 and taking into account equation (A.4), it follows that

ẋ1(t) = A1 x1(t) + B1 C2 x2(t) + B1 D2 u2(t).

Performing the same substitution in (A.2) yields

y1(t) = C1 x1(t) + D1 C2 x2(t) + D1 D2 u2(t).

Since u = u2, y = y1, the following matrices are obtained for the overall system

A = [ A1  B1C2 ; O  A2 ],  B = [ B1D2 ; B2 ],
C = [ C1  D1C2 ],  D = D1D2.      (A.5)

The Matlab command that defines the overall system is >> sys=sys1*sys2; (the order of the factors is mandatory).
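The correspondence with (A.5) can be verified numerically; the following sketch compares the step responses of sys1*sys2 with those of the explicitly assembled state-space model (equality holds up to a change of state coordinates).

% Explicit assembly of the cascade (A.5) and comparison with sys1*sys2.
sysc = sys1*sys2;
Ac = [A1, B1*C2; zeros(size(A2,1), size(A1,2)), A2];
Bc = [B1*D2; B2];
Cc = [C1, D1*C2];
Dc = D1*D2;
t  = 0:0.01:5;
e  = step(sysc,t) - step(ss(Ac,Bc,Cc,Dc),t);   % should be (numerically) zero
max(abs(e(:)))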

For the parallel connection shown in Fig. A.2, by substituting equations (A.2) and (A.4) in y = y1 + y2 and taking into account the identity u1 = u2 = u, it follows that

y(t) = C1 x1(t) + C2 x2(t) + (D1 + D2) u(t).

Hence the overall system matrices are

A = [ A1  O ; O  A2 ],  B = [ B1 ; B2 ],
C = [ C1  C2 ],  D = D1 + D2.      (A.6)


Fig. A.2. Parallel connection.

The Matlab command that defines this system is>> sys=sys1+sys2;

Lastly, let us consider the feedback connection shown in Fig. A.3.

Fig. A.3. Feedback connection.

Substituting the relation u1 = u − y2 in (A.2) and taking into account (A.4) yields

y1(t) = C1 x1(t) + D1 ( u(t) − C2 x2(t) − D2 y1(t) ),

i.e.,

y1(t) = (I + D1D2)^{-1} ( C1 x1(t) − D1C2 x2(t) + D1 u(t) ).

Likewise, substituting the relation u2 = y1 in (A.4) and taking into account (A.2) yields

y2(t) = C2 x2(t) + D2 ( C1 x1(t) + D1 (u(t) − y2(t)) ),

i.e.,

y2(t) = (I + D2D1)^{-1} ( C2 x2(t) + D2C1 x1(t) + D2D1 u(t) ).

By performing the similar substitutions of u1 and u2 in (A.1) and (A.3), it follows that

ẋ1(t) = A1 x1(t) + B1 ( u(t) − (I + D2D1)^{-1} ( C2 x2(t) + D2C1 x1(t) + D2D1 u(t) ) ),
ẋ2(t) = A2 x2(t) + B2 (I + D1D2)^{-1} ( C1 x1(t) − D1C2 x2(t) + D1 u(t) ),


and, taking into account the relation y = y1, for the overall system the following matrices are derived

A = [ A1 − B1(I + D2D1)^{-1}D2C1   −B1(I + D2D1)^{-1}C2 ;
      B2(I + D1D2)^{-1}C1          A2 − B2(I + D1D2)^{-1}D1C2 ],

B = [ B1 − B1(I + D2D1)^{-1}D2D1 ;
      B2(I + D1D2)^{-1}D1 ],

C = [ (I + D1D2)^{-1}C1   −(I + D1D2)^{-1}D1C2 ],

D = (I + D1D2)^{-1}D1.      (A.7)

The Matlab command that defines this overall system is >> sys=feedback(sys1,sys2);
It is necessary that the matrices (I + D1D2), (I + D2D1) (which have the same determinant) are nonsingular. This happens, in particular, if at least one of the two systems is purely dynamic.


References

1. B. D. O. Anderson, “Output-nulling invariant and controllability subspaces,” in Proceedings of the 6th IFAC Congress, Boston, 1975, paper 43.6.
2. M. L. J. Hautus and L. M. Silverman, “System structure and singular control,” Linear Algebra Appl., vol. 50, pp. 369–402, 1983.
3. H. L. Trentelman, A. A. Stoorvogel, and M. Hautus, Control Theory for Linear Systems, Communications and Control Engineering, Springer, Great Britain, 2001.
4. W. M. Wonham, Linear Multivariable Control: A Geometric Approach, Springer-Verlag, New York, 3rd edition, 1985.
5. J. W. Grizzle and M. H. Shor, “Sampling, infinite zeros and decoupling of linear systems,” Automatica, vol. 24, no. 3, pp. 387–396, 1988.


Index

algorithm for maxJ(A, C), 19
algorithm for minJ(A, B), 19

basic operations on subspaces, 13

completely controllable pair, 21
completely observable pair, 21
computation of S∗ and a friend for a quadruple, 36
computation of V∗ and a friend for a quadruple, 35
computation of the invariant zeros, 41
conditioned invariant externally stabilizable, 33
conditioned invariant internally stabilizable, 34
conditioned invariant subspace, 29
controllability, 20
controlled invariant externally stabilizable, 33
controlled invariant internally stabilizable, 33
controlled invariant subspace, 28
conversion from continuous to discrete, 6

detectable pair, 21
Dirac impulse, 4
dual observer, 25
dual system, 10

exosystem, 4

feedthrough matrix, 3

finite delay, 9
finite impulse response system, 9
FIR system, 9
friend F (computation of), 31
friend of a controlled invariant, 28

Grassmann’s manifold, 15
Grassmann’s rule, 15

impulse response, 4
invariant complementable, 18
invariant externally stable, 17
invariant internally stable, 17
invariant subspace, 16
invariant zero, 41
invariant zero structure, 41
invariant zeros, 41

Kalman canonical decomposition, 23

lattices of invariants, 19
LTI system, 3

matrix exponential, 3
MIMO system, 3
minimal realization, 24
modular rule, 14

observability, 20
observer, 25
output injection, 24

pole assignment, 24
pole assignment with an observer, 27


96 Index

quadruple, 3

quadruples, 34

quotient space, 18

reachable subspace RV , 32

relative degree, 38

relative degree delay, 39

relative degree filter, 39

routine effesta, 44

routine effe, 36

routine extendf, 60

routine gazero, 43

routine hud, 60

routine ima, 15

routine ints, 15

routine invt, 15

routine ker, 16

routine mainco, 36

routine maxinv, 27

routine miinco, 36

routine mininv, 27

routine modfol, 81

routine ortco, 15

routine place, 27

routine regtr, 68

routine reldeg, 45

routine rhomin, 64

routine rvstar, 37

routine sstar, 37

routine stabi, 27
routine stoor, 81
routine subsplit, 27
routine sums, 15
routine vstargh2, 80
routine vstarg, 45
routine vstar, 36
routine zbasis, 45

sampling time, 6
self-bounded controlled invariant, 49
separation theorem, 27
SISO system, 3
stabilizable pair, 21
state feedback, 24
state space models, 3
subspace maxV(A,B,C), 29
subspace minS(A,C,B), 30
subspace of the stable modes, 18
Sylvester equation, 18
system autonomous, 4
system internally asymptotically stable, 4
system left-invertible, 38
system minimum phase, 42
system non-purely dynamic, 3
system right-invertible, 38
transfer function models, 4
triple, 3