Stabilization of Markovian jump linear system over networks with random communication delay



Automatica 45 (2009) 416–421


Brief paper

Stabilization of Markovian jump linear system over networks with random communication delay

Ming Liu a,*, Daniel W.C. Ho a, Yugang Niu b
a Department of Mathematics, City University of Hong Kong, Hong Kong, China
b School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China

Article info

Article history: Received 19 December 2007; received in revised form 20 March 2008; accepted 12 June 2008; available online 13 December 2008.

Keywords: Markovian parameters; Stabilization; Networked control systems; Network-induced delays

Abstract

This paper is concerned with the stabilization problem for a networked control system with Markovian characterization. We consider the case that random communication delays exist both in the system state and in the mode signal, which are modeled as a Markov chain. The resulting closed-loop system is modeled as a Markovian jump linear system with two jumping parameters, and a necessary and sufficient condition on the existence of stabilizing controllers is established. An iterative linear matrix inequality (LMI) approach is employed to calculate a mode-dependent solution. Finally, a numerical example is given to illustrate the effectiveness of the proposed design method.

© 2008 Elsevier Ltd. All rights reserved.

1. Introduction

The Markovian jump linear system (MJLS) is a linear system with randomly jumping parameters, where the jumps are modeled by the transitions of a Markov chain. Over the past decades, increasing effort has been devoted to discrete Markovian jump linear systems (DMJLSs) with time delay, and some important results have been reported in the existing literature (Boukas and Liu (2001), Cao and Lam (1999), Chen, Guan, and Yu (2004), Ji, Chizeck, Feng, and Loparo (1991), Niu, Ho, and Wang (2007), Shi, Boukas, and Agarwal (1999) and Xiong, Lam, Gao, and Ho (2005) and references therein), regarding applications, stability conditions, and stabilization problems. For example, the stochastic stabilization problem for DMJLSs with state delays was investigated by Cao and Lam (1999) and Shi et al. (1999), where the results are delay-independent. Delay-dependent results were developed in Boukas and Liu (2001) and Chen et al. (2004) as well. Besides, the problem of sliding mode control (SMC) for stochastic systems with Markovian switching was studied in Niu et al. (2007). It is worth pointing out that the

This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor George Yin under the direction of Editor Ian R. Petersen. This work was supported by CityU SRG 7002208, the China National Natural Science Foundation 60674015, and the Shanghai Leading Academic Discipline Project (B504).
* Corresponding author. Tel.: +852 2788 8652; fax: +852 2788 8561.
E-mail addresses: [email protected] (M. Liu), [email protected] (D.W.C. Ho), [email protected] (Y. Niu).


main control category used in Boukas and Liu (2001), Cao and Lam (1999), Chen et al. (2004), and Shi et al. (1999) is designing a control law, according to the current system mode and current system state, such that the unstable plant is stabilized without the delayed terms and remains stable in the presence of the delayed terms.

As is well known, in most practical systems the original plant, controller, sensor and actuator are difficult to locate at the same place, and thus signals are required to be transmitted from one place to another. In modern industrial systems, these components are often connected over networks, giving rise to the so-called networked control systems (NCSs). NCSs have many advantages, such as low cost, reduced weight and power requirements, simple installation and maintenance, and high reliability. Thus, more and more attention has been paid to the stability and stabilization of NCSs recently (Azimi-Sadjadi (2003), Xiao and Hassibi (2000), Xiong and Lam (2007) and Zhang, Shi, and Chen (2005) and references therein). It should be pointed out, however, that most of the results in the existing literature are focused on NCSs where the plant is a deterministic system. To the best of the authors' knowledge, the stability and stabilization problems for NCSs with the plant being a stochastic system have not been fully investigated to date. Especially for the case where the plant is a Markovian jump linear system, very few results related to NCSs have been available in the literature so far, which motivates the present study.

It is worth mentioning that the stochastic stabilization problem for DMJLSs with delayed input has been studied (Xiong & Lam, 2006). The main contribution of Xiong and Lam (2006) is modeling


Fig. 1. Structure of a networked control system with communication delays.

the resulting closed-loop system as a new Markovian jump linear system with extended state space. In Xiong and Lam (2006), it is assumed that a constant time delay exists in the mode signal and a time-varying delay exists in the system state. In this paper, we consider a more realistic situation as shown in Fig. 1, where the plant and controller are connected through a network, and random communication delays exist both in the system state and in the mode signal. Such a situation covers more general cases in practical NCSs. To the authors' best knowledge, this problem for NCSs has not been investigated in the existing literature, and unfortunately, the model setting and control category developed by Xiong and Lam (2006) cannot be directly applied to our case, where the time delay in the mode signal is now randomly varying. New control techniques are needed to design a control law, based upon past system information, that stabilizes an unstable plant. This is an important and challenging research topic, which motivates our current research. Following the work of Xiong and Lam (2006), in this paper we will present a new method to overcome this difficulty.

In this paper, we study the stabilization of NCSs with Markovian characterization. The random communication delay is modeled as a Markov chain, and the resulting closed-loop system is modeled as a Markovian jump linear system with two jumping parameters. A stochastic stability criterion and the corresponding controller design technique are given in the form of LMIs. It should be pointed out that the method presented in this paper is not a trivial extension, but a new one with a wider scope of application.

2. Problem formulation

Consider the networked control setup in Fig. 1, where the plant is a discrete-time Markovian jump linear system defined on a complete probability space (Ω, F, P):

x(k + 1) = A(θ(k))x(k) + B(θ(k))u(k),   (1)

where k ∈ Z_+, x(k) ∈ R^n is the system state and u(k) ∈ R^m is the control input; {θ(k), k ∈ Z_+} denotes the system mode, which is a time-homogeneous Markovian process with right continuous trajectories. It is assumed that {θ(k), k ∈ Z_+} takes values in the finite set 𝒮 ≜ {1, 2, . . . , η} with transition probability matrix Π_1 ≜ [π_{ij}], where π_{ij} ≜ Pr(θ_{k+1} = j | θ_k = i) ≥ 0 for all i, j ∈ 𝒮 and k ∈ Z_+, and ∑_{j=1}^{η} π_{ij} = 1 for each i ∈ 𝒮. The matrices A_i ≜ A(θ(k) = i), B_i ≜ B(θ(k) = i), i ∈ 𝒮, are constant matrices of appropriate dimensions.

As shown in Fig. 1, the system state x(k) and the system mode θ(k) are transmitted over the network. In this paper, for convenience of analysis, it is assumed that the random communication delay occurs only on the sensor-to-controller (S/C) side. Another kind of time delay, which occurs on the controller-to-actuator (C/A) side, is not considered here. Besides, the effect of data packet dropout is not taken into account. Considering the effect of the random communication delay τ_k, the mode-dependent state-feedback control law is described by

u(k) = K(θ(k − τ_k))x(k − τ_k).   (2)

For simplicity, we denote θ_k ≜ θ(k). Applying controller (2) to the open-loop system (1), we obtain the following closed-loop system

x(k + 1) = A(θ_k)x(k) + B(θ_k)K(θ(k − τ_k))x(k − τ_k).   (3)

Remark 1. System (3) is no longer a traditional Markovian jump system with respect to θ_k because of the delayed mode signal θ(k − τ_k). On the other hand, it is worth mentioning that the following system was studied in Xiong and Lam (2006)

x(k + 1) = A(θ_k)x(k) + B(θ_k)K(θ(k − τ_r))x(k − τ_x(k))   (4)

where τ_r is a constant time delay and τ_x(k) is a time-varying delay. In Xiong and Lam (2006), system (4) was modeled as a new Markovian jump system with extended state space. However, the model setting and control techniques proposed therein cannot be directly applied to system (3), because the time delay τ_k in K(θ(k − τ_k)) of system (3) is randomly varying. In the following discussion, we will present a new model setting instead of system (3) to overcome this difficulty.

In real communication systems, the current time delay is usually related to the previous time delays. Therefore, it is reasonable to model the random time delay τ_k (k ∈ Z_+) as a homogeneous Markov chain which takes values in ℵ ≜ {1, 2, . . . , d}, where d ∈ Z_+ is constant, and whose transition probability matrix is denoted as Π_2 ≜ [λ_{lh}]. That is, τ_k jumps from mode l to mode h with probability

λ_{lh} = Pr(τ_{k+1} = h | τ_k = l),   l, h ∈ ℵ,   (5)

where λ_{lh} > 0 and ∑_{h=1}^{d} λ_{lh} = 1. For simplicity, denote θ_k = θ(k) and θ_{k−τ_k} = θ(k − τ_k). We define the vector-valued random variable

θ̄(k) ≜ [θ_k  θ_{k−1}  ···  θ_{k−τ_k}  ···  θ_{k−d}]^T,

and define the following matrix functions:

A(θ̄_k) ≜ A(θ_k),   B(θ̄_k) ≜ B(θ_k),
K̄(θ̄_k) ≜ [K^T(θ_k)  ···  K^T(θ_{k−τ_k})  ···  K^T(θ_{k−d})]^T.   (6)

Moreover, we define the matrices R_1(τ_k) and R_2(τ_k) as follows:

R_1(τ_k) ≜ [0_{m×m}  ···  0_{m×m}  I_{m×m}  0_{m×m}  ···  0_{m×m}] ∈ R^{m×(d+1)m},
R_2(τ_k) ≜ [0_{n×n}  ···  0_{n×n}  I_{n×n}  0_{n×n}  ···  0_{n×n}] ∈ R^{n×(d+1)n},   (7)

where R_1(τ_k) and R_2(τ_k) have all blocks equal to zero except for the (τ_k + 1)-th block, which is the identity. From (6) and (7) we have

K(θ(k − τ_k)) = [0_{m×m}  ···  0_{m×m}  I_{m×m}  0_{m×m}  ···  0_{m×m}] × [K^T(θ_k)  ···  K^T(θ_{k−τ_k})  ···  K^T(θ_{k−d})]^T = R_1(τ_k)K̄(θ̄_k).

Thus, the control law (2) can be described by

u(k) = R_1(τ_k)K̄(θ̄_k)x(k − τ_k).   (8)

At sampling time k, we augment the state variable as ξ(k) = [x^T(k)  ···  x^T(k − τ_k)  ···  x^T(k − d)]^T. Then, we have

x(k − τ_k) = [0_{n×n}  ···  0_{n×n}  I_{n×n}  0_{n×n}  ···  0_{n×n}] × [x^T(k)  ···  x^T(k − τ_k)  ···  x^T(k − d)]^T = R_2(τ_k)ξ(k).   (9)


By substituting (8) and (9) into (3), we obtain the following closed-loop system

ξ(k + 1) = [Ā(θ̄_k) + B̄(θ̄_k)R_1(τ_k)K̄(θ̄_k)R_2(τ_k)]ξ(k),   (10)

with

Ā(θ̄_k) = [A(θ_k) 0 ··· 0 0; I 0 ··· 0 0; 0 I ··· 0 0; ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ··· I 0],   B̄(θ̄_k) = [B(θ_k); 0; ⋮; 0].
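To make the augmentation in (6)–(10) concrete, the following Python sketch (illustrative only, not from the original paper; it assumes numpy, and the plant data A, B, K are placeholder dictionaries indexed by the mode i ∈ 𝒮) builds the selector matrices of (7) and the closed-loop matrix of (10) for a given mode tuple θ̄_k and delay τ_k = l.

import numpy as np

def selector(l, d, size):
    # R(tau_k = l) of (7): a (size) x ((d+1)*size) block row, all zero except an identity in the (l+1)-th block.
    R = np.zeros((size, (d + 1) * size))
    R[:, l * size:(l + 1) * size] = np.eye(size)
    return R

def closed_loop_matrix(theta_bar, l, A, B, K, d):
    # Closed-loop matrix of (10) for the tuple theta_bar = (theta_k, theta_{k-1}, ..., theta_{k-d}) and delay tau_k = l.
    n = A[theta_bar[0]].shape[0]
    m = B[theta_bar[0]].shape[1]
    # Abar: first block row A(theta_k), then a sub-diagonal of identities (the shift structure in (10)).
    Abar = np.zeros(((d + 1) * n, (d + 1) * n))
    Abar[:n, :n] = A[theta_bar[0]]
    for r in range(1, d + 1):
        Abar[r * n:(r + 1) * n, (r - 1) * n:r * n] = np.eye(n)
    # Bbar: B(theta_k) stacked on zero blocks.
    Bbar = np.zeros(((d + 1) * n, m))
    Bbar[:n, :] = B[theta_bar[0]]
    # Kbar of (6): the gains of all d+1 stored modes stacked; R1 picks out K(theta_{k-tau_k}).
    Kbar = np.vstack([K[i] for i in theta_bar])
    R1 = selector(l, d, m)
    R2 = selector(l, d, n)
    return Abar + Bbar @ (R1 @ Kbar) @ R2

For instance, with d = 2 and θ̄_k = (1, 2, 1), closed_loop_matrix((1, 2, 1), 1, A, B, K, 2) returns the 3n × 3n matrix that multiplies ξ(k) in (10).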

Definition 1. The closed-loop system (10) is said to be stochastically stable if E(∑_{k=0}^{∞} ‖ξ(k)‖² | ξ_0, θ_0, τ_0) < ∞ for every initial condition ξ_0 ≜ ξ(0) ∈ R^{(d+1)n}, θ_0 ≜ θ(0) ∈ 𝒮, and τ_0 ∈ ℵ.

3. Main results

We first introduce the following lemma, which is useful for the development of our work; its proof can be found in Xiong and Lam (2006).

Lemma 1 (Xiong & Lam, 2006). Given d ∈ N, define the two sets 𝒮^{d+1} ≜ 𝒮 × 𝒮 × ··· × 𝒮 ((d + 1) factors) and 𝒮̄_{d+1} ≜ {1, 2, . . . , η^{d+1}}, and introduce the mapping ψ : 𝒮^{d+1} → 𝒮̄_{d+1} with

ψ(χ) = i + (i_{−1} − 1)η + ··· + (i_{−d+1} − 1)η^{d−1} + (i_{−d} − 1)η^d   (11)

where χ = [i  i_{−1}  i_{−2}  ···  i_{−d}]^T ∈ 𝒮^{d+1} and i, i_{−1}, . . . , i_{−d} ∈ 𝒮. Then, the mapping ψ(·) is a bijection from 𝒮^{d+1} to 𝒮̄_{d+1}.
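As an illustration of Lemma 1 (not part of the original paper), the following Python sketch computes ψ(χ) of (11) together with its inverse; with η = 2 and d = 2 it reproduces the eight indices used in the numerical example of Section 4.

def psi(chi, eta):
    # psi of (11): chi = (i, i_{-1}, ..., i_{-d}) with entries in {1, ..., eta} maps to an index in {1, ..., eta**(d+1)}.
    return chi[0] + sum((chi[r] - 1) * eta ** r for r in range(1, len(chi)))

def psi_inverse(v, eta, d):
    # Inverse of psi: recover the tuple (i, i_{-1}, ..., i_{-d}) from an index v in {1, ..., eta**(d+1)}.
    v0, chi = v - 1, []
    for _ in range(d + 1):
        chi.append(v0 % eta + 1)
        v0 //= eta
    return tuple(chi)

# With eta = 2 and d = 2: psi((1, 1, 1), 2) = 1, psi((2, 2, 2), 2) = 8, and
# psi_inverse(psi(chi, 2), 2, 2) == chi for every chi in S^3, which is the bijection claimed in Lemma 1.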

Theorem 1. Closed-loop system (10) is a delay-free Markovian jump linear system possessing dη^{d+1} operation modes, with two Markovian jump parameters θ̄_k and τ_k.

Proof. The vector-valued stochastic process {θ̄_k, k ∈ Z_+} is a discrete-time vector-valued Markov chain. Besides, note that {τ_k, k ∈ Z_+} is a Markov chain independent of {θ_k, k ∈ Z_+}. Hence, it can be concluded that system (10) possesses two Markovian jumping parameters, τ_k and the vector-valued θ̄_k. Consequently, system (10) is a Markovian jump linear system with two jumping parameters. Let ψ(θ̄_k) = v and τ_k = l. By Lemma 1, v ∈ 𝒮̄_{d+1} can be employed to represent the system mode. At time k, we say that system (10) is in mode (v, l). Since v ∈ 𝒮̄_{d+1} and l ∈ ℵ, system (10) possesses dη^{d+1} modes. This completes the proof.

The transition probability matrix of the Markovian jump linear system (10) is denoted as Ξ = [p_{(v,l)(µ,h)}]. In the following discussion, based on Theorem 1, we construct Ξ from the matrices Π_1 and Π_2.

First, we derive the transition probability Pr(ψ(θ̄_{k+1}) = µ | ψ(θ̄_k) = v) by a similar idea to that presented in Xiong and Lam (2006), as follows. Since ψ(·) is a bijection, for any two modes v, µ ∈ 𝒮̄_{d+1}, we can uniquely obtain two vectors v̄, µ̄ ∈ 𝒮^{d+1} such that

v̄ ≜ ψ^{−1}(v) = [i, i_{−1}, . . . , i_{−d}]^T,
µ̄ ≜ ψ^{−1}(µ) = [j, j_{−1}, . . . , j_{−d}]^T.   (12)

The connection between the vectors θ̄_{k+1} and θ̄_k is illustrated by the sequence θ_{k+1}, θ_k, . . . , θ_{k−d+1}, θ_{k−d}: the vector θ̄_{k+1} collects the first d + 1 entries and θ̄_k collects the last d + 1 entries, so the two vectors share the d entries θ_k, . . . , θ_{k−d+1}.

Letting θ̄_k = v̄ and θ̄_{k+1} = µ̄, it can be seen that the transition probability Pr(θ̄_{k+1} = µ̄ | θ̄_k = v̄) is nonzero provided that the following conditions hold: θ_k = i = j_{−1}, θ_{k−1} = i_{−1} = j_{−2}, . . . , θ_{k−d+1} = i_{−d+1} = j_{−d}. Then, we have

Pr(ψ(θ̄_{k+1}) = µ | ψ(θ̄_k) = v) = Pr(θ̄_{k+1} = µ̄ | θ̄_k = v̄)
= Pr(θ_{k+1} = j, θ_k = j_{−1}, . . . , θ_{k−d+1} = j_{−d} | θ_k = i, θ_{k−1} = i_{−1}, . . . , θ_{k−d} = i_{−d})
= π_{ij} δ(i, j_{−1}) δ(i_{−1}, j_{−2}) ··· δ(i_{−d+1}, j_{−d}),   (13)

where δ(·, ·) denotes the Kronecker delta.

Next, we construct Ξ = [p_{(v,l)(µ,h)}]. It follows from system (10) and Theorem 1 that

p_{(v,l)(µ,h)} = Pr(ψ(θ̄_{k+1}) = µ, τ_{k+1} = h | ψ(θ̄_k) = v, τ_k = l)
= Pr(θ̄_{k+1} = µ̄, τ_{k+1} = h | θ̄_k = v̄, τ_k = l).   (14)

Note that the stochastic processes {θ_k, k ∈ Z_+} and {τ_k, k ∈ Z_+} are independent. Thus, formula (14) can be written as

p_{(v,l)(µ,h)} = Pr(θ̄_{k+1} = µ̄ | θ̄_k = v̄) Pr(τ_{k+1} = h | τ_k = l).   (15)

Substituting (13) into (15), and noting that Pr(τ_{k+1} = h | τ_k = l) = λ_{lh}, one can obtain

p_{(v,l)(µ,h)} = π_{ij} λ_{lh} δ(i, j_{−1}) δ(i_{−1}, j_{−2}) ··· δ(i_{−d+1}, j_{−d}).   (16)
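As a computational illustration of (13)–(16) (not from the original paper; numpy is assumed and the ordering of the joint modes is one possible choice), the following Python sketch assembles the transition probability matrix Ξ = [p_{(v,l)(µ,h)}] of the augmented chain from Π_1 and Π_2; by construction its rows sum to one.

import numpy as np
from itertools import product

def build_Xi(Pi1, Pi2, d):
    # Transition matrix of the joint process (psi(theta_bar_k), tau_k) following (13)-(16).
    # Rows and columns are indexed by (psi(v_bar) - 1) * d + (l - 1), with v_bar in S^{d+1} and l in {1, ..., d}.
    eta = Pi1.shape[0]
    tuples = list(product(range(1, eta + 1), repeat=d + 1))  # v_bar = (i, i_{-1}, ..., i_{-d})
    def psi(chi):  # the bijection (11)
        return chi[0] + sum((chi[r] - 1) * eta ** r for r in range(1, d + 1))
    Xi = np.zeros((eta ** (d + 1) * d, eta ** (d + 1) * d))
    for vbar in tuples:
        for mubar in tuples:
            # by (13), the theta-part is nonzero only if mubar = (j, i, i_{-1}, ..., i_{-d+1}), i.e. a one-step shift of vbar
            p_theta = Pi1[vbar[0] - 1, mubar[0] - 1] if mubar[1:] == vbar[:-1] else 0.0
            for l in range(1, d + 1):
                for h in range(1, d + 1):
                    row = (psi(vbar) - 1) * d + (l - 1)
                    col = (psi(mubar) - 1) * d + (h - 1)
                    Xi[row, col] = p_theta * Pi2[l - 1, h - 1]   # (16)
    return Xi

# With the data of Section 4 (eta = d = 2), build_Xi returns a 16 x 16 stochastic matrix:
# np.allclose(build_Xi(Pi1, Pi2, 2).sum(axis=1), 1.0) evaluates to True.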

Based upon Theorem 1, we are now in a position to provide anecessary and sufficient condition to verify the stochastic stabilityof system (10).

Theorem 2. Closed-loop system (10) is stochastically stable if and only if there exist matrices P(v, l) ∈ S+ and K_{i_{−l}} of appropriate dimensions such that the inequality

(A_v + B_v K_{i_{−l}} R_2(l))^T P̄(i, l) (A_v + B_v K_{i_{−l}} R_2(l)) − P(v, l) < 0   (17)

holds for all i ∈ 𝒮, v ∈ 𝒮̄_{d+1}, l ∈ ℵ, where P̄(i, l) = ∑_{h=1}^{d} ∑_{j=1}^{η} λ_{lh} π_{ij} P(β + j, h); A_v, B_v, K_{i_{−l}}, R_1(l) and R_2(l) are defined as in (18), and β is defined as in (19) below.

Proof. Sufficiency: For system (10), we construct the following Lyapunov function

V(ξ_k, θ̄_k, τ_k, k) = ξ_k^T P(ψ(θ̄_k), τ_k) ξ_k.

At time k, let the system mode be (v, l), that is, τ_k = l and v = ψ(θ̄_k) = i + (i_{−1} − 1)η + ··· + (i_{−d+1} − 1)η^{d−1} + (i_{−d} − 1)η^d, so that P(ψ(θ̄_k), τ_k) = P(v, l). Then, at time k, the system matrices are

R_1(τ_k)K̄(θ̄(k)) = [0  ···  0  I_{m×m}  0  ···  0][K_i^T  ···  K_{i_{−l}}^T  ···  K_{i_{−d}}^T]^T = K_{i_{−l}},

Ā(θ̄(k)) = A_v = [A_i 0 ··· 0 0; I 0 ··· 0 0; 0 I ··· 0 0; ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ··· I 0],

B̄(θ̄(k)) = B_v = [B_i^T  0^T  ···  0^T]^T,   R_2(τ_k) = R_2(l).   (18)

At time k + 1, system (10) may jump to any mode (µ, h), where µ ∈ 𝒮̄_{d+1} and h ∈ ℵ, that is, τ_{k+1} = h and µ = ψ(θ̄_{k+1}) = j + (j_{−1} − 1)η + ··· + (j_{−d+1} − 1)η^{d−1} + (j_{−d} − 1)η^d. From (16), we know that p_{(v,l)(µ,h)} = λ_{lh} π_{ij} δ(i, j_{−1}) δ(i_{−1}, j_{−2}) ··· δ(i_{−d+1}, j_{−d}). Hence, it can be concluded that p_{(v,l)(µ,h)} = λ_{lh} π_{ij} if µ = β + j, where

β = (i − 1)η + (i_{−1} − 1)η² + ··· + (i_{−d+2} − 1)η^{d−1} + (i_{−d+1} − 1)η^d,   (19)

and otherwise p_{(v,l)(µ,h)} = 0. As a result, we have

∑_{h=1}^{d} ∑_{µ=1}^{η^{d+1}} p_{(v,l)(µ,h)} P(µ, h) = ∑_{h=1}^{d} ∑_{j=1}^{η} λ_{lh} π_{ij} P(β + j, h).   (20)

In light of (20), it follows from (10) that

E(V(ξ_{k+1}, θ̄_{k+1}, τ_{k+1}, k + 1) | ξ_k, θ̄_k, τ_k, k) − V(ξ_k, θ̄_k, τ_k, k)
= ξ_k^T [(A_v + B_v K_{i_{−l}} R_2(l))^T (∑_{h=1}^{d} ∑_{j=1}^{η} λ_{lh} π_{ij} P(β + j, h)) (A_v + B_v K_{i_{−l}} R_2(l)) − P(v, l)] ξ_k
= ξ_k^T S_i(v, l) ξ_k   (21)

where S_i(v, l) = (A_v + B_v K_{i_{−l}} R_2(l))^T P̄(i, l) (A_v + B_v K_{i_{−l}} R_2(l)) − P(v, l). In fact, (17) implies S_i(v, l) < 0. Hence, formula (21) results in

E(V(ξ_{k+1}, θ̄_{k+1}, τ_{k+1}, k + 1) | ξ_k, θ̄_k, τ_k, k) − V(ξ_k, θ̄_k, τ_k, k) ≤ −λ_min(−S_i(v, l)) ξ_k^T ξ_k ≤ −α‖ξ_k‖²,   (22)

where α = inf{λ_min(−S_i(v, l)) : i ∈ 𝒮, v ∈ 𝒮̄_{d+1}, l ∈ ℵ} > 0 and λ_min(−S_i(v, l)) denotes the minimum eigenvalue of −S_i(v, l). Starting from the initial condition (ξ_0, θ̄_0, τ_0), it can be shown from (22) that for any n ≥ 1,

E(V(ξ_{n+1}, θ̄_{n+1}, τ_{n+1}, n + 1)) − E(V(ξ_0, θ̄_0, τ_0, 0)) ≤ −α E(∑_{m=0}^{n} ‖ξ_m‖²).

Letting n → ∞, this inequality implies that

E(∑_{m=0}^{∞} ‖ξ_m‖²) ≤ (1/α)[E(V(ξ_0, θ̄_0, τ_0, 0)) − E(V(ξ_{n+1}, θ̄_{n+1}, τ_{n+1}, n + 1))] < (1/α) E(V(ξ_0, θ̄_0, τ_0, 0)) = (1/α) ξ_0^T P(ψ(θ̄_0), τ_0) ξ_0 < ∞.

By Definition 1, system (10) is stochastically stable.

Necessity: See the Appendix. This completes the proof.
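As a computational illustration (not part of the original paper): with the gains fixed, condition (17) is linear in the matrices P(v, l), so its feasibility can be checked with an off-the-shelf semidefinite programming tool. The sketch below assumes numpy and cvxpy (with an SDP-capable solver such as the default SCS); the function name and mode ordering are our own, and the modes are indexed directly by the tuples v̄ = ψ^{−1}(v), which avoids computing β explicitly since P(β + j, h) corresponds to the shifted tuple (j, i, i_{−1}, . . . , i_{−d+1}).

import numpy as np
import cvxpy as cp
from itertools import product

def check_condition_17(A, B, K, Pi1, Pi2, d, eps=1e-6):
    # Feasibility check of (17) for given plant matrices A, B, gains K (dicts over S = {1, ..., eta}),
    # and transition matrices Pi1 (eta x eta) of theta_k and Pi2 (d x d) of tau_k.
    eta, n = Pi1.shape[0], A[1].shape[0]
    N = (d + 1) * n
    tuples = list(product(range(1, eta + 1), repeat=d + 1))  # v_bar = (i, i_{-1}, ..., i_{-d})

    def Acl(vbar, l):
        # A_v + B_v K_{i_{-l}} R_2(l), with A_v, B_v and R_2(l) as in (18).
        Av = np.zeros((N, N))
        Av[:n, :n] = A[vbar[0]]
        for r in range(1, d + 1):
            Av[r * n:(r + 1) * n, (r - 1) * n:r * n] = np.eye(n)
        Bv = np.zeros((N, B[1].shape[1]))
        Bv[:n, :] = B[vbar[0]]
        R2 = np.zeros((n, N))
        R2[:, l * n:(l + 1) * n] = np.eye(n)
        return Av + Bv @ K[vbar[l]] @ R2

    P = {(vbar, l): cp.Variable((N, N), symmetric=True) for vbar in tuples for l in range(1, d + 1)}
    cons = []
    for vbar in tuples:
        for l in range(1, d + 1):
            cons.append(P[(vbar, l)] >> eps * np.eye(N))
            # P_bar(i, l) = sum_h sum_j lambda_{lh} pi_{ij} P(beta + j, h), with beta + j realised as the shifted tuple.
            Pbar = 0
            for h in range(1, d + 1):
                for j in range(1, eta + 1):
                    Pbar = Pbar + Pi2[l - 1, h - 1] * Pi1[vbar[0] - 1, j - 1] * P[((j,) + vbar[:-1], h)]
            M = Acl(vbar, l)
            lhs = M.T @ Pbar @ M - P[(vbar, l)]
            cons.append(0.5 * (lhs + lhs.T) << -eps * np.eye(N))  # symmetrised form of (17)
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status in ("optimal", "optimal_inaccurate")

With the data and gains of the numerical example in Section 4, this check should return True, which is the analysis direction of Theorem 2; the synthesis of the gains themselves is done in the paper by the iterative LMI algorithm of Zhang, Huang, and Lam (2003) and is not reproduced here.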

Theorem 2 gives a necessary and sufficient condition on the existence of the mode-dependent state-feedback stabilizing gains. Based on the results in Theorem 2, the controller design techniques are presented in Theorem 3.

Theorem 3. Consider closed-loop system (10). There exists a state-feedback control law (2) such that system (10) is stochastically stable if and only if there exist matrices P(v, l) ∈ S+ and K_{i_{−l}} of appropriate dimensions such that the LMI

[−P(v, l)   T^T(i, l);  T(i, l)   −Z(v)] < 0,   (23)

with T(i, l) = [(λ_{l1})^{1/2}(A_v + B_v K_{i_{−l}} R_2(l))^T, . . . , (λ_{ld})^{1/2}(A_v + B_v K_{i_{−l}} R_2(l))^T]^T and Z(v) = diag{Z(v, 1), . . . , Z(v, d)}, where Z(v, h) = (∑_{j=1}^{η} π_{ij} P(β + j, h))^{−1}, h ∈ ℵ, holds for all i ∈ 𝒮, v ∈ 𝒮̄_{d+1}, l ∈ ℵ.

Proof. Letting Z(v, h) = (∑_{j=1}^{η} π_{ij} P(β + j, h))^{−1} for each h ∈ ℵ, inequality (17) can be rewritten as −P(v, l) + ∑_{h=1}^{d} λ_{lh}(A_v + B_v K_{i_{−l}} R_2(l))^T Z^{−1}(v, h)(A_v + B_v K_{i_{−l}} R_2(l)) < 0, and LMI (23) is then obtained by the Schur complement. This completes the proof.

Fig. 2. State of the open-loop system.

Fig. 3. Random mode θ_k.

4. Numerical example

In this section, for the purpose of illustrating the usefulness and flexibility of the theory developed in this paper, we present a simulation example. Attention is focused on the design of a mode-dependent stabilizing controller for a Markovian jump linear system. We assume that θ_k ∈ 𝒮 ≜ {1, 2}, that is, η = 2. Also, we assume that the random network-induced delay satisfies τ_k ∈ ℵ ≜ {1, 2}, that is, d = 2. Thus we have 𝒮̄_3 = {1, 2, 3, 4, 5, 6, 7, 8} and

𝒮^3 = { [1 1 1]^T, [2 1 1]^T, [1 2 1]^T, [2 2 1]^T, [1 1 2]^T, [2 1 2]^T, [1 2 2]^T, [2 2 2]^T }.

By Theorem 1, system (10) has 16 modes. The system data and transition probability matrices Π_1 and Π_2 are taken as follows:

A_1 = [1.7 1; 0 0.17],   A_2 = [0.6 0; 0.04 2],   B_1 = [0; 1],   B_2 = [1; 1],
Π_1 = [0.1 0.9; 0.5 0.5],   Π_2 = [0.2 0.8; 0.5 0.5].

Assume the initial condition of system (10) to be x(−2) = x(−1) = x(0) = [1  −0.5]^T. As seen in Fig. 2, the open-loop system with u(k) = 0 is unstable. By Theorem 3, applying the algorithm from Zhang, Huang, and Lam (2003), we obtain the controller gains K_1 = [−0.0761  −0.2457] and K_2 = [−0.0375  −0.2321]. We carried out simulations of the behavior of the resulting closed-loop system. Fig. 3 (respectively, Fig. 4) shows one possible realization of the Markovian jumping mode θ_k (respectively, τ_k). Under this mode sequence, the corresponding state trajectory of the closed-loop system (10) is shown in Fig. 5, which indicates that the closed-loop system is stochastically stable.
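The closed-loop behaviour reported in Figs. 3–5 can be reproduced qualitatively by a direct simulation of (1) and (2) with the data above. The following Python sketch (illustrative only; numpy is assumed, and the initial modes θ(0) = θ(−1) = θ(−2) = 1, τ_0 = 1 and the random seed are our own choices, since the figures show one particular realization) propagates the plant, the mode chain and the delay chain.

import numpy as np

rng = np.random.default_rng(0)

A = {1: np.array([[1.7, 1.0], [0.0, 0.17]]), 2: np.array([[0.6, 0.0], [0.04, 2.0]])}
B = {1: np.array([[0.0], [1.0]]), 2: np.array([[1.0], [1.0]])}
K = {1: np.array([[-0.0761, -0.2457]]), 2: np.array([[-0.0375, -0.2321]])}
Pi1 = np.array([[0.1, 0.9], [0.5, 0.5]])   # transition matrix of theta_k
Pi2 = np.array([[0.2, 0.8], [0.5, 0.5]])   # transition matrix of tau_k
d, T = 2, 60

def next_mode(Pi, current):
    # draw the next mode of a Markov chain with transition matrix Pi; modes are numbered from 1
    return rng.choice(Pi.shape[0], p=Pi[current - 1]) + 1

x = {k: np.array([1.0, -0.5]) for k in range(-d, 1)}   # x(-2) = x(-1) = x(0)
theta = {k: 1 for k in range(-d, 1)}                   # assumed initial mode history
tau = 1                                                # assumed initial delay mode

for k in range(T):
    u = K[theta[k - tau]] @ x[k - tau]                 # control law (2): delayed state and delayed mode
    x[k + 1] = A[theta[k]] @ x[k] + B[theta[k]] @ u    # plant (1)
    theta[k + 1] = next_mode(Pi1, theta[k])
    tau = next_mode(Pi2, tau)

print(max(np.linalg.norm(x[k]) for k in range(T // 2, T + 1)))  # tail of the state norm; it should be close to zero

Plotting the components of x(k) against k reproduces a trajectory of the kind shown in Fig. 5; different seeds give different realizations of Figs. 3 and 4.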

Fig. 4. Random mode τ_k.

Fig. 5. State of the closed-loop system.

5. Conclusion

In this paper, we have studied the stabilization problem for a class of networked control systems with a discrete-time Markovian jump linear plant. Based on the approach of Xiong and Lam (2006), the resulting closed-loop system is modeled as a Markovian jump linear system with two Markovian jump parameters. A necessary and sufficient condition for the stochastic stability of such NCSs is obtained in terms of a set of LMIs with matrix inversion constraints, from which the state-feedback gains can be solved by an existing iterative LMI algorithm. A numerical example has been given to illustrate the main results. Our future work will focus on the case where both the sensor-to-controller delay and the controller-to-actuator delay are taken into account.

Acknowledgements

The authors thank the referees and the editor for their valuablecomments and suggestions.

Appendix

Proof of Necessity in Theorem 2. Given m ∈ Z_+ being the initial time, for any k ≥ m (k ∈ Z_+), we define

Ã(k, m) ≜ Ã(θ̄_{k−1}, τ_{k−1}) × ··· × Ã(θ̄_m, τ_m) for k > m, and Ã(k, m) ≜ I for k = m,

where Ã(θ̄_k, τ_k) ≜ Ā(θ̄_k) + B̄(θ̄_k)R_1(τ_k)K̄(θ̄_k)R_2(τ_k). Then, for any k ≥ m (k ∈ Z_+), we have

ξ_k = Ã(θ̄_{k−1}, τ_{k−1})ξ_{k−1} = Ã(θ̄_{k−1}, τ_{k−1}) Ã(θ̄_{k−2}, τ_{k−2})ξ_{k−2} = ··· = Ã(θ̄_{k−1}, τ_{k−1}) × ··· × Ã(θ̄_m, τ_m)ξ_m = Ã(k, m)ξ_m.   (24)

Assume that the closed-loop system (10) is stochastically stable. By Definition 1, we have E(∑_{k=0}^{∞} ‖ξ_k‖² | ξ_0, θ_0, τ_0) < ∞, which implies that E(∑_{k=0}^{∞} ξ_k^T Q ξ_k | ξ_0, θ_0, τ_0) < ∞ holds for any given matrix Q ∈ S+.

At time m, let the system mode be (v, l), where (v, l) is the same as defined in the previous discussion.

Let the function Φ, defined on R_+ × R_+ × R^{(d+1)n} × 𝒮̄_{d+1} × ℵ, be given by

Φ(n, m, ξ_m, v, l) ≜ E(∑_{k=m}^{n} ξ_k^T Q(ψ(θ̄_k), τ_k) ξ_k | ξ_m, ψ(θ̄_m) = v, τ_m = l),   (25)

where Q(ψ(θ̄_k), τ_k) ∈ S+ and n ≥ m. In fact, from (24) we know that ξ_k = Ã(k, m)ξ_m for k ≥ m. From (10), θ̄_k and τ_k are random variables. Note that the right side of (25) is a conditional expectation; due to this fact, the variables ξ_m and ξ_m^T can be moved out of the expectation. Hence, at time m, for any n ≥ m, it can be shown from (25) that

Φ(n, m, ξ_m, v, l) = ξ_m^T E(∑_{k=m}^{n} Ã^T(k, m) Q(ψ(θ̄_k), τ_k) Ã(k, m) | ξ_m, ψ(θ̄_m) = v, τ_m = l) ξ_m = ξ_m^T P(n − m, v, l) ξ_m   (26)

where P(n − m, v, l) ≜ E(∑_{k=m}^{n} Ã^T(k, m) Q(ψ(θ̄_k), τ_k) Ã(k, m) | ξ_m, ψ(θ̄_m) = v, τ_m = l). Since system (10) is stochastically stable, P(n − m, v, l) is a monotonically increasing, positive-definite matrix-valued function bounded from above. Thus, the limit P(v, l) ≜ lim_{n→∞} P(n − m, v, l) exists, where P(v, l) ∈ S+. At time m + 1, let the mode be (µ, h), where (µ, h) is the same as defined in the previous discussion. Then, at time m + 1, we have

E(P(n − m − 1, ψ(θ̄_{m+1}), τ_{m+1})) = ∑_{h=1}^{d} ∑_{µ=1}^{η^{d+1}} p_{(v,l)(µ,h)} P(n − m − 1, µ, h).

It is noted that from (20) we have ∑_{h=1}^{d} ∑_{µ=1}^{η^{d+1}} p_{(v,l)(µ,h)} P(n − m − 1, µ, h) = ∑_{h=1}^{d} ∑_{j=1}^{η} λ_{lh} π_{ij} P(n − m − 1, β + j, h). Therefore, we have

E(P(n − m − 1, ψ(θ̄_{m+1}), τ_{m+1})) = ∑_{h=1}^{d} ∑_{j=1}^{η} λ_{lh} π_{ij} P(n − m − 1, β + j, h).

Then, it follows from (10) that

E(ξ_{m+1}^T P(n − m − 1, ψ(θ̄_{m+1}), τ_{m+1}) ξ_{m+1}) − ξ_m^T P(n − m, v, l) ξ_m
= ξ_m^T [(A_v + B_v K_{i_{−l}} R_2(l))^T (∑_{h=1}^{d} ∑_{j=1}^{η} λ_{lh} π_{ij} P(n − m − 1, β + j, h)) (A_v + B_v K_{i_{−l}} R_2(l)) − P(n − m, v, l)] ξ_m.

On the other hand, from (26) we know that ξ_m^T P(n − m, v, l) ξ_m = Φ(n, m, ξ_m, v, l) and ξ_{m+1}^T P(n − m − 1, ψ(θ̄_{m+1}), τ_{m+1}) ξ_{m+1} = Φ(n, m + 1, ξ_{m+1}, ψ(θ̄_{m+1}), τ_{m+1}). In light of these equalities, we have

E(ξ_{m+1}^T P(n − m − 1, ψ(θ̄_{m+1}), τ_{m+1}) ξ_{m+1}) − ξ_m^T P(n − m, v, l) ξ_m
= E(Φ(n, m + 1, ξ_{m+1}, ψ(θ̄_{m+1}), τ_{m+1})) − Φ(n, m, ξ_m, v, l)
= ∑_{k=m+1}^{n} ξ_k^T Q(ψ(θ̄_k), τ_k) ξ_k − ∑_{k=m}^{n} ξ_k^T Q(ψ(θ̄_k), τ_k) ξ_k
= −ξ_m^T Q(ψ(θ̄_m), τ_m) ξ_m = −ξ_m^T Q(v, l) ξ_m.

This implies that for any ξ_m (0 < m < n), it holds that

ξ_m^T [(A_v + B_v K_{i_{−l}} R_2(l))^T (∑_{h=1}^{d} ∑_{j=1}^{η} λ_{lh} π_{ij} P(n − m − 1, β + j, h)) (A_v + B_v K_{i_{−l}} R_2(l)) − P(n − m, v, l)] ξ_m = −ξ_m^T Q(v, l) ξ_m

for all i ∈ 𝒮, v ∈ 𝒮̄_{d+1}, l ∈ ℵ. Since this equation holds for any ξ_m (0 < m < n), we have

(A_v + B_v K_{i_{−l}} R_2(l))^T (∑_{h=1}^{d} ∑_{j=1}^{η} λ_{lh} π_{ij} P(n − m − 1, β + j, h)) (A_v + B_v K_{i_{−l}} R_2(l)) − P(n − m, v, l) = −Q(v, l).

In this equation, letting n → ∞ and noting that Q(v, l) > 0, one can obtain

(A_v + B_v K_{i_{−l}} R_2(l))^T (∑_{h=1}^{d} ∑_{j=1}^{η} λ_{lh} π_{ij} P(β + j, h)) (A_v + B_v K_{i_{−l}} R_2(l)) − P(v, l) = −Q(v, l) < 0.


As a result, it is shown that there exist matrices P(v, l) (v ∈ 𝒮̄_{d+1}, l ∈ ℵ) such that inequality (17) holds for all i ∈ 𝒮, v ∈ 𝒮̄_{d+1}, l ∈ ℵ.

References

Azimi-Sadjadi, B. (2003). Stability of networked control systems in the presence of packet losses. In Proceedings of the Conference on Decision and Control (pp. 676–681).

Boukas, E. K., & Liu, Z. K. (2001). Robust H∞ control of discrete-time Markovian jump linear systems with mode-dependent time-delays. IEEE Transactions on Automatic Control, 46(12), 1918–1924.

Cao, Y.-Y., & Lam, J. (1999). Stochastic stabilizability and H2/H∞ control for discrete-time jump linear systems with time delay. Journal of the Franklin Institute, 336(8), 1263–1281.

Chen, W.-H., Guan, Z.-H., & Yu, P. (2004). Delay-dependent stability and H2/H∞ control of uncertain discrete-time Markovian jump systems with mode-dependent time delays. Systems & Control Letters, 52(5), 361–376.

Ji, Y., Chizeck, H. J., Feng, X., & Loparo, K. A. (1991). Stability and control of discrete-time jump linear systems. Control Theory and Advanced Technology, 7(2), 247–270.

Niu, Y., Ho, D. W. C., & Wang, X. (2007). Sliding mode control for Itô stochastic systems with Markovian switching. Automatica, 43, 1784–1790.

Shi, P., Boukas, E.-K., & Agarwal, R. K. (1999). Control of Markovian jump discrete-time systems with norm bounded uncertainty and unknown delay. IEEE Transactions on Automatic Control, 44(11), 2139–2144.

Xiao, L., & Hassibi, A. (2000). Control with random communication delays via a discrete-time jump system approach. In Proceedings of the American Control Conference.

Xiong, J., & Lam, J. (2006). Stabilization of discrete-time Markovian jump linear systems via time-delayed controllers. Automatica, 42(5), 747–753.

Xiong, J., & Lam, J. (2007). Stabilization of linear systems over networks with bounded packet loss. Automatica, 43(1), 80–87.

Xiong, J., Lam, J., Gao, H., & Ho, D. W. C. (2005). On robust stabilization of Markovian jump systems with uncertain switching probabilities. Automatica, 41(5), 897–903.

Zhang, L., Huang, B., & Lam, J. (2003). H∞ model reduction of Markovian jump linear systems. Systems & Control Letters, 50(3), 103–118.

Zhang, L., Shi, Y., & Chen, T. (2005). A new method for stabilization of networked control systems with random delays. IEEE Transactions on Automatic Control, 50(8), 1177–1181.

Ming Liu earned his B.S. and M.S. degrees from Northeastern University, Shenyang, China, in 2003 and 2006, respectively. He is now pursuing his Ph.D. degree in the Department of Mathematics at City University of Hong Kong. His current research interests include quantized control and networked control systems.

Daniel W.C. Ho received first class B.Sc., M.Sc. and Ph.D. degrees in mathematics from the University of Salford (UK) in 1980, 1982 and 1986, respectively. From 1985 to 1988, Dr. Ho was a Research Fellow in the Industrial Control Unit, University of Strathclyde (Glasgow, Scotland). In 1989, he joined the Department of Mathematics, City University of Hong Kong, where he is currently a Professor. He is now serving as an Associate Editor for the Asian Journal of Control. His research interests include H-infinity control theory, adaptive neural wavelet identification, nonlinear control theory, complex networks, networked control systems and quantized control.

Yugang Niu received the B.Sc. degree from Hebei Normal University, Shijiazhuang, PR China, in 1986, and the M.Sc. and Ph.D. degrees from the Nanjing University of Science & Technology, Nanjing, PR China, in 1992 and 2001, respectively. His postdoctoral research was carried out at the East China University of Science & Technology, Shanghai, PR China, from May 2001 to June 2003. In 2002, as a Research Associate, he visited the University of Hong Kong for three months. In 2005 and 2006, as a Research Fellow, he visited the City University of Hong Kong for six months each.

In 2001, Dr. Niu joined the Department of Automation, the East China University of Science & Technology, where he is currently a Professor. Dr. Niu has published more than 10 papers in the past 5 years in Automatica, IEEE Transactions on Automatic Control, Systems & Control Letters, IEEE Transactions on Fuzzy Systems, IEE Proc. Control Theory and Applications and so on. His research interests include sliding mode control, stochastic systems, networked control systems, Markovian jumping systems, filtering, etc.