14. Stochastic Processes
Introduction

Let $\xi$ denote the random outcome of an experiment. To every such outcome suppose a waveform $X(t,\xi)$ is assigned. The collection of such waveforms forms a stochastic process. The set $\{\xi_k\}$ and the time index $t$ can be continuous or discrete (countably infinite or finite) as well. For fixed $\xi_i \in S$ (the set of all experimental outcomes), $X(t,\xi_i)$ is a specific time function. For fixed $t = t_1$, $X_1 = X(t_1,\xi_i)$ is a random variable. The ensemble of all such realizations $X(t,\xi)$ over time represents the stochastic process $X(t)$ (see Fig. 14.1).

[Fig. 14.1: an ensemble of realizations $X(t,\xi_1), X(t,\xi_2), \ldots, X(t,\xi_k), \ldots, X(t,\xi_n)$ plotted against $t$, with the random variables obtained by sampling the ensemble at $t = t_1$ and $t = t_2$.]

For example,
$$X(t) = a\cos(\omega_0 t + \varphi),$$
where $\varphi$ is a uniformly distributed random variable in $(0, 2\pi)$, represents a stochastic process. Stochastic processes are everywhere: Brownian motion, stock market fluctuations, and various queuing systems all represent stochastic phenomena.

If $X(t)$ is a stochastic process, then for fixed $t$, $X(t)$ represents a random variable. Its distribution function is given by
$$F_X(x,t) = P\{X(t) \le x\}. \tag{14-1}$$
Notice that $F_X(x,t)$ depends on $t$, since for a different $t$ we obtain a different random variable. Further,
$$f_X(x,t) \triangleq \frac{\partial F_X(x,t)}{\partial x} \tag{14-2}$$
represents the first-order probability density function of the process $X(t)$.
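The first-order statistics in (14-1)-(14-2) can be estimated by sampling many independent realizations of the process at a fixed time instant. The short Python sketch below does this for the cosine process above; the sample size, amplitude, frequency, and time instant are illustrative assumptions, not values from the notes.

```python
import numpy as np

# Monte Carlo estimate of the first-order CDF F_X(x, t) = P{X(t) <= x}
# for X(t) = a*cos(w0*t + phi), phi ~ U(0, 2*pi).  (Assumed parameters.)
rng = np.random.default_rng(0)
a, w0, t = 1.0, 2.0, 0.7             # amplitude, frequency, fixed time instant
phi = rng.uniform(0.0, 2.0 * np.pi, size=100_000)
samples = a * np.cos(w0 * t + phi)   # one random variable: X(t) across the ensemble

def F_hat(x):
    """Empirical distribution function of X(t) at level x."""
    return np.mean(samples <= x)

for x in (-0.9, 0.0, 0.5, 0.9):
    # For this process the exact CDF is 1 - arccos(x/a)/pi for |x| < a.
    exact = 1.0 - np.arccos(x / a) / np.pi
    print(f"x = {x:+.1f}:  F_hat = {F_hat(x):.4f}   exact = {exact:.4f}")
```

The empirical and exact values agree to within the Monte Carlo error, illustrating that for fixed $t$ the process is just an ordinary random variable.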
For $t = t_1$ and $t = t_2$, $X(t)$ represents two different random variables $X_1 = X(t_1)$ and $X_2 = X(t_2)$ respectively. Their joint distribution is given by
$$F_X(x_1, x_2, t_1, t_2) = P\{X(t_1) \le x_1,\ X(t_2) \le x_2\}, \tag{14-3}$$
and
$$f_X(x_1, x_2, t_1, t_2) \triangleq \frac{\partial^2 F_X(x_1, x_2, t_1, t_2)}{\partial x_1\, \partial x_2} \tag{14-4}$$
represents the second-order density function of the process $X(t)$. Similarly, $f_X(x_1, x_2, \ldots, x_n, t_1, t_2, \ldots, t_n)$ represents the $n$th-order density function of the process $X(t)$. Complete specification of the stochastic process $X(t)$ requires the knowledge of $f_X(x_1, x_2, \ldots, x_n, t_1, t_2, \ldots, t_n)$ for all $t_i,\ i = 1, 2, \ldots, n$, and for all $n$ (an almost impossible task in reality).
Mean of a Stochastic Process:
$$\mu_X(t) \triangleq E\{X(t)\} = \int_{-\infty}^{+\infty} x\, f_X(x,t)\, dx \tag{14-5}$$
represents the mean value of a process $X(t)$. In general, the mean of a process can depend on the time index $t$.

The autocorrelation function of a process $X(t)$ is defined as
$$R_{XX}(t_1, t_2) \triangleq E\{X(t_1)\, X^*(t_2)\} = \int\!\!\int x_1\, x_2^*\, f_X(x_1, x_2, t_1, t_2)\, dx_1\, dx_2, \tag{14-6}$$
and it represents the interrelationship between the random variables $X_1 = X(t_1)$ and $X_2 = X(t_2)$ generated from the process $X(t)$.

Properties:

1.
$$R_{XX}(t_1, t_2) = R_{XX}^*(t_2, t_1) = [E\{X(t_2)\, X^*(t_1)\}]^*. \tag{14-7}$$

2.
$$R_{XX}(t, t) = E\{|X(t)|^2\} > 0. \quad \text{(Average instantaneous power)}$$
3. $R_{XX}(t_1, t_2)$ represents a nonnegative definite function, i.e., for any set of constants $\{a_i\}_{i=1}^{n}$,
$$\sum_{i=1}^{n}\sum_{j=1}^{n} a_i\, a_j^*\, R_{XX}(t_i, t_j) \ge 0. \tag{14-8}$$
Eq. (14-8) follows by noticing that $E\{|Y|^2\} \ge 0$ for $Y = \sum_{i=1}^{n} a_i X(t_i)$.

The function
$$C_{XX}(t_1, t_2) = R_{XX}(t_1, t_2) - \mu_X(t_1)\, \mu_X^*(t_2) \tag{14-9}$$
represents the autocovariance function of the process $X(t)$.

Example 14.1
Let
$$z = \int_{-T}^{T} X(t)\, dt.$$
Then
$$E[\,|z|^2\,] = \int_{-T}^{T}\!\int_{-T}^{T} E\{X(t_1)\, X^*(t_2)\}\, dt_1\, dt_2 = \int_{-T}^{T}\!\int_{-T}^{T} R_{XX}(t_1, t_2)\, dt_1\, dt_2. \tag{14-10}$$
Example 14.2
Let
$$X(t) = a\cos(\omega_0 t + \varphi), \qquad \varphi \sim U(0, 2\pi). \tag{14-11}$$
This gives
$$\mu_X(t) = E\{X(t)\} = a\, E\{\cos(\omega_0 t + \varphi)\} = a\cos\omega_0 t\; E\{\cos\varphi\} - a\sin\omega_0 t\; E\{\sin\varphi\} = 0, \tag{14-12}$$
since $E\{\cos\varphi\} = \frac{1}{2\pi}\int_0^{2\pi}\cos\varphi\, d\varphi = 0 = E\{\sin\varphi\}$.

Similarly,
$$R_{XX}(t_1, t_2) = a^2 E\{\cos(\omega_0 t_1 + \varphi)\cos(\omega_0 t_2 + \varphi)\}
= \frac{a^2}{2} E\{\cos\omega_0(t_1 - t_2) + \cos(\omega_0(t_1 + t_2) + 2\varphi)\}
= \frac{a^2}{2}\cos\omega_0(t_1 - t_2). \tag{14-13}$$
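The closed forms (14-12)-(14-13) are easy to confirm by simulation. The sketch below, with illustrative parameter values that are not part of the original example, averages over many independent draws of $\varphi$ and compares the sample mean and sample autocorrelation with $0$ and $(a^2/2)\cos\omega_0(t_1 - t_2)$.

```python
import numpy as np

# Monte Carlo check of (14-12)-(14-13) for X(t) = a*cos(w0*t + phi), phi ~ U(0, 2*pi).
rng = np.random.default_rng(1)
a, w0 = 2.0, 1.5                      # assumed amplitude and frequency
t1, t2 = 0.4, 1.1                     # two arbitrary time instants
phi = rng.uniform(0.0, 2.0 * np.pi, size=200_000)

X1 = a * np.cos(w0 * t1 + phi)        # ensemble of X(t1)
X2 = a * np.cos(w0 * t2 + phi)        # ensemble of X(t2)

mean_est = X1.mean()                              # should be ~0 by (14-12)
R_est = np.mean(X1 * X2)                          # sample estimate of E{X(t1)X(t2)}
R_exact = 0.5 * a**2 * np.cos(w0 * (t1 - t2))     # closed form (14-13)

print(f"mean estimate : {mean_est:+.4f}  (theory 0)")
print(f"R(t1,t2) est. : {R_est:+.4f}  (theory {R_exact:+.4f})")
```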
Stationary Stochastic Processes

Stationary processes exhibit statistical properties that are invariant to shifts of the time index. Thus, for example, second-order stationarity implies that the statistical properties of the pairs $\{X(t_1), X(t_2)\}$ and $\{X(t_1+c), X(t_2+c)\}$ are the same for any $c$. Similarly, first-order stationarity implies that the statistical properties of $X(t_i)$ and $X(t_i+c)$ are the same for any $c$.

In strict terms, the statistical properties are governed by the joint probability density function. Hence a process is $n$th-order Strict-Sense Stationary (S.S.S) if
$$f_X(x_1, x_2, \ldots, x_n, t_1, t_2, \ldots, t_n) \equiv f_X(x_1, x_2, \ldots, x_n, t_1+c, t_2+c, \ldots, t_n+c) \tag{14-14}$$
for any $c$, where the left side represents the joint density function of the random variables $X_1 = X(t_1),\ X_2 = X(t_2),\ \ldots,\ X_n = X(t_n)$ and the right side corresponds to the joint density function of the random variables $X_1' = X(t_1+c),\ X_2' = X(t_2+c),\ \ldots,\ X_n' = X(t_n+c)$. A process $X(t)$ is said to be strict-sense stationary if (14-14) is true for all $t_i,\ i = 1, 2, \ldots, n$, for all $n = 1, 2, \ldots$, and for any $c$.
For a first-order strict-sense stationary process, from (14-14) we have
$$f_X(x, t) \equiv f_X(x, t+c) \tag{14-15}$$
for any $c$. In particular, $c = -t$ gives
$$f_X(x, t) = f_X(x), \tag{14-16}$$
i.e., the first-order density of $X(t)$ is independent of $t$. In that case
$$E[X(t)] = \int_{-\infty}^{+\infty} x\, f_X(x)\, dx = \mu, \quad \text{a constant.} \tag{14-17}$$

Similarly, for a second-order strict-sense stationary process we have from (14-14)
$$f_X(x_1, x_2, t_1, t_2) \equiv f_X(x_1, x_2, t_1+c, t_2+c)$$
for any $c$. For $c = -t_2$ we get
$$f_X(x_1, x_2, t_1, t_2) \equiv f_X(x_1, x_2, t_1 - t_2), \tag{14-18}$$
i.e., the second-order density function of a strict-sense stationary process depends only on the difference of the time indices $t_1 - t_2 = \tau$. In that case the autocorrelation function is given by
$$R_{XX}(t_1, t_2) \triangleq E\{X(t_1)\, X^*(t_2)\} = \int\!\!\int x_1\, x_2^*\, f_X(x_1, x_2, \tau = t_1 - t_2)\, dx_1\, dx_2 = R_{XX}(t_1 - t_2) \triangleq R_{XX}(\tau) = R_{XX}^*(-\tau), \tag{14-19}$$
i.e., the autocorrelation function of a second-order strict-sense stationary process depends only on the difference of the time indices $\tau = t_1 - t_2$.

Notice that (14-17) and (14-19) are consequences of the stochastic process being first- and second-order strict-sense stationary. On the other hand, the basic conditions for first- and second-order stationarity, Eqs. (14-16) and (14-18), are usually difficult to verify. In that case, we often resort to a looser definition of stationarity, known as Wide-Sense Stationarity (W.S.S), by making use of
(14-17) and (14-19) as the necessary conditions. Thus, a process $X(t)$ is said to be Wide-Sense Stationary if

(i)
$$E\{X(t)\} = \mu \tag{14-20}$$
and

(ii)
$$E\{X(t_1)\, X^*(t_2)\} = R_{XX}(t_1 - t_2), \tag{14-21}$$
i.e., for wide-sense stationary processes, the mean is a constant and the autocorrelation function depends only on the difference between the time indices. Notice that (14-20)-(14-21) say nothing about the nature of the probability density functions; they deal only with the average behavior of the process. Since (14-20)-(14-21) follow from (14-16) and (14-18), strict-sense stationarity always implies wide-sense stationarity. However, the converse is not true in general, the only exception being the Gaussian process. This follows since, if $X(t)$ is a Gaussian process, then by definition $X_1 = X(t_1),\ X_2 = X(t_2),\ \ldots,\ X_n = X(t_n)$ are jointly Gaussian random variables for any $t_1, t_2, \ldots, t_n$, whose joint characteristic function is given by
$$\phi_X(\omega_1, \omega_2, \ldots, \omega_n) = e^{\,j\sum_{k=1}^{n}\mu(t_k)\,\omega_k \;-\; \frac{1}{2}\sum_{i=1}^{n}\sum_{k=1}^{n} C_{XX}(t_i, t_k)\,\omega_i\,\omega_k}, \tag{14-22}$$
where $C_{XX}(t_i, t_k)$ is as defined in (14-9). If $X(t)$ is wide-sense stationary, then using (14-20)-(14-21) in (14-22) we get
$$\phi_X(\omega_1, \omega_2, \ldots, \omega_n) = e^{\,j\mu\sum_{k=1}^{n}\omega_k \;-\; \frac{1}{2}\sum_{i=1}^{n}\sum_{k=1}^{n} C_{XX}(t_i - t_k)\,\omega_i\,\omega_k}, \tag{14-23}$$
and hence if the set of time indices is shifted by a constant $c$ to generate a new set of jointly Gaussian random variables $X_1' = X(t_1+c),\ X_2' = X(t_2+c),\ \ldots,\ X_n' = X(t_n+c)$, then their joint characteristic function is identical to (14-23). Thus the sets of random variables $\{X_i\}_{i=1}^{n}$ and $\{X_i'\}_{i=1}^{n}$ have the same joint probability distribution for all $n$ and all $c$, establishing the strict-sense stationarity of Gaussian processes from their wide-sense stationarity.

To summarize, if $X(t)$ is a Gaussian process, then

wide-sense stationarity (w.s.s) $\Rightarrow$ strict-sense stationarity (s.s.s).

Notice that since the joint p.d.f. of Gaussian random variables depends only on their second-order statistics, which is also the basis
for wide-sense stationarity, we obtain strict-sense stationarity as well. From (14-12)-(14-13) (refer to Example 14.2), the process
$$X(t) = a\cos(\omega_0 t + \varphi)$$
in (14-11) is wide-sense stationary, but not strict-sense stationary.

Similarly, if the $X(t)$ in Example 14.1 is a zero-mean wide-sense stationary process, then $\sigma_z^2 = E\{|z|^2\}$ in (14-10) reduces to
$$\sigma_z^2 = E\{|z|^2\} = \int_{-T}^{T}\!\int_{-T}^{T} R_{XX}(t_1 - t_2)\, dt_1\, dt_2.$$
As $t_1, t_2$ vary from $-T$ to $+T$, $\tau = t_1 - t_2$ varies from $-2T$ to $+2T$. Moreover, $R_{XX}(\tau)$ is constant over the shaded strip $t_1 - t_2 = \tau$ in Fig. 14.2, whose area (for $\tau > 0$) is given by
$$\tfrac{1}{2}(2T - \tau)^2 - \tfrac{1}{2}(2T - \tau - d\tau)^2 = (2T - \tau)\, d\tau,$$
and hence the above integral reduces to
$$\sigma_z^2 = \int_{-2T}^{2T} R_{XX}(\tau)\,(2T - |\tau|)\, d\tau = 2T\int_{-2T}^{2T} R_{XX}(\tau)\left(1 - \frac{|\tau|}{2T}\right) d\tau. \tag{14-24}$$

[Fig. 14.2: the square $-T \le t_1, t_2 \le T$ in the $(t_1, t_2)$ plane, with the diagonal strip $t_1 - t_2 = \tau$ of width $d\tau$ shaded.]
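Equation (14-24) can be checked numerically: for any stationary autocorrelation, the brute-force double integral over the square must agree with the triangularly weighted single integral. A small sketch, using the autocorrelation of Example 14.2 as an assumed test case:

```python
import numpy as np

# Check (14-24): the double integral of R_XX(t1 - t2) over [-T, T]^2 equals
# the single integral of R_XX(tau) * (2T - |tau|) over [-2T, 2T].
# Test autocorrelation: R_XX(tau) = (a^2 / 2) * cos(w0 * tau), from (14-13).
a, w0, T = 1.0, 3.0, 2.0
R = lambda tau: 0.5 * a**2 * np.cos(w0 * tau)

# Brute-force double integral on a grid.
n = 1000
t = np.linspace(-T, T, n)
dt = t[1] - t[0]
T1, T2 = np.meshgrid(t, t, indexing="ij")
double_integral = np.sum(R(T1 - T2)) * dt * dt

# Triangular-window form (14-24).
tau = np.linspace(-2 * T, 2 * T, 4 * n)
dtau = tau[1] - tau[0]
single_integral = np.sum(R(tau) * (2 * T - np.abs(tau))) * dtau

print(f"double integral : {double_integral:.4f}")
print(f"eq. (14-24)     : {single_integral:.4f}")
```

Both Riemann sums converge to the same value as the grids are refined, which is exactly the content of (14-24).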
Systems with Stochastic Inputs

A deterministic system¹ transforms each input waveform $X(t,\xi_i)$ into an output waveform $Y(t,\xi_i) = T[X(t,\xi_i)]$ by operating only on the time variable $t$. Thus a set of realizations at the input corresponding to a process $X(t)$ generates a new set of realizations $\{Y(t,\xi)\}$ at the output, associated with a new process $Y(t)$.

[Fig. 14.3: a realization $X(t,\xi_i)$ entering the system $T[\cdot]$ and the corresponding output realization $Y(t,\xi_i)$; symbolically $X(t) \rightarrow T[\cdot] \rightarrow Y(t)$.]

Our goal is to study the output process statistics in terms of the input process statistics and the system function.

¹A stochastic system, on the other hand, operates on both the variables $t$ and $\xi$.
Deterministic Systems

[Figure: classification of deterministic systems into memoryless systems, $Y(t) = g[X(t)]$, and systems with memory, $Y(t) = L[X(t)]$; systems with memory split into time-varying and time-invariant systems, with linear time-invariant (LTI) systems at the intersection of the linear and time-invariant classes.]

For an LTI system with impulse response $h(t)$ and input $X(t)$, the output is
$$Y(t) = \int_{-\infty}^{+\infty} h(t - \tau)\, X(\tau)\, d\tau = \int_{-\infty}^{+\infty} h(\tau)\, X(t - \tau)\, d\tau.$$
Memoryless Systems: The output $Y(t)$ in this case depends only on the present value of the input $X(t)$, i.e.,
$$Y(t) = g\{X(t)\}. \tag{14-25}$$

[Fig. 14.4: behavior of a memoryless system for different classes of inputs.
- Strict-sense stationary input: strict-sense stationary output (see (9-76), Text, for a proof).
- Wide-sense stationary input: the output need not be stationary in any sense.
- $X(t)$ stationary Gaussian with autocorrelation $R_{XX}(\tau)$: $Y(t)$ stationary, but not Gaussian, with $R_{XY}(\tau) = \eta R_{XX}(\tau)$ (see (14-26)).]
Theorem: If $X(t)$ is a zero-mean stationary Gaussian process, and $Y(t) = g[X(t)]$, where $g(\cdot)$ represents a nonlinear memoryless device, then
$$R_{XY}(\tau) = \eta\, R_{XX}(\tau), \qquad \eta = E\{g'(X)\}. \tag{14-26}$$

Proof:
$$R_{XY}(\tau) = E\{X(t)\, Y(t-\tau)\} = E\{X(t)\, g[X(t-\tau)]\} = \int\!\!\int x_1\, g(x_2)\, f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2, \tag{14-27}$$
where $X_1 = X(t)$ and $X_2 = X(t-\tau)$ are jointly Gaussian random variables, and hence
$$f_{X_1 X_2}(x_1, x_2) = \frac{1}{2\pi |A|^{1/2}}\, e^{-\underline{x}^{*} A^{-1}\underline{x}/2}, \qquad
\underline{X} = (X_1, X_2)^T, \quad \underline{x} = (x_1, x_2)^T,$$
$$A = E\{\underline{X}\,\underline{X}^{*}\} = \begin{pmatrix} R_{XX}(0) & R_{XX}(\tau) \\ R_{XX}(\tau) & R_{XX}(0) \end{pmatrix} \triangleq L\, L^{*},$$
where $L$ is an upper triangular factor matrix with positive diagonal entries, i.e.,
$$L = \begin{pmatrix} l_{11} & l_{12} \\ 0 & l_{22} \end{pmatrix}.$$
Consider the transformation
$$\underline{Z} \triangleq L^{-1}\underline{X} = (Z_1, Z_2)^T, \qquad \underline{z} \triangleq L^{-1}\underline{x} = (z_1, z_2)^T,$$
so that
$$E\{\underline{Z}\,\underline{Z}^{*}\} = L^{-1} E\{\underline{X}\,\underline{X}^{*}\}\, L^{*-1} = L^{-1} A\, L^{*-1} = I,$$
and hence $Z_1, Z_2$ are zero-mean independent Gaussian random variables. Also
$$\underline{x} = L\,\underline{z} \;\Rightarrow\; x_1 = l_{11} z_1 + l_{12} z_2, \quad x_2 = l_{22} z_2,$$
and hence
$$\underline{x}^{*} A^{-1}\underline{x} = \underline{z}^{*} L^{*} A^{-1} L\,\underline{z} = \underline{z}^{*}\underline{z} = z_1^2 + z_2^2.$$
The Jacobian of the transformation is given by
$$|J| = |L^{-1}| = |A|^{-1/2}.$$
Hence, substituting these into (14-27), we obtain
$$R_{XY}(\tau) = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} (l_{11} z_1 + l_{12} z_2)\, g(l_{22} z_2)\, \frac{1}{2\pi}\, e^{-z_1^2/2}\, e^{-z_2^2/2}\, dz_1\, dz_2$$
$$= \int\!\!\int l_{11} z_1\, g(l_{22} z_2)\, f_{z_1}(z_1)\, f_{z_2}(z_2)\, dz_1\, dz_2 \;+\; \int\!\!\int l_{12} z_2\, g(l_{22} z_2)\, f_{z_1}(z_1)\, f_{z_2}(z_2)\, dz_1\, dz_2$$
$$= l_{11} \underbrace{\int z_1\, f_{z_1}(z_1)\, dz_1}_{=\,0} \int g(l_{22} z_2)\, f_{z_2}(z_2)\, dz_2 \;+\; l_{12} \int z_2\, g(l_{22} z_2)\, f_{z_2}(z_2)\, dz_2$$
$$= \frac{l_{12}}{l_{22}} \int_{-\infty}^{+\infty} u\, g(u)\, \frac{1}{\sqrt{2\pi}\, l_{22}}\, e^{-u^2/2 l_{22}^2}\, du,$$
where $u = l_{22} z_2$. This gives
$$R_{XY}(\tau) = \frac{l_{12}}{l_{22}} \int_{-\infty}^{+\infty} u\, g(u)\, f_u(u)\, du, \qquad f_u(u) = \frac{1}{\sqrt{2\pi}\, l_{22}}\, e^{-u^2/2 l_{22}^2},$$
and since $u\, f_u(u) = -l_{22}^2\, \dfrac{d f_u(u)}{du}$, integrating by parts,
$$R_{XY}(\tau) = l_{12}\, l_{22}\left\{\Big[-g(u)\, f_u(u)\Big]_{-\infty}^{+\infty} + \int_{-\infty}^{+\infty} g'(u)\, f_u(u)\, du\right\}
= R_{XX}(\tau)\, E\{g'(X)\} = \eta\, R_{XX}(\tau),$$
since $A = L L^{*}$ gives $l_{12}\, l_{22} = R_{XX}(\tau)$, the boundary term vanishes, and $u \sim N(0, l_{22}^2) = N(0, R_{XX}(0))$ has the same distribution as $X$. Hence we obtain the desired result, where $\eta = E[g'(X)]$. Thus, if the input to a memoryless device is stationary Gaussian, the cross-correlation function between the input and the output is proportional to the input autocorrelation function.
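The proportionality (14-26) is easy to check by simulation. A minimal sketch, assuming the cubic nonlinearity $g(x) = x^3$ (so that $\eta = E\{g'(X)\} = 3 R_{XX}(0)$) and a Gaussian AR(1)-type input; none of these choices come from the original text.

```python
import numpy as np

# Monte Carlo check of (14-26): R_XY(tau) = E{g'(X)} * R_XX(tau) for a
# stationary zero-mean Gaussian input and a memoryless nonlinearity g.
# Assumed setup: g(x) = x^3, so g'(x) = 3x^2 and eta = 3*R_XX(0);
# X(n) is a Gaussian AR(1) process (stationary after a burn-in).
rng = np.random.default_rng(2)
rho, N = 0.8, 500_000
w = rng.normal(0.0, 1.0, N)
x = np.zeros(N)
for n in range(1, N):
    x[n] = rho * x[n - 1] + w[n]
x = x[5_000:]                              # discard transient
y = x**3                                   # memoryless device Y(n) = g[X(n)]

def xcorr(a, b, k):
    """Sample estimate of E{a(n+k) b(n)} for k >= 0."""
    return np.mean(a[k:] * b[:len(b) - k])

eta = 3.0 * xcorr(x, x, 0)                 # E{g'(X)} = 3 E{X^2}
for k in (0, 1, 3, 6):
    print(f"lag {k}: R_XY = {xcorr(x, y, k):8.3f}   eta*R_XX = {eta * xcorr(x, x, k):8.3f}")
```

The two columns agree (up to Monte Carlo error) even though the output $Y(n)$ itself is far from Gaussian.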
Linear Systems: $L[\cdot]$ represents a linear system if
$$L\{a_1 X(t_1) + a_2 X(t_2)\} = a_1 L\{X(t_1)\} + a_2 L\{X(t_2)\}. \tag{14-28}$$
Let
$$Y(t) = L\{X(t)\} \tag{14-29}$$
represent the output of a linear system.

Time-Invariant System: $L[\cdot]$ represents a time-invariant system if
$$Y(t) = L\{X(t)\} \;\Rightarrow\; L\{X(t - t_0)\} = Y(t - t_0), \tag{14-30}$$
i.e., a shift in the input results in the same shift in the output. If $L[\cdot]$ satisfies both (14-28) and (14-30), then it corresponds to a linear time-invariant (LTI) system. LTI systems can be uniquely represented in terms of their output to a delta function:

[Fig. 14.5: an impulse $\delta(t)$ applied to an LTI system produces the impulse response $h(t)$ of the system.]
Then, for an arbitrary input $X(t)$ (Fig. 14.6), the output is
$$Y(t) = \int_{-\infty}^{+\infty} h(t - \tau)\, X(\tau)\, d\tau = \int_{-\infty}^{+\infty} h(\tau)\, X(t - \tau)\, d\tau. \tag{14-31}$$

[Fig. 14.6: an arbitrary input realization $X(t)$ applied to the LTI system and the corresponding output realization $Y(t)$.]

Eq. (14-31) follows by expressing $X(t)$ as
$$X(t) = \int_{-\infty}^{+\infty} X(\tau)\, \delta(t - \tau)\, d\tau \tag{14-32}$$
and applying (14-28) and (14-30) to $Y(t) = L\{X(t)\}$. Thus
$$Y(t) = L\{X(t)\} = L\left\{\int_{-\infty}^{+\infty} X(\tau)\, \delta(t - \tau)\, d\tau\right\}
= \int_{-\infty}^{+\infty} X(\tau)\, L\{\delta(t - \tau)\}\, d\tau \quad \text{(by linearity)}$$
$$= \int_{-\infty}^{+\infty} X(\tau)\, h(t - \tau)\, d\tau \quad \text{(by time-invariance)}
= \int_{-\infty}^{+\infty} h(\tau)\, X(t - \tau)\, d\tau. \tag{14-33}$$
Output Statistics: Using (14-33), the mean of the output process is given by
$$\mu_Y(t) = E\{Y(t)\} = \int_{-\infty}^{+\infty} E\{X(t - \tau)\}\, h(\tau)\, d\tau = \int_{-\infty}^{+\infty} \mu_X(t - \tau)\, h(\tau)\, d\tau = \mu_X(t) * h(t). \tag{14-34}$$
Similarly, the cross-correlation function between the input and output processes is given by
$$R_{XY}(t_1, t_2) = E\{X(t_1)\, Y^*(t_2)\}
= E\left\{X(t_1) \int_{-\infty}^{+\infty} X^*(t_2 - \alpha)\, h^*(\alpha)\, d\alpha\right\}
= \int_{-\infty}^{+\infty} E\{X(t_1)\, X^*(t_2 - \alpha)\}\, h^*(\alpha)\, d\alpha$$
$$= \int_{-\infty}^{+\infty} R_{XX}(t_1, t_2 - \alpha)\, h^*(\alpha)\, d\alpha = R_{XX}(t_1, t_2) * h^*(t_2). \tag{14-35}$$
Finally, the output autocorrelation function is given by
$$R_{YY}(t_1, t_2) = E\{Y(t_1)\, Y^*(t_2)\}
= E\left\{\int_{-\infty}^{+\infty} X(t_1 - \beta)\, h(\beta)\, d\beta \;\, Y^*(t_2)\right\}
= \int_{-\infty}^{+\infty} E\{X(t_1 - \beta)\, Y^*(t_2)\}\, h(\beta)\, d\beta$$
$$= \int_{-\infty}^{+\infty} R_{XY}(t_1 - \beta, t_2)\, h(\beta)\, d\beta = R_{XY}(t_1, t_2) * h(t_1), \tag{14-36}$$
or
$$R_{YY}(t_1, t_2) = R_{XX}(t_1, t_2) * h^*(t_2) * h(t_1). \tag{14-37}$$

[Fig. 14.7: (a) $\mu_X(t)$ passed through $h(t)$ gives $\mu_Y(t)$; (b) $R_{XX}(t_1, t_2)$ passed through $h^*(t_2)$ gives $R_{XY}(t_1, t_2)$, which passed through $h(t_1)$ gives $R_{YY}(t_1, t_2)$.]
In particular, if $X(t)$ is wide-sense stationary, then we have $\mu_X(t) = \mu_X$, so that from (14-34)
$$\mu_Y(t) = \mu_X \int_{-\infty}^{+\infty} h(\tau)\, d\tau = \mu_X\, c, \quad \text{a constant.} \tag{14-38}$$
Also $R_{XX}(t_1, t_2) = R_{XX}(t_1 - t_2)$, so that (14-35) reduces to
$$R_{XY}(t_1, t_2) = \int_{-\infty}^{+\infty} R_{XX}(t_1 - t_2 + \alpha)\, h^*(\alpha)\, d\alpha = R_{XX}(\tau) * h^*(-\tau) \triangleq R_{XY}(\tau), \qquad \tau = t_1 - t_2. \tag{14-39}$$
Thus $X(t)$ and $Y(t)$ are jointly w.s.s. Further, from (14-36), the output autocorrelation simplifies to
$$R_{YY}(t_1, t_2) = \int_{-\infty}^{+\infty} R_{XY}(t_1 - \beta - t_2)\, h(\beta)\, d\beta = R_{XY}(\tau) * h(\tau) = R_{YY}(\tau), \qquad \tau = t_1 - t_2. \tag{14-40}$$
From (14-37), we obtain
$$R_{YY}(\tau) = R_{XX}(\tau) * h^*(-\tau) * h(\tau). \tag{14-41}$$
From (14-38)-(14-40), the output process is also wide-sense stationary. This gives rise to the following representation:

[Fig. 14.8: (a) a wide-sense stationary process $X(t)$ applied to an LTI system $h(t)$ produces a wide-sense stationary output $Y(t)$; (b) a strict-sense stationary process applied to an LTI system $h(t)$ produces a strict-sense stationary output (see Text for proof); (c) a Gaussian process (also stationary) applied to a linear system produces a Gaussian process (also stationary).]
White Noise Process: $W(t)$ is said to be a white noise process if
$$R_{WW}(t_1, t_2) = q(t_1)\, \delta(t_1 - t_2), \tag{14-42}$$
i.e., $E[W(t_1)\, W^*(t_2)] = 0$ unless $t_1 = t_2$. $W(t)$ is said to be wide-sense stationary (w.s.s.) white noise if $E[W(t)] =$ constant, and
$$R_{WW}(t_1, t_2) = q\, \delta(t_1 - t_2) = q\, \delta(\tau). \tag{14-43}$$
If $W(t)$ is also a Gaussian process (white Gaussian process), then all of its samples are independent random variables (why?).

[Fig. 14.9: white noise $W(t)$ applied to an LTI system $h(t)$ produces colored noise $N(t) = h(t) * W(t)$.]

For w.s.s. white noise input $W(t)$, we have
$$E[N(t)] = \mu_W \int_{-\infty}^{+\infty} h(\tau)\, d\tau, \quad \text{a constant,} \tag{14-44}$$
and
$$R_{nn}(\tau) = q\,\delta(\tau) * h^*(-\tau) * h(\tau) = q\, h^*(-\tau) * h(\tau) = q\, \rho(\tau), \tag{14-45}$$
where
$$\rho(\tau) = h(\tau) * h^*(-\tau) = \int_{-\infty}^{+\infty} h(\alpha + \tau)\, h^*(\alpha)\, d\alpha. \tag{14-46}$$
Thus the output of a white noise process through an LTI system represents a (colored) noise process.

Note: White noise need not be Gaussian. "White" and "Gaussian" are two different concepts!
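A discrete-time analogue of (14-45) is easy to verify: filter a long white-noise record with an FIR filter and compare the sample autocorrelation of the output with $q\,\rho(n)$, where $\rho(n)$ is the deterministic correlation of the filter taps. The filter taps and noise variance below are illustrative assumptions.

```python
import numpy as np

# Discrete-time check of (14-45): for white noise W(n) with variance q,
# the output N(n) = (h * W)(n) has autocorrelation R_NN(n) = q * rho(n),
# where rho(n) = sum_k h(k) h(k + n).  (Assumed FIR filter and q.)
rng = np.random.default_rng(3)
h = np.array([1.0, 0.5, -0.25, 0.1])       # assumed FIR impulse response
q = 2.0                                     # white-noise variance
w = rng.normal(0.0, np.sqrt(q), 1_000_000)
n_out = np.convolve(w, h, mode="full")[:len(w)]   # colored noise N(n)

rho = np.correlate(h, h, mode="full")       # deterministic tap correlation rho(n)
lags = np.arange(-(len(h) - 1), len(h))

for lag, r in zip(lags, rho):
    if lag < 0:
        continue                            # R_NN is symmetric, show lags >= 0
    est = np.mean(n_out[lag:] * n_out[:len(n_out) - lag])
    print(f"lag {lag}: sample R_NN = {est:7.4f}   q*rho = {q * r:7.4f}")
```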
Upcrossings and Downcrossings of a Stationary Gaussian Process:

Consider a zero-mean stationary Gaussian process $X(t)$ with autocorrelation function $R_{XX}(\tau)$. An upcrossing over the mean value occurs whenever the realization $X(t)$ passes through zero with positive slope. Let $\rho\,\Delta t$ represent the probability of such an upcrossing in the interval $(t, t + \Delta t)$. We wish to determine $\rho$.

[Fig. 14.10: a realization $X(t)$ versus $t$ with its upcrossings and a downcrossing of the zero level marked.]

Since $X(t)$ is a stationary Gaussian process, its derivative process $X'(t)$ is also zero-mean stationary Gaussian with autocorrelation function $R_{X'X'}(\tau) = -R_{XX}''(\tau)$ (see (9-101)-(9-106), Text). Further, $X(t)$ and $X'(t)$ are jointly Gaussian stationary processes, and since (see (9-106), Text)
$$R_{XX'}(\tau) = -\frac{d R_{XX}(\tau)}{d\tau},$$
we have
$$R_{XX'}(-\tau) = -R_{XX}'(-\tau) = R_{XX}'(\tau) = -R_{XX'}(\tau) \tag{14-47}$$
(using the evenness of $R_{XX}(\tau)$), which for $\tau = 0$ gives
$$R_{XX'}(0) = 0 \;\Rightarrow\; E[X(t)\, X'(t)] = 0, \tag{14-48}$$
i.e., the jointly Gaussian zero-mean random variables
$$X_1 = X(t) \quad \text{and} \quad X_2 = X'(t) \tag{14-49}$$
are uncorrelated and hence independent, with variances
$$\sigma_1^2 = R_{XX}(0) \quad \text{and} \quad \sigma_2^2 = R_{X'X'}(0) = -R_{XX}''(0) > 0, \tag{14-50}$$
respectively. Thus
$$f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1)\, f_{X_2}(x_2) = \frac{1}{2\pi\sigma_1\sigma_2}\, e^{-\left(\frac{x_1^2}{2\sigma_1^2} + \frac{x_2^2}{2\sigma_2^2}\right)}. \tag{14-51}$$
To determine $\rho$, the probability of an upcrossing per unit time,
we argue as follows: in an interval $(t, t + \Delta t)$, the realization moves from $X(t) = X_1$ to $X(t + \Delta t) = X(t) + X'(t)\,\Delta t = X_1 + X_2\,\Delta t$, and hence the realization intersects the zero level with positive slope somewhere in that interval if
$$X_1 < 0, \quad X_2 > 0, \quad \text{and} \quad X(t + \Delta t) = X_1 + X_2\,\Delta t > 0, \tag{14-52}$$
i.e., $X_1 > -X_2\,\Delta t$.

[Fig. 14.11: a realization crossing the zero level with positive slope between $t$ and $t + \Delta t$.]

Hence the probability of an upcrossing in $(t, t + \Delta t)$ is given by
$$\rho\,\Delta t = \int_{x_2 = 0}^{\infty}\int_{x_1 = -x_2\Delta t}^{0} f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2
= \int_{0}^{\infty} f_{X_2}(x_2)\, dx_2 \int_{-x_2\Delta t}^{0} f_{X_1}(x_1)\, dx_1. \tag{14-53}$$
Differentiating both sides of (14-53) with respect to $\Delta t$, we get
$$\rho = \int_{0}^{\infty} f_{X_2}(x_2)\, x_2\, f_{X_1}(-x_2\,\Delta t)\, dx_2, \tag{14-54}$$
and letting $\Delta t \to 0$, Eq. (14-54) reduces to
$$\rho = \int_{0}^{\infty} x_2\, f_{X_1}(0)\, f_{X_2}(x_2)\, dx_2 = f_{X_1}(0)\int_{0}^{\infty} x_2\, f_{X_2}(x_2)\, dx_2
= \frac{1}{\sqrt{2\pi R_{XX}(0)}}\cdot\frac{1}{2}\sqrt{\frac{2}{\pi}}\,\sigma_2
= \frac{1}{2\pi}\sqrt{\frac{-R_{XX}''(0)}{R_{XX}(0)}} \tag{14-55}$$
[where we have made use of (5-78), Text]. There is an equal probability for downcrossings, and hence the total probability for crossing the zero line in an interval $(t, t + \Delta t)$ equals $\rho_0\,\Delta t$, where
$$\rho_0 = \frac{1}{\pi}\sqrt{-R_{XX}''(0)/R_{XX}(0)} > 0. \tag{14-56}$$
It follows that in a long interval $T$, there will be approximately $\rho_0 T$ crossings of the mean value. If $-R_{XX}''(0)$ is large, then the autocorrelation function $R_{XX}(\tau)$ decays more rapidly as $\tau$ moves away from zero, implying a large random variation around the origin (mean value) for $X(t)$, and the likelihood of zero crossings should increase with an increase in $-R_{XX}''(0)$, agreeing with (14-56).
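The crossing-rate formula (14-56) can be checked by simulation: count sign changes of a smooth stationary Gaussian process and compare with $\rho_0 T$, using $R_{XX}(0) = E\{X^2\}$ and $-R_{XX}''(0) = E\{X'^2\}$ from (14-50) estimated from the same data. The Gaussian-kernel construction of the process below is an assumption for illustration.

```python
import numpy as np

# Simulation check of (14-56): expected number of zero crossings in T seconds
# is (T/pi) * sqrt(-R''(0)/R(0)) = (T/pi) * sqrt(E{X'^2}/E{X^2}).
# X(t) is built by low-pass filtering white Gaussian noise (assumed construction).
rng = np.random.default_rng(4)
dt, T = 0.01, 2000.0
n = int(T / dt)
t_kernel = np.arange(-400, 401) * dt
kernel = np.exp(-t_kernel**2 / (2 * 0.2**2))        # smoothing kernel, width 0.2 s
w = rng.normal(size=n + len(kernel))
x = np.convolve(w, kernel, mode="valid")[:n]        # stationary Gaussian process X

xdot = np.diff(x) / dt                              # finite-difference derivative X'
crossings = np.count_nonzero(np.diff(np.sign(x)) != 0)

rate_theory = np.sqrt(xdot.var() / x.var()) / np.pi  # rho_0 from (14-56)
print(f"observed crossings : {crossings}")
print(f"predicted (rho0*T) : {rate_theory * T:.1f}")
```

The observed count and the prediction typically agree to within a few percent for a record this long.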
Discrete-Time Stochastic Processes:

A discrete-time stochastic process $X_n = X(nT)$ is a sequence of random variables. The mean, autocorrelation and autocovariance functions of a discrete-time process are given by
$$\mu_n = E\{X(nT)\}, \tag{14-57}$$
$$R(n_1, n_2) = E\{X(n_1 T)\, X^*(n_2 T)\}, \tag{14-58}$$
and
$$C(n_1, n_2) = R(n_1, n_2) - \mu_{n_1}\, \mu_{n_2}^{*}, \tag{14-59}$$
respectively. As before, the strict-sense and wide-sense stationarity definitions apply here also. For example, $X(nT)$ is wide-sense stationary if
$$E\{X(nT)\} = \mu, \quad \text{a constant,} \tag{14-60}$$
and
$$E[\,X\{(k+n)T\}\, X^*\{kT\}\,] \triangleq R(n) = r_n = r_{-n}^{*}, \tag{14-61}$$
i.e., $R(n_1, n_2) = R(n_1 - n_2) = R^*(n_2 - n_1)$. The positive-definite property of the autocorrelation sequence in (14-8) can be expressed in terms of certain Hermitian-Toeplitz matrices as follows:

Theorem: A sequence $\{r_n\}_{-\infty}^{+\infty}$ forms an autocorrelation sequence of a wide-sense stationary stochastic process if and only if every Hermitian-Toeplitz matrix $T_n$ given by
$$T_n = \begin{pmatrix}
r_0 & r_1 & r_2 & \cdots & r_n \\
r_1^{*} & r_0 & r_1 & \cdots & r_{n-1} \\
\vdots & \vdots & \vdots & & \vdots \\
r_n^{*} & r_{n-1}^{*} & r_{n-2}^{*} & \cdots & r_0
\end{pmatrix} \tag{14-62}$$
is non-negative (positive) definite for $n = 0, 1, 2, \ldots, \infty$.

Proof: Let $\underline{a} = [a_0, a_1, \ldots, a_n]^T$ represent an arbitrary constant vector. Then from (14-62),
$$\underline{a}^{*}\, T_n\, \underline{a} = \sum_{i=0}^{n}\sum_{k=0}^{n} a_i^{*}\, a_k\, r_{k-i}, \tag{14-63}$$
since the Toeplitz character gives $(T_n)_{i,k} = r_{k-i}$. Using (14-61), Eq. (14-63) reduces to
$$\underline{a}^{*}\, T_n\, \underline{a} = \sum_{i=0}^{n}\sum_{k=0}^{n} a_i^{*}\, a_k\, E\{X(kT)\, X^*(iT)\} = E\left\{\Big|\sum_{k=0}^{n} a_k\, X(kT)\Big|^2\right\} \ge 0. \tag{14-64}$$
From (14-64), if $X(nT)$ is a wide-sense stationary stochastic process then $T_n$ is a non-negative definite matrix for every $n = 0, 1, 2, \ldots, \infty$. Similarly, the converse also follows from (14-64). (See section 9.4, Text.)

If $X(nT)$ represents a wide-sense stationary input to a discrete-time system $\{h(nT)\}$, and $Y(nT)$ the system output, then as before the cross-correlation function satisfies
$$R_{XY}(n) = R_{XX}(n) * h^*(-n) \tag{14-65}$$
and the output autocorrelation function is given by
$$R_{YY}(n) = R_{XY}(n) * h(n), \tag{14-66}$$
or
$$R_{YY}(n) = R_{XX}(n) * h^*(-n) * h(n). \tag{14-67}$$
Thus wide-sense stationarity from input to output is preserved for discrete-time systems also.
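The Hermitian-Toeplitz characterization (14-62)-(14-64) can be observed numerically: matrices built from a valid autocorrelation sequence have no negative eigenvalues, while an arbitrary sequence generally fails the test. The two test sequences below are assumed examples (the first is the AR(1) form that appears later in (14-77)).

```python
import numpy as np

# Check the condition (14-62): a valid autocorrelation sequence gives
# non-negative definite Toeplitz matrices T_n; an arbitrary one need not.
a = 0.7
r_valid = a ** np.arange(9)                 # r_n = a^|n|, n = 0..8 (valid, cf. (14-77))
r_bogus = np.array([1.0, 0.9, 0.5])         # fails: 0.5 < 2*0.9**2 - 1

for name, r in (("valid", r_valid), ("bogus", r_bogus)):
    idx = np.arange(len(r))
    T = r[np.abs(idx[:, None] - idx[None, :])]   # symmetric Toeplitz matrix T_n
    lam_min = np.linalg.eigvalsh(T).min()
    print(f"{name}: smallest eigenvalue of T_n = {lam_min:+.4f}")
```

The valid sequence yields a strictly positive smallest eigenvalue, while the bogus one produces a negative eigenvalue, so it cannot be the autocorrelation of any wide-sense stationary process.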
Auto Regressive Moving Average (ARMA) Processes

Consider an input-output representation
$$X(n) = -\sum_{k=1}^{p} a_k\, X(n-k) + \sum_{k=0}^{q} b_k\, W(n-k), \tag{14-68}$$
where $X(n)$ may be considered as the output of a system $\{h(n)\}$ driven by the input $W(n)$.

[Fig. 14.12: the input $W(n)$ driving a system $h(n)$ whose output is $X(n)$.]

The $Z$-transform of (14-68) gives
$$X(z)\sum_{k=0}^{p} a_k\, z^{-k} = W(z)\sum_{k=0}^{q} b_k\, z^{-k}, \qquad a_0 \triangleq 1, \tag{14-69}$$
or
$$H(z) \triangleq \frac{X(z)}{W(z)} = \sum_{k=0}^{\infty} h(k)\, z^{-k} = \frac{B(z)}{A(z)} = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_q z^{-q}}{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_p z^{-p}} \tag{14-70}$$
represents the transfer function of the associated system response $\{h(n)\}$ in Fig. 14.12, so that
$$X(n) = \sum_{k=0}^{\infty} h(k)\, W(n-k). \tag{14-71}$$
Notice that the transfer function $H(z)$ in (14-70) is rational with $p$ poles and $q$ zeros that determine the model order of the underlying system. From (14-68), the output undergoes regression over $p$ of its previous values, and at the same time a moving average of the input over $(q+1)$ values, based on $W(n), W(n-1), \ldots, W(n-q)$, is added to it, thus generating an Auto Regressive Moving Average (ARMA $(p,q)$) process $X(n)$. Generally the input $\{W(n)\}$ represents a sequence of uncorrelated random variables of zero mean and constant variance $\sigma_W^2$, so that
$$R_{WW}(n) = \sigma_W^2\, \delta(n). \tag{14-72}$$
If, in addition, $\{W(n)\}$ is normally distributed, then the output $\{X(n)\}$ also represents a strict-sense stationary normal process.

If $q = 0$, then (14-68) represents an AR($p$) process (all-pole process), and if $p = 0$, then (14-68) represents an MA($q$)
process (all-zero process). Next, we shall discuss AR(1) and AR(2) processes through explicit calculations.

AR(1) process: An AR(1) process has the form (see (14-68))
$$X(n) = a\, X(n-1) + W(n), \tag{14-73}$$
and from (14-70) the corresponding system transfer function is
$$H(z) = \frac{1}{1 - a z^{-1}} = \sum_{n=0}^{\infty} a^{n} z^{-n}, \tag{14-74}$$
provided $|a| < 1$. Thus
$$h(n) = a^{n}, \qquad n \ge 0, \quad |a| < 1, \tag{14-75}$$
represents the impulse response of an AR(1) stable system. Using (14-67) together with (14-72) and (14-75), we get the output autocorrelation sequence of an AR(1) process to be
$$R_{XX}(n) = \sigma_W^2\,\delta(n) * h^*(-n) * h(n) = \sigma_W^2 \sum_{k=0}^{\infty} a^{|n|+k}\, a^{k} = \frac{\sigma_W^2\, a^{|n|}}{1 - a^2}, \tag{14-76}$$
where we have made use of the discrete version of (14-46). The normalized (in terms of $R_{XX}(0)$) output autocorrelation sequence is given by
$$\rho_X(n) = \frac{R_{XX}(n)}{R_{XX}(0)} = a^{|n|}, \qquad |n| \ge 0. \tag{14-77}$$
It is instructive to compare the AR(1) model above with one that has a random component superimposed on it, which may be an error term associated with observing a first-order AR process $X(n)$. Thus
$$Y(n) = X(n) + V(n), \tag{14-78}$$
where $X(n) \sim$ AR(1) as in (14-73), and $V(n)$ is an uncorrelated random sequence with zero mean and variance $\sigma_V^2$ that is also uncorrelated with $\{W(n)\}$. From (14-73) and (14-78) we obtain the output autocorrelation of the observed process $Y(n)$ to be
$$R_{YY}(n) = R_{XX}(n) + R_{VV}(n) = R_{XX}(n) + \sigma_V^2\,\delta(n) = \frac{\sigma_W^2\, a^{|n|}}{1 - a^2} + \sigma_V^2\,\delta(n), \tag{14-79}$$
so that its normalized version is given by
$$\rho_Y(n) = \frac{R_{YY}(n)}{R_{YY}(0)} =
\begin{cases}
1, & n = 0, \\
c\, a^{|n|}, & n = \pm 1, \pm 2, \ldots,
\end{cases} \tag{14-80}$$
where
$$c = \frac{\sigma_W^2}{\sigma_W^2 + \sigma_V^2 (1 - a^2)} < 1. \tag{14-81}$$
Eqs. (14-77) and (14-80) demonstrate the effect of superimposing an error sequence on an AR(1) model. For non-zero lags, the autocorrelation of the observed sequence $\{Y(n)\}$ is reduced by a constant factor compared to that of the original process $\{X(n)\}$.

[Fig. 14.13: the normalized autocorrelations plotted against the lag $k$; $\rho_X(0) = \rho_Y(0) = 1$, while $\rho_X(k) > \rho_Y(k)$ for $k \ne 0$.]

From (14-78), the superimposed error sequence $V(n)$ only affects the corresponding term in $Y(n)$ (term by term). However, a particular term in the "input sequence" $W(n)$ affects $X(n)$ and $Y(n)$ as well as all subsequent observations.
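A short simulation makes (14-77) and (14-80)-(14-81) concrete: the AR(1) autocorrelation is $a^{|n|}$, and adding white observation noise scales every non-zero lag by the factor $c$ of (14-81). The parameter values below are assumptions chosen for illustration.

```python
import numpy as np

# Simulation check of (14-77) and (14-80)-(14-81).
rng = np.random.default_rng(5)
a, sig_w, sig_v, N = 0.8, 1.0, 0.7, 500_000

w = rng.normal(0.0, sig_w, N)
x = np.zeros(N)
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]          # AR(1) process (14-73)
y = x + rng.normal(0.0, sig_v, N)       # noisy observation (14-78)

def rho(z, k):
    """Normalized sample autocorrelation at lag k."""
    return np.mean(z[k:] * z[:len(z) - k]) / np.mean(z * z)

c = sig_w**2 / (sig_w**2 + sig_v**2 * (1.0 - a**2))   # (14-81)
for k in (1, 2, 4):
    print(f"lag {k}:  rho_X = {rho(x, k):.3f} (theory {a**k:.3f})   "
          f"rho_Y = {rho(y, k):.3f} (theory {c * a**k:.3f})")
```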
AR(2) Process: An AR(2) process has the form
$$X(n) = a_1\, X(n-1) + a_2\, X(n-2) + W(n), \tag{14-82}$$
and from (14-70) the corresponding transfer function is given by
$$H(z) = \sum_{n=0}^{\infty} h(n)\, z^{-n} = \frac{1}{1 - a_1 z^{-1} - a_2 z^{-2}} = \frac{b_1}{1 - \lambda_1 z^{-1}} + \frac{b_2}{1 - \lambda_2 z^{-1}}, \tag{14-83}$$
so that
$$h(0) = 1, \quad h(1) = a_1, \quad h(n) = a_1\, h(n-1) + a_2\, h(n-2),\ n \ge 2, \tag{14-84}$$
and in terms of the poles $\lambda_1$ and $\lambda_2$ of the transfer function, from (14-83) we have
$$h(n) = b_1\lambda_1^{n} + b_2\lambda_2^{n}, \qquad n \ge 0, \tag{14-85}$$
which represents the impulse response of the system. From (14-84)-(14-85), we also have
$$b_1 + b_2 = 1, \qquad b_1\lambda_1 + b_2\lambda_2 = a_1.$$
From (14-83),
$$\lambda_1 + \lambda_2 = a_1, \qquad \lambda_1\lambda_2 = -a_2, \tag{14-86}$$
and $H(z)$ stable implies $|\lambda_1| < 1$, $|\lambda_2| < 1$. Further, using (14-82) the output autocorrelations satisfy the recursion
$$R_{XX}(n) = E\{X(n+m)\, X^*(m)\} = E\{[a_1 X(n+m-1) + a_2 X(n+m-2)]\, X^*(m)\} + \underbrace{E\{W(n+m)\, X^*(m)\}}_{=\,0}
= a_1\, R_{XX}(n-1) + a_2\, R_{XX}(n-2), \tag{14-87}$$
and hence their normalized version is given by
$$\rho_X(n) \triangleq \frac{R_{XX}(n)}{R_{XX}(0)} = a_1\,\rho_X(n-1) + a_2\,\rho_X(n-2). \tag{14-88}$$
By direct calculation using (14-67), the output autocorrelations are given by
$$R_{XX}(n) = R_{WW}(n) * h^*(-n) * h(n) = \sigma_W^2 \sum_{k=0}^{\infty} h^*(k)\, h(n+k)$$
$$= \sigma_W^2 \left[ \frac{|b_1|^2\,\lambda_1^{n}}{1 - |\lambda_1|^2} + \frac{b_1 b_2^{*}\,\lambda_1^{n}}{1 - \lambda_1\lambda_2^{*}} + \frac{b_2 b_1^{*}\,\lambda_2^{n}}{1 - \lambda_2\lambda_1^{*}} + \frac{|b_2|^2\,\lambda_2^{n}}{1 - |\lambda_2|^2} \right], \tag{14-89}$$
where we have made use of (14-85). From (14-89), the normalized output autocorrelations may be expressed as
$$\rho_X(n) = \frac{R_{XX}(n)}{R_{XX}(0)} = c_1\,\lambda_1^{n} + c_2\,\lambda_2^{n}, \tag{14-90}$$
where $c_1$ and $c_2$ are appropriate constants.

Damped Exponentials: When the second-order system in (14-83)-(14-85) is real and corresponds to a damped exponential response, the poles are complex conjugates, which gives $a_1^2 + 4a_2 < 0$ in (14-83). Thus
$$\lambda_1 = \lambda_2^{*} = r\, e^{j\theta}, \qquad r < 1. \tag{14-91}$$
In that case $c_1 = c_2^{*} = c\, e^{j\varphi}$ in (14-90), so that the normalized correlations there reduce to
$$\rho_X(n) = 2\,\mathrm{Re}\{c_1\,\lambda_1^{n}\} = 2c\, r^{n}\cos(n\theta + \varphi). \tag{14-92}$$
But from (14-86),
$$\lambda_1 + \lambda_2 = 2r\cos\theta = a_1, \qquad \lambda_1\lambda_2 = r^2 = -a_2 < 1, \tag{14-93}$$
and hence $4r^2\sin^2\theta = -(a_1^2 + 4a_2) > 0$, which gives
$$\tan\theta = \frac{\sqrt{-(a_1^2 + 4a_2)}}{a_1}. \tag{14-94}$$
Also from (14-88), with $n = 1$ and $\rho_X(0) = 1$,
$$\rho_X(1) = a_1\,\rho_X(0) + a_2\,\rho_X(-1) = a_1 + a_2\,\rho_X(1),$$
so that
$$\rho_X(1) = \frac{a_1}{1 - a_2} = 2c\, r\cos(\theta + \varphi), \tag{14-95}$$
where the latter form is obtained from (14-92) with $n = 1$. But $\rho_X(0) = 1$ in (14-92) gives
$$2c\cos\varphi = 1, \quad \text{or} \quad c = \frac{1}{2\cos\varphi}. \tag{14-96}$$
Substituting (14-96) into (14-92) and (14-95), we obtain the normalized output autocorrelations to be
$$\rho_X(n) = \frac{(-a_2)^{n/2}}{\cos\varphi}\,\cos(n\theta + \varphi), \qquad -a_2 < 1, \tag{14-97}$$
where $\varphi$ satisfies
$$\frac{\cos(\theta + \varphi)}{\cos\varphi} = \frac{a_1}{(1 - a_2)\sqrt{-a_2}}. \tag{14-98}$$
Thus the normalized autocorrelations of a damped second-order system with real coefficients, subject to random uncorrelated impulses, satisfy (14-97).
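The closed form (14-97) can be cross-checked against the recursion (14-88), which is how the autocorrelations would normally be computed. A short sketch, with an assumed coefficient pair satisfying $a_1^2 + 4a_2 < 0$:

```python
import numpy as np

# Cross-check of (14-97): for a real AR(2) system with complex-conjugate poles,
# the normalized autocorrelation from the recursion (14-88) should match the
# damped-cosine closed form.  Coefficients are assumed values.
a1, a2 = 1.2, -0.72                      # a1^2 + 4*a2 < 0, poles at r*exp(+/-j*theta)
r = np.sqrt(-a2)                         # r^2 = -a2, from (14-93)
theta = np.arctan2(np.sqrt(-(a1**2 + 4 * a2)), a1)                      # (14-94)
phi = np.arctan((np.cos(theta) - a1 / ((1 - a2) * r)) / np.sin(theta))  # solves (14-98)

# rho(n) from the recursion (14-88), with rho(0) = 1 and rho(1) = a1/(1 - a2).
rho = [1.0, a1 / (1.0 - a2)]
for n in range(2, 10):
    rho.append(a1 * rho[n - 1] + a2 * rho[n - 2])

for n in range(10):
    closed = r**n * np.cos(n * theta + phi) / np.cos(phi)   # (14-97)
    print(f"n = {n}: recursion = {rho[n]:+.4f}   closed form = {closed:+.4f}")
```

Both columns coincide, since the damped cosine is a solution of the same linear recursion with the same two initial values.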
More on ARMA Processes

From (14-70) an ARMA($p,q$) system has only $p + q + 1$ independent coefficients ($a_k$, $k = 1 \to p$; $b_i$, $i = 0 \to q$), and hence its impulse response sequence $\{h_k\}$ also must exhibit a similar dependence among its entries. In fact, according to P. Dienes (The Taylor Series, 1931), an old result due to Kronecker¹ (1881) states that the necessary and sufficient condition for $H(z) = \sum_{k=0}^{\infty} h_k z^{-k}$ to represent a rational system (ARMA) is that
$$\det H_n = 0, \qquad n \ge N \ \text{(for all sufficiently large } n\text{)}, \tag{14-99}$$
where
$$H_n \triangleq \begin{pmatrix}
h_0 & h_1 & h_2 & \cdots & h_n \\
h_1 & h_2 & h_3 & \cdots & h_{n+1} \\
\vdots & \vdots & \vdots & & \vdots \\
h_n & h_{n+1} & h_{n+2} & \cdots & h_{2n}
\end{pmatrix}, \tag{14-100}$$
i.e., in the case of rational systems, for all sufficiently large $n$ the Hankel matrices $H_n$ all have the same rank.

¹Among other things, "God created the integers and the rest is the work of man." (Leopold Kronecker)

The necessary part easily follows from (14-70) by cross-multiplying and equating coefficients of like powers of $z^{-k}$, $k = 0, 1, 2, \ldots$.
This gives
$$b_0 = h_0,$$
$$b_1 = h_1 + a_1 h_0,$$
$$\vdots$$
$$b_q = h_q + a_1 h_{q-1} + \cdots + a_q h_0, \tag{14-101}$$
and
$$h_{q+i} + a_1 h_{q+i-1} + a_2 h_{q+i-2} + \cdots + a_p h_{q+i-p} = 0, \qquad i \ge 1. \tag{14-102}$$
For systems with $q \le p - 1$, letting $i = p - q,\ p - q + 1,\ \ldots,\ 2p - q$ in (14-102) we get
$$h_p + a_1 h_{p-1} + \cdots + a_p h_0 = 0,$$
$$h_{p+1} + a_1 h_p + \cdots + a_p h_1 = 0,$$
$$\vdots$$
$$h_{2p} + a_1 h_{2p-1} + \cdots + a_p h_p = 0, \tag{14-103}$$
which gives $\det H_p = 0$. Similarly, letting $i = p - q + 1, \ldots$ gives
$$h_{p+1} + a_1 h_{p} + \cdots + a_p h_1 = 0,$$
$$h_{p+2} + a_1 h_{p+1} + \cdots + a_p h_2 = 0,$$
$$\vdots$$
$$h_{2p+2} + a_1 h_{2p+1} + \cdots + a_p h_{p+2} = 0, \tag{14-104}$$
and that gives $\det H_{p+1} = 0$, etc. (Notice that $a_{p+k} = 0$, $k = 1, 2, \ldots$.) (For the sufficiency proof, see Dienes.) It is possible to obtain similar determinantal conditions for ARMA systems in terms of Hankel matrices generated from its output autocorrelation sequence.

Referring back to the ARMA($p,q$) model in (14-68), the input white noise process $w(n)$ there is uncorrelated with its own past sample values as well as the past values of the system output. This gives
$$E\{w(n)\, w^*(n-k)\} = 0, \qquad k \ge 1, \tag{14-105}$$
and
$$E\{w(n)\, x^*(n-k)\} = 0, \qquad k \ge 1. \tag{14-106}$$
Together with (14-68), we obtain
$$r_i = E\{x(n)\, x^*(n-i)\}
= -\sum_{k=1}^{p} a_k\, E\{x(n-k)\, x^*(n-i)\} + \sum_{k=0}^{q} b_k\, E\{w(n-k)\, x^*(n-i)\}
= -\sum_{k=1}^{p} a_k\, r_{i-k} + \sum_{k=0}^{q} b_k\, E\{w(n-k)\, x^*(n-i)\}, \tag{14-107}$$
and hence in general
$$\sum_{k=1}^{p} a_k\, r_{i-k} + r_i \ne 0, \qquad i \le q, \tag{14-108}$$
and
$$\sum_{k=1}^{p} a_k\, r_{i-k} + r_i = 0, \qquad i \ge q+1. \tag{14-109}$$
Notice that (14-109) is the same as (14-102) with $\{h_k\}$ replaced
by $\{r_k\}$, and hence the Kronecker conditions for rational systems can be expressed in terms of their output autocorrelations as well. Thus, if $X(n) \sim$ ARMA($p,q$) represents a wide-sense stationary stochastic process, then its output autocorrelation sequence $\{r_k\}$ satisfies
$$\mathrm{rank}\, D_{p-1} = \mathrm{rank}\, D_{p+k} = p, \qquad k \ge 0, \tag{14-110}$$
where
$$D_k \triangleq \begin{pmatrix}
r_0 & r_1 & r_2 & \cdots & r_k \\
r_1 & r_2 & r_3 & \cdots & r_{k+1} \\
\vdots & \vdots & \vdots & & \vdots \\
r_k & r_{k+1} & r_{k+2} & \cdots & r_{2k}
\end{pmatrix} \tag{14-111}$$
represents the $(k+1)\times(k+1)$ Hankel matrix generated from $r_0, r_1, r_2, \ldots, r_{2k}$. It follows that for ARMA($p,q$) systems, we have
$$\det D_n = 0, \qquad \text{for all sufficiently large } n. \tag{14-112}$$
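The rank behavior in (14-110)-(14-112) is easy to observe numerically. The sketch below computes the (essentially exact) impulse response and output autocorrelations of an assumed ARMA(2,1) model, builds the Hankel matrices $D_k$, and prints their ranks, which stop growing at $p = 2$, so that $\det D_n = 0$ beyond that point. The model coefficients and the rank tolerance are assumptions for illustration.

```python
import numpy as np

# Numerical illustration of (14-110)-(14-112) for an assumed ARMA(2,1) model
#   x(n) = -a1*x(n-1) - a2*x(n-2) + b0*w(n) + b1*w(n-1),  sigma_w^2 = 1,
# i.e. A(z) = 1 + a1 z^-1 + a2 z^-2, B(z) = b0 + b1 z^-1 as in (14-70).
a1, a2 = -1.2, 0.72          # stable: poles 0.6 +/- j0.6, magnitude < 1
b0, b1 = 1.0, 0.4

# Impulse response h_k from (14-101)-(14-102).
K = 2000
h = np.zeros(K)
h[0] = b0
h[1] = b1 - a1 * h[0]
for n in range(2, K):
    h[n] = -a1 * h[n - 1] - a2 * h[n - 2]

# Output autocorrelations r_n = sum_k h(k) h(k+n) (sigma_w^2 = 1), cf. (14-89).
r = np.array([np.dot(h[: K - n], h[n:]) for n in range(12)])

for k in range(5):
    D = np.array([[r[i + j] for j in range(k + 1)] for i in range(k + 1)])  # Hankel D_k
    # Tolerance absorbs floating-point round-off in the truncated sums.
    print(f"k = {k}: rank D_k = {np.linalg.matrix_rank(D, tol=1e-8)}")
```

The printed ranks are 1, 2, 2, 2, 2: once $k \ge p - 1$ the rank saturates at $p$, exactly as (14-110) states.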