prob theory & stochastic4


Example: Oscillator with Random Phase

Consider the output of a sinusoidal oscillator that has a random phase and is of the form:

X(t) = \cos(\Omega_c t + \Theta),

where \Theta \sim U([0, 2\pi]). Writing out the explicit dependence on the underlying sample space S, the oscillator output can be written as

x(t; \Theta) = \cos(\Omega_c t + \Theta).    (31)

This random signal falls in the continuous-time, continuous-parameter, and continuous-amplitude category and is useful in modeling propagation phenomena such as multi-path fading.

The first-order distribution of this process can be found by looking at the distribution of the R.V.

X_t(\Theta) = \cos(\Theta + \theta_o),

where \Omega_c t = \theta_o is a nonrandom quantity. This can easily be shown via the derivative method shown in class to be of the form:

f_X(x) = \frac{1}{\pi \sqrt{1 - x^2}}, \qquad |x| < 1.    (32)
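For completeness, here is a brief sketch of the derivative method referenced above: for |x| < 1 the equation \cos(\theta + \theta_o) = x has two solution branches \theta_1, \theta_2 in any interval of length 2\pi, the phase density is f_\Theta(\theta) = 1/(2\pi), and the derivative magnitude at each solution is |\frac{d}{d\theta}\cos(\theta + \theta_o)| = |\sin(\theta_k + \theta_o)| = \sqrt{1 - x^2}, so

f_X(x) = \sum_{k=1}^{2} \frac{f_\Theta(\theta_k)}{|\sin(\theta_k + \theta_o)|} = 2 \cdot \frac{1/(2\pi)}{\sqrt{1 - x^2}} = \frac{1}{\pi \sqrt{1 - x^2}}, \qquad |x| < 1.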

Note that this distribution depends only on the set of values that the process takes and is independent of the particular sampling instant t and the constant phase offset \theta_o.

If the second-order distribution is needed, then we use the conditional distribution of x(t_1) given x(t_2), as in:

f_{x(t_1), x(t_2)}(x_1, x_2) = f_{x(t_2)}(x_2)\, f_{x(t_1) \mid x(t_2)}(x_1 \mid x_2).    (33)

If the value of x(t_2) is to be equal to x_2, then we require \cos(\Theta + \Omega_c t_2) = x_2. This can happen only when:

\Theta = \cos^{-1}(x_2) - \Omega_c t_2 \quad \text{or} \quad \Theta = 2\pi - \cos^{-1}(x_2) - \Omega_c t_2,    (34)

where 0 \le \cos^{-1}(x_2) \le \pi. All other possible solutions lie outside the desired interval [0, 2\pi].

Consequently the random process at t = t_1 can only take on the values:

x(t_1) = \cos\left(\Omega_c t_1 + \cos^{-1}(x_2) - \Omega_c t_2\right) \quad \text{or} \quad x(t_1) = \cos\left(\Omega_c t_1 - \cos^{-1}(x_2) - \Omega_c t_2\right).    (35)

Thus the conditional distribution of x(t_1) given that x(t_2) = x_2 is of the form:

f_{x(t_1) \mid x(t_2)}(x_1 \mid x_2) = \frac{1}{2}\, \delta\!\left(x_1 - \cos\left[\Omega_c t_1 + \cos^{-1}(x_2) - \Omega_c t_2\right]\right) + \frac{1}{2}\, \delta\!\left(x_1 - \cos\left[\Omega_c t_1 - \cos^{-1}(x_2) - \Omega_c t_2\right]\right).    (36)

Combining Eq. (32) and Eq. (36) we have:

f_{x(t_1), x(t_2)}(x_1, x_2) = \frac{1}{2\pi \sqrt{1 - x_2^2}}\, \delta\!\left(x_1 - \cos\left[\Omega_c t_1 + \cos^{-1}(x_2) - \Omega_c t_2\right]\right) + \frac{1}{2\pi \sqrt{1 - x_2^2}}\, \delta\!\left(x_1 - \cos\left[\Omega_c t_1 - \cos^{-1}(x_2) - \Omega_c t_2\right]\right).    (37)
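As an illustrative numerical check of the two-branch structure in Eqs. (35)-(37) (a minimal sketch, assuming NumPy is available; omega_c, t1, t2, x2, and the conditioning tolerance below are arbitrary choices, not values from the notes), one can condition on x(t_2) being close to a chosen x_2 and verify that x(t_1) then lands near one of the two predicted values:

import numpy as np

rng = np.random.default_rng(0)
omega_c, t1, t2, x2 = 1.7, 0.9, 0.4, 0.3              # arbitrary illustrative values

theta = rng.uniform(0.0, 2 * np.pi, size=2_000_000)   # Theta ~ U([0, 2*pi])
near = np.abs(np.cos(omega_c * t2 + theta) - x2) < 1e-3   # keep samples with x(t2) close to x2
x1_cond = np.cos(omega_c * t1 + theta[near])              # corresponding x(t1) samples

# Two values predicted by Eq. (35)
v_plus = np.cos(omega_c * t1 + np.arccos(x2) - omega_c * t2)
v_minus = np.cos(omega_c * t1 - np.arccos(x2) - omega_c * t2)

# Every conditional sample should sit very close to one of the two predicted values
dev = np.minimum(np.abs(x1_cond - v_plus), np.abs(x1_cond - v_minus))
print(v_plus, v_minus, dev.max())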


Note here that the second-order PDF depends only on the difference variable \tau = t_1 - t_2. Let us look at the first-order and second-order moments of the random process X(t). The mean of the process is obtained by taking the expectation with respect to the random parameter \Theta on both sides of Eq. (31), keeping in mind that the expectation integral is a linear operation:

\mu_X = E_\Theta(x_t(\Theta)) = E_\Theta[\cos(\Omega_c t + \Theta)]
      = E_\Theta[\cos(\Omega_c t)\cos(\Theta) - \sin(\Omega_c t)\sin(\Theta)]
      = E_\Theta[\cos(\Theta)]\cos(\Omega_c t) - E_\Theta[\sin(\Theta)]\sin(\Omega_c t).    (38)

Since the random parameter \Theta is uniformly distributed, the above expression reduces to:

\mu_X = \cos(\Omega_c t)\left(\frac{1}{2\pi}\right)\int_0^{2\pi} \cos(\theta)\, d\theta - \sin(\Omega_c t)\left(\frac{1}{2\pi}\right)\int_0^{2\pi} \sin(\theta)\, d\theta = 0.    (39)

The variance of the random process X(t) is obtained via

\sigma_X^2 = E_\Theta\left[(x_t(\Theta) - \mu_X)^2\right] = E_\Theta\left([x_t(\Theta)]^2\right) - \mu_X^2.    (40)

Substituting the mean of the process in the above expression we have:

\sigma_X^2 = \left(\frac{1}{2\pi}\right)\int_0^{2\pi} \cos^2(\Omega_c t + \theta)\, d\theta = \left(\frac{1}{2\pi}\right)\int_0^{2\pi} \frac{1 + \cos(2\Omega_c t + 2\theta)}{2}\, d\theta = \frac{1}{2}.    (41)

This means that the average power of the random sinusoidal signal X(t) is

P_X^{\mathrm{ave}} = \sigma_X^2 = \frac{1}{2}.

Note that this is the same as the average power of a sinusoid whose phase is not random.
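As a quick numerical sanity check of Eqs. (39) and (41) (a minimal sketch, assuming NumPy; the carrier frequency and sampling instant below are arbitrary choices), the ensemble mean and variance can be estimated by drawing many phases:

import numpy as np

rng = np.random.default_rng(0)
omega_c = 2 * np.pi * 5.0        # arbitrary carrier frequency
t = 0.3                          # arbitrary fixed sampling instant

theta = rng.uniform(0.0, 2 * np.pi, size=1_000_000)   # Theta ~ U([0, 2*pi])
x_t = np.cos(omega_c * t + theta)                      # ensemble of X(t) values

print(x_t.mean())   # close to 0, as in Eq. (39)
print(x_t.var())    # close to 0.5, as in Eq. (41)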

Let us look at the statistics from the second-order distribution. The correlation between the R.V.s x(t_1) and x(t_2), denoted R_{XX}(t_1, t_2), is obtained via:

R_{XX}(t_1, t_2) = E_\Theta[x(t_1)\, x(t_2)] = \left(\frac{1}{2\pi}\right)\int_0^{2\pi} \cos[\Omega_c t_1 + \theta]\, \cos[\Omega_c t_2 + \theta]\, d\theta
                 = \left(\frac{1}{4\pi}\right)\int_0^{2\pi} \cos[\Omega_c (t_1 + t_2) + 2\theta]\, d\theta + \left(\frac{1}{4\pi}\right)\int_0^{2\pi} \cos[\Omega_c (t_1 - t_2)]\, d\theta
                 = \left(\frac{1}{2}\right)\cos[\Omega_c (t_1 - t_2)].    (42)

The covariance of the R.V.s X(t_1) and X(t_2), denoted C_{XX}(t_1, t_2), is given by:

C_{XX}(t_1, t_2) = R_{XX}(t_1, t_2) - \mu_X(t_1)\, \mu_X(t_2) = \left(\frac{1}{2}\right)\cos[\Omega_c (t_1 - t_2)].    (43)

The correlation coefficient of the R.V.s X(t_1) and X(t_2), denoted \rho_{XX}(t_1, t_2), is:

\rho_{XX}(t_1, t_2) = \cos[\Omega_c (t_1 - t_2)].    (44)
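The closed forms in Eqs. (42)-(44) can be checked the same way over the ensemble (again a sketch assuming NumPy, with arbitrary omega_c, t1, and t2):

import numpy as np

rng = np.random.default_rng(0)
omega_c, t1, t2 = 2 * np.pi * 5.0, 0.30, 0.17   # arbitrary illustrative values

theta = rng.uniform(0.0, 2 * np.pi, size=1_000_000)
x1 = np.cos(omega_c * t1 + theta)
x2 = np.cos(omega_c * t2 + theta)

print(np.mean(x1 * x2))                     # Monte Carlo estimate of R_XX(t1, t2)
print(0.5 * np.cos(omega_c * (t1 - t2)))    # closed form from Eq. (42)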

Looking at the mean and the variance of the random process X(t) we can see that they are shift-invariant, and consequently the process is first-order stationary. The ACF and other second-order statistics of the process depend only on the variable \tau = t_1 - t_2. The random process X(t) is therefore also a WSS process. The ACF can then be expressed in terms of the variable \tau = t_1 - t_2 as:

R_{XX}(\tau) = \left(\frac{1}{2}\right)\cos(\Omega_c \tau).    (45)

Let us now look at time averages of a single sample function or realization of the random process X(t). The sample mean of the random process, irrespective of the sample realization that we choose, is:

\langle \mu_X \rangle_T = \frac{1}{T} \int_{-T/2}^{T/2} \cos[\Omega_c t + \Theta]\, dt.    (46)

As T \to \infty we have:

\lim_{T \to \infty} \langle \mu_X \rangle_T = 0.    (47)

The sample mean of the process is therefore independent of the particular ensemble waveform used to calculate the time average, i.e., independent of the value of \Theta for the realization. Consequently we have:

\lim_{T \to \infty} E\{\langle \mu_X \rangle_T\} = \mu_X(t) = 0
\lim_{T \to \infty} \mathrm{Var}\{\langle \mu_X \rangle_T\} = 0.    (48)
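A quick numerical illustration of Eqs. (47)-(48) (a sketch assuming NumPy; the fixed phase, carrier, and averaging window below are arbitrary choices): the time average of a single realization tends to zero regardless of which phase was drawn.

import numpy as np

omega_c = 2 * np.pi * 5.0      # arbitrary carrier frequency
theta_fixed = 1.234            # the phase of one particular realization (arbitrary)
T = 200.0                      # long averaging window

t = np.linspace(-T / 2, T / 2, 2_000_001)   # dense grid over [-T/2, T/2]
x = np.cos(omega_c * t + theta_fixed)
print(x.mean())                             # approximates the time-averaged mean; close to 0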

The random process X(t) is therefore ergodic in the mean (first-order ergodic). Let us now look at the sample ACF of the random process X(t). The sample ACF is again independent of the particular realization of the process, as is evident from:

\lim_{T \to \infty} \langle R_{XX}(\tau) \rangle_T = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} \cos[\Omega_c t + \Theta]\, \cos[\Omega_c (t - \tau) + \Theta]\, dt
\lim_{T \to \infty} \langle R_{XX}(\tau) \rangle_T = \lim_{T \to \infty} \frac{1}{2T} \int_{-T/2}^{T/2} \cos[2\Omega_c t - \Omega_c \tau + 2\Theta]\, dt + \lim_{T \to \infty} \frac{1}{2T} \int_{-T/2}^{T/2} \cos(\Omega_c \tau)\, dt
\lim_{T \to \infty} \langle R_{XX}(\tau) \rangle_T = \left(\frac{1}{2}\right)\cos(\Omega_c \tau) = R_{XX}(\tau).    (49)
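Similarly, the sample ACF of one realization can be compared against Eq. (45) (a sketch under the same arbitrary assumptions as above; the lag tau is illustrative):

import numpy as np

omega_c = 2 * np.pi * 5.0
theta_fixed = 1.234            # one particular realization (arbitrary)
T, tau = 200.0, 0.07           # averaging window and lag (arbitrary)

t = np.linspace(-T / 2, T / 2, 2_000_001)
x_t = np.cos(omega_c * t + theta_fixed)
x_lag = np.cos(omega_c * (t - tau) + theta_fixed)

print(np.mean(x_t * x_lag))            # time-averaged ACF estimate
print(0.5 * np.cos(omega_c * tau))     # ensemble ACF from Eq. (45)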

The random process X(t) is therefore ergodic in the ACF (second-order ergodic).

The power spectrum of this random signal, i.e., the Fourier transform of the ensemble ACF, can then be computed as:

P_{XX}(\Omega) = \frac{\pi}{2}\left[\delta(\Omega + \Omega_c) + \delta(\Omega - \Omega_c)\right].    (50)
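The last step follows from the Fourier-transform pair of a cosine; explicitly,

P_{XX}(\Omega) = \int_{-\infty}^{\infty} R_{XX}(\tau)\, e^{-j\Omega\tau}\, d\tau = \frac{1}{2}\int_{-\infty}^{\infty} \cos(\Omega_c \tau)\, e^{-j\Omega\tau}\, d\tau = \frac{\pi}{2}\left[\delta(\Omega + \Omega_c) + \delta(\Omega - \Omega_c)\right].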

Note that this expression for the power spectrum is identical to the expression for the spectrum of a deterministic sinusoidal signal.
