Statistics Research Letters (SRL) Volume 2 Issue 2, May 2013 www.srl-journal.org
Bayesian Estimation of AR(1) with Change Point under Asymmetric Loss Functions
Mayuri Pandya
Department of Statistics, K. S. Bhavnagar University, University Campus, Near Gymkhana, Bhavnagar 364002, India
[email protected]
Abstract
The object of this paper is a Bayesian analysis of the autoregressive model X_t = β1 X_{t-1} + ε_t, t = 1, …, m, and X_t = β2 X_{t-1} + ε_t, t = m+1, …, n, where 0 < β1, β2 < 1 and the ε_t are independent random variables with an exponential distribution. The errors initially have mean θ1, but at some unknown point of time m the process changes, and the errors in the sequence after X_m have mean θ2. The issue this study focuses on is at what time and at which point the change begins to occur. The estimators of m, β1, β2, θ1 and θ2 are derived under asymmetric loss functions, namely the Linex and General Entropy loss functions. Both non-informative and informative priors are considered. The effects of the prior choice on the Bayes estimates of the change point are also studied.
Keywords
Bayes Estimates; Change Point; Exponential Distribution; Auto Regressive Process
Introduction
In applications, time series data frequently exhibit characteristics that are incompatible with the usual assumptions of linearity and Gaussian errors, for a variety of reasons. Bell and Smith (1986) were concerned with the autoregressive process AR(1), X_t = β X_{t-1} + ε_t, where 0 < β < 1 and the ε_t are i.i.d. and non-negative; they considered estimation and testing problems for three parametric models: Gaussian, uniform and exponential. For large series, non-normality may not be important, due to the additive nature of filtering processes; for small series it can be. Hence models other than the Gaussian, in particular the exponential, are studied. This AR(1) model can be used as a model for water quality analysis. Let the initial level of a pollutant be X_0, and let random quantities ε_1, ε_2, … of that pollutant be "dumped" at regular fixed intervals into the relevant body of water; between successive "dumpings" a proportion (1 - β) of the pollutant (0 ≤ β ≤ 1) is "washed away". If X_t is the level of pollutant at time t, then
X_t = β X_{t-1} + ε_t,
where the ε_t are assumed to have an exponential distribution. Some references for Bayesian estimation of the parameters of AR(1) as defined above are Amaral Turkmann (1990) and Ibazizen and Fellage (2003).
Apart from AR(1) with exponential errors, the phenomenon of a change point is also observed in several situations in water quality analysis. It may happen that at some point of time instability is observed in the sequence of pollutant levels of a river. The issue this study focuses on is at what time and at which point the change begins to occur; this is called the change point inference problem. Bayesian ideas, which play an important role in the study of such change point problems, have often been proposed as a valid alternative to classical estimation procedures.
A sequence of random variables X_1, …, X_m, X_{m+1}, …, X_n is said to have a change point at m (1 ≤ m ≤ n) if X_i ~ F1(x | θ1), i = 1, 2, …, m, and X_i ~ F2(x | θ2), i = m+1, …, n, where F1(x | θ1) ≠ F2(x | θ2). The situation considered here is the one in which F1 and F2 have exponential form but the change point m is unknown. The problem has been discussed within a Bayesian framework by Chernoff and Zacks (1964), Kander and Zacks (1966), Smith (1975), Jani and Pandya (1999), Pandya and Jani (2006), Pandya and Jadav (2009) and Ebrahimi and Ghosh (2001). The monograph of Broemeling and Tsurumi (1987) on structural change and a survey by Zacks (1983) are also useful references.
In this paper, an AR(1) model with one change point is proposed, where the error distribution is an exponential distribution whose mean changes at the change point. In section 2, a change point model related to AR(1) with exponential errors is developed. In section 3, posterior densities of β1, β2, θ1, θ2 and m for this model are obtained. Bayes estimators of β1, β2, θ1, θ2 and m are
derived under asymmetric and symmetric loss functions in section 4. A numerical study illustrating the above technique on generated observations is presented in section 5. In section 6, the sensitivity of the Bayes estimates of m when the prior specifications deviate from the true values is studied. Section 7 presents a simulation study based on the generation of 10,000 different random samples. The paper ends with detailed conclusions.
Proposed AR (1) Model
Let {ε_n, ε_n ≥ 0, n ≥ 1} be a random sequence having an exponential distribution, with p.d.f.

f(ε_i | θ1) = (1/θ1) e^{-ε_i/θ1}, i = 1, 2, …, m,
f(ε_i | θ2) = (1/θ2) e^{-ε_i/θ2}, i = m+1, …, n.

Further, let {X_n, X_n > 0, n ≥ 1} be a sequence of random variables defined as

X_i = β1 X_{i-1} + ε_i, i = 1, 2, …, m,
X_i = β2 X_{i-1} + ε_i, i = m+1, …, n,   (2.1)

with X_0 a fixed constant and 0 < β1, β2 < 1.
In the literature, there are two models for x_0:
Model A: x_0 is a constant, x_0 = 0 in particular;
Model B: x_0 has the same distribution as ε_t.
Here model A is examined, for its flexibility; moreover, the likelihood function (conditional on x_0) for model B is of exactly the same form as that for model A, so in both cases the estimation is the same.
(2.1) is a first order autoregressive process, AR(1), and m is an unknown change point in the sequence, to be estimated. For β1 = β2 = β and m = n, the model (2.1) reduces to the model studied by Bell and Smith (1986).
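As a concrete illustration (ours, not part of the paper), the change-point AR(1) of (2.1) can be simulated by drawing exponential innovations with the appropriate mean on each side of m. All function and variable names below are our own.

```python
import random

def simulate_ar1_cp(n, m, beta1, beta2, theta1, theta2, x0=0.0, seed=1):
    """Simulate (2.1): X_i = beta1*X_{i-1} + eps_i for i <= m,
    X_i = beta2*X_{i-1} + eps_i for i > m, eps_i ~ exponential."""
    rng = random.Random(seed)
    xs, prev = [], x0
    for i in range(1, n + 1):
        beta, theta = (beta1, theta1) if i <= m else (beta2, theta2)
        # expovariate takes the rate 1/theta, so the draw has mean theta
        x = beta * prev + rng.expovariate(1.0 / theta)
        xs.append(x)
        prev = x
    return xs

xs = simulate_ar1_cp(n=20, m=12, beta1=0.1, beta2=0.8, theta1=0.6, theta2=1.0)
```

Because the innovations are non-negative, every simulated X_i satisfies X_i > β X_{i-1}, exactly the support constraint noted below.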
Since the AR(1) process defined in (2.1) is a Markov process, the joint p.d.f. of X_0, X_1, …, X_m, …, X_n is given by

f(x_0, x_1, …, x_n) = (1/θ1)^m e^{-(S_m - β1 S_m*)/θ1} · (1/θ2)^{n-m} e^{-(S_n - S_m - β2(S_n* - S_m*))/θ2},

where S_m = Σ_{i=1}^{m} X_i and S_m* = Σ_{i=1}^{m} X_{i-1}.

Since ε_i ≥ 0, i = 1, 2, …, n, we have

X_i ≥ β1 X_{i-1}, X_i > 0, i = 1, 2, …, m,
X_i ≥ β2 X_{i-1}, X_i > 0, i = m+1, …, n,

and as a result 0 < β1, β2 < 1.
The likelihood function given the sample information X = (X_1, X_2, …, X_m, X_{m+1}, …, X_n) is

L(β1, β2, θ1, θ2, m | X) = θ1^{-m} e^{-A/θ1} · θ2^{-(n-m)} e^{-B/θ2},   θ1, θ2 > 0, x_0 = 0,

where

A = S_m - β1 S_m*,
B = S_n - S_m - β2 (S_n* - S_m*).   (2.2)
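As a sketch of how (2.2) evaluates in practice (our own code; the helper names are assumptions, not the paper's), the sufficient statistics S_m, S_m* and the quantities A and B can be computed directly from the data:

```python
import math

def suff_stats(xs, m, x0=0.0):
    """Return (S_m, S_m*): S_m = sum_{i=1}^m X_i, S_m* = sum_{i=1}^m X_{i-1}."""
    full = [x0] + list(xs)
    return sum(full[1:m + 1]), sum(full[:m])

def log_likelihood(xs, m, beta1, beta2, theta1, theta2, x0=0.0):
    """Log of the likelihood (2.2)."""
    n = len(xs)
    Sm, Sm_star = suff_stats(xs, m, x0)
    Sn, Sn_star = suff_stats(xs, n, x0)
    A = Sm - beta1 * Sm_star                   # A = S_m - beta1 * S_m*
    B = Sn - Sm - beta2 * (Sn_star - Sm_star)  # B = S_n - S_m - beta2*(S_n* - S_m*)
    return -m * math.log(theta1) - A / theta1 - (n - m) * math.log(theta2) - B / theta2
```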
Bayes Estimation
The ML method, like other classical approaches, is based only on the empirical information provided by the data. However, when some technical knowledge about the parameters of the distribution is available, a Bayes procedure is an attractive inferential method. The Bayes procedure is based on a posterior density, say g(β1, β2, θ1, θ2, m | X), which is proportional to the product of the likelihood function L(β1, β2, θ1, θ2, m | X) and a joint prior density, say g(β1, β2, θ1, θ2, m), representing uncertainty about the parameter values:

g(β1, β2, θ1, θ2, m | X) = L(β1, β2, θ1, θ2, m | X) g(β1, β2, θ1, θ2, m) / [Σ_{m=1}^{n-1} ∫∫∫∫ L(β1, β2, θ1, θ2, m | X) g(β1, β2, θ1, θ2, m) dβ1 dβ2 dθ1 dθ2].
Using Informative Priors for β1, β2, θ1 and θ2
The first step in a Bayes analysis is the choice of the prior density on the distribution parameters. When technical information about the mechanism of the process under consideration is accessible, this information should be converted into a degree of belief about the distribution parameters (or functions thereof). This task is greatly simplified when some "physical" meaning can be attached to these parameters.
It is also supposed that some information on these parameters is available and that this technical knowledge can be given in terms of prior mean values µ1, µ2. Independent beta priors on β1 and β2, with respective means µ1, µ2 and common standard deviation σ, are assumed:
g(β1) = β1^{a1-1} (1 - β1)^{b1-1} / B(a1, b1),   0 < β1 < 1, a1 > 0, b1 > 0,
g(β2) = β2^{a2-1} (1 - β2)^{b2-1} / B(a2, b2),   0 < β2 < 1, a2 > 0, b2 > 0.

If the prior information is given in terms of the prior means µ1, µ2 and σ, then the hyperparameters can be obtained by solving

a_i = σ^{-2} [(1 - µ_i) µ_i^2 - µ_i σ^2],   i = 1, 2,
b_i = µ_i^{-1} (1 - µ_i) a_i,   i = 1, 2.   (3.1)
Let the joint prior density of θ1 and θ2 be

g(θ1, θ2) ∝ 1/(θ1 θ2).

As in Broemeling and Tsurumi (1987), the marginal prior distribution of m is taken to be discrete uniform over the set {1, 2, …, n - 1}, independent of β1, β2 and θ1, θ2:

g(m) = 1/(n - 1).
For simplicity, it is assumed that, a priori, β1, β2, θ1, θ2 and m are independently distributed. The joint prior density of β1, β2, θ1, θ2 and m, say g1(β1, β2, θ1, θ2, m), is

g1(β1, β2, θ1, θ2, m) = K1 β1^{a1-1} (1 - β1)^{b1-1} β2^{a2-1} (1 - β2)^{b2-1} / (θ1 θ2),

where K1 = 1 / [B(a1, b1) B(a2, b2) (n - 1)].
The joint posterior density of β1, β2, θ1, θ2 and m, say g1(β1, β2, θ1, θ2, m | X), is

g1(β1, β2, θ1, θ2, m | X) = K1 θ1^{-(m+1)} e^{-A/θ1} θ2^{-(n-m)-1} e^{-B/θ2} β1^{a1-1} (1 - β1)^{b1-1} β2^{a2-1} (1 - β2)^{b2-1} / h1(X),

where h1(X), the marginal density of X, is given by

h1(X) = K1 Σ_{m=1}^{n-1} T1(m),   (3.2)
where

T1(m) = Γ(m) Γ(n - m) · [(1/a1) S_m^{-m} AppellF1(a1, m, -b1, 1 + a1, S_m*/S_m, 1)]
· [(1/a2) (S_n - S_m)^{-(n-m)} AppellF1(a2, n - m, -b2, 1 + a2, (S_n* - S_m*)/(S_n - S_m), 1)].   (3.3)
Note 1
AppellF1(a, b1, b2, c, x, y) is the Appell hypergeometric function, defined as

AppellF1(a, b1, b2, c, x, y) = [Γ(c) / (Γ(a) Γ(c - a))] ∫_0^1 u^{a-1} (1 - u)^{c-a-1} (1 - ux)^{-b1} (1 - uy)^{-b2} du,

Re(a) > 0, Re(c - a) > 0.
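Since AppellF1 is not available in the Python standard library, the Euler integral in Note 1 can be evaluated numerically; the composite-Simpson sketch below (our illustration, roughly valid when a ≥ 1 and c - a ≥ 1, and when the third parameter is non-positive so the endpoint u = 1 is integrable, as in the paper's calls where it equals -b < 0):

```python
import math

def appell_f1(a, b1, b2, c, x, y, n=20000):
    """Appell F1 via the Euler integral of Note 1, composite Simpson rule.
    Assumes Re(a) > 0, Re(c - a) > 0; endpoint singularities are not handled."""
    coef = math.gamma(c) / (math.gamma(a) * math.gamma(c - a))

    def f(u):
        return (u ** (a - 1) * (1 - u) ** (c - a - 1)
                * (1 - u * x) ** (-b1) * (1 - u * y) ** (-b2))

    h = 1.0 / n
    # nudge off the endpoints to avoid 0 raised to a negative power
    s = f(1e-12) + f(1 - 1e-12)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return coef * s * h / 3.0
```

With b1 = b2 = 0 the integral reduces to a normalized Beta integral (value 1), and F1(a, b, 0, c, x, y) reduces to the Gauss hypergeometric 2F1(a, b; c; x), which gives simple checks.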
Integrating g1(β1, β2, θ1, θ2, m | X) with respect to (β1, β2) and (θ1, θ2) leads to the posterior distribution of the change point m. The marginal posterior density of m is given by

g1(m | X) = T1(m) / Σ_{m=1}^{n-1} T1(m),   (3.4)

where T1(m) is as given in (3.3).
The marginal posterior densities of β1 and β2, say g1(β1 | X) and g1(β2 | X), are

g1(β1 | X) = Σ_{m=1}^{n-1} ∫_0^∞ ∫_0^∞ ∫_0^1 g1(β1, β2, θ1, θ2, m | X) dθ1 dθ2 dβ2

= K1 Σ_{m=1}^{n-1} β1^{a1-1} (1 - β1)^{b1-1} Γ(m) Γ(n - m) A^{-m} (1/a2) (S_n - S_m)^{-(n-m)} AppellF1(a2, n - m, -b2, 1 + a2, (S_n* - S_m*)/(S_n - S_m), 1) / h1(X),   (3.5)

g1(β2 | X) = K1 Σ_{m=1}^{n-1} β2^{a2-1} (1 - β2)^{b2-1} Γ(m) Γ(n - m) B^{-(n-m)} (1/a1) S_m^{-m} AppellF1(a1, m, -b1, 1 + a1, S_m*/S_m, 1) / h1(X).   (3.6)
The marginal posterior densities of θ1 and θ2, say g1(θ1 | X) and g1(θ2 | X), are given by

g1(θ1 | X) = Σ_{m=1}^{n-1} ∫_0^∞ ∫_0^1 ∫_0^1 g1(β1, β2, θ1, θ2, m | X) dθ2 dβ1 dβ2

= K1 Σ_{m=1}^{n-1} θ1^{-(m+1)} [∫_0^1 β1^{a1-1} (1 - β1)^{b1-1} e^{-A/θ1} dβ1] Γ(n - m) (1/a2) (S_n - S_m)^{-(n-m)} AppellF1(a2, n - m, -b2, 1 + a2, (S_n* - S_m*)/(S_n - S_m), 1) / h1(X),   (3.7)

and

g1(θ2 | X) = Σ_{m=1}^{n-1} ∫_0^∞ ∫_0^1 ∫_0^1 g1(β1, β2, θ1, θ2, m | X) dθ1 dβ1 dβ2

= K1 Σ_{m=1}^{n-1} θ2^{-(n-m)-1} Γ(m) [∫_0^1 β2^{a2-1} (1 - β2)^{b2-1} e^{-B/θ2} dβ2] (1/a1) S_m^{-m} AppellF1(a1, m, -b1, 1 + a1, S_m*/S_m, 1) / h1(X).   (3.8)
Using Non-informative Priors for β1, β2, θ1 and θ2

The non-informative prior is a density which adds no information to that contained in the empirical data. If no information is available on β1, β2, θ1, θ2 and m, which are assumed to be a priori independent random variables, then the non-informative prior is

g2(β1, β2, θ1, θ2, m) ∝ 1 / [θ1 θ2 (n - 1)].
The joint posterior density of β1, β2, θ1, θ2 and m, say g2(β1, β2, θ1, θ2, m | X), is given by

g2(β1, β2, θ1, θ2, m | X) = [1/(n - 1)] θ1^{-(m+1)} e^{-(S_m - β1 S_m*)/θ1} θ2^{-(n-m+1)} e^{-(S_n - S_m - β2(S_n* - S_m*))/θ2} / h2(X),

where h2(X), the marginal density of X, is given by

h2(X) = [1/(n - 1)] Σ_{m=1}^{n-1} T2(m),   (3.9)
where

T2(m) = Γ(m) Γ(n - m) · [(-S_m^{-m} + (S_m - S_m*)^{-m}) / (m S_m*)]
· [(-(S_n - S_m)^{-(n-m)} + (S_n - S_m - S_n* + S_m*)^{-(n-m)}) / ((n - m)(S_n* - S_m*))].   (3.10)
The marginal posterior density of the change point m is given by

g2(m | X) = ∫_0^∞ ∫_0^∞ ∫_0^1 ∫_0^1 g2(β1, β2, θ1, θ2, m | X) dθ1 dθ2 dβ1 dβ2

= T2(m) / Σ_{m=1}^{n-1} T2(m).   (3.11)
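Under the non-informative prior everything is in closed form, so the posterior (3.11) is straightforward to compute. Below is our own sketch (not the paper's code); the limiting value of the first bracket is used when S_m* = 0, which occurs at m = 1 with x0 = 0, and the formula assumes the arguments raised to negative powers are positive:

```python
import math

def cumsums(xs, x0=0.0):
    """S_k = sum_{i<=k} X_i and S_k* = sum_{i<=k} X_{i-1}, for k = 0..n."""
    S, Sst, prev = [0.0], [0.0], x0
    for x in xs:
        S.append(S[-1] + x)
        Sst.append(Sst[-1] + prev)
        prev = x
    return S, Sst

def t2(xs, m, x0=0.0):
    """T2(m) of (3.10)."""
    n = len(xs)
    S, Sst = cumsums(xs, x0)
    Sm, Smst, Sn, Snst = S[m], Sst[m], S[n], Sst[n]
    if Smst == 0.0:
        br1 = Sm ** (-(m + 1))  # limit of the bracket as S_m* -> 0
    else:
        br1 = (-Sm ** (-m) + (Sm - Smst) ** (-m)) / (m * Smst)
    br2 = ((-(Sn - Sm) ** (-(n - m)) + (Sn - Sm - Snst + Smst) ** (-(n - m)))
           / ((n - m) * (Snst - Smst)))
    return math.gamma(m) * math.gamma(n - m) * br1 * br2

def posterior_m(xs, x0=0.0):
    """Marginal posterior g2(m | X) of (3.11)."""
    n = len(xs)
    t = [t2(xs, m, x0) for m in range(1, n)]
    tot = sum(t)
    return {m: t[m - 1] / tot for m in range(1, n)}
```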
The marginal posterior densities of θ1 and θ2 are given by

g2(θ1 | X) = [1/(n - 1)] Σ_{m=1}^{n-1} θ1^{-(m+1)} e^{-S_m/θ1} (θ1/S_m*) (e^{S_m*/θ1} - 1) Γ(n - m) [(-(S_n - S_m)^{-(n-m)} + (S_n - S_m - S_n* + S_m*)^{-(n-m)}) / ((n - m)(S_n* - S_m*))] / h2(X),   (3.12)

g2(θ2 | X) = [1/(n - 1)] Σ_{m=1}^{n-1} θ2^{-(n-m+1)} e^{-(S_n - S_m)/θ2} (θ2/(S_n* - S_m*)) (e^{(S_n* - S_m*)/θ2} - 1) Γ(m) [(-S_m^{-m} + (S_m - S_m*)^{-m}) / (m S_m*)] / h2(X).   (3.13)
The marginal posterior densities of β1 and β2 are given by

g2(β1 | X) = [1/(n - 1)] Σ_{m=1}^{n-1} Γ(m) Γ(n - m) (S_m - β1 S_m*)^{-m} [(-(S_n - S_m)^{-(n-m)} + (S_n - S_m - S_n* + S_m*)^{-(n-m)}) / ((n - m)(S_n* - S_m*))] / h2(X),   (3.14)

g2(β2 | X) = [1/(n - 1)] Σ_{m=1}^{n-1} Γ(m) Γ(n - m) (S_n - S_m - β2(S_n* - S_m*))^{-(n-m)} [(-S_m^{-m} + (S_m - S_m*)^{-m}) / (m S_m*)] / h2(X).   (3.15)
Remark 1: For m = n, β1 = β2 and θ1 = θ2, equations (3.5), (3.7) and (3.12), (3.14) reduce to the marginal posterior densities of β and θ for the AR(1) process without a change point.
Bayes Estimates under Symmetric and Asymmetric Loss Functions
The Bayes estimate of a generic continuous parameter (or function thereof) α under the squared error loss (SEL) function L1(α, d) = (α - d)^2, where d is a decision rule to estimate α, is the posterior mean. For an integer parameter, the corresponding SEL function is

L1'(m, v) ∝ (m - v)^2,   m, v = 0, 1, 2, …

Hence the Bayes estimate of an integer-valued parameter under the SEL function L1'(m, v) is no longer the posterior mean; it can be obtained by numerically minimizing the corresponding posterior loss. Generally, such a Bayes estimate equals the integer nearest to the posterior mean, so the integer nearest to the posterior mean is taken as the Bayes estimate. The Bayes estimators of m under SEL are the nearest integer values to (4.1) and (4.2):
m* = Σ_{m=1}^{n-1} m T1(m) / Σ_{m=1}^{n-1} T1(m),   (4.1)

m** = Σ_{m=1}^{n-1} m T2(m) / Σ_{m=1}^{n-1} T2(m).   (4.2)
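A toy illustration (ours) of the nearest-integer rule for an integer parameter under SEL, given any posterior over m represented as a dict of probabilities:

```python
def m_sel(posterior):
    """Bayes estimate of integer m under SEL: integer nearest to the posterior mean."""
    mean = sum(m * p for m, p in posterior.items())
    return round(mean)

# hypothetical posterior over m, for illustration only
post = {11: 0.25, 12: 0.55, 13: 0.20}
```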
Other Bayes estimators of α, based on the loss functions

L2(α, d) = |α - d|,
L3(α, d) = 0 if |α - d| < ε, ε > 0, and 1 otherwise,

are the posterior median and the posterior mode, respectively.
Asymmetric Loss Function
The loss function L(α, d) provides a measure of the financial consequences arising from a wrong decision rule d used to estimate an unknown quantity α. The choice of the appropriate loss function depends only on financial considerations and is independent of the estimation procedure used. In this section, Bayes estimators of the change point m are derived under different asymmetric loss functions using both of the prior specifications explained in sections 3.1 and 3.2. A useful asymmetric loss, known as the Linex loss function, was introduced by Varian (1975). Under the assumption that the minimal loss occurs at d = α, the Linex loss function can be expressed as

L4(α, d) = exp[q1(d - α)] - q1(d - α) - 1,   q1 ≠ 0.

The sign of the shape parameter q1 reflects the direction of the asymmetry (q1 > 0 if overestimation is more serious than underestimation, and vice versa), and the magnitude of q1 reflects the degree of asymmetry.
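A quick numerical check (our own) of this asymmetry:

```python
import math

def linex(alpha, d, q1):
    """Linex loss L4(alpha, d) = exp(q1 (d - alpha)) - q1 (d - alpha) - 1."""
    z = q1 * (d - alpha)
    return math.exp(z) - z - 1.0
```

For q1 = 0.8, overestimating α by one unit costs more than underestimating it by one unit; the ordering reverses for q1 = -0.8.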
Minimizing the expected loss E_m[L4(m, d)] and using the posterior distributions (3.4) and (3.11), the Bayes estimates of m under the Linex loss function are obtained as the nearest integer values to (4.3), under the informative and the non-informative prior, say m_L* and m_L**:

m_L* = -(1/q1) ln[Σ_{m=1}^{n-1} e^{-q1 m} T1(m) / Σ_{m=1}^{n-1} T1(m)],

m_L** = -(1/q1) ln[Σ_{m=1}^{n-1} e^{-q1 m} T2(m) / Σ_{m=1}^{n-1} T2(m)],   (4.3)

where T1(m) and T2(m) are as given in (3.3) and (3.10).
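Given any posterior over m (for instance the one built from T2(m) in (3.11)), (4.3) is a one-liner; the sketch below is our own:

```python
import math

def m_linex(posterior, q1):
    """m_L = -(1/q1) * ln E[exp(-q1 m)], per (4.3), rounded to the nearest integer."""
    e = sum(math.exp(-q1 * m) * p for m, p in posterior.items())
    return round(-math.log(e) / q1)
```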
Minimizing the expected loss E[L4(β_i, d)] and using the posterior distributions (3.5) and (3.6), the Bayes estimators of β1 and β2 under the Linex loss function are obtained as

β1L* = -(1/q1) ln E[e^{-q1 β1}]

= -(1/q1) ln{K1 Σ_{m=1}^{n-1} Γ(m) Γ(n - m) [∫_0^1 β1^{a1-1} (1 - β1)^{b1-1} e^{-q1 β1} A^{-m} dβ1] (1/a2) (S_n - S_m)^{-(n-m)} AppellF1(a2, n - m, -b2, 1 + a2, (S_n* - S_m*)/(S_n - S_m), 1) / h1(X)},   (4.4)

β2L* = -(1/q1) ln{K1 Σ_{m=1}^{n-1} Γ(m) Γ(n - m) [∫_0^1 β2^{a2-1} (1 - β2)^{b2-1} e^{-q1 β2} B^{-(n-m)} dβ2] (1/a1) S_m^{-m} AppellF1(a1, m, -b1, 1 + a1, S_m*/S_m, 1) / h1(X)},   (4.5)

where the AppellF1 functions are as in Note 1 and h1(X) is as in (3.2).
Another loss function, the General Entropy loss (GEL) function, proposed by Calabria and Pulcini (1996), is given by

L5(α, d) = (d/α)^{q3} - q3 ln(d/α) - 1,

whose minimum occurs at d = α. Minimizing the expectation E[L5(m, d)] and using the posterior densities g_i(m | X), i = 1, 2, the Bayes estimates of m under the General Entropy loss function are obtained as the nearest integer values to (4.6) and (4.7), under the informative and the non-informative prior, say m_E* and m_E**:

m_E* = (E1[m^{-q3}])^{-1/q3} = [Σ_{m=1}^{n-1} m^{-q3} T1(m) / Σ_{m=1}^{n-1} T1(m)]^{-1/q3},   (4.6)

m_E** = (E2[m^{-q3}])^{-1/q3} = [Σ_{m=1}^{n-1} m^{-q3} T2(m) / Σ_{m=1}^{n-1} T2(m)]^{-1/q3}.   (4.7)

Putting q3 = -1 in (4.6) and (4.7) yields the posterior mean of m as the Bayes estimate.
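As with the Linex case, (4.6)/(4.7) reduce to a short expectation over any posterior on m; our sketch:

```python
def m_gel(posterior, q3):
    """m_E = (E[m^(-q3)])^(-1/q3), per (4.6)/(4.7), rounded; q3 = -1 gives the posterior mean."""
    e = sum(m ** (-q3) * p for m, p in posterior.items())
    return round(e ** (-1.0 / q3))
```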
Minimizing the expected loss E[L5(β_i, d)] and using the posterior distributions (3.5) and (3.6), we get the Bayes estimates of β1 and β2 under the General Entropy loss function as

β1E* = {K1 Σ_{m=1}^{n-1} Γ(m) Γ(n - m) (1/(a1 - q3)) S_m^{-m} AppellF1(a1 - q3, m, -b1, 1 + a1 - q3, S_m*/S_m, 1) (1/a2) (S_n - S_m)^{-(n-m)} AppellF1(a2, n - m, -b2, 1 + a2, (S_n* - S_m*)/(S_n - S_m), 1) / h1(X)}^{-1/q3},   (4.8)

β2E* = {K1 Σ_{m=1}^{n-1} Γ(m) Γ(n - m) (1/(a2 - q3)) (S_n - S_m)^{-(n-m)} AppellF1(a2 - q3, n - m, -b2, 1 + a2 - q3, (S_n* - S_m*)/(S_n - S_m), 1) (1/a1) S_m^{-m} AppellF1(a1, m, -b1, 1 + a1, S_m*/S_m, 1) / h1(X)}^{-1/q3},   (4.9)

where the AppellF1 functions are as in Note 1 and h1(X) is as in (3.2).

Minimizing the expected loss E[L5(β_i, d)] and using the posterior distributions (3.14) and (3.15), we get the Bayes estimates β_iE**, i = 1, 2, of β_i under the General Entropy loss function with non-informative priors as

β1E** = {[1/(n - 1)] Σ_{m=1}^{n-1} Γ(m) Γ(n - m) [(-(S_n - S_m)^{-(n-m)} + (S_n - S_m - S_n* + S_m*)^{-(n-m)}) / ((n - m)(S_n* - S_m*))] S_m^{-m} (1 - q3)^{-1} 2F1(1 - q3, m, 2 - q3, S_m*/S_m) / h2(X)}^{-1/q3},   (4.10)
β2E** = {[1/(n - 1)] Σ_{m=1}^{n-1} Γ(m) Γ(n - m) [(-S_m^{-m} + (S_m - S_m*)^{-m}) / (m S_m*)] (S_n - S_m)^{-(n-m)} (1 - q3)^{-1} 2F1(1 - q3, n - m, 2 - q3, (S_n* - S_m*)/(S_n - S_m)) / h2(X)}^{-1/q3},   (4.11)

where h2(X) is as in (3.9).
Minimizing the expected loss E[L5(θ1, d)] and using the posterior distributions (3.7) and (3.12), we get the Bayes estimates θ1E* and θ1E** of θ1 under the General Entropy loss function as

θ1E* = {K1 Σ_{m=1}^{n-1} Γ(m + q3) Γ(n - m) (1/a1) (1/a2) S_m^{-(m+q3)} (S_n - S_m)^{-(n-m)} AppellF1(a1, m + q3, -b1, 1 + a1, S_m*/S_m, 1) AppellF1(a2, n - m, -b2, 1 + a2, (S_n* - S_m*)/(S_n - S_m), 1) / h1(X)}^{-1/q3},   (4.12)

θ1E** = {[1/(n - 1)] Σ_{m=1}^{n-1} Γ(n - m) [Γ(m + q3 - 1)/S_m*] [-(1/S_m)^{m+q3-1} + (1/(S_m - S_m*))^{m+q3-1}] [(-(S_n - S_m)^{-(n-m)} + (S_n - S_m - S_n* + S_m*)^{-(n-m)}) / ((n - m)(S_n* - S_m*))] / h2(X)}^{-1/q3},   (4.13)

S_m - S_m* > 0, S_m > 0, m + q3 > 0.
Minimizing the expected loss E[L5(θ2, d)] and using the posterior distributions (3.8) and (3.13), we get the Bayes estimates θ2E* and θ2E** of θ2 under the General Entropy loss function as

θ2E* = {K1 Σ_{m=1}^{n-1} Γ(m) Γ(n - m + q3) (1/a2) (S_n - S_m)^{-(n-m+q3)} (1/a1) S_m^{-m} AppellF1(a2, n - m + q3, -b2, 1 + a2, (S_n* - S_m*)/(S_n - S_m), 1) AppellF1(a1, m, -b1, 1 + a1, S_m*/S_m, 1) / h1(X)}^{-1/q3},   (4.14)

θ2E** = {[1/(n - 1)] Σ_{m=1}^{n-1} Γ(m) [Γ(n - m + q3 - 1)/(S_n* - S_m*)] [-(1/(S_n - S_m))^{n-m+q3-1} + (1/(S_n - S_m - (S_n* - S_m*)))^{n-m+q3-1}] [(-S_m^{-m} + (S_m - S_m*)^{-m}) / (m S_m*)] / h2(X)}^{-1/q3},   (4.15)

S_n - S_m - (S_n* - S_m*) > 0, S_n - S_m > 0, n - m + q3 > 0.
Remark 2: For m = n, β1 = β2 and θ1 = θ2, equations (4.4), (4.8), (4.10) and (4.12), (4.13) reduce to the Bayes estimates of β and θ under asymmetric loss functions for the AR(1) process without a change point.
Numerical Study
Consider the AR(1) model

X_t = 0.1 X_{t-1} + ε_t, t = 1, 2, …, 12,
X_t = 0.8 X_{t-1} + ε_t, t = 13, 14, …, 20,

where the ε_t are independently distributed exponential variables as in (2.1) with θ1 = 0.6 and θ2 = 1. Twenty random observations have been generated from the proposed AR(1) model: the first twelve are driven by errors from the exponential distribution with θ1 = 0.6, and the next eight by errors from the exponential distribution with θ2 = 1. β1 and β2 are themselves random observations from beta distributions with prior means µ1 = 0.1, µ2 = 0.8 and common standard deviation σ = 0.1, resulting in a1 = 0.001, b1 = 0.09, a2 = 0.48 and b2 = 0.12. These observations are given in Table 1.
The posterior means of m, θ1, θ2, β1 and β2 and the posterior median of m have been calculated. The posterior mode appears to be a bad estimator of m. For comparison, the estimators under the non-informative prior are also calculated. The results are shown in Table 2. The Bayes estimates m_L* and m_E* of m, θ1E* and θ2E* of θ1 and θ2, and β1E* and β2E* of β1 and β2 for the data given in Table 1 have been computed, as well as the Bayes estimates under the non-informative prior and the asymmetric loss functions. The results, shown in Table 3, reveal that m_L*, m_E*, β1E*, β2E*, θ1E* and θ2E* are robust with respect to changes in the shape parameter of the GE loss function.
TABLE 1 GENERATED OBSERVATIONS FROM PROPOSED AR(1) MODEL

i     1     2     3     4     5     6     7     8     9     10
Xi    0.91  1.08  1.18  0.26  2.08  0.25  0.17  0.88  1.21  0.44
εi    0.90  0.99  1.07  0.15  2.05  0.04  0.14  0.86  1.12  0.32

i     11    12    13    14    15    16    17    18    19    20
Xi    0.38  0.41  0.86  1.68  3.01  2.53  4.36  5.53  4.57  4.61
εi    0.33  0.43  0.48  0.99  1.67  0.12  2.33  2.04  0.15  0.95
TABLE 2 THE VALUES OF BAYES ESTIMATES OF CHANGE POINT

Prior Density      Posterior Median   Posterior Mean   β1     β2     θ1     θ2
Informative        12.02              12               0.1    0.8    0.6    0.95
Non-informative    12.22              11               0.12   0.82   0.63   1.03

(The first two columns are Bayes estimates of the change point; β1, β2 are Bayes estimates of the autocorrelation coefficients, and θ1, θ2 of the exponential parameters.)
TABLE 3 THE BAYES ESTIMATES USING ASYMMETRIC LOSS FUNCTIONS

Prior Density      q1    q3    m_L*    m_E*    β2E*    β1E*    θ1E*    θ2E*
Informative        0.8   0.8   12      12      0.83    0.13    0.13    0.95
                   1.2   1.2   12      12      0.82    0.12    0.12    0.94
                   1.5   1.5   12      12      0.81    0.11    0.11    0.91

                               m_L**   m_E**   β2E**   β1E**   θ1E**   θ2E**
Non-informative    0.8   0.8   13      13      0.86    0.16    0.68    1.4
                   1.2   1.2   13      13      0.85    0.15    0.66    1.3
                   1.5   1.5   13      13      0.83    0.13    0.65    1.2
TABLE 4 POSTERIOR MEAN m* FOR THE DATA GIVEN IN TABLE 1

µ1     µ2     m*    m_E*
0.1    0.6    12    12
0.07   0.8    12    12
0.2    0.4    12    12
Sensitivity of Bayes Estimates
In this section, the sensitivity of the Bayes estimates obtained in section 4 is studied with respect to changes in the prior of the parameters. The means µ1, µ2 and standard deviation σ of the beta priors have been used as prior information in computing the hyperparameters a1, b1, a2, b2. Following Calabria and Pulcini (1996), the prior information is considered correct if the true value of β1 (β2) is close to the prior mean µ1 (µ2), and wrong if β1 (β2) is far from µ1 (µ2). We have computed m* and m_E* using (4.1) and (4.6) for the data given in Table 1 with common value σ = 0.1 and q3 = 0.9, for different values of (µ1, µ2); the results are shown in Table 4.

The results in Table 4 lead to the conclusion that m* and m_E* are robust with respect to both a correct and a wrong choice of the prior density of β1 (β2). Moreover, they are also robust with respect to changes in the shape parameter of the GE loss function.
Simulation Study
In section 4, Bayes estimates of m were obtained on the basis of the generated data given in Table 1 for given values of the parameters. To justify the results, we generated 10,000 different random samples with m = 12, n = 20, θ1 = 1.0, θ2 = 2.0, β1 = 0.4, β2 = 0.6 and obtained the frequency distributions of the posterior mean and median of m and of m_L* and m_E* under the correct prior specification. The results are shown in Table 5. We also obtained the frequency distributions of the Bayes estimates of the autoregressive coefficients given in section 4 under both prior specifications; the results are shown in Tables 6 and 7. The value of the shape parameter of the General Entropy loss and the Linex loss is taken as 0.1.

We also simulated several samples from the AR(1) model explained in section 2 with m = 12, n = 20, θ1 = 1.0, 0.6, 0.7; θ2 = 2.0, 0.8, 0.9; β1 = 0.2, 0.4, 0.5; and β2 = 0.4, 0.6, 0.7. For each θ1, θ2, β1 and β2, the Bayes estimators of the change point m and of the autoregressive coefficients β1 and β2 were computed using q3 = 0.9 for different prior means µ1 and µ2. This leads to the same conclusion as for the single sample: the posterior mean of m and m_E* are robust with respect to a correct choice of the prior specification on β1 (β2) and a wrong choice of the prior specification on β2 (β1).
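The replication loop can be sketched as follows (our own condensed version, with far fewer replicates than the paper's 10,000). It tallies the rounded posterior mean of m, under the non-informative prior, into the bins used in Table 5:

```python
import math
import random

def simulate(n, m, b1, b2, th1, th2, rng, x0=0.0):
    """Draw one sample path from the change-point AR(1) of (2.1)."""
    xs, prev = [], x0
    for i in range(1, n + 1):
        b, th = (b1, th1) if i <= m else (b2, th2)
        x = b * prev + rng.expovariate(1.0 / th)
        xs.append(x)
        prev = x
    return xs

def t2(xs, m, x0=0.0):
    """T2(m) of (3.10), with cumulative sums built inline."""
    n = len(xs)
    full = [x0] + xs
    Sm, Smst = sum(full[1:m + 1]), sum(full[:m])
    Sn, Snst = sum(full[1:]), sum(full[:-1])
    br1 = (Sm ** (-(m + 1)) if Smst == 0
           else (-Sm ** (-m) + (Sm - Smst) ** (-m)) / (m * Smst))
    br2 = ((-(Sn - Sm) ** (-(n - m)) + (Sn - Sm - Snst + Smst) ** (-(n - m)))
           / ((n - m) * (Snst - Smst)))
    return math.gamma(m) * math.gamma(n - m) * br1 * br2

def posterior_mean_m(xs):
    t = [t2(xs, m) for m in range(1, len(xs))]
    return sum(m * w for m, w in zip(range(1, len(xs)), t)) / sum(t)

rng = random.Random(7)
bins = {"01-10": 0, "11-13": 0, "14-20": 0}
reps = 200
for _ in range(reps):
    xs = simulate(20, 12, 0.4, 0.6, 1.0, 2.0, rng)
    est = round(posterior_mean_m(xs))
    key = "01-10" if est <= 10 else ("11-13" if est <= 13 else "14-20")
    bins[key] += 1
```

The same loop, with the estimator swapped for m_L*, m_E* or the posterior median, reproduces the structure of Tables 5 to 7.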
TABLE 5 FREQUENCY DISTRIBUTIONS OF THE BAYES ESTIMATES OF THE CHANGE POINT

Bayes estimate      % frequency in 01-10   11-13   14-20
Posterior mean      18                     70      12
Posterior median    10                     74      16
Posterior mode      12                     73      15
m_L*                13                     77      10
m_E*                14                     78      8
TABLE 6 FREQUENCY DISTRIBUTIONS OF THE BAYES ESTIMATES OF AUTOREGRESSIVE COEFFICIENTS β1 AND β2 USING GENERAL ENTROPY LOSS FUNCTION (INFORMATIVE)

Bayes estimate   % frequency in 0.1-0.3   0.3-0.5   0.5-0.7   0.7-0.9
β1E*             13                       82        02        03
β2E*             11                       01        84        04
TABLE 7 FREQUENCY DISTRIBUTIONS OF THE BAYES ESTIMATES OF AUTOREGRESSIVE COEFFICIENTS β1 AND β2 USING GENERAL ENTROPY LOSS FUNCTION (NON-INFORMATIVE)

Bayes estimate   % frequency in 0.1-0.3   0.3-0.5   0.5-0.7   0.7-0.9
β1E**            13                       78        06        03
β2E**            11                       01        74        14
Conclusions
Our numerical study shows that the Bayes estimators posterior mean of m and m_E* are robust when the prior specification on one of β1, β2 deviates from the true value, but are sensitive when the prior specifications on both β1 and β2 deviate simultaneously from the true values.
REFERENCES
Amaral Turkmann, M. A. (1990). Bayesian analysis of an
autoregressive process with exponential white noise.
Statistics, 4, 601-608.
Bell, C. B. and Smith, E. P. (1986). Inference for non-negative autoregressive schemes. Communications in Statistics - Theory and Methods, 15(8), 2267-2293.
Broemeling, L.D. and Tsurumi, H. (1987). Econometrics and
structural change, Marcel Dekker, New York.
Calabria, R. and Pulcini, G. (1994). "Bayes credibility intervals for the left-truncated exponential distribution," Microelectronics Reliability, 34, 1897-1907.
Calabria, R. and Pulcini, G. (1996). "Point estimation under asymmetric loss functions for left-truncated exponential samples," Communications in Statistics (Theory and Methods), 25(3), 585-600.
Chernoff, H. and Zacks, S. (1964). Estimating the current
mean of a normal distribution which is subjected to
changes in time. Annals Math. Statist., 35, 999-1018.
Ebrahimi, N. and Ghosh, S. K. (2001). Bayesian and frequentist methods in change-point problems. Handbook of Statistics, Vol. 20, Advances in Reliability, 777-787 (Eds. N. Balakrishnan and C. R. Rao).
Ibazizen, M. and Fellage, H. (2003). Bayesian estimation of an AR(1) process with exponential white noise. Statistics, 37(5), 365-372.
Jani, P. N. and Pandya, M. (1999): Bayes estimation of shift
point in left Truncated Exponential Sequence
Communications in Statistics (Theory and Methods),
28(11), 2623-2639.
Kander, Z. and Zacks, S. (1966). Test procedure for possible
changes in parameters of statistical distribution occurring
at unknown time points. Annals Math. Statist., 37, 1196-
1210.
Pandya, M. and Jani, P.N. (2006). Bayesian Estimation of
Change Point in Inverse Weibull Sequence.
Communication in statistics- Theory and Methods, vol.
35 (12), 2223-2237.
Pandya, M. and Jadav, P. (2009). Bayes Estimation of Change
Point in Non-standard Mixture distribution of Inverse
Weibull with Unknown Proportions. Journal of the Indian
Statistical Association. Vol.47 (1), 41-62.
Pearl, R. (1925). Vital statistics of the National Academy of Sciences. Proceedings of the National Academy of Sciences of the U.S.A., 11, 752-768.
Smith, A. F. M. (1975). A Bayesian approach to inference
about a change point in a sequence of random variables.
Biometrika, 62, 407-416.
Varian, H. R. (1975). "A Bayesian approach to real estate assessment," in Studies in Bayesian Econometrics and Statistics in Honor of Leonard J. Savage (Fienberg and Zellner, Eds.), North-Holland, Amsterdam, 195-208.
Zacks, S. (1983). Survey of classical and Bayesian approaches to the change point problem: fixed sample and sequential procedures for testing and estimation. In Recent Advances in Statistics: Herman Chernoff Festschrift, Academic Press, New York, 245-269.