New confidence intervals for the AR(1) parameter
Ferebee Tunno and Ashton Erwin

Involve, a journal of mathematics (msp), vol. 6 (2013), no. 1
dx.doi.org/10.2140/involve.2013.6.53

(Communicated by Robert B. Lund)

This paper presents a new way to construct confidence intervals for the unknown parameter in a first-order autoregressive, or AR(1), time series. Typically, one might construct such an interval by centering it around the ordinary least-squares estimator, but this new method instead centers the interval around a linear combination of a weighted least-squares estimator and the sample autocorrelation function at lag one. When the sample size is small and the parameter has magnitude closer to zero than one, this new approach tends to result in a slightly thinner interval with at least as much coverage.

1. Introduction

Consider the causal stationary AR(1) time series given by

$X_t = \phi X_{t-1} + \varepsilon_t, \qquad t = 0, \pm 1, \pm 2, \ldots,$  (1-1)

where |φ| < 1, E(X_t) = 0 and {ε_t} is i.i.d. N(0, σ²). We seek a new way to construct confidence intervals for the unknown parameter φ.

If X_1, X_2, …, X_n are sample observations from this process, then a point estimate for φ is found by calculating

$\hat\phi_p = \frac{\sum_{t=2}^n S_{t-1}\,|X_{t-1}|^p\, X_t}{\sum_{t=2}^n |X_{t-1}|^{p+1}},$

where p ∈ {0, 1, 2, …} and S_t is the sign function, defined by

$S_t = \begin{cases} 1 & \text{if } X_t > 0, \\ 0 & \text{if } X_t = 0, \\ -1 & \text{if } X_t < 0. \end{cases}$

The estimator φ̂_p can be thought of as a weighted least-squares estimator with form

$\frac{\sum_{t=2}^n W_{t-1} X_{t-1} X_t}{\sum_{t=2}^n W_{t-1} X_{t-1}^2}$

and weight W_t = |X_t|^{p−1}.

MSC2010: 60G10, 62F12, 62F99, 62M10.
Keywords: confidence interval, autoregressive parameter, weighted least squares, linear combination.


Note that, when p = 1, we get the ordinary (unweighted) least-squares estimator (OLSE), and when p = 0, we get what has come to be called the Cauchy estimator:

$\hat\phi_1 = \frac{\sum_{t=2}^n X_{t-1} X_t}{\sum_{t=2}^n X_{t-1}^2} \;\text{(OLSE)}, \qquad \hat\phi_0 = \frac{\sum_{t=2}^n S_{t-1} X_t}{\sum_{t=2}^n |X_{t-1}|} \;\text{(Cauchy)}.$
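As a concrete illustration (not taken from the paper; the function names, seed, and burn-in length are our choices), the following Python sketch simulates the AR(1) series in (1-1) and evaluates φ̂_p, with p = 1 and p = 0 recovering the OLSE and the Cauchy estimator.

```python
# A minimal sketch, not the authors' code: simulate the AR(1) series in (1-1)
# and evaluate phi_hat_p.  Function names, the seed, and the burn-in length
# are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, n, sigma=1.0, burn=100):
    """Simulate a mean-zero causal AR(1) series X_t = phi*X_{t-1} + eps_t."""
    eps = rng.normal(0.0, sigma, size=n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + eps[t]
    return x[burn:]  # drop the burn-in so the start-up value has little effect

def phi_hat(x, p):
    """phi_hat_p = sum S_{t-1}|X_{t-1}|^p X_t / sum |X_{t-1}|^{p+1}."""
    lag, cur = x[:-1], x[1:]
    return np.sum(np.sign(lag) * np.abs(lag) ** p * cur) / np.sum(np.abs(lag) ** (p + 1))

x = simulate_ar1(phi=0.4, n=50)
print("OLSE   (p = 1):", phi_hat(x, 1))
print("Cauchy (p = 0):", phi_hat(x, 0))
```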

The OLSE has been studied since the time of Gauss and its optimal properties for linear models are well known. The eponymously named Cauchy estimator dates back to about the same time and is sometimes used as a surrogate for the OLSE. Traditionally, confidence intervals for φ have been centered around the OLSE, although So and Shin [1999] and Phillips, Park and Chang [Phillips et al. 2004] showed that the Cauchy estimator has certain advantages over the OLSE when dealing with a unit root autoregression. Gallagher and Tunno [2008] constructed a confidence interval for φ centered around a linear combination of both estimators.

Another point estimate for φ comes from the sample autocorrelation function of {X_t} at lag one, given by

$\hat\rho(1) = \frac{\sum_{t=2}^n X_{t-1} X_t}{\sum_{t=1}^n X_t^2}.$

The autocovariance function of {X_t} at lag h for an AR(1) series is given by γ(h) = Cov(X_t, X_{t+h}) = φ^{|h|} σ²/(1 − φ²), which makes the true lag-one autocorrelation function equal to

$\rho(1) = \frac{\gamma(1)}{\gamma(0)} = \frac{\phi\sigma^2/(1-\phi^2)}{\sigma^2/(1-\phi^2)} = \phi.$

Observe that the structure of ρ̂(1) is similar to that of the OLSE. In fact, for an AR(1) series, the Yule–Walker, maximum likelihood, and least-squares estimators for φ are all approximately the same [Shumway and Stoffer 2006, Section 3.6]. Note also that, in general, if {X_t} is not mean-zero, we would subtract X̄ from each observation when calculating things like ρ̂(h) and φ̂_p.
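A minimal sketch of ρ̂(1) follows, with an optional demeaning step reflecting the remark above; the names and toy input are illustrative.

```python
# Minimal sketch of the lag-one sample autocorrelation rho_hat(1); the demean
# flag reflects the remark about series that are not mean-zero.
import numpy as np

def rho_hat1(x, demean=False):
    x = np.asarray(x, dtype=float)
    if demean:
        x = x - x.mean()  # subtract the sample mean when {X_t} is not mean-zero
    return np.sum(x[:-1] * x[1:]) / np.sum(x ** 2)

print(rho_hat1([0.3, -0.1, 0.4, 0.2, -0.5, 0.1]))
```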

To get a feel for how φ̂_0, φ̂_1 and ρ̂(1) behave relative to one another, Figure 1 shows their empirical bias and mean squared error (MSE) when φ ∈ (−1, 1) and n = 50. The Cauchy estimator has the lowest absolute bias, and ρ̂(1) has the smallest MSE for parameter values (roughly) between −0.5 and 0.5, while the OLSE has the smallest MSE elsewhere. Other simulations not shown here reveal that the MSE and absolute bias of φ̂_p keep growing as p gets larger.
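A Monte Carlo sketch in the spirit of Figure 1 might look as follows; the replication count and the grid of φ values are illustrative choices, not the paper's exact settings.

```python
# Monte Carlo sketch in the spirit of Figure 1: empirical bias and MSE of
# phi_hat_0 (Cauchy), phi_hat_1 (OLSE) and rho_hat(1).  Settings are
# illustrative, not the paper's exact ones.
import numpy as np

rng = np.random.default_rng(1)

def simulate_ar1(phi, n, burn=100):
    eps = rng.normal(size=n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + eps[t]
    return x[burn:]

def phi_hat(x, p):
    lag, cur = x[:-1], x[1:]
    return np.sum(np.sign(lag) * np.abs(lag) ** p * cur) / np.sum(np.abs(lag) ** (p + 1))

def rho_hat1(x):
    return np.sum(x[:-1] * x[1:]) / np.sum(x ** 2)

n, reps = 50, 1000
for phi in (-0.8, -0.4, 0.0, 0.4, 0.8):
    est = np.array([[phi_hat(x, 0), phi_hat(x, 1), rho_hat1(x)]
                    for x in (simulate_ar1(phi, n) for _ in range(reps))])
    bias = est.mean(axis=0) - phi
    mse = ((est - phi) ** 2).mean(axis=0)
    print(f"phi = {phi:+.1f}  bias (Cauchy, OLSE, rho):", np.round(bias, 3),
          " MSE:", np.round(mse, 3))
```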

The goal of this paper is to construct a confidence interval for φ centered around a linear combination of an arbitrary weighted least-squares estimator and the sample autocorrelation function at lag one. That is, the center will take the form

$a_1\hat\phi_p + a_2\hat\rho(1),$  (1-2)

where a_1 + a_2 = 1 and p ≠ 1.

Figure 1. Empirical bias (left) and mean squared error (right) of φ̂_0, φ̂_1 and ρ̂(1) for φ ∈ (−1, 1); 10,000 simulations were run for each parameter value, with distribution N(0, 1) and n = 50.

We first, however, need to take a brief look at how intervals centered around a single estimator behave in order to find a proper target for our new interval to outperform.

Theorem 2.1 from [Gallagher and Tunno 2008] states that for the AR(1) series given in (1-1), we have

$\sqrt{n}\,(\hat\phi_p - \phi) \xrightarrow{D} N\!\left(0,\; \frac{\sigma^2\, E(X_t^{2p})}{\left(E|X_t|^{p+1}\right)^2}\right)$  (1-3)

for all p such that E(|X_t|^r) < ∞, where r = max(2p, p + 1). Since the error terms in our series are normal, the X_t's have finite moments of all orders. Thus, this theorem can be used to create confidence intervals for φ centered at φ̂_p for any choice of p.
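As a rough numerical sanity check of (1-3) (ours, not part of the paper), one can compare the sample variance of √n(φ̂_p − φ) against the stated limit, approximating the moments from a long simulated series; all settings below are illustrative.

```python
# Rough numerical check of the limit in (1-3): the sample variance of
# sqrt(n)*(phi_hat_p - phi) should be close to sigma^2 E(X^{2p}) / (E|X|^{p+1})^2
# for large n.  The moments are approximated from one long simulated series.
import numpy as np

rng = np.random.default_rng(2)
phi, sigma, n, p, reps = 0.4, 1.0, 1000, 2, 1000

def simulate_ar1(phi, n, burn=200):
    eps = rng.normal(0.0, sigma, size=n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + eps[t]
    return x[burn:]

def phi_hat(x, p):
    lag, cur = x[:-1], x[1:]
    return np.sum(np.sign(lag) * np.abs(lag) ** p * cur) / np.sum(np.abs(lag) ** (p + 1))

draws = np.array([np.sqrt(n) * (phi_hat(simulate_ar1(phi, n), p) - phi)
                  for _ in range(reps)])

xlong = simulate_ar1(phi, 100_000)  # used only to approximate the moments
theory = sigma ** 2 * np.mean(np.abs(xlong) ** (2 * p)) / np.mean(np.abs(xlong) ** (p + 1)) ** 2
print("empirical variance:", draws.var(), "  theoretical limit:", theory)
```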

Specifically, if X_1, X_2, …, X_n are sample observations from (1-1), then an approximate (1 − α) × 100% confidence interval for φ has endpoints

$\hat\phi_p \pm z_{\alpha/2}\sqrt{\widehat{\mathrm{Var}}(\hat\phi_p)},$

where

$n\,\widehat{\mathrm{Var}}(\hat\phi_p) = \frac{\hat\sigma^2\, n^{-1}\sum_{t=2}^n X_{t-1}^{2p}}{\left(n^{-1}\sum_{t=2}^n |X_{t-1}|^{p+1}\right)^2} \xrightarrow{P} \frac{\sigma^2\, E(X_{t-1}^{2p})}{\left(E|X_{t-1}|^{p+1}\right)^2}$

and z_{α/2} is the standard normal critical value with area α/2 to its right.

Similarly, we can create confidence intervals for φ centered at ρ̂(1). If we think of ρ̂(1) as being nearly the equivalent of the OLSE, then an approximate (1 − α) × 100% confidence interval for φ has endpoints

$\hat\rho(1) \pm z_{\alpha/2}\sqrt{\widehat{\mathrm{Var}}(\hat\rho(1))},$

where

$n\,\widehat{\mathrm{Var}}(\hat\rho(1)) = \frac{n\hat\sigma^2}{\sum_{t=1}^n X_t^2} \xrightarrow{P} \frac{\sigma^2}{E(X_t^2)}.$

Figure 2. Empirical coverage capability (left) and length (right) of 95% confidence intervals for φ centered at φ̂_0, φ̂_1 and ρ̂(1) for φ ∈ (−1, 1); 10,000 simulations were run for each parameter value, with distribution N(0, 1) and n = 50.

Figure 2 shows the empirical coverage capability and length of 95% confidence intervals for φ centered at φ̂_0, φ̂_1 and ρ̂(1) when φ ∈ (−1, 1) and n = 50. The thinnest intervals occur when ρ̂(1) is used, although not by much. The OLSE also has the best overall coverage, except (roughly) for |φ| ≤ 0.5, which is where ρ̂(1) once again outperforms the OLSE. Other simulations not shown here reveal that the length of intervals centered at φ̂_p keeps growing as p gets larger, while coverage capability starts to break down for |φ| near 1.
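To make the single-estimator intervals above concrete, here is a minimal Python sketch (not the authors' code). The residual-based choice of σ̂² is our assumption, since the excerpt does not spell out that plug-in, and the short simulated series is only illustrative.

```python
# Minimal sketch of the single-estimator intervals; the residual-based choice
# of sigma_hat^2 and the toy simulated series are our assumptions.
import numpy as np
from statistics import NormalDist

def phi_hat(x, p):
    lag, cur = x[:-1], x[1:]
    return np.sum(np.sign(lag) * np.abs(lag) ** p * cur) / np.sum(np.abs(lag) ** (p + 1))

def sigma2_hat(x):
    return np.mean((x[1:] - phi_hat(x, 1) * x[:-1]) ** 2)  # AR(1) residual variance (assumed plug-in)

def ci_phi_p(x, p, alpha=0.05):
    lag = x[:-1]
    var = sigma2_hat(x) * np.sum(np.abs(lag) ** (2 * p)) / np.sum(np.abs(lag) ** (p + 1)) ** 2
    z = NormalDist().inv_cdf(1 - alpha / 2)
    c = phi_hat(x, p)
    return c - z * np.sqrt(var), c + z * np.sqrt(var)

def ci_rho1(x, alpha=0.05):
    rho = np.sum(x[:-1] * x[1:]) / np.sum(x ** 2)
    var = sigma2_hat(x) / np.sum(x ** 2)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return rho - z * np.sqrt(var), rho + z * np.sqrt(var)

rng = np.random.default_rng(3)
x = np.zeros(51)
for t in range(1, 51):
    x[t] = 0.3 * x[t - 1] + rng.normal()
x = x[1:]
print("95% CI centered at the OLSE:  ", ci_phi_p(x, 1))
print("95% CI centered at rho_hat(1):", ci_rho1(x))
```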

In this paper, we will aim to construct intervals with center (1-2) that outperform those centered at the OLSE. The next section shows the details of this construction, while Section 3 presents some simulations. Section 4 closes the paper with an application and some remarks.

2. Interval construction

Suppose for the moment that we wish to construct a confidence interval for φ centered at a linear combination of two weighted least-squares estimators. That is, instead of (1-2), the center would take the form

$a_1\hat\phi_p + a_2\hat\phi_q,$  (2-1)

where a_1 + a_2 = 1 and p ≠ q. Minimizing the variance of this quantity is equivalent to minimizing the length of the corresponding interval and occurs when

$a_1 = \frac{\mathrm{Var}(\hat\phi_q) - \mathrm{Cov}(\hat\phi_p, \hat\phi_q)}{\mathrm{Var}(\hat\phi_p - \hat\phi_q)}.$  (2-2)

Theorem 2.1. Let a_1 + a_2 = 1. If a_1 is given by (2-2), then Var(a_1φ̂_p + a_2φ̂_q) is minimized and has upper bound Var(φ̂_q).

Proof. Let

$f(a_1) = \mathrm{Var}\!\left(a_1\hat\phi_p + (1 - a_1)\hat\phi_q\right)
        = a_1^2\,\mathrm{Var}(\hat\phi_p) + (1 - a_1)^2\,\mathrm{Var}(\hat\phi_q) + 2a_1(1 - a_1)\,\mathrm{Cov}(\hat\phi_p, \hat\phi_q)
        = a_1^2\,\mathrm{Var}(\hat\phi_p - \hat\phi_q) + 2a_1\left(\mathrm{Cov}(\hat\phi_p, \hat\phi_q) - \mathrm{Var}(\hat\phi_q)\right) + \mathrm{Var}(\hat\phi_q).$

Then

$f'(a_1) = 2a_1\,\mathrm{Var}(\hat\phi_p - \hat\phi_q) + 2\left(\mathrm{Cov}(\hat\phi_p, \hat\phi_q) - \mathrm{Var}(\hat\phi_q)\right) = 0
\;\Rightarrow\; a_1 = \frac{\mathrm{Var}(\hat\phi_q) - \mathrm{Cov}(\hat\phi_p, \hat\phi_q)}{\mathrm{Var}(\hat\phi_p - \hat\phi_q)}.$

Since $f''(a_1) = 2\,\mathrm{Var}(\hat\phi_p - \hat\phi_q) > 0$, this critical value minimizes f. Note that this means

$a_2 = 1 - a_1 = \frac{\mathrm{Var}(\hat\phi_p) - \mathrm{Cov}(\hat\phi_p, \hat\phi_q)}{\mathrm{Var}(\hat\phi_p - \hat\phi_q)},$

where the choices of p and q determine the ranges of a_1 and a_2. Specifically, we have

$\mathrm{Var}(\hat\phi_p) > \mathrm{Var}(\hat\phi_q) \iff a_1 < 0.5 \text{ and } a_2 > 0.5,$
$\mathrm{Var}(\hat\phi_q) > \mathrm{Var}(\hat\phi_p) \iff a_1 > 0.5 \text{ and } a_2 < 0.5,$
$\mathrm{Var}(\hat\phi_p) = \mathrm{Var}(\hat\phi_q) \iff a_1 = a_2 = 0.5.$

Finally, since the critical value found above minimizes f, we have f(a_1) ≤ f(0), which is equivalent to saying

$\mathrm{Var}\!\left(a_1\hat\phi_p + (1 - a_1)\hat\phi_q\right) \le \mathrm{Var}(\hat\phi_q),$

where the inequality is strict for a_1 ≠ 0. □
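A small Monte Carlo illustration of Theorem 2.1 (ours, with illustrative settings): estimate the variances and covariance of φ̂_p and φ̂_q from repeated samples, form a_1 as in (2-2), and verify that the combination's variance does not exceed Var(φ̂_q).

```python
# Monte Carlo illustration of Theorem 2.1.  Settings are illustrative.
import numpy as np

rng = np.random.default_rng(4)

def simulate_ar1(phi, n, burn=100):
    eps = rng.normal(size=n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + eps[t]
    return x[burn:]

def phi_hat(x, p):
    lag, cur = x[:-1], x[1:]
    return np.sum(np.sign(lag) * np.abs(lag) ** p * cur) / np.sum(np.abs(lag) ** (p + 1))

phi, n, reps, p, q = 0.3, 50, 5000, 0, 1
est = np.array([[phi_hat(x, p), phi_hat(x, q)]
                for x in (simulate_ar1(phi, n) for _ in range(reps))])
c = np.cov(est.T)  # 2x2 covariance matrix of (phi_hat_p, phi_hat_q)
a1 = (c[1, 1] - c[0, 1]) / (c[0, 0] + c[1, 1] - 2 * c[0, 1])  # equation (2-2)
combo_var = np.var(a1 * est[:, 0] + (1 - a1) * est[:, 1])
print("a1 =", round(a1, 3), " Var(combo) =", combo_var, " Var(phi_hat_q) =", c[1, 1])
```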

We would like for the variance of a_1φ̂_p + a_2φ̂_q to be less than or equal to that of the OLSE. Setting q = 1 makes this happen, since Theorem 2.1 tells us that

$\mathrm{Var}(a_1\hat\phi_p + a_2\hat\phi_1) \le \mathrm{Var}(\hat\phi_1).$

It turns out, however, that the window where these two variances are distinguishable may be brief, since a_1 goes to zero as the sample size increases. This in turn causes a_1φ̂_p + a_2φ̂_1 to be asymptotically normal.

Theorem 2.2. Let a_1 + a_2 = 1. If a_1 is given by (2-2) with q = 1, then

$\sqrt{n}\,(a_1\hat\phi_p + a_2\hat\phi_1 - \phi) \xrightarrow{D} N\!\left(0,\; \frac{\sigma^2}{E(X_t^2)}\right).$

Proof. First, we note that

$n\,\mathrm{Cov}(\hat\phi_p, \hat\phi_q) \xrightarrow{P} \frac{\sigma^2\, E|X_t|^{p+q}}{E|X_t|^{p+1}\, E|X_t|^{q+1}}.$

Then

$a_1 = \frac{\mathrm{Var}(\hat\phi_1) - \mathrm{Cov}(\hat\phi_p, \hat\phi_1)}{\mathrm{Var}(\hat\phi_p - \hat\phi_1)}
     = \frac{n\,\mathrm{Var}(\hat\phi_1) - n\,\mathrm{Cov}(\hat\phi_p, \hat\phi_1)}{n\,\mathrm{Var}(\hat\phi_p) + n\,\mathrm{Var}(\hat\phi_1) - 2n\,\mathrm{Cov}(\hat\phi_p, \hat\phi_1)}
     \xrightarrow{P} \frac{\dfrac{\sigma^2}{E|X_t|^2} - \dfrac{\sigma^2}{E|X_t|^2}}{\dfrac{\sigma^2 E|X_t|^{2p}}{(E|X_t|^{p+1})^2} + \dfrac{\sigma^2}{E|X_t|^2} - \dfrac{2\sigma^2}{E|X_t|^2}}
     = \frac{0}{\dfrac{\sigma^2 E|X_t|^{2p}}{(E|X_t|^{p+1})^2} - \dfrac{\sigma^2}{E|X_t|^2}} =: R.$

The denominator of R is strictly positive since

$\operatorname*{plim}_{n\to\infty} n\,\mathrm{Var}(\hat\phi_p) > \operatorname*{plim}_{n\to\infty} n\,\mathrm{Var}(\hat\phi_1) \quad\text{for } p \neq 1.$

Thus, R = 0. Since $a_1 \xrightarrow{P} 0$, we obtain $a_2 \xrightarrow{P} 1$. Hence, a_1φ̂_p + a_2φ̂_1 and φ̂_1 have the same asymptotic distribution. By (1-3), we have

$\sqrt{n}\,(\hat\phi_1 - \phi) \xrightarrow{D} N\!\left(0,\; \frac{\sigma^2}{E(X_t^2)}\right).$

Thus, $\sqrt{n}\,(a_1\hat\phi_p + a_2\hat\phi_1 - \phi) \xrightarrow{D} N\!\left(0,\; \frac{\sigma^2}{E(X_t^2)}\right)$ as well. □
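The key step of the proof, that a_1 shrinks toward zero with the sample size when q = 1, can be illustrated numerically; the sketch below (ours, with illustrative settings) recomputes a Monte Carlo version of (2-2) for increasing n.

```python
# Numerical illustration of the key step in the proof: with q = 1, the optimal
# weight a_1 from (2-2) shrinks toward zero as n grows.  Settings are illustrative.
import numpy as np

rng = np.random.default_rng(5)

def simulate_ar1(phi, n, burn=100):
    eps = rng.normal(size=n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + eps[t]
    return x[burn:]

def phi_hat(x, p):
    lag, cur = x[:-1], x[1:]
    return np.sum(np.sign(lag) * np.abs(lag) ** p * cur) / np.sum(np.abs(lag) ** (p + 1))

phi, p, reps = 0.3, 0, 2000
for n in (25, 100, 400):
    est = np.array([[phi_hat(x, p), phi_hat(x, 1)]
                    for x in (simulate_ar1(phi, n) for _ in range(reps))])
    c = np.cov(est.T)
    a1 = (c[1, 1] - c[0, 1]) / (c[0, 0] + c[1, 1] - 2 * c[0, 1])
    print(f"n = {n:4d}   a1 ~ {a1:.3f}")
```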

If X_1, X_2, …, X_n are sample observations from (1-1), then an approximate (1 − α) × 100% confidence interval for φ centered at a_1φ̂_p + a_2φ̂_1 has endpoints

$a_1\hat\phi_p + a_2\hat\phi_1 \pm z_{\alpha/2}\sqrt{\widehat{\mathrm{Var}}(a_1\hat\phi_p + a_2\hat\phi_1)}.$


Letting

$\hat\sigma_i^2 = \frac{\hat\sigma^2 \sum_{t=2}^n |X_{t-1}|^{2i}}{\left(\sum_{t=2}^n |X_{t-1}|^{i+1}\right)^2}, \qquad
 \hat\sigma_{ij} = \frac{\hat\sigma^2 \sum_{t=2}^n |X_{t-1}|^{i+j}}{\sum_{t=2}^n |X_{t-1}|^{i+1} \sum_{t=2}^n |X_{t-1}|^{j+1}},$

$\hat a_1 = \frac{\hat\sigma_1^2 - \hat\sigma_{p1}}{\hat\sigma_p^2 + \hat\sigma_1^2 - 2\hat\sigma_{p1}}, \qquad
 \hat a_2 = \frac{\hat\sigma_p^2 - \hat\sigma_{p1}}{\hat\sigma_p^2 + \hat\sigma_1^2 - 2\hat\sigma_{p1}},$

we then have

$n\,\widehat{\mathrm{Var}}(\hat a_1\hat\phi_p + \hat a_2\hat\phi_1)
 = n\left(\hat a_1^2\,\widehat{\mathrm{Var}}(\hat\phi_p) + \hat a_2^2\,\widehat{\mathrm{Var}}(\hat\phi_1) + 2\hat a_1\hat a_2\,\widehat{\mathrm{Cov}}(\hat\phi_p, \hat\phi_1)\right)
 = \frac{n\left(\hat\sigma_1^2\hat\sigma_p^2 - \hat\sigma_{p1}^2\right)}{\hat\sigma_1^2 + \hat\sigma_p^2 - 2\hat\sigma_{p1}}
 \xrightarrow{P} \frac{\sigma^2}{E(X_t^2)}.$

However, observe that $\hat\sigma_{p1} = \hat\sigma_1^2$, which implies that $\widehat{\mathrm{Var}}(\hat a_1\hat\phi_p + \hat a_2\hat\phi_1) = \widehat{\mathrm{Var}}(\hat\phi_1)$. Thus, our choice of asymptotic estimators when q = 1 has the unintended consequence of causing our interval to be equivalent to that of the OLSE.

Herein lies the motive to go with center (1-2) in lieu of center (2-1). By replacing φ̂_1 with ρ̂(1), we avoid this asymptotic equivalence, while preserving some of the desirable properties associated with (2-1). In the upcoming simulations, we also replace σ̂_1² = Var̂(φ̂_1) with

$\widehat{\mathrm{Var}}(\hat\rho(1)) = \frac{\hat\sigma^2}{\sum_{t=1}^n X_t^2},$

but retain σ̂_{p1} = Cov̂(φ̂_p, φ̂_1).
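The following sketch (ours, not the authors' code) assembles the proposed interval from these plug-in quantities, with σ̂_1² replaced by Var̂(ρ̂(1)) and σ̂_{p1} retained as described. The residual-based σ̂² and the use of the standard linear-combination variance for the final margin of error are assumptions on our part; the excerpt does not spell out those details.

```python
# Sketch of the proposed interval centered at a1_hat*phi_hat_p + a2_hat*rho_hat(1),
# using the plug-in quantities above with sigma_hat_1^2 replaced by
# Var_hat(rho_hat(1)) and sigma_hat_p1 retained.  The residual-based sigma_hat^2
# and the linear-combination variance for the margin of error are our assumptions.
import numpy as np
from statistics import NormalDist

def phi_hat(x, p):
    lag, cur = x[:-1], x[1:]
    return np.sum(np.sign(lag) * np.abs(lag) ** p * cur) / np.sum(np.abs(lag) ** (p + 1))

def new_interval(x, p=0, alpha=0.05):
    lag = x[:-1]
    s2 = np.mean((x[1:] - phi_hat(x, 1) * lag) ** 2)                 # sigma_hat^2 (assumed plug-in)
    var_p = s2 * np.sum(np.abs(lag) ** (2 * p)) / np.sum(np.abs(lag) ** (p + 1)) ** 2   # sigma_hat_p^2
    var_r = s2 / np.sum(x ** 2)                                      # Var_hat(rho_hat(1))
    cov_p1 = s2 * np.sum(np.abs(lag) ** (p + 1)) / (np.sum(np.abs(lag) ** (p + 1)) * np.sum(lag ** 2))  # sigma_hat_p1
    a1 = (var_r - cov_p1) / (var_p + var_r - 2 * cov_p1)
    a2 = (var_p - cov_p1) / (var_p + var_r - 2 * cov_p1)             # a1 + a2 = 1
    rho = np.sum(lag * x[1:]) / np.sum(x ** 2)
    center = a1 * phi_hat(x, p) + a2 * rho
    var_c = a1 ** 2 * var_p + a2 ** 2 * var_r + 2 * a1 * a2 * cov_p1
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return center - z * np.sqrt(var_c), center + z * np.sqrt(var_c)

rng = np.random.default_rng(6)
x = np.zeros(51)
for t in range(1, 51):
    x[t] = 0.3 * x[t - 1] + rng.normal()
print("95% interval centered at a1*phi_hat_0 + a2*rho_hat(1):", new_interval(x[1:], p=0))
```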

3. Simulations

We now look at the length and coverage capability of 95% confidence intervals for φ centered at the OLSE and â_1φ̂_p + â_2ρ̂(1) for various p ≠ 1. Each figure reflects 10,000 simulation runs of n = 50 independent observations with distribution N(0, 1).
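A compact, self-contained coverage experiment in the spirit of this section might look as follows; the single φ value, the replication count, and the residual-based σ̂² are illustrative choices rather than the paper's exact settings.

```python
# Coverage/length experiment comparing the OLSE interval with the proposed one.
# Settings and plug-ins are illustrative, not the paper's exact code.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(7)
Z = NormalDist().inv_cdf(0.975)

def sim(phi, n=50, burn=100):
    e = rng.normal(size=n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + e[t]
    return x[burn:]

def intervals(x, p=0):
    lag, cur = x[:-1], x[1:]
    def wls(k):
        return np.sum(np.sign(lag) * np.abs(lag) ** k * cur) / np.sum(np.abs(lag) ** (k + 1))
    rho = np.sum(lag * cur) / np.sum(x ** 2)
    s2 = np.mean((cur - wls(1) * lag) ** 2)                 # residual sigma_hat^2 (assumed)
    v1 = s2 / np.sum(lag ** 2)                              # Var_hat(phi_hat_1), also sigma_hat_p1
    vp = s2 * np.sum(np.abs(lag) ** (2 * p)) / np.sum(np.abs(lag) ** (p + 1)) ** 2
    vr = s2 / np.sum(x ** 2)                                # Var_hat(rho_hat(1))
    a1 = (vr - v1) / (vp + vr - 2 * v1)
    a2 = 1 - a1
    olse = wls(1)
    ols_ci = (olse - Z * np.sqrt(v1), olse + Z * np.sqrt(v1))
    c = a1 * wls(p) + a2 * rho
    vc = a1 ** 2 * vp + a2 ** 2 * vr + 2 * a1 * a2 * v1
    new_ci = (c - Z * np.sqrt(vc), c + Z * np.sqrt(vc))
    return ols_ci, new_ci

phi, reps = 0.3, 2000
hits, lens = np.zeros(2), np.zeros(2)
for _ in range(reps):
    for k, (lo, hi) in enumerate(intervals(sim(phi))):
        hits[k] += (lo <= phi <= hi)
        lens[k] += hi - lo
print("coverage (OLSE, new):", hits / reps, "  mean length:", lens / reps)
```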

In Figure 3, top, we see that the â_1φ̂_0 + â_2ρ̂(1) interval has at least as much coverage as the OLSE interval when (roughly) |φ| ≤ 0.5. The â_1φ̂_0 + â_2ρ̂(1) interval is also slightly shorter over this same region. In Figure 3, bottom, p has increased to 2, but the coverage of the â_1φ̂_2 + â_2ρ̂(1) interval has degenerated with no meaningful difference in interval lengths.

In Figure 4, p has increased to 3 (top two graphs) and 4 (middle row of graphs). The â_1φ̂_3 + â_2ρ̂(1) and â_1φ̂_4 + â_2ρ̂(1) intervals both return to the performance level of the â_1φ̂_0 + â_2ρ̂(1) interval, with (roughly) |φ| ≤ 0.5 again being the domain of interest.

Figure 3. Top row: empirical coverage capability (left) and length (right) of 95% confidence intervals for φ centered at the OLSE and â_1φ̂_0 + â_2ρ̂(1) for φ ∈ (−1, 1). Bottom row: the same, for â_1φ̂_2 + â_2ρ̂(1).

In Figure 4, bottom, we create intervals whose centers are simply unweighted averages of the OLSE and the sample correlation coefficient. That is, the intervals' endpoints take the form

$0.5\hat\phi_1 + 0.5\hat\rho(1) \pm 1.96\sqrt{\widehat{\mathrm{Var}}\!\left(0.5\hat\phi_1 + 0.5\hat\rho(1)\right)}.$

Using the fact that $2\,\widehat{\mathrm{Cov}}(\hat\phi_1, \hat\rho(1)) \approx \widehat{\mathrm{Var}}(\hat\phi_1) + \widehat{\mathrm{Var}}(\hat\rho(1))$, this is approximately

$0.5\hat\phi_1 + 0.5\hat\rho(1) \pm 1.96\sqrt{0.5\left(\widehat{\mathrm{Var}}(\hat\phi_1) + \widehat{\mathrm{Var}}(\hat\rho(1))\right)}.$

There is no significant difference between the 0.5φ̂_1 + 0.5ρ̂(1) and OLSE intervals.

Figure 4. Top row: empirical coverage capability (left) and length (right) of 95% confidence intervals for φ centered at the OLSE and â_1φ̂_3 + â_2ρ̂(1) for φ ∈ (−1, 1). Middle row: the same, for â_1φ̂_4 + â_2ρ̂(1). Bottom row: the same, for 0.5φ̂_1 + 0.5ρ̂(1).


4. Closing remarks

The performance of the confidence interval centered at â_1φ̂_p + â_2ρ̂(1) presented in this paper is modest, but not unimportant. For parameter values (roughly) between −0.5 and 0.5, its coverage tends to be at least as good as that of the OLSE interval while having a slightly smaller margin of error. This interval also does not require a large sample size, which can be good for certain practical purposes.

For example, consider the daily stock prices for Exxon Mobil Corporation during the fall quarter of 2011 (i.e., September 23 to December 21). A reasonable model for this time series is an ARIMA(1, 1, 0), where {X_t} ∼ ARIMA(p, 1, q) implies {X_t − X_{t−1}} ∼ ARMA(p, q). Thus, if X_t stands for the price at time t and Y_t = X_t − X_{t−1}, it follows that {Y_t} ∼ AR(1) with estimated model Y_t = −0.0444 Y_{t−1} + ε_t. Both the {X_t} and {Y_t} processes are shown in Figure 5.

Figure 5. The original (left) and differenced (right) stock prices for Exxon Mobil (XOM) from 9/23/11 to 12/21/11. The sample sizes are 63 and 62, respectively.
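A sketch of this workflow appears below. The price vector is a synthetic placeholder (the actual XOM closing prices are not reproduced here), so the resulting numbers will not match the table that follows; only the steps (differencing, demeaning, and forming the single-estimator intervals) are meant to be illustrative.

```python
# Sketch of this workflow with a synthetic placeholder for the 63 closing
# prices; the actual XOM data are not reproduced here, so these numbers will
# not match the table below.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(8)
prices = 75 + np.cumsum(rng.normal(scale=0.8, size=63))   # placeholder price path
y = np.diff(prices)                                       # Y_t = X_t - X_{t-1}, length 62
y = y - y.mean()                                          # demean, as noted in the introduction

lag, cur = y[:-1], y[1:]
phi1 = np.sum(lag * cur) / np.sum(lag ** 2)               # OLSE for the AR(1) parameter of {Y_t}
rho1 = np.sum(lag * cur) / np.sum(y ** 2)                 # lag-one sample autocorrelation
s2 = np.mean((cur - phi1 * lag) ** 2)                     # residual variance (assumed plug-in)
z = NormalDist().inv_cdf(0.975)
se_olse, se_rho = np.sqrt(s2 / np.sum(lag ** 2)), np.sqrt(s2 / np.sum(y ** 2))
print("phi_hat_1 =", round(phi1, 4))
print("OLSE interval:  ", (phi1 - z * se_olse, phi1 + z * se_olse))
print("rho(1) interval:", (rho1 - z * se_rho, rho1 + z * se_rho))
```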

If we supplement this model with 95% confidence intervals for φ, we get the following:

Center                     Interval                     Length
φ̂_1                       (−0.2089871, 0.1719117)      0.3808988
ρ̂(1)                      (−0.2076522, 0.1710108)      0.3786630
0.5φ̂_1 + 0.5ρ̂(1)          (−0.2083205, 0.1714621)      0.3797825
â_1φ̂_0 + â_2ρ̂(1)          (−0.2036185, 0.1749948)      0.3786133
â_1φ̂_2 + â_2ρ̂(1)          (−0.2127095, 0.1658015)      0.3785110
â_1φ̂_3 + â_2ρ̂(1)          (−0.2100079, 0.1686098)      0.3786177
â_1φ̂_4 + â_2ρ̂(1)          (−0.2092005, 0.1694378)      0.3786383


All seven intervals contain the point estimate φ̂ = −0.0444, but the last four are slightly thinner than the first three.

One extension of the research presented in this paper would be to create confidence intervals centered around a linear combination of an arbitrary number of weighted least-squares estimators. For example, it can be shown that the variance of a_1φ̂_p + a_2φ̂_q + a_3φ̂_r is minimized when

$a_1 = \frac{\sigma^*_{qr}(\sigma_r^2 - \sigma_{pr}) + M(\sigma_r^2 - \sigma_{qr})}{\sigma^*_{pr}\sigma^*_{qr} - M^2}, \qquad
 a_2 = \frac{\sigma^*_{pr}(\sigma_r^2 - \sigma_{qr}) + M(\sigma_r^2 - \sigma_{pr})}{\sigma^*_{pr}\sigma^*_{qr} - M^2},$

and a_3 = 1 − a_1 − a_2, where σ_i² = Var(φ̂_i), σ_{ij} = Cov(φ̂_i, φ̂_j), σ*_{ij} = Var(φ̂_i − φ̂_j), and M = σ_{pr} + σ_{qr} − σ_{pq} − σ_r². However, once the number of estimators in the center goes beyond two, the work required to construct and analyze the interval may outweigh any benefits it would bestow.
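As a quick numerical companion (ours, not the paper's method or code), one can obtain these minimum-variance weights without the closed form by estimating the 3×3 covariance matrix of (φ̂_p, φ̂_q, φ̂_r) by Monte Carlo and solving the constrained minimization directly; the settings below are illustrative.

```python
# Alternative route to the three-estimator weights (not the paper's closed
# form): estimate the covariance matrix of (phi_hat_p, phi_hat_q, phi_hat_r)
# and minimize a'Sigma a subject to a1 + a2 + a3 = 1, whose solution is
# Sigma^{-1} 1 / (1' Sigma^{-1} 1).  Settings are illustrative.
import numpy as np

rng = np.random.default_rng(9)

def simulate_ar1(phi, n, burn=100):
    eps = rng.normal(size=n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + eps[t]
    return x[burn:]

def phi_hat(x, k):
    lag, cur = x[:-1], x[1:]
    return np.sum(np.sign(lag) * np.abs(lag) ** k * cur) / np.sum(np.abs(lag) ** (k + 1))

p, q, r, phi, n, reps = 0, 2, 3, 0.3, 50, 5000
est = np.array([[phi_hat(x, k) for k in (p, q, r)]
                for x in (simulate_ar1(phi, n) for _ in range(reps))])
Sigma = np.cov(est.T)
ones = np.ones(3)
a = np.linalg.solve(Sigma, ones)
a /= ones @ a                        # minimum-variance weights summing to one
print("weights:", np.round(a, 3), "  Var of combination:", a @ Sigma @ a)
```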

Another (less tedious) extension would be to find a new sequence {a_{1,n}} that converges to zero while yielding a linear combination of estimators with smaller MSE than the OLSE. This new combination would still have the same distributional limit as the OLSE and could then serve as the center for another competitive interval for φ. Specifically, if we simply set the standard error equal to the square root of the asymptotic variance of the OLSE, the resulting interval should have length equal to that of the OLSE, but with better coverage capability.

References

[Gallagher and Tunno 2008] C. Gallagher and F. Tunno, "A small sample confidence interval for autoregressive parameters", J. Statist. Plann. Inference 138:12 (2008), 3858–3868. MR 2009m:62257 Zbl 1146.62065

[Phillips et al. 2004] P. C. B. Phillips, J. Y. Park, and Y. Chang, "Nonlinear instrumental variable estimation of an autoregression", J. Econometrics 118:1-2 (2004), 219–246. MR 2004j:62186 Zbl 1033.62085

[Shumway and Stoffer 2006] R. H. Shumway and D. S. Stoffer, Time series analysis and its applications, 2nd ed., Springer, New York, 2006. MR 2007a:62003 Zbl 1096.62088

[So and Shin 1999] B. S. So and D. W. Shin, "Cauchy estimators for autoregressive processes with applications to unit root tests and confidence intervals", Econometric Theory 15:2 (1999), 165–176. MR 2000f:62228 Zbl 0985.62072

Received: 2011-12-31 Revised: 2012-04-26 Accepted: 2012-05-10

[email protected]    Department of Mathematics and Statistics, Arkansas State University, P.O. Box 70, State University, AR 72467, United States

[email protected]    Department of Mathematics and Statistics, Arkansas State University, P.O. Box 70, State University, AR 72467, United States
