Transcript of Jacob_MSE

Page 1: Jacob_MSE

Prediction of time series and nonstationary time series, Maison des Sciences Economiques, Paris, February 10-11, 2012

Conditional Least Squares Estimation in nonlinear and nonstationary stochastic regression models

Christine Jacob, Applied Mathematics and Informatics Unit, INRA, Jouy-en-Josas, France

[email protected]


Page 2: Jacob_MSE

Model

• {Zn}: real stochastic process, may depend on an external process {Un},

$$E(Z_n \mid \mathcal F_{n-1}) = \underbrace{g(\theta_0,\nu_0,\mathcal F_{n-1})}_{\text{model, Lipschitz in }\theta} = \underbrace{g^{(1)}(\theta_0,\mathcal F_{n-1})}_{\text{approximate model}} + \underbrace{g^{(2)}(\theta_0,\nu_0,\mathcal F_{n-1})}_{\text{nuisance ``negligible'' part}} \;\overset{a.s.}{<}\; \infty$$

$$\operatorname{Var}(Z_n \mid \mathcal F_{n-1}) = \sigma^2(\theta_0,\nu_0,\mathcal F_{n-1}) \overset{a.s.}{<} \infty,$$

where $\mathcal F_{n-1} := \{Z_{n-1}, Z_{n-2}, \ldots, U_n, U_{n-1}, \ldots\}$: the observations

$\theta_0 \in \Theta_0 \subset \mathbb R^p$, $p < \infty$: parameter of interest

$\nu_0 \in \mathbb R^q$, $q \le \infty$: nuisance parameter

If $q = \infty$, $\nu_n := g^{(2)}(\theta_0,\nu_0,\mathcal F_{n-1})$; if $q = 0$, $g^{(2)}(\theta_0,\nu_0,\mathcal F_{n-1}) = 0$


Page 3: Jacob_MSE

Goal

• Estimate θ0 from one observed trajectory of {Zn, Un} by Conditional Least Squares:

$$Z_n = g(\theta_0,\nu_0,\mathcal F_{n-1}) + e_n, \quad E(e_n \mid \mathcal F_{n-1}) \overset{a.s.}{=} 0, \quad E(e_n^2 \mid \mathcal F_{n-1}) := \sigma^2(\theta_0,\nu_0,\mathcal F_{n-1})$$

$$\theta_n := \arg\min_\theta S_n(\theta,\nu), \quad S_n(\theta,\nu) := \sum_{k=1}^n \bigl(Z_k - g(\theta,\nu,\mathcal F_{k-1})\bigr)^2 \lambda(\mathcal F_{k-1})$$

or, if $q < \infty$,

$$(\theta_n, \nu_n) := \arg\min_{\theta,\nu} S_n(\theta,\nu), \quad S_n(\theta,\nu) := \sum_{k=1}^n \bigl(Z_k - g(\theta,\nu,\mathcal F_{k-1})\bigr)^2 \lambda(\mathcal F_{k-1})$$

• Asymptotic properties of θn, as n → ∞: strong consistency, asymptotic distribution

Difficulties: stochasticity, nonstationarity, nonlinearity, no explicit expression of θn!

Literature: for strong consistency, the existing sufficient conditions forbid many types of nonstationarity; the asymptotic distribution is known only in very particular cases.
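As an illustration of the estimator just defined (not from the talk), here is a minimal numerical sketch: CLS for an assumed exponential-autoregressive model $Z_n = e^{-\theta_0 Z_{n-1}^2} Z_{n-1} + e_n$, nonlinear in $\theta$, with $q = 0$ and constant weights $\lambda(\mathcal F_{k-1}) = 1$; the model, sample size, and seed are illustrative choices.

```python
# A minimal sketch (not from the talk): CLS for an assumed exponential-AR model
# Z_n = exp(-theta0 * Z_{n-1}^2) * Z_{n-1} + e_n, nonlinear in theta, with q = 0
# (no nuisance parameter) and constant weights lambda(F_{k-1}) = 1.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
theta0, n = 0.5, 5000
Z = np.zeros(n)
for k in range(1, n):
    Z[k] = np.exp(-theta0 * Z[k - 1] ** 2) * Z[k - 1] + rng.normal()

def S_n(theta):
    # S_n(theta) = sum_k (Z_k - g(theta, F_{k-1}))^2; no closed-form minimizer
    pred = np.exp(-theta * Z[:-1] ** 2) * Z[:-1]
    return np.sum((Z[1:] - pred) ** 2)

theta_n = minimize_scalar(S_n, bounds=(0.0, 5.0), method="bounded").x
print(theta_n)  # should land close to theta0 = 0.5 for large n
```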


Page 4: Jacob_MSE

• Optimal properties of θn for a finite n

For q = 0 (no nuisance parameter), optimality if λ(Fk−1) ∝ σ−2(θ0, Fk−1) (Heyde, 1997):

Ex. $p = 1$ $\Longrightarrow$ optimality if $\dot S_n(\theta_0)$ is minimal (in variance) while $\ddot S_n(\theta_0)$ is maximal

$$\Longrightarrow\; \bigl(E(\dot S_n^2(\theta_0))\bigr)\bigl(E(\ddot S_n(\theta_0))\bigr)^{-2} \text{ minimum for } \lambda(\mathcal F_{k-1}) \propto \sigma^{-2}(\theta_0,\mathcal F_{k-1})$$

$\Longrightarrow$ take $\lambda(\mathcal F_{k-1}) \propto \sigma^{-2}(\theta_0,\mathcal F_{k-1})$

Note: If λ(Fk−1) depends on θ, then the consistency of θn is generally not ensured


Page 5: Jacob_MSE

Usefulness of $g^{(2)}(\theta_0,\nu_0,\mathcal F_{n-1}) := \underbrace{g(\theta_0,\nu_0,\mathcal F_{n-1})}_{E(Z_n \mid \mathcal F_{n-1})} - \underbrace{g^{(1)}(\theta_0,\mathcal F_{n-1})}_{\text{approximate model}}$

Assumption: $g^{(2)}(\theta,\nu,\mathcal F_{n-1})$ is A.N. (Asymptotically Negligible):

$$\forall\,\delta > 0, \quad \lim_n \frac{\sup_{\|\theta-\theta_0\|\ge\delta} |g^{(2)}(\theta,\nu,\mathcal F_{n-1}) - g^{(2)}(\theta_0,\nu_0,\mathcal F_{n-1})|}{\inf_{\|\theta-\theta_0\|\ge\delta} |g^{(1)}(\theta,\mathcal F_{n-1}) - g^{(1)}(\theta_0,\mathcal F_{n-1})|} \overset{a.s.}{=} 0$$


Page 6: Jacob_MSE

• $g(\theta_0,\nu_0,\mathcal F_{n-1})$ is more realistic than the simpler model $g^{(1)}(\theta_0,\mathcal F_{n-1})$, but $\nu_n$ may have no asymptotic properties $\Longrightarrow$ study $\theta_n$

• Zn is observed with an observation error ηn, E(ηn|Fn−1) = 0

$$\Longrightarrow\; Z_n^\eta = Z_n + \eta_n \;\Longrightarrow\; E(Z_n^\eta \mid \mathcal F_{n-1}) = E(Z_n \mid \mathcal F_{n-1}) = g(\theta_0,\nu_0,\mathcal F_{n-1})$$

$$g(\theta_0,\nu_0,\mathcal F_{n-1}) = g(\theta_0,\nu_0,\mathcal F^\eta_{n-1}) + g(\theta_0,\nu_0,\mathcal F_{n-1}) - g(\theta_0,\nu_0,\mathcal F^\eta_{n-1})$$

$$= \underbrace{g^{(1)}(\theta_0,\mathcal F^\eta_{n-1})}_{g^{\eta(1)}(\theta_0,\mathcal F^\eta_{n-1})} + \underbrace{g^{(2)}(\theta_0,\nu_0,\mathcal F^\eta_{n-1}) + g(\theta_0,\nu_0,\mathcal F_{n-1}) - g(\theta_0,\nu_0,\mathcal F^\eta_{n-1})}_{g^{\eta(2)}(\theta_0,\nu_0,\mathcal F_{n-1},\mathcal F^\eta_{n-1}) =: \nu_n}$$


Page 7: Jacob_MSE

• Zn depends on the past errors en−1, en−2, . . .

Ex. ARMA$(p, q)$:
$$Z_n = \sum_{j=1}^p \alpha_j Z_{n-j} + \sum_{j=1}^q \beta_j e_{n-j} + e_n \;\Longleftrightarrow\; A(L) Z_n = B(L) e_n,$$

$$A(L) := 1 - \sum_{j=1}^p \alpha_j L^j, \quad B(L) := 1 + \sum_{j=1}^q \beta_j L^j, \quad L Z_n = Z_{n-1}$$

$$\Longrightarrow\; Z_n = \sum_{j=1}^p \alpha_j Z_{n-j} + (B(L) - 1) L^{-1} \underbrace{e_{n-1}}_{(B(L))^{-1} A(L) Z_{n-1}} + e_n$$

$$= \underbrace{\sum_{j=0}^{n-1} \gamma_{n-j} Z_j}_{\text{approximate model } g^{(1)}(\cdot)} + \underbrace{\sum_{j=-n_0}^{-1} \gamma_{n-j} Z_j}_{\text{nuisance part } g^{(2)}(\cdot)} + e_n$$

where $\gamma_l$ is a function of $\{\alpha_j\}$ and $\{\beta_j\}$ (for $p = 1$, $q = 1$, $\gamma_j = (-1)^{j-1}(\alpha_1 + \beta_1)\beta_1^{j-1}$), and $Z_n = 0$ for $n < -n_0 \le 0$.
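A quick numerical check of this AR($\infty$) rewriting, assuming the slide's formula for $\gamma_j$ when $p = q = 1$; the truncation level $J$ (whose omitted tail plays the role of the nuisance part $g^{(2)}$) and all parameter values are my own illustrative choices.

```python
# Sketch (my own check, assuming the slide's gamma_j formula for p = q = 1):
# gamma_j = (-1)^(j-1) * (alpha1 + beta1) * beta1^(j-1), so that
# Z_n ~ sum_{j>=1} gamma_j Z_{n-j} + e_n (AR(infinity) form of ARMA(1,1)).
import numpy as np

alpha1, beta1, n = 0.5, 0.4, 10_000
rng = np.random.default_rng(1)
e = rng.normal(size=n)
Z = np.zeros(n)
for k in range(1, n):
    Z[k] = alpha1 * Z[k - 1] + beta1 * e[k - 1] + e[k]

J = 50  # truncation level: the omitted tail is the "nuisance part" g^(2)
gamma = (-1.0) ** np.arange(J) * (alpha1 + beta1) * beta1 ** np.arange(J)
# one-step prediction of Z_k from Z_{k-1}, ..., Z_{k-J}
pred = np.array([Z[k - 1 : k - 1 - J : -1] @ gamma for k in range(J + 1, n)])
resid = Z[J + 1 :] - pred
print(np.var(resid), np.var(e))  # residual variance ~ Var(e_n) = 1
```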


Page 8: Jacob_MSE

Extension:
$$S_n(\theta,\nu) := \sum_{k=1}^n \bigl(\Psi_k(Z_k - g(\theta,\nu,\mathcal F_{k-1}))\bigr)^2 \lambda(\mathcal F_{k-1}), \quad E\bigl(\Psi_k(Z_k - g(\theta_0,\nu_0,\mathcal F_{k-1})) \mid \mathcal F_{k-1}\bigr) = 0$$

• Missing data $\Longrightarrow \Psi_k(Z_k - g(\theta,\nu,\mathcal F_{k-1})) = \mathbb 1_{\{Z_k \in I_k^{obs}\}}(Z_k - g(\theta,\nu,\mathcal F_{k-1}))$

• $E(Z_n \mid \mathcal F_{n-1}) \le \infty$, $E(Z_n^2 \mid \mathcal F_{n-1}) \le \infty$
$\Longrightarrow \Psi_k(Z_k - g(\theta,\nu,\mathcal F_{k-1})) = \mathbb 1_{\{Z_k \in I_k\}}(Z_k - g(\theta,\nu,\mathcal F_{k-1}))$, $\lim_k I_k \overset{a.s.}{=} \infty$

• Existence of outliers (Heyde, 1997) $\Longrightarrow \lambda(\mathcal F_{k-1}) = \sum_{h=1}^k \lambda_{h,k} \Psi_{k-h}(Z_{k-h} - g(\theta,\nu,\mathcal F_{k-h-1}))$,
$\Psi_k(x) = \Psi(x) = x$ if $|x| < m$, $\Psi(x) = m\,\operatorname{sign}(x)$ if $|x| \ge m$ (Huber's function); see the sketch below.
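A minimal sketch of Huber's function and the resulting robustified criterion; the threshold $m$ and the uniform weights are illustrative assumptions, not values from the talk.

```python
# A minimal sketch of Huber's psi function, which damps outliers in the
# robustified CLS criterion (the threshold m is an illustrative tuning constant).
import numpy as np

def huber_psi(x, m=1.5):
    # psi(x) = x if |x| < m, m * sign(x) otherwise
    return np.where(np.abs(x) < m, x, m * np.sign(x))

def robust_Sn(residuals, weights, m=1.5):
    # sum_k psi(Z_k - g(theta, F_{k-1}))^2 * lambda(F_{k-1})
    return np.sum(huber_psi(residuals, m) ** 2 * weights)

rng = np.random.default_rng(0)
res = rng.normal(size=100)
res[0] = 50.0                        # one gross outlier
print(huber_psi(np.array([-3.0, 0.5, 2.0])))  # -> [-1.5, 0.5, 1.5]
print(robust_Sn(res, np.ones(100)))  # the outlier's impact is capped at m^2
```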


Page 9: Jacob_MSE

Extension:
$$S_n(\theta,\nu) := \sum_{k=1}^n \bigl(\Psi_k(Z_k - g(\theta,\nu,\mathcal F_{k-1}))\bigr)^2 \lambda(\mathcal F_{k-1}) + \operatorname{pen}(\theta, p, \mathcal F_{n-1}), \quad E\bigl(\Psi_k(Z_k - g(\theta_0,\nu_0,\mathcal F_{k-1})) \mid \mathcal F_{k-1}\bigr) = 0,$$

where $p$ is the number of $\theta_i \neq 0$

$\Longrightarrow$ Penalize large values of $p$

Extension: Multivariate stochastic regression model:
$$S_n(\theta,\nu) = \sum_{k=1}^n (Z_k - g(\theta,\nu,\mathcal F_{k-1}))^T \Sigma_k^{-1} (Z_k - g(\theta,\nu,\mathcal F_{k-1})), \quad \Sigma_k^{-1}\ \mathcal F_{k-1}\text{-measurable}$$

$$\Longrightarrow\; S_n(\theta,\nu) = \sum_{k=1}^n (Z_k - g(\theta,\nu,\mathcal F_{k-1}))^T U_k \Lambda_k U_k^{-1} (Z_k - g(\theta,\nu,\mathcal F_{k-1})), \quad U_k \Lambda_k U_k^{-1} = \Sigma_k^{-1}$$

$$= \sum_{k=1}^n \bigl(\underbrace{\Lambda_k^{1/2} U_k^{-1} (Z_k - g(\theta,\nu,\mathcal F_{k-1}))}_{\text{denoted } Y_k - f_k(\theta,\nu,\mathcal F_{k-1})}\bigr)^T \bigl(\Lambda_k^{1/2} U_k^{-1} (Z_k - g(\theta,\nu,\mathcal F_{k-1}))\bigr)$$

$$= \sum_{j=1}^d \sum_{k=1}^n (Y_{k,j} - f_{k,j}(\theta,\nu,\mathcal F_{k-1}))^2$$
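A small numerical sketch of this reduction, assuming each $\Sigma_k$ is symmetric positive definite so that $U_k^{-1} = U_k^T$ in the eigendecomposition; the dimension and the matrix used are arbitrary.

```python
# Sketch (assumed symmetric positive-definite Sigma_k, so U_k^{-1} = U_k^T):
# the quadratic form r^T Sigma^{-1} r equals the plain sum of squares of the
# "whitened" residual y = Lambda^{1/2} U^T r, as in the reduction above.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
Sigma = A @ A.T + 3 * np.eye(3)           # a positive-definite conditional covariance
Sigma_inv = np.linalg.inv(Sigma)
lam, U = np.linalg.eigh(Sigma_inv)        # Sigma^{-1} = U diag(lam) U^T

r = rng.normal(size=3)                    # residual Z_k - g(theta, nu, F_{k-1})
y = np.sqrt(lam) * (U.T @ r)              # y = Lambda^{1/2} U^{-1} r
print(r @ Sigma_inv @ r, np.sum(y ** 2))  # the two values coincide
```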


Page 10: Jacob_MSE

Examples of models

• Classical Regression with stochastic regressors

• Time series: ARMAX(p, q, b) model, nonlinear time series

• Financial models: GARCH(p, q) model, nonlinear financial models

$$\xi_n = s_n(\theta_0) U_n, \quad \{U_n\} \text{ i.i.d. } (0, 1) \;\Longrightarrow\; Z_n := \xi_n^2 = s_n^2(\theta_0) + \underbrace{s_n^2(\theta_0)(U_n^2 - 1)}_{e_n}$$

$$s_n^2(\theta) = \alpha_0 + \sum_{j=1}^p \beta_j s_{n-j}^2(\theta) + \sum_{j=1}^q \alpha_j \xi_{n-j}^2 \quad \text{(volatility)}$$

• Branching processes

$$N_n = \sum_{i=1}^{N_{n-1}} X_{n,i}, \quad \{X_{n,i}\}_i \mid \mathcal F_{n-1} \text{ i.i.d. } \bigl(m_{\theta_0,\nu_0}(\mathcal F_{n-1}),\; \sigma^2_{\theta_0,\nu_0}(\mathcal F_{n-1})\bigr)$$

$$\Longrightarrow\; Z_n := N_n = m_{\theta_0,\nu_0}(\mathcal F_{n-1}) N_{n-1} + \sqrt{N_{n-1}}\, e_n, \quad E(e_n^2 \mid \mathcal F_{n-1}) = \sigma^2_{\theta_0,\nu_0}(\mathcal F_{n-1})$$
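A hedged sketch of the branching-process example in the simplest constant-mean case: Poisson($m_0$) offspring, so $E(N_n \mid \mathcal F_{n-1}) = m_0 N_{n-1}$ and $\operatorname{Var}(N_n \mid \mathcal F_{n-1}) = m_0 N_{n-1}$, and the optimal weights $\lambda(\mathcal F_{k-1}) \propto N_{k-1}^{-1}$ give a closed-form CLS estimator (a Harris-type ratio); all numerical values are my own illustrative choices.

```python
# Hedged sketch: supercritical Galton-Watson process with Poisson(m0) offspring.
# CLS with optimal weights lambda(F_{k-1}) ~ 1 / N_{k-1} yields the ratio below.
import numpy as np

rng = np.random.default_rng(3)
m0, n = 1.3, 30
N = np.zeros(n, dtype=np.int64)
N[0] = 10
for k in range(1, n):
    N[k] = rng.poisson(m0 * N[k - 1])  # sum of N_{k-1} i.i.d. Poisson(m0) counts

# argmin_m sum_k (N_k - m N_{k-1})^2 / N_{k-1}  =>  m_hat = sum N_k / sum N_{k-1}
m_hat = N[1:].sum() / N[:-1].sum()
print(m_hat)  # close to m0 = 1.3 on the explosion event
```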


Page 11: Jacob_MSE

Asymptotic properties for q = 0 (no nuisance parameter)

$$\theta_n = \arg\min_\theta S_n(\theta), \quad S_n(\theta) := \sum_{k=1}^n (Z_k - g(\theta,\mathcal F_{k-1}))^2 \lambda(\mathcal F_{k-1}) =: \sum_{k=1}^n (Y_k - f(\theta,\mathcal F_{k-1}))^2$$

Let $e_k := Y_k - f(\theta_0,\mathcal F_{k-1})$, $\sigma_k^2 := \operatorname{Var}(e_k \mid \mathcal F_{k-1})$. Assume

$$A1: \;\forall\,\delta > 0, \quad \sup_{\|\theta-\theta_0\|\ge\delta} \sum_{k=1}^\infty \frac{\sigma_k^2 d_k^2(\theta)}{\bigl(\sum_{h=1}^k d_h^2(\theta)\bigr)^2} \overset{a.s.}{<} \infty, \quad d_k(\theta) := f(\theta,\mathcal F_{k-1}) - f(\theta_0,\mathcal F_{k-1})$$


Page 12: Jacob_MSE

• Strong consistency (Jacob, 2010)

Let $D_n(\theta) = \sum_{k=1}^n \bigl(f(\theta,\mathcal F_{k-1}) - f(\theta_0,\mathcal F_{k-1})\bigr)^2$ (identifiability criterion). Assume A1. Then

$$A2: \;\forall\,\delta > 0, \quad \lim_n \inf_{\|\theta-\theta_0\|\ge\delta} D_n(\theta) \overset{a.s.}{=} \infty \;\Longrightarrow\; \lim_n \theta_n \overset{a.s.}{=} \theta_0$$

Linear case: $f(\theta,\mathcal F_{n-1}) = \theta^T W_{n-1}$
$$\Longrightarrow\; \theta_n - \theta_0 = \Bigl(\sum_{k=1}^n W_{k-1} W_{k-1}^T\Bigr)^{-1} \sum_{k=1}^n e_k W_{k-1}, \quad D_n(\theta) = (\theta - \theta_0)^T \Bigl(\sum_{k=1}^n W_{k-1} W_{k-1}^T\Bigr) (\theta - \theta_0)$$

$$\Longrightarrow\; \lim_n \inf_{\|\theta-\theta_0\|\ge\delta} D_n(\theta) \overset{a.s.}{=} \infty \;\Longleftrightarrow\; \lim_n \lambda_{\min}\Bigl(\sum_{k=1}^n W_{k-1} W_{k-1}^T\Bigr) \overset{a.s.}{=} \infty$$

Note. In the literature, there exist additional sufficient conditions: $D_n(\theta)$ has to tend to $\infty$ at some rate.
Ex. Linear case $f(\theta,\mathcal F_{n-1}) = \theta^T W_{n-1}$: the additional condition is
$$\lim_n \Bigl[\ln\Bigl(\lambda_{\max}\Bigl\{\sum_{k=1}^n W_k W_k^T\Bigr\}\Bigr)\Bigr]^\rho \Bigl[\lambda_{\min}\Bigl\{\sum_{k=1}^n W_k W_k^T\Bigr\}\Bigr]^{-1} = 0 \quad \text{(Lai \& Wei, 1982)}$$
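To see how mild the criterion A2 is in the linear case, the sketch below tracks $\lambda_{\min}(\sum_k W_{k-1} W_{k-1}^T)$ for a nonstationary regressor (a random walk); this is my own example, not one from the talk, and no growth rate is required.

```python
# Sketch: the identifiability condition lambda_min(sum W_{k-1} W_{k-1}^T) -> infinity
# holds on a simulated nonstationary regressor (a random walk), with no rate needed.
import numpy as np

rng = np.random.default_rng(4)
n = 5000
W = np.column_stack([np.ones(n), np.cumsum(rng.normal(size=n))])  # W_k = (1, random walk)

M = np.zeros((2, 2))
for k in range(n):
    M += np.outer(W[k], W[k])
    if (k + 1) % 1000 == 0:
        print(k + 1, np.linalg.eigvalsh(M)[0])  # smallest eigenvalue keeps growing
```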


Page 13: Jacob_MSE

• Proof

1. Wu's Lemma (1981):
$$\lim_n \inf_{\|\theta-\theta_0\|\ge\delta} \bigl(S_n(\theta) - S_n(\theta_0)\bigr) \overset{a.s.}{>} 0, \;\forall\,\delta > 0 \;\Longrightarrow\; \lim_n \theta_n \overset{a.s.}{=} \theta_0.$$

2. $S_n(\theta) - S_n(\theta_0) = D_n(\theta) + 2 L_n(\theta)$

$$\Longrightarrow\; \inf_{\|\theta-\theta_0\|\ge\delta} \bigl(S_n(\theta) - S_n(\theta_0)\bigr) \ge \inf_{\|\theta-\theta_0\|\ge\delta} D_n(\theta) \Bigl(1 - 2 \sup_{\|\theta-\theta_0\|\ge\delta} \frac{|L_n(\theta)|}{D_n(\theta)}\Bigr)$$

$$\frac{L_n(\theta)}{D_n(\theta)} := \frac{\sum_{k=1}^n e_k \bigl(f(\theta,\mathcal F_{k-1}) - f(\theta_0,\mathcal F_{k-1})\bigr)}{\sum_{k=1}^n \bigl(f(\theta,\mathcal F_{k-1}) - f(\theta_0,\mathcal F_{k-1})\bigr)^2}$$

3. Prove that $\sup_{\|\theta-\theta_0\|\ge\delta} |L_n(\theta)| \bigl(D_n(\theta)\bigr)^{-1}$ converges a.s. to 0.


Page 14: Jacob_MSE

Linear case $f(\theta,\mathcal F_{n-1}) = \theta^T W_{n-1}$

$$\Longrightarrow\; \frac{|L_n(\theta)|}{D_n(\theta)} = \frac{|(\theta - \theta_0)^T \sum_k e_k W_{k-1}|}{(\theta - \theta_0)^T \sum_{k=1}^n W_{k-1} W_{k-1}^T (\theta - \theta_0)}$$

$$\Longrightarrow\; \sup_{\|\theta-\theta_0\|\ge\delta} \frac{|L_n(\theta)|}{D_n(\theta)} = \frac{|L_n(\theta_n)|}{D_n(\theta_n)}, \quad \theta_n \text{ depends on } e_n, \mathcal F_{n-1}$$

$$p = 1 \;\Longrightarrow\; \frac{|L_n(\theta_n)|}{D_n(\theta_n)} \le |(\theta_n - \theta_0)^{-1}| \,\Bigl|\frac{\sum_k e_k W_{k-1}}{\sum_{k=1}^n W_{k-1}^2}\Bigr| \;\Longrightarrow\; \text{SLLNM (Hall \& Heyde, 1980)}$$

General nonlinear case with $p \ge 1$?
$$\sum_{k=1}^n \frac{e_k \bigl(f(\theta_n,\mathcal F_{k-1}) - f(\theta_0,\mathcal F_{k-1})\bigr)}{\sum_{h=1}^k \bigl(f(\theta_n,\mathcal F_{h-1}) - f(\theta_0,\mathcal F_{h-1})\bigr)^2} \text{ is not a martingale!}$$


Page 15: Jacob_MSE

Strong Law of Large Numbers for SubMartingales (Jacob, 2010)

$d_k(\theta) := f(\theta,\mathcal F_{k-1}) - f(\theta_0,\mathcal F_{k-1})$ $\mathcal F_{k-1}$-measurable and Lipschitz in $\theta$, $E(e_k \mid \mathcal F_{k-1}) = 0$, $E(e_k^2 \mid \mathcal F_{k-1}) =: \sigma_k^2$

$$\lim_n \inf_\theta \sum_{k=1}^n d_k^2(\theta) \overset{a.s.}{=} \infty \;\text{ and }\; \sup_\theta \sum_{k=1}^\infty \frac{\sigma_k^2 d_k^2(\theta)}{\bigl(\sum_{h=1}^k d_h^2(\theta)\bigr)^2} \overset{a.s.}{<} \infty \;\Longrightarrow\; \lim_n \sup_\theta \Bigl|\frac{\sum_{k=1}^n e_k d_k(\theta)}{\sum_{k=1}^n d_k^2(\theta)}\Bigr| \overset{a.s.}{=} 0$$

Proof: $\sup_\theta \bigl|\sum_{k=1}^n e_k d_k(\theta) \bigl(\sum_{h=1}^k d_h^2(\theta)\bigr)^{-1}\bigr|$ is a submartingale $\Longrightarrow$ use submartingale properties (Hall & Heyde, 1980) and analytical lemmas.

Note. SLLNM: $\lim_n \frac{\sum_{k=1}^n e_k d_k(\theta)}{\sum_{k=1}^n d_k^2(\theta)} \overset{a.s.}{=} 0$

Markov's theorem: $\lim_n \frac{\sum_{k=1}^n \sigma_k^2 d_k^2(\theta)}{n^2} \overset{a.s.}{=} 0 \;\Longrightarrow\; \lim_n \frac{\sum_{k=1}^n e_k d_k(\theta)}{n} \overset{P}{=} 0$
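A numerical illustration (my own sketch) of this uniform law: for the family $f(\theta, z) = e^{\theta z}$ with i.i.d. data and a grid of $\theta$ values bounded away from $\theta_0$, the supremum of $|\sum_k e_k d_k(\theta)| / \sum_k d_k^2(\theta)$ shrinks as $n$ grows; model, grid, and seed are illustrative choices.

```python
# Sketch of the uniform law: for f(theta, z) = exp(theta * z), the quantity
# sup_theta |sum_k e_k d_k(theta)| / sum_k d_k^2(theta) decreases with n,
# the sup being taken over a grid bounded away from theta0.
import numpy as np

rng = np.random.default_rng(5)
theta0, n = 0.5, 20_000
Z = rng.normal(size=n)                      # regressor entering F_{k-1}
e = rng.normal(size=n)                      # martingale differences
thetas = np.concatenate([np.arange(-1, 0.4, 0.05), np.arange(0.6, 2, 0.05)])

for n_sub in (1000, 5000, 20_000):
    ratios = []
    for th in thetas:
        d = np.exp(th * Z[:n_sub]) - np.exp(theta0 * Z[:n_sub])  # d_k(theta)
        ratios.append(abs(np.sum(e[:n_sub] * d)) / np.sum(d ** 2))
    print(n_sub, max(ratios))               # the sup shrinks toward 0
```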


Page 16: Jacob_MSE

• Asymptotic distribution (Jacob, 2010)

1. Taylor expansion of $\dot S_n(\theta_n)$ at $\theta_0$:
$$\dot S_n(\theta_n) = \dot S_n(\theta_0) + \ddot S_n(\theta_n)(\theta_n - \theta_0) \;\Longrightarrow\; (\theta_n - \theta_0) = -\bigl(\ddot S_n(\theta_n)\bigr)^{-1} \dot S_n(\theta_0)$$

2. Find a $p \times p$ deterministic matrix $\Phi_n$ such that
$$\Phi_n^{1/2}(\theta_n - \theta_0) = -\Phi_n^{-1/2} \bigl(\ddot S_n(\theta_n) \Phi_n^{-1}\bigr)^{-1} \dot S_n(\theta_0)$$

with $\lim_n \ddot S_n(\theta_n) \Phi_n^{-1} \overset{P}{=} I$, and $\lim_n \Phi_n^{-1/2} \dot S_n(\theta_0)$ existing in distribution.

$$\dot S_n(\theta_0) = -2 \sum_{k=1}^n e_k \dot f(\theta_0,\mathcal F_{k-1}) \;\Longrightarrow\; \Phi_n^{-1/2} = O\Bigl(\Bigl(\sum_{k=1}^n \dot f(\theta_0,\mathcal F_{k-1}) \dot f^T(\theta_0,\mathcal F_{k-1})\Bigr)^{-1/2}\Bigr)$$

$$\ddot S_n(\theta_n) = 2 \underbrace{\sum_{k=1}^n \dot f(\theta_n,\mathcal F_{k-1}) \dot f^T(\theta_n,\mathcal F_{k-1})}_{\sim \Phi_n} - 2 \underbrace{\sum_{k=1}^n e_k \ddot f(\theta_n,\mathcal F_{k-1})}_{\neq 0 \text{ in the nonlinear case}}$$

Use the SLLNSM for proving that $\lim_n \sum_{k=1}^n e_k \ddot f(\theta_n,\mathcal F_{k-1}) \Phi_n^{-1} \overset{a.s.}{=} 0$


Page 17: Jacob_MSE

Examples

• Polymerase Chain Reaction: in vitro replication of a population of $N_0$ DNA molecules (Lalam, Jacob & Jagers, 2004)

$$N_n = \sum_{i=1}^{N_{n-1}} (1 + X_{n,i}), \quad \{X_{n,i}\}_i \mid \mathcal F_{n-1} \text{ i.i.d. } \operatorname{Ber}(p_{\theta_0}(N_{n-1}))$$

$$p_{\theta_0}(N_{n-1}) := P(X_{n,i} = 1 \mid N_{n-1}) = \frac{K_0}{K_0 + N_{S_0,n-1}} \cdot \frac{1 + \exp\bigl(-C_0(S_0^{-1} N_{S_0,n-1} - 1)\bigr)}{2}$$

where $N_{S_0,n-1} = S_0$ if $N_{n-1} < S_0$, and $N_{S_0,n-1} = N_{n-1}$ if $N_{n-1} \ge S_0$.

$N_n$ increases exponentially when $N_{n-1} < S_0$ (BGW branching process); $N_n$ increases linearly ($\lim_n N_n n^{-1} \overset{a.s.}{=} K_0/2$) when $N_{n-1} \ge S_0$.
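A simulation sketch of these two growth regimes, with illustrative parameter values (here $C_0 = 1$, so that the efficiency is roughly halved once saturation is reached and the linear slope approaches $K_0/2$); nothing below is fitted to the real data of the talk.

```python
# Sketch of the PCR branching dynamics with illustrative parameters:
# exponential growth while N < S0, then roughly linear growth of slope ~ K0/2.
import numpy as np

rng = np.random.default_rng(6)
K0, S0, C0, n = 1e6, 1e5, 1.0, 40
N = np.zeros(n)
N[0] = 100
for k in range(1, n):
    NS = max(N[k - 1], S0)  # N_{S0,n-1} = S0 below the threshold, N_{n-1} above
    p = K0 / (K0 + NS) * (1 + np.exp(-C0 * (NS / S0 - 1))) / 2
    N[k] = N[k - 1] + rng.binomial(int(N[k - 1]), p)  # each molecule copied w.p. p

print(N)  # exponential phase below S0, then near-linear increments ~ K0/2
```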


Page 18: Jacob_MSE

$$Z_n = N_n + \eta_n = \Bigl(1 + \frac{K_0}{K_0 + N_{S_0,n-1}} \cdot \frac{1 + \exp\bigl(-C_0(S_0^{-1} N_{S_0,n-1} - 1)\bigr)}{2}\Bigr) N_{n-1} + e_n + \eta_n$$

$$\overset{n \text{ large}}{=} Z_{n-1} + \underbrace{\frac{K_0 Z_{n-1}}{2(K_0 + Z_{n-1})}}_{g^{(1)}(\theta_0, Z_{n-1})} + \underbrace{O\bigl(\exp(-C_0(S_0^{-1} Z_{n-1} - 1))\bigr)}_{g^{(2)}(\theta_0, Z_{n-1}, \eta_n)} + e_n$$

$(K, C, S)$ is not asymptotically identifiable, because of $C$ and $S$.

Strong identifiability of $K$ given $(C_n, S_n)$:

$$\Longrightarrow\; \lim_n K_n \mid (C_n, S_n) \overset{a.s.}{=} K_0, \quad \Phi_n^{1/2}(K_n - K_0) \mid (C_n, S_n) \overset{D}{=} \mathcal N(0, K/2)$$


Page 19: Jacob_MSE

Figure 1: Efficiency $\{p_{\theta_0}(N_{k-1})\}_{k \le n}$ calculated from a simulated trajectory of the branching process ($K_0 = 4.00311 \cdot 10^{10}$, $S_0 = 10^{10}$, $C_0 = 0$). Dashed line: $p(Z_{k-1}) = Z_k Z_{k-1}^{-1} - 1$, $k \le n$ (empirical efficiency); continuous line: $p_{\hat\theta_n}(Z_{k-1})$, $k \le n$ (estimated efficiency).

Figure 2: Real-time PCR, well 21 of data set 1, efficiency $\{p_{\theta_0}(N_{k-1})\}_{k \le n}$. Dashed line: $\{p(Z_{k-1}) = Z_k Z_{k-1}^{-1} - 1\}_{k \le n}$ (empirical efficiency); continuous line: $\{p_{\hat\theta_n}(Z_{k-1})\}_{k \le n}$ (estimated efficiency) with $n_s = 23$ (saturation threshold cycle), $\hat K_n = 0.38055$, $\hat S_n = 0.070553$, $\hat C_n = 0.6$.


Page 20: Jacob_MSE

PCR: another model (Jacob, 2010)

$$N_n = \sum_{i=1}^{N_{n-1}} (1 + X_{n,i})$$

$$P(X_{n,i} = 2 \mid N_{n-1}) = \frac{K_0}{K_0 + N_{S_0,n-1}} \cdot \frac{1 + S_0^{\alpha_0} N_{S_0,n-1}^{-\alpha_0}}{2}, \quad \alpha > 0$$

$$Z_n \overset{n \text{ large}}{=} Z_{n-1} + \frac{K_0 Z_{n-1}}{2(K_0 + Z_{n-1})} + \frac{K_0 S_0^{\alpha_0} Z_{n-1}^{1-\alpha_0}}{2(K_0 + Z_{n-1})} + O(\eta_n) + e_n$$

$(K, S, \alpha)$ is not asymptotically identifiable, due to $\alpha$ $\Longrightarrow$ assume $\alpha_0$ known, with $0 \le 2\alpha_0 \le 1$.

Let $\theta = (K, S^{\alpha_0})$. Then

$$\lim_n \Phi_n^{1/2}(\theta_n - \theta_0) \overset{d}{=} \mathcal N\bigl(0, (K/2) I\bigr), \quad \Phi_n = \frac{1}{4} \begin{pmatrix} n & K n^{1-\alpha_0} \\ K n^{1-\alpha_0} & K^2 a_n(\alpha_0) \end{pmatrix}$$


Page 21: Jacob_MSE

• GARCH(1, 1)

$$Z_n := \xi_n^2 = \underbrace{s_n^2(\theta_0)}_{g(\theta_0,\nu_0,\mathcal F_{n-1})} + \underbrace{s_n^2(\theta_0)(U_n^2 - 1)}_{e_n}, \quad E(U_n^2 \mid \mathcal F_{n-1}) = 1$$

$$s_n^2(\theta) = \alpha_0 + \alpha_1 \xi_{n-1}^2 + \beta_1 s_{n-1}^2(\theta) \quad (1)$$

$$\Longrightarrow\; s_n^2(\theta) = \alpha_0 \Bigl(\sum_{l=0}^{n-1} \beta_1^l\Bigr) + \alpha_1 \sum_{l=0}^{n-1} \beta_1^l \xi_{n-1-l}^2 + \beta_1^n s_0^2$$

$$= \underbrace{\alpha_0 \Bigl(\sum_{l=0}^{\infty} \beta_1^l\Bigr) + \alpha_1 \sum_{l=0}^{n-1} \beta_1^l \xi_{n-1-l}^2}_{g^{(1)}(\theta,\nu,\mathcal F_{n-1})} + \underbrace{\beta_1^n \Bigl(s_0^2 - \alpha_0 \sum_{l=0}^{\infty} \beta_1^l\Bigr)}_{g^{(2)}(\theta,\nu,\mathcal F_{n-1}) =: \nu_n}$$

A.N. of $g^{(2)}(\theta_0,\nu_0,\mathcal F_{n-1})$ holds for $\beta_{10} < 1$ $\Longrightarrow$ take $\nu_n := g^{(2)}(\theta_0,\nu_0,\mathcal F_{n-1}) = 0$

$$(1) \;\Longrightarrow\; E(s_n^2(\theta_0)) \gamma_0^{-n} = \alpha_{00} \sum_{k=1}^n \gamma_0^{-k} + s_0^2(\theta_0), \quad \gamma_0 := \alpha_{10} + \beta_{10}$$

$\gamma_0 < 1 \;\Longrightarrow\; \lim_n E(s_n^2(\theta_0)) = \alpha_{00}(1 - \gamma_0)^{-1}$

$\gamma_0 > 1 \;\Longrightarrow\; \lim_n s_n^2(\theta_0) \gamma_0^{-n} \overset{a.s.}{=} W$, $E(W) < \infty$

$\gamma_0 = 1 \;\Longrightarrow\; E(s_n^2(\theta_0)) = n\,\alpha_{00} + s_0^2(\theta_0)$
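A quick check (my own sketch) that the recursion (1) matches the closed form derived above, with the $\exp(1)$ innovations used in Figure 3 and an arbitrary initial value $s_0^2$.

```python
# Check (a sketch) that recursion (1) equals its closed form:
# s_n^2 = alpha0 * sum_{l<n} beta1^l + alpha1 * sum_{l<n} beta1^l xi_{n-1-l}^2 + beta1^n s_0^2.
import numpy as np

rng = np.random.default_rng(7)
alpha0, alpha1, beta1, n = 10.0, 0.1, 0.8, 200
U2 = rng.exponential(size=n)              # U_n^2 i.i.d. exp(1), so E(U_n^2) = 1
s2, xi2 = np.zeros(n), np.zeros(n)
s2[0] = 5.0
xi2[0] = s2[0] * U2[0]
for k in range(1, n):
    s2[k] = alpha0 + alpha1 * xi2[k - 1] + beta1 * s2[k - 1]   # recursion (1)
    xi2[k] = s2[k] * U2[k]                                     # Z_n = xi_n^2

l = np.arange(n - 1)
closed = (alpha0 * np.sum(beta1 ** l)
          + alpha1 * np.sum(beta1 ** l * xi2[n - 2 - l])
          + beta1 ** (n - 1) * s2[0])
print(s2[-1], closed)  # identical up to rounding
```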


Page 22: Jacob_MSE

Figure 3: Simulations with $\{U_n^2\}$ i.i.d. $\exp(1)$. Red line: $\{\xi_n^2\}$; blue line: $\{s_n^2(\theta)\}$. On the first line, $\theta_0 = (\alpha_{00}, \alpha_{10}, \beta_{10}) = (10, 0.1, 0.8)$; on the second line, $\theta_0 = (10, 0.22, 0.8)$; on the third line, $\theta_0 = (10, 0.3, 0.8)$.


Page 23: Jacob_MSE

Conditional Least Squares estimator of $\theta_0 := (C_0, \alpha_{10}, \beta_{10})$, $\beta_{10} < 1$, $C_0 := \alpha_{00}(1 - \beta_{10})^{-1}$

$$Z_n := \xi_n^2 = \underbrace{s_n^2(\theta_0)}_{g(\theta_0,\nu_0,\mathcal F_{n-1})} + \underbrace{s_n^2(\theta_0)(U_n^2 - 1)}_{e_n}, \quad E(U_n^2 \mid \mathcal F_{n-1}) = 1$$

$$=: g(\theta_0,\nu_0,\mathcal F_{n-1}) + e_n = \underbrace{C_0 + \alpha_{10} \sum_{l=0}^{n-1} \beta_{10}^l \xi_{n-1-l}^2}_{g^{(1)}(\theta_0,\mathcal F_{n-1})} + \underbrace{g^{(2)}(\theta_0,\nu_0,\mathcal F_{n-1})}_{\nu_n} + e_n, \quad E(e_n^2 \mid \mathcal F_{n-1}) \propto s_n^4(\theta_0)$$

$$\theta_n|_{\nu=0} := \arg\min_{\theta \in \Theta} S_n(\theta, 0), \quad S_n(\theta, 0) := \sum_{k=1}^n (Z_k - g^{(1)}(\theta,\mathcal F_{k-1}))^2 \lambda(\mathcal F_{k-1})$$

Optimality if $\lambda(\mathcal F_{n-1}) \propto (\operatorname{Var}(Z_n \mid \mathcal F_{n-1}))^{-1} \propto s_n^{-4}(\theta_0)$

$$\Longrightarrow\; \text{take } \lambda(\mathcal F_{n-1}) = \bigl(g^{(1)}(\theta_*,\mathcal F_{n-1})\bigr)^{-2} = \Bigl(1 + \alpha_* \sum_{l=0}^{n-1} \beta_*^l \xi_{n-1-l}^2\Bigr)^{-2}, \quad \beta_* < 1$$
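A sketch of this weighted CLS fit, with $\theta_* = (1, 1, 0.999)$ as on the slide with the simulations below; the simulated sample, the numerical optimizer (scipy stands in for the slide's arg min), and the starting point are my own illustrative choices.

```python
# Sketch of the weighted CLS fit above, with theta_star = (1, 1, 0.999);
# the exp(1) innovations mirror Figure 3, everything else is illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
C0, a10, b10, n = 0.2, 0.2, 0.1, 2000     # theta0 = (C0, alpha10, beta10)
U2 = rng.exponential(size=n)
s2, Z = np.zeros(n), np.zeros(n)
for k in range(1, n):
    s2[k] = C0 * (1 - b10) + a10 * Z[k - 1] + b10 * s2[k - 1]  # alpha00 = C0(1 - beta10)
    Z[k] = s2[k] * U2[k]                                       # Z_n = xi_n^2

def g1(theta, Z):
    # g^(1)(theta, F_{k-1}) = C + a1 * sum_l b1^l Z_{k-1-l}, in recursive form
    C, a1, b1 = theta
    out = np.full(len(Z), C)
    for k in range(1, len(Z)):
        out[k] = C + a1 * Z[k - 1] + b1 * (out[k - 1] - C)
    return out

lam = g1((1.0, 1.0, 0.999), Z) ** -2      # lambda(F_{k-1}) = g^(1)(theta_*, .)^{-2}

def S_n(theta):
    return np.sum(((Z - g1(theta, Z)) ** 2 * lam)[1:])

theta_n = minimize(S_n, x0=(0.5, 0.5, 0.5),
                   bounds=[(0.01, 2), (0.01, 1), (0.01, 0.99)]).x
print(theta_n)  # near theta0 = (0.2, 0.2, 0.1), though noisy in finite samples
```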


Page 24: Jacob_MSE

Strong consistency of $\theta_n$ if A1 and A2 are checked.

• A1 checked for all $\alpha_* > 0$, $0 < \beta_* < 1$

• For $\theta = (C, \alpha_{10}, \beta_{10})$, A2 $\Longleftrightarrow \sum_{k=1}^\infty \lambda(\mathcal F_{k-1}) \overset{a.s.}{=} \infty$

$$\sum_{k=1}^\infty \lambda(\mathcal F_{k-1}) = \sum_{k=1}^\infty \Bigl(1 + \alpha_* \sum_{l=0}^{k-1} \beta_*^{k-1-l} \xi_l^2\Bigr)^{-2} \ge \sum_{k=1}^\infty \bigl(1 + \alpha_*(1 - \beta_*)^{-1} M^\xi_{k-1}\bigr)^{-2}, \quad M^\xi_{k-1} := \sup_{l \le k-1} \{\xi_l^2\}$$

$$\Longrightarrow\; \sum_{k=1}^\infty \lambda(\mathcal F_{k-1}) \ge \sum_m (L_{m+1} - L_m) \bigl(1 + \alpha_*(1 - \beta_*)^{-1} \xi^2_{L_m}\bigr)^{-2}, \quad \xi^2_{L_m} := m\text{th record of } \{\xi_n^2\}$$

$\Longrightarrow\; \sum_{k=1}^\infty \lambda(\mathcal F_{k-1}) = \infty$ if $(L_{m+1} - L_m)(\xi^2_{L_m})^{-2}$ does not tend to 0 too quickly ($\gamma_0 < 1$)

$$\Longrightarrow\; \text{for } \gamma_0 > 1, \quad \sum_{k=1}^\infty \lambda(\mathcal F_{k-1}) \simeq \sum_{k=1}^\infty \Bigl(1 + \alpha_* W \gamma_0^{k-1} \sum_{l=0}^{k-1} (\beta_*/\gamma_0)^{k-1-l} U_l^2\Bigr)^{-2} < \infty, \quad \text{a.s.}$$

• For $\theta = (C_0, \alpha_1, \beta_{10})$ or $\theta = (C_0, \alpha_{10}, \beta_1)$, A2 is checked for all $\gamma_0$.

Asymptotic distribution of $\theta_n$, $\theta = (\alpha_1, \beta_1)$, $\gamma_0 > 1$:
$$\lim_n \sqrt n (\theta_n - \theta_0) \overset{d}{=} \mathcal N(0, \Sigma), \quad \Sigma \text{ dependent on } \theta_0 \text{ and } \theta_*; \; \operatorname{Var}(\alpha_n) \text{ and } \operatorname{Var}(\beta_n) \text{ minimum for } \theta_* = \theta_0$$


Page 25: Jacob_MSE

Simulations of $\{\xi_n^2\}_{n=1}^N$, $\{U_n^2\}$ i.i.d. $\exp(1)$, $s_1^2(\theta_0) = 0$; calculation of $\theta_N$ for different values of $\theta_0 = (\alpha_{00}, \alpha_{10}, \beta_{10})$ and $N$.

For each value of $\theta_0$ and of $N$, two graphics are given.

The first one represents $\xi_1^2, \ldots, \xi_N^2$ (erratic line) together with $s_1^2(\theta_0), \ldots, s_N^2(\theta_0)$ ("smooth" line).

The second one represents $S_n(\theta, 0)$ calculated with $\theta_* = (1, 1, 0.999)$, $\theta \in [\alpha_{00} - 0.1, \alpha_{00} + 0.05] \times [\alpha_{10} - 0.1, \alpha_{10} + 0.05] \times [\beta_{10} - 0.1, \beta_{10} + 0.05]$.

Figure 4: $\theta_0 = (0.2, 0.2, 0.1)$. 1st line, $N = 100$: $\min_\theta S_n(\theta) = S_n(0.16, 0.11, 0.15) = 0.2913$, and $S_n(\theta_0) = 0.3129$. 2nd line, $N = 300$: $\min_\theta S_n(\theta) = S_n(0.24, 0.11, 0.09) = 0.1460$, and $S_n(\theta_0) = 0.1564$. 3rd line, $N = 600$: $\min_\theta S_n(\theta) = S_n(0.25, 0.25, 0.07) = 0.1612$, and $S_n(\theta_0) = 0.1889$.


Page 26: Jacob_MSE

Figure 5: $\theta_0 = (0.2, 0.3, 0.9)$. 1st line, $N = 100$: $\min_\theta S_n(\theta) = S_n(0.23, 0.22, 0.95) = 4.2846$, and $S_n(\theta_0) = 4.3297$. 2nd line, $N = 300$: $\min_\theta S_n(\theta) = S_n(0.25, 0.25, 0.92) = 9.4485$, and $S_n(\theta_0) = 9.6351$. 3rd line, $N = 600$: $\min_\theta S_n(\theta) = S_n(0.12, 0.35, 0.86) = 24.0845$, and $S_n(\theta_0) = 24.3542$.

General comments:

For each value of $\theta_0$ and of $N$, the minimum value of $S_n(\theta)$ is quite close to $S_n(\theta_0)$.

Assuming $C_0$ known, or setting $\theta_* = \theta_0$, does not significantly improve the results.


Page 27: Jacob_MSE

Conclusion

Indirect way of proof (Wu's Lemma) + SLLNSM
$\Longrightarrow$ the difficulties (stochasticity, nonstationarity, nonlinearity, no explicit expression of $\theta_n$) are removed

=⇒ Strong consistency, Asymptotic distribution of θn

Thank you for your attention!

Main Reference:

JACOB, C. (2010). Conditional Least Squares Estimation in nonstationary nonlinear stochastic regression models. Ann. Statist., 38(1), 566-597.
