Higher Order MSE of Jackknife 2SLS
Jinyong Hahn
Brown University
Jerry Hausman
MIT
Guido Kuersteiner
MIT
April, 2001
Abstract
In this paper we consider parameter estimation in a simple linear simultaneous equations model.
It is well known that two stage least squares (2SLS) estimators perform poorly when the
instruments are weak. In this case 2SLS tends to suffer from substantial small sample biases.
It is also known that LIML and Nagar-type estimators are less biased than 2SLS but suffer from
large small sample variability. We construct a bias corrected version of 2SLS based on the
Jackknife principle. Using higher order expansions we show that the MSE of our Jackknife 2SLS
estimator is approximately the same as the MSE of the Nagar-type estimator. Monte Carlo
simulations show that even in relatively large samples the MSE of LIML and Nagar can be
substantially larger than that of Jackknife 2SLS.
Keywords: weak instruments, higher order expansions, bias reduction, Jackknife, 2SLS
JEL classification: C13, C21, C31, C51
1 Introduction
There has been renewed interest in the finite sample properties of econometric estimators. Much
of the research activity in this area has concentrated on the finite sample properties of
instrumental variables (IV) estimators. It has been found that standard large sample inference
based on 2SLS can be quite misleading in small samples when the endogenous regressor is only
weakly correlated with the instrument. A partial list of such research includes Nelson and
Startz (1990), Maddala and Jeong (1992), Staiger and Stock (1997), and Hahn and Hausman (2000).
A general result is that controlling for bias can be quite important in small sample situations.
Anderson and Sawa (1979), Morimune (1983), Bekker (1994), Angrist, Imbens, and Krueger (1995),
and Donald and Newey (1998) found that IV estimators with smaller bias typically have better
risk properties in finite samples. For example, it has been found that the LIML, the JIVE, and
Nagar's (1959) estimator tend to have much better risk properties than 2SLS. One might
conjecture that such results generalize to situations other than simultaneous equations models.
In other words, one may conjecture that a bias reduced version of an estimator would in general
have better risk properties than the original estimator. Donald and Newey (1999) and Newey and
Smith (2000) may be understood as endeavors to obtain a bias reduced version of the GMM
estimator in order to improve its finite sample risk properties. In this paper, we contribute to
this approach by considering the higher order risk properties of the Jackknife 2SLS.
Such an exercise is of interest for several reasons. First, we believe that a higher order MSE
calculation for a Jackknife estimator has in general not been available in the literature. Most
papers simply verify the consistency of the Jackknife bias estimator; see Shao and Tu (1995,
Section 2.4) for a typical discussion of this type. Akahira (1983), who showed that the
Jackknife MLE is second order equivalent to the MLE, is closest in spirit to our exercise here,
although a third order expansion is necessary in order to calculate the higher order MSE. Our
proof strategy can in principle be generalized to non-IV estimators. Second, the Jackknife 2SLS
may prove to be a reasonable competitor to LIML or Nagar's estimator. It is well-known that LIML
and Nagar have the "moment" problem: with normally distributed error terms, it is known that
LIML and Nagar do not possess any moments. See Mariano and Sawa (1972) or Sawa (1972). LIML and
Nagar's estimator have better higher order risk properties than 2SLS, as shown by Rothenberg
(1979) or Donald and Newey (1998). The moment problem would not pose any practical concern if
the problem were concentrated in the extreme end of the tails. Unfortunately, in Hahn and
Hausman (2000) we found in Monte Carlo experiments that LIML and Nagar's estimator tend to have
bigger spread (measured in terms of interquartile range, etc.) than 2SLS
for some parameter combinations. It seems possible that the moment problem, which in principle
should be a mere object of theoretical curiosity, presents itself in the form of undesirable
finite sample risk properties despite the prediction based on higher order expansions. On the
other hand, Jackknife 2SLS can be shown to have moments up to the degree of overidentification.
If Jackknife 2SLS has a higher order MSE comparable to LIML or Nagar's estimator, we can then
conjecture that its actual finite sample properties may be more stable.
2 MSE of Jackknife 2SLS
The model we focus on is the simplest model specification with one right hand side (RHS) jointly
endogenous variable, so that the left hand side (LHS) variable depends only on the single jointly
endogenous RHS variable. This model specification accounts for other RHS predetermined (or
exogenous) variables, which have been "partialled out" of the specification. We will assume that
$$ y_i = x_i\beta + \varepsilon_i, \qquad x_i = f_i + u_i = z_i'\pi + u_i, \qquad i = 1,\ldots,n. $$
Here, $x_i$ is a scalar variable, and $z_i$ is a $K$-dimensional nonstochastic column vector. The
first equation is the equation of interest, and the right hand side variable $x_i$ is possibly
correlated with $\varepsilon_i$. The second equation represents the "first stage regression",
i.e., the reduced form between the endogenous regressor $x_i$ and the instruments $z_i$. By
writing $f_i \equiv E[x_i \mid z_i] = z_i'\pi$, we are ruling out a nonparametric specification
of the first stage regression. Note that the first equation does not include any other exogenous
variable. It will be assumed throughout the paper that all the error terms are homoscedastic.
We focus on the 2SLS estimator $b$ given by
$$ b = \frac{x'Py}{x'Px} = \beta + \frac{x'P\varepsilon}{x'Px}, $$
where $P \equiv Z(Z'Z)^{-1}Z'$. Here, $y$ denotes $(y_1,\ldots,y_n)'$. We define $x$,
$\varepsilon$, $u$, and $Z$ similarly. 2SLS is a special case of the $k$-class estimator given by
$$ \frac{x'Py - \kappa\cdot x'My}{x'Px - \kappa\cdot x'Mx}, $$
where $M \equiv I - P$ and $\kappa$ is a scalar. For $\kappa = 0$, we obtain 2SLS. For $\kappa$
equal to the smallest eigenvalue of the matrix $W'PW(W'MW)^{-1}$, where $W \equiv [y, x]$, we
obtain LIML. For $\kappa = \frac{K-2}{n} \big/ \left(1 - \frac{K-2}{n}\right)$, we obtain B2SLS,
which is Donald and Newey's (1998) modification of Nagar's (1959) estimator.
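For concreteness, the whole $k$-class family can be computed in a few lines. The following is a
minimal numerical sketch in Python/numpy (ours, not from the paper); the function names
`k_class` and `liml_kappa` are our own labels.

```python
import numpy as np

def k_class(y, x, Z, kappa):
    """k-class estimator in the scalar model y = x*beta + eps.

    kappa = 0 gives 2SLS; kappa = liml_kappa(y, x, Z) gives LIML;
    kappa = ((K - 2) / n) / (1 - (K - 2) / n) gives B2SLS.
    """
    n = len(y)
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instruments
    M = np.eye(n) - P
    return (x @ P @ y - kappa * (x @ M @ y)) / (x @ P @ x - kappa * (x @ M @ x))

def liml_kappa(y, x, Z):
    """Smallest eigenvalue of W'PW (W'MW)^{-1} with W = [y, x]."""
    n = len(y)
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    M = np.eye(n) - P
    W = np.column_stack([y, x])
    return np.linalg.eigvals((W.T @ P @ W) @ np.linalg.inv(W.T @ M @ W)).real.min()
```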
Donald and Newey (1998) computed higher order mean squared errors (MSE) of the k-class
estimators. They showed that n times the MSE of 2SLS, LIML, and B2SLS are approximately
equal to
$$ \frac{\sigma_\varepsilon^2}{H} + \frac{K^2}{n}\frac{\sigma_{u\varepsilon}^2}{H^2}, \qquad
\frac{\sigma_\varepsilon^2}{H} + \frac{K}{n}\frac{\sigma_u^2\sigma_\varepsilon^2 - \sigma_{u\varepsilon}^2}{H^2}, \qquad
\frac{\sigma_\varepsilon^2}{H} + \frac{K}{n}\frac{\sigma_u^2\sigma_\varepsilon^2 + \sigma_{u\varepsilon}^2}{H^2}, $$
where we define $H \equiv f'f/n$. The first term, which is common to all three expressions, is
the usual asymptotic variance obtained under first order asymptotics. Finite sample properties
are captured by the second terms. For 2SLS, the second term is easy to understand. As discussed
in, e.g., Hahn and Hausman (2001), 2SLS has an approximate bias equal to
$K\sigma_{u\varepsilon}/(nH)$. Therefore, the approximate expectation of $\sqrt{n}(b - \beta)$
ignored in the usual first order asymptotics is equal to $K\sigma_{u\varepsilon}/(\sqrt{n}H)$,
which contributes $\left(K\sigma_{u\varepsilon}/(\sqrt{n}H)\right)^2 = \frac{K^2}{n}\frac{\sigma_{u\varepsilon}^2}{H^2}$
to the higher order MSE calculation. The second terms for LIML and B2SLS do not reflect higher
order biases. Rather, they reflect higher order variance that can be understood from
Rothenberg's (1983) or Bekker's (1994) asymptotics.
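As a reading aid, the three approximations can be evaluated side by side. A small sketch of our
own under the paper's notation (`s_ue` denotes $\sigma_{u\varepsilon}$, so
$\sigma_{u\varepsilon}^2$ is its square):

```python
def approx_n_mse(estimator, K, n, H, s2_e, s2_u, s_ue):
    """n times the approximate MSE reported by Donald and Newey (1998)."""
    lead = s2_e / H                       # usual first order variance
    if estimator == "2sls":
        return lead + (K**2 / n) * s_ue**2 / H**2
    if estimator == "liml":
        return lead + (K / n) * (s2_u * s2_e - s_ue**2) / H**2
    if estimator == "b2sls":              # same value as Jackknife 2SLS (Theorem 1)
        return lead + (K / n) * (s2_u * s2_e + s_ue**2) / H**2
    raise ValueError(estimator)
```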
Higher order MSE comparison alone would suggest that LIML and B2SLS should be preferred to 2SLS.
Unfortunately, it is well-known that LIML and Nagar have the "moment" problem. If
$(\varepsilon_i, u_i)$ has a bivariate normal distribution, it is known that LIML and B2SLS do
not possess any moments. On the other hand, it is known that 2SLS does not have a moment
problem. See Mariano and Sawa (1972) or Sawa (1972). This theoretical property implies that LIML
and B2SLS have thicker tails than 2SLS. It would be nice if the moment problem could be
dismissed as a mere academic curiosity. Unfortunately, we found in Monte Carlo that LIML and
B2SLS tend to have bigger spread (measured in terms of interquartile range, etc.) than 2SLS for
some parameter combinations. In this sense, 2SLS can still be viewed as a reasonable contender
to LIML and B2SLS.
Given that the poor higher order MSE property of 2SLS stems from its bias, we may hope to
improve 2SLS by eliminating its finite sample bias through the jackknife. Jackknife 2SLS may
turn out to be a reasonable contender given that it can be expressed as a linear combination of
2SLS-type estimators, and hence is free of the moment problem. This is because the jackknife
estimator of the bias is given by
$$ \frac{n-1}{n}\sum_i \left( \frac{\widehat\pi_{(i)}'\sum_{j\neq i} z_j y_j}{\widehat\pi_{(i)}'\sum_{j\neq i} z_j x_j} - \frac{\widehat\pi'\sum_i z_i y_i}{\widehat\pi'\sum_i z_i x_i} \right)
= \frac{n-1}{n}\sum_i \left( \frac{\widehat\pi_{(i)}'\sum_{j\neq i} z_j \varepsilon_j}{\widehat\pi_{(i)}'\sum_{j\neq i} z_j x_j} - \frac{x'P\varepsilon}{x'Px} \right) \qquad (1) $$
and the corresponding jackknife estimator is given by
$$ b_J = \frac{\widehat\pi'\sum_i z_i y_i}{\widehat\pi'\sum_i z_i x_i} - \frac{n-1}{n}\sum_i \left( \frac{\widehat\pi_{(i)}'\sum_{j\neq i} z_j y_j}{\widehat\pi_{(i)}'\sum_{j\neq i} z_j x_j} - \frac{\widehat\pi'\sum_i z_i y_i}{\widehat\pi'\sum_i z_i x_i} \right)
= n\,\frac{\widehat\pi'\sum_i z_i y_i}{\widehat\pi'\sum_i z_i x_i} - \frac{n-1}{n}\sum_i \frac{\widehat\pi_{(i)}'\sum_{j\neq i} z_j y_j}{\widehat\pi_{(i)}'\sum_{j\neq i} z_j x_j}. $$
Here, $\widehat\pi$ denotes the OLS estimator of the first stage coefficient $\pi$, and
$\widehat\pi_{(i)}$ denotes the OLS estimator based on every observation except the $i$th.
Observe that $b_J$ is a linear combination of
$$ \frac{\widehat\pi'\sum_i z_i y_i}{\widehat\pi'\sum_i z_i x_i}, \quad
\frac{\widehat\pi_{(1)}'\sum_{j\neq 1} z_j y_j}{\widehat\pi_{(1)}'\sum_{j\neq 1} z_j x_j}, \quad \ldots, \quad
\frac{\widehat\pi_{(n)}'\sum_{j\neq n} z_j y_j}{\widehat\pi_{(n)}'\sum_{j\neq n} z_j x_j}, $$
all of which have finite moments if the degree of overidentification is sufficiently large
($K > 2$). See, e.g., Mariano (1972). Therefore, $b_J$ has finite second moments if the degree
of overidentification is large.
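In code, the delete-one construction reads as follows. This is a minimal sketch of our own,
looping over the $n$ first-stage regressions directly rather than using a leave-one-out updating
formula:

```python
import numpy as np

def jackknife_2sls(y, x, Z):
    """Jackknife 2SLS: b_J = n*b - ((n - 1)/n) * sum_i b_(i), where b_(i)
    recomputes both the first stage and the ratio without observation i."""
    n = len(y)
    pi_hat = np.linalg.solve(Z.T @ Z, Z.T @ x)
    b = (pi_hat @ (Z.T @ y)) / (pi_hat @ (Z.T @ x))   # full-sample 2SLS
    b_loo = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        Zi, xi, yi = Z[keep], x[keep], y[keep]
        pi_i = np.linalg.solve(Zi.T @ Zi, Zi.T @ xi)
        b_loo[i] = (pi_i @ (Zi.T @ yi)) / (pi_i @ (Zi.T @ xi))
    return n * b - (n - 1) / n * b_loo.sum()
```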
We show that, for large $K$, the approximate MSE of the jackknife 2SLS is the same as that of
Nagar's estimator or the JIVE. As in Donald and Newey (1998), we let
$h \equiv f'\varepsilon/\sqrt{n}$. We impose the following assumptions. First, we assume
normality:¹

Condition 1 (i) $(\varepsilon_i, u_i)'$, $i = 1,\ldots,n$, are i.i.d.; (ii)
$(\varepsilon_i, u_i)'$ has a bivariate normal distribution with mean equal to zero.

We also assume that $z_i$ is a sequence of nonstochastic column vectors satisfying

Condition 2 $\max_i P_{ii} = O(1/n)$, where $P_{ii}$ denotes the $(i,i)$-element of
$P \equiv Z(Z'Z)^{-1}Z'$.

Condition 3 (i) $\max_i |f_i| = \max_i |z_i'\pi| = O(n^{1/r})$ for some $r$ sufficiently large
($r > 3$); (ii) $\frac{1}{n}\sum_i f_i^6 = O(1)$.²

¹We expect that our result would remain valid under a symmetry assumption as in Donald and Newey
(1998), although such a generalization is expected to be substantially more complicated.
²If $\{f_i\}$ is a realization of a sequence of i.i.d. random variables such that
$E[|f_i|^r] < \infty$ for $r$ sufficiently large, Condition 3(i) may be justified in a
probabilistic sense. See Lemma 1 in the Appendix.
After some algebra, it can be shown that
$$ \widehat\pi_{(i)}'\sum_{j\neq i} z_j\varepsilon_j = x'P\varepsilon + \delta_{1i}, \qquad
\widehat\pi_{(i)}'\sum_{j\neq i} z_j x_j = x'Px + \delta_{2i}, $$
where
$$ \delta_{1i} \equiv -x_i\varepsilon_i + (1-P_{ii})^{-1}(Mx)_i(M\varepsilon)_i, \qquad
\delta_{2i} \equiv -x_i^2 + (1-P_{ii})^{-1}(Mx)_i^2. $$
Here, $(Mx)_i$ denotes the $i$th element of $Mx$, and $M \equiv I - P$. We may therefore write
the jackknife estimator of the bias as
$$ \frac{n-1}{n}\sum_i \left( \frac{x'P\varepsilon + \delta_{1i}}{x'Px + \delta_{2i}} - \frac{x'P\varepsilon}{x'Px} \right)
= \frac{n-1}{n}\sum_i \left( \frac{\delta_{1i}}{x'Px} - \frac{x'P\varepsilon}{(x'Px)^2}\delta_{2i} - \frac{\delta_{1i}\delta_{2i}}{(x'Px)^2} + \frac{x'P\varepsilon}{(x'Px)^3}\delta_{2i}^2 \right) + R_n, $$
where
$$ R_n \equiv \frac{n-1}{n^4}\,\frac{1}{\left(\frac{1}{n}x'Px\right)^2}\sum_i \frac{\delta_{1i}\delta_{2i}^2}{\frac{1}{n}x'Px + \frac{1}{n}\delta_{2i}}
- \frac{n-1}{n^4}\,\frac{\frac{1}{n}x'P\varepsilon}{\left(\frac{1}{n}x'Px\right)^3}\sum_i \frac{\delta_{2i}^3}{\frac{1}{n}x'Px + \frac{1}{n}\delta_{2i}}. $$
By Lemma 2 in the Appendix, we have
$$ n^{3/2}R_n = O_p\!\left( \frac{1}{n\sqrt{n}}\sum_i \left|\delta_{1i}\delta_{2i}^2\right| + \frac{1}{n\sqrt{n}}\sum_i \left|\delta_{2i}^3\right| \right) = o_p(1), $$
and we can therefore ignore it in the computations that follow.
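The identities behind $\delta_{1i}$ and $\delta_{2i}$ are exact linear algebra (Sherman-Morrison
updating of the first stage), not approximations, so they can be verified numerically on
arbitrary simulated data. A small check of our own:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 50, 5
Z = rng.standard_normal((n, K))
u, eps = rng.standard_normal(n), rng.standard_normal(n)
x = Z @ rng.standard_normal(K) + u
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
M = np.eye(n) - P
Mx, Me = M @ x, M @ eps
xPe = x @ P @ eps
for i in range(n):
    keep = np.arange(n) != i
    pi_i = np.linalg.solve(Z[keep].T @ Z[keep], Z[keep].T @ x[keep])
    d1i = -x[i] * eps[i] + Mx[i] * Me[i] / (1 - P[i, i])
    # pi_(i)' sum_{j != i} z_j eps_j equals x'P eps + delta_1i exactly
    assert np.isclose(pi_i @ (Z[keep].T @ eps[keep]), xPe + d1i)
```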
We now examine the resultant bias corrected estimator (1), ignoring $R_n$:
$$ H\sqrt{n}\left( \frac{x'P\varepsilon}{x'Px} - \frac{n-1}{n}\sum_i \left( \frac{x'P\varepsilon + \delta_{1i}}{x'Px + \delta_{2i}} - \frac{x'P\varepsilon}{x'Px} \right) + R_n \right) $$
$$ = H\sqrt{n}\,\frac{x'P\varepsilon}{x'Px}
- \frac{n-1}{n}\,\frac{H}{\frac{1}{n}x'Px}\left( \frac{1}{\sqrt{n}}\sum_i \delta_{1i} \right)
+ \frac{n-1}{n}\,\frac{H}{\frac{1}{n}x'Px}\left( \frac{\frac{1}{\sqrt{n}}x'P\varepsilon}{\frac{1}{n}x'Px} \right)\left( \frac{1}{n}\sum_i \delta_{2i} \right) $$
$$ \quad + \frac{n-1}{n}\,\frac{1}{H}\left( \frac{H}{\frac{1}{n}x'Px} \right)^2\left( \frac{1}{n\sqrt{n}}\sum_i \delta_{1i}\delta_{2i} \right)
- \frac{n-1}{n}\,\frac{1}{H}\left( \frac{H}{\frac{1}{n}x'Px} \right)^2\left( \frac{\frac{1}{\sqrt{n}}x'P\varepsilon}{\frac{1}{n}x'Px} \right)\left( \frac{1}{n^2}\sum_i \delta_{2i}^2 \right). \qquad (2) $$
Theorem 1 below is obtained by squaring and taking the expectation of the RHS of (2):

Theorem 1 Assume that Conditions 1, 2, and 3 are satisfied. Then the approximate MSE of
$\sqrt{n}(b_J - \beta)$ for the jackknife estimator up to $O(K/n)$ is given by
$$ \frac{\sigma_\varepsilon^2}{H} + \frac{K}{n}\,\frac{\sigma_u^2\sigma_\varepsilon^2 + \sigma_{u\varepsilon}^2}{H^2}. $$
Proof. See Appendix.
Theorem 1 indicates that the higher order MSE of Jackknife 2SLS is equivalent to that of Nagar's
(1959) estimator if the number of instruments is sufficiently large. See Donald and Newey
(1998). Therefore, the Jackknife does not increase variance too much. Although it has long been
known that the Jackknife does reduce bias, the literature has been hesitant to recommend its
use, primarily because of the concern that variance may increase too much due to Jackknife bias
reduction. See Shao and Tu (1995, p. 65), for example.
Theorem 1 also indicates that the higher order MSE of Jackknife 2SLS is bigger than that of
LIML. In some sense, this result is not surprising. In Hahn and Hausman (2000), we demonstrated
that LIML is approximately equivalent to the optimal linear combination of two Nagar estimators
based on forward and reverse specifications. Jackknife 2SLS is solely based on forward 2SLS, and
ignores the information contained in reverse 2SLS. Therefore, it is quite natural that LIML
dominates Jackknife 2SLS.
3 Monte Carlo
We generated
$$ y_i = x_i\beta + \varepsilon_i, \qquad x_i = z_i'\pi + u_i, \qquad i = 1,\ldots,n, $$
such that $z_i \sim N(0, I_K)$, $\mathrm{Var}(\varepsilon_i) = \mathrm{Var}(u_i) = 1$, and
$\beta = 0$. We let $\pi = (\bar\pi, \ldots, \bar\pi)'$, such that
$$ R^2 \equiv \frac{\pi'E[z_i z_i']\pi}{\pi'E[z_i z_i']\pi + \mathrm{Var}(u_i)} = \frac{K\bar\pi^2}{K\bar\pi^2 + 1}. $$
Here, $R^2$ denotes the theoretical $R^2$ of the first stage regression. We considered
combinations of the following parameters:
$$ n = 100, 500, 1000; \qquad K = 5, 10, 30; \qquad R^2 = .001, .1, .3; \qquad
\rho = \mathrm{Cov}(\varepsilon_i, u_i) = 0, .5, .9. $$
Results based on 5,000 Monte Carlo runs are summarized in Tables 1 - 3.
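One draw from this design can be generated as follows. This is a replication sketch of our own;
`pibar` solves $R^2 = K\bar\pi^2/(K\bar\pi^2 + 1)$:

```python
import numpy as np

def simulate(n, K, R2, rho, beta=0.0, seed=0):
    """One sample from the Monte Carlo design of Section 3."""
    rng = np.random.default_rng(seed)
    pibar = np.sqrt(R2 / (K * (1.0 - R2)))        # K*pibar^2/(K*pibar^2+1) = R2
    Z = rng.standard_normal((n, K))
    errors = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    eps, u = errors[:, 0], errors[:, 1]
    x = Z @ np.full(K, pibar) + u
    return x * beta + eps, x, Z                   # returns (y, x, Z)
```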
We first discuss the sample size 100 case in Table 1. In the upper panel the "moment problem"
appears for both LIML and the Nagar estimator, with both the mean and RMSE considerably larger
than for J2SLS. Moreover, for low $R^2 = .001$, which corresponds to the weak instrument setup,
J2SLS does considerably better than either LIML or Nagar. The interquartile range for J2SLS is
about one-half as large as for the other estimators. As the $R^2$ increases, the superiority of
J2SLS is not as great. However, it is typically better than the other estimators in terms of the
interquartile range. When LIML does better than J2SLS, it is only by a very small amount. In the
middle and lower panels of Table 1, as the number of instruments increases, which exacerbates
the weak instrument problem, the superiority of J2SLS with respect to the interquartile range
increases. Now, for the low $R^2$ situation, its interquartile range is approximately
one-quarter as large as that of LIML or the Nagar estimator. However, the most interesting
finding may be that the "classical" 2SLS estimator typically does better than any of the second
order unbiased estimators in terms of the interquartile range. Thus, while LIML and the Nagar
estimator demonstrate their expected superiority in terms of lower median bias, the finite
sample performance of 2SLS in terms of the interquartile range is striking.
In Table 2 we increase the sample size to 500 while the other parameters remain the same. In
terms of the interquartile range we again find that J2SLS is often superior to LIML and the
Nagar estimator. In no situation does LIML have a significant superiority over J2SLS, although
it is slightly better in a few cases. Once again, classical 2SLS does better than the other
three estimators in terms of the interquartile range, especially when $R^2$ is very low. Thus,
in the weak instrument situations of low $R^2$ and high $K$, regular 2SLS has much to recommend
it. Lastly, in Table 3 we increase the sample size to 1000, again keeping the other parameters
constant. Now, only in the low $R^2$ case does J2SLS do better than LIML or the Nagar estimator.
In the other situations, LIML does as well as J2SLS or slightly better. However, LIML never
demonstrates a marked superiority in terms of the interquartile range. Once again, regular 2SLS
does best in terms of the interquartile range.³,⁴

Summing up, even for sample sizes of 1000 the superior performance of LIML with respect to
median unbiasedness is counteracted by the "moment" problem. The moment problem often leads to
high RMSE and a large interquartile range, especially when $R^2$ is low, the number of
instruments is high, or the correlation between the two equations' stochastic disturbances is
large. All of these situations are characteristic of the weak instrument situation as discussed
by Hahn and Hausman (2000). Thus, we suggest caution in using either LIML or the Nagar estimator
in the weak instrument situation. J2SLS or regular 2SLS may offer better properties depending on
the (implicit) finite sample risk function in use. We also recommend the use of the
Hahn-Hausman (2000) specification test as a means of ascertaining the degree of reliance
appropriate for the large sample approximations being used.

³In order to confirm that our result is not specific to the $\beta = 0$ case, we considered the
case where $\beta = -1$, $n = 100$, $K = 30$, $R^2 = .001$, $\rho = .5$. We found that the mean
biases of LIML, Nagar, 2SLS, and J2SLS are equal to 0.5544, 2.3186, 1.3002, 1.2965. The median
biases were equal to 1.2729, 1.3111, 1.3005, 1.2948, and the RMSEs were equal to 42.3779,
60.4197, 1.3103, 1.3450. Finally, the interquartile ranges were equal to 1.7120, 1.1807, 0.2144,
0.4365. We repeated the Monte Carlo for the case where $\beta = -1.8$, $n = 100$, $K = 30$,
$R^2 = .001$, $\rho = .9$. The mean biases were equal to 8.0161, 1.2714, 0.8995, 0.8976, and the
median biases were equal to 0.8920, 0.9024, 0.8984, 0.8978. The RMSEs were equal to 514.8346,
27.0408, 0.9033, 0.9162, and the interquartile ranges were equal to 0.8811, 0.6009, 0.1080,
0.2190. These numbers suggest that our results hold quite generally.
⁴We also examined the properties of Angrist, Imbens, and Krueger's (1995) JIVE in a small scale
Monte Carlo experiment. For the case where $\beta = 0$, $n = 100$, $K = 30$, $R^2 = .001$,
$\rho = .9$, similar to Table 1, we found that the JIVE has mean bias equal to 0.6183, median
bias equal to 0.8970, RMSE equal to 17.9269, and interquartile range equal to 0.6346. For the
case where $\beta = 0$, $n = 100$, $K = 30$, $R^2 = .1$, $\rho = .5$, we found that the JIVE has
mean bias equal to 0.2938, median bias equal to 0.1639, RMSE equal to 13.0949, and interquartile
range equal to 0.8962. These numbers suggest that the JIVE may well have a "moment" problem
similar to LIML and the Nagar estimator.
Appendix
A Higher Order Expansion
We first present two lemmas:

Lemma 1 Let $\upsilon_i$ be a sample of $n$ independent random variables with
$\max_i E[|\upsilon_i|^r] < c^r < \infty$ for some constant $0 < c < \infty$ and some
$1 < r < \infty$. Then $\max_i |\upsilon_i| = O_p(n^{1/r})$.
Proof. By Jensen's inequality, we have
$$ E\left[\max_i |\upsilon_i|\right] \le \left( E\left[\max_i |\upsilon_i|^r\right] \right)^{1/r}
\le \left( \sum_i E[|\upsilon_i|^r] \right)^{1/r}
\le \left( n\max_i E[|\upsilon_i|^r] \right)^{1/r}
= n^{1/r}\left( \max_i E[|\upsilon_i|^r] \right)^{1/r} \le n^{1/r}c. $$
The conclusion follows by the Markov inequality.
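A quick simulation illustrates the rate (our own illustration, not part of the paper): for
heavy-tailed i.i.d. draws with $E|\upsilon|^r < \infty$, the sample maximum scales like
$n^{1/r}$.

```python
import numpy as np

rng = np.random.default_rng(2)
for n in (10**3, 10**4, 10**5):
    v = rng.standard_t(df=4, size=n)     # E|v|^r < infinity for r < 4
    print(n, np.abs(v).max() / n**0.25)  # ratio stays roughly stable in n
```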
Lemma 2 Assume that Conditions 2 and 3 are satisfied. Further assume that
$E[|\varepsilon_i|^r] < \infty$ and $E[|u_i|^r] < \infty$ for $r$ sufficiently large ($r > 3$).
We then have (i) $n^{-1/6}\max_i |\delta_{1i}| = o_p(1)$ and
$n^{-1/6}\max_i |\delta_{2i}| = o_p(1)$; and (ii)
$\frac{1}{n\sqrt{n}}\sum_i |\delta_{1i}\delta_{2i}^2| = o_p(1)$ and
$\frac{1}{n\sqrt{n}}\sum_i |\delta_{2i}^3| = o_p(1)$.
Proof. Note that
$$ \max_i |\delta_{1i}| \le \left(\max_i |f_i| + \max_i |u_i|\right)\cdot\max_i |\varepsilon_i|
+ \max_i (1-P_{ii})^{-1}\cdot\left(\max_i |u_i| + \max_i |(Pu)_i|\right)\cdot\left(\max_i |\varepsilon_i| + \max_i |(P\varepsilon)_i|\right). $$
We have $\left(\max_i |f_i| + \max_i |u_i|\right)\cdot\max_i |\varepsilon_i| = O_p(n^{2/r})$ by
Lemma 1. Because $\max_i |(Pu)_i|^2 \le \max_i P_{ii}\cdot u'u$ and $\max_i P_{ii} = O(1/n)$, we
also have $\max_i |(Pu)_i| = O_p(1)$. Similarly, $\max_i |(P\varepsilon)_i| = O_p(1)$.
Therefore, we obtain $\max_i |\delta_{1i}| = o_p(n^{1/6})$. That
$\max_i |\delta_{2i}| = o_p(n^{1/6})$ can be established similarly. It then easily follows that
$\frac{1}{n\sqrt{n}}\sum_i |\delta_{1i}\delta_{2i}^2| \le \frac{1}{\sqrt{n}}\max_i |\delta_{1i}|\max_i |\delta_{2i}|^2 = o_p(1)$,
and $\frac{1}{n\sqrt{n}}\sum_i |\delta_{2i}^3| \le \frac{1}{\sqrt{n}}\max_i |\delta_{2i}|^3 = o_p(1)$.
We note from Donald and Newey (1998) that we have the following expansion:⁵
$$ H\sqrt{n}\,\frac{x'P\varepsilon}{x'Px} = \sum_{j=1}^{7} T_j + o_p\!\left(\frac{K}{n}\right), \qquad (3) $$
where
$$ T_1 = h = O_p(1), \quad T_2 = W_1 = O_p\!\left(\frac{K}{\sqrt{n}}\right), \quad
T_3 = -W_3\frac{1}{H}h = O_p\!\left(\frac{1}{\sqrt{n}}\right), $$
$$ T_4 = 0, \quad T_5 = -W_4\frac{1}{H}h = O_p\!\left(\frac{K}{n}\right), \quad
T_6 = -W_3\frac{1}{H}W_1 = O_p\!\left(\frac{K}{n}\right), \quad
T_7 = W_3^2\frac{1}{H^2}h = O_p\!\left(\frac{1}{n}\right), $$
and
$$ h = \frac{f'\varepsilon}{\sqrt{n}} = O_p(1), \quad
W_1 = \frac{u'P\varepsilon}{\sqrt{n}} = O_p\!\left(\frac{K}{\sqrt{n}}\right), \quad
W_3 = 2\frac{f'u}{n} = O_p\!\left(\frac{1}{\sqrt{n}}\right), \quad
W_4 = \frac{u'Pu}{n} = O_p\!\left(\frac{K}{n}\right). $$

⁵Our representation of Donald and Newey's result reflects our simplifying assumption that the
first stage is correctly specified.
We now expand $\frac{H}{\frac{1}{n}x'Px}$ and $\left(\frac{H}{\frac{1}{n}x'Px}\right)^2$ up to
$O_p(1/n)$. Because $\frac{1}{n}x'Px = H + W_3 + W_4$, we have
$$ \frac{H}{\frac{1}{n}x'Px} = \frac{H}{H + W_3 + W_4}
= 1 - \frac{1}{H}W_3 - \frac{1}{H}W_4 + \frac{1}{H^2}W_3^2 + o_p\!\left(\frac{K}{n}\right), \qquad (4) $$
$$ \left(\frac{H}{\frac{1}{n}x'Px}\right)^2
= 1 - 2\frac{1}{H}W_3 - 2\frac{1}{H}W_4 + 3\frac{1}{H^2}W_3^2 + o_p\!\left(\frac{K}{n}\right). \qquad (5) $$
We now expand $\frac{1}{\sqrt{n}}\sum_i \delta_{1i}$. Observe that
$$ \frac{1}{\sqrt{n}}\sum_i \delta_{1i}
= -\frac{1}{\sqrt{n}}\sum_i x_i\varepsilon_i + \frac{1}{\sqrt{n}}\sum_i (1-P_{ii})^{-1}(Mx)_i(M\varepsilon)_i $$
$$ = -h - \frac{1}{\sqrt{n}}u'\varepsilon + \frac{1}{\sqrt{n}}(Mu)'\bigl(I - \widetilde{P}\bigr)^{-1}(M\varepsilon)
= -h - \frac{1}{\sqrt{n}}u'\varepsilon + \frac{1}{\sqrt{n}}u'M\varepsilon + \frac{1}{\sqrt{n}}(Mu)'\overline{P}(M\varepsilon) $$
$$ = -h - \frac{1}{\sqrt{n}}u'P\varepsilon + \frac{1}{\sqrt{n}}u'\overline{P}\varepsilon
- \frac{1}{\sqrt{n}}u'\overline{P}P\varepsilon - \frac{1}{\sqrt{n}}u'P\overline{P}\varepsilon + \frac{1}{\sqrt{n}}u'P\overline{P}P\varepsilon $$
$$ = -h - \frac{1}{\sqrt{n}}u'C'\varepsilon - \frac{1}{\sqrt{n}}u'\overline{P}P\varepsilon + \frac{1}{\sqrt{n}}u'P\overline{P}P\varepsilon, \qquad (6) $$
where, as in Donald and Newey (1998), we let
$$ C \equiv P - \overline{P}(I - P) = P - \overline{P}M, \qquad
\overline{P} \equiv \widetilde{P}\bigl(I - \widetilde{P}\bigr)^{-1}, $$
and $\widetilde{P}$ is a diagonal matrix with the $P_{ii}$ on the diagonal. Now note that, by
Cauchy-Schwarz, $|u'\overline{P}P\varepsilon| \le \sqrt{u'u}\sqrt{\varepsilon'P\overline{P}^2P\varepsilon}$.
Because $u'u = O_p(n)$ and
$\varepsilon'P\overline{P}^2P\varepsilon \le \max_i\left(\frac{P_{ii}}{1-P_{ii}}\right)^2\varepsilon'P\varepsilon = O\!\left(\frac{1}{n^2}\right)O_p(K)$,
we obtain
$$ \left|\frac{u'\overline{P}P\varepsilon}{\sqrt{n}}\right|
\le \frac{1}{\sqrt{n}}\sqrt{u'u}\sqrt{\varepsilon'P\overline{P}^2P\varepsilon}
= \frac{1}{\sqrt{n}}\sqrt{O_p(n)}\sqrt{O\!\left(\frac{1}{n^2}\right)O_p(K)}
= O_p\!\left(\frac{\sqrt{K}}{n}\right), $$
$$ \left|\frac{u'P\overline{P}P\varepsilon}{\sqrt{n}}\right|
\le \frac{1}{\sqrt{n}}\sqrt{u'Pu}\sqrt{\varepsilon'P\overline{P}^2P\varepsilon}
= \frac{1}{\sqrt{n}}\sqrt{O_p(K)}\sqrt{O\!\left(\frac{1}{n^2}\right)O_p(K)}
= O_p\!\left(\frac{K}{n^{3/2}}\right) = o_p\!\left(\frac{K}{n}\right). $$
To conclude, we can write
$$ \frac{1}{\sqrt{n}}\sum_i \delta_{1i} = -h + W_5 + W_6 + o_p\!\left(\frac{K}{n}\right), \qquad (7) $$
where
$$ W_5 \equiv -\frac{1}{\sqrt{n}}u'C'\varepsilon = O_p\!\left(\frac{\sqrt{K}}{\sqrt{n}}\right), \qquad
W_6 \equiv \frac{1}{\sqrt{n}}u'\overline{P}P\varepsilon = O_p\!\left(\frac{\sqrt{K}}{n}\right). $$
We now expand $\left(\frac{H}{\frac{1}{n}x'Px}\right)\left(\frac{1}{\sqrt{n}}\sum_i \delta_{1i}\right)$
using (4) and (7):
$$ \left(\frac{H}{\frac{1}{n}x'Px}\right)\left(\frac{1}{\sqrt{n}}\sum_i \delta_{1i}\right)
= \left(1 - \frac{1}{H}W_3 - \frac{1}{H}W_4 + \frac{1}{H^2}W_3^2\right)(-h + W_5 + W_6) + o_p\!\left(\frac{K}{n}\right) $$
$$ = -h + W_3\frac{1}{H}h + W_4\frac{1}{H}h - W_3^2\frac{1}{H^2}h + W_5 + W_6 - \frac{1}{H}W_3W_5 + o_p\!\left(\frac{K}{n}\right) $$
$$ = -T_1 - T_3 - T_5 - T_7 + T_8 + T_9 + T_{10} + o_p\!\left(\frac{K}{n}\right), \qquad (8) $$
where
$$ T_8 \equiv W_5 = -\frac{1}{\sqrt{n}}u'C'\varepsilon = O_p\!\left(\frac{\sqrt{K}}{\sqrt{n}}\right), \qquad
T_9 \equiv W_6 = \frac{1}{\sqrt{n}}u'\overline{P}P\varepsilon = O_p\!\left(\frac{\sqrt{K}}{n}\right), $$
$$ T_{10} \equiv -\frac{1}{H}W_3W_5 = W_3\frac{1}{H}\frac{1}{\sqrt{n}}u'C'\varepsilon = O_p\!\left(\frac{\sqrt{K}}{n}\right). $$
We now expand $\frac{H}{\frac{1}{n}x'Px}\left(\frac{\frac{1}{\sqrt{n}}x'P\varepsilon}{\frac{1}{n}x'Px}\right)\left(\frac{1}{n}\sum_i \delta_{2i}\right)$.
We begin with the expansion of $\frac{1}{n}\sum_i \delta_{2i}$. As in (6), we can show that
$$ \frac{1}{n}\sum_i \delta_{2i} = -H - \frac{2}{n}f'u - \frac{1}{n}u'C'u - \frac{1}{n}u'\overline{P}Pu + \frac{1}{n}u'P\overline{P}Pu. $$
Because
$$ \left|u'P\overline{P}Pu\right| \le \max_i\left(\frac{P_{ii}}{1-P_{ii}}\right)\cdot u'Pu = O_p\!\left(\frac{K}{n}\right), $$
$$ \left|u'\overline{P}Pu\right| \le \sqrt{u'u}\sqrt{u'P\overline{P}^2Pu}
\le \sqrt{O_p(n)}\sqrt{\max_i\left(\frac{P_{ii}}{1-P_{ii}}\right)^2 u'Pu}
= O_p\!\left(\sqrt{\frac{K}{n}}\right), $$
we may write
$$ \frac{1}{n}\sum_i \delta_{2i} = -H - W_3 - W_7 + o_p\!\left(\frac{K}{n}\right), \qquad (9) $$
where
$$ W_7 \equiv \frac{1}{n}u'C'u = O_p\!\left(\frac{\sqrt{K}}{n}\right). $$
Combining (4) and (9), we obtain
$$ \frac{H}{\frac{1}{n}x'Px}\left(\frac{1}{n}\sum_i \delta_{2i}\right)
= \left(1 - \frac{1}{H}W_3 - \frac{1}{H}W_4 + \frac{1}{H^2}W_3^2\right)(-H - W_3 - W_7) + o_p\!\left(\frac{K}{n}\right) $$
$$ = -H + W_3 + W_4 - \frac{1}{H}W_3^2 - W_3 + \frac{1}{H}W_3^2 - W_7 + o_p\!\left(\frac{K}{n}\right)
= -H + W_4 - W_7 + o_p\!\left(\frac{K}{n}\right), $$
which, combined with (3), yields
$$ \frac{H}{\frac{1}{n}x'Px}\left(\frac{\frac{1}{\sqrt{n}}x'P\varepsilon}{\frac{1}{n}x'Px}\right)\left(\frac{1}{n}\sum_i \delta_{2i}\right)
= \frac{1}{H}\left(\sum_{j=1}^{7}T_j\right)(-H + W_4 - W_7) + o_p\!\left(\frac{K}{n}\right) $$
$$ = -\sum_{j=1}^{7}T_j + W_4\frac{1}{H}h - W_7\frac{1}{H}h + o_p\!\left(\frac{K}{n}\right)
= -\sum_{j=1}^{7}T_j - T_5 + T_{11} + o_p\!\left(\frac{K}{n}\right), \qquad (10) $$
where
$$ T_{11} \equiv -W_7\frac{1}{H}h = O_p\!\left(\frac{\sqrt{K}}{n}\right). $$
We now examine $\frac{1}{H}\left(\frac{H}{\frac{1}{n}x'Px}\right)^2\left(\frac{1}{n\sqrt{n}}\sum_i \delta_{1i}\delta_{2i}\right)$.
Later, in Section B.2.1, it is shown that
$$ \frac{1}{n\sqrt{n}}\sum_i \delta_{1i}\delta_{2i} = o_p\!\left(\frac{1}{\sqrt{n}}\right). $$
Therefore, we should have
$$ \frac{1}{H}\left(\frac{H}{\frac{1}{n}x'Px}\right)^2\left(\frac{1}{n\sqrt{n}}\sum_i \delta_{1i}\delta_{2i}\right)
= T_{12} + o_p\!\left(\frac{K}{n}\right), \qquad (11) $$
where
$$ T_{12} \equiv \frac{1}{H}\frac{1}{n\sqrt{n}}\sum_i \delta_{1i}\delta_{2i} = o_p\!\left(\frac{1}{\sqrt{n}}\right). $$
Now, we examine $\frac{1}{H}\left(\frac{H}{\frac{1}{n}x'Px}\right)^2\left(\frac{\frac{1}{\sqrt{n}}x'P\varepsilon}{\frac{1}{n}x'Px}\right)\left(\frac{1}{n^2}\sum_i \delta_{2i}^2\right)$.
Later, in Section B.2.3, it is shown that
$$ \frac{1}{n^2}\sum_i \delta_{2i}^2 = O_p\!\left(\frac{1}{n}\right). $$
Therefore, we have
$$ \frac{1}{H}\left(\frac{H}{\frac{1}{n}x'Px}\right)^2\left(\frac{\frac{1}{\sqrt{n}}x'P\varepsilon}{\frac{1}{n}x'Px}\right)\left(\frac{1}{n^2}\sum_i \delta_{2i}^2\right)
= T_{14} + o_p\!\left(\frac{K}{n}\right), \qquad (12) $$
where
$$ T_{14} \equiv \frac{1}{H^2}h\,\frac{1}{n^2}\sum_i \delta_{2i}^2 = O_p\!\left(\frac{1}{n}\right). $$
Combining (2), (3), (8), (10), (11), and (12), we obtain
$$ H\sqrt{n}\left(\frac{x'P\varepsilon}{x'Px} - \frac{n-1}{n}\sum_i\left(\frac{x'P\varepsilon + \delta_{1i}}{x'Px + \delta_{2i}} - \frac{x'P\varepsilon}{x'Px}\right) + R_n\right)
= T_1 + T_3 + T_7 - T_8 - T_9 - T_{10} + T_{11} + T_{12} - T_{14} + o_p\!\left(\frac{K}{n}\right). \qquad (13) $$
B Approximate MSE Calculation

In computing the (approximate) mean squared error, we keep terms up to $O_p(1/n)$. From (13), we
can see that the MSE of the jackknife estimator is approximately equal to
$$ E[T_1^2] + E[T_3^2] + E[T_8^2] + E[T_{12}^2]
+ 2E[T_1T_3] + 2E[T_1T_7] - 2E[T_1T_8] - 2E[T_1T_9] - 2E[T_1T_{10}] + 2E[T_1T_{11}]
+ 2E[T_1T_{12}] - 2E[T_1T_{14}] - 2E[T_3T_8]. \qquad (14) $$
Combining (14) with (15), (16), (17), (18), (19), (20), (21), (22), (23), (26), (37), and (38)
in the next two subsections, it can be shown that the approximate MSE up to $O_p(1/n)$ is given by
$$ \sigma_\varepsilon^2 H + \frac{K}{n}\left(\sigma_u^2\sigma_\varepsilon^2 + \sigma_{u\varepsilon}^2\right)
- \frac{12}{H}\frac{\sigma_u^2\sigma_\varepsilon^2}{n} + 20\frac{\sigma_{u\varepsilon}^2}{n} + \frac{12}{H}\frac{\sigma_u^4\sigma_\varepsilon^2}{n}, $$
which proves Theorem 1.
B.1 Approximate MSE Calculation: Intermediate Results That Only Require Symmetry

From Donald and Newey (1998), we can see that
$$ E[T_1^2] = \sigma_\varepsilon^2 H, \qquad (15) $$
$$ E[T_3^2] = \frac{4}{n}\left(\sigma_u^2\sigma_\varepsilon^2 + 2\sigma_{u\varepsilon}^2\right) + o\!\left(\frac{1}{n}\right), \qquad (16) $$
$$ E[T_1T_3] = 0, \qquad (17) $$
$$ E[T_1T_7] = \frac{4}{n}\left(\sigma_u^2\sigma_\varepsilon^2 + 2\sigma_{u\varepsilon}^2\right) + o\!\left(\frac{1}{n}\right), \qquad (18) $$
$$ E[T_8^2] = \frac{K}{n}\left(\sigma_u^2\sigma_\varepsilon^2 + \sigma_{u\varepsilon}^2\right) + o\!\left(\frac{K}{n}\sup_i P_{ii}\right). \qquad (19) $$
Also, by symmetry, we have
$$ E[T_1T_8] = 0, \qquad (20) $$
$$ E[T_1T_9] = 0. \qquad (21) $$
It remains to compute $E[T_{12}^2]$, $E[T_1T_{10}]$, $E[T_1T_{11}]$, $E[T_1T_{12}]$,
$E[T_1T_{14}]$, and $E[T_3T_8]$. We will take care of $E[T_{12}^2]$, $E[T_1T_{12}]$, and
$E[T_1T_{14}]$ in the next section.

Note that
$$ E[T_1T_{10}] = E[T_3T_8]
= E\!\left[\frac{2f'u}{n}\,\frac{1}{H}\,\frac{1}{\sqrt{n}}u'C'\varepsilon\,\frac{f'\varepsilon}{\sqrt{n}}\right]
= \frac{2}{n^2H}E\!\left[u'f\,f'\varepsilon\cdot u'C'\varepsilon\right]. $$
Using equation (18) of Donald and Newey (1998), we obtain
$$ E[u'f\,f'\varepsilon\cdot u'C'\varepsilon]
= \sum_{i=1}^{n}E[u_i^2\varepsilon_i^2 f_i^2 C'_{ii}]
+ \sum_{i=1}^{n}\sum_{j\neq i}E[u_i\varepsilon_i u_j\varepsilon_j f_i^2 C'_{jj}]
+ \sum_{i=1}^{n}\sum_{j\neq i}E[u_i^2\varepsilon_j^2 f_i f_j C'_{ij}]
+ \sum_{i=1}^{n}\sum_{j\neq i}E[u_i\varepsilon_j u_j\varepsilon_i f_i f_j C'_{ji}] $$
$$ = \sigma_u^2\sigma_\varepsilon^2\sum_{i=1}^{n}\sum_{j\neq i}f_i f_j C'_{ij}
+ \sigma_{u\varepsilon}^2\sum_{i=1}^{n}\sum_{j\neq i}f_i f_j C'_{ji}
= \sigma_u^2\sigma_\varepsilon^2 f'C'f + \sigma_{u\varepsilon}^2 f'Cf. $$
Therefore, we have
$$ E[T_1T_{10}] = E[T_3T_8]
= \frac{2}{nH}\,\frac{\sigma_u^2\sigma_\varepsilon^2 f'C'f + \sigma_{u\varepsilon}^2 f'Cf}{n}
= \frac{2}{nH}\left(\sigma_u^2\sigma_\varepsilon^2 H + \sigma_{u\varepsilon}^2 H + o\!\left(\frac{1}{n}\right)\right)
= \frac{2}{n}\left(\sigma_u^2\sigma_\varepsilon^2 + \sigma_{u\varepsilon}^2\right) + o\!\left(\frac{1}{n^2}\right), \qquad (22) $$
where the second equality is based on equation (20) of Donald and Newey (1998).
Now, note that
$$ E[T_1T_{11}] = -\frac{1}{n^2H}E[u'Cu\cdot \varepsilon'f\,f'\varepsilon] $$
and
$$ E[\varepsilon'f\,f'\varepsilon\cdot u'Cu]
= \sum_{i=1}^{n}E[u_i^2\varepsilon_i^2 f_i^2 C_{ii}]
+ \sum_{i=1}^{n}\sum_{j\neq i}E[\varepsilon_i^2 u_j^2 f_i^2 C_{jj}]
+ \sum_{i=1}^{n}\sum_{j\neq i}E[\varepsilon_i\varepsilon_j u_i u_j f_i f_j C_{ij}]
+ \sum_{i=1}^{n}\sum_{j\neq i}E[\varepsilon_i\varepsilon_j u_j u_i f_i f_j C_{ji}]
= \sigma_{u\varepsilon}^2 f'C'f + \sigma_{u\varepsilon}^2 f'Cf. $$
Because $Cf = Pf - \overline{P}(I-P)f = PZ\pi - \overline{P}(I-P)Z\pi = Z\pi = f$, we obtain
$$ E[T_1T_{11}] = -2\frac{\sigma_{u\varepsilon}^2}{n}. \qquad (23) $$
B.2 Approximate MSE Calculation: Intermediate Results Based On Normality

Note that
$$ \delta_{1i}\delta_{2i} = x_i^3\varepsilon_i + (1-P_{ii})^{-2}(Mu)_i^3(M\varepsilon)_i
- (1-P_{ii})^{-1}x_i\varepsilon_i(Mu)_i^2 - (1-P_{ii})^{-1}x_i^2(Mu)_i(M\varepsilon)_i $$
$$ = f_i^3\varepsilon_i + 3f_i^2u_i\varepsilon_i + 3f_iu_i^2\varepsilon_i + u_i^3\varepsilon_i
+ (1-P_{ii})^{-2}(Mu)_i^3(M\varepsilon)_i - (1-P_{ii})^{-1}f_i\varepsilon_i(Mu)_i^2 $$
$$ \quad - (1-P_{ii})^{-1}u_i\varepsilon_i(Mu)_i^2 - (1-P_{ii})^{-1}f_i^2(Mu)_i(M\varepsilon)_i
- 2(1-P_{ii})^{-1}f_iu_i(Mu)_i(M\varepsilon)_i - (1-P_{ii})^{-1}u_i^2(Mu)_i(M\varepsilon)_i \qquad (24) $$
and
$$ \delta_{2i}^2 = \left(-f_i^2 - 2f_iu_i - u_i^2 + (1-P_{ii})^{-1}(Mu)_i^2\right)^2 $$
$$ = f_i^4 + 6f_i^2u_i^2 + u_i^4 + (1-P_{ii})^{-2}(Mu)_i^4
+ 4f_i^3u_i - 2f_i^2(1-P_{ii})^{-1}(Mu)_i^2 + 4f_iu_i^3
- 4f_iu_i(1-P_{ii})^{-1}(Mu)_i^2 - 2(1-P_{ii})^{-1}u_i^2(Mu)_i^2. \qquad (25) $$
B.2.1 $E[T_{12}^2]$

We first compute $E[T_{12}^2]$, noting that
$$ H^2E[T_{12}^2] \le \frac{10}{n^3}\sum_i f_i^6E[\varepsilon_i^2]
+ \frac{10}{n^3}\sum_i 9f_i^4E[(u_i\varepsilon_i)^2]
+ \frac{10}{n^3}\sum_i 9f_i^2E[(u_i^2\varepsilon_i)^2]
+ \frac{10}{n^3}\sum_i E[(u_i^3\varepsilon_i)^2] $$
$$ \quad + \frac{10}{n^3}\sum_i (1-P_{ii})^{-4}E[(Mu)_i^6(M\varepsilon)_i^2]
+ \frac{10}{n^3}\sum_i (1-P_{ii})^{-2}f_i^2E[\varepsilon_i^2(Mu)_i^4]
+ \frac{10}{n^3}\sum_i (1-P_{ii})^{-2}E[u_i^2\varepsilon_i^2(Mu)_i^4] $$
$$ \quad + \frac{10}{n^3}\sum_i (1-P_{ii})^{-2}f_i^4E[(Mu)_i^2(M\varepsilon)_i^2]
+ \frac{10}{n^3}\sum_i 4(1-P_{ii})^{-2}f_i^2E[u_i^2(Mu)_i^2(M\varepsilon)_i^2]
+ \frac{10}{n^3}\sum_i (1-P_{ii})^{-2}E[u_i^4(Mu)_i^2(M\varepsilon)_i^2]. $$
Under the assumption that $\frac{1}{n}\sum_i f_i^6 = O(1)$, the first four terms are all
$o(1/n)$. Below, we characterize the orders of the remaining terms.

We now compute $\frac{10}{n^3}\sum_i (1-P_{ii})^{-4}E[(Mu)_i^6(M\varepsilon)_i^2]$. We write
$$ \varepsilon_i \equiv \frac{\sigma_{u\varepsilon}}{\sigma_u^2}u_i + v_i, $$
where $v_i$ is independent of $u_i$. Because
$$ (1-P_{ii})^{-4}E[(Mu)_i^6(M\varepsilon)_i^2]
= (1-P_{ii})^{-4}\left(\frac{\sigma_{u\varepsilon}^2}{\sigma_u^4}105\,\mathrm{Var}((Mu)_i)^4 + 15\,\mathrm{Var}((Mu)_i)^3\,\mathrm{Var}((Mv)_i)\right) $$
$$ = (1-P_{ii})^{-4}\left(105\frac{\sigma_{u\varepsilon}^2}{\sigma_u^4}(1-P_{ii})^4\sigma_u^8
+ 15(1-P_{ii})^3\sigma_u^6(1-P_{ii})\left(\sigma_\varepsilon^2 - \frac{\sigma_{u\varepsilon}^2}{\sigma_u^2}\right)\right)
= 15\sigma_\varepsilon^2\sigma_u^6 + 90\sigma_{u\varepsilon}^2\sigma_u^4, $$
we have
$$ \frac{10}{n^3}\sum_i (1-P_{ii})^{-4}E[(Mu)_i^6(M\varepsilon)_i^2]
= \frac{10}{n^3}\sum_i \left(15\sigma_\varepsilon^2\sigma_u^6 + 90\sigma_{u\varepsilon}^2\sigma_u^4\right) = o\!\left(\frac{1}{n}\right). $$

We now compute $\frac{10}{n^3}\sum_i (1-P_{ii})^{-2}f_i^2E[\varepsilon_i^2(Mu)_i^4]$. Because
$$ (1-P_{ii})^{-2}E[\varepsilon_i^2(Mu)_i^4]
= (1-P_{ii})^{-2}E\left[(Mu)_i^4\left((P\varepsilon)_i^2 + (M\varepsilon)_i^2\right)\right] $$
$$ = (1-P_{ii})^{-2}\cdot 3\,\mathrm{Var}((Mu)_i)^2\cdot\mathrm{Var}((P\varepsilon)_i)
+ (1-P_{ii})^{-2}\left(\frac{\sigma_{u\varepsilon}^2}{\sigma_u^4}15\,\mathrm{Var}((Mu)_i)^3 + 3\,\mathrm{Var}((Mu)_i)^2\,\mathrm{Var}((Mv)_i)\right) $$
$$ = 3P_{ii}\sigma_\varepsilon^2\sigma_u^4 + 15(1-P_{ii})\sigma_{u\varepsilon}^2\sigma_u^2
+ 3(1-P_{ii})\left(\sigma_\varepsilon^2\sigma_u^4 - \sigma_{u\varepsilon}^2\sigma_u^2\right)
= 3\sigma_\varepsilon^2\sigma_u^4 + 12(1-P_{ii})\sigma_{u\varepsilon}^2\sigma_u^2, $$
we have
$$ \frac{10}{n^3}\sum_i (1-P_{ii})^{-2}f_i^2E[\varepsilon_i^2(Mu)_i^4]
\le \left(3\sigma_\varepsilon^2\sigma_u^4 + 12\sigma_{u\varepsilon}^2\sigma_u^2\right)\frac{10}{n^3}\sum_i f_i^2 = o\!\left(\frac{1}{n}\right). $$

We now compute $\frac{10}{n^3}\sum_i (1-P_{ii})^{-2}E[u_i^2\varepsilon_i^2(Mu)_i^4]$. Because
$$ E[u_i^2\varepsilon_i^2(Mu)_i^4]
= E\left[(Mu)_i^4\left((Pu)_i^2 + (Mu)_i^2\right)\left((P\varepsilon)_i^2 + (M\varepsilon)_i^2\right)\right] $$
$$ = E[(Mu)_i^4]E[(Pu)_i^2(P\varepsilon)_i^2] + E[(Mu)_i^6]E[(P\varepsilon)_i^2]
+ E[(Mu)_i^4(M\varepsilon)_i^2]E[(Pu)_i^2] + E[(Mu)_i^6(M\varepsilon)_i^2] $$
$$ = 3(1-P_{ii})^2P_{ii}^2\sigma_u^4\left(\sigma_u^2\sigma_\varepsilon^2 + 2\sigma_{u\varepsilon}^2\right)
+ 15(1-P_{ii})^3P_{ii}\sigma_u^6\sigma_\varepsilon^2
+ (1-P_{ii})^3P_{ii}\left(12\sigma_{u\varepsilon}^2\sigma_u^2 + 3\sigma_\varepsilon^2\sigma_u^4\right)\sigma_u^2
+ (1-P_{ii})^4\left(15\sigma_\varepsilon^2\sigma_u^6 + 90\sigma_{u\varepsilon}^2\sigma_u^4\right), $$
it easily follows that
$$ \frac{10}{n^3}\sum_i (1-P_{ii})^{-2}E[u_i^2\varepsilon_i^2(Mu)_i^4] = o\!\left(\frac{1}{n}\right). $$

We now compute $\frac{10}{n^3}\sum_i (1-P_{ii})^{-2}f_i^4E[(Mu)_i^2(M\varepsilon)_i^2]$. Because
$$ E[(Mu)_i^2(M\varepsilon)_i^2] = (1-P_{ii})^2\left(\sigma_u^2\sigma_\varepsilon^2 + 2\sigma_{u\varepsilon}^2\right), $$
it easily follows that
$$ \frac{10}{n^3}\sum_i (1-P_{ii})^{-2}f_i^4E[(Mu)_i^2(M\varepsilon)_i^2] = o\!\left(\frac{1}{n}\right). $$

We now compute $\frac{10}{n^3}\sum_i 4(1-P_{ii})^{-2}f_i^2E[u_i^2(Mu)_i^2(M\varepsilon)_i^2]$. Because
$$ E[u_i^2(Mu)_i^2(M\varepsilon)_i^2]
= E\left[\left((Mu)_i^2 + (Pu)_i^2\right)(Mu)_i^2(M\varepsilon)_i^2\right]
= (1-P_{ii})^3\left(12\sigma_{u\varepsilon}^2\sigma_u^2 + 3\sigma_\varepsilon^2\sigma_u^4\right)
+ P_{ii}(1-P_{ii})^2\sigma_u^2\left(\sigma_u^2\sigma_\varepsilon^2 + 2\sigma_{u\varepsilon}^2\right), $$
it easily follows that
$$ \frac{10}{n^3}\sum_i 4(1-P_{ii})^{-2}f_i^2E[u_i^2(Mu)_i^2(M\varepsilon)_i^2]
= \left(12\sigma_{u\varepsilon}^2\sigma_u^2 + 3\sigma_\varepsilon^2\sigma_u^4\right)\frac{40}{n^3}\sum_i f_i^2(1-P_{ii})
+ \sigma_u^2\left(\sigma_u^2\sigma_\varepsilon^2 + 2\sigma_{u\varepsilon}^2\right)\frac{40}{n^3}\sum_i f_i^2P_{ii}(1-P_{ii})^2
= o\!\left(\frac{1}{n}\right). $$

We finally compute $\frac{10}{n^3}\sum_i (1-P_{ii})^{-2}E[u_i^4(Mu)_i^2(M\varepsilon)_i^2]$. Because
$$ E[u_i^4(Mu)_i^2(M\varepsilon)_i^2]
= E\left[\left((Mu)_i^4 + 2(Mu)_i^2(Pu)_i^2 + (Pu)_i^4\right)(Mu)_i^2(M\varepsilon)_i^2\right] $$
$$ = E[(Mu)_i^6(M\varepsilon)_i^2] + 2E[(Pu)_i^2]E[(Mu)_i^4(M\varepsilon)_i^2] + E[(Pu)_i^4]E[(Mu)_i^2(M\varepsilon)_i^2] $$
$$ = (1-P_{ii})^4\left(15\sigma_\varepsilon^2\sigma_u^6 + 90\sigma_{u\varepsilon}^2\sigma_u^4\right)
+ 2P_{ii}(1-P_{ii})^3\sigma_u^2\left(12\sigma_{u\varepsilon}^2\sigma_u^2 + 3\sigma_\varepsilon^2\sigma_u^4\right)
+ 3P_{ii}^2(1-P_{ii})^2\sigma_u^4\left(\sigma_u^2\sigma_\varepsilon^2 + 2\sigma_{u\varepsilon}^2\right), $$
it easily follows that
$$ \frac{10}{n^3}\sum_i (1-P_{ii})^{-2}E[u_i^4(Mu)_i^2(M\varepsilon)_i^2] = o\!\left(\frac{1}{n}\right). $$
To summarize, we have
$$ E[T_{12}^2] = o\!\left(\frac{1}{n}\right). \qquad (26) $$
B.2.2 $E[T_1T_{12}]$

We now compute $E[T_1T_{12}]$. We compute the expectation of the product of each term on the
right side of (24) with $f'\varepsilon$:
$$ E[(f'\varepsilon)(f_i^3\varepsilon_i)] = f_i^4\sigma_\varepsilon^2, \qquad (27) $$
$$ E[(f'\varepsilon)(3f_i^2u_i\varepsilon_i)] = 0, \qquad (28) $$
$$ E[(f'\varepsilon)(3f_iu_i^2\varepsilon_i)] = 3f_i^2\left(\sigma_u^2\sigma_\varepsilon^2 + 2\sigma_{u\varepsilon}^2\right), \qquad (29) $$
$$ E[(f'\varepsilon)(u_i^3\varepsilon_i)] = 0. \qquad (30) $$
Now note that
$$ E[Mu\,(f'u)] = \sigma_u^2Mf = 0, \quad E[Mu\,(f'\varepsilon)] = \sigma_{u\varepsilon}Mf = 0, \quad
E[M\varepsilon\,(f'u)] = \sigma_{u\varepsilon}Mf = 0, \quad E[M\varepsilon\,(f'\varepsilon)] = \sigma_\varepsilon^2Mf = 0, $$
which implies independence. Therefore, we have
$$ E\left[(f'\varepsilon)\cdot(1-P_{ii})^{-2}(Mu)_i^3(M\varepsilon)_i\right] = 0. \qquad (31) $$
Lemma 3 Suppose that $A, B, C$ are zero mean normal random variables. Also suppose that $A$ and
$B$ are independent of each other. Then $E[A^2BC] = \mathrm{Cov}(B,C)\,\mathrm{Var}(A)$.

Proof. Write
$$ C = \frac{\mathrm{Cov}(A,C)}{\mathrm{Var}(A)}A + \frac{\mathrm{Cov}(B,C)}{\mathrm{Var}(B)}B + v, $$
where $v$ is independent of $A$ and $B$. The conclusion easily follows.
Using Lemma 3, we obtain
$$ E\left[(f'\varepsilon)\cdot\left(-(1-P_{ii})^{-1}f_i\varepsilon_i(Mu)_i^2\right)\right]
= -(1-P_{ii})^{-1}\mathrm{Cov}(f'\varepsilon, f_i\varepsilon_i)\,\mathrm{Var}((Mu)_i)
= -(1-P_{ii})^{-1}f_i^2\sigma_\varepsilon^2(1-P_{ii})\sigma_u^2
= -f_i^2\sigma_\varepsilon^2\sigma_u^2. \qquad (32) $$
Symmetry implies
$$ E\left[(f'\varepsilon)\cdot\left(-(1-P_{ii})^{-1}u_i\varepsilon_i(Mu)_i^2\right)\right] = 0 \qquad (33) $$
and
$$ E\left[(f'\varepsilon)\cdot\left(-(1-P_{ii})^{-1}f_i^2(Mu)_i(M\varepsilon)_i\right)\right] = 0. \qquad (34) $$
Lemma 4 Suppose that $A, B, C, D$ are zero mean normal random variables. Also suppose that
$(A,B)$ and $C$ are independent of each other. Then
$E[ABCD] = \mathrm{Cov}(A,B)\,\mathrm{Cov}(C,D)$.

Proof. Write $D = \xi_1A + \xi_2B + \xi_3C + v$, where the $\xi$s denote regression
coefficients. Note that $\xi_3 = \mathrm{Cov}(C,D)/\mathrm{Var}(C)$ by independence. We then have
$$ ABCD = \xi_1A^2BC + \xi_2AB^2C + \xi_3ABC^2 + ABCv, $$
from which the conclusion follows.
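Both lemmas are easy to spot-check by simulation. The following is a quick Monte Carlo sanity
check of our own (not part of the paper); the particular coefficients are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((1_000_000, 4))

# Lemma 3: A independent of B; E[A^2 B C] = Cov(B, C) Var(A).
A, B = G[:, 0], G[:, 1]
C = 0.3 * A + 0.7 * B + 0.5 * G[:, 2]
print((A**2 * B * C).mean())       # close to 0.7 = Cov(B, C) * Var(A)

# Lemma 4: (A, B) independent of C; E[A B C D] = Cov(A, B) Cov(C, D).
A2 = G[:, 0]
B2 = 0.6 * G[:, 0] + 0.8 * G[:, 1]
C2 = G[:, 2]
D2 = 0.5 * A2 + 0.4 * C2 + 0.3 * G[:, 3]
print((A2 * B2 * C2 * D2).mean())  # close to 0.24 = 0.6 * 0.4
```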
Using Lemma 4, we obtain
$$ E\left[(f'\varepsilon)\cdot\left(-2(1-P_{ii})^{-1}f_iu_i(Mu)_i(M\varepsilon)_i\right)\right]
= -2(1-P_{ii})^{-1}\mathrm{Cov}((Mu)_i,(M\varepsilon)_i)\,\mathrm{Cov}(f'\varepsilon, f_iu_i)
= -2(1-P_{ii})^{-1}(1-P_{ii})\sigma_{u\varepsilon}f_i^2\sigma_{u\varepsilon}
= -2\sigma_{u\varepsilon}^2f_i^2. \qquad (35) $$
Finally, using symmetry again, we obtain
$$ E\left[(f'\varepsilon)\cdot\left(-(1-P_{ii})^{-1}u_i^2(Mu)_i(M\varepsilon)_i\right)\right] = 0. \qquad (36) $$
Combining (27) - (36), we obtain
$$ E[(f'\varepsilon)\cdot(\delta_{1i}\delta_{2i})] = f_i^4\sigma_\varepsilon^2 + 2f_i^2\left(\sigma_u^2\sigma_\varepsilon^2 + 2\sigma_{u\varepsilon}^2\right), $$
from which we obtain
$$ E[T_1T_{12}] = \frac{1}{H}\frac{1}{n^2}\sum_i E[(f'\varepsilon)\cdot(\delta_{1i}\delta_{2i})]
= \frac{1}{n}\frac{\sigma_\varepsilon^2}{H}\left(\frac{1}{n}\sum_i f_i^4\right)
+ 2\frac{\sigma_u^2\sigma_\varepsilon^2 + 2\sigma_{u\varepsilon}^2}{n}. \qquad (37) $$
B.2.3 $\frac{1}{n^2}\sum_{i=1}^{n}\delta_{2i}^2$

We compute $E\left[\frac{1}{n^2}\sum_{i=1}^{n}\delta_{2i}^2\right]$ and characterize its order
of magnitude. From (25), we can obtain
$$ E[\delta_{2i}^2] = f_i^4 + 4f_i^2\sigma_u^2 + 4P_{ii}\sigma_u^4, $$
and hence it follows that
$$ E\left[\frac{1}{n^2}\sum_{i=1}^{n}\delta_{2i}^2\right]
= \frac{1}{n}\left(\frac{1}{n}\sum_{i=1}^{n}f_i^4\right) + \frac{4H\sigma_u^2}{n} + o\!\left(\frac{K}{n}\right). $$
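Since the moment formula for $E[\delta_{2i}^2]$ relies only on normality of $u_i$, it can also
be spot-checked by simulation for a fixed design. A small check of our own (the design constants
are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, K, reps, i = 40, 4, 200_000, 0
Z = rng.standard_normal((n, K))
f = Z @ np.full(K, 0.3)                            # f lies in the span of Z
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
M = np.eye(n) - P
U = rng.standard_normal((reps, n))                 # sigma_u^2 = 1
X = f + U
d2 = -X[:, i]**2 + (X @ M.T)[:, i]**2 / (1 - P[i, i])
print((d2**2).mean(), f[i]**4 + 4 * f[i]**2 + 4 * P[i, i])  # should roughly agree
```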
B.2.4 $E[T_1T_{14}]$

We compute the expectation of the product of each term on the right hand side of (25) with
$(f'\varepsilon)^2$, noting the independence between $(Mu)_i$ and $f'\varepsilon$. We have
$$ E[(f'\varepsilon)^2\cdot f_i^4] = f'f\sigma_\varepsilon^2f_i^4 = nH\sigma_\varepsilon^2f_i^4, $$
$$ E[(f'\varepsilon)^2\cdot 6f_i^2u_i^2]
= 6f_i^2\left((f'f\sigma_\varepsilon^2)\sigma_u^2 + 2(f_i\sigma_{u\varepsilon})^2\right)
= 6nH\sigma_\varepsilon^2\sigma_u^2f_i^2 + 12\sigma_{u\varepsilon}^2f_i^4, $$
$$ E[(f'\varepsilon)^2\cdot u_i^4]
= 12f_i^2\sigma_{u\varepsilon}^2\sigma_u^2 + 3f'f\sigma_\varepsilon^2\sigma_u^4
= 3nH\sigma_\varepsilon^2\sigma_u^4 + 12f_i^2\sigma_{u\varepsilon}^2\sigma_u^2, $$
$$ E[(f'\varepsilon)^2\cdot(1-P_{ii})^{-2}(Mu)_i^4]
= (f'f\sigma_\varepsilon^2)\cdot 3\sigma_u^4 = 3nH\sigma_\varepsilon^2\sigma_u^4, $$
$$ E[(f'\varepsilon)^2\cdot(4f_i^3u_i)] = 0, $$
$$ E[(f'\varepsilon)^2\cdot\left(-2f_i^2(1-P_{ii})^{-1}(Mu)_i^2\right)]
= -2f_i^2f'f\sigma_\varepsilon^2\sigma_u^2 = -2nHf_i^2\sigma_\varepsilon^2\sigma_u^2, $$
$$ E[(f'\varepsilon)^2\cdot(4f_iu_i^3)] = 0, $$
$$ E[(f'\varepsilon)^2\cdot\left(-4f_iu_i(1-P_{ii})^{-1}(Mu)_i^2\right)] = 0, $$
$$ E[(f'\varepsilon)^2\cdot\left(-2(1-P_{ii})^{-1}u_i^2(Mu)_i^2\right)]
= -4f_i^2\sigma_{u\varepsilon}^2\sigma_u^2 - 4(1-P_{ii})nH\sigma_\varepsilon^2\sigma_u^4 - 2nH\sigma_\varepsilon^2\sigma_u^4. $$
Therefore,
$$ E\left[(f'\varepsilon)^2\sum_{i=1}^{n}\delta_{2i}^2\right]
= n^2H\sigma_\varepsilon^2\left(\frac{1}{n}\sum_{i=1}^{n}f_i^4\right)
+ 6n^2H^2\sigma_\varepsilon^2\sigma_u^2
+ 12n\sigma_{u\varepsilon}^2\left(\frac{1}{n}\sum_{i=1}^{n}f_i^4\right)
+ 3n^2H\sigma_\varepsilon^2\sigma_u^2 + 12nH\sigma_{u\varepsilon}^2\sigma_u^2 + 3n^2H\sigma_\varepsilon^2\sigma_u^2
- 2n^2H^2\sigma_\varepsilon^2\sigma_u^2
- 4nH\sigma_{u\varepsilon}^2\sigma_u^2 - 4(n-K)nH\sigma_\varepsilon^2\sigma_u^4 - 2n^2H\sigma_\varepsilon^2\sigma_u^4, $$
and therefore we have
$$ E[T_1T_{14}] = \frac{1}{H^2n^3}E\left[(f'\varepsilon)^2\sum_{i=1}^{n}\delta_{2i}^2\right]
= \frac{1}{n}\frac{1}{H}\sigma_\varepsilon^2\left(\frac{1}{n}\sum_{i=1}^{n}f_i^4\right)
+ \frac{1}{n}\left(\frac{6}{H}\sigma_\varepsilon^2\sigma_u^2 + 4\sigma_\varepsilon^2\sigma_u^2 - \frac{6}{H}\sigma_\varepsilon^2\sigma_u^4\right)
+ o\!\left(\frac{1}{n}\right). \qquad (38) $$
References

[1] Akahira, M. (1983), "Asymptotic Deficiency of the Jackknife Estimator", Australian Journal
of Statistics 25, 123-129.
[2] Anderson, T.W., and T. Sawa (1979), "Evaluation of the Distribution Function of the Two
Stage Least Squares Estimator", Econometrica 47, 163-182.
[3] Angrist, J.D., G.W. Imbens, and A. Krueger (1995), "Jackknife Instrumental Variables
Estimation", NBER Technical Working Paper No. 172.
[4] Bekker, P.A. (1994), "Alternative Approximations to the Distributions of Instrumental
Variable Estimators", Econometrica 62, 657-681.
[5] Donald, S.G., and W.K. Newey (1998), "Choosing the Number of Instruments", mimeo.
[6] Donald, S.G., and W.K. Newey (1999), "A Jackknife Interpretation of the Continuous Updating
Estimator", mimeo.
[7] Hahn, J., and J. Hausman (2000), "A New Specification Test of the Validity of Instrumental
Variables", forthcoming in Econometrica.
[8] Maddala, G.S., and J. Jeong (1992), "On the Exact Small Sample Distribution of the
Instrumental Variables Estimator", Econometrica 60, 181-183.
[9] Mariano, R.S. (1972), "The Existence of Moments of the Ordinary Least Squares and Two-Stage
Least Squares Estimators", Econometrica 40, 643-652.
[10] Mariano, R.S., and T. Sawa (1972), "The Exact Finite-Sample Distribution of the
Limited-Information Maximum Likelihood Estimator in the Case of Two Included Endogenous
Variables", Journal of the American Statistical Association 67, 159-163.
[11] Morimune, K. (1983), "Approximate Distributions of the k-Class Estimators when the Degree
of Overidentifiability is Large Compared with the Sample Size", Econometrica 51, 821-841.
[12] Nagar, A.L. (1959), "The Bias and Moment Matrix of the General k-Class Estimators of the
Parameters in Simultaneous Equations", Econometrica 27, 575-595.
[13] Newey, W.K., and R.J. Smith (2000), "Asymptotic Equivalence of Empirical Likelihood and
the Continuous Updating Estimator", mimeo.
[14] Rothenberg, T.J. (1983), "Asymptotic Properties of Some Estimators in Structural Models",
in Studies in Econometrics, Time Series, and Multivariate Statistics.
[15] Sawa, T. (1972), "Finite-Sample Properties of the k-Class Estimators", Econometrica 40,
653-680.
[16] Shao, J., and D. Tu (1995), The Jackknife and Bootstrap. New York: Springer-Verlag.
[17] Staiger, D., and J. Stock (1997), "Instrumental Variables Regression with Weak
Instruments", Econometrica 65, 557-586.
Table 1: Finite Sample Comparison of IV Estimators

n, K, R², ρ | Mean (LIML, Nagar, 2SLS, Jackknife) | RMSE (LIML, Nagar, 2SLS, Jackknife) | Median (LIML, Nagar, 2SLS, Jackknife) | Interquartile Range (LIML, Nagar, 2SLS, Jackknife)
100, 5, 0.001, 0 | 1.8670, -0.2704, -0.0037, -0.0238 | 188.5526, 29.9396, 0.5778, 2.2778 | -0.0024, 0.0227, 0.0064, 0.0089 | 1.9474, 1.3153, 0.6328, 1.0899
100, 5, 0.1, 0 | -0.0488, -1.3125, -0.0020, 0.0001 | 3.7974, 94.3252, 0.2874, 0.4239 | -0.0037, -0.0019, -0.0009, -0.0031 | 0.4841, 0.4411, 0.3442, 0.4206
100, 5, 0.3, 0 | -0.0007, -0.0010, -0.0009, -0.0010 | 0.1717, 0.1645, 0.1515, 0.1640 | -0.0002, 0.0004, 0.0015, 0.0011 | 0.2124, 0.2051, 0.1924, 0.2045
100, 5, 0.001, 0.5 | 0.6353, 0.4666, 0.4927, 0.5335 | 27.5698, 21.9502, 0.7016, 2.9508 | 0.4583, 0.4681, 0.4860, 0.4799 | 1.6976, 1.1589, 0.5541, 0.9409
100, 5, 0.1, 0.5 | -0.0528, -0.0084, 0.1197, 0.0067 | 2.1640, 2.1109, 0.2923, 0.5745 | 0.0030, 0.0470, 0.1344, 0.0615 | 0.4691, 0.4417, 0.3220, 0.4119
100, 5, 0.3, 0.5 | -0.0149, -0.0024, 0.0337, -0.0008 | 0.1709, 0.1682, 0.1522, 0.1664 | -0.0007, 0.0110, 0.0427, 0.0119 | 0.2086, 0.2080, 0.1898, 0.2088
100, 5, 0.001, 0.9 | 2.5881, -1.6707, 0.8785, 0.8533 | 110.5458, 185.5735, 0.9163, 1.6862 | 0.8423, 0.8749, 0.8766, 0.8638 | 0.9801, 0.6294, 0.2971, 0.5026
100, 5, 0.1, 0.9 | -0.2033, -0.0854, 0.2207, 0.0340 | 2.8753, 45.1669, 0.3119, 0.4124 | -0.0010, 0.0910, 0.2483, 0.1193 | 0.4250, 0.4215, 0.2575, 0.3745
100, 5, 0.3, 0.9 | -0.0244, -0.0032, 0.0617, -0.0004 | 0.1726, 0.1752, 0.1535, 0.1709 | 0.0001, 0.0224, 0.0781, 0.0226 | 0.2032, 0.2100, 0.1802, 0.2062
100, 10, 0.001, 0 | 0.8233, 0.5163, 0.0001, 0.0080 | 56.4218, 17.7069, 0.3478, 0.8226 | -0.0106, -0.0136, -0.0038, -0.0035 | 1.9874, 1.3891, 0.4307, 0.8164
100, 10, 0.1, 0 | 0.0948, -0.0187, 0.0021, 0.0081 | 4.8792, 4.3281, 0.2401, 0.3899 | 0.0092, 0.0075, 0.0041, 0.0071 | 0.5822, 0.5268, 0.3009, 0.4290
100, 10, 0.3, 0 | 0.0045, 0.0034, 0.0016, 0.0029 | 0.2022, 0.1811, 0.1451, 0.1737 | 0.0060, 0.0061, 0.0041, 0.0065 | 0.2336, 0.2259, 0.1893, 0.2205
100, 10, 0.001, 0.5 | 0.2197, 1.5822, 0.4956, 0.4890 | 25.3111, 103.4363, 0.5827, 0.8730 | 0.4878, 0.5078, 0.4944, 0.4903 | 1.7190, 1.2252, 0.3918, 0.7249
100, 10, 0.1, 0.5 | 0.1976, 0.1746, 0.2251, 0.0932 | 12.0014, 8.3742, 0.3148, 0.3926 | 0.0220, 0.0787, 0.2347, 0.1330 | 0.5444, 0.5185, 0.2795, 0.4074
100, 10, 0.3, 0.5 | -0.0130, -0.0018, 0.0840, 0.0122 | 0.1858, 0.1906, 0.1624, 0.1768 | 0.0060, 0.0177, 0.0924, 0.0283 | 0.2279, 0.2327, 0.1819, 0.2233
100, 10, 0.001, 0.9 | 0.9984, 1.3864, 0.8909, 0.8757 | 42.7934, 41.8263, 0.9049, 0.9690 | 0.8578, 0.8942, 0.8904, 0.8815 | 0.9046, 0.6179, 0.1968, 0.3726
100, 10, 0.1, 0.9 | -0.0327, -0.9260, 0.4044, 0.1665 | 8.7910, 85.9999, 0.4359, 0.3721 | 0.0124, 0.1349, 0.4155, 0.2278 | 0.4592, 0.5205, 0.1998, 0.3432
100, 10, 0.3, 0.9 | -0.0221, -0.0062, 0.1500, 0.0197 | 0.1780, 0.2104, 0.1930, 0.1794 | 0.0032, 0.0290, 0.1622, 0.0452 | 0.2139, 0.2359, 0.1581, 0.2201
100, 30, 0.001, 0 | -0.3916, 0.5787, -0.0020, -0.0020 | 33.4457, 59.7482, 0.1856, 0.4095 | -0.0161, 0.0020, 0.0026, -0.0027 | 1.9727, 1.3446, 0.2467, 0.5040
100, 30, 0.1, 0 | -1.5909, 0.2781, -0.0008, -0.0009 | 194.0282, 12.3847, 0.1592, 0.3046 | -0.0050, 0.0050, 0.0008, 0.0017 | 0.8120, 0.7160, 0.2093, 0.3636
100, 30, 0.3, 0 | -0.0092, -0.0023, 0.0005, 0.0012 | 5.1629, 0.3048, 0.1189, 0.1810 | 0.0017, 0.0022, 0.0018, 0.0031 | 0.2856, 0.2691, 0.1552, 0.2221
100, 30, 0.001, 0.5 | -0.3880, 0.7288, 0.4944, 0.4922 | 44.3700, 20.7709, 0.5208, 0.6101 | 0.4618, 0.4858, 0.4943, 0.4894 | 1.7442, 1.2106, 0.2120, 0.4391
100, 30, 0.1, 0.5 | 0.0990, -0.6021, 0.3619, 0.2635 | 13.4288, 79.5736, 0.3902, 0.3840 | 0.0635, 0.1698, 0.3597, 0.2695 | 0.7444, 0.7090, 0.1878, 0.3358
100, 30, 0.3, 0.5 | -0.0281, -0.0794, 0.2027, 0.0814 | 0.5345, 2.7292, 0.2315, 0.1933 | 0.0032, 0.0250, 0.2036, 0.0907 | 0.2645, 0.2860, 0.1452, 0.2185
100, 30, 0.001, 0.9 | 1.6061, 0.8467, 0.8959, 0.8931 | 25.6643, 22.1187, 0.8998, 0.9121 | 0.8889, 0.8890, 0.8960, 0.8935 | 0.9178, 0.5887, 0.1087, 0.2209
100, 30, 0.1, 0.9 | -0.1476, -0.1871, 0.6533, 0.4739 | 6.3475, 24.1535, 0.6604, 0.5116 | 0.0205, 0.2731, 0.6535, 0.4900 | 0.5207, 0.6709, 0.1248, 0.2313
100, 30, 0.3, 0.9 | -0.0263, -0.0054, 0.3644, 0.1443 | 0.1931, 3.4821, 0.3746, 0.2138 | 0.0037, 0.0439, 0.3658, 0.1593 | 0.2205, 0.3184, 0.1114, 0.1968

Note: n denotes the sample size. K denotes the number of instruments. R² denotes the theoretical
R² of the first stage regression. ρ denotes the covariance between the first stage and the
second stage error terms. All results are based on 5000 Monte Carlo runs.
Table 2: Finite Sample Comparison of IV Estimators

n, K, R², ρ | Mean (LIML, Nagar, 2SLS, Jackknife) | RMSE (LIML, Nagar, 2SLS, Jackknife) | Median (LIML, Nagar, 2SLS, Jackknife) | Interquartile Range (LIML, Nagar, 2SLS, Jackknife)
500, 5, 0.001, 0 | 4.8289, 3.8475, 0.0000, 0.0101 | 382.1974, 266.0531, 0.5562, 2.2054 | 0.0109, 0.0014, 0.0054, 0.0076 | 1.7504, 1.2623, 0.6223, 1.0081
500, 5, 0.1, 0 | 0.0001, 0.0004, 0.0004, 0.0004 | 0.1421, 0.1386, 0.1307, 0.1382 | 0.0018, 0.0015, 0.0015, 0.0006 | 0.1910, 0.1874, 0.1778, 0.1866
500, 5, 0.3, 0 | 0.0002, 0.0003, 0.0003, 0.0003 | 0.0686, 0.0682, 0.0673, 0.0682 | 0.0011, 0.0010, 0.0013, 0.0013 | 0.0948, 0.0946, 0.0930, 0.0942
500, 5, 0.001, 0.5 | -3.8430, 1.4358, 0.4489, 0.3598 | 177.6126, 87.4602, 0.6689, 2.5949 | 0.3780, 0.4259, 0.4524, 0.4277 | 1.5844, 1.0907, 0.5474, 0.8936
500, 5, 0.1, 0.5 | -0.0097, -0.0001, 0.0272, 0.0014 | 0.1428, 0.1396, 0.1306, 0.1387 | 0.0021, 0.0118, 0.0362, 0.0123 | 0.1894, 0.1869, 0.1726, 0.1863
500, 5, 0.3, 0.5 | -0.0021, 0.0003, 0.0073, 0.0004 | 0.0688, 0.0682, 0.0671, 0.0682 | 0.0008, 0.0032, 0.0100, 0.0032 | 0.0939, 0.0936, 0.0916, 0.0941
500, 5, 0.001, 0.9 | 1.2619, 0.6209, 0.8142, 0.7581 | 55.2251, 31.0760, 0.8617, 1.4129 | 0.6407, 0.7775, 0.8112, 0.7638 | 1.0935, 0.6506, 0.3077, 0.5118
500, 5, 0.1, 0.9 | -0.0166, -0.0002, 0.0490, 0.0025 | 0.1444, 0.1438, 0.1323, 0.1418 | 0.0016, 0.0186, 0.0629, 0.0200 | 0.1853, 0.1850, 0.1623, 0.1831
500, 5, 0.3, 0.9 | -0.0039, 0.0004, 0.0130, 0.0005 | 0.0691, 0.0686, 0.0673, 0.0685 | 0.0007, 0.0045, 0.0172, 0.0048 | 0.0935, 0.0924, 0.0892, 0.0923
500, 10, 0.001, 0 | 22.8209, -2.0032, -0.0164, -0.0380 | 1712.2477, 138.4924, 0.3506, 0.8410 | -0.0376, -0.0049, -0.0132, -0.0218 | 1.8932, 1.3601, 0.4402, 0.8029
500, 10, 0.1, 0 | -0.0043, -0.0038, -0.0033, -0.0037 | 0.1546, 0.1503, 0.1287, 0.1470 | -0.0034, -0.0030, -0.0030, -0.0037 | 0.1992, 0.1931, 0.1700, 0.1902
500, 10, 0.3, 0 | -0.0013, -0.0012, -0.0012, -0.0012 | 0.0707, 0.0703, 0.0677, 0.0702 | -0.0019, -0.0018, -0.0015, -0.0015 | 0.0942, 0.0932, 0.0904, 0.0934
500, 10, 0.001, 0.5 | 1.2878, 1.0645, 0.4752, 0.4469 | 59.3740, 51.4618, 0.5641, 0.8706 | 0.4096, 0.4793, 0.4740, 0.4545 | 1.7331, 1.2004, 0.3855, 0.6966
500, 10, 0.1, 0.5 | -0.0150, -0.0049, 0.0631, 0.0047 | 0.1541, 0.1544, 0.1377, 0.1473 | -0.0038, 0.0081, 0.0692, 0.0158 | 0.1962, 0.1931, 0.1596, 0.1871
500, 10, 0.3, 0.5 | -0.0036, -0.0009, 0.0174, -0.0003 | 0.0706, 0.0704, 0.0688, 0.0703 | -0.0016, 0.0015, 0.0191, 0.0019 | 0.0931, 0.0937, 0.0880, 0.0931
500, 10, 0.001, 0.9 | 0.1546, 4.8027, 0.8638, 0.8336 | 40.3894, 272.5208, 0.8800, 0.9305 | 0.7270, 0.8393, 0.8636, 0.8380 | 1.1346, 0.6598, 0.2090, 0.3854
500, 10, 0.1, 0.9 | -0.0202, -0.0043, 0.1173, 0.0126 | 0.1487, 0.1613, 0.1605, 0.1491 | -0.0031, 0.0178, 0.1279, 0.0308 | 0.1848, 0.1951, 0.1418, 0.1853
500, 10, 0.3, 0.9 | -0.0052, -0.0003, 0.0326, 0.0007 | 0.0701, 0.0711, 0.0722, 0.0707 | -0.0012, 0.0040, 0.0360, 0.0051 | 0.0924, 0.0936, 0.0848, 0.0932
500, 30, 0.001, 0 | -4.8498, 3.4596, 0.0008, 0.0005 | 431.8658, 229.8122, 0.1911, 0.3980 | 0.0209, 0.0215, 0.0012, 0.0046 | 1.9408, 1.4264, 0.2536, 0.4925
500, 30, 0.1, 0 | -0.0005, 0.0021, 0.0014, 0.0017 | 0.1909, 0.1758, 0.1100, 0.1496 | 0.0044, 0.0042, 0.0019, 0.0027 | 0.2309, 0.2223, 0.1498, 0.1984
500, 30, 0.3, 0 | 0.0009, 0.0010, 0.0010, 0.0010 | 0.0730, 0.0726, 0.0640, 0.0716 | 0.0013, 0.0013, 0.0017, 0.0012 | 0.0994, 0.0988, 0.0867, 0.0970
500, 30, 0.001, 0.5 | 0.6264, 1.1437, 0.4934, 0.4860 | 28.4382, 67.7444, 0.5197, 0.5940 | 0.4724, 0.5190, 0.4961, 0.4912 | 1.6492, 1.1580, 0.2163, 0.4126
500, 30, 0.1, 0.5 | -0.0132, -0.0073, 0.1717, 0.0583 | 0.1738, 0.1904, 0.1990, 0.1562 | 0.0031, 0.0157, 0.1742, 0.0676 | 0.2185, 0.2242, 0.1318, 0.1868
500, 30, 0.3, 0.5 | -0.0016, 0.0010, 0.0594, 0.0076 | 0.0720, 0.0737, 0.0855, 0.0722 | 0.0015, 0.0052, 0.0617, 0.0110 | 0.0979, 0.0976, 0.0814, 0.0956
500, 30, 0.001, 0.9 | 0.7776, 0.8171, 0.8853, 0.8709 | 10.0471, 69.8382, 0.8895, 0.8891 | 0.7802, 0.8710, 0.8854, 0.8709 | 1.0063, 0.6385, 0.1143, 0.2243
500, 30, 0.1, 0.9 | -0.0167, -0.0165, 0.3080, 0.1038 | 0.1517, 0.2542, 0.3176, 0.1711 | 0.0030, 0.0252, 0.3111, 0.1194 | 0.1931, 0.2398, 0.1010, 0.1685
500, 30, 0.3, 0.9 | -0.0032, 0.0011, 0.1062, 0.0129 | 0.0697, 0.0767, 0.1201, 0.0738 | 0.0017, 0.0077, 0.1092, 0.0190 | 0.0936, 0.0993, 0.0736, 0.0941

Note: n denotes the sample size. K denotes the number of instruments. R² denotes the theoretical
R² of the first stage regression. ρ denotes the covariance between the first stage and the
second stage error terms. All results are based on 5000 Monte Carlo runs.
Table 3: Finite Sample Comparison of IV Estimators

n, K, R², ρ | Mean (LIML, Nagar, 2SLS, Jackknife) | RMSE (LIML, Nagar, 2SLS, Jackknife) | Median (LIML, Nagar, 2SLS, Jackknife) | Interquartile Range (LIML, Nagar, 2SLS, Jackknife)
1000, 5, 0.001, 0 | 113.1191, -0.1273, 0.0061, 0.0275 | 7952.3330, 13.5684, 0.5161, 2.8438 | 0.0119, 0.0052, 0.0041, 0.0110 | 1.5684, 1.1697, 0.5791, 0.9193
1000, 5, 0.1, 0 | 0.0004, 0.0004, 0.0004, 0.0004 | 0.0981, 0.0971, 0.0944, 0.0971 | -0.0017, -0.0018, -0.0019, -0.0017 | 0.1273, 0.1263, 0.1229, 0.1268
1000, 5, 0.3, 0 | 0.0001, 0.0001, 0.0001, 0.0001 | 0.0486, 0.0485, 0.0481, 0.0485 | -0.0008, -0.0008, -0.0007, -0.0007 | 0.0644, 0.0643, 0.0638, 0.0643
1000, 5, 0.001, 0.5 | -0.4428, -1.1049, 0.4107, 0.3243 | 36.4286, 81.0943, 0.6134, 2.2080 | 0.2910, 0.3862, 0.4131, 0.3663 | 1.3928, 1.0468, 0.5163, 0.8099
1000, 5, 0.1, 0.5 | -0.0044, 0.0003, 0.0139, 0.0007 | 0.0980, 0.0975, 0.0945, 0.0974 | -0.0020, 0.0034, 0.0168, 0.0040 | 0.1278, 0.1277, 0.1238, 0.1278
1000, 5, 0.3, 0.5 | -0.0010, 0.0002, 0.0037, 0.0002 | 0.0486, 0.0485, 0.0481, 0.0485 | -0.0007, 0.0005, 0.0040, 0.0006 | 0.0644, 0.0647, 0.0640, 0.0646
1000, 5, 0.001, 0.9 | -1.1842, 0.8814, 0.7419, 0.6154 | 100.6250, 18.2819, 0.7995, 1.3749 | 0.4344, 0.6858, 0.7401, 0.6552 | 1.0140, 0.7044, 0.3186, 0.5006
1000, 5, 0.1, 0.9 | -0.0080, 0.0002, 0.0247, 0.0008 | 0.0981, 0.0987, 0.0950, 0.0985 | -0.0015, 0.0073, 0.0307, 0.0079 | 0.1267, 0.1284, 0.1196, 0.1277
1000, 5, 0.3, 0.9 | -0.0020, 0.0002, 0.0065, 0.0002 | 0.0485, 0.0486, 0.0481, 0.0486 | -0.0007, 0.0019, 0.0080, 0.0018 | 0.0640, 0.0649, 0.0636, 0.0650
1000, 10, 0.001, 0 | 0.0625, -0.0754, 0.0054, 0.0070 | 61.9989, 20.3426, 0.3361, 0.7572 | 0.0238, 0.0208, 0.0074, 0.0133 | 1.6553, 1.3018, 0.4189, 0.7589
1000, 10, 0.1, 0 | 0.0008, 0.0008, 0.0007, 0.0008 | 0.0999, 0.0989, 0.0919, 0.0984 | 0.0020, 0.0019, 0.0018, 0.0018 | 0.1319, 0.1309, 0.1222, 0.1307
1000, 10, 0.3, 0 | 0.0002, 0.0002, 0.0002, 0.0002 | 0.0488, 0.0487, 0.0478, 0.0487 | 0.0011, 0.0011, 0.0009, 0.0009 | 0.0649, 0.0645, 0.0636, 0.0645
1000, 10, 0.001, 0.5 | 0.2726, 0.3051, 0.4512, 0.4036 | 30.6185, 40.4696, 0.5393, 0.7832 | 0.3435, 0.4513, 0.4535, 0.4193 | 1.5017, 1.1287, 0.3605, 0.6498
1000, 10, 0.1, 0.5 | -0.0042, 0.0000, 0.0347, 0.0024 | 0.0994, 0.1002, 0.0964, 0.0993 | 0.0021, 0.0061, 0.0390, 0.0077 | 0.1301, 0.1293, 0.1172, 0.1287
1000, 10, 0.3, 0.5 | -0.0010, 0.0001, 0.0094, 0.0003 | 0.0487, 0.0489, 0.0484, 0.0488 | 0.0009, 0.0019, 0.0109, 0.0019 | 0.0646, 0.0645, 0.0626, 0.0642
1000, 10, 0.001, 0.9 | -1.2805, 0.5336, 0.8143, 0.7356 | 92.9170, 13.2870, 0.8321, 0.8510 | 0.5220, 0.7746, 0.8149, 0.7557 | 1.0928, 0.6951, 0.2117, 0.3782
1000, 10, 0.1, 0.9 | -0.0081, -0.0009, 0.0616, 0.0034 | 0.0987, 0.1033, 0.1053, 0.1013 | 0.0018, 0.0081, 0.0679, 0.0120 | 0.1272, 0.1315, 0.1103, 0.1295
1000, 10, 0.3, 0.9 | -0.0021, 0.0000, 0.0166, 0.0002 | 0.0487, 0.0493, 0.0497, 0.0492 | 0.0007, 0.0024, 0.0187, 0.0024 | 0.0639, 0.0648, 0.0617, 0.0647
1000, 30, 0.001, 0 | 1.0738, 2.1929, -0.0005, -0.0007 | 104.1162, 191.0581, 0.1869, 0.3852 | -0.0021, -0.0058, -0.0007, -0.0007 | 1.7626, 1.3678, 0.2470, 0.4760
1000, 30, 0.1, 0 | -0.0006, -0.0006, -0.0005, -0.0006 | 0.1095, 0.1082, 0.0849, 0.1031 | -0.0015, -0.0019, -0.0013, -0.0018 | 0.1409, 0.1405, 0.1120, 0.1339
1000, 30, 0.3, 0 | -0.0003, -0.0003, -0.0003, -0.0003 | 0.0496, 0.0495, 0.0464, 0.0493 | -0.0006, -0.0006, -0.0006, -0.0008 | 0.0661, 0.0659, 0.0620, 0.0656
1000, 30, 0.001, 0.5 | 7.0566, 0.7395, 0.4819, 0.4649 | 571.6937, 18.2415, 0.5082, 0.5720 | 0.3813, 0.4732, 0.4807, 0.4649 | 1.5671, 1.1775, 0.2126, 0.4049
1000, 30, 0.1, 0.5 | -0.0062, -0.0029, 0.1016, 0.0191 | 0.1067, 0.1116, 0.1292, 0.1042 | -0.0014, 0.0021, 0.1024, 0.0224 | 0.1380, 0.1476, 0.1075, 0.1368
1000, 30, 0.3, 0.5 | -0.0015, -0.0004, 0.0306, 0.0014 | 0.0493, 0.0499, 0.0548, 0.0496 | -0.0004, 0.0010, 0.0313, 0.0024 | 0.0655, 0.0662, 0.0606, 0.0660
1000, 30, 0.001, 0.9 | 1.8229, 0.1098, 0.8706, 0.8420 | 49.6333, 58.7159, 0.8749, 0.8605 | 0.6427, 0.8464, 0.8696, 0.8412 | 1.1197, 0.6737, 0.1129, 0.2164
1000, 30, 0.1, 0.9 | -0.0091, -0.0046, 0.1832, 0.0350 | 0.0998, 0.1194, 0.1953, 0.1072 | -0.0008, 0.0075, 0.1857, 0.0437 | 0.1291, 0.1547, 0.0925, 0.1354
1000, 30, 0.3, 0.9 | -0.0024, -0.0004, 0.0553, 0.0029 | 0.0483, 0.0508, 0.0701, 0.0501 | -0.0008, 0.0014, 0.0566, 0.0044 | 0.0637, 0.0688, 0.0589, 0.0676

Note: n denotes the sample size. K denotes the number of instruments. R² denotes the theoretical
R² of the first stage regression. ρ denotes the covariance between the first stage and the
second stage error terms. All results are based on 5000 Monte Carlo runs.