Formula Sheet
The Rules of Summation
(All sums run over i = 1, …, n unless shown otherwise.)

Σ xi = x1 + x2 + ⋯ + xn
Σ a = na
Σ axi = a Σ xi
Σ (xi + yi) = Σ xi + Σ yi
Σ (axi + byi) = a Σ xi + b Σ yi
Σ (a + bxi) = na + b Σ xi
x̄ = (Σ xi)/n = (x1 + x2 + ⋯ + xn)/n
Σ (xi − x̄) = 0
Σ (i=1..2) Σ (j=1..3) f(xi, yj) = Σ (i=1..2) [f(xi, y1) + f(xi, y2) + f(xi, y3)]
  = f(x1, y1) + f(x1, y2) + f(x1, y3) + f(x2, y1) + f(x2, y2) + f(x2, y3)
Expected Values & Variances
E(X) = x1 f(x1) + x2 f(x2) + ⋯ + xn f(xn) = Σ xi f(xi) = Σx x f(x)
E[g(X)] = Σx g(x) f(x)
E[g1(X) + g2(X)] = Σx [g1(x) + g2(x)] f(x) = Σx g1(x) f(x) + Σx g2(x) f(x) = E[g1(X)] + E[g2(X)]
E(c) = c
E(cX) = c E(X)
E(a + cX) = a + c E(X)
var(X) = σ² = E[X − E(X)]² = E(X²) − [E(X)]²
var(a + cX) = E[(a + cX) − E(a + cX)]² = c² var(X)
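As a numerical check of these definitions, here is a minimal Python sketch; the discrete pmf below is hypothetical:

```python
# E(X), E(X^2), and var(X) computed directly from the definitions
# E(X) = sum x f(x) and var(X) = E(X^2) - [E(X)]^2.
values = [1.0, 2.0, 3.0]        # hypothetical support of X
probs = [0.2, 0.5, 0.3]         # hypothetical pmf f(x); must sum to 1

ex = sum(x * f for x, f in zip(values, probs))        # E(X)
ex2 = sum(x ** 2 * f for x, f in zip(values, probs))  # E(X^2)
var_x = ex2 - ex ** 2                                 # var(X)
```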
Marginal and Conditional Distributions
f(x) = Σy f(x, y) for each value X can take
f(y) = Σx f(x, y) for each value Y can take
f(x|y) = P(X = x | Y = y) = f(x, y)/f(y)
If X and Y are independent random variables, then
f(x, y) = f(x) f(y) for each and every pair of values x and y. The converse is also true.
If X and Y are independent random variables, then the conditional probability density function of X given that Y = y is
f(x|y) = f(x, y)/f(y) = f(x) f(y)/f(y) = f(x)
for each and every pair of values x and y. The converse is also true.
Expectations, Variances & Covariances
cov(X, Y) = E[(X − E(X))(Y − E(Y))] = Σx Σy (x − E(X))(y − E(Y)) f(x, y)
ρ = cov(X, Y) / √(var(X) var(Y))
E(c1X + c2Y) = c1 E(X) + c2 E(Y)
E(X + Y) = E(X) + E(Y)
var(aX + bY + cZ) = a² var(X) + b² var(Y) + c² var(Z) + 2ab cov(X, Y) + 2ac cov(X, Z) + 2bc cov(Y, Z)
If X, Y, and Z are independent, or uncorrelated, random variables, then the covariance terms are zero and:
var(aX + bY + cZ) = a² var(X) + b² var(Y) + c² var(Z)
Normal Probabilities
If X ~ N(μ, σ²), then Z = (X − μ)/σ ~ N(0, 1)
If X ~ N(μ, σ²) and a is a constant, then
P(X ≥ a) = P(Z ≥ (a − μ)/σ)
If X ~ N(μ, σ²) and a and b are constants, then
P(a ≤ X ≤ b) = P((a − μ)/σ ≤ Z ≤ (b − μ)/σ)
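The standardization above can be sketched in Python using the standard normal CDF built from the error function; μ, σ, a, and b below are hypothetical values:

```python
import math

def phi(z):
    """Standard normal CDF, Phi(z), via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 10.0, 2.0   # hypothetical X ~ N(mu, sigma^2)
a, b = 9.0, 13.0

p_ge_a = 1.0 - phi((a - mu) / sigma)                      # P(X >= a)
p_a_to_b = phi((b - mu) / sigma) - phi((a - mu) / sigma)  # P(a <= X <= b)
```

The values agree with Table 1: (a − μ)/σ = −0.5 and (b − μ)/σ = 1.5 give P(X ≥ a) ≈ 0.6915 and P(a ≤ X ≤ b) ≈ 0.6247.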
Assumptions of the Simple Linear Regression
Model
SR1 The value of y, for each value of x, is y = β1 + β2x + e
SR2 The average value of the random error e is E(e) = 0, since we assume that E(y) = β1 + β2x
SR3 The variance of the random error e is var(e) = σ² = var(y)
SR4 The covariance between any pair of random errors, ei and ej, is cov(ei, ej) = cov(yi, yj) = 0
SR5 The variable x is not random and must take at least two different values.
SR6 (optional) The values of e are normally distributed about their mean: e ~ N(0, σ²)
Least Squares Estimation
If b1 and b2 are the least squares estimates, then
ŷi = b1 + b2xi
êi = yi − ŷi = yi − b1 − b2xi
The Normal Equations
N b1 + (Σxi) b2 = Σyi
(Σxi) b1 + (Σxi²) b2 = Σxiyi
Least Squares Estimators
b2 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)²
b1 = ȳ − b2x̄
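The estimator formulas above can be applied directly; a minimal Python sketch on five hypothetical data points:

```python
# Least squares estimates from the deviation-from-mean formulas.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical data
ys = [2.0, 4.0, 5.0, 4.0, 5.0]
n = len(xs)

xbar = sum(xs) / n
ybar = sum(ys) / n

# b2 = sum (xi - xbar)(yi - ybar) / sum (xi - xbar)^2
b2 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
    (x - xbar) ** 2 for x in xs)
b1 = ybar - b2 * xbar                       # b1 = ybar - b2 * xbar

yhat = [b1 + b2 * x for x in xs]            # fitted values
ehat = [y - f for y, f in zip(ys, yhat)]    # residuals; they sum to zero
```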
-
Elasticity
η = (percentage change in y)/(percentage change in x) = (Δy/y)/(Δx/x) = (Δy/Δx) · (x/y)
η = (ΔE(y)/E(y))/(Δx/x) = (ΔE(y)/Δx) · (x/E(y)) = β2 · x/E(y)

Least Squares Expressions Useful for Theory
b2 = β2 + Σwiei
wi = (xi − x̄) / Σ(xi − x̄)²
Σwi = 0,  Σwixi = 1,  Σwi² = 1/Σ(xi − x̄)²
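The three weight identities can be verified numerically; a small sketch on hypothetical x values:

```python
# Check the identities sum(wi) = 0, sum(wi*xi) = 1, sum(wi^2) = 1/Sxx
# for the weights wi = (xi - xbar)/Sxx.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical data
xbar = sum(xs) / len(xs)
sxx = sum((x - xbar) ** 2 for x in xs)
w = [(x - xbar) / sxx for x in xs]

sum_w = sum(w)                                 # should be 0
sum_wx = sum(wi * x for wi, x in zip(w, xs))   # should be 1
sum_w2 = sum(wi ** 2 for wi in w)              # should be 1/sxx
```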
Properties of the Least Squares Estimators
var(b1) = σ² [Σxi² / (N Σ(xi − x̄)²)]
var(b2) = σ² / Σ(xi − x̄)²
cov(b1, b2) = σ² [−x̄ / Σ(xi − x̄)²]
Gauss-Markov Theorem: Under assumptions SR1-SR5 of the linear regression model, the estimators b1 and b2 have the smallest variance of all linear and unbiased estimators of β1 and β2. They are the Best Linear Unbiased Estimators (BLUE) of β1 and β2.
If we make the normality assumption, assumption SR6, about the error term, then the least squares estimators are normally distributed:
b1 ~ N(β1, σ² Σxi² / (N Σ(xi − x̄)²)),  b2 ~ N(β2, σ² / Σ(xi − x̄)²)
Estimated Error Variance
σ̂² = Σêi² / (N − 2)

Estimator Standard Errors
se(b1) = √v̂ar(b1),  se(b2) = √v̂ar(b2)
t-distribution
If assumptions SR1-SR6 of the simple linear regression model hold, then
t = (bk − βk)/se(bk) ~ t(N−2),  k = 1, 2
Interval Estimates
P[b2 − tc se(b2) ≤ β2 ≤ b2 + tc se(b2)] = 1 − α

Hypothesis Testing
Components of Hypothesis Tests
1. A null hypothesis, H0
2. An alternative hypothesis, H1
3. A test statistic
4. A rejection region
5. A conclusion
If the null hypothesis H0: β2 = c is true, then
t = (b2 − c)/se(b2) ~ t(N−2)
Rejection rule for a two-tail test: If the value of the test statistic falls in the rejection region, either tail of the t-distribution, then we reject the null hypothesis and accept the alternative.
Type I error: The null hypothesis is true and we decide to reject it.
Type II error: The null hypothesis is false and we decide not to reject it.
p-value rejection rule: When the p-value of a hypothesis test is smaller than the chosen value of α, the test procedure leads to rejection of the null hypothesis.
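A t statistic and interval estimate for β2 can be sketched in a few lines; the data are hypothetical, and the critical value is t(0.975, 3) from Table 2:

```python
import math

# t statistic for H0: beta2 = 0 and a 95% interval estimate for beta2.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical data
ys = [2.0, 4.0, 5.0, 4.0, 5.0]
n = len(xs)

xbar, ybar = sum(xs) / n, sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
b2 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
b1 = ybar - b2 * xbar

sse = sum((y - b1 - b2 * x) ** 2 for x, y in zip(xs, ys))
sigma2_hat = sse / (n - 2)            # estimated error variance
se_b2 = math.sqrt(sigma2_hat / sxx)   # se(b2)

t_stat = (b2 - 0.0) / se_b2           # ~ t(N-2) under H0: beta2 = 0
tc = 3.182                            # t(0.975, 3), from Table 2
interval = (b2 - tc * se_b2, b2 + tc * se_b2)
```

Here |t| falls short of tc, so H0: β2 = 0 is not rejected at the 5% level, and the interval estimate contains zero.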
Prediction
y0 = β1 + β2x0 + e0,  ŷ0 = b1 + b2x0,  f = ŷ0 − y0
v̂ar(f) = σ̂² [1 + 1/N + (x0 − x̄)² / Σ(xi − x̄)²],  se(f) = √v̂ar(f)
A (1 − α) × 100% confidence interval, or prediction interval, for y0:
ŷ0 ± tc se(f)

Goodness of Fit
Σ(yi − ȳ)² = Σ(ŷi − ȳ)² + Σêi²
SST = SSR + SSE
R² = SSR/SST = 1 − SSE/SST = [corr(y, ŷ)]²
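The SST = SSR + SSE decomposition can be checked numerically; a minimal sketch on hypothetical data:

```python
# Sum-of-squares decomposition and R^2 from the definitions above.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical data
ys = [2.0, 4.0, 5.0, 4.0, 5.0]
n = len(xs)

xbar, ybar = sum(xs) / n, sum(ys) / n
b2 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
    (x - xbar) ** 2 for x in xs)
b1 = ybar - b2 * xbar
yhat = [b1 + b2 * x for x in xs]

sst = sum((y - ybar) ** 2 for y in ys)             # total
ssr = sum((f - ybar) ** 2 for f in yhat)           # explained
sse = sum((y - f) ** 2 for y, f in zip(ys, yhat))  # residual
r2 = ssr / sst                                     # equals 1 - sse/sst
```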
Log-Linear Model
ln(y) = β1 + β2x + e,  ln(y)^ = b1 + b2x
100 · b2 ≈ % change in y given a one-unit change in x
ŷn = exp(b1 + b2x)
ŷc = exp(b1 + b2x) exp(σ̂²/2)
Prediction interval:
[exp(ln(y)^ − tc se(f)),  exp(ln(y)^ + tc se(f))]
Generalized goodness-of-fit measure: R²g = [corr(y, ŷn)]²
Assumptions of theMultiple RegressionModel
MR1 yi = β1 + β2xi2 + ⋯ + βKxiK + ei
MR2 E(yi) = β1 + β2xi2 + ⋯ + βKxiK ⇔ E(ei) = 0
MR3 var(yi) = var(ei) = σ²
MR4 cov(yi, yj) = cov(ei, ej) = 0
MR5 The values of xik are not random and are not exact linear functions of the other explanatory variables.
MR6 yi ~ N(β1 + β2xi2 + ⋯ + βKxiK, σ²) ⇔ ei ~ N(0, σ²)
Least Squares Estimates in MR Model
Least squares estimates b1, b2, …, bK minimize
S(b1, b2, …, bK) = Σ(yi − b1 − b2xi2 − ⋯ − bKxiK)²
Estimated Error Variance and Estimator
Standard Errors
σ̂² = Σêi² / (N − K)
se(bk) = √v̂ar(bk)
-
Hypothesis Tests and Interval Estimates for Single Parameters
Use the t-distribution: t = (bk − βk)/se(bk) ~ t(N−K)
t-test for More than One Parameter
H0: β2 + cβ3 = a
When H0 is true, t = (b2 + cb3 − a)/se(b2 + cb3) ~ t(N−K)
se(b2 + cb3) = √[v̂ar(b2) + c² v̂ar(b3) + 2c côv(b2, b3)]
Joint F-tests
To test J joint hypotheses,
F = [(SSER − SSEU)/J] / [SSEU/(N − K)]
To test the overall significance of the model the null and alternative
hypotheses and F statistic are
H0: β2 = 0, β3 = 0, …, βK = 0
H1: at least one of the βk is nonzero
F = [(SST − SSE)/(K − 1)] / [SSE/(N − K)]
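The overall-significance F statistic can be computed from the sums of squares; for a simple regression (K = 2) it equals the square of the t statistic for H0: β2 = 0, as the sketch below checks on hypothetical data:

```python
import math

# Overall-significance F for a simple regression; F = t^2 when K = 2.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical data
ys = [2.0, 4.0, 5.0, 4.0, 5.0]
n, k = len(xs), 2

xbar, ybar = sum(xs) / n, sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
b2 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
b1 = ybar - b2 * xbar

sst = sum((y - ybar) ** 2 for y in ys)
sse = sum((y - b1 - b2 * x) ** 2 for x, y in zip(xs, ys))

f_stat = ((sst - sse) / (k - 1)) / (sse / (n - k))
t_stat = b2 / math.sqrt((sse / (n - k)) / sxx)   # t for H0: beta2 = 0
```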
RESET: A Specification Test
yi = β1 + β2xi2 + β3xi3 + ei,  ŷi = b1 + b2xi2 + b3xi3
yi = β1 + β2xi2 + β3xi3 + γ1ŷi² + ei,  H0: γ1 = 0
yi = β1 + β2xi2 + β3xi3 + γ1ŷi² + γ2ŷi³ + ei,  H0: γ1 = γ2 = 0

Model Selection
AIC = ln(SSE/N) + 2K/N
SC = ln(SSE/N) + K ln(N)/N

Collinearity and Omitted Variables
yi = β1 + β2xi2 + β3xi3 + ei
var(b2) = σ² / [(1 − r23²) Σ(xi2 − x̄2)²]
When x3 is omitted:  bias(b2*) = E(b2*) − β2 = β3 · côv(x2, x3)/v̂ar(x2)
Heteroskedasticity
var(yi) = var(ei) = σi²
General variance function:
σi² = exp(α1 + α2zi2 + ⋯ + αSziS)
Breusch-Pagan and White tests for H0: α2 = α3 = ⋯ = αS = 0:
When H0 is true, χ² = N × R² ~ χ²(S−1)
Goldfeld-Quandt test for H0: σM² = σR² versus H1: σM² ≠ σR²:
When H0 is true, F = σ̂M²/σ̂R² ~ F(NM−KM, NR−KR)
Transformed model for var(ei) = σi² = σ²xi:
yi/√xi = β1(1/√xi) + β2(xi/√xi) + ei/√xi
Estimating the variance function:
ln(êi²) = ln(σi²) + vi = α1 + α2zi2 + ⋯ + αSziS + vi
Grouped data:
var(ei) = σi² = σM² for i = 1, 2, …, NM;  σR² for i = 1, 2, …, NR
Transformed model for feasible generalized least squares:
yi/√σ̂i = β1(1/√σ̂i) + β2(xi/√σ̂i) + ei/√σ̂i
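The Breusch-Pagan χ² = N × R² statistic can be sketched with a hand-rolled auxiliary regression of the squared residuals on z; the data are hypothetical and z is simply taken to be x here:

```python
# Breusch-Pagan-style statistic: chi2 = N * R^2 from the auxiliary
# regression of squared residuals on z (here z = x; data hypothetical).
def simple_ols(z, w):
    """Intercept and slope of the least squares fit of w on z."""
    n = len(z)
    zbar, wbar = sum(z) / n, sum(w) / n
    slope = sum((a - zbar) * (b - wbar) for a, b in zip(z, w)) / sum(
        (a - zbar) ** 2 for a in z)
    return wbar - slope * zbar, slope

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 5.0, 4.0, 5.0]
b1, b2 = 2.2, 0.6   # least squares estimates for these data
e2 = [(y - b1 - b2 * x) ** 2 for x, y in zip(xs, ys)]   # squared residuals

g1, g2 = simple_ols(xs, e2)            # auxiliary regression: e^2 on z
fit = [g1 + g2 * x for x in xs]
e2bar = sum(e2) / len(e2)
r2_aux = sum((f - e2bar) ** 2 for f in fit) / sum(
    (v - e2bar) ** 2 for v in e2)

chi2_stat = len(xs) * r2_aux           # compare with chi2(S-1) critical value
```

With S = 2, the statistic (about 1.39) is compared with the χ²(1) 95th percentile 3.841 from Table 3, so homoskedasticity is not rejected for these data.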
Regression with Stationary Time Series Variables
Finite distributed lag model
yt = α + β0xt + β1xt−1 + β2xt−2 + ⋯ + βqxt−q + vt
Correlogram:
rk = Σ(yt − ȳ)(yt−k − ȳ) / Σ(yt − ȳ)²
For H0: ρk = 0,  z = √T rk ~ N(0, 1)
LM test:
yt = β1 + β2xt + ρêt−1 + v̂t   (test H0: ρ = 0 with a t-test)
êt = γ1 + γ2xt + ρêt−1 + v̂t   (test using LM = T × R²)
AR(1) error:  yt = β1 + β2xt + et,  et = ρet−1 + vt
Nonlinear least squares estimation:
yt = β1(1 − ρ) + β2xt + ρyt−1 − β2ρxt−1 + vt
ARDL(p, q) model:
yt = δ + δ0xt + δ1xt−1 + ⋯ + δqxt−q + θ1yt−1 + ⋯ + θpyt−p + vt
AR(p) forecasting model:
yt = δ + θ1yt−1 + θ2yt−2 + ⋯ + θpyt−p + vt
Exponential smoothing:  ŷt = αyt−1 + (1 − α)ŷt−1
Multiplier analysis:
δ0 + δ1L + δ2L² + ⋯ + δqL^q = (1 − θ1L − θ2L² − ⋯ − θpL^p)(β0 + β1L + β2L² + ⋯)
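The exponential smoothing recursion above can be sketched in a few lines; the series and starting forecast are hypothetical:

```python
# Exponential smoothing: yhat_t = alpha*y_{t-1} + (1 - alpha)*yhat_{t-1}.
def exp_smooth(y, alpha, start):
    """Return the sequence of smoothed forecasts, beginning at `start`."""
    yhat = [start]
    for t in range(len(y)):
        yhat.append(alpha * y[t] + (1 - alpha) * yhat[-1])
    return yhat

# hypothetical series; yhat[0] = 10.0 is the assumed initial forecast
forecasts = exp_smooth([10.0, 12.0, 11.0], alpha=0.5, start=10.0)
```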
Unit Roots and Cointegration
Unit Root Test for Stationarity: Null hypothesis: H0: γ = 0
Dickey-Fuller Test 1 (no constant and no trend):  Δyt = γyt−1 + vt
Dickey-Fuller Test 2 (with constant but no trend):  Δyt = α + γyt−1 + vt
Dickey-Fuller Test 3 (with constant and with trend):  Δyt = α + γyt−1 + λt + vt
Augmented Dickey-Fuller Tests:
Δyt = α + γyt−1 + Σ as Δyt−s + vt  (s = 1, …, m)
Test for cointegration:  Δêt = γêt−1 + vt
Random walk:  yt = yt−1 + vt
Random walk with drift:  yt = α + yt−1 + vt
Random walk model with drift and time trend:  yt = α + δt + yt−1 + vt

Panel Data
Pooled least squares regression:
yit = β1 + β2x2it + β3x3it + eit
Cluster-robust standard errors:  cov(eit, eis) = ψts
Fixed effects model:
yit = β1i + β2x2it + β3x3it + eit   (β1i not random)
yit − ȳi = β2(x2it − x̄2i) + β3(x3it − x̄3i) + (eit − ēi)
Random effects model:
yit = β1i + β2x2it + β3x3it + eit   (β1i = β̄1 + ui, ui random)
yit − αȳi = β̄1(1 − α) + β2(x2it − αx̄2i) + β3(x3it − αx̄3i) + vit*
α = 1 − σe/√(Tσu² + σe²)
Hausman test:
t = (bFE,k − bRE,k) / [v̂ar(bFE,k) − v̂ar(bRE,k)]^(1/2)
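The fixed-effects within transformation can be sketched directly: demean y and x within each individual, then apply least squares to the demeaned data. The two-individual panel below is hypothetical:

```python
# Fixed effects via the within (demeaning) transformation.
# panel: individual id -> list of (x, y) observations over time
panel = {
    "A": [(1.0, 2.0), (2.0, 4.0), (3.0, 5.0)],   # hypothetical data
    "B": [(2.0, 1.0), (3.0, 3.0), (4.0, 4.0)],
}

dx, dy = [], []
for obs in panel.values():
    xbar_i = sum(x for x, _ in obs) / len(obs)   # individual mean of x
    ybar_i = sum(y for _, y in obs) / len(obs)   # individual mean of y
    for x, y in obs:
        dx.append(x - xbar_i)                    # x2it - x2bar_i
        dy.append(y - ybar_i)                    # yit  - ybar_i

# least squares on the demeaned data (no intercept needed)
b2_fe = sum(a * b for a, b in zip(dx, dy)) / sum(a * a for a in dx)
```

Demeaning removes the individual effects β1i, so only the slope is estimated from the within variation.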
-
Additional Equations
Jarque-Bera Test
JB = (N/6) [S² + (K − 3)²/4]   (1)

Variance of a function g(b1, b2) of the least squares estimators:
var(ĝ) = (∂g/∂β1)² var(b1) + (∂g/∂β2)² var(b2) + 2 (∂g/∂β1)(∂g/∂β2) cov(b1, b2)   (2)

Differences-in-differences estimator:
DD = (ȳTreatment,After − ȳControl,After) − (ȳTreatment,Before − ȳControl,Before)   (3)

SST = (N − 1) sy²   (4)
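The Jarque-Bera statistic in (1) can be computed directly from sample moments; a minimal sketch, applied to a short hypothetical residual series:

```python
def jarque_bera(e):
    """JB = (N/6) * [S^2 + (K - 3)^2 / 4], with S the sample skewness
    and K the sample kurtosis of the series e."""
    n = len(e)
    mean = sum(e) / n
    m2 = sum((x - mean) ** 2 for x in e) / n
    m3 = sum((x - mean) ** 3 for x in e) / n
    m4 = sum((x - mean) ** 4 for x in e) / n
    s = m3 / m2 ** 1.5          # skewness
    k = m4 / m2 ** 2            # kurtosis
    return (n / 6.0) * (s ** 2 + (k - 3.0) ** 2 / 4.0)

jb = jarque_bera([-1.0, 0.0, 1.0])   # hypothetical residuals
```

Under normality JB is compared with a χ²(2) critical value from Table 3.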
-
Appendix D

Standard Normal Distribution
Example: P(Z ≤ 1.73) = Φ(1.73) = 0.9582
[Figure: standard normal density with shaded area Φ(z); z axis from −4 to 4]
Table 1  Cumulative Probabilities for the Standard Normal Distribution, Φ(z) = P(Z ≤ z)

z    0.00   0.01   0.02   0.03   0.04   0.05   0.06   0.07   0.08   0.09
0.0  0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
0.1  0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
0.2  0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
0.3  0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
0.4  0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
0.5  0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
0.6  0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
0.7  0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
0.8  0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
0.9  0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
1.0  0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
1.1  0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
1.2  0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
1.3  0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
1.4  0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
1.5  0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
1.6  0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
1.7  0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
1.8  0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
1.9  0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
2.0  0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
2.1  0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
2.2  0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
2.3  0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
2.4  0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
2.5  0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
2.6  0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
2.7  0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
2.8  0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
2.9  0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986
3.0  0.9987 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990
Source: This table was generated using the SAS® function PROBNORM.
-
Example:
P(t(30) ≤ 1.697) = 0.95
P(t(30) > 1.697) = 0.05
[Figure: t density curve; axis from −4 to 4]
Table 2  Percentiles of the t-distribution
df   t(0.90, df)  t(0.95, df)  t(0.975, df)  t(0.99, df)  t(0.995, df)
1    3.078  6.314  12.706  31.821  63.657
2 1.886 2.920 4.303 6.965 9.925
3 1.638 2.353 3.182 4.541 5.841
4 1.533 2.132 2.776 3.747 4.604
5 1.476 2.015 2.571 3.365 4.032
6 1.440 1.943 2.447 3.143 3.707
7 1.415 1.895 2.365 2.998 3.499
8 1.397 1.860 2.306 2.896 3.355
9 1.383 1.833 2.262 2.821 3.250
10 1.372 1.812 2.228 2.764 3.169
11 1.363 1.796 2.201 2.718 3.106
12 1.356 1.782 2.179 2.681 3.055
13 1.350 1.771 2.160 2.650 3.012
14 1.345 1.761 2.145 2.624 2.977
15 1.341 1.753 2.131 2.602 2.947
16 1.337 1.746 2.120 2.583 2.921
17 1.333 1.740 2.110 2.567 2.898
18 1.330 1.734 2.101 2.552 2.878
19 1.328 1.729 2.093 2.539 2.861
20 1.325 1.725 2.086 2.528 2.845
21 1.323 1.721 2.080 2.518 2.831
22 1.321 1.717 2.074 2.508 2.819
23 1.319 1.714 2.069 2.500 2.807
24 1.318 1.711 2.064 2.492 2.797
25 1.316 1.708 2.060 2.485 2.787
26 1.315 1.706 2.056 2.479 2.779
27 1.314 1.703 2.052 2.473 2.771
28 1.313 1.701 2.048 2.467 2.763
29 1.311 1.699 2.045 2.462 2.756
30 1.310 1.697 2.042 2.457 2.750
31 1.309 1.696 2.040 2.453 2.744
32 1.309 1.694 2.037 2.449 2.738
33 1.308 1.692 2.035 2.445 2.733
34 1.307 1.691 2.032 2.441 2.728
35 1.306 1.690 2.030 2.438 2.724
36 1.306 1.688 2.028 2.434 2.719
37 1.305 1.687 2.026 2.431 2.715
38 1.304 1.686 2.024 2.429 2.712
39 1.304 1.685 2.023 2.426 2.708
40 1.303 1.684 2.021 2.423 2.704
50 1.299 1.676 2.009 2.403 2.678
∞    1.282  1.645  1.960  2.326  2.576
Source: This table was generated using the SAS® function TINV.
-
Example:
P(χ²(4) ≤ 9.488) = 0.95
P(χ²(4) > 9.488) = 0.05
[Figure: χ²(4) density curve; axis from 0 to 20]
Table 3  Percentiles of the Chi-square Distribution
df   χ²(0.90, df)  χ²(0.95, df)  χ²(0.975, df)  χ²(0.99, df)  χ²(0.995, df)
1    2.706  3.841  5.024  6.635  7.879
2 4.605 5.991 7.378 9.210 10.597
3 6.251 7.815 9.348 11.345 12.838
4 7.779 9.488 11.143 13.277 14.860
5 9.236 11.070 12.833 15.086 16.750
6 10.645 12.592 14.449 16.812 18.548
7 12.017 14.067 16.013 18.475 20.278
8 13.362 15.507 17.535 20.090 21.955
9 14.684 16.919 19.023 21.666 23.589
10 15.987 18.307 20.483 23.209 25.188
11 17.275 19.675 21.920 24.725 26.757
12 18.549 21.026 23.337 26.217 28.300
13 19.812 22.362 24.736 27.688 29.819
14 21.064 23.685 26.119 29.141 31.319
15 22.307 24.996 27.488 30.578 32.801
16 23.542 26.296 28.845 32.000 34.267
17 24.769 27.587 30.191 33.409 35.718
18 25.989 28.869 31.526 34.805 37.156
19 27.204 30.144 32.852 36.191 38.582
20 28.412 31.410 34.170 37.566 39.997
21 29.615 32.671 35.479 38.932 41.401
22 30.813 33.924 36.781 40.289 42.796
23 32.007 35.172 38.076 41.638 44.181
24 33.196 36.415 39.364 42.980 45.559
25 34.382 37.652 40.646 44.314 46.928
26 35.563 38.885 41.923 45.642 48.290
27 36.741 40.113 43.195 46.963 49.645
28 37.916 41.337 44.461 48.278 50.993
29 39.087 42.557 45.722 49.588 52.336
30 40.256 43.773 46.979 50.892 53.672
35 46.059 49.802 53.203 57.342 60.275
40 51.805 55.758 59.342 63.691 66.766
50 63.167 67.505 71.420 76.154 79.490
60 74.397 79.082 83.298 88.379 91.952
70 85.527 90.531 95.023 100.425 104.215
80 96.578 101.879 106.629 112.329 116.321
90 107.565 113.145 118.136 124.116 128.299
100 118.498 124.342 129.561 135.807 140.169
110 129.385 135.480 140.917 147.414 151.948
120 140.233 146.567 152.211 158.950 163.648
Source: This table was generated using the SAS® function CINV.
-
Example:
P(F(4,30) ≤ 2.69) = 0.95
P(F(4,30) > 2.69) = 0.05
[Figure: F density curve; axis from 0 to 4]

Table 4  95th Percentile for the F-distribution

v2\v1  1      2      3      4      5      6      7      8      9      10     12     15     20     30     60     ∞
1      161.45 199.50 215.71 224.58 230.16 233.99 236.77 238.88 240.54 241.88 243.91 245.95 248.01 250.10 252.20 254.31
2 18.51 19.00 19.16 19.25 19.30 19.33 19.35 19.37 19.38 19.40 19.41 19.43 19.45 19.46 19.48 19.50
3 10.13 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81 8.79 8.74 8.70 8.66 8.62 8.57 8.53
4 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00 5.96 5.91 5.86 5.80 5.75 5.69 5.63
5 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77 4.74 4.68 4.62 4.56 4.50 4.43 4.36
6 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10 4.06 4.00 3.94 3.87 3.81 3.74 3.67
7 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68 3.64 3.57 3.51 3.44 3.38 3.30 3.23
8 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39 3.35 3.28 3.22 3.15 3.08 3.01 2.93
9 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18 3.14 3.07 3.01 2.94 2.86 2.79 2.71
10 4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02 2.98 2.91 2.85 2.77 2.70 2.62 2.54
15 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59 2.54 2.48 2.40 2.33 2.25 2.16 2.07
20 4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39 2.35 2.28 2.20 2.12 2.04 1.95 1.84
25 4.24 3.39 2.99 2.76 2.60 2.49 2.40 2.34 2.28 2.24 2.16 2.09 2.01 1.92 1.82 1.71
30 4.17 3.32 2.92 2.69 2.53 2.42 2.33 2.27 2.21 2.16 2.09 2.01 1.93 1.84 1.74 1.62
35 4.12 3.27 2.87 2.64 2.49 2.37 2.29 2.22 2.16 2.11 2.04 1.96 1.88 1.79 1.68 1.56
40 4.08 3.23 2.84 2.61 2.45 2.34 2.25 2.18 2.12 2.08 2.00 1.92 1.84 1.74 1.64 1.51
45 4.06 3.20 2.81 2.58 2.42 2.31 2.22 2.15 2.10 2.05 1.97 1.89 1.81 1.71 1.60 1.47
50 4.03 3.18 2.79 2.56 2.40 2.29 2.20 2.13 2.07 2.03 1.95 1.87 1.78 1.69 1.58 1.44
60 4.00 3.15 2.76 2.53 2.37 2.25 2.17 2.10 2.04 1.99 1.92 1.84 1.75 1.65 1.53 1.39
120 3.92 3.07 2.68 2.45 2.29 2.18 2.09 2.02 1.96 1.91 1.83 1.75 1.66 1.55 1.43 1.25
∞      3.84   3.00   2.60   2.37   2.21   2.10   2.01   1.94   1.88   1.83   1.75   1.67   1.57   1.46   1.32   1.00
Source: This table was generated using the SAS® function FINV.
CoverTitle PageCopyrightBrief ContentsPrefaceContentsChapter 1 An Introduction to Econometrics1.1 Why Study Econometrics?1.2 What Is Econometrics About?1.2.1 Some Examples
1.3 The Econometric Model1.4 How Are Data Generated?1.4.1 Experimental Data1.4.2 Nonexperimental Data
1.5 Economic Data Types1.5.1 Time-Series Data1.5.2 Cross-Section Data1.5.3 Panel or Longitudinal Data
1.6 The Research Process1.7 Writing An Empirical Research Paper1.7.1 Writing a Research Proposal1.7.2 A Format for Writing a Research Report
1.8 Sources of Economic Data1.8.1 Links to Economic Data on the Internet1.8.2 Interpreting Economic Data1.8.3 Obtaining the Data
Probability PrimerLearning ObjectivesKeywordsP.1 Random VariablesP.2 Probability DistributionsP.3 Joint, Marginal and Conditional ProbabilitiesP.3.1 Marginal DistributionsP.3.2 Conditional ProbabilityP.3.3 Statistical Independence
P.4 A Digression: Summation NotationP.5 Properties of Probability DistributionsP.5.1 Expected Value of a Random VariableP.5.2 Conditional ExpectationP.5.3 Rules for Expected ValuesP.5.4 Variance of a Random VariableP.5.5 Expected Values of Several Random VariablesP.5.6 Covariance Between Two Random Variables
P.6 The Normal DistributionP.7 Exercises
Chapter 2 The Simple Linear Regression ModelLearning ObjectivesKeywords2.1 An Economic Model2.2 An Econometric Model2.2.1 Introducing the Error Term
2.3 Estimating the Regression Parameters2.3.1 The Least Squares Principle2.3.2 Estimates for the Food Expenditure Function2.3.3 Interpreting the Estimates2.3.4 Other Economic Models
2.4 Assessing the Least Squares Estimators2.4.1 The Estimator b2.4.2 The Expected Values of b and b2.4.3 Repeated Sampling2.4.4 The Variances and Covariance of b and b
2.5 The GaussMarkov Theorem2.6 The Probability Distributions of the Least Squares Estimators2.7 Estimating the Variance of the Error Term2.7.1 Estimating the Variances and Covariance of the Least Squares Estimators2.7.2 Calculations for the Food Expenditure Data2.7.3 Interpreting the Standard Errors
2.8 Estimating Nonlinear Relationships2.8.1 Quadratic Functions2.8.2 Using a Quadratic Model2.8.3 A Log-Linear Function2.8.4 Using a Log-Linear Model2.8.5 Choosing a Functional Form
2.9 Regression with Indicator Variables2.10 Exercises2.10.1 Problems2.10.2 Computer Exercises
Appendix 2A Derivation of the Least Squares EstimatesAppendix 2B Deviation from the Mean Form of bAppendix 2C b Is a Linear EstimatorAppendix 2D Derivation of Theoretical Expression for bAppendix 2E Deriving the Variance of bAppendix 2F Proof of the GaussMarkov TheoremAppendix 2G Monte Carlo Simulation2G.1 The Regression Function2G.2 The Random Error2G.3 Theoretically True Values2G.4 Creating a Sample of Data2G.5 Monte Carlo Objectives2G.6 Monte Carlo Results
Chapter 3 Interval Estimation and Hypothesis TestingLearning ObjectivesKeywords3.1 Interval Estimation3.1.1 The t-Distribution3.1.2 Obtaining Interval Estimates3.1.3 An Illustration3.1.4 The Repeated Sampling Context
3.2 Hypothesis Tests3.2.1 The Null Hypothesis3.2.2 The Alternative Hypothesis3.2.3 The Test Statistic3.2.4 The Rejection Region3.2.5 A Conclusion
3.3 Rejection Regions for Specific Alternatives3.3.1 One-Tail Tests with Alternative Greater Than (>)3.3.2 One-Tail Tests with Alternative Less Than ()C.6.3 One-Tail Tests with Alternative Less Than (