Author: unmolk (UJ)
Board: NTU-Exam
Title: [Exam] 110-2 郭汉豪 Topics in Econometrics, Midterm
Time: Thu Jun 9 02:43:50 2022
Course name: Topics in Economics and Econometrics
Course type: Elective, Department and Graduate Institute of Economics
Instructor: 郭汉豪
College: College of Social Sciences
Department: Department of Economics
Exam date (Y.M.D): 111.04.13
Time limit (minutes): 180
Exam content:
Some equations are written in TeX syntax.
1. Ordinary and Generalized Least Squares (OLS and GLS) (60 points)
Suppose we have the following linear model:
y_i = x_i'\beta + e_i.
The scalar y_i is the dependent (endogenous) variable. The scalar e_i is the
unobserved exogenous variable (or the regression error). The k*1 column x_i is
a vector of independent (exogenous) variables. The k*1 column \beta is a
vector of the parameters of interest. The cross-sectional index i denotes the
i-th observation, where i=1,...,n; n is the sample size.
Y is the n*1 column whose i-th entry is y_i. X is the n*k matrix whose i-th row
is x_i'. E is the n*1 column whose i-th entry is e_i. The weighted sum of
squared residuals (WSSR) is defined by
WSSR(\hat{\beta})\equiv(Y-X\hat{\beta})'\hat{W}(Y-X\hat{\beta}),
where \hat{W} is the weighting matrix, which is symmetric and positive
definite. Also, \hat{W} converges in probability to W, which is also symmetric
and positive definite.
1.1 GLS Estimator (15 points)
Derive the GLS estimator by minimizing the WSSR. Show that the GLS sampling
error is equal to
\hat{\beta}-\beta = (X'\hat{W}X)^{-1}X'\hat{W}E.
Assume that E(e_i|X)=0. Prove that the GLS estimator is unbiased (in the sense
of taking the expectation conditional on all x_i's).
1.2 OLS Asymptotics (15 points)
For this and the next part of the question, assume \hat{W} and W are equal to
the identity matrix, so that the GLS estimator is the OLS estimator.
Assume that the observations are independently and identically distributed
(iid) over the cross-sectional index i. Also, E(x_ie_i)=0. Prove that the OLS
estimator is consistent and asymptotically normal. Then derive its asymptotic
covariance matrix. Please write your argument clearly.
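A sketch of the standard argument (assuming finite second moments, with Q\equiv E(x_ix_i') invertible and \Omega\equiv E(e_i^2x_ix_i') finite):

```latex
\hat{\beta}-\beta
  =\Big(\frac{1}{n}\sum_{i=1}^{n}x_ix_i'\Big)^{-1}\frac{1}{n}\sum_{i=1}^{n}x_ie_i
  \xrightarrow{p} Q^{-1}\cdot 0 = 0
  \quad\text{(LLN on both averages),}
\qquad
\sqrt{n}(\hat{\beta}-\beta)
  =\Big(\frac{1}{n}\sum_{i=1}^{n}x_ix_i'\Big)^{-1}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}x_ie_i
  \xrightarrow{d} N\big(0,\,Q^{-1}\Omega Q^{-1}\big)
```

by the CLT applied to \frac{1}{\sqrt{n}}\sum_{i}x_ie_i together with Slutsky's theorem; the asymptotic covariance matrix is the sandwich Q^{-1}\Omega Q^{-1}, which reduces to \sigma^2 Q^{-1} under conditional homoskedasticity.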
1.3 OLS Prediction Error (30 points)
Assume that E(e_i|X)=0, E(e_i^2|X)=\sigma^2, and E(e_ie_j|X)=0 for i\neq j.
Define P \equiv X(X'X)^{-1}X' and M \equiv I_n-X(X'X)^{-1}X', where I_n is the
n*n identity matrix. Note that P and M are symmetric.
First, show that P'P=P and M'M=M. Second, show that the expected within-sample
prediction error, i.e. the expectation of the minimized SSR, is (n-k)\sigma^2.
Third, show that the expected out-of-sample prediction error
E[\sum_{i=1}^{n}(y_i^{out}-x_i'\hat{\beta})^2]
= E[(Y^{out}-X\hat{\beta})'(Y^{out}-X\hat{\beta})]
is equal to (n+k)\sigma^2. Here y_i^{out} is the out-of-sample dependent
variable whose corresponding independent-variable column is x_i. That is,
y_i^{out} = x_i'\beta+e_i^{out}, where e_i^{out} and e_i are uncorrelated.
Y^{out} is the n*1 column whose i-th entry is y_i^{out}.
Fourth, discuss how the within-sample and out-of-sample prediction errors
depend on the number of independent variables, k. Which one would be a better
model-selection or variable-selection criterion?
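The degrees-of-freedom counts in this part come from tr(P)=k and tr(M)=n-k; a quick numerical sketch (simulated X, dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 30, 4
X = rng.normal(size=(n, k))

# Projection matrices from part 1.3.
P = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - P

print(np.allclose(P @ P, P))  # True: P is symmetric idempotent
print(np.allclose(M @ M, M))  # True: M is symmetric idempotent
# E[SSR] = sigma^2 tr(M) = (n-k) sigma^2, while the expected
# out-of-sample error is sigma^2 (n + tr(P)) = (n+k) sigma^2.
print(round(np.trace(P)), round(np.trace(M)))  # 4 26
```

The gap between the two traces is why the in-sample SSR falls as k grows while the out-of-sample error rises: the in-sample criterion rewards overfitting.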
2. Generalized Method of Moments (GMM) (40 points)
Suppose we have the following linear model:
y_i = z_i'\delta+e_i,
where i=1,...,n. The scalar y_i is the dependent (endogenous) variable. The
scalar e_i is the unobserved exogenous variable (or the regression error). The
l*1 column z_i is a vector of independent variables (which are potentially
endogenous). The l*1 column \delta is a vector of the parameters of interest.
Suppose x_i is a k*1 vector of instruments (exogenous variables) such that
E(g_i) = 0
where g_i\equiv x_ie_i.
Denote \sigma_{xy}\equiv E(x_iy_i),
\Sigma_{xz}\equiv E(x_iz_i'),
s_{xy}\equiv\frac{1}{n}\sum_{i=1}^{n}x_iy_i, and
S_{xz}\equiv\frac{1}{n}\sum_{i=1}^{n}x_iz_i'.
You may use the following notation (but it may not be necessary). Y is the n*1
column whose i-th entry is y_i. Z is the n*l matrix whose i-th row is z_i'. X
is the n*k matrix whose i-th row is x_i'. E is the n*1 column whose i-th entry
is e_i.
2.1 GMM Estimator (10 points)
Define g(w_i;\hat{\delta})\equiv x_i(y_i-z_i'\hat{\delta}), where
w_i=(y_i,x_i,z_i), and
g_n(\hat{\delta}) = \frac{1}{n}\sum_{i=1}^{n}g(w_i;\hat{\delta}).
First, derive the GMM estimator by minimizing
g_n(\hat{\delta})'\hat{W}g_n(\hat{\delta}).
The k*k matrix \hat{W} is symmetric and positive definite. \hat{W} converges in
probability to W, which is also symmetric and positive definite. Second, write
down the GMM sampling error \hat{\delta}-\delta.
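A numerical sketch of the resulting formulas (simulated data, arbitrary dimensions; the first-order condition gives \hat{\delta}=(S_{xz}'\hat{W}S_{xz})^{-1}S_{xz}'\hat{W}s_{xy}):

```python
import numpy as np

rng = np.random.default_rng(2)
n, l, k = 200, 2, 3        # l regressors, k >= l instruments
Z = rng.normal(size=(n, l))
X = Z @ rng.normal(size=(l, k)) + rng.normal(size=(n, k))  # instruments correlated with Z
delta = np.array([1.0, -0.5])
E = rng.normal(size=n)
Y = Z @ delta + E

S_xz = X.T @ Z / n
s_xy = X.T @ Y / n
W = np.eye(k)              # any symmetric positive-definite k*k matrix

# GMM estimator: minimizer of g_n(d)' W g_n(d), where g_n(d) = s_xy - S_xz d.
G = S_xz.T @ W
delta_hat = np.linalg.solve(G @ S_xz, G @ s_xy)

# Sampling-error identity:
# delta_hat - delta = (S_xz' W S_xz)^{-1} S_xz' W g_n(delta).
g_bar = X.T @ E / n
print(np.allclose(delta_hat - delta, np.linalg.solve(G @ S_xz, G @ g_bar)))  # True
```

Since s_xy = S_xz \delta + g_n(\delta), the sampling error is again linear in the moment vector evaluated at the true \delta.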
2.2 GMM Asymptotics (20 points)
Assume that the observations are independently and identically distributed
(iid) over the cross-sectional index i. Argue that the GMM estimator is
consistent and asymptotically normal. Then derive its asymptotic covariance
matrix. Please write your argument clearly.
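The same template as in part 1.2 yields the GMM asymptotic covariance (writing \Sigma_{xz}\equiv E(x_iz_i') and S\equiv E(g_ig_i')):

```latex
\sqrt{n}(\hat{\delta}-\delta)
  =(S_{xz}'\hat{W}S_{xz})^{-1}S_{xz}'\hat{W}\,\sqrt{n}\,g_n(\delta)
  \xrightarrow{d}
  N\big(0,\;(\Sigma_{xz}'W\Sigma_{xz})^{-1}\Sigma_{xz}'WSW\Sigma_{xz}
           (\Sigma_{xz}'W\Sigma_{xz})^{-1}\big)
```

since S_{xz}\xrightarrow{p}\Sigma_{xz}, \hat{W}\xrightarrow{p}W, and \sqrt{n}\,g_n(\delta)\xrightarrow{d}N(0,S) by the CLT; choosing W=S^{-1} gives the efficient GMM covariance (\Sigma_{xz}'S^{-1}\Sigma_{xz})^{-1}.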
2.3 Social Interactions Models (10 points)
Consider the following simple linear social interactions model:
Y = aAY+bV+cAV+E.
Y is the n*1 column of dependent (endogenous) variables. V is the n*1 column of
independent (exogenous) variables. E is the n*1 column of regression errors. a,
b, and c are the parameters of interest. A is an observed n*n network matrix.
Since the column AY is endogenous, we will need instruments to consistently
estimate the parameters. Please suggest how to construct valid instruments
for AY and then how to apply the GMM estimator to consistently estimate the
parameters. Please write down your reasons clearly.
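One standard answer (a sketch, not the exam's prescribed solution): the reduced form Y=(I-aA)^{-1}(bV+cAV+E) expresses AY through AV, A^2V, A^3V, ..., so higher powers of A applied to the exogenous V are valid instruments for AY. Assuming for illustration a randomly generated, row-normalized A:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
A = (rng.random((n, n)) < 0.2).astype(float)   # random directed network
np.fill_diagonal(A, 0.0)
row = A.sum(axis=1, keepdims=True)
A = np.divide(A, row, out=np.zeros_like(A), where=row > 0)  # row-normalize

V = rng.normal(size=n)
# Regressors: [AY, V, AV]; instruments: [V, AV, A^2 V, A^3 V].
# V and AV instrument themselves; A^2 V and A^3 V instrument AY.
X_inst = np.column_stack([V, A @ V, A @ A @ V, A @ A @ A @ V])
print(X_inst.shape)  # (20, 4)
```

With these instruments, the model fits the GMM setup of part 2.1: take z_i as the i-th row of [AY, V, AV] and x_i as the i-th row of X_inst, so that E(x_ie_i)=0 follows from the exogeneity of V and the observability of A.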
--
※ Posted from: PTT BBS (ptt.cc), from: 140.112.73.222 (Taiwan)
※ Article URL: https://webptt.com/cn.aspx?n=bbs/NTU-Exam/M.1654713833.A.E36.html