Author: unmolk (UJ)
Board: NTU-Exam
Title: [Exam] 111-1 Hon Ho Kwok (郭漢豪), Econometric Theory I, Final Exam
Time: Tue Jun 13 13:06:52 2023
Course name: Econometric Theory I B
Course type: Required course, Graduate Institute of Economics
Instructor: Hon Ho Kwok (郭漢豪)
College: College of Social Sciences
Department: Department of Economics
Exam date (ROC calendar, yyyy.mm.dd): 111.12.22
Exam duration (minutes): 180
Questions:
Note: some of the mathematical expressions are written in LaTeX syntax.
1. Constrained Least Squares (20 points)
This question is about the statistical properties of constrained least squares (CLS) estimators. Suppose we have the following linear model:
y_i = x'_i\beta + e_i
with E(x_ie_i)=0. y_i is the endogenous variable. x_i is the k*1 column of exogenous variables. \beta is the k*1 column of parameters of interest. The CLS estimator is the solution of minimizing
SSE(\beta) = \sum_{i=1}^n (y_i-x'_i\beta)^2
subject to the constraint
R'\beta = c.
R is a k*q matrix. c is a q*1 column. Define X as the n*k matrix whose i-th row
is x'_i. Define y as the n*1 column whose i-th entry is y_i.
First, derive the CLS estimator, \tilde{\beta}_{CLS}, by solving the constrained minimization problem. Recall that the ordinary least squares (OLS) estimator, \hat{\beta}_{OLS}, is (X'X)^{-1}X'y. Show that
\tilde{\beta}_{CLS} = \hat{\beta}_{OLS} - (X'X)^{-1}R[R'(X'X)^{-1}R]^{-1}(R'\hat{\beta}_{OLS}-c).
Second, verify that the CLS estimator satisfies the constraint R'\beta=c.
Third, prove that the CLS estimator is consistent and asymptotically normal.
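For reference, a minimal numerical sanity check of the stated CLS formula in Python; the sample size, dimensions, and the particular constraint (R, c) below are hypothetical choices made only for illustration.

import numpy as np

rng = np.random.default_rng(0)
n, k, q = 500, 4, 2
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, -0.5, 0.5, 0.0])
y = X @ beta_true + rng.normal(size=n)

R = np.zeros((k, q))            # k*q constraint matrix (hypothetical)
R[0, 0], R[1, 1] = 1.0, 1.0     # constrains the first two coefficients
c = np.array([1.0, -0.5])       # q*1 target values (hypothetical)

XtX_inv = np.linalg.inv(X.T @ X)
beta_ols = XtX_inv @ X.T @ y

# CLS formula as stated in the question
A = XtX_inv @ R @ np.linalg.inv(R.T @ XtX_inv @ R)
beta_cls = beta_ols - A @ (R.T @ beta_ols - c)

print(beta_ols)
print(beta_cls)
print(R.T @ beta_cls - c)       # numerically zero: the constraint is satisfied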
2. M-estimators (40 points)
The data consists of a sequence of observed vectors w_i = (y_i,x'_i)' where i=
1,...,n. The scalar y_i denotes the dependent (endogenous) variable. The column
x_i is the k*1 vector of independent (exogenous) variables. The scalar m_i=m(w_i;\theta) is a function of w_i, where \theta is the p*1 vector of parameters. The true parameter vector is denoted by \theta_0. The function
m(w_i;\theta) is twice continuously differentiable with respect to \theta for
all w_i.
An extremum estimator is an M-estimator if the criterion function is a sample
average:
Q_n(\theta) = \frac{1}{n}\sum_{i=1}^n m(w_i;\theta).
We have the following notation for first and second derivatives:
s_i = s(w_i;\theta) = \frac{\partial m(w_i;\theta)}{\partial\theta}
and
H_i = H(w_i;\theta)
= \frac{\partial^2 m(w_i;\theta)}{\partial\theta\partial\theta'}.
2.1 M-Estimator Asymptotics (20 points)
The M-estimator \hat{\theta} is the maximizer of Q_n(\theta) (assume it is unique). We have the following two assumptions. First, the M-estimator \hat{\theta} is a consistent estimator of \theta_0. Second,
\frac{1}{\sqrt{n}}\sum_{i=1}^n s(w_i;\theta_0)
converges to Normal(0,\Sigma) in distribution where \Sigma is p*p and positive
definite. Show that the M-estimator is asymptotically normal and write down the
asymptotic variance of the M-estimator.
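For reference, one standard argument is sketched below, assuming \hat{\theta} is an interior solution, a law of large numbers applies to the Hessian term, and H_0 = E[H(w_i;\theta_0)] is nonsingular. The first-order condition for the maximizer is \frac{1}{\sqrt{n}}\sum_{i=1}^n s(w_i;\hat{\theta}) = 0, and a mean-value expansion around \theta_0 gives
0 = \frac{1}{\sqrt{n}}\sum_{i=1}^n s(w_i;\theta_0) + \left[\frac{1}{n}\sum_{i=1}^n H(w_i;\bar{\theta})\right]\sqrt{n}(\hat{\theta}-\theta_0),
where \bar{\theta} lies between \hat{\theta} and \theta_0 (row by row). Solving for \sqrt{n}(\hat{\theta}-\theta_0) and applying the two assumptions suggests
\sqrt{n}(\hat{\theta}-\theta_0) \to_d Normal(0, H_0^{-1}\Sigma H_0^{-1}).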
2.2 Maximum Likelihood (20 points)
Let f = f(y_i|x_i;\theta) be the conditional density of y_i given x_i and
\theta. Then the maximum likelihood estimator of \theta_0 is a special case of
the M-estimator where m_i = \log f_i. It is assumed that f(y_i|x_i;\theta)>0
for all (y_i,x_i) and \theta, so it is legitimate to take logs of the density
function.
We consider the following linear regression model:
y_i = x'_i\beta + e_i
where the scalar e_i is normally distributed with zero mean and variance
\sigma^2. The log conditional density for observation i is
\log f(y_i|x_i;\beta,\sigma^2) = -\frac{1}{2}\log(2\pi) - \frac{1}{2}\log(\sigma^2) - \frac{(y_i-x'_i\beta)^2}{2\sigma^2}.
Derive the conditional ML estimator of the parameters and write down the asymptotic variance of the estimator. Is the conditional ML estimator of \sigma^2 unbiased?
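For reference, a small Monte Carlo sketch in Python that probes the unbiasedness question numerically; the sample size, regressors, and true parameters below are hypothetical, and the ML estimator of \sigma^2 is taken to be the average squared residual from the fitted \beta, which is what maximizing the log-likelihood above yields.

import numpy as np

rng = np.random.default_rng(1)
n, k, sigma2_true, reps = 30, 3, 2.0, 5000   # hypothetical design
beta_true = np.array([1.0, 0.5, -1.0])

est = np.empty(reps)
for r in range(reps):
    X = rng.normal(size=(n, k))
    y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2_true), size=n)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # ML/OLS estimate of beta
    resid = y - X @ beta_hat
    est[r] = resid @ resid / n                        # ML estimate of sigma^2

print(est.mean())                   # close to sigma2_true*(n-k)/n, not sigma2_true
print(sigma2_true * (n - k) / n)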
3. Generalized Method of Moments (GMM) (20 points)
The data consists of a sequence of observed vectors w_i=(y_i,x'_i,z'_i)' where i=1,...,n. The scalar y_i denotes the dependent (endogenous) variable. The column x_i is the k*1 vector of independent variables (which are potentially endogenous). The column z_i denotes the l*1 vector of instruments (which are exogenous). The l*1 vector g_i=g(w_i;\theta) is a function of w_i, where \theta is the p*1 vector of parameters. The true parameter vector is denoted by \theta_0.
We have the following l moment conditions
E(g_i) = 0.
The GMM criterion function is
Q_n(\theta) = \frac{1}{2} g_n(\theta)'\hat{W}g_n(\theta)
where
g_n(\theta) = \frac{1}{n}\sum_{i=1}^n g(w_i;\theta),
and \hat{W} is an l*l matrix which is symmetric and positive definite. The weight matrix \hat{W} converges in probability to W, which is also l*l and positive definite. The covariance matrix of g_i is denoted by S = E(g_ig'_i).
In this question, we consider the following linear model:
y_i = x'_i\beta + e_i,
and the moment conditions are
E(g_i) = E(z_ie_i) = 0
where the scalar e_i is the unobserved exogenous variable (or the regression error) and i=1,...,n.
You may use the following notation. Y is the n*1 column whose i-th entry is y_i. X is the n*k matrix whose i-th row is x'_i. Z is the n*l matrix whose i-th row is z'_i. E is the n*1 column whose i-th entry is e_i. The population and sample covariance matrices are denoted as follows:
\sigma_{xy} = E(x_iy_i)
\Sigma_{xz} = E(x_iz'_i)
s_{xy} = \frac{1}{n}\sum_{i=1}^n x_iy_i, and
S_{xz} = \frac{1}{n}\sum_{i=1}^n x_iz'_i.
First, please derive the GMM sampling error \hat{\theta} - \theta_0. Second, prove that the GMM estimator is consistent. Third, prove that the GMM estimator is asymptotically normal and derive the asymptotic covariance matrix of the efficient GMM estimator. Please write your arguments clearly.
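For reference, a minimal two-step GMM sketch in Python for the linear model above; the data-generating process, the instruments, and the use of (Z'Z/n)^{-1} as the first-step weight matrix are hypothetical choices made only for illustration.

import numpy as np

rng = np.random.default_rng(2)
n, k, l = 1000, 1, 2                       # hypothetical dimensions (l >= k)
beta_true = np.array([1.0])

# Simulate one endogenous regressor with two exogenous instruments.
Z = rng.normal(size=(n, l))
u = rng.normal(size=n)
e = 0.5 * u + rng.normal(size=n)           # error correlated with x through u
x = Z @ np.array([1.0, 1.0]) + u
X = x[:, None]
y = X @ beta_true + e

def gmm(W):
    # beta_hat = (S_xz W S_zx)^{-1} S_xz W s_zy, with S_xz = X'Z/n and s_zy = Z'y/n
    Sxz = X.T @ Z / n
    szy = Z.T @ y / n
    return np.linalg.solve(Sxz @ W @ Sxz.T, Sxz @ W @ szy)

# Step 1: weight matrix (Z'Z/n)^{-1} (the 2SLS choice).
b1 = gmm(np.linalg.inv(Z.T @ Z / n))
# Step 2: efficient weight matrix S_hat^{-1}, with S_hat = (1/n) sum g_i g_i' and g_i = z_i e_i.
g = Z * (y - X @ b1)[:, None]
b2 = gmm(np.linalg.inv(g.T @ g / n))
print(b1, b2)                              # both close to beta_true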
4. Minimum Distance Estimators (20 points)
Suppose \theta is the p*1 vector of the parameters of interest. \theta_0 denotes the true parameter column. The s*1 vector \pi is a column of the reduced-form parameters and we know that \pi=h(\theta), where h is a known continuously differentiable function. h is not a function of the sample size n. \pi_0=h(\theta_0) denotes the true value of \pi. \hat{\pi} is a consistent estimator of \pi_0, and we know that \sqrt{n}(\hat{\pi}-\pi_0) converges in distribution to N(0,\Xi_0). \hat{\Xi} is a consistent estimator of \Xi_0.
The criterion function for the minimum distance estimator is
Q(\theta,\hat{W}) = (\hat{\pi}-h(\theta))'\hat{W}(\hat{\pi}-h(\theta)),
where \hat{W} is an s*s positive definite matrix. \hat{W} converges in probability to W, which is also positive definite. The minimum distance estimator is the \theta which minimizes Q.
First, write down the first order conditions for the minimum distance estimator
and state the identification condition. Is it necessary that s>=p? Explain your
answer.
Second, prove that the minimum distance estimator is asymptotically normal. Write down the asymptotic covariance matrix for a general weight matrix W. What is the optimal choice of the weight matrix W?
What is the probability limit of the minimized objective function Q? What is the asymptotic distribution of the minimized nQ(\hat{\theta},\hat{W}) with the optimal weight matrix?
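For reference, a sketch of the first-order condition, assuming h is differentiable at the optimum and writing G(\theta) = \partial h(\theta)/\partial\theta' for the s*p Jacobian:
G(\hat{\theta})'\hat{W}(\hat{\pi} - h(\hat{\theta})) = 0.
A common local identification condition is that G(\theta_0) has full column rank p, which is possible only when s >= p.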