Exercise 1
(a light theoretical warm-up)
Question 1.
In a two-group randomized experimental design, a response $y$ is measured, and the model
$y_i = \mu_i + \varepsilon_i$
is to be fitted, where
$\mu_i = \beta_0 + \beta_1 x_i$,
the $\varepsilon_i$ are i.i.d. with zero means, and
$x_i = 0$ in Group 1, $x_i = 1$ in Group 2.
- Express the parameters $\beta_0$ and $\beta_1$ in terms of the group means $\mu_1$ and $\mu_2$.
- The same model is fitted, but now $x_i$ is defined by $x_i = -1/2$ in Group 1, $x_i = 1/2$ in Group 2. What do the parameters $\beta_0$ and $\beta_1$ represent now?
- A third definition of $x_i$ is $x_i = a$ in Group 1, $x_i = b$ in Group 2, where $a$ and $b$ are arbitrary constants. What do the parameters $\beta_0$ and $\beta_1$ represent in this general case?
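One way to check your answers numerically is to fit both codings to the same (hypothetical, noise-free) data with ordinary least squares and compare the fitted coefficients to the group means. The group means $\mu_1 = 3$ and $\mu_2 = 5$ below are invented purely for illustration; this is a sanity-check sketch, not part of the exercise.

```python
import numpy as np

# Hypothetical noise-free data: two observations per group,
# with Group 1 mean mu1 = 3 and Group 2 mean mu2 = 5.
y = np.array([3.0, 3.0, 5.0, 5.0])

def fit(x):
    """Least-squares fit of y = b0 + b1*x; returns (b0, b1)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_01 = fit(np.array([0.0, 0.0, 1.0, 1.0]))    # 0/1 coding
b_pm = fit(np.array([-0.5, -0.5, 0.5, 0.5]))  # -1/2, +1/2 coding
print(b_01, b_pm)  # the same fitted means, different parameterizations
```

With noise-free data the fit is exact, so the printed coefficients show directly how each coding maps $(\mu_1, \mu_2)$ onto $(\beta_0, \beta_1)$.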
Question 2.
For the regression model
$y_i = g(x_i) + \varepsilon_i$
with a continuous explanatory variable $x_1$ and a factor $x_2$ ($x_{2i} = 0$ for $i = 1, \ldots, n_1$; $x_{2i} = 1$ for $i = n_1+1, \ldots, n_1+n_2$), give plots that represent the following linear models, where $x_3 = x_1^2$:
- $g(x) = \beta_0 + \beta_1 x_1 + \beta_2 x_2$
- $g(x) = \beta_0 + \beta_1 x_1 + \beta_3 x_3$
- $g(x) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3$
- $g(x) = \beta_0 + \beta_1 x_1 + \beta_3 x_3 + \beta_4 x_1 x_2$
- $g(x) = \beta_0 + \beta_1 x_1 + \beta_3 x_3 + \beta_5 x_3 x_2$
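Before drawing the plots, it can help to evaluate a couple of the mean functions on a grid and look at the gap between the two groups. The coefficient values below are arbitrary, chosen only to make the shapes visible; this is an exploratory sketch, not a prescribed method.

```python
import numpy as np

# Hypothetical coefficient values, chosen only for illustration.
b0, b1, b2, b3, b4 = 1.0, 2.0, 3.0, 0.5, -1.5
x1 = np.linspace(0.0, 4.0, 50)
x3 = x1 ** 2  # the quadratic term x3 = x1^2

# Model g = b0 + b1*x1 + b2*x2: one line per group.
g_a = {x2: b0 + b1 * x1 + b2 * x2 for x2 in (0.0, 1.0)}
gap_a = g_a[1.0] - g_a[0.0]  # constant gap -> two parallel lines

# Model g = b0 + b1*x1 + b3*x3 + b4*x1*x2: a parabola per group,
# with the x1*x2 interaction changing the slope between groups.
g_d = {x2: b0 + b1 * x1 + b3 * x3 + b4 * x1 * x2 for x2 in (0.0, 1.0)}
gap_d = g_d[1.0] - g_d[0.0]  # gap grows linearly in x1
```

The constant `gap_a` versus the $x_1$-dependent `gap_d` is exactly the qualitative difference the plots should show.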
Question 3.
For the simple linear regression model
$y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$, $i = 1, \ldots, n$,
show that the diagonal entries of the projection (hat) matrix $H$ are
$h_{ii} = \frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum_{j=1}^n (x_j - \bar{x})^2}$.
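A quick way to convince yourself the claimed formula is right (before proving it) is to compare it against $\mathrm{diag}(H)$ computed directly from the design matrix. The simulated data below is hypothetical, just for the check.

```python
import numpy as np

# Numerical check of the claimed formula for h_ii on simulated data.
rng = np.random.default_rng(0)
x = rng.normal(size=10)
X = np.column_stack([np.ones_like(x), x])  # design matrix [1, x]

H = X @ np.linalg.inv(X.T @ X) @ X.T       # hat matrix H = X (X'X)^{-1} X'
xbar = x.mean()
h_formula = 1.0 / len(x) + (x - xbar) ** 2 / np.sum((x - xbar) ** 2)

print(np.allclose(np.diag(H), h_formula))  # → True
```

Note also that the formula immediately gives $\sum_i h_{ii} = 2$, matching $\mathrm{tr}(H) = p$ for a two-parameter model.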
Question 4.
Consider a standard linear model but with correlated errors:
$y = X\beta + \varepsilon$,
where the $\varepsilon_i$ have zero means and $\mathrm{Var}(\varepsilon) = V$, i.e. $\mathrm{Var}(\varepsilon_i) = V_{ii}$ and $\mathrm{Cov}(\varepsilon_i, \varepsilon_j) = V_{ij}$.
- Find the vector of weighted least squares estimators for $\beta$ that minimizes
$(y - X\beta)^T V^{-1} (y - X\beta)$.
- Is it an unbiased estimator of $\beta$?
- Find the variance-covariance matrix of the vector of weighted least squares estimators.
- What is the hat matrix $H$ for weighted least squares linear regression?
- Show that if, in addition, the $\varepsilon_i$ are normal, then the weighted least squares estimators are also maximum likelihood estimators.
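Once you have derived a candidate estimator and hat matrix, you can verify them numerically. The sketch below assumes a known, positive definite $V$ (here a simple hypothetical diagonal one) and noise-free responses, so the estimator should recover $\beta$ exactly and the hat matrix should be idempotent.

```python
import numpy as np

# Numerical check of a candidate WLS estimator and hat matrix,
# assuming V is known and positive definite (hypothetical data).
rng = np.random.default_rng(1)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=n)])
V = np.diag(rng.uniform(0.5, 2.0, size=n))  # a simple diagonal V
beta_true = np.array([1.0, -2.0])
y = X @ beta_true                            # noise-free for the check

Vinv = np.linalg.inv(V)
# Candidate minimizer of (y - Xb)' V^{-1} (y - Xb):
beta_wls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

# Candidate WLS hat matrix, mapping y to the fitted values X @ beta_wls:
H = X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)

print(np.allclose(beta_wls, beta_true), np.allclose(H @ H, H))
```

Exact recovery of $\beta$ on noise-free data and idempotence of $H$ are necessary (not sufficient) checks, so they complement rather than replace the algebraic derivation asked for above.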