Exercise 1
(a light theoretical warm-up)
Question 1.
- In a two-group randomized experimental design, a response y is
measured, and the model
y_i = μ_i + ε_i
is to be fitted, where
μ_i = β_0 + β_1 x_i,
the ε_i are i.i.d. with zero means, and
x_i = 0 in Group 1, x_i = 1 in Group 2.
Express the parameters β_0 and β_1 in terms of the group means
μ_1 and μ_2.
- The same model is fitted, but now x_i is defined by
x_i = -1/2 in Group 1, x_i = 1/2 in Group 2.
What do the parameters β_0 and β_1 represent now?
- A third definition of x_i is
x_i = a in Group 1, x_i = b in Group 2,
where a and b are arbitrary constants. What do the parameters
β_0 and β_1 represent in this general case?
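The three codings can be checked numerically. The sketch below (not part of the exercise; the group means 1 and 3 and the noise level are assumed values) fits the model by ordinary least squares under the 0/1 and the ±1/2 codings and verifies how β_0 and β_1 relate to the sample group means:

```python
import numpy as np

# Numerical sketch: under the 0/1 coding, beta0 equals the Group-1 mean
# and beta1 the difference of group means; under the -1/2, +1/2 coding,
# beta0 equals the average of the two group means and beta1 is unchanged.
rng = np.random.default_rng(0)
n = 200
group = np.repeat([0, 1], n)                    # n observations per group
mu = np.where(group == 0, 1.0, 3.0)             # assumed true means mu1=1, mu2=3
y = mu + rng.normal(0.0, 0.5, size=2 * n)

ybar1, ybar2 = y[group == 0].mean(), y[group == 1].mean()

def ols(x, y):
    """Least-squares fit of y = beta0 + beta1 * x."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b01 = ols(group.astype(float), y)               # x_i in {0, 1}
bpm = ols(group - 0.5, y)                       # x_i in {-1/2, 1/2}

print(np.allclose(b01, [ybar1, ybar2 - ybar1]))                # True
print(np.allclose(bpm, [(ybar1 + ybar2) / 2, ybar2 - ybar1]))  # True
```

These identities are exact (they hold for the sample means whatever the noise), which is a useful sanity check on the algebraic answer.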
Question 2.
For the linear regression model
y_i = g(x_i) + ε_i
with a continuous explanatory variable x_1 and a factor
x_2: x_{2i} = 0,
i = 1,...,n_1; x_{2i} = 1, i = n_1+1,...,n_1+n_2, give diagrams to represent the
following models, where x_3 = x_1^2:
- g(x) = β_0 + β_1 x_1 + β_2 x_2
- g(x) = β_0 + β_1 x_1 + β_3 x_3
- g(x) = β_0 + β_1 x_1 + β_2 x_2 + β_3 x_3
- g(x) = β_0 + β_1 x_1 + β_3 x_3 + β_4 x_1 x_2
- g(x) = β_0 + β_1 x_1 + β_3 x_3 + β_5 x_3 x_2
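To see what each diagram should look like, it helps to evaluate the between-group difference g(x_1, x_2=1) − g(x_1, x_2=0) as a function of x_1. The sketch below uses arbitrary assumed coefficient values (they are not given in the exercise) purely to reveal the shapes:

```python
import numpy as np

# Assumed coefficients, chosen only to illustrate the shapes; x3 = x1**2.
b0, b1, b2, b3, b4, b5 = 1.0, 0.5, 2.0, 0.3, 0.8, 0.4
x1 = np.linspace(-2.0, 2.0, 5)

def curves(g):
    """Evaluate the mean function in each group (x2 = 0 and x2 = 1)."""
    return g(x1, 0.0), g(x1, 1.0)

models = {
    "parallel lines":       lambda x1, x2: b0 + b1*x1 + b2*x2,
    "single parabola":      lambda x1, x2: b0 + b1*x1 + b3*x1**2,
    "parallel parabolas":   lambda x1, x2: b0 + b1*x1 + b2*x2 + b3*x1**2,
    "group-specific slope": lambda x1, x2: b0 + b1*x1 + b3*x1**2 + b4*x1*x2,
    "group-specific curve": lambda x1, x2: b0 + b1*x1 + b3*x1**2 + b5*x1**2*x2,
}

for name, g in models.items():
    g0, g1 = curves(g)
    print(name, np.round(g1 - g0, 2))   # between-group difference vs x1
```

A constant difference means parallel curves, a difference linear in x_1 means the groups differ in slope, and a difference proportional to x_1^2 means they differ in curvature; the second model has zero difference, so both groups share one parabola.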
Question 3.
For a simple linear regression model
y_i = β_0 + β_1 x_i + ε_i,
i = 1,...,n, show that the diagonal entries of the hat matrix H are
h_ii = 1/n + (x_i - x̄)^2 / Σ_j (x_j - x̄)^2,
where x̄ is the sample mean of the x_j.
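The claimed closed form is easy to check numerically before attempting the proof. This sketch (with arbitrary simulated x values) compares the diagonal of H = X(X'X)^{-1}X' against the formula:

```python
import numpy as np

# Numerical check (a sanity check, not a proof) of
# h_ii = 1/n + (x_i - x̄)^2 / Σ_j (x_j - x̄)^2  for simple linear regression.
rng = np.random.default_rng(1)
n = 12
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])           # design matrix [1, x]

H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat matrix
h_direct = np.diag(H)
h_formula = 1.0 / n + (x - x.mean())**2 / ((x - x.mean())**2).sum()

print(np.allclose(h_direct, h_formula))        # True
```

The algebraic proof follows the same route: write out (X'X)^{-1} for the 2×2 case and expand x_i'(X'X)^{-1}x_i.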
Question 4.
Consider a standard linear model but with correlated errors:
y = Xβ + ε,
where the ε_i have zero means and Var(ε) = V, i.e. Var(ε_i) = V_ii and
Cov(ε_i, ε_j) = V_ij.
- Find the vector of weighted least squares estimates of β that minimizes
(y - Xβ)' V^{-1} (y - Xβ).
- Find the variance-covariance matrix of the vector of weighted least squares
estimates.
- What is the hat matrix H for weighted least squares linear regression?
- Show that if, in addition, the ε_i are normal, then the weighted least squares
estimates are also maximum likelihood estimates.
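The standard answers to the first three parts are β̂ = (X'V^{-1}X)^{-1} X'V^{-1} y, Var(β̂) = (X'V^{-1}X)^{-1}, and H = X(X'V^{-1}X)^{-1}X'V^{-1}; the sketch below states these formulas as assumptions to verify numerically (with simulated X, y, and V), not as the derivation the exercise asks for:

```python
import numpy as np

# Assumed WLS/GLS formulas (to be derived in the exercise):
#   beta_hat = (X' V^-1 X)^-1 X' V^-1 y,   H = X (X' V^-1 X)^-1 X' V^-1.
rng = np.random.default_rng(2)
n, p = 8, 3
X = rng.normal(size=(n, p))
y = rng.normal(size=n)
A = rng.normal(size=(n, n))
V = A @ A.T + n * np.eye(n)                 # symmetric positive definite V

Vinv = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
H = X @ np.linalg.inv(X.T @ Vinv @ X) @ X.T @ Vinv

print(np.allclose(H @ y, X @ beta))         # H maps y to the fitted values
print(np.allclose(H @ H, H))                # H is idempotent, like the OLS hat matrix
```

Note that H is idempotent but, unlike the ordinary least squares case, not symmetric in general; it is an oblique rather than orthogonal projection.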