OLS Estimator, derivation sigmahat(beta0hat)

In summary: the variance of the OLS intercept estimator follows from the decomposition var(β̂0) = var(Ȳ - β̂1·X̄) = var(Ȳ) + var(β̂1·X̄) - 2cov(Ȳ, β̂1·X̄). Here j indexes the estimated coefficient; in our case j = 0 because we are looking at the intercept. The first term, var(Ȳ), is the variance of the sample mean of the dependent variable; the second term, var(β̂1·X̄) = X̄²·var(β̂1), is the variance of the slope estimator scaled by X̄²; the third term is the covariance between Ȳ and β̂1·X̄. Applying this decomposition to the specific case yields the formula for the variance of β̂0.
  • #1
chrisoutwrigh
Good day,
in the lectures on empirical economic research at my university, we got to the topic of linear regression with one regressor. There I came across:

[itex] \hat{\sigma}_{\hat{\beta}_0}^2 = \frac{1}{n} \cdot \frac{\operatorname{var}\!\left( \left[ 1 - \frac{\mu_X}{E(X_i^2)} \cdot X_i \right] u_i \right)}{\left[ E\!\left( \left[ 1 - \frac{\mu_X}{E(X_i^2)} \cdot X_i \right]^2 \right) \right]^2} [/itex]

For heteroskedasticity-robust standard errors, the sample version of this formula becomes:

[itex] \hat{\sigma}_{\hat{\beta}_0}^2 = \frac{1}{n} \cdot \frac{\frac{1}{n-2} \sum_{i=1}^{n} \left[ 1 - \frac{\bar{X}}{\frac{1}{n} \sum_{j=1}^{n} X_j^2} \cdot X_i \right]^2 \hat{u}_i^2}{\left( \frac{1}{n} \sum_{i=1}^{n} \left[ 1 - \frac{\bar{X}}{\frac{1}{n} \sum_{j=1}^{n} X_j^2} \cdot X_i \right]^2 \right)^2} [/itex]
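To make sure I am reading the formula right, I tried a small numerical check (a minimal sketch, assuming numpy is available; the data and all variable names are made up by me). If I did the algebra right, the lecture's formula should coincide with the usual HC1 sandwich estimator computed directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(2.0, 1.0, n)
u = rng.normal(0.0, 0.5 + np.abs(x), n)   # heteroskedastic errors
y = 1.5 + 0.8 * x + u

# OLS fit: design matrix [1, X]
Z = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(Z, y, rcond=None)[0]
uhat = y - Z @ beta

# The lecture's formula with H_i = 1 - (Xbar / mean(X^2)) * X_i
H = 1.0 - (x.mean() / np.mean(x**2)) * x
var_b0_lecture = (np.sum(H**2 * uhat**2) / (n - 2)) / (np.mean(H**2)**2) / n

# HC1 sandwich estimator; element [0, 0] is the intercept variance
ZZinv = np.linalg.inv(Z.T @ Z)
meat = Z.T @ (uhat[:, None]**2 * Z)
var_b0_sandwich = (n / (n - 2)) * (ZZinv @ meat @ ZZinv)[0, 0]

print(var_b0_lecture, var_b0_sandwich)   # the two numbers agree
```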

I tried to get it to this form:

[itex] \hat{\sigma}_{\hat{\beta}_0}^2 = \operatorname{var}\!\left( \hat{\beta}_0 \right) = \operatorname{var}\!\left( \bar{Y} - \hat{\beta}_1 \bar{X} \right) = \operatorname{var}\!\left( \bar{Y} \right) + \operatorname{var}\!\left( \hat{\beta}_1 \bar{X} \right) - 2 \operatorname{cov}\!\left( \bar{Y}, \hat{\beta}_1 \bar{X} \right) \\ \qquad \hat{\sigma}_{\hat{\beta}_0}^2 = 0 + \bar{X}^2 \cdot \left( \frac{1}{n} \cdot \frac{\frac{1}{n-2} \sum_{i=1}^{n} \left( X_i - \bar{X} \right)^2 \hat{u}_i^2}{\left[ \frac{1}{n} \sum_{i=1}^{n} \left( X_i - \bar{X} \right)^2 \right]^2} \right) - 2 \operatorname{cov}\!\left( \hat{\beta}_0 + \bar{X} \hat{\beta}_1, \hat{\beta}_1 \bar{X} \right) \\ \hat{\sigma}_{\hat{\beta}_0}^2 = \bar{X}^2 \cdot \left( \frac{1}{n} \cdot \frac{\frac{1}{n-2} \sum_{i=1}^{n} \left( X_i - \bar{X} \right)^2 \hat{u}_i^2}{\left[ \frac{1}{n} \sum_{i=1}^{n} \left( X_i - \bar{X} \right)^2 \right]^2} \right) + 0 \qquad [/itex]


so far no luck... [the covariance term again contains [itex] \hat{\beta}_0 [/itex] and [itex] \hat{\beta}_1 [/itex], so how do I resolve it? Is it zero?]
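To see whether that covariance term is actually zero, I also tried a small simulation (again just a sketch with numpy and made-up numbers): holding X fixed and redrawing the errors, the sample covariance between Ȳ and β̂1·X̄ across replications stays clearly away from zero when the errors are heteroskedastic, while it shrinks toward zero when they are homoskedastic.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 50, 100_000
x = rng.normal(2.0, 1.0, n)                 # design held fixed across draws
xbar = x.mean()
sxx = np.sum((x - xbar)**2)

def one_draw(scale):
    """One sample: return (Ybar, beta1hat * Xbar)."""
    u = rng.normal(0.0, scale, n)
    y = 1.5 + 0.8 * x + u
    b1 = np.sum((x - xbar) * (y - y.mean())) / sxx
    return y.mean(), b1 * xbar

het = np.array([one_draw(0.5 + np.abs(x)) for _ in range(reps)])
hom = np.array([one_draw(1.0) for _ in range(reps)])

# sample covariance between Ybar and beta1hat * Xbar across replications
print(np.cov(het.T)[0, 1])   # clearly nonzero: heteroskedastic errors
print(np.cov(hom.T)[0, 1])   # close to zero: homoskedastic errors
```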

and where does the shorthand term [itex] \hat{H}_i = 1 - \frac{\bar{X}}{\frac{1}{n} \sum_{j=1}^{n} X_j^2} \cdot X_i [/itex] in the first equation come from?
I would be glad to see the complete derivation ;-)
 
  • #2


Hello,

Thank you for reaching out with your question. I understand that you are having trouble with the derivation of the heteroskedasticity-robust standard error of the intercept in linear regression with one regressor. This can be a complex topic, so let me try to break it down for you.

First, let's review some basic concepts. In linear regression we estimate the relationship between a dependent variable (Y) and an independent variable (X). The model is usually written as Y = β0 + β1X + u, where β0 and β1 are the intercept and slope coefficients, respectively, and u is the error term. The estimates are chosen to minimize the sum of squared residuals (SSR), which yields the best-fitting line.
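As a concrete illustration (a minimal sketch, assuming numpy; the data here are simulated and the names are hypothetical), the best-fitting line can be computed directly from the least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(2.0, 1.0, 100)
y = 1.5 + 0.8 * x + rng.normal(0.0, 1.0, 100)   # Y = beta0 + beta1*X + u

# least-squares solution: the (beta0, beta1) minimizing the SSR
Z = np.column_stack([np.ones_like(x), x])        # design matrix [1, X]
(b0, b1), ssr, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(b0, b1)   # close to the true values 1.5 and 0.8
```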

Now, let's focus on the formula you provided:

[itex] \hat{\sigma}_{\hat{\beta}_0}^2 = \frac{1}{n} \cdot \frac{\operatorname{var}\!\left( \left[ 1 - \frac{\mu_X}{E(X_i^2)} \cdot X_i \right] u_i \right)}{\left[ E\!\left( \left[ 1 - \frac{\mu_X}{E(X_i^2)} \cdot X_i \right]^2 \right) \right]^2} [/itex]

This formula is the estimator of the variance of the intercept coefficient, β̂0. It is derived from the general decomposition of the variance of an estimated coefficient, which for the intercept reads:

[itex] \hat{\sigma}_{\hat{\beta}_j}^2 = \operatorname{var}\!\left( \hat{\beta}_j \right), \qquad \operatorname{var}\!\left( \hat{\beta}_0 \right) = \operatorname{var}\!\left( \bar{Y} - \hat{\beta}_1 \bar{X} \right) = \operatorname{var}\!\left( \bar{Y} \right) + \operatorname{var}\!\left( \hat{\beta}_1 \bar{X} \right) - 2 \operatorname{cov}\!\left( \bar{Y}, \hat{\beta}_1 \bar{X} \right) [/itex]

Here j indexes the estimated coefficient; in our case j = 0 because we are looking at the intercept. The first term, var(Ȳ), is the variance of the sample mean of the dependent variable; the second term, var(β̂1·X̄) = X̄²·var(β̂1), is the variance of the slope estimator scaled by X̄²; the third term is the covariance between Ȳ and β̂1·X̄. Note that var(Ȳ) is not zero, and, conditional on the regressors, the covariance term vanishes under homoskedasticity but not in general under heteroskedasticity, which is why this decomposition is awkward to push through directly.
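Now, let's apply this to our specific case. A cleaner route, which also answers where [itex] \hat{H}_i [/itex] comes from, is the Stock-Watson-style rewriting (a sketch; your lecture may phrase the steps differently). From the definition of [itex] \hat{H}_i [/itex] one checks the sample identities [itex] \sum_i \hat{H}_i X_i = 0 [/itex] and [itex] \sum_i \hat{H}_i = \sum_i \hat{H}_i^2 [/itex], and the OLS normal equations give [itex] \sum_i \hat{H}_i \hat{u}_i = 0 [/itex]. Together these let you write the intercept estimator as a no-intercept regression of [itex] Y_i [/itex] on [itex] \hat{H}_i [/itex]:

[itex] \hat{\beta}_0 = \frac{\sum_{i=1}^{n} \hat{H}_i Y_i}{\sum_{i=1}^{n} \hat{H}_i^2}, \qquad \hat{\beta}_0 - \beta_0 = \frac{\frac{1}{n} \sum_{i=1}^{n} \hat{H}_i u_i}{\frac{1}{n} \sum_{i=1}^{n} \hat{H}_i^2} [/itex]

Taking the variance of this ratio (the denominator converges to E(H_i²)) gives exactly the first formula, with var(H_i·u_i) in the numerator and [E(H_i²)]² in the denominator; replacing population moments by sample analogues, with the 1/(n-2) degrees-of-freedom correction, gives the robust version. In other words, H_i is nothing mysterious: it is the residual from regressing the constant regressor on X_i, and because of that the troublesome covariance term never appears.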
 

Related to OLS Estimator, derivation sigmahat(beta0hat)

1. What is an OLS estimator?

An OLS (ordinary least squares) estimator is a statistical method used to estimate the unknown parameters of a linear regression model. It finds the best-fitting line through a set of data points by minimizing the sum of squared residuals.

2. How is the OLS estimator derived?

The OLS estimator is derived using the method of least squares, which involves minimizing the sum of squared residuals between the observed data points and the predicted values from the linear regression model. This is done by taking the derivatives of the sum of squared residuals with respect to the parameters of the model and setting them equal to zero (the normal equations).
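For the one-regressor case, solving the normal equations gives the familiar closed forms β̂1 = Σ(Xi - X̄)(Yi - Ȳ)/Σ(Xi - X̄)² and β̂0 = Ȳ - β̂1·X̄. A minimal sketch (assuming numpy, with simulated data) checking them against a library fit:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(2.0, 1.0, 100)
y = 1.5 + 0.8 * x + rng.normal(0.0, 1.0, 100)

# closed-form solution of the normal equations (one regressor)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()

# cross-check against numpy's polynomial least-squares fit
b1_ref, b0_ref = np.polyfit(x, y, 1)    # returns [slope, intercept]
print(b0, b0_ref, b1, b1_ref)           # the pairs agree
```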

3. What is the significance of sigma hat (β0 hat) in the OLS estimator?

Sigma hat (β0 hat) is the standard error of the estimated intercept coefficient in OLS. It measures the variability of the intercept estimate across different samples and is used to calculate confidence intervals and test statistics for the intercept.

4. How is sigmahat(beta0hat) calculated in the OLS estimator?

Sigmahat(beta0hat) is calculated as the square root of the estimated variance of the intercept estimator. Under homoskedasticity, that variance equals the estimated error variance (the sum of squared residuals divided by the degrees of freedom, where the degrees of freedom is the number of observations minus the number of parameters in the model, here n - 2) multiplied by ΣXi²/(n·Σ(Xi - X̄)²). Under heteroskedasticity, the robust formula discussed in the thread is used instead.
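A minimal sketch of that computation (assuming numpy, with simulated data), showing two algebraically equivalent forms of the classical variance and an approximate 95% confidence interval for the intercept:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x = rng.normal(2.0, 1.0, n)
y = 1.5 + 0.8 * x + rng.normal(0.0, 1.0, n)

# OLS fit and residuals
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()
resid = y - b0 - b1 * x

s2 = np.sum(resid**2) / (n - 2)          # error variance: SSR / degrees of freedom
sxx = np.sum((x - x.mean())**2)

# two equivalent forms of var(b0) under homoskedasticity
var_b0_a = s2 * np.sum(x**2) / (n * sxx)
var_b0_b = s2 * (1.0 / n + x.mean()**2 / sxx)
se_b0 = np.sqrt(var_b0_a)
print(var_b0_a, var_b0_b)                      # identical
print(b0 - 1.96 * se_b0, b0 + 1.96 * se_b0)    # approximate 95% CI
```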

5. What are the assumptions of the OLS estimator?

The OLS estimator relies on several assumptions, including linearity, independence of errors, constant variance of errors (homoskedasticity), and normality of errors. When these assumptions hold, the OLS estimator is unbiased and efficient; the heteroskedasticity-robust standard errors discussed above are used precisely when the constant-variance assumption fails.
