Jonas Hall
If you fit a parametrized model (e.g. y = a log(x + b) + c) to some data points, the output is typically the optimized parameters (here a, b, c) and their covariance matrix. The square roots of the diagonal elements of this matrix are the standard errors of the optimized parameters (se_a, se_b, se_c). To get a 95% confidence interval for a parameter, you then typically multiply this error by 1.96 (assuming a normal distribution), i.e. a ± 1.96 se_a. At least this is what I have found so far. But I wonder if this is the whole truth. Shouldn't you also divide by √n, the way you do when you create a confidence interval for a mean? It just seems to me that the more data you have, the better the parameter estimates should become. Also, I find that if I don't divide by √n, the values seem rather large, sometimes falling on the wrong side of 0.
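For concreteness, here is a minimal sketch of the workflow I mean, using SciPy's curve_fit on synthetic data (the model and the parameter values 2.0, 1.0, 0.5 are just made up for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model for illustration: y = a*log(x + b) + c
def model(x, a, b, c):
    return a * np.log(x + b) + c

rng = np.random.default_rng(0)
x = np.linspace(1, 10, 50)
y = model(x, 2.0, 1.0, 0.5) + rng.normal(0, 0.1, x.size)

popt, pcov = curve_fit(model, x, y, p0=[1, 1, 0])
perr = np.sqrt(np.diag(pcov))    # standard errors: square ROOTS of the diagonal
ci_low = popt - 1.96 * perr      # 95% CI, with no extra division by sqrt(n)
ci_high = popt + 1.96 * perr
```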
Or... do the entries of the covariance matrix themselves grow smaller with increasing n, and is that the reason you don't divide by √n (and the intervals are simply supposed to be that wide)?
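One way I imagine checking this empirically (again just a sketch with made-up data) would be to refit the same model at increasing sample sizes and watch what happens to the standard errors:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    return a * np.log(x + b) + c

rng = np.random.default_rng(1)
for n in [50, 500, 5000]:
    x = np.linspace(1, 10, n)
    y = model(x, 2.0, 1.0, 0.5) + rng.normal(0, 0.1, n)
    _, pcov = curve_fit(model, x, y, p0=[1, 1, 0])
    # If the covariance diagonal already shrinks with n (roughly like 1/n),
    # the standard errors shrink like 1/sqrt(n) on their own.
    print(n, np.sqrt(np.diag(pcov)))
```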
I'd be grateful if someone could make this clear to me. I have never studied statistics "properly", but I dabble in mathematical models and teach math in upper secondary school.