That sounds reasonable, thank you. :smile:
If I understand it correctly, the instrument's error could be a constant offset (a systematic error), not a different error each time we measure. Then the samples would all lie in a smaller region, which means the variance would be smaller, right?
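A minimal sketch of that idea (all numbers below are made up for illustration, and the non-instrument scatter is assumed Gaussian): a constant offset shifts every reading by the same amount and leaves the spread untouched, whereas an instrument error drawn fresh on every reading adds to the spread.

```python
import numpy as np

rng = np.random.default_rng(0)

true_length = 10.0    # hypothetical true value
random_noise = 0.05   # scatter from everything except the instrument
instrument_err = 0.1  # instrument's stated error

n = 100_000

# Case 1: instrument error is a constant (systematic) offset.
# Every reading is shifted equally, so the spread is unchanged.
systematic = true_length + instrument_err + rng.normal(0, random_noise, n)

# Case 2: instrument error is different on every reading (random).
random_each = (true_length
               + rng.normal(0, instrument_err, n)
               + rng.normal(0, random_noise, n))

print(systematic.std(ddof=1))   # ~0.05: only the random scatter shows up
print(random_each.std(ddof=1))  # ~0.11: sqrt(0.1^2 + 0.05^2), larger spread
```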
Thanks for the quick response, but it does not answer my question at all. I read all of it, but it is not what I wanted to know.
I am asking whether I should use the error of the arithmetic mean (the standard error) to determine the uncertainty in my measurement, or if I should take the instrument...
Hi.
Let's say I have some measured data. For example, I measured the length of an object, and the measurement was repeated 5 times. The instrument I used has an error whose value I know.
My options are either to just go with the instrument error (probably not, right?)...
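For concreteness, here is a minimal sketch of the two quantities being compared, with made-up readings; combining the two contributions in quadrature is one common convention (when they are treated as independent), not the only one:

```python
import numpy as np

# Five hypothetical length readings (cm); the instrument error is illustrative.
readings = np.array([12.3, 12.5, 12.4, 12.6, 12.4])
instrument_error = 0.1  # stated error of the instrument

mean = readings.mean()

# Standard error of the mean: sample standard deviation / sqrt(N)
sem = readings.std(ddof=1) / np.sqrt(len(readings))

# One common convention: treat the two as independent contributions
# and combine them in quadrature.
combined = np.hypot(sem, instrument_error)

print(f"mean = {mean:.3f}, SEM = {sem:.3f}, combined = {combined:.3f}")
```

Whether combining in quadrature is appropriate depends on whether the instrument error is treated as random or as a constant systematic offset, which is exactly the distinction discussed above.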