BAnders1
Regarding data analysis: what would it mean if you measured some quantity, say the speed of light, and obtained an average value that was 110% of the true value, while your measurement uncertainty is LESS than 10%? Does that mean something is wrong with your data or experimental method?
In general, should the measurement uncertainty create a "bubble" around the average measured value that always encloses the true value, given that your experimental method and theory are correct and that you have accounted for every possible source of error?
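As a numerical sketch of the scenario described above (the figures here are purely illustrative, not from any real experiment): take a measured mean 10% above the true speed of light with a quoted 5% uncertainty, and check whether the one-sigma interval around the mean encloses the true value, and how many standard deviations separate the two.

```python
# Illustrative numbers only: a hypothetical measurement whose mean
# comes out 10% above the true value, with a 5% quoted uncertainty.
c_true = 299_792_458.0   # m/s, the defined speed of light
mean = 1.10 * c_true     # measured average, 10% high
sigma = 0.05 * mean      # 5% (one-sigma) measurement uncertainty

# Does the one-sigma "bubble" around the mean enclose the true value?
encloses = (mean - sigma) <= c_true <= (mean + sigma)
print(encloses)          # False: the interval misses the true value

# How many standard deviations is the true value from the mean?
z = abs(mean - c_true) / sigma
print(round(z, 2))       # 1.82: roughly a 2-sigma discrepancy
```

Note that a discrepancy of this size does not automatically mean the data are wrong: for a Gaussian error distribution, the one-sigma interval is only expected to cover the true value about 68% of the time, so occasional misses are normal, though a persistent offset like this would suggest a systematic error.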