Measuring Agreement Between Experimental Values of Constant k

In summary, the two experimentally determined values of k do not agree within their stated uncertainties, but it is possible to say that one experimental value agrees with the other to within some number of standard deviations. To compare a measured value with an expected value, Bayesian methods can also be used.
  • #1
StillAnotherDave
Homework Statement
How to compare the agreement between two values.
Relevant Equations
N/A
Hello folks,

Not strictly a homework question, but thought this might be the best place for it.

If I determine (experimentally) two values for some constant k, is there a simple, meaningful way to state their (percentage) agreement? For example, imagine I carry out two different experiments and obtain k = 0.070 ± 0.001 from one and k = 0.074 ± 0.002 from the other. How can I state that these two values agree with each other to a certain degree? Is it possible to say that one experimental value agrees within a certain number of standard deviations of the other?

Similarly, if I were to experimentally obtain a value of k = 0.070 ± 0.001 and the expected value is k = 0.074, how do I best compare the measured and expected values?

I imagine this is a pretty obvious and simple question, but would appreciate some help.
 
  • #2
This type of problem could be approached with Bayesian methods:

Given some experimental results, what is the probability that the true value is in a certain range?

This works nicely when you only have a limited number of experiments, in contrast to the frequentist approach, which assumes you can repeat the experiment a near-infinite number of times to converge on the true value.

You do not know the true value, but you have some prior observations (with some uncertainty). This may be more than you want, for the moment. It can get "mathy", but I find the topic quite interesting.

I liked this free Coursera course from UC Santa Cruz on the topic.
https://www.coursera.org/learn/bayesian-statistics
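
As a concrete illustration of that question (my own sketch, not from the thread or the course), suppose a measurement leaves you with a Gaussian belief about k, centred on the reported value with the error bar taken as its 1-sigma width; the probability that the true value lies in a given range is then just a difference of two normal CDFs:

```python
# Sketch only: Gaussian belief about k, with the quoted error bar
# assumed to be the 1-sigma width of the distribution.
from scipy.stats import norm

k_hat, sigma = 0.070, 0.001

# Probability that the true k lies between 0.069 and 0.071
p = norm.cdf(0.071, loc=k_hat, scale=sigma) - norm.cdf(0.069, loc=k_hat, scale=sigma)
print(f"P(0.069 < k < 0.071) = {p:.3f}")  # ~0.683, the usual 1-sigma coverage
```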
 
  • #3
Of course, you could keep performing the experiment to converge on a "better" value. This would depend on your budget (time or money). You wouldn't necessarily need to perform hundreds of experiments; each new observation provides more information and a better credible set.
 
  • #4
That's interesting, as I plan to look at Bayesian analysis at some point in the future.

But presently I'm looking for a simple option to make an easy, direct comparison.
 
  • #5
It's hard to say something simple about two observations. It could be that the true value is 0.070 and your other observation is just off for some unknown reason. Or maybe 0.076 is the correct value.

Domain knowledge could steer you to place more belief in one observation than the other. Then again, domain knowledge has also biased scientists into ignoring results that should have been considered.

But you know about the experiment(s), and we don't. If you cannot perform any additional experiments, then try to explain from your own knowledge why they differ. Perhaps you used two different methods (the different uncertainties in the two observations suggest as much).

You could take the average and report both observations. What is the cost of being off by a certain amount?
 
  • #6
Or just create a table with different possibilities: say both values were high, 0.071 and 0.076, then compute the average and standard deviation.
Then do the same with both low: 0.069 and 0.072.
Then try the outer values, 0.069 and 0.076, and then the inner values, 0.071 and 0.072.
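
A throwaway sketch of that table (the pairs are just the extremes of the two error ranges from post #1):

```python
# Tabulate the mean and sample standard deviation for each pair of
# extreme readings suggested above.
import statistics

pairs = [
    ("both high",    0.071, 0.076),
    ("both low",     0.069, 0.072),
    ("outer values", 0.069, 0.076),
    ("inner values", 0.071, 0.072),
]
for label, a, b in pairs:
    mean = statistics.mean([a, b])
    sd = statistics.stdev([a, b])  # sample std dev (divides by n - 1)
    print(f"{label:12s}  mean = {mean:.4f}  sd = {sd:.4f}")
```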
 
  • #7
I don't know much about this, so take nothing more than possible pause for thought from my comments.
scottdave said:
you could keep performing the experiment to converge on a "better" value.
What if the experimental values do not converge, but rather continue to vary erratically around an average value that stays away from the other known or accepted value, perhaps in a range that never includes it?

When I saw his two values, I thought: they do not agree. If one is right, the other must be wrong. One or the other (or both) is wrong.

I may be looking at it from an engineering-tolerance view rather than a physics-variability view.
Perhaps value ± delta means something like: the result lies within delta of the value in some implied fraction of the experiments done.

I thought that one looked at all the sources of error and uncertainty and then said: I can't be sure of the exact value, but I know it lies in the range value ± delta. My conclusion was that he must have some unaccounted-for source of error that he should look for.
 
  • #8
As @Merlin3189 stated, maybe they don't converge. Evidently some factors have not been taken into account, since the values do not agree (within tolerance). An engineering perspective is important to consider: what is the cost of repeating the experiment? What is the cost of the stated value being off? And so on.

Performing additional experiments will provide more information, perhaps about the source of variability.

I am just trying to offer ideas about what to try, without knowing the goal of what is being measured.
 
  • #9
StillAnotherDave said:
k = 0.070 ± 0.001 and k = 0.074 ± 0.002

I believe your question is: with just this information, what can you say? It really boils down to the meaning of the provided error bars. They came from somewhere and they have underlying assumptions. There is also the question of whether they are one sigma or three sigma.

There can be a whole range of possibilities, so we can only guess at your situation. One common meaning is that the results of the measurement are expected to be normally distributed, and the error bar is an estimate of the 3-sigma half-width of the distribution. Note that it is only an estimate: if the experiment were repeated a zillion times, the actual distribution might have a different standard deviation. In other words, there is error in the error bar. If these are two results using the same measurement technique and the error bars came out a factor of two different, that may be telling you that the estimate of the error bar is not very good and that the actual width might be either value, or in between, or twice as big. That is telling you to do more repeated measurements and get a better handle on the distribution.

Let's assume that you understand what went into the measurement and you know that the error bars are good estimates of the distributions. If so, why are they different? Suppose these are two different measurement techniques that happen to have different precision. The first measurement is telling you the second value, 0.074, is very improbable: if 0.001 is three sigma, 0.074 is 12 sigma from the mean and has only a 10^-32 chance of being from the same distribution. However, 0.074 is only the most probable result of the second experiment. Could the real value actually be 0.070? For the second distribution that is six sigma below the mean, so there is still only about a 1 in 10^9 chance that the true value is the value found in the first experiment. Taken together, there is very little chance these values are the same. Now you can start looking for reasons that they are different (experimental bias, or real physical reasons for the difference).

Of course, the argument is different if those are 1-sigma error bars. Then 0.074 is only 4 sigma above the first experiment: still unlikely, but roughly 1 in 30,000 rather than 10^-32. More importantly, 0.070 is only 2 sigma below the second measurement, a deviation with a probability of a few percent. In this case you would say that the more precise measurement technique gives the value as 0.070 ± 0.001 (one sigma) and that the second, less precise measurement technique is consistent with this value to within about two standard deviations.
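
A quick sketch of the sigma arithmetic above (my code; it assumes Gaussian errors, and norm.sf gives the one-sided tail probability):

```python
# Reproduce the sigma distances under both readings of the error bars.
from scipy.stats import norm

k1, e1 = 0.070, 0.001
k2, e2 = 0.074, 0.002

for bar in (3, 1):  # treat the quoted bars as 3-sigma or 1-sigma widths
    s1, s2 = e1 / bar, e2 / bar
    z1 = (k2 - k1) / s1  # how far 0.074 sits from experiment 1
    z2 = (k2 - k1) / s2  # how far 0.070 sits from experiment 2
    print(f"{bar}-sigma bars: {z1:.0f} sigma (p ~ {norm.sf(z1):.0e}), "
          f"{z2:.0f} sigma (p ~ {norm.sf(z2):.0e})")
```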
 
  • #10
StillAnotherDave said:
That's interesting, as I plan to look at Bayesian analysis at some point in the future.

But presently I'm looking for a simple option to make an easy, direct comparison.
In the Bayesian approach, as I understand it, you start with an a priori distribution and adjust it in the light of the data. You could take one reading with its error range as the prior and the other reading with its range as the update. That should be valid if the result does not depend on which way round you use them.

I presume the example data you quote are fictitious. Since they do not overlap, the first response should be to reconsider your estimate of the error bars.
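
For what it's worth, a minimal sketch of that update, assuming both error bars are 1-sigma widths of Gaussians: the posterior is again Gaussian, the precisions (inverse variances) add, and the result is the same whichever reading plays the prior:

```python
# Gaussian prior x Gaussian likelihood: the posterior mean is the
# precision-weighted average of the two readings.
k1, s1 = 0.070, 0.001  # one reading as the prior
k2, s2 = 0.074, 0.002  # the other reading as the update

w1, w2 = 1 / s1**2, 1 / s2**2           # precisions
k_post = (w1 * k1 + w2 * k2) / (w1 + w2)
s_post = (w1 + w2) ** -0.5
print(f"posterior: k = {k_post:.4f} +/- {s_post:.4f}")  # ~0.0708 +/- 0.0009
# Swapping which reading is "prior" and which is "update" changes nothing.
```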
 
  • #11
Any experiment will bring a host of random elements into play for each of a series of supposedly identical measurements. The Central Limit Theorem tells us that under a wide variety of real-world circumstances these now-distributed measurements will follow a Gaussian curve. Therefore:
The average value is your best estimate.
That estimate gets better as ##\sqrt{n}##, where n is the number of data points.

This simple description will provide much of what you need for the majority of analyses. It is remarkably powerful. If you have partial knowledge about your outcomes or errors, then Bayesian approaches make sense, but they are more (often much more) involved.
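
In code, that simple description amounts to nothing more than this (made-up readings):

```python
# The mean of repeated readings is the estimate; its standard error
# shrinks like 1/sqrt(n).
import numpy as np

readings = np.array([0.070, 0.074, 0.071, 0.072, 0.069])  # made-up data
mean = readings.mean()
sem = readings.std(ddof=1) / np.sqrt(len(readings))  # s / sqrt(n)
print(f"k = {mean:.4f} +/- {sem:.4f}")
```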
 
  • #12
hutchphd said:
Any experiment will bring a host of random elements into play for each of a series of supposedly identical measurements. The Central Limit Theorem tells us that under a wide variety of real-world circumstances these now-distributed measurements will follow a Gaussian curve. Therefore:
The average value is your best estimate.
That estimate gets better as ##\sqrt{n}##, where n is the number of data points.

This simple description will provide much of what you need for the majority of analyses. It is remarkably powerful. If you have partial knowledge about your outcomes or errors, then Bayesian approaches make sense, but they are more (often much more) involved.
Actually, it's ##\sqrt{n-1}##, which means that with two readings the estimate of the error in the averaged value will be just the standard deviation of the two numbers, i.e. the difference divided by ##\sqrt{2}##.
But this approach ignores the given error bars, and with so few readings these will often give a tighter bound (if trusted). The challenge is to use both pieces of information.
 
  • #13
Thanks. My n is usually large...
 
  • #14
hutchphd said:
Any experiment will bring a host of random elements into play for each of a series of supposedly identical measurements. The Central Limit Theorem tells us that under a wide variety of real-world circumstances these now-distributed measurements will follow a Gaussian curve. Therefore:
The average value is your best estimate.
That estimate gets better as ##\sqrt{n}##, where n is the number of data points.

This simple description will provide much of what you need for the majority of analyses. It is remarkably powerful. If you have partial knowledge about your outcomes or errors, then Bayesian approaches make sense, but they are more (often much more) involved.

Yes, but in this case these are not simply repeated experiments. From somewhere he has independent and unequal estimates of the error of each of the two measurement values. In that case you do not simply average and say the deviation of the mean is ##1/\sqrt{n}## times the error in each.

Particularly not when the error bars are telling you these two values are not even statistically likely to be the same. The only thing to do in that case is go back and figure out what about your two experiments is different, or what about your error estimates is wrong.
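
One standard way (not spelled out in the thread) to put a number on "statistically likely to be the same" is to compare the gap between the two values with the errors combined in quadrature; how damning the answer is depends entirely on whether the bars are read as 1 sigma or 3 sigma:

```python
# Compare the gap between the two values with the quadrature-combined error.
from math import sqrt
from scipy.stats import norm

k1, e1 = 0.070, 0.001
k2, e2 = 0.074, 0.002

for bar in (1, 3):  # read the bars as 1-sigma or 3-sigma widths
    s1, s2 = e1 / bar, e2 / bar
    z = abs(k2 - k1) / sqrt(s1**2 + s2**2)
    p = 2 * norm.sf(z)  # two-sided probability of a gap at least this large
    print(f"{bar}-sigma bars: gap = {z:.1f} combined sigma, p = {p:.0e}")
```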
 
  • #15
My apologies... I did not carefully read the question. Absolutely my bad.
 
  • #16
Cutter Ketch said:
Yes, but in this case these are not simply repeated experiments. From somewhere he has independent and unequal estimates of the error of each of the two measurement values. In that case you do not simply average and say the deviation of the mean is ##1/\sqrt{n}## times the error in each.

Particularly not when the error bars are telling you these two values are not even statistically likely to be the same. The only thing to do in that case is go back and figure out what about your two experiments is different, or what about your error estimates is wrong.
Perhaps a specific model will help.
Two people report to you their estimates of the same distance. One measured in metres, to the nearest cm, the other in feet, to the nearest half inch.
 
  • #17
Cutter Ketch said:
Yes, but in this case these are not simply repeated experiments. From somewhere he has independent and unequal estimates of the error of each of the two measurement values. In that case you do not simply average and say the deviation of the mean is ##1/\sqrt{n}## times the error in each.

Particularly not when the error bars are telling you these two values are not even statistically likely to be the same. The only thing to do in that case is go back and figure out what about your two experiments is different, or what about your error estimates is wrong.

To make the question more concrete, suppose k is the spring constant for some physical spring-mass system being investigated.

The first value of k is determined by measuring the extension of the spring as a function of the masses added. This is used to plot a graph of Δx against m, and the gradient is then used to determine k (##F = mg = -kx##). The uncertainty in the extension is simple observational and scale uncertainty in the ruler used.

The second value of k is determined from a measurement of oscillation, where the period squared ##T^2## is plotted against mass and the gradient is again used to determine k (##T^2 = 4\pi^2 m/k##). The error in T is again observational uncertainty using a stopwatch.
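
To make that concrete, here is a sketch of both analyses with made-up data for a spring of roughly k = 70 N/m (the numbers, units, and g = 9.81 m/s² are mine, purely illustrative):

```python
# Extract k from the gradients of the two plots described above.
import numpy as np

g = 9.81
m = np.array([0.05, 0.10, 0.15, 0.20, 0.25])  # added masses (kg)

# Method 1: static extension. x = (g/k) m, so k = g / gradient.
x = np.array([0.0070, 0.0141, 0.0210, 0.0281, 0.0350])  # extensions (m)
k_static = g / np.polyfit(m, x, 1)[0]

# Method 2: oscillation. T^2 = (4 pi^2 / k) m, so k = 4 pi^2 / gradient.
T2 = np.array([0.028, 0.057, 0.085, 0.113, 0.141])  # period squared (s^2)
k_dynamic = 4 * np.pi**2 / np.polyfit(m, T2, 1)[0]

print(f"k (static)  = {k_static:.1f} N/m")
print(f"k (dynamic) = {k_dynamic:.1f} N/m")
```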
 
  • #18
Reading back through the replies, I just realized that none of us has addressed the specific question asked. We've all focused on the usual question of how you combine the two results to come up with a single estimate of the actual value and its uncertainty.

For degree of agreement, you would first define what you mean by the error bars, i.e., what distribution do they represent? Is it uniform across that range, or are these some number of standard deviations of a Gaussian distribution? Or something else?
Having answered that, you have two probability density functions. The area of their overlap would be an obvious indicator of the level of agreement, the maximum possible value being 1.

But note that the two measurements 8.0±0.1 and 8.0±0.2 would show less agreement than 8.0±0.2 and 8.0±0.2. So I am not sure this is what you are really after.
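
A sketch of that overlap measure, assuming the bars are 1-sigma widths of Gaussian densities (the last two lines reproduce the caveat about equal-width bars):

```python
# Overlap area of two Gaussian PDFs as a crude agreement score in [0, 1].
import numpy as np
from scipy.stats import norm

def overlap(m1, s1, m2, s2, n=200_000):
    lo = min(m1, m2) - 6 * max(s1, s2)
    hi = max(m1, m2) + 6 * max(s1, s2)
    ks = np.linspace(lo, hi, n)
    dk = ks[1] - ks[0]
    return np.minimum(norm.pdf(ks, m1, s1), norm.pdf(ks, m2, s2)).sum() * dk

print(f"0.070+/-0.001 vs 0.074+/-0.002: {overlap(0.070, 0.001, 0.074, 0.002):.3f}")
print(f"8.0+/-0.1 vs 8.0+/-0.2: {overlap(8.0, 0.1, 8.0, 0.2):.3f}")
print(f"8.0+/-0.2 vs 8.0+/-0.2: {overlap(8.0, 0.2, 8.0, 0.2):.3f}")
```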
 

1. What is constant k and why is it important in scientific experiments?

Constant k can stand for different quantities depending on the field. In chemical kinetics it is the rate constant, a numerical value that represents the rate at which a reaction occurs; in this thread it is the spring constant of a mass-spring system. In either case it characterizes the relationship between the measured quantities and can be used to predict the outcome of future experiments.

2. How is agreement between experimental values of constant k measured?

Agreement between experimental values of constant k is typically measured using statistical methods such as correlation coefficients and regression analysis. These techniques compare the data points from different experiments and determine the degree of similarity or agreement between them.

3. What factors can affect the agreement between experimental values of constant k?

There are several factors that can affect the agreement between experimental values of constant k, including experimental error, variations in experimental conditions, and the accuracy and precision of the measuring instruments used. It is important to control for these factors in order to obtain reliable and accurate results.

4. How can the agreement between experimental values of constant k be improved?

To improve the agreement between experimental values of constant k, it is important to carefully design and conduct experiments, use precise and accurate measuring instruments, and repeat the experiments multiple times to ensure consistency. Additionally, using statistical techniques to analyze the data and identify any outliers or errors can also help improve the agreement between values.

5. What are the limitations of measuring agreement between experimental values of constant k?

One limitation of measuring agreement between experimental values of constant k is that it does not take into account any potential errors or biases in the experimental design or data collection. Additionally, the accuracy of the measuring instruments and the precision of the experimental conditions can also affect the agreement between values. It is important to consider these limitations when interpreting the results of such measurements.
