I would like to ask if anybody can help me figure out a fair way to measure the percentage difference between two measurements.
I have two sets of measurements, X and Y, and both contain unknown noise.
To measure the percentage difference between the two, I suppose either of these equations can be used:
1) Treat X as the reference measurement and test how different Y is from X:
    diff1_i = (x_i - y_i) / x_i * 100
or
2) Use the pairwise mean of the two as the reference:
    diff2_i = (x_i - y_i) / ((x_i + y_i) / 2) * 100
For positive data, abs(diff2_i) is smaller than abs(diff1_i) when y_i > x_i (the averaged denominator is larger then) and larger when y_i < x_i, so neither is uniformly smaller. But which one is fairer?
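To make the comparison concrete, here is a minimal sketch of both definitions; the arrays x and y are made-up example data, and NumPy is assumed to be available:

    import numpy as np

    # Hypothetical example data; replace with your own measurements.
    x = np.array([10.0, 12.0, 9.5, 11.0])
    y = np.array([11.0, 11.5, 9.0, 12.0])

    # Equation 1: percentage difference relative to X.
    diff1 = (x - y) / x * 100

    # Equation 2: percentage difference relative to the pairwise mean.
    diff2 = (x - y) / ((x + y) / 2) * 100

    print(diff1)
    print(diff2)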
By taking the median or mean of all the differences diff1_i or diff2_i (i = 1:100), I can test whether one measurement is biased relative to the other, or whether the difference is just due to random noise in the data, right?
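One way to formalize that check, sketched below under the assumption that the diff1 array from the snippet above is available, is a one-sample test of whether the mean percentage difference is zero (SciPy's ttest_1samp; a sign or Wilcoxon test would be a nonparametric alternative):

    from scipy import stats

    # Summary statistics of the percentage differences.
    print("mean:", np.mean(diff1), "median:", np.median(diff1))

    # Test H0: the mean percentage difference is zero (no bias).
    # A small p-value suggests a systematic offset rather than pure noise.
    t_stat, p_value = stats.ttest_1samp(diff1, 0.0)
    print("t =", t_stat, "p =", p_value)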
I also tend to think that, no matter which equation is used, the maximum percentage difference should be looked into as well, right?
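Checking the extreme is cheap; a one-off sketch, again assuming the diff1 array from above:

    # Largest single percentage discrepancy and where it occurs.
    worst = np.argmax(np.abs(diff1))
    print("max |diff|:", np.abs(diff1[worst]), "at index", worst)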
Fanfan