Error propagation for a sum of means

In summary: the thread discusses how to estimate the variance of the normalised per-PMT mean photoelectron counts so that they can be used to weight a chi-square comparison between a single test event and the expected hit pattern across the PMTs.
  • #1
FatPhysicsBoy
I have a = {a1, a2, ..., a1000}, where this set forms a distribution of photoelectrons (pe) seen by a particular photomultiplier tube (PMT) over 1000 repeated events. I then have N sets of these (N PMTs), each containing 1000 pe values which I believe are indeed random and independent. So a, b, c, ... (not enough letters!), where a corresponds to PMT 1, b corresponds to PMT 2, etc.

a then goes into a histogram from which a mean and variance are extracted, and this is then done for all N sets. I then define N variables pi = μi/μT, where μi is the mean from histogram i (histogram 1 corresponding to a, and so on) and μT is the sum of the means μi from all N histograms.

I'm now confused about how I'd calculate V[pi], i.e. for PMT 1, given that I know E[a] and V[a]. So far I've thought that perhaps I can say μT = μ1 + μ2 + ... + μN = ((a1 + a2 + ... + a1000) + (b1 + b2 + ... + b1000) + ... )/1000 and therefore V[μT] = ((V[a1] + V[a2] + ... + V[a1000]) + (V[b1] + V[b2] + ... + V[b1000]) + ... )/1000. I'm not sure, however, that this is correct. First of all I don't know if the logic is correct, and secondly I'm not sure whether the aj, bk, and so on can be treated as independent random variables (I think they are random and independent, but the means E[a], E[b], etc. are not?).
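A quick numerical sanity check of the rule in question - a minimal sketch with made-up Poisson rates, purely illustrative, and it assumes the per-event counts really are independent: if the μi are independent, the variance of their sum should equal the sum of their variances.

[code=python]
import numpy as np

rng = np.random.default_rng(0)

n_events = 1000          # the 1000 repeated events per PMT
rates = [3.0, 5.0, 2.0]  # made-up mean pe counts for a few PMTs (hypothetical)
n_trials = 5000          # repeat the whole set of 1000 events many times

means = np.empty((n_trials, len(rates)))
for t in range(n_trials):
    counts = rng.poisson(lam=rates, size=(n_events, len(rates)))
    means[t] = counts.mean(axis=0)   # mu_1, mu_2, ... for this trial

mu_T = means.sum(axis=1)             # sum of the per-PMT means

# If the mu_i are independent, Var(mu_T) should equal the sum of Var(mu_i).
print("Var(mu_T)        :", mu_T.var(ddof=1))
print("sum of Var(mu_i) :", means.var(axis=0, ddof=1).sum())
[/code]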

Hopefully this hasn't been too confusing, and any ideas would be greatly appreciated. Cheers.
 
  • #2
FatPhysicsBoy said:
I'm now confused about how I'd calculate V[pi], i.e. for PMT 1, given that I know E[a] and V[a].

I assume "V(..)" means "variance of". However, "variance of" is an ambiguous phrase. You can "calculate" the variance of a sample. However, your goal might be to "estimate" the variance of some random variable - i.e. some population. One estimates the variance of a random variable by doing calculations on values in a sample. Those calculations often involve computing sample variances.

If you are trying to estimate the variance of a random variable, what is the definition of that random variable? For example, is it the mean number of electrons detected in 1000 events? Or is it the mean number of electrons detected in a single event?

Are the experiments, a,b,c,... all conducted under the same conditions?
 
  • #3
Stephen Tashi said:
I assume "V(..)" means "variance of". However, "variance of" is an ambiguous phrase. You can "calculate" the variance of a sample. However, your goal might be to "estimate" the variance of some random variable - i.e. some population. One estimates the variance of a random variable by doing calculations on values in a sample. Those calculations often involve computing sample variances.

If you are trying to estimate the variance of a random variable, what is the definition of that random variable? For example, is it the mean number of electrons detected in 1000 events? Or is it the mean number of electrons detected in a single event?

Are the experiments, a,b,c,... all conducted under the same conditions?

Hello Stephen, thank you for your reply. I do indeed mean "variance of" when I say V[..]. However, the sets a, b, c, etc. do not correspond to distinct experiments. In fact, a single experiment (or event) results in N values, one for each PMT in the detector: a1, b1, c1, ..., NthLetter1. Event 2 then has a2, b2, c2, ..., NthLetter2, and so on for 1000 events. At the end of this I look at the distributions a, b, c, and so on. Obtaining the mean of these distributions gives me a photoelectron expectation value for PMTa, PMTb, PMTc, ..., PMTNthLetter. I then have a distribution of N expectation values formed from the expected photoelectron count at each PMTi.

I then run another test event with identical conditions where the sets a, b, c, ... are now single-valued; they only have one element in them - the number of photoelectrons seen by PMTi in that single event. This single photoelectron value is compared to the expected value for each PMT using a chi-square goodness-of-fit test. At the moment my chi-square statistic is (di - pi)²/pi, where pi is as defined in my initial post and di comes from the single test event, defined as (photoelectron count on PMT i)/(sum of photoelectron counts over all N PMTs). The chi-square statistic is then summed over all N PMTs.
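For concreteness, a minimal sketch of how this statistic might be computed (the arrays here are hypothetical placeholders, not the real detector data):

[code=python]
import numpy as np

# Hypothetical expected fractions p_i from the 1000-event histograms (they sum to 1).
p = np.array([0.05, 0.20, 0.50, 0.20, 0.05])

# Hypothetical photoelectron counts seen in the single test event.
test_counts = np.array([2.0, 9.0, 24.0, 8.0, 1.0])
d = test_counts / test_counts.sum()   # d_i: proportion of the event's pe on PMT i

chi2 = np.sum((d - p) ** 2 / p)       # summed over all N PMTs
print("chi-square statistic:", chi2)
[/code]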

The motivation for wanting to know the variance of pi comes from wanting to explore how effective the chi-square statistic is when it is weighted by the variance instead of by pi.
 
  • #4
First, I'll just attempt to rephrase the problem in different notation.

Let [itex] Y = (Y_1,Y_2,...Y_M) [/itex] be a vector of [itex] M [/itex] possibly distinct random variables representing the counts recorded by each of [itex] M [/itex] photo multiplier tubes (PMTs) in one experiment.

Let [itex] y[\ ][\ ] [/itex] be an array of observed data, where [itex] y[\ i][j] [/itex] gives the count on the [itex]i[/itex]th PMT in the [itex]j[/itex]th experiment, [itex] i = 1,2,...,M [/itex], [itex] j = 1,2,...,N [/itex]. The [itex] N [/itex] experiments are assumed to be independent realizations of [itex] Y [/itex].

Let [itex] z[\ i] [/itex] be an array of data, [itex] i = 1,2,...,M [/itex], representing the observed count on each PMT in a single experiment done under possibly different conditions than the experiments mentioned above. (i.e. [itex] z[\ ] [/itex] is not from an experiment that produced the data in [itex] y[\ ][\ ] [/itex] ).

The goal is to do a hypothesis test using the null hypothesis that [itex] z[\ ] [/itex] is a realization of [itex] Y [/itex].
The general question is how to define and compute a Chi-squared statistic for this test.

------
Let [itex] \mu[\ i] = \frac{1}{N} \sum_{j=1}^N y[\ i] [j] [/itex]

Let [itex] S = \sum_{i=1}^M \mu[\ i] [/itex].

You define [itex] p[\ i] = \frac{\mu[\ i]}{S} [/itex].

The definition of [itex] p[\ i ] [/itex] "corresponds" to a random variable [itex] P_i = Y_i/ (\ \sum_{j=1}^M Y_j \ ) [/itex].

- or perhaps it "corresponds" to [itex] P_i = Y_i/(\sum_{j=1}^M E(Y_j) ) [/itex] ?

You want to know how to estimate the variance of [itex] P_i [/itex].
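In array terms, a rough sketch of these quantities (the y array below is made up purely for illustration):

[code=python]
import numpy as np

rng = np.random.default_rng(1)
M, N = 5, 1000                          # M PMTs, N experiments (sizes are made up)
y = rng.poisson(lam=4.0, size=(M, N))   # y[i][j]: count on PMT i in experiment j

mu = y.mean(axis=1)      # mu[i] = (1/N) * sum_j y[i][j]
S = mu.sum()             # S = sum_i mu[i]
p = mu / S               # p[i] = mu[i] / S
print(p, p.sum())        # the p[i] sum to 1 by construction
[/code]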
 
  • #5
Stephen Tashi said:
First, I'll just attempt to rephrase the problem in different notation.

Let [itex] Y = (Y_1,Y_2,...Y_M) [/itex] be a vector of [itex] M [/itex] possibly distinct random variables representing the counts recorded by each of [itex] M [/itex] photo multiplier tubes (PMTs) in one experiment.

Let [itex] y[\ ][\ ] [/itex] be an array of observed data, where [itex] y[\ i][j] [/itex] gives the count on the [itex]i[/itex]th PMT in the [itex]j[/itex]th experiment, [itex] i = 1,2,...,M [/itex], [itex] j = 1,2,...,N [/itex]. The [itex] N [/itex] experiments are assumed to be independent realizations of [itex] Y [/itex].

Let [itex] z[\ i] [/itex] be an array of data, [itex] i = 1,2,...,M [/itex], representing the observed count on each PMT in a single experiment done under possibly different conditions than the experiments mentioned above. (i.e. [itex] z[\ ] [/itex] is not from an experiment that produced the data in [itex] y[\ ][\ ] [/itex] ).

The goal is to do a hypothesis test using the null hypothesis that [itex] z[\ ] [/itex] is a realization of [itex] Y [/itex].
The general question is how to define and compute a Chi-squared statistic for this test.

------
Let [itex] \mu[\ i] = \frac{1}{N} \sum_{j=1}^N y[\ i] [j] [/itex]

Let [itex] S = \sum_{i=1}^M \mu[\ i] [/itex].

You define [itex] p[\ i] = \frac{\mu[\ i]}{S} [/itex].

The definition of [itex] p[\ i ] [/itex] "corresponds" to a random variable [itex] P_i = Y_i/ (\ \sum_{j=1}^M Y_j \ ) [/itex].

- or perhaps it "corresponds" to [itex] P_i = Y_i/(\sum_{j=1}^M E(Y_j) ) [/itex] ?

You want to know how to estimate the variance of [itex] P_i [/itex].

Hello Stephen, yes everything is correct. [itex] p[\ i ] [/itex] "corresponds" to [itex] P_i = Y_i/ (\ \sum_{j=1}^M Y_j \ ) [/itex] where the [itex]Y[/itex] used here comes from [itex]z[][/itex] and not [itex]y[][][/itex].

I realized I made a mistake when interpreting the chi-square equation, and that the variance I want is indeed the one you mention and not that of [itex]p[\ i][/itex]. You see, currently I am using [itex]\chi^{2} = \sum_{i=1}^{M}\frac{(P_{i} - p[\ i])^{2}}{p[\ i]}[/itex] but now I want to try [itex]\chi^{2} = \sum_{i=1}^{M}\frac{(P_{i} - p[\ i])^{2}}{\sigma_{P_{i}}^{2}}[/itex].
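In code the change being considered amounts to swapping the denominator; a minimal sketch with hypothetical numbers (p, var_P and P_obs below are all made up):

[code=python]
import numpy as np

# Hypothetical numbers, purely for illustration.
p     = np.array([0.05, 0.20, 0.50, 0.20, 0.05])             # expected fractions p[i]
var_P = np.array([0.0004, 0.0010, 0.0020, 0.0010, 0.0004])   # estimated Var(P_i)
P_obs = np.array([0.04, 0.22, 0.48, 0.21, 0.05])             # P_i from the single test event

chi2_p   = np.sum((P_obs - p) ** 2 / p)       # current weighting (by p[i])
chi2_var = np.sum((P_obs - p) ** 2 / var_P)   # proposed weighting (by sigma_{P_i}^2)
print(chi2_p, chi2_var)
[/code]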

The general idea here is: I have my [itex]y[\ i][j][/itex] where all [itex]j[/itex] 'experiments' are at the same position [itex]x[/itex], and the [itex]p[\ i][/itex] then give a 'percentage hit'. E.g. if [itex]x[/itex] is right next to PMT20, then PMT20 will have a high value for [itex]p[20][/itex]. Then [itex]p[19][/itex] and [itex]p[21][/itex] will be slightly lower, [itex]p[18][/itex] and [itex]p[22][/itex] slightly lower still, and so on.

I then do everything again at position [itex]x_{1} \neq x[/itex], and I get a whole new set [itex]y_{1}[\ i][j][/itex]. Maybe this time [itex]x_{1}[/itex] is right near PMT140, and then a similar argument to the above applies.

Finally, when I run the single 'experiment' I get z[] and subsequently my [itex]P_{i}[/itex]. If I do this at position [itex]x[/itex] as well, then the [itex]\chi^{2}[/itex] for the set of [itex]p[\ i][/itex] corresponding to [itex]y[\ i][j][/itex] will be lower than the [itex]\chi^{2}[/itex] for the set of [itex]p[\ i][/itex] corresponding to [itex]y_{1}[\ i][j][/itex].
 
  • #6
FatPhysicsBoy said:
Hello Stephen, yes everything is correct. [itex] p[\ i ] [/itex] "corresponds" to [itex] P_i = Y_i/ (\ \sum_{j=1}^M Y_j \ ) [/itex] where the [itex]Y[/itex] used here comes from [itex]z[][/itex] and not [itex]y[][][/itex].

The straightforward way to estimate the mean and variance of [itex] P_i [/itex] is to compute realizations of [itex] P_i [/itex] from [itex] y[\ ][\ ] [/itex].

Define [itex] q[\ i][j] = \frac{ y[\ i][j] }{\sum_{k=1}^M y[\ k][j] }[/itex]
Estimate [itex] E(P_i) [/itex] by the sample mean of the samples [itex] q[\ i][1],q[\ i][2],...q[\ i][N] [/itex].
Estimate [itex] Var(P_i) [/itex] by the sample variance of that set of samples.
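A minimal sketch of that recipe in array form (the y array here is made up; with real data one would also guard against experiments whose total count is zero):

[code=python]
import numpy as np

rng = np.random.default_rng(2)
M, N = 5, 1000                          # sizes are made up for illustration
y = rng.poisson(lam=4.0, size=(M, N))   # y[i][j]: count on PMT i in experiment j

# q[i][j] = y[i][j] / sum_k y[k][j]  (per-experiment proportions)
q = y / y.sum(axis=0, keepdims=True)

E_P   = q.mean(axis=1)           # sample mean over the N experiments -> estimate of E(P_i)
Var_P = q.var(axis=1, ddof=1)    # sample variance                    -> estimate of Var(P_i)
print(E_P)
print(Var_P)
[/code]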

We have to face the question of whether Chi-square is useful in this case. There are caveats about using Chi-square when the data is proportions instead of integer counts. (e.g. http://stats.stackexchange.com/questions/104323/chi-square-analysis-percentages ).

When people say "You can use Chi-square" or "You can't use Chi-square" they refer to using the Chi-square statistic and relying on the usual numerical tables or formulas for its distribution. You can define a statistic whose formula is that of Chi-square in a situation where it does not have the distribution of the "real" Chi-square statistic. The statistic will have some distribution and that distribution may be useful in hypothesis testing.

In my opinion, the safest way to approach even moderately complicated statistical problems is to use simulation. If you have a rough model for the physics of the experiment, you can estimate the distribution of the statistic you have defined.

----

There is the question of what you are trying to accomplish by a hypothesis test. For example, the null hypothesis that "The array of data [itex] z[\ ] [/itex] is a realization of [itex] Y [/itex]" is different than the more specific null hypothesis "The value [itex] z[3] [/itex] is a realization of [itex] Y_3 [/itex]".
 
  • #7
Stephen Tashi said:
The straightforward way to estimate the mean and variance of [itex] P_i [/itex] is to compute realizations of [itex] P_i [/itex] from [itex] y[\ ][\ ] [/itex].

Define [itex] q[\ i][j] = \frac{ y[\ i][j] }{\sum_{k=1}^M y[\ k][j] }[/itex]
Estimate [itex] E(P_i) [/itex] by the sample mean of the samples [itex] q[\ i][1],q[\ i][2],...q[\ i][N] [/itex].
Estimate [itex] Var(P_i) [/itex] by the sample variance of that set of samples.

We have to face the question of whether Chi-square is useful in this case. There are caveats about using Chi-square when the data is proportions instead of integer counts. (e.g. http://stats.stackexchange.com/questions/104323/chi-square-analysis-percentages ).

When people say "You can use Chi-square" or "You can't use Chi-square" they refer to using the Chi-square statistic and relying on the usual numerical tables or formulas for its distribution. You can define a statistic whose formula is that of Chi-square in a situation where it does not have the distribution of the "real" Chi-square statistic. The statistic will have some distribution and that distribution may be useful in hypothesis testing.

In my opinion, the safest way to approach even moderately complicated statistical problems is to use simulation. If you have a rough model for the physics of the experiment, you can estimate the distribution of the statistic you have defined.

----

There is the question of what you are trying to accomplish by a hypothesis test. For example, the null hypothesis that "The array of data [itex] z[\ ] [/itex] is a realization of [itex] Y [/itex]" is different than the more specific null hypothesis "The value [itex] z[3] [/itex] is a realization of [itex] Y_3 [/itex]".

Thank you for your help Stephen, I will compute the variances using the method you suggested. I understand; this is supposed to be the first stage though - see how far we can go using chi-square, and possibly replace it later with a likelihood method or some other determining method.

The aim is more in line with the first null hypothesis, "The array of data [itex]z[\ ][/itex] is a realisation of [itex] Y [/itex]".

What do you mean by simulation? How would you propose achieving this using simulation/otherwise?

Thank you.
 
  • #8
FatPhysicsBoy said:
What do you mean by simulation? How would you propose achieving this using simulation/otherwise?

Define [itex] O_i = \frac{ z[\ i]}{ \sum_{k=1}^M z[k]} [/itex]

Let's say we define the statistic [itex] s [/itex] by [itex] s = \sum_{i=1}^M \frac{ ( O_i - E(P_i))^2} {E(P_i)} [/itex] , where we have estimated [itex] E(P_i) [/itex] by the method in the previous post.

If the data [itex] z[\ ] [/itex] produces a value such as [itex] s = 4.8 [/itex], the question we need to answer is "How probable is it that [itex] s \ge 4.8 [/itex] when [itex] z [\ ] [/itex] is a realization of the same random variables that generated the [itex] y[\ ][\ ] [/itex] data?"

If we are intellectually confident that [itex] s [/itex] has a distribution for which there are statistical tables and formulae, we just consult those to find the probability that [itex] s \ge 4.8 [/itex]. In your problem, I am not intellectually confident that [itex] s [/itex] has any standard statistical distribution. I'm leery of the fact that the total count of events varies between different experiments and I'm leery of the fact that [itex] O_i [/itex] is not a count of events, but rather a proportion of counts.

1) The first simulation I would try is "bootstrapping". Since your [itex] y[\ ][\ ] [/itex] data is not large by the standards of modern computers, I would compute the value of [itex] s [/itex] for each experiment in the [itex] y[\ ][\ ] [/itex] data. ( i.e. For each experiment [itex] j [/itex] , set [itex] z[\ i] = y[\ i][j] [/itex] and compute [itex] s [/itex]). Histogram the values of [itex] s [/itex] that are produced. This gives you an estimate of the distribution of [itex] s [/itex]. You can look at the histogram and estimate the answer to "what is the probability [itex] s \ge 4.8 [/itex] ?"

2) The next simulation I would try is using a probability model to generate a histogram for [itex] s [/itex]. For example, we can look at the total event counts in experiments, [itex] c_j = \sum_{i=1}^M y[\ i][j] [/itex]. Fit some integer-valued statistical distribution to that data. Next look at how a given number of events is distributed among the detectors. Fit a distribution to that situation. Having the probability model, run many replications of it to generate simulated [itex] z[\ ] [/itex] data and histogram the distribution of [itex] s [/itex] that results.

3) I wouldn't let statistics lobotomize my knowledge of physics. You must know something about the physical aspects of the experiment. People develop simulations of physical experiments. If you can develop one, that is another method of generating simulated [itex] z[\ ] [/itex] data.

4) If publishing your work is a consideration, you should consult papers that were accepted for publication and see what sort of statistics they used. Applying statistics is a subjective matter. Different journals may have different standards.
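A rough sketch of the bootstrap in item 1), reusing a made-up y array (as in the earlier sketches) and the E(P_i) estimate from post #6:

[code=python]
import numpy as np

rng = np.random.default_rng(3)
M, N = 5, 1000                          # made-up sizes, purely illustrative
y = rng.poisson(lam=4.0, size=(M, N))   # y[i][j]: count on PMT i in experiment j

q = y / y.sum(axis=0, keepdims=True)
E_P = q.mean(axis=1)                    # estimated E(P_i), as in post #6

def statistic(z):
    """The statistic s for one experiment's counts z[]."""
    O = z / z.sum()
    return np.sum((O - E_P) ** 2 / E_P)

# Item 1): treat each recorded experiment j as if it were the test event z[]
# and histogram the resulting values of s.
s_values = np.array([statistic(y[:, j]) for j in range(N)])

print("estimated P(s >= 4.8):", (s_values >= 4.8).mean())
hist, edges = np.histogram(s_values, bins=30)   # estimated distribution of s
[/code]

For item 2), the same statistic() function can be fed simulated z[] arrays drawn from whatever probability model is fitted to the total counts and to how the counts split across the detectors.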
 
  • #9
Stephen Tashi said:
Define [itex] O_i = \frac{ z[\ i]}{ \sum_{k=1}^M z[k]} [/itex]

Let's say we define the statistic [itex] s [/itex] by [itex] s = \sum_{i=1}^M \frac{ ( O_i - E(P_i))^2} {E(P_i)} [/itex] , where we have estimated [itex] E(P_i) [/itex] by the method in the previous post.

If the data [itex] z[\ ] [/itex] produces a value such as [itex] s = 4.8 [/itex], the question we need to answer is "How probable is it that [itex] s \ge 4.8 [/itex] when [itex] z [\ ] [/itex] is a realization of the same random variables that generated the [itex] y[\ ][\ ] [/itex] data?"

If we are intellectually confident that [itex] s [/itex] has a distribution for which there are statistical tables and formulae, we just consult those to find the probability that [itex] s \ge 4.8 [/itex]. In your problem, I am not intellectually confident that [itex] s [/itex] has any standard statistical distribution. I'm leery of the fact that the total count of events varies between different experiments and I'm leery of the fact that [itex] O_i [/itex] is not a count of events, but rather a proportion of counts.

1) The first simulation I would try is "bootstrapping". Since your [itex] y[\ ][\ ] [/itex] data is not large by the standards of modern computers, I would compute the value of [itex] s [/itex] for each experiment in the [itex] y[\ ][\ ] [/itex] data. ( i.e. For each experiment [itex] j [/itex] , set [itex] z[\ i] = y[\ i][j] [/itex] and compute [itex] s [/itex]). Histogram the values of [itex] s [/itex] that are produced. This gives you an estimate of the distribution of [itex] s [/itex]. You can look at the histogram and estimate the answer to "what is the probability [itex] s \ge 4.8 [/itex] ?"

2) The next simulation I would try is using a probability model to generate a histogram for [itex] s [/itex]. For example, we can look at the total event counts in experiments, [itex] c_j = \sum_{i=1}^M y[\ i][j] [/itex]. Fit some integer-valued statistical distribution to that data. Next look at how a given number of events is distributed among the detectors. Fit a distribution to that situation. Having the probability model, run many replications of it to generate simulated [itex] z[\ ] [/itex] data and histogram the distribution of [itex] s [/itex] that results.

3) I wouldn't let statistics lobotomize my knowledge of physics. You must know something about the physical aspects of the experiment. People develop simulations of physical experiments. If you can develop one, that is another method of generating simulated [itex] z[\ ] [/itex] data.

4) If publishing your work is a consideration, you should consult papers that were accepted for publication and see what sort of statistics they used. Applying statistics is a subjective matter. Different journals may have different standards.

Ultimately all the power is in my hands at the moment, since everything we are discussing is in fact being done in simulation. We have the full spherical detector geometry and can run events as and when and where we like through Monte Carlo. I think this method has broken down for the reasons you stated. Essentially, it works a lot better when there is no need to take a proportion of counts. I have the freedom to do this for the purposes of this study, so I have removed that step. I am now taking [itex]y_{a}[\ i ][j][/itex], [itex]y_{b}[\ i ][j][/itex], [itex]y_{c}[\ i ][j][/itex], ..., where a, b, c, ... refer to different positions. I then take one of these, i.e. [itex]y_{c}[\ i ][j][/itex], and compare the [itex]y_{c}[\ i ][j][/itex] for each [itex]j[/itex] with the expected value at PMTi associated with [itex]y_{a}[\ i ][j][/itex], [itex]y_{b}[\ i ][j][/itex], [itex]y_{c}[\ i ][j][/itex], ... respectively. This is infinitely more successful than when there was a [itex]z[\ ][/itex], since [itex]z[\ ][/itex] shared only the position with the y sets but not the number of photoelectrons deposited.

Thank you for your help Stephen, I am happy going forward with my study following our discussion. Apologies for the confusing nature of my explanations; there is a lot going on with this experiment, so I am usually a little confused!
 

Related to Error propagation for a sum of means

1. What is error propagation for a sum of means?

Error propagation for a sum of means is a method used to estimate the uncertainty or error in a calculated quantity that is derived from multiple measurements. It takes into account the individual errors of each measurement and combines them to determine the overall uncertainty in the final result.

2. Why is error propagation important in scientific research?

Error propagation is important in scientific research because it allows researchers to understand the reliability and validity of their measurements and calculations. By accounting for sources of error, scientists can ensure the accuracy and precision of their results and make informed conclusions.

3. How is error propagation for a sum of means calculated?

To calculate error propagation for a sum of means, the individual errors of each measurement are first determined. Then the errors are combined using the root-sum-of-squares method: the individual errors are squared, added together, and the square root of the total is taken. This final value is the overall uncertainty or error in the sum of means.
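For example, if two independent means carry uncertainties of 0.3 and 0.4 (in the same units), the uncertainty of their sum is √(0.3² + 0.4²) = √0.25 = 0.5.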

4. What are some sources of error in a sum of means calculation?

Sources of error in a sum of means calculation can include random errors in individual measurements, systematic errors caused by faulty equipment or procedures, and human error. It is important to identify and minimize these sources of error to ensure the accuracy of the final result.

5. How can error propagation for a sum of means be improved?

Error propagation for a sum of means can be improved by increasing the number of measurements taken, using more precise equipment, and minimizing sources of error. Additionally, using statistical analysis techniques such as standard deviation and confidence intervals can provide a more accurate estimation of the overall uncertainty.
