Poisson Statistics + Photon Detections

In summary, Poisson statistics are used in photon detection experiments because detections are independent, countable events occurring at a constant average rate; the average frequency of occurrence for the time period in question must be known (or estimated, e.g. from calibration runs) to apply them.
  • #1
StevieTNZ
Hi there,

Having done a Google, I wasn't able to find much information relating specifically to Poisson statistics and photon detections.

I was wondering why photon detection experiments are calculated using Poisson statistics?
(So for example, would Poisson distribution calculations be applied to Bell inequality tests etc.?)

What assumptions, if any, are made in relation to the calculations, for when you calculate the Poisson confidence intervals for photon experiments?

Any assistance would be much appreciated,
Stevie
 
  • #2
StevieTNZ said:
I was wondering why photon detection experiments are calculated using Poisson statistics?
Because they involve counting things.

A quick google for "poisson distribution" provided the following in the top 10 hits:
Particle Counting Statistics: PHYS 331: Junior Physics Laboratory I (Rice University TX).
http://www.umass.edu/wsp/statistics/lessons/poisson/: Statistics Lessons (UMass Amherst)

From the latter:
The Poisson distribution applies when:
(1) the event is something that can be counted in whole numbers;
(2) occurrences are independent, so that one occurrence neither diminishes nor increases the chance of another;
(3) the average frequency of occurrence for the time period in question is known; and
(4) it is possible to count how many events have occurred, but meaningless to ask how many such events have not occurred.
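As a rough illustration of those four conditions (not from the thread; Python, with a made-up rate and number of slices), the sketch below chops each counting interval into many tiny slices, lets a detection occur independently in each slice with a small probability, and compares the resulting count frequencies with the Poisson formula:

```python
# A minimal sketch illustrating why independent, constant-rate detections
# give Poisson-distributed counts.  The rate and slice count are hypothetical.
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)

rate = 5.0           # mean detections per counting interval (made up)
n_intervals = 100_000
n_slices = 10_000    # subdivide each interval into many tiny slices

# In each tiny slice a detection occurs independently with a small probability,
# so one occurrence neither raises nor lowers the chance of another.
p = rate / n_slices
counts = rng.binomial(n_slices, p, size=n_intervals)

# Compare simulated frequencies with the Poisson PMF  P(k) = rate^k e^(-rate) / k!
for k in range(10):
    simulated = np.mean(counts == k)
    poisson = rate**k * exp(-rate) / factorial(k)
    print(f"k={k}: simulated {simulated:.4f}  Poisson {poisson:.4f}")
```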
 
  • #3
Poisson statistics are related to the so-called coherent states of the electromagnetic field. These states are very classical in the sense that the uncertainty between amplitude and phase is minimal and independent of their value. A laser field is typically in a coherent state. States with a small definite particle number are very different from coherent states.

From the wiki article:
"Physically, this formula means that a coherent state remains unchanged by the detection (or annihilation) of field excitation or, say, a particle. The eigenstate of the annihilation operator has a Poissonian number distribution (as shown below). A Poisson distribution is a necessary and sufficient condition that all detections are statistically independent. Compare this to a single-particle state: once one particle is detected, there is zero probability of detecting another."
 
  • #4
Many thanks for your responses.

I saw that an assumption for this kind of statistic is that two simultaneous events do not occur at the same time.

I was wondering how we can use Poisson Statistics if we're unsure of the "the average frequency of occurrence for the time period in question", such as if photons go astray and thus undetected?
 
  • #5
I saw that an assumption for this kind of statistic is that two simultaneous events do not occur at the same time.
Two simultaneous events that occur at the same time would be a total of four events, which occur simultaneously ;)
You mean that events must happen one after the other?
... where did you see this?

I was wondering how we can use Poisson Statistics if we're unsure of the "the average frequency of occurrence for the time period in question", such as if photons go astray and thus undetected?
Photons that go undetected are factored in when you calibrate the detector. It's a reason for doing control runs.

The design of the experiment takes into account previous experiments done on the source.
 
  • #6
Simon Bridge said:
Two simultaneous events that occur at the same time would be a total of four events, which occur simultaneously ;)
You mean that events must happen one after the other?
... where did you see this?

I can't seem to locate the page on which I saw this. It probably wasn't a credible page, since you've already listed the four assumptions for me.


Simon Bridge said:
Photons that go undetected are factored in when you calibrate the detector. It's a reason for doing control runs.

The design of the experiment takes into account previous experiments done on the source.

I assume that any photons not detected won't make it into the final results, as we would be predetermining what result we would get without actually detecting it.

However, from the full results we get (e.g. four-fold coincidence counts rather than 3 of 4 photons detected), I gather we can use Poisson statistics to calculate the confidence interval within which the population mean may lie? Does that come with any assumptions?
 
  • #7
I assume that any photons not detected won't make it into the final results, as we would be predetermining what result we would get without actually detecting it.
Depends what you are measuring. It is usual to correct for detector efficiency for instance, as well as for "dead time".
This is why you do control runs using a source that has well-known statistics - to find out how the detector statistics relate to the source statistics.
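For illustration only, here is one common textbook way to apply the two corrections mentioned above; the rate, dead time, and efficiency values are made up, and a real experiment would calibrate them with control runs:

```python
# A minimal sketch (hypothetical numbers) of detector-efficiency and
# non-paralyzable dead-time corrections.  This is one textbook model,
# not necessarily the one used in any particular photon experiment.

measured_rate = 80_000.0   # counts per second actually registered
dead_time = 1e-6           # seconds the detector is blind after each count
efficiency = 0.6           # fraction of incident photons the detector registers

# Dead-time correction: while the detector is dead it misses true events,
# so the true count rate exceeds the measured one.
rate_after_dead_time = measured_rate / (1.0 - measured_rate * dead_time)

# Efficiency correction: only a fraction 'efficiency' of photons are registered.
incident_rate = rate_after_dead_time / efficiency

print(f"measured: {measured_rate:.0f}/s, dead-time corrected: "
      f"{rate_after_dead_time:.0f}/s, estimated incident: {incident_rate:.0f}/s")
```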
 
  • #8
Simon Bridge said:
Depends what you are measuring. It is usual to correct for detector efficiency for instance, as well as for "dead time".
This is why you do control runs using a source that has well-known statistics - to find out how the detector statistics relate to the source statistics.

Would doing that help in calculating the systematic uncertainty of the final results?
 
  • #9
I should have also asked: is it acceptable to publish experimental results that are at one standard deviation, for confirmation of predictions?

I'm not entirely sure how the calculation is done, except that the standard deviation is the variance of the data around the sample mean. But if you doubled the standard deviation, would you risk including results not otherwise obtained in the experiment?
 
  • #10
is it acceptable to publish experimental results that are at one standard deviation, for confirmation of predictions?
Why? Have you been rejected for publication with a note about your error analysis?
What is acceptable for publication is whatever the publisher says is acceptable.

If your predicted result is within 1sd of the mean of your experimental results, then the experimental results would support the model the prediction came from - or any other model that gave similar numbers.

So that would normally be quite acceptable.
It is not normally acceptable to reject the model based on a prediction >1sd from the experimental mean.
Science focuses on rejecting models, not accepting them.

I'm not entirely sure how the calculation is done, except that the standard deviation is the variance of the data around the sample mean. But if you doubled the standard deviation, would you risk including results not otherwise obtained in the experiment?
The standard deviation is the square root of the variance; it is often called the "statistical error".
In Poisson statistics, for large numbers of counts, the error on the number of counts in a time interval is the square-root of the number you got. For small counts, you have to go to the equations.
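As a sketch of what "going to the equations" can look like for small counts, the snippet below (assuming scipy is available) compares the rough sqrt(N) error bar with one standard exact construction of a Poisson confidence interval (the Garwood/chi-squared form):

```python
# A minimal sketch comparing sqrt(N) with an exact Poisson confidence interval.
from math import sqrt
from scipy.stats import chi2

def poisson_interval(n, conf=0.95):
    """Exact (Garwood) two-sided confidence interval for a Poisson mean, given n counts."""
    alpha = 1.0 - conf
    lower = 0.0 if n == 0 else 0.5 * chi2.ppf(alpha / 2, 2 * n)
    upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (n + 1))
    return lower, upper

for n in (3, 10, 100):
    lo, hi = poisson_interval(n)
    print(f"N={n}: sqrt(N) = {sqrt(n):.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```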

As for the rest: google for "hypothesis testing".

Loosely, the hypothesis under test (the null hypothesis) is that the physical model which gave rise to the prediction "works".
Rejecting that hypothesis is the same as rejecting the model.

If the model predicts a result that turns out to be >2sd from the experimental mean, then you can reject the model with 95% confidence... if >3sd you reject the model with 99% confidence. The test says nothing about how confident you can be in the model if you don't reject it - since you don't know how many other models these results could support and, like you said, you risk erroneously accepting unrelated results as supporting the model.
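Purely as an illustration of that rejection logic, with hypothetical numbers and a normal approximation to the count statistics:

```python
# A minimal sketch: how many standard deviations the prediction sits from the
# experimental mean, and the corresponding two-sided p-value (normal approximation).
from math import erfc, sqrt

predicted = 52.0      # model prediction (hypothetical)
sample_mean = 47.0    # experimental mean (hypothetical)
sample_sd = 2.0       # standard deviation of the experimental results (hypothetical)

z = abs(predicted - sample_mean) / sample_sd
p_value = erfc(z / sqrt(2))   # two-sided tail probability for a normal distribution

print(f"prediction is {z:.1f} sd from the mean, p = {p_value:.3f}")
if z > 3:
    print("reject the model at roughly the 99% level")
elif z > 2:
    print("reject the model at roughly the 95% level")
else:
    print("no grounds to reject the model from this test alone")
```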

Thus a single experiment cannot confirm a theory - only reject it.

Would doing that help in calculating the systematic uncertainty of the final results?
Doing what? A control run?

Control runs are essential to working out the systematic uncertainty in final results.
Always do a control run. Your measurements are invalid without one.

If you don't compare your yardstick with a known reliable yard, why should anyone believe any measurements you make with it? Especially considering the stakes if your measurements should come out different from those expected from otherwise established models?
 

Related to Poisson Statistics + Photon Detections

1. What are Poisson statistics and how are they related to photon detections?

Poisson statistics is a mathematical concept that describes the behavior of a random process that has a constant average rate but unpredictable outcomes. This concept is often used to analyze the distribution of photon detections, as photon detections can be considered a random process with a constant rate determined by the intensity of the light source.

2. How are Poisson statistics used in the study of photon detections?

Poisson statistics are used to analyze the probability of detecting a certain number of photons in a given time interval. By using this statistical framework, scientists can make predictions about the expected number of photon detections and compare it to the actual number of detections in experiments.
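For example, a short sketch of that calculation (assuming scipy is available; the mean rate is hypothetical):

```python
# Probability of registering exactly k photon detections in one counting interval.
from scipy.stats import poisson

mean_counts = 3.2   # expected detections per interval, set by the source intensity (made up)

for k in range(8):
    print(f"P({k} detections) = {poisson.pmf(k, mean_counts):.4f}")
```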

3. What is the difference between a Poisson distribution and a normal distribution?

A Poisson distribution is a discrete probability distribution that models the number of events occurring in a fixed interval of time or space, while a normal distribution is a continuous probability distribution that describes the distribution of a continuous variable. In the context of photon detections, a Poisson distribution is often used when analyzing discrete events (i.e. photon detections) while a normal distribution is used when analyzing continuous data (i.e. intensity of light).

4. Can Poisson statistics be applied to other types of data besides photon detections?

Yes, Poisson statistics can be applied to any type of data that follows a random process with a constant rate. This can include counting events in a specific time interval, such as the number of radioactive decays or the number of customers entering a store in a given hour.

5. What are the limitations of using Poisson statistics to analyze photon detections?

One limitation of Poisson statistics for analyzing photon detections is that it assumes a constant rate of photon detections, which may not always be the case in real-world experiments. Additionally, Poisson statistics may not be appropriate for analyzing data sets with a very small number of photon detections, as the assumptions of the Poisson distribution may not be met.
