Measurement problem and computer-like functions

In summary: in the case of measurements of physical systems, it is not at all clear that we can unambiguously identify the physical variables and the measuring devices.
  • #36
But how do you prove that the sum of the 4 rhos is smaller than 2, then?

$n$ just means the nth measurement; B and B' don't need to be measured simultaneously.
 
  • #37
jk22 said:
But how do you prove that the sum of the 4 rhos is smaller than 2, then?

$n$ just means the nth measurement; B and B' don't need to be measured simultaneously.

But the quantity [itex]A_n B_n + A'_n B_n + A_n B'_n - A'_n B'_n[/itex] does not appear in the derivation of the CHSH inequality.

As for what is proved:

A local, memoryless hidden-variables model for the EPR experiment would involve three probability distributions:
  1. [itex]P(\lambda)[/itex], the probability distribution for the hidden variable [itex]\lambda[/itex].
  2. [itex]P_A(\lambda, \alpha)[/itex], the probability of Alice getting result +1 given that the hidden variable has value [itex]\lambda[/itex] and Alice's device has setting [itex]\alpha[/itex]
  3. [itex]P_B(\lambda, \beta)[/itex], the probability of Bob getting result +1 given that the hidden variable has value [itex]\lambda[/itex] and Bob's device has setting [itex]\beta[/itex]
Such a model could be used to generate sample EPR results in the following way:
  • In round number [itex]n[/itex], Alice chooses a setting [itex]\alpha_n[/itex]
  • Bob chooses a setting [itex]\beta_n[/itex].
  • [itex]\lambda_n[/itex] is chosen randomly, according to the probability distribution [itex]P(\lambda)[/itex]
  • Then with probability [itex]P_A(\lambda_n, \alpha_n)[/itex] let [itex]A_n = +1[/itex]. With probability [itex]1 - P_A(\lambda_n, \alpha_n)[/itex] let [itex]A_n = -1[/itex]
  • With probability [itex]P_B(\lambda_n, \beta_n)[/itex] let [itex]B_n = +1[/itex]. With probability [itex]1 - P_B(\lambda_n, \beta_n)[/itex] let [itex]B_n = -1[/itex]
  • Then define [itex]\rho_n = A_n \cdot B_n[/itex]
After many, many rounds, you let [itex]\rho(\alpha, \beta)[/itex] be the average of [itex]\rho_n[/itex] over those rounds for which [itex]\alpha_n = \alpha[/itex] and [itex]\beta_n = \beta[/itex].

The CHSH proof shows that every such model must satisfy a certain inequality (in the limit of many, many rounds).
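As a sketch, the sampling procedure above can be run directly. The particular choices of [itex]P(\lambda)[/itex], [itex]P_A[/itex], and [itex]P_B[/itex] below are illustrative assumptions only (any local, memoryless model would do); the resulting S stays within the CHSH bound:

```python
import math
import random

# Monte-Carlo sketch of the sampling procedure described above.
# The concrete P(lambda), P_A, P_B are illustrative assumptions only.

def p_a(lam, alpha):
    # probability that Alice gets +1, given hidden variable lam and setting alpha
    return math.cos((lam - alpha) / 2) ** 2

def p_b(lam, beta):
    # probability that Bob gets +1
    return math.sin((lam - beta) / 2) ** 2

def correlation(alpha, beta, rounds=100_000, rng=random.Random(0)):
    total = 0
    for _ in range(rounds):
        lam = rng.uniform(0.0, 2.0 * math.pi)  # lambda drawn from P(lambda), here uniform
        a = +1 if rng.random() < p_a(lam, alpha) else -1
        b = +1 if rng.random() < p_b(lam, beta) else -1
        total += a * b  # rho_n = A_n * B_n
    return total / rounds

a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = (correlation(a, b) + correlation(ap, b)
     + correlation(a, bp) - correlation(ap, bp))
print(S)  # stays within the CHSH bound of 2, up to sampling noise
```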
 
  • #38
stevendaryl said:
The CHSH proof shows that every such model must satisfy a certain inequality (in the limit of many, many rounds).

Actually, the proof is about correlations computed using a continuous distribution:

[itex]\rho(\alpha, \beta) = \int_\lambda P(\lambda) d\lambda (P^+(\lambda, \alpha, \beta) - P^-(\lambda, \alpha, \beta))[/itex]

where [itex]P^+(\lambda, \alpha, \beta)[/itex] is the probability that Alice's and Bob's results have the same sign, and [itex]P^-(\lambda, \alpha, \beta)[/itex] is the probability that they have opposite signs. In terms of the probabilities given earlier:

[itex]P^+(\lambda, \alpha, \beta) = P_A(\lambda, \alpha) P_B(\lambda, \beta) + (1 - P_A(\lambda, \alpha))(1 - P_B(\lambda, \beta))[/itex] and
[itex]P^-(\lambda, \alpha, \beta) = P_A(\lambda, \alpha) (1 - P_B(\lambda, \beta)) + (1 - P_A(\lambda, \alpha)) P_B(\lambda, \beta)[/itex]

But tests of the CHSH inequality assume that these continuously defined correlations can be approximated by discretely computed correlations.
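The continuous formula can be evaluated numerically. The [itex]P_A[/itex] and [itex]P_B[/itex] below are a toy model of my choosing (uniform [itex]\lambda[/itex] on [itex][0, 2\pi)[/itex], [itex]P_A = \cos^2((\lambda-\alpha)/2)[/itex], [itex]P_B = \sin^2((\lambda-\beta)/2)[/itex]), chosen so the integral has a simple closed form:

```python
import math

# Numerically evaluate rho(alpha, beta) = integral over lambda of
# P(lambda) * (P^+ - P^-), using the same P^+/P^- decomposition as above.
# The toy P_A, P_B here are illustrative assumptions.

def rho(alpha, beta, steps=20_000):
    total = 0.0
    for k in range(steps):
        lam = 2.0 * math.pi * (k + 0.5) / steps  # midpoint rule over [0, 2*pi)
        pa = math.cos((lam - alpha) / 2) ** 2
        pb = math.sin((lam - beta) / 2) ** 2
        p_same = pa * pb + (1 - pa) * (1 - pb)   # P^+(lambda, alpha, beta)
        p_diff = pa * (1 - pb) + (1 - pa) * pb   # P^-(lambda, alpha, beta)
        total += (p_same - p_diff) / steps        # uniform P(lambda) d lambda
    return total

# For this particular model, rho(alpha, beta) = -cos(alpha - beta) / 2:
print(rho(0.0, math.pi / 3))  # approximately -0.25
```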
 
  • #39
stevendaryl said:
Actually, the proof is about correlations computed using a continuous distribution:

[itex]\rho(\alpha, \beta) = \int_\lambda P(\lambda) d\lambda (P^+(\lambda, \alpha, \beta) - P^-(\lambda, \alpha, \beta))[/itex]

where [itex]P^+(\lambda, \alpha, \beta)[/itex] is the probability that Alice's and Bob's results have the same sign, and [itex]P^-(\lambda, \alpha, \beta)[/itex] is the probability that they have opposite signs. In terms of the probabilities given earlier:

[itex]P^+(\lambda, \alpha, \beta) = P_A(\lambda, \alpha) P_B(\lambda, \beta) + (1 - P_A(\lambda, \alpha))(1 - P_B(\lambda, \beta))[/itex] and
[itex]P^-(\lambda, \alpha, \beta) = P_A(\lambda, \alpha) (1 - P_B(\lambda, \beta)) + (1 - P_A(\lambda, \alpha)) P_B(\lambda, \beta)[/itex]

But tests of the CHSH inequality assume that these continuously defined correlations can be approximated by discretely computed correlations.
Actually, (and IMHO), the proof uses elementary logic and elementary probability theory, but you can hide it behind calculus if you like.
http://www.slideshare.net/gill1109/epidemiology-meets-quantum-statistics-causality-and-bells-theorem
http://arxiv.org/abs/1207.5103
Statistics, Causality and Bell's Theorem
Statistical Science 2014, Vol. 29, No. 4, 512-528
 
  • #40
stevendaryl said:
... it seems to me that some kind of local world-splitting could give a local, non-realistic toy model to explain EPR-type correlations...
But I think if a world-splitting is caused by an event A, and it has different variable values in the two branches for objects/events non-local to A, it still should not count as a local model. So even world-splitting doesn't fit the bill.
EDIT: Plus I feel any world-splitting model could be Occam-ified by looking at just a single branch. But let's not go into religious wars right now ;)
 
  • #41
stevendaryl said:
they only reflect our knowledge, or lack of knowledge about the world.
Maybe you're too quick to put an equal sign between "having knowledge" and "not having knowledge". I mean think about it...when you measure Alice, you instantly know about Bob. Maybe the very thing that we've been looking for has been literally staring at us in the mirror all along.
 
  • #42
DirkMan said:
Maybe you're too quick to put an equal sign between "having knowledge" and "not having knowledge". I mean think about it...when you measure Alice, you instantly know about Bob. Maybe the very thing that we've been looking for has been literally staring at us in the mirror all along.

I'm not sure I understand what you're saying. What are you saying has been staring us in the face?
 
  • #43
georgir said:
But I think if a world-splitting is caused by an event A and it has different variable values in the two branches for objects/events non-local to A it still should not count as a local model.

I used the wrong words. It's not the entire world that splits, it's just Alice who splits when she does a measurement, and Bob who splits when he does a measurement. So it's only local parts of the world that split locally.

georgir said:
EDIT: Plus I feel any world-splitting model could be Occam-ified by looking at just a single branch.

But to go from multiple branches to a single branch is a nonlocal change. So it's better from the point of view of occam, but not from the point of view of locality.
 
  • #44
stevendaryl said:
I used the wrong words. It's not the entire world that splits, it's just Alice who splits when she does a measurement, and Bob who splits when he does a measurement. So it's only local parts of the world that split locally.
But if Alice splits into two branches where Bob's particle or detector have different variables (especially ones somehow related to her detector readings) I'd still classify this as non-local. If she splits into two branches where the only difference is her own readings, then that is equivalent to a single-universe local random variable model.

EDIT: To extend on that, I can see EPR explanations to the tune of "the universe splits into two branches, one where BOTH particles were H, one where BOTH particles were V" but if the particles are already separated (split happening on measurement), this split is practically a non-local interaction.
 
  • #45
stevendaryl said:
[itex]\rho(a,b) + \rho(a, b') + \rho(a',b) - \rho(a', b') \leq 2[/itex]

[itex]\rho(\alpha, \beta)[/itex] is computed this way: (for the spin-1/2 case)
  • Generate many twin pairs, and have Alice and Bob measure the spins of their respective particles at a variety of detector orientations.
  • Let [itex]\alpha_n[/itex] be Alice's setting on trial number [itex]n[/itex]
  • Let [itex]\beta_n[/itex] be Bob's setting on trial number [itex]n[/itex]
  • Define [itex]A_n[/itex] to be +1 if Alice measures spin-up on trial number [itex]n[/itex], and -1 if Alice measures spin-down.
  • Define [itex]B_n[/itex] to be [itex]\pm 1[/itex] depending on Bob's result for trial [itex]n[/itex]
Define [itex]\rho(\alpha, \beta)[/itex] to be the average of [itex]A_n \cdot B_n[/itex], averaged over those trials for which [itex]\alpha_n = \alpha[/itex] and [itex]\beta_n= \beta[/itex]. Then if [itex]a[/itex] and [itex]a'[/itex] are two different values for [itex]\alpha[/itex], and [itex]b[/itex] and [itex]b'[/itex] are two different values for [itex]\beta[/itex], then the CHSH inequality says that [itex]\rho(a,b) + \rho(a',b) + \rho(a,b') - \rho(a',b') \leq 2[/itex].
I just looked at a trivial average where the $A_n$ are limited to one value. The continuous model is $$A(a,\lambda_1)B(b,\lambda_1)\rho(\lambda_1)+A(a,\lambda_2)B(b',\lambda_2)\rho(\lambda_2)+A(a',\lambda_3)B(b,\lambda_3)\rho(\lambda_3)-A(a',\lambda_4)B(b',\lambda_4)\rho(\lambda_4)$$

I looked at the case where we take away the integrals because there is only one measurement value.
 
  • #46
jk22 said:
I just looked at a trivial average where the $A_n$ are limited to one value. The continuous model is $$A(a,\lambda_1)B(b,\lambda_1)\rho(\lambda_1)+A(a,\lambda_2)B(b',\lambda_2)\rho(\lambda_2)+A(a',\lambda_3)B(b,\lambda_3)\rho(\lambda_3)-A(a',\lambda_4)B(b',\lambda_4)\rho(\lambda_4)$$

I looked at the case where we take away the integrals because there is only one measurement value.
Rho usually stands for a probability density. For each of the four correlations, Bell assumes that lambda is drawn at random from the same probability distribution.
 
  • #47
gill1109 said:
Rho usually stands for a probability density. For each of the four correlations, Bell assumes that lambda is drawn at random from the same probability distribution.

Actually, in the Wikipedia article about the CHSH inequality, [itex]\rho(\alpha, \beta)[/itex] is the correlation. If [itex]A(\lambda, \alpha)[/itex] and [itex]B(\lambda, \beta)[/itex] both take on values +/- 1, then [itex]\rho(\alpha, \beta)[/itex] is the average over [itex]\lambda[/itex] of [itex]A(\lambda, \alpha) B(\lambda, \beta)[/itex].
 
  • #48
stevendaryl said:
Actually, in the Wikipedia article about the CHSH inequality, [itex]\rho(\alpha, \beta)[/itex] is the correlation. If [itex]A(\lambda, \alpha)[/itex] and [itex]B(\lambda, \beta)[/itex] both take on values +/- 1, then [itex]\rho(\alpha, \beta)[/itex] is the average over [itex]\lambda[/itex] of [itex]A(\lambda, \alpha) B(\lambda, \beta)[/itex].
Yes, you are right; there are a lot of alternative notations, which can be very confusing. A lot of people use E(a, b) for the correlation, and it's the integral over lambda of A(lambda, a) B(lambda, b) rho(lambda) d lambda, where rho(lambda) is the probability density of the hidden variable lambda.

The point being that when you change the settings, the probability density of lambda does not change. This is the freedom assumption (no conspiracy, fair sampling, ...). Bell's theorem (a theorem belonging to metaphysics) is not "QM is incompatible with LHV"; it's "QM is incompatible with LHV + no-conspiracy". Bell himself referred to his inequality (and later the CHSH improvement) as "my theorem". I would say that Bell's inequality is a rather elementary mathematical theorem about the limits of distributed (classical) computing. You can't build a network of computers that wins the "Bell game".
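One common way to make the "Bell game" remark concrete (a standard formulation, not something spelled out in this thread): Alice and Bob each receive a random input bit and must each output a bit without communicating, winning when a XOR b equals x AND y. An exhaustive search over all deterministic strategies shows no classical network of computers wins more than 3/4 of the time:

```python
from itertools import product

# Exhaustive search over deterministic classical strategies for the CHSH
# ("Bell") game: Alice sees bit x, Bob sees bit y, they win when
# (a XOR b) == (x AND y). No communication, so each player's output
# depends only on their own input.

best = 0.0
strategies = list(product([0, 1], repeat=2))  # (output for input 0, output for input 1)
for alice in strategies:
    for bob in strategies:
        wins = sum(
            1
            for x, y in product([0, 1], repeat=2)
            if (alice[x] ^ bob[y]) == (x & y)
        )
        best = max(best, wins / 4)

print(best)  # 0.75
```

Quantum strategies sharing entanglement can win with probability cos²(π/8) ≈ 0.85, which is the same gap the CHSH inequality measures.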
 
  • #49
jk22 said:
I just looked at a trivial average when the A_n are limited to one value.The continuous model is $$A(a,\lambda_1)B(b,\lambda_1)\rho(\lambda_1)+A(a,\lambda_2)B(b',\lambda_2)\rho(\lambda_2)+A(a',\lambda_3)B(b,\lambda_3)\rho(\lambda_3)-A(a',\lambda_4)B(b',\lambda_4)\rho(\lambda_4)$$

I looked at the case where we take away the integrals because there is only one measurement value.

What you wrote doesn't make sense to me. If you only use a single "round" of the experiment, then you can't have [itex]\lambda_1[/itex] and [itex]\lambda_2[/itex], you just have a single value of [itex]\lambda[/itex]. Also, what does "[itex]\rho[/itex]" mean in your formula?

I think you're getting confused about this. There are two different correlations computed: (1) the actual measured correlations, and (2) the predicted correlations.

The way that the actual measured correlations are computed is this:
  1. For every round (labeled [itex]n[/itex]), there are corresponding values for [itex]\alpha_n[/itex] and [itex]\beta_n[/itex], the settings of the two detectors, and there are corresponding values for the measured results, [itex]A_n[/itex] and [itex]B_n[/itex]
  2. To compute [itex]\rho(\alpha, \beta)[/itex], you average [itex]A_n \cdot B_n[/itex] over all those rounds [itex]n[/itex] such that [itex]\alpha_n = \alpha[/itex] and [itex]\beta_n = \beta[/itex]
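The averaging rule above can be sketched in a few lines; the trial records below are made-up illustrative data, not measurements:

```python
# Sketch of the averaging rule above: rho(alpha, beta) is the mean of
# A_n * B_n over the trials whose settings were (alpha, beta).
# The trial records below are made-up illustrative data.

trials = [
    # (alpha_n, beta_n, A_n, B_n)
    ("a", "b", +1, -1),
    ("a", "b", -1, +1),
    ("a", "b", +1, +1),
    ("a", "b'", +1, -1),
    ("a'", "b", -1, -1),
    ("a'", "b'", +1, -1),
]

def rho(alpha, beta):
    products = [A * B for (al, be, A, B) in trials if al == alpha and be == beta]
    return sum(products) / len(products)

print(rho("a", "b"))  # (-1 - 1 + 1) / 3 = -1/3
```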
So there is no [itex]\lambda[/itex] in what is actually measured. [itex]\lambda[/itex] is only involved in the hypothetical model for explaining the results. The local hidden variables explanation of the results are:
  • For every round [itex]n[/itex] there is a corresponding value of some hidden variable, [itex]\lambda_n[/itex]
  • [itex]A_n[/itex] is a (deterministic) function of [itex]\alpha_n[/itex] and [itex]\lambda_n[/itex]
  • [itex]B_n[/itex] is a (deterministic) function of [itex]\beta_n[/itex] and [itex]\lambda_n[/itex]
What you suggested is that rather than having [itex]A_n[/itex] be a deterministic function, it could be nondeterministic, using say a random number generator. That actually doesn't make any difference. Intuitively, you can think that a nondeterministic function can be turned into a deterministic function of a random number [itex]r[/itex]. Then [itex]r_n[/itex] (the value of [itex]r[/itex] on round [itex]n[/itex]) can be incorporated into [itex]\lambda_n[/itex]. It's just an extra hidden variable.
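A minimal sketch of that last remark, with an illustrative (assumed) probability function `p_plus`: the nondeterministic outcome becomes a deterministic function once the random number r is folded into the hidden variable:

```python
import random

# A nondeterministic outcome function can be rewritten as a deterministic
# function of (lambda, r), where r is the random number folded into the
# hidden variable. p_plus is an illustrative toy probability, nothing more.

def p_plus(lam, alpha):
    return (lam + alpha) % 1.0  # toy probability of getting +1, in [0, 1)

def a_nondet(lam, alpha, rng):
    # nondeterministic: draws its own random number
    return +1 if rng.random() < p_plus(lam, alpha) else -1

def a_det(lam_extended, alpha):
    # deterministic: r is now part of the (extended) hidden variable
    lam, r = lam_extended
    return +1 if r < p_plus(lam, alpha) else -1

rng = random.Random(1)
lam, alpha = 0.3, 0.25
r = rng.random()
# Fed the same r, the two formulations agree outcome-by-outcome:
print(a_det((lam, r), alpha) == a_nondet(lam, alpha, random.Random(1)))  # True
```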
 
  • #50
Rho is the probability density of the hidden variable; in fact, if I consider a single run, it is simply 1.

The fact that we have only one lambda comes from a renaming of the integration variable. If we don't have the integration, then allowing for four different hidden variables is maybe reasonable.
 
  • #51
jk22 said:
Rho is the probability density of the hidden variable; in fact, if I consider a single run, it is simply 1.

The fact that we have only one lambda comes from a renaming of the integration variable. If we don't have the integration, then allowing for four different hidden variables is maybe reasonable.

No, it's not reasonable. If you only have one twin-pair, then there is only one value for [itex]\lambda[/itex]. The expression you wrote doesn't have much connection with the CHSH inequality.

The CHSH inequality involves 2 settings for one device, [itex]a[/itex] and [itex]a'[/itex], and two settings for the other device, [itex]b[/itex] and [itex]b'[/itex]. Since a single "round" of the experiment only has one setting for each device, it takes at least 4 rounds to get information about all the combinations:

[itex]a, b[/itex]
[itex]a', b[/itex]
[itex]a, b'[/itex]
[itex]a', b'[/itex]
 
  • #52
If you allow only one lambda, then the pairs are not independent, and I don't see the reason for that. Aren't the pairs supposed to be independent?

You can write it with only one lambda if you perform a change of variables $$\lambda_i=f_i(\lambda)$$; we cannot simply rename.

The point is to explain why QM allows this to be bigger than two, since the experiments show a violation, not why it is smaller, since it is not.
 
  • #53
jk22 said:
If you allow only one lambda, then the pairs are not independent, and I don't see the reason for that. Aren't the pairs supposed to be independent?

There is one value of [itex]\lambda[/itex] for each twin-pair that is produced.

I'm confused as to what exactly you are disputing, if anything. Is it:
  1. The definition of how the correlations [itex]\rho(\alpha, \beta)[/itex] are computed?
  2. The proof that a local hidden-variables model predicts (in the deterministic case) that [itex]\rho[/itex] would satisfy the CHSH inequality?
  3. The proof that introducing randomness makes no difference to that prediction?
  4. The proof that QM violates the inequality?
 
  • #54
stevendaryl said:
There is one value of [itex]\lambda[/itex] for each twin-pair that is produced.

I'm confused as to what exactly you are disputing, if anything. Is it:
  1. The definition of how the correlations [itex]\rho(\alpha, \beta)[/itex] are computed?
  2. The proof that a local hidden-variables model predicts (in the deterministic case) that [itex]\rho[/itex] would satisfy the CHSH inequality?
  3. The proof that introducing randomness makes no difference to that prediction?
  4. The proof that QM violates the inequality?

As for #3, suppose that Alice's outcome [itex]A(\lambda, \alpha)[/itex] is nondeterministic. (The notation here is a little weird, because writing [itex]A(\lambda, \alpha)[/itex] usually implies that [itex]A[/itex] is a deterministic function of its arguments. I hope that doesn't cause confusion.) Then let [itex]X(\lambda,\alpha)[/itex] be the probability that [itex]A(\lambda, \alpha) = +1[/itex] and so the probability that it is -1 is given by [itex]1-X(\lambda,\alpha)[/itex]. Similarly, let [itex]Y(\lambda,\beta)[/itex] be the probability that Bob's outcome [itex]B(\lambda, \beta) = +1[/itex]. Then the probability that both Alice and Bob will get +1 is given by:

[itex]P_{both}(\lambda, \alpha, \beta) = X(\lambda, \alpha) \cdot Y(\lambda, \beta)[/itex]

But in the EPR experiment, if [itex]\alpha = \beta[/itex], then Alice and Bob never get the same result (in the anti-correlated version of EPR). So this implies

[itex]P_{both}(\lambda, \alpha, \alpha) = X(\lambda, \alpha) \cdot Y(\lambda, \alpha) = 0[/itex]

So either [itex]X(\lambda, \alpha) = 0[/itex] or [itex]Y(\lambda, \alpha) = 0[/itex]

Similarly, the probability of both getting -1 is given by:

[itex]P_{neither}(\lambda, \alpha, \alpha) = (1 - X(\lambda, \alpha)) \cdot (1 - Y(\lambda, \alpha))[/itex]

Since this never happens, the probability must be zero. So either [itex]X(\lambda, \alpha) = 1[/itex] or [itex]Y(\lambda, \alpha) = 1[/itex].

So for every value of [itex]\lambda[/itex] and [itex]\alpha[/itex], [itex]A(\lambda, \alpha)[/itex] either has probability 0 of being +1, or it has probability 1 of being +1. So its value must be a deterministic function of [itex]\lambda[/itex] and [itex]\alpha[/itex]. Similarly for [itex]B(\lambda, \beta)[/itex]. So the perfect anti-correlations of EPR imply that there is no room for randomness.
 
  • #55
stevendaryl said:
As for #3, suppose that Alice's outcome [itex]A(\lambda, \alpha)[/itex] is nondeterministic. (The notation here is a little weird, because writing [itex]A(\lambda, \alpha)[/itex] usually implies that [itex]A[/itex] is a deterministic function of its arguments. I hope that doesn't cause confusion.) Then let [itex]X(\lambda,\alpha)[/itex] be the probability that [itex]A(\lambda, \alpha) = +1[/itex] and so the probability that it is -1 is given by [itex]1-X(\lambda,\alpha)[/itex]. Similarly, let [itex]Y(\lambda,\beta)[/itex] be the probability that Bob's outcome [itex]B(\lambda, \beta) = +1[/itex]. Then the probability that both Alice and Bob will get +1 is given by:

[itex]P_{both}(\lambda, \alpha, \beta) = X(\lambda, \alpha) \cdot Y(\lambda, \beta)[/itex]

But in the EPR experiment, if [itex]\alpha = \beta[/itex], then Alice and Bob never get the same result (in the anti-correlated version of EPR). So this implies

[itex]P_{both}(\lambda, \alpha, \alpha) = X(\lambda, \alpha) \cdot Y(\lambda, \alpha) = 0[/itex]

So either [itex]X(\lambda, \alpha) = 0[/itex] or [itex]Y(\lambda, \alpha) = 0[/itex]

Similarly, the probability of both getting -1 is given by:

[itex]P_{neither}(\lambda, \alpha, \alpha) = (1 - X(\lambda, \alpha)) \cdot (1 - Y(\lambda, \alpha))[/itex]

Since this never happens, the probability must be zero. So either [itex]X(\lambda, \alpha) = 1[/itex] or [itex]Y(\lambda, \alpha) = 1[/itex].

So for every value of [itex]\lambda[/itex] and [itex]\alpha[/itex], [itex]A(\lambda, \alpha)[/itex] either has probability 0 of being +1, or it has probability 1 of being +1. So its value must be a deterministic function of [itex]\lambda[/itex] and [itex]\alpha[/itex]. Similarly for [itex]B(\lambda, \beta)[/itex]. So the perfect anti-correlations of EPR imply that there is no room for randomness.
This is fine in theory. The problem is that in experiment we do not see *perfect* anti-correlation. And experiment can't prove that we have *perfect* anti-correlation. It can only give statistical support to the hypothesis that we have close to perfect anti-correlation.

Hence the need to come up with an argument which allows for imperfection, allows for a bit of noise - lends itself to experimental verification ... CHSH.
 
  • #56
Indeed, for randomness, if we suppose the perfect cases are relevant despite being of measure zero.

I think I would add an extension of point 4: that the violation of CHSH implies nonlocality.

If one $$\lambda$$ is given to each pair, then violation of CHSH does not imply nonlocality. Could we find four lambdas and the model $$A(a,\lambda)=\mathrm{sgn}(\vec{a}\cdot\vec{\lambda})$$ to obtain the value 4 for a single trial? Maybe I did it wrong.
 
  • #57
jk22 said:
Indeed, for randomness.
I think I would add an extension of point 4: that the violation of CHSH implies nonlocality.

If one $$\lambda$$ is given to each pair, then violation of CHSH does not imply nonlocality. Could we find four lambdas and the model $$A(a,\lambda)=\mathrm{sgn}(\vec{a}\cdot\vec{\lambda})$$ to obtain the value 4 for a single trial? Maybe I did it wrong.
We do lots of trials for each of the four setting pairs. Four sub-experiments, you could say, one for each of the four correlations in the CHSH quantity S. So if you believe in local hidden variables, each of the four correlations is an average of values A(a, lambda)B(b, lambda) based on a completely different sample of hidden variables lambda. But we assume that those four samples are all random samples from the same probability distribution. This is called the freedom assumption (no conspiracy, fair sampling, ...).

People tend to miss this step in the argument, or to misunderstand it. It's the usual reason for people to argue that Bell was wrong - they aren't aware of a statistical assumption and they aren't aware of it being needed to complete the argument. (Physicists tend to have poor training in probability and statistics but Bell's statistical intuition was very very good indeed.) Bell did mention this step explicitly (in his 1981 paper "Bertlmann's socks") but his critics tend to overlook it.
 
  • #58
Is this probability distribution supposed to be uniform? Otherwise, could it be seen as a kind of conspiracy?

In the proof I saw, they take the sum of a single trial of each correlation, but I suppose the problem is that the variables can be different: https://en.m.wikipedia.org/wiki/Bell's_theorem under "Derivation of the CHSH inequality".

It is written B+B' and B-B', but I think they supposed all four depend on the same lambda. Aren't there four different lambdas, since each term comes from a different pair?
 
  • #59
jk22 said:
Indeed, for randomness, if we suppose the perfect cases are relevant despite being of measure zero.

I think I would add an extension of point 4: that the violation of CHSH implies nonlocality.

If one $$\lambda$$ is given to each pair, then violation of CHSH does not imply nonlocality. Could we find four lambdas and the model $$A(a,\lambda)=\mathrm{sgn}(\vec{a}\cdot\vec{\lambda})$$ to obtain the value 4 for a single trial? Maybe I did it wrong.

The quantity of interest is [itex]A(a, \lambda) B(b, \lambda) + A(a', \lambda) B(b, \lambda) + A(a, \lambda) B(b', \lambda) - A(a', \lambda) B(b', \lambda)[/itex] (averaged over [itex]\lambda[/itex]). You can rearrange this into

[itex]A(a, \lambda) (B(b, \lambda) + B(b', \lambda)) + A(a', \lambda) (B(b, \lambda) - B(b', \lambda))[/itex]

Either [itex]B(b,\lambda)[/itex] has the same sign as [itex]B(b', \lambda)[/itex], or they have opposite signs. If they have the same sign, then the second term ([itex]A(a', \lambda) (B(b, \lambda) - B(b', \lambda))[/itex]) is zero. If they have opposite signs, then the first term, ([itex]A(a, \lambda) (B(b, \lambda) + B(b', \lambda))[/itex]) is zero. So there is no way to get that sum to be greater than 2.
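The case analysis above can also be confirmed by brute force: enumerating all 16 sign assignments for the four outcomes shows the combination only ever takes the values ±2.

```python
from itertools import product

# Exhaustive check of the rearrangement argument above: for fixed lambda,
# the four outcomes are just signs in {+1, -1}, and the CHSH combination
# A*B + A'*B + A*B' - A'*B' never exceeds 2 in magnitude.

values = []
for A, Ap, B, Bp in product([+1, -1], repeat=4):
    s = A * B + Ap * B + A * Bp - Ap * Bp
    values.append(s)

print(max(values), min(values))  # 2 -2
```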
 
  • #60
jk22 said:
It is written B+B' and B-B', but I think they supposed all four depend on the same lambda. Aren't there four different lambdas, since each term comes from a different pair?

The idea is that for every pair, the quantity [itex]A(a,\lambda) B(b,\lambda) + A(a',\lambda) B(b,\lambda) + A(a,\lambda) B(b',\lambda) - A(a',\lambda) B(b',\lambda)[/itex] can be at most 2. Now, we don't measure all 4 terms for each pair; we can only measure one term. However, if we average that quantity over lambda, we get:

[itex]\langle A(a) B(b) \rangle + \langle A(a) B(b') \rangle + \langle A(a') B(b) \rangle - \langle A(a') B(b') \rangle \leq 2[/itex]

where [itex]\langle ... \rangle[/itex] means average over [itex]\lambda[/itex]

So even though no single twin-pair can give us information about all four terms, we can experimentally determine the 4 separate values:
  1. [itex]\langle A(a) B(b) \rangle[/itex]
  2. [itex]\langle A(a) B(b') \rangle[/itex]
  3. [itex]\langle A(a') B(b) \rangle[/itex]
  4. [itex]\langle A(a') B(b') \rangle[/itex]
Then we can check whether these four measured quantities violate the CHSH inequality.
 
  • #61
Nevertheless, I computed the probabilities with the hidden variable and I got p(-4)=(3/4)^4 and so on.

They differ from QM and give the average S=2.

The problem I see is numerical and experimental: there is always a finite number of trials, and the statistics can vary.

If -4 arises as the sum of a single trial, then could we imagine selecting a sample where we get a violation?
 
  • #62
jk22 said:
Nevertheless, I computed the probabilities with the hidden variable and I got p(-4)=(3/4)^4 and so on.

They differ from QM and give the average S=2.

The problem I see is numerical and experimental: there is always a finite number of trials, and the statistics can vary.

If -4 arises as the sum of a single trial, then could we imagine selecting a sample where we get a violation?

Yes, I think that a local hidden-variables model can give a violation for a small sample size. The assumption is that the average of [itex]A(\alpha) B(\beta)[/itex] over the sample is approximately equal to [itex]\int_\lambda P(\lambda) d\lambda \, A(\alpha, \lambda) B(\beta, \lambda)[/itex]. If you have a violation of CHSH, that approximate equality can't hold.
 
  • #63
stevendaryl said:
Yes, I think that a local hidden-variables model can give a violation for a small sample size. The assumption is that the average of [itex]A(\alpha) B(\beta)[/itex] over the sample is approximately equal to [itex]\int_\lambda P(\lambda) d\lambda \, A(\alpha, \lambda) B(\beta, \lambda)[/itex]. If you have a violation of CHSH, that approximate equality can't hold.
In fact, if local hidden variables are true and you do the experiment, you will violate the CHSH bound with probability about 50%. Nowadays we have exact finite-N probability bounds: assuming LHV, the chance of violating the CHSH bound S <= 2 by more than epsilon in an experiment with N trials is less than something like A exp(-B N eps^2).

See e.g. http://arxiv.org/abs/1207.5103 Statistical Science 2014, Vol. 29, No. 4, 512-528, Theorem 1 (assuming no memory). Not the best result at all, but as simple as possible and with a relatively simple proof (elementary discrete probability; at least, elementary for mathematicians: a first-year undergraduate course is enough). Here A = 8 and B = 1/256.
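Plugging in the quoted constants (A = 8, B = 1/256), the bound is easy to evaluate. The choice of eps = 2√2 − 2 (the size of the maximal quantum violation) is mine, for illustration:

```python
import math

# Evaluate the finite-N tail bound quoted above: under LHV with no memory,
# P(violating S <= 2 by more than eps in N trials) <= A * exp(-B * N * eps^2),
# with the constants A = 8 and B = 1/256 stated in the post.

def tail_bound(n, eps, a_const=8.0, b_const=1.0 / 256.0):
    return a_const * math.exp(-b_const * n * eps ** 2)

# Illustrative choice: eps = 2*sqrt(2) - 2 ~ 0.83, the maximal quantum violation.
eps = 2 * math.sqrt(2) - 2
for n in (10_000, 100_000, 1_000_000):
    print(n, tail_bound(n, eps))  # shrinks rapidly as N grows
```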
 
  • #64
Here are the results I obtained. In QM:

$$p(-4)=(1/2(1+1/\sqrt{2}))^4$$
$$p(-2)=4(1/2(1+1/\sqrt{2}))^3(1/2(1-1/\sqrt{2}))$$

And for LHV:

$$p(-4)=(3/4)^4$$
$$p(-2)=4(3/4)^3(1/4)$$

Hence in QM, -4 appears more frequently than -2, whereas for hidden variables it is the opposite.
 
  • #65
jk22 said:
Here are the results I obtained. In QM:

$$p(-4)=(1/2(1+1/\sqrt{2}))^4$$
$$p(-2)=4(1/2(1+1/\sqrt{2}))^3(1/2(1-1/\sqrt{2}))$$

And for LHV:

$$p(-4)=(3/4)^4$$
$$p(-2)=4(3/4)^3(1/4)$$

Hence in QM, -4 appears more frequently than -2, whereas for hidden variables it is the opposite.
I have no idea what these calculations are supposed to refer to.
 
  • #66
These should be the probabilities for the measurement results of AB-AB'+A'B+A'B' for the measurement angles 0, Pi/4, Pi/2, 3Pi/4 for A, B, A', B' respectively.

The LHV model considered was given in a previous post: it's the signum of the projection of the hidden vector onto the measurement direction.
 
  • #67
By the way, shouldn't the measurement operator for CHSH be $$A\otimes B\ominus A\otimes B'\oplus A'\otimes B\oplus A'\otimes B'$$

where $$\oplus$$ is the Kronecker sum?

I think that because in a CHSH experiment we sum the eigenvalues of the measurements.
 
  • #68
jk22 said:
By the way, shouldn't the measurement operator for CHSH be $$A\otimes B\ominus A\otimes B'\oplus A'\otimes B\oplus A'\otimes B'$$

where $$\oplus$$ is the Kronecker sum?

I think that because in a CHSH experiment we sum the eigenvalues of the measurements.
In an ideal CHSH experiment, we many times either simultaneously measure A on subsystem 1 and B on subsystem 2, or A on subsystem 1 and B' on subsystem 2, or A' on subsystem 1 and B on subsystem 2, or A' on subsystem 1 and B' on subsystem 2. Each time, the two subsystems have been yet again prepared in the same joint state.
 
  • #69
Indeed, I saw a paper that shows the whole quantity cannot be measured simultaneously: http://arxiv.org/abs/quant-ph/0206076

However, if we use a beam splitter instead of a fast-switching device, could we say the measurements were experimentally simultaneous?
 
  • #70
stevendaryl said:
The idea is that for every pair, the quantity [itex]A(a,\lambda) B(b,\lambda) + A(a',\lambda) B(b,\lambda) + A(a,\lambda) B(b',\lambda) - A(a',\lambda) B(b',\lambda)[/itex] has to be less than 2.

What I meant is that we have [itex]A(a,\lambda_1) B(b,\lambda_1) + A(a',\lambda_2) B(b,\lambda_2) + A(a,\lambda_3) B(b',\lambda_3) - A(a',\lambda_4) B(b',\lambda_4)[/itex], which only has to be less than or equal to 4, so that a violation is possible. We don't measure all 4 terms for each pair; we can only measure one term. However, if we average that quantity over the [itex]\lambda_i[/itex], we get:

[itex]\langle A(a) B(b) \rangle + \langle A(a) B(b') \rangle + \langle A(a') B(b) \rangle - \langle A(a') B(b') \rangle \leq 2[/itex]
 
