Questioning assumptions behind Bell's and related theorems

In summary, the conversation discusses the mathematical assumptions behind Bell's inequality and how some authors question them using a Bohrian-type argument called "the chameleon model". This approach challenges the idea of a single probability space and raises questions about the validity of Bell's argument, but it is ultimately limited by the lack of a consistent definition of reality: punting on the definition of reality makes it hard to say much else.
  • #1
bohm2
I have trouble understanding the mathematical arguments behind this view but I thought I would post it, in case anybody has any information/understanding/insights. The basic idea is that the mathematical assumptions on which the validity of Bell's inequality depends are that all the random variables are defined on a single probability space. These authors then go on to question this assumption using a Bohrian-type argument which they refer to as "the chameleon model". Note that this has nothing to do with questioning loopholes, etc., as they are suggesting that Bell's argument fails even before the issue of these loopholes arises. Also note that they are not basing their arguments on contextuality as per the Kochen-Specker theorem, as they question the assumptions behind this theorem as well. There are a number of papers/books taking this perspective:

So: if you want to keep the inequality (1) you MUST realize all the random variables in (3) in the same probability space!

ii) the physicists never use statements such as "realizability on a single probability space" but they hide this mathematical assumption in their notations, i.e. they use the same symbol to denote the results of different mutually incompatible experiments.
iii) without the assumption: "realizability on a single probability space" (or one of the equivalent hidden formulations introduced in the physical literature), one cannot prove Bell's inequality.
Locality and Bell's inequality
http://cds.cern.ch/record/445808/files/0007005.pdf
Recall the basic idea of the chameleon effect: the local dynamics influences the statistics and, since the factorization of the dynamics (3.6), i.e. ((1, M1), (2, M2)), is different from the factorization of the state (3.7), i.e. ((1, 2), (M1, M2)), the result of the local interaction is a global dependence of the final state on the whole measurement setting, i.e. (a, b)... Such a deformation is perfectly compatible with the assumption of a 100 percent (ideal) efficiency of the detectors.

In particular, it was pointed out that the proof of Bell's inequality is based on the implicit use of a single Kolmogorov probability space, see Accardi [7]–[9], Khrennikov [11]–[14], Hess and Philipp [17]. We can call such an assumption probabilistic non–contextuality. By probabilistic contextuality we understand dependence of probability on experimental settings... However, there exists a model in which probabilistic contextuality (i.e., dependence of probabilities on experimental settings) can be produced without losses of particles. Moreover, in that model probabilistic contextuality is not a consequence of quantum contextuality and hence the model is local.
Chameleon effect, the range of values hypothesis and reproducing the EPR-Bohm correlations
http://arxiv.org/pdf/quant-ph/0611259.pdf

In his opening address of the 2008 Växjö conference Foundations of Probability and Physics-5, Andrei Khrennikov took the position that violations of Bell inequalities [8] occur in Nature, but do not rule out local realism, due to lack of contextuality: the measurements needed to test Bell inequalities (BI) such as the BCHSH inequality cannot be performed simultaneously [9]. Therefore Kolmogorovian probability theory starts and ends with having different probability spaces, and Bell inequality violation (BIV) just proves that there cannot be a reduction to one common probability space. This finally implies that no conclusion can be drawn on local realism, since incompatible information cannot be used to draw any conclusion. As explained below, the different pieces of the CHSH inequality involve fundamentally different distribution functions of the hidden variables, which cannot be put together in one overall covering distribution of all hidden variables of the set of considered experiments.
Is the Contextuality Loophole Fatal for the Derivation of Bell Inequalities?
http://dare.uva.nl/document/358619

Non-Kolmogorovian Approach to the Context-Dependent Systems Breaking the Classical Probability Law
http://link.springer.com/article/10.1007/s10701-013-9725-5

For anyone who has some understanding of Probability theory, do these Non-Kolmogorovian approaches/axioms seem reasonable/make sense?
 
  • #2
bohm2 said:
I have trouble understanding the mathematical arguments behind this view but I thought I would post it, in case anybody has any information/understanding/insights. The basic idea is that the mathematical assumptions on which the validity of Bell's inequality depends are that all the random variables are defined on a single probability space. These authors then go on to question this assumption using a Bohrian-type argument which they refer to as "the chameleon model". ...

I always laugh when people create examples which do not map to Bell's Theorem, and then go on to disprove them. Bell's Theorem is:

"No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics."

Note that the ballot box and chameleon examples in the first paper are nothing like the quantum spin examples because spin can be any mixture of axes. When they connect their example to a suitable quantum prediction, I think they will have something more meaningful.

On the other hand, I mostly agree with the author through the point where he cites Bell's "vital" assumption: "the result B for particle 2 does not depend on the setting a, of the magnet for particle 1, nor A on b." This is essentially a nod to the EPR assumption that Alice's reality cannot be dependent on Bob's choice of measurement setting. That in turn is a restatement of the EPR assumption that elements of reality should not need to be simultaneously demonstrated to exist.

The rest of the paper mainly argues that "physicists ... use the same symbol to denote the results of different mutually incompatible experiments... (etc)". This argument has been around in numerous variations for some time, and has failed to gain traction. Primarily because it goes directly against the EPR assumption (prior paragraph) regarding simultaneous elements of reality. In other words: if you reject that EPR assumption (as Accardi essentially does after about 10 pages) then you don't get the Bell result. That is already generally accepted, hence nothing really new in this line of reasoning. To quote EPR:

"Indeed, one would not arrive at our conclusion if one insisted that two or more physical quantities can be regarded as simultaneous elements of reality only when they can be simultaneously measured or predicted . ... No reasonable definition of reality could be expected to permit this."

If you accept that at face value, you arrive at Bell's result. If you reject it, you arrive at Accardi's.
 
  • #3
bohm2 said:
I have trouble understanding the mathematical arguments behind this view but I thought I would post it, in case anybody has any information/understanding/insights. The basic idea is that the mathematical assumptions on which the validity of Bell's inequality depends are that all the random variables are defined on a single probability space.

Let me put the argument in a different way: The terms in Bell's inequality are functions A(λ), B(λ), C(λ). After some algebra on these functions, Bell obtains relations/inequalities which contain those functions. The above argument is essentially equivalent to the statement that any such relation between functions is meaningless unless the functions have the same domain, i.e., the set of all λ must be exactly the same for each function.

In simple terms, it may appear trivial that A(λ) - A(λ) = 0, but if the first term was obtained in a situation in which λ = {1,2,3} and the second term was obtained in a situation in which λ = {4,5,6}, then the expression does not make much sense and can be violated, because A(λ = {1,2,3}) is a different random variable from A(λ = {4,5,6}) (i.e., they are defined on two different probability spaces).

Looking at page 406 of Bell's original paper (http://www.drchinese.com/David/Bell_Compact.pdf), the algebra leading up to equation 15 makes this clear.

Bell starts with A(λ)B(λ) - A(λ)C(λ) and, using B(λ)² = 1, factors it as A(λ)B(λ)[1 - B(λ)C(λ)], which makes it clear that all three expectation values E(A,B), E(A,C) and E(B,C) are calculated from the three functions A(λ), B(λ), C(λ), simply recombined in pairs. This means any expression/relation which Bell obtained between those three expectation values, such as the one in equation 15, is meaningless unless those terms have the exact same domain for λ.

Every derivation of a Bell-like inequality including the CHSH has this "hidden" assumption at a crucial step. Just verify what is being factored out. See for example http://en.wikipedia.org/wiki/Bell's_theorem

If you think about what Bell was trying to prove, it becomes obvious that you can not claim to prove that A, B, C are/aren't simultaneously real for a given particle, if you measure them on different particles.
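
To make this concrete, here is a small numerical sketch (my own illustration, not taken from any of the cited papers), assuming deterministic ±1 functions A, B, C of a discrete hidden variable λ. When all three expectations are computed from one and the same sample of λ, the inequality |E(AB) - E(AC)| ≤ 1 - E(BC) holds term by term; when each pairwise expectation is instead taken over a different, setting-dependent distribution of λ (a different probability space per experiment), the combined inequality no longer follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# lambda in {0,...,7} encodes a triple of predetermined +/-1 values.
def A(lam): return np.where(lam & 1, 1, -1)
def B(lam): return np.where(lam & 2, 1, -1)
def C(lam): return np.where(lam & 4, 1, -1)

def E(f, g, lams):                      # empirical expectation of a product
    return np.mean(f(lams) * g(lams))

N = 100_000

# Case 1: a single probability space -- one sample of lambda feeds all terms.
lam = rng.integers(0, 8, N)
lhs = abs(E(A, B, lam) - E(A, C, lam))
rhs = 1 - E(B, C, lam)
print("single space:", round(lhs, 3), "<=", round(rhs, 3))   # always satisfied

# Case 2: each pair is sampled from a different distribution of lambda
# ("probabilistic contextuality": setting-dependent distributions).
support = np.arange(8)
lam_ab = rng.choice(support[A(support) == B(support)], N)    # forces E(AB) = +1
lam_ac = rng.choice(support[A(support) == -C(support)], N)   # forces E(AC) = -1
lam_bc = rng.choice(support[B(support) == C(support)], N)    # forces E(BC) = +1
lhs = abs(E(A, B, lam_ab) - E(A, C, lam_ac))
rhs = 1 - E(B, C, lam_bc)
print("three spaces:", round(lhs, 3), ">", round(rhs, 3))    # 2.0 > 0.0
```

Note that Case 2 does not by itself reproduce the quantum correlations; it only shows that the algebra leading to the inequality presupposes a common λ-sample for all three terms.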
 
  • #4
billschnieder said:
If you think about what Bell was trying to prove, it becomes obvious that you can not claim to prove that A, B, C are/aren't simultaneously real for a given particle, if you measure them on different particles.

Wrong as usual, Bill.

That they are individually real WAS the EPR argument, and that point was not in dispute per se. That was done using TWO different particles. They then ASSUME that those elements should be simultaneously real as well, as I quoted verbatim above. Bell takes their assumption and shows it to be impossible in conjunction with the other EPR assumptions (locality, correctness of QM).

So do we really need to go through this time-wasting process again? As always, you can expect to have your speculative personal opinions flagged. Please re-read the rules and comply.
 
  • #5
With respect to Accardi et al, and the assumptions behind Bell's they write:
The main result of this paper consists in having realized that the mathematical assumptions on which the validity of the inequality depends are only the following two ones:

(i) that the random variables take values in the interval [-1, 1] (originally Bell considered only the set {-1, 1} but shortly after he extended his result to the full interval)
(ii) that all the random variables are defined on a single probability space.
Note on the EPR-Chameleon Experiment
http://ics.org.ru/doc?pdf=855&dir=e

Anyway, I was under the impression that these models do rely on a subquantum theory and do make different predictions than QM. I know this is the case with Khrennikov.
 
  • #6
bohm2 said:
With respect to Accardi et al, and the assumptions behind Bell's they write:

Note on the EPR-Chameleon Experiment
http://ics.org.ru/doc?pdf=855&dir=e

Anyway, I was under the impression that these models do rely on a subquantum theory and do make different predictions than QM. I know this is the case with Khrennikov.

A relatively old paper (2003) to reference, don't you think, considering it claims to refute Bell? And there is no experiment actually presented despite the title. There is discussion of a computer simulation.

I really don't see how we can go too far down this path without running into rampant speculation about the subject. If you were discussing Bell, that would be one thing. You are really attempting to discuss the pros and cons of an anti-Bell argument as if this has equal footing with pro-Bell arguments. As you well know, this is not the place for such debate. It is well accepted that there are several assumptions going into the Bell result. If you reject any of those (locality, realism, accuracy of QM), you will be at a different spot - and essentially that is your right.

But it is flat out incorrect to say that your choice of assumptions must be accepted by everyone else. They won't be. The entire point of the EPR/Bell assumptions is that they are reasonable. That is why the Bell result is so widely accepted and why it is so important.
 
  • #7
DrChinese said:
A relatively old paper (2003) to reference, don't you think, considering it claims to refute Bell? And there is no experiment actually presented despite the title. There is discussion of a computer simulation...But it is flat out incorrect to say that your choice of assumptions must be accepted by everyone else. They won't be. The entire point of the EPR/Bell assumptions is that they are reasonable. That is why the Bell result is so widely accepted and why it is so important.
I don't think it's really refuting Bell. It's questioning whether Bell's applies to QM; that is, it's questioning whether the mathematical formalism of QM has some features of non-Kolmogorovian probability theory as is the case with some phenomena outside QM. And there are 2013 papers (like the one by Khrennikov) I linked above. One of the assumptions of Bell's is the acceptance of Kolmogorovian axiomatics. If one is willing to accept a non-Kolmogorovian probabilistic model, then one can have both locality and "realism". And again, I'm not competent to evaluate the mathematical arguments but what I wanted to know/understand is whether this is, in fact, even possible? As the author wrote in the 2013 paper:
To compare the theory of probability with theories in geometry, we may consider the Kolmogorov axiomatic approach as an analogue to Euclid's axiomatic system. Now we know that there are non-Euclidean geometries. I think that we may also begin to speak about non-Kolmogorov probabilistic models.
 
  • #8
bohm2 said:
I don't think it's really refuting Bell. It's questioning whether Bell's applies to QM...

Well sure it is refuting Bell. Bell, as summarized above:

"No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics."

And the paper of Accardi et al says:

... the claim that the experimental validity of the correlation is incompatible with a local realistic interpretation of quantum mechanics, is definitively unwarranted both for theoretical and for experimental reasons.

This is not really the place to dissect an old paper (or line of reasoning, regardless of how it is warmed over) that is diametrically opposite of accepted thinking, and always has been. You and Bill should take this off-line, I am sure he will have plenty of comments, as this is almost verbatim his reasoning as to why Bell is wrong.
 
  • #9
bohm2 said:
The basic idea is that the mathematical assumptions on which the validity of Bell's inequality depends are that all the random variables are defined on a single probability space.
If random variables are defined on different probability spaces then you have either an unfair source, unfair detection, or unfair something else. So it's already taken into consideration.
 
  • #10
More generally, Bell's inequality applies to a very broad class of possible alternatives to QM. Bell's inequality does not apply to every possible alternative to QM, because some input assumptions are needed.

One question (that several of these papers seem to be addressing) is this: Can an alternative to QM that does not obey Bell's inequality be "locally realistic"?

To me, this question is completely uninteresting. It's just semantics, how we choose to define the phrase "locally realistic". Exactly how this phrase is defined has nothing to do with physics.

A much more interesting question is whether an alternative to QM can be constructed that (1) agrees with all tests of QM to date, and (2) makes a prediction different from QM for some experiment yet to be performed.

Without this, there is really nothing in these papers to attract the attention of most physicists.
 
  • #11
This is an interesting thread, and if someone can explain to me why equation 4 in this link holds I would be eternally grateful. I just cannot see why the right-hand side is not squared. If the physics is encoded in the wavefunction and the probability is that multiplied by its complex conjugate, why would you expect correlations to go as cos rather than as cos squared? Many thanks.
http://www.physik.uni-bielefeld.de/~yorks/qm12/ghsz.pdf
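
Without speaking to the equation numbering in that particular paper, the standard answer to "why cos rather than cos squared" is this: for the spin-1/2 singlet state the joint outcome probabilities are indeed squared amplitudes, P(same) = sin²(θ/2) and P(different) = cos²(θ/2) for analyzers at relative angle θ, but the correlation is the expectation of the product of the ±1 outcomes, E = P(same) - P(different) = -cos θ. A minimal numerical check (my own sketch, assuming measurement directions in the x-z plane):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_op(theta):
    # Spin component along a direction at angle theta from z, in the x-z plane.
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>.
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

for a, b in [(0.0, 0.0), (0.0, np.pi / 3), (0.0, np.pi / 2)]:
    O = np.kron(spin_op(a), spin_op(b))     # joint observable for Alice (x) Bob
    E = np.real(psi.conj() @ O @ psi)       # quantum expectation value
    print(f"relative angle {b - a:.3f} rad: E = {E:+.3f}, -cos = {-np.cos(b - a):+.3f}")
```

The squares live in the individual outcome probabilities; they cancel down to a single cosine once you form the ±1 correlation.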
 
  • #12
zonde said:
If random variables are defined on different probability spaces then you have either an unfair source, unfair detection, or unfair something else. So it's already taken into consideration.
I do not follow. Taken into consideration in what way? I do not see how it is taken into account in the derivation of Bell's inequality, or in the proof of Bell's theorem. Do you mean in the interpretation of experimental results? The papers by Accardi are arguing that even with 100% detection you can still have a local realistic situation in which a joint probability distribution is not possible.

Such a deformation is perfectly compatible with the assumption of a 100 percent (ideal) efficiency of the detectors. In fact the efficiency is measured by the ratio of the number of detected particles over the number of particles which have interacted with the apparatus.
 
  • #13
Avodyne said:
More generally, Bell's inequality applies to a very broad class of possible alternatives to QM. Bell's inequality does not apply to every possible alternative to QM, because some input assumptions are needed.

Accardi's argument seems pretty straightforward and correct to me:

They reduce the assumptions required to obtain the inequalities down to only 2
(I) Random variables with values [-1, +1]
(II) Random variables from the same probability space
With those two assumptions only, they get the inequalities. Violation of the inequalities means one of those two assumptions is false, which is obviously the second one. It is not difficult to see why (II) is false in QM and in the experiments.

If as you say, Bell's inequality applies to a very broad class of possible alternatives to QM, then this class must be the one for which those two assumptions are true. The authors argue convincingly in my opinion that the second assumption is not a reasonable one. Specifically they discuss the class of systems for which the second assumption fails -- those that are adaptive, i.e. we measure an interaction between particle and apparatus (color of chameleon on leaf) and not pre-existing properties independent of observation (color of ball in the box). They even construct a counter-example along the same lines which reproduces the EPR correlations with 100% detection efficiency. They conclude that QM systems are of the color of chameleon on a leaf kind, which they term "Chameleon Reality".

This should be interesting to physicists interested in QM alternatives, or the interpretation of QM.
 
  • #14
A recent Sept. 2013 video presentation of Accardi's arguments can also be found here:

Urns and Chameleons: two metaphors for two different types of measurements
http://vimeo.com/76307891
 
  • #15
billschnieder said:
The authors argue convincingly in my opinion that the second assumption is not a reasonable one.
Doesn't matter if it's "reasonable" or not (whatever that might mean).

What matters is whether anyone can construct a theory that agrees with all experiments done to date, and either (1) makes a prediction that differs from QM that can be tested (e.g., Penrose's objective collapse), or (2) is equivalent to QM in terms of experimental predictions, but allows for new methods of calculation (e.g., Feynman's path-integral formulation of QM). Nothing less than either (1) or (2) would be interesting to the overwhelming majority of physicists.
 
  • #16
bohm2 said:
A recent Sept. 2013 video presentation of Accardi's arguments can also be found here:

Urns and Chameleons: two metaphors for two different types of measurements
http://vimeo.com/76307891

With all due respect, it looks like Prof. Luigi Accardi is making a terrible mistake. Bell's theorem is not mainly about probabilities, but the key factor is the correlations between Alice & Bob.

Let’s say that Prof. Accardi discovers and proves a new kind of non-Kolmogorovian probability theory, that proves that in EPR-Bell we are not measuring the predefined LHV in the "Urn" (which is basically what Bell/QM has already proven) but "Labile Flying Chameleons" that interact with the measuring apparatus to gain their final value (which is what QM has been saying all along).

Would this change anything??

No, nothing. If we discover that there are not only Green and Brown in the properties of the "Labile Flying Chameleons", but also Purple, it would not change anything.

The correlations regarding 'the old' Green/Brown (+1/-1), that are 100% incompatible with LHV, would obviously still be there (unless someone is claiming that a theory can change the outcome of physical experiments!), and I think most would agree that the new Purple property can’t change this fact, no matter what it’ll do, correlated or uncorrelated.

Hope it helped.

Regards DA
 
  • #17
billschnieder said:
I do not follow. Taken into consideration in what way? I do not see how it is taken into account in the derivation of Bell's inequality, or in the proof of Bell's theorem. Do you mean in the interpretation of experimental results? The papers by Accardi are arguing that even with 100% detection you can still have a local realistic situation in which a joint probability distribution is not possible.
Accardi says that you can have a local realistic situation with 100% efficient detectors, not 100% efficient detection.

later in the same paper he says:
"Computer 1 computes the position of particle 1 using the deterministic dynamics and sends back S^(1)_a(σ_j) (= 1 or -1) if the particle is inside the apparatus. It sends back nothing if the particle is outside the apparatus."

This is a detection loophole even with 100% efficient detectors. Detection efficiency is found from the proportion between the coincidence count and the single count (for the whole experimental setup).
 
  • #18
zonde said:
Accardi says that you can have a local realistic situation with 100% efficient detectors, not 100% efficient detection.

later in the same paper he says:
"Computer 1 computes the position of particle 1 using the deterministic dynamics and sends back S^(1)_a(σ_j) (= 1 or -1) if the particle is inside the apparatus. It sends back nothing if the particle is outside the apparatus."

This is a detection loophole even with 100% efficient detectors. Detection efficiency is found from the proportion between the coincidence count and the single count (for the whole experimental setup).
I think Richard Gill gives a similar argument:
Accardi et al. (2002, 2003) furthermore insist that their work, based on the chameleon effect, has nothing to do with the so-called detection loophole. Rather, they claim that the chameleon model is built on a fundamental legacy of measurement of quantum systems, that there is also indeterminacy in whether or not a particle gets measured at all, and when it gets measured...
But then Gill goes on to argue that:
It is a mathematical fact that “chameleon model” of the type proposed by Accardi et al. (2002, 2003) can be converted into a “detection loophole model”, and vice-versa. This result has been independently obtained by Takayuki Miyadera and Masanori Ohya, and by the present author (unpublished).
The chaotic chameleon
http://arxiv.org/pdf/quant-ph/0307217.pdf

Personally, I tend to think that the "memory loophole" has the most promise since such an effect has been seen in quantum analogues in Couder experiments:
 
  • #19
bohm2 said:
I think Richard Gill gives a similar argument:

But then Gill goes on to argue that:

The chaotic chameleon
http://arxiv.org/pdf/quant-ph/0307217.pdf

Gill in no way believes that Accardi's conclusion (i.e. that local realistic models are compatible with QM) is correct. As a reminder, the detection loophole (mentioned in Gill's paper in regard to Accardi's ideas) is already closed. And was when Accardi's paper was written.

So I think my point is that there is still nothing to make this thread - regardless of level of interest - make sense within our rules. One cannot start with Accardi's non-standard scientific conclusion* and debate its merits. What are you going to accomplish, overturn accepted science? This is not the place for that. And there is really nothing being explored here other than various mathematical ideas regarding the nature of reality. Well, as mentioned, each of us is entitled to reject EPR realism - which is well-defined and well-accepted. If you do, there's your answer - you will reject Bell too.


*See my post #8. Instead, Gill would agree that "No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics."
 
  • #20
bohm2 said:
... do these Non-Kolmogorovian approaches/axioms seem reasonable/make sense?

Short answer? No

I try to cut down on the Buchstabensalat, it has a bad influence on my poor little green brain... but I made a short exception and skimmed the paper to see if there is any physical meaning of the EPR-chameleon experiment. Same answer, no meaning, at all.

As you can see they talk only about EPR experiments, not EPR-Bell, and that’s because the whole thing breaks down as soon as you go beyond measuring 2 out of 3 possible fixed settings (i.e. DrC’s 0°, 120°, 240°) that have the same ratio between them (i.e. 120°), which is based primarily on the dilemma of 3 predetermined counterfactual values that will destroy the party, without knowledge of the twin partner’s destiny.

This setup is quite vulnerable to the detection loophole. I know; I hacked a simple little JavaScript that crushes "DrC’s Proof" without mercy... and it didn’t require a professor title or extended papers to accomplish - the math is basically arithmetic on the level of primary school.

So, if you are a little bit woolly about the detection loophole, mixing EPR & EPR-Bell without discretion, and never proceed to the tougher Bell settings (that don’t have the same ratio between them, i.e. 0°, 22.5°, 45°), I guess you can sleep well at night – claiming that there indeed is a physical meaning to the EPR-chameleon experiment...


From http://arxiv.org/abs/quant-ph/0112067 :
We do not pretend our model to be a hidden variable model for the EPR experiments. Our goal is to prove that Bell's statement [3]: "... the statistical predictions of quantum mechanics are incompatible with local predetermination ..." is theoretically and experimentally unjustified if by "statistical predictions of quantum mechanics" we mean the EPR correlations and by "local predetermination" the possibility of reproducing these correlations by a classical deterministic macroscopic system subject to local choices.

[...]

Our experiment describes the following classical dynamical system. A source C (central computer) produces pairs of particles (S1, S2) which travel in different directions and after some time, each particle interacts with a measurement apparatus (M1, M2). By the chameleon effect the dynamical evolution of each particle depends on the setting of the nearby apparatus, but not on the setting of the apparatus interacting with the other particle (locality).
[bolding mine]

And of course, if we run the entire "Chameleon Rainbow" – from 0° to 360° – I guess those little creatures would start looking like "Glowing Crackpots" more than anything else. :wink:
 
  • #21
zonde said:
Accardi says that you can have a local realistic situation with 100% efficient detectors, not 100% efficient detection.
I think they do in fact challenge the notion of 100% detection that you imply here.
In fact the efficiency is measured by the ratio of the number of detected particles over the number of particles which have interacted with the apparatus.
It would be totally meaningless to take into account, in the determination of the efficiency, also those particles whose space trajectory has brought them so far from the apparatus that no physical interaction between them is conceivable.

later in the same paper he says:
"Computer 1 computes the position of particle 1 using the deterministic dynamics and sends back S(1)a(sigma j)(=1 or -1) if the particle is inside the apparatus. It sends back nothing if the particle is outside the apparatus."
This is detection loophole even with 100% efficient detectors.
Why should the apparatus be expected to produce an outcome at a given instant in time, when there is no particle inside? You could think about it in this way: each apparatus simply produces a list of time-stamped outcomes for every particle that comes into it. One list for Alice and another for Bob. Once the source has done its thing and the apparata have recorded their lists, the experiment is done. Now where is the detection loophole? All the particles emitted were detected.

Detection efficiency is found from the proportion between the coincidence count and the single count (for the whole experimental setup).

So then what you call "detection efficiency" is really a statement about how well you can match the list of results from Alice's side to the list at Bob's side during the data analysis. Probably closer to the coincidence loophole than detection efficiency. And I do not see how this loophole can be eliminated unless anyone doing data analysis is absolutely sure they have matched the list correctly. But how can anyone be sure?

This is why Gill and Larsson conclude in their 2003 paper on the "coincidence loophole" that:
The results underline the importance of eliminating post-selection in future experiments.
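
For readers unfamiliar with what the post-selection step can do here, the following is a toy local model (a generic textbook-style illustration of the issue, not Accardi's model or anyone else's published one). Every outcome and every "detection" decision depends only on the local setting and a shared hidden variable, yet conditioning the statistics on coincidences pushes the CHSH quantity past the classical bound, while counting every emitted pair does not:

```python
import numpy as np

lam = np.linspace(0.0, 2 * np.pi, 360_000, endpoint=False)  # shared hidden variable

def outcome(setting):
    # Deterministic, purely local +/-1 outcome: which half-plane lam falls in.
    return np.sign(np.cos(setting - lam))

def detected(setting, window=np.pi / 4):
    # Local "detection": fires only when lam lies within 45 degrees of the
    # analyzer axis (or anti-axis); depends only on the local setting and lam.
    return np.abs(np.cos(setting - lam)) >= np.cos(window)

def corr(a, b, coincidences_only):
    A = outcome(a)
    B = -outcome(b)                          # sign flip mimics anticorrelated pairs
    if coincidences_only:
        keep = detected(a) & detected(b)     # discard unpaired events
        return np.mean(A[keep] * B[keep])
    return np.mean(A * B)                    # count every emitted pair

a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
for ps in (True, False):
    S = corr(a1, b1, ps) - corr(a1, b2, ps) + corr(a2, b1, ps) + corr(a2, b2, ps)
    print("coincidences only" if ps else "all emitted pairs", round(abs(S), 3))
# Conditioning on coincidences gives |S| = 4 here; counting every pair gives
# |S| = 2, the bound that any single-probability-space local model must respect.
```

The model is deliberately crude; its only purpose is to show why the matching/post-selection step is where a purely local mechanism can mimic a violation.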
 
  • #22
In case there's any misunderstanding, the point of the thread was to get input from others so I can better understand these arguments, as I had trouble with them. So don't jump all over me...please. Moreover, I just had a chance to briefly look at Richard D. Gill's critical paper on all these attempts, and although he acknowledges that he had not yet (as of 2003) studied Khrennikov's p-adic probability model, he's skeptical. I'm not sure if he's published anything more recently. Still, I found this comment by Gill interesting:
Another of my anti-Bellist friends, Walter, has claimed that Bell neglected the factor time in his theory. Real experiments are done in one laboratory over a lengthy time period, and during this time period, variables at different locations can vary in a strongly correlated way—the most obvious example being real clocks! Well, in fact it is clear from “Bertlmann’s socks” that Bell was thinking very much of time as being a factor in classical correlation, see his discussion of the temporal relation between the daily number of heart-attacks in Lyons and in Paris (the weather is similar, French TV is identical, weekend or weekday is the same ...). In the course of time, the state of physical systems can drift in a systematic and perhaps correlated way. This means that the outcomes of consecutive measurements might be correlated in time, probability distributions are not stationary, and statistical tests of significance are invalidated. Information from the past is not forgotten, but accumulates. The phenomenon has been named “the memory loophole”.
Time, Finite Statistics, and Bell’s Fifth Position
http://arxiv.org/pdf/quant-ph/0301059.pdf

I was wondering, does anybody see any connection between this point and the path memory or pseudo non-locality seen in the Couder experiments:
A single impact is studied experimentally and in a fluid mechanics theoretical approach. It is shown that each shock emits a radial traveling wave, leaving behind a localized mode of slowly decaying Faraday standing waves. As it moves, the walker keeps generating waves and the global structure of the wave field results from the linear superposition of the waves generated along the recent trajectory. For rectilinear trajectories, this results in a Fresnel interference pattern of the global wave field. Since the droplet moves due to its interaction with the distorted interface, this means that it is guided by a pilot wave that contains a path memory. Through this wave-mediated memory, the past as well as the environment determines the walker’s present motion.
Information stored in Faraday waves: the origin of a path memory
http://stilton.tnw.utwente.nl/people/eddi/Papers/Walker_JFM.pdf

There is already good reason to think that the walkers might exhibit some sort of pseudo nonlocality. Fort and Couder find that the dynamics of walkers is governed largely by the “memory” of past waves, which gradually builds up over the oil bath into a wave field. In this way, the Paris researchers say, one walker can seem to nonlocally affect another walker on the other side of the bath, thanks to a wave – or combination of waves – it emitted previously. This “memory” effect was key to the observed quantization of walker orbits on a rotating oil bath. This year, the researchers demonstrated the memory effect in a more general sense: if a walker is left long enough, its trajectory becomes “entangled” with a wave field in the bath.
Classically quantum
http://physicsworld.com/cws/article/indepth/2013/nov/07/classically-quantum
 
  • #23
bohm2 said:
Personally, I tend to think that the "memory loophole" has the most promise since such an effect has been seen in quantum analogues in Couder experiments:
The memory loophole is just another way to avoid having a single probability space for the random variables, just as Accardi argued. In short, the gist of Accardi's argument is more general: anything that guarantees a different probability space for the random variables can violate the inequalities. In other words, violation of the inequalities proves that you do not have a single probability space for the random variables. Which is exactly what Boole had proved more than a century ago.

If by memory loophole you are referring to the model by De Raedt, then this paper, just published might interest you http://jpsj.ipap.jp/link?JPSJ/82/034004/ , it is an experimental test which claims to have falsified it. Of course the response from De Raedt was published in the same journal. You can find both on arxiv.
 
  • #24
billschnieder said:
If by memory loophole you are referring to the model by De Raedt, then this paper, just published might interest you http://jpsj.ipap.jp/link?JPSJ/82/034004/ , it is an experimental test which claims to have falsified it. Of course the response from De Raedt was published in the same journal. You can find both on arxiv.

The experiment:
http://arxiv.org/abs/1303.5281
 
  • #25
DevilsAvocado said:
No, nothing. If we discover that there are not only Green and Brown in the properties of the "Labile Flying Chameleons", but also Purple, it would not change anything. The correlations regarding 'the old' Green/Brown (+1/-1), that are 100% incompatible with LHV, would obviously still be there (unless someone is claiming that a theory can change the outcome of physical experiments!), and I think most would agree that the new Purple property can’t change this fact, no matter what it’ll do, correlated or uncorrelated.
I'm still having trouble understanding this. I mean, contextualism is also a necessary feature of other models like Bohmian mechanics. What Bohmians refer to as "contextual realism" (e.g. spin, etc.) seems similar to what Accardi calls adaptive realism/the chameleon effect. The difference though is that Bohmians accept non-locality as a necessary feature of their model on top of the contextuality. But Accardi is arguing that there's a difference between quantum contextuality and probabilistic contextuality. So while the former necessitates Bell's result and non-locality, the latter doesn't. He writes:
How does the result of measurement S arise? This is the result of a dynamical process of interaction of a system and a measurement device. In such an approach there is nothing against realism. However, this is the adaptive (or chameleon) realism (which is not at all the realism of balls having once and for ever determined color). The chameleon effect simply states that, since dynamics is determined by the variable subjected to measurement, we obtain probability distributions depending on experimental settings. Thus the chameleon approach implies probabilistic contextuality, hence, the possibility of violation of Bell's inequality. Nevertheless, dynamics of measurements can be completely local. Let a and b be two quantum observables represented by commuting operators. Then there are two different dynamical systems corresponding to the a and b-measurements, respectively. In general, they do not depend on each other. Therefore the chameleon effect induces probabilistic contextuality, but not at all quantum contextuality.
Chameleon effect, the range of values hypothesis and reproducing the EPR-Bohm correlations
http://arxiv.org/pdf/quant-ph/0611259.pdf

I don't understand this and I think you are making the same point, but I don't think it can be that simple, which is why I was hoping someone could explain it to me, as I can't follow the math.
 
  • #26
bohm2 said:
Chameleon effect, the range of values hypothesis and reproducing the EPR-Bohm correlations
http://arxiv.org/pdf/quant-ph/0611259.pdf

bohm2,

Range σ = Range S

Why would the range of values hypothesis matter? I take as a given that the above is false (from the paper). I do so because Bell states that potential hidden variables could be variables, functions or sets of variables/functions. On the other hand, a quantum superposition can be said to consist of many possible paths which can take on only limited output states. So either way, there could potentially be many values (states) of σ and only binary values for observable S.

Again, Accardi's paper is basically an anti-Bell assault on multiple fronts. Not one of the ideas is clearly developed to a satisfactory level. Which is why it sits in a dust bin. To see how ridiculous this discussion really is:

http://gsjournal.net/Science-Journals/Research Papers-Relativity Theory/Download/1503
"In this article it is shown that the General relativity theory is an incorrect theory of gravity..."

Even if this guy were correct, this is the wrong place to discuss its merits (since he is asserting anti-mainstream science). And if it has no merits, this is still the wrong place to discuss it. :smile: We should be discussing the merits of mainstream theory and experiment, which is advancing quite rapidly in the area of entanglement.

Hopefully my reference will be flagged and this thread closed. :biggrin:
 
  • #27
The OP wanted an opinion, and mine is that this is an interesting and valid paper http://arxiv.org/pdf/quant-ph/0611259.pdf and Accardi's Chameleon Reality is an explanation on why and how the EPR/Bell inequalities are violated. This in itself should make it worthy of further discussion.
 
  • #28
billschnieder said:
I think they do in fact challenge the notion of 100% detection that you imply here.
In fact the efficiency is measured by the ratio of the number of detected particles over the number of particles which have interacted with the apparatus.
It would be totally meaningless to take into account, in the determination of the efficiency, also those particles whose space trajectory has brought them so far from the apparatus that no physical interaction between them is conceivable.
How could you possibly find out the number of particles which have interacted with the apparatus?


billschnieder said:
Why should the apparatus be expected to produce an outcome at a given instant in time, when there is no particle inside? You could think about it in this way: each apparatus simply produces a list of time-stamped outcomes for every particle that comes into it. One list for Alice and another for Bob. Once the source has done its thing and the apparata have recorded their lists, the experiment is done. Now where is the detection loophole? All the particles emitted were detected.
Are you sure about that last statement - "All the particles emitted were detected."? Hmm, maybe I misunderstood Accardi's model. My impression was that some particles go astray. But if you say that a coincidence does not happen because a particle has not yet been detected, that would be a different model. Can you find a quote that illustrates your point?

Anyway, you open a loophole when you discard unpaired single detections. And this is how different probability spaces appear in Accardi's model. And discarding unpaired single detections is already recognized as a loophole.


billschnieder said:
So then what you call "detection efficiency" is really a statement about how well you can match the list of results from Alice's side to the list at Bob's side during the data analysis. Probably closer to the coincidence loophole than detection efficiency. And I do not see how this loophole can be eliminated unless anyone doing data analysis is absolutely sure they have matched the list correctly. But how can anyone be sure?
If you have 100% coincidences there is no place left for the coincidence-time loophole. The coincidence-time loophole relies on discarded single detections just the same.
 
  • #29
bohm2 said:
I'm still having trouble understanding this. I mean, contextualism is also a necessary feature of other models like Bohmian mechanics. What Bohmians refer to as "contextual realism" (e.g. spin, etc.) seems similar to what Accardi calls adaptive realism/the chameleon effect. The difference though is that Bohmians accept non-locality as a necessary feature of their model on top of the contextuality. But Accardi is arguing that there's a difference between quantum contextuality and probabilistic contextuality. So while the former necessitates Bell's result and non-locality, the latter doesn't.
[...]
I don't understand this and I think you are making the same point, but I don't think it can be that simple, which is why I was hoping someone could explain it to me, as I can't follow the math.
[bolding mine]

I have to confess; I haven’t studied Accardi in detail, but to me the situation looks like this:

So yes, contextuality is obviously a requirement, but then things go bad for the Chameleon model. It does not survive Gleason's theorem (the Chameleon is either green on a leaf or brown on a log), and this is what I tried to say in post #20 – when you go further and expose the “continuous sinusoidal nature” of Bell's theorem, the flying Chameleons are forced to party alone with the Norwegian Blue Parrot... :smile:

There is absolutely no way that the Chameleons would survive a test around the full “360° Bell circle”, no way. We all know that the correlations depend solely on the relative angle between Alice & Bob, which Prof. Accardi pays no attention to at all.

It might seem a little bit 'rude', but to me it looks like Prof. Accardi is building a 'classical card castle' around the theoretical chameleons – with no or very little substance – and then to prove the whole thing he exploits the detection loophole (which is embarrassingly easy to do for Mermin’s 'counterfactual argument' in Bell’s theorem).

You definitely do not need non-Kolmogorovian probability or Chameleons to do this.

I could be wrong, but if so, I need more than bivalent Chameleons to prove it... :wink:
 
  • #30
morrobay said:
The OP wanted an opinion, and mine is that this is an interesting and valid paper http://arxiv.org/pdf/quant-ph/0611259.pdf and Accardi's Chameleon Reality is an explanation on why and how the EPR/Bell inequalities are violated. This in itself should make it worthy of further discussion.

You are welcome to your opinion. Mine is that your comments are overly speculative and go against the mainstream view. This is not the place to discuss non-standard science, that should be done elsewhere.
 
  • #31
zonde said:
How could you possibly find out the number of particles which have interacted with the apparatus?

That's an important question. In a simulation you can; in real-world experiments you can't. That is why they say in section 5 of this paper (http://ics.org.ru/doc?pdf=855&dir=e , page 113) that:

Moreover, and this is a possible difference between the classical and the quantum case, the very notion of "total number of pairs emitted by the source" is a totally platonic and in principle unobservable quantity in the quantum case (under the assumption of a neat space separation between the two apparata). In some, but not all, classical situations this number might be observable, but in a quantum context, where you cannot follow the trajectory of single particles without altering it, this number is quite unobservable.


Are you sure about that last statement - "All the particles emitted were detected."? Hmm, maybe I misunderstood Accardi's model. My impression was that some particles go astray. But if you say that a coincidence does not happen because a particle has not yet been detected, that would be a different model. Can you find a quote that illustrates your point?
From the same paper, page 106 they say:
In experiments with photons the term "simultaneous" has to be meant in the sense of a very narrow time window. But our experiment can also reproduce the ideal situation in which all apparata involved are 100% efficient. Exactly as in the experiment for photons the statistics is conditioned on coincidences (these topics are further discussed in Sec. 5). We do not know the mechanism of coincidences for individual photons because quantum mechanics does not predict the space-time trajectories of microscopic particles. In our model this mechanism is:
(i) deterministic, i.e. uniquely pre-determined by the hidden parameters;
(ii) entirely local.
Also look at point (2) on page 114
 
  • #32
DrChinese said:
You are welcome to your opinion. Mine is that your comments are overly speculative and go against the mainstream view. This is not the place to discuss non-standard science, that should be done elsewhere.

Clarification on mainstream view please: Is it the mainstream view (assumption) that particles once and for all have a determined spin, or that the balls have a determined color before measurement? Reference page 4. And is the following not mainstream and speculative? "Measurement of S is the result of a dynamical process of interactions of a system and a measurement device." Also reference page 8 (17).
 
  • #33
morrobay said:
Clarification on mainstream view please

The working definition given by the rules of these forums is rather clear:
"Generally, discussion topics should be traceable to standard textbooks or to peer-reviewed scientific literature."

Peer-reviewed scientific literature is defined as follows:
"Usually, we accept references from journals that are listed here:

http://ip-science.thomsonreuters.com/mjl/

Use the search feature to search for journals by words in their titles. If you have problems with the search feature, you can view the entire list here:

http://ip-science.thomsonreuters.com...cgi?PC=MASTER

In recent years, there has been an increasing number of "fringe" and Internet-only journals that appear to have lax reviewing standards. We do not generally accept references from such journals. Note that some of these fringe journals are listed in Thomson Reuters. Just because a journal is listed in Thomson Reuters does not mean it is acceptable."
 
  • #34
morrobay said:
Is it the mainstream view (assumption) that particles once and for all have a determined spin, or that the balls have a determined color before measurement?

It is mainstream that either entangled particle spin is not predetermined, or there are non-local factors affecting it. The assumptions for Bell's Theorem are generally taken to be EPR locality and EPR realism. Some may say there are other assumptions as well, but generally those (for example free will, no conspiracy, etc) have nothing whatsoever to do with Bell and apply to ALL scientific setups equally.

It is mainstream that Bell's Theorem has survived all challenges. Bell is so widely accepted that any new QM interpretation must devote a section to explaining how it reconciles with Bell. That is, if it is to be taken seriously. See for example this new interpretation published this week:

http://arxiv.org/abs/1312.3427

The Emergent Copenhagen Interpretation of Quantum Mechanics
Timothy J. Hollowood
(Submitted on 12 Dec 2013)

We introduce a new and conceptually simple interpretation of quantum mechanics based on reduced density matrices of sub-systems from which the standard Copenhagen interpretation emerges as an effective description of macroscopically large systems. Wave function collapse is seen to be a useful but fundamentally unnecessary piece of prudent book keeping which is only valid for macro-systems. The new interpretation lies in a class of modal interpretations in that it applies to quantum systems that interact with a much larger environment. However, we show that it does not suffer from the problems that have plagued similar modal interpretations like macroscopic superpositions and rapid flipping between macroscopically distinct states. We describe how the interpretation fits neatly together with fully quantum formulations of statistical mechanics and that a measurement process can be viewed as a process of ergodicity breaking analogous to a phase transition. The key feature of the new interpretation is that joint probabilities for the ergodic subsets of states of disjoint macro-systems only arise as emergent quantities. Finally we give an account of the EPR-Bohm thought experiment and show that the interpretation implies the violation of the Bell inequality characteristic of quantum mechanics but in a way that is rather novel. The final conclusion is that the Copenhagen interpretation gives a completely satisfactory phenomenology of macro-systems interacting with micro-systems.
 
  • #35
billschnieder said:
Why should the apparatus be expected to produce an outcome at a given instant in time, when there is no particle inside?

Because it's a scenario that Bell's analysis can justifiably be applied to. There's no point in doing an experiment that hasn't been shown to be able to detect a difference between locality and nonlocality, particularly when we have known counterexamples.

In a properly performed loophole-free Bell-type test, the detection time windows (when the two parties are going to perform measurements and record outcomes) should normally be decided in advance, or at least before the choices of measurements are made. The usual way to handle "noise" events (e.g. non-detection when a detection was expected) is simply to map them to particular outputs (for instance, the experimenters adopt the convention of recording all non-detections as '+1' events). That's the type of scenario that Bell's theorem readily applies to, and trying to apply it to anything more sophisticated than that would need to be supplemented with a careful justification of how and why it can be done.
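
As a concrete illustration of why the convention matters (reusing the generic toy detection-loophole model sketched earlier in the thread discussion, which is my own illustration and not any published model): discarding non-coincidences lets a purely local mechanism exceed the CHSH bound, whereas assigning every non-detection a fixed outcome such as +1 puts all four correlations back on one probability space and the bound is restored:

```python
import numpy as np

lam = np.linspace(0.0, 2 * np.pi, 360_000, endpoint=False)  # shared hidden variable

def outcome(setting):
    return np.sign(np.cos(setting - lam))        # local deterministic +/-1 outcome

def detected(setting, window=np.pi / 4):
    return np.abs(np.cos(setting - lam)) >= np.cos(window)   # local detection rule

def corr(a, b, convention):
    A, B = outcome(a), -outcome(b)
    dA, dB = detected(a), detected(b)
    if convention == "discard":                  # keep coincidences only
        keep = dA & dB
        return np.mean(A[keep] * B[keep])
    # "+1 convention": every trial gets an outcome; non-detections recorded as +1
    A = np.where(dA, A, 1.0)
    B = np.where(dB, B, 1.0)
    return np.mean(A * B)

a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
for conv in ("discard", "map to +1"):
    S = corr(a1, b1, conv) - corr(a1, b2, conv) + corr(a2, b1, conv) + corr(a2, b2, conv)
    print(f"{conv:>10}: |S| = {abs(S):.3f}")
# "discard" reaches |S| = 4 for this local model; "map to +1" keeps |S| <= 2,
# exactly because every trial now contributes to every correlation.
```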
 
