EPR Experiment with Pool Balls

In summary, the conversation discusses the idea of applying the same logic of pool balls to the EPR experiment, which involves two particles in an entangled state. This approach does not match the results of real experiments, and quantum mechanics has been shown to predict the outcomes better. Whether the non-locality is a genuine feature or merely a quirk of the theory is debated, and other interpretations, such as the Consistent Histories approach, suggest that the EPR paradox can be explained without non-locality.
  • #1
Tachyon son
***EPR Experiment with Pool Balls***

First of all, sorry if what I am going to ask seems crazy stupid, but it is an idea that has been going round my mind since I started reading about the EPR subject.

As far as I've read, you can imagine EPR with photon polarization or with particle spin. So, I have imagined it with two complementary pool balls inside a bag. One is red, the other is black.
I take one without looking at it. Then I travel, let's say, a distance of one light year.
Only then do I look at the ball to see its colour: 50% probability for each. Obviously, if my ball is red, the one remaining in the bag is black.
If we apply the non-locality principle, it says that both balls were of uncertain colour until looked at. I know this is a very silly way to put it, because we know for certain that my ball was red the whole time since my selection, and the remaining one of course was black.

The point of my question is why we can't apply the same logic to the EPR experiment. If I use two electrons with spins 1 and -1 to make the experiment, they were in those states the whole time since the separation! There is no communication or information travel!

Please don't be too hard on me in the answers.
 
  • #2


Originally posted by Tachyon son
The point of my question is why we can't apply the same logic to the EPR experiment. If I use two electrons with spins 1 and -1 to make the experiment, they were in those states the whole time since the separation! There is no communication or information travel!
I assume you mean the usual kind of EPR experiment where you have two spin 1/2 particles in an entangled state where the total spin is zero.

If we just measured the spin in one and the same direction for both particles, then your model would (seem to) work. If we measure spin along the x-axis, we will always get a matching pair of answers: if particle A says up, particle B says down. You could pretend that each particle had its x-axis spin assigned to it, just like the color of your pool balls.

But things get more interesting when we measure the spin at different angles for each particle, say x-axis for one, y-axis for the other. It turns out that these spin measurements are incompatible, meaning that measuring one seems to "destroy" any value of the other. For any given measurement, you can only measure one direction of spin. You would have to extend your pool ball model to include a new variable to represent the y-axis measurement: let color be the spin in the x-direction, and (say) shape represent spin in the y-axis (round = up; cube = down). Things get even more complicated when you let the spin be measured along any angle.

A more sophisticated (but still unworkable) model would just assign an "instruction set" to each particle. The instructions would tell it what to do upon encountering a measuring device for any spin direction: essentially this is a list of spin values for any direction of measurement. All the spin information rides along with each particle, so no funny business about communication or information travel.

But the bottom line is this. Nature doesn't seem to work that way! These kinds of model (your pool balls or the "instruction set" model) have been shown to lead to correlations that do not match the results of real experiments. (This is the essential content of Bell's theorem.) Quantum mechanics, on the other hand, predicts the results nicely.

This may not seem like much of an answer, since I'm basically saying: It just doesn't work. To go deeper would involve describing the spin correlations and the details of Bell's theorem.
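For anyone who wants to see the numbers anyway, here is a small Monte Carlo sketch (my own toy example, not taken from any particular reference) comparing an "instruction set" local model with the quantum singlet prediction through the CHSH combination used in Bell-type tests:

```python
# Toy comparison (illustrative only): an "instruction set" local model, where a
# shared hidden angle lam fixes both outcomes in advance, versus the quantum
# singlet prediction E(a,b) = -cos(a-b), evaluated through the CHSH combination.
import numpy as np

rng = np.random.default_rng(0)

def lhv_correlation(a, b, n=200_000):
    """Instruction-set model: each pair carries a hidden angle lam that
    predetermines every possible measurement outcome locally."""
    lam = rng.uniform(0.0, 2 * np.pi, n)
    alice = np.sign(np.cos(a - lam))      # +/-1, decided from lam and the local setting a
    bob = -np.sign(np.cos(b - lam))       # anti-correlated partner outcome
    return np.mean(alice * bob)

def qm_correlation(a, b):
    """Quantum-mechanical prediction for the spin singlet."""
    return -np.cos(a - b)

def chsh(E):
    a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print("instruction-set model |S| =", abs(chsh(lhv_correlation)))  # ~2, the local bound (up to sampling noise)
print("QM singlet            |S| =", abs(chsh(qm_correlation)))   # ~2.83 > 2
```

Any deterministic local assignment of this kind stays at or below the CHSH bound of 2, while the quantum prediction reaches about 2.83; that gap is what the real experiments are designed to probe.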
 
  • #3
The instructions would tell it what to do upon encountering a measuring device for any spin direction: essentially this is a list of spin values for any direction of measurement. All the spin information rides along with each particle, so no funny business about communication or information travel.

But suppose particle A encounters a measuring device first. It follows the instructions and 'becomes' a particle with spin Sa. So far so good... but now particle B, as per its version of the instructions, must assume a definite spin Sb: it has no choice. However, Sb is correlated with Sa, which in turn is a function of the kind of measurement that A has encountered, and this could be light years away at that moment... so the 'instruction set' model would have to be nonlocal anyway.

The point being that a nonlocal "instruction set model" can be ruled out without delving into the details of Bell's theorem etc... no?
 
  • #4
Your answers are too focused on my error concerning the spin version of the experiment; thanks for replying and clarifying that.

The whole point of my question rests at the end: "Why can't we apply the same logic (of pool balls) to the EPR experiment?"
In other words, does non-locality exist, or is the polarization of the photons already decided from the start of the experiment?
 
  • #5
The whole notion of 'instruction set' is what Bell's theorem is about. However, Bell's theorem does make some assumptions that aren't necessarily valid.

I haven't studied QM, but I am fairly convinced that the non-locality is a quirk of the theory more than it is a contradiction -- specifically, that it is possible to construct a model of the electron that deals with the EPR paradox without nonlocality, but that otherwise behaves identically to the predictions made by the typical QM model.

Proponents of the Consistent Histories approach to QM claim that the EPR paradox is actually like your pool ball example, but I don't know enough about it, or how it differs from the Copenhagen Interpretation, to give you any further insight.
 
  • #6
"Why we can't apply the same logic (of pool balls) to the EPR experiment".
In other words, non locality exists or is the polarization of photons already decided from the start of the experiment?


Suppose you had a weird kind of pool ball to which you could put the question: "Are you red or black?" and it would randomly reply either "red" or "black". If two balls were correlated, then getting a "red" reply from one would guarantee a "black" reply from the other.

But now there is another question that you could ask: "Are you new or old?" (forget about the *meaning* of the answer :) If you get "new" from the first ball you are guaranteed to get "old" from the second.

Finally, a ball that has replied "red" once will stick to this answer as long as you stick to the same question; but if you ask a "red" ball the old-new question, you will randomly get "old" or "new".

If you ask "red or black" of a "new" ball you will get a random reply, either "red" or "black".


If you alternate your questions successively, there is a fair chance that a ball that once said "red" will now say "black".

NOW, in order to preserve the correlation between the two pool balls, it is clear that the second must know what question you asked the first one, so that it will know whether to randomize or not. As I understand it, it is this information that 'travels' nonlocally in the 'instruction set' model. Once this information is available, the second ball can follow the instructions to produce the correlated answer. If you discard the instruction set concept, then the answer itself must travel nonlocally.
(Corrections welcome!)
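If it helps, here is a rough sketch of the single-ball part of this story in ordinary spin-1/2 terms (my own toy code, with "red/black" standing in for spin-z and "new/old" for spin-x):

```python
# Toy sketch: repeating the same "question" (measurement basis) gives a stable
# answer, while switching to the incompatible question re-randomizes it.
import numpy as np

rng = np.random.default_rng(1)
UP_Z, DOWN_Z = np.array([1.0, 0.0]), np.array([0.0, 1.0])                              # "red" / "black"
UP_X, DOWN_X = np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)   # "new" / "old"

def measure(state, basis):
    """Projective measurement: pick an outcome with the Born probabilities and
    return (outcome index, post-measurement state)."""
    probs = np.array([abs(np.dot(b, state)) ** 2 for b in basis])
    k = rng.choice(len(basis), p=probs / probs.sum())
    return k, basis[k]

z_basis, x_basis = [UP_Z, DOWN_Z], [UP_X, DOWN_X]

state = UP_Z                              # a ball that has already answered "red"
print(measure(state, z_basis)[0])         # ask "red or black?" again: always 0 ("red")
_, state = measure(state, x_basis)        # ask "new or old?": 50/50 answer
print(measure(state, z_basis)[0])         # back to "red or black?": random again
```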
 
  • #7
The whole point of my question rests at the end: "Why can't we apply the same logic (of pool balls) to the EPR experiment?"
Because that is what the EPR experiment invalidates.

In essence, Einstein stated that it is nonsensical to say that observing at the end changes the other ball, and that the ball must have had a state (red or black) from your selection. It is simply our knowledge that is lacking. Bell then analysed this and produced the Bell inequalities, which would hold if this sort of "local realistic", hidden-variable logic were true.

But experiments then carried out violated the Bell inequalities, showing this sort of thinking to be invalid. Simple as that.
 
  • #8
Originally posted by FZ+
Because that is what the EPR experiment invalidates.
...
But experiments then carried out violated the Bell inequalities, showing this sort of thinking to be invalid. Simple as that.

Not quite as simple. No experiment, despite three decades of trying, has invalidated local realism. Although the "Quantum Mystery Cult" euphemistically labels this failure as "loopholes" in the existent experiments, the simple fact is that only the "data" adjusted (cherry picked, for non-QMC members) using various convenient rationalizations (additional "assumptions" such as "fair" sampling, "accidental" coincidence subtractions, non-enhancement,... etc ) violate Bell inequalities.

Not only has the unadjusted data not violated the inequalities, but there are even plausible classical theories (such as "stochastic electrodynamics" for photons and Barut's self-field electrodynamics for fermions) which reproduce the actual data of all EPR experiments so far.
 
  • #9
Originally posted by nightlight
Not quite as simple. No experiment, despite three decades of trying, has invalidated local realism. Although the "Quantum Mystery Cult" euphemistically labels this failure as "loopholes" in the existent experiments, the simple fact is that only the "data" adjusted (cherry picked, for non-QMC members) using various convenient rationalizations (additional "assumptions" such as "fair" sampling, "accidental" coincidence subtractions, non-enhancement,... etc ) violate Bell inequalities.

Not only has the unadjusted data not violated the inequalities, but there are even plausible classical theories (such as "stochastic electrodynamics" for photons and Barut's self-field electrodynamics for fermions) which reproduce the actual data of all EPR experiments so far.

Harsh words, and not really accurate. Following the standard Copenhagen Interpretation of QM does not qualify one as a member of the "Quantum Mystery Cult." Your spirit is misplaced.

The fact is that decades of experiments have soundly supported the predictions of QM, and have failed to indicate the existence of a more complete specification of reality as discussed in EPR. To the vast majority of scientists in the area, the matter is reasonably settled by the experiments of Aspect et al. Local reality is rejected by experiment.

What is true is that there have been and will continue to be those to whom the experiments leave some amount of room for a "way out". For years, the criticism was leveled at Aspect that the observer and subject systems were in communication. He fixed that criticism. Lately, there has been a criticism on the grounds of counting inefficiency.

I could certainly agree that further refinement of the experiments to answer such criticism is warranted. I don't expect anything radical or surprising to occur, but you never know.
 
  • #10
He is referring to criticisms of the Aspect and other experiments directed at showing the Bell inequality violation by quantum mechanics. There are some weaknesses that even quantum physicists recognize, and the "reality" partisans have chosen to make a stand on these.
 
  • #11
The fact is that decades of experiments have soundly supported the predictions of QM,

No one is arguing against the QM statistical predictions. The argument is against the unsubstantiated claims that the experiments exclude local realism. To arrive at that "conclusion" the data has to be cherry picked based on metaphysical and unverified (or unverifiable) ad hoc rules.

For example, in all the experiments there is a "fair sampling" assumption -- an assumption which implies that the local hidden variables do not affect the probability of detection. Under such an assumption, all that the experiment excludes are local hidden variables which don't affect the probability of a detector trigger.

Check for example the paper by Emilio Santos which explains why "fair sampling" is an absurd assumption.

To the vast majority of scientists in the area, the matter is reasonably settled by the experiments of Aspect et al. Local reality is rejected by experiment.

The experiments still show only that certain absurdly restricted (as Santos explains in the paper above) types of local realism are excluded. Perfectly plausible local realistic theories, such as stochastic electrodynamics (e.g. check papers by Emilio Santos & Trevor Marshall for details) fit the actual data as well as QM.

What is true is that there have been and will continue to be those to whom the experiments leave some amount of room for a "way out".

I suppose all the past inventors of "perpetuum mobile" machines could claim the same about the non-believers -- except for that little glitch with friction, which is entirely due to the present technological imperfections, and which we will fix in the near future, the machine runs for ever (even though it actually stops). The doubters are merely looking for "unimportant" loopholes and "wiggle room." Yeah, sure. It either works or it doesn't.

For years, the criticism was leveled at Aspect that the observer and subject systems were in communication. He fixed that criticism.

That was a fake "criticism" by the supporters of the QMC, not the opponents. No one was proposing models, much less theories, which would explain the optical experiments that way (via distant subluminal communication between the two far apart sides of the aparatus). The Aspect's "fix" was thus like a magician theatrically rolling up his coat sleeves, after a "neutral" voice from the public shouted about the card hiding in the sleeve.

Lately, there has been a criticism on the grounds of counting inefficiency.

The inefficiency problem, better known under the euphemism "detection loophole", was a known problem well before Aspect did his thesis. It hasn't been fixed.
 
  • #12
Originally posted by nightlight
The fact is that decades of experiments have soundly supported the predictions of QM,

No one is arguing against the QM statistical predictions. The argument is against the unsubstantiated claims that the experiments exclude local realism. To arrive at that "conclusion" the data has to be cherry picked based on metaphysical and unverified (or unverifiable) ad hoc rules.

For example, in all the experiments there is a "fair sampling" assumption -- an assumption which implies that the local hidden variables do not affect the probability of detection. Under such an assumption, all that the experiment excludes are local hidden variables which don't affect the probability of a detector trigger.

Check for example the paper by Emilio Santos which explains why "fair sampling" is an absurd assumption.

To the vast majority of scientists in the area, the matter is reasonably settled by the experiments of Aspect et al. Local reality is rejected by experiment.

The experiments still show only that certain absurdly restricted (as Santos explains in the paper above) types of local realism are excluded. Perfectly plausible local realistic theories, such as stochastic electrodynamics (e.g. check papers by Emilio Santos & Trevor Marshall for details) fit the actual data as well as QM.

Nice paper by Santos, BUT...

1. It is a new paper, and hardly the last word. Certainly would not be considered authoritative at this point. However, I will accord it the courtesy of addressing it on its merits.

2. Bell's Inequalities: I did not take away from the Santos paper any real criticism of the Bell derivation. In a "perfect world", the Inequality could be used to rule out all LHV theories. I disagree with the notion that Bell's "second part" (per the paper) is confused in some way. All I can see is the criticism that an actual "loophole free" experimental setup was not described. Hardly a reasonable critique of Bell by any common standard. Bell did his job fully.

3. The Aspect-type experimental setup and the "fair sampling" assumption:

Santos states:

"In the context of LHV theories the fair sampling assumption is, simply, absurd. In fact, the starting point of any hidden variables theory is the hypothesis that quantum mechanics is not complete, which essentially means that states which are considered identical in quantum theory may not be really identical. For instance if two atoms, whose excited states are represented by the same wave-function, decay at different times, in quantum mechanics this fact may be attributed to an ”essential indeterminacy”, meaning that identical causes (identical atoms) may produce different effects (different decay times). In contrast, the aim of introducing hidden variables would be to explain the different effects as due to the atomic states not being really identical, only our information (encapsuled in the wave-function) being the same for both atoms. That is, the essential purpose of hidden variables is to attribute differences to states which quantum mechanics may consider identical. Therefore it is absurd to use the fair sampling assumption -which rests upon the identity of all photon pairs- in the test of LHV theories, because that assumption excludes hidden variables a priori.

"For similar arguments it is not allowed to subtract accidental coincidences, but the raw data of the experiments should be used. In fact, what is considered accidental in the quantum interpretation of an experiment might be essential in a hidden variables theory."

There are some pretty big claims here, and I don't think they are warranted. Fair sampling is far from an absurd assumption. There has never been a single experimental test of a quantum variable which has even slightly hinted at the existence of a deeper level of reality than is currently predicted by QM. Hardly what I would call "absurd". You might as well call the notion that the sun will rise tomorrow absurd.

You might say that it is an unwarranted or burdensome requirement. But I don't even follow that line of reasoning. Clearly, the requirement is that a LHV theory otherwise provide identical predictions to QM. Fair sampling fits this naturally. In the view of Santos, not only are the Bell Inequalities not violated in the Aspect experiments, but a new and previously unknown hidden local quantum observable is rearing its head. And somehow this observable only shows itself during this type of experiment, and no others. That observable is one in which the photon detection is suppressed or enhanced just enough to appear to match the predictions of QM (i.e. outside of the Bell Inequality); while actually falling within the statistical range of the Inequality. That's a big step, one which I might reasonably expect to have been noticed previously.

4. I have not had time to otherwise analyze the formula logic of the paper. I will take a look at that.

A degree of skepticism is good, and healthy. I don't see the point of insults.
 
  • #13
1. It is a new paper, and hardly the last word. Certainly would not be considered authoritative at this point. However, I will accord it the courtesy of addressing it on its merits.

That particular paper is new, but Santos, Marshall, Jaynes and others have been criticizing the EPR-Bell experiment claims since the late 70s (check the listings; there are at least a couple dozen papers by the Marshall-Santos group). This wasn't merely a critique based on artificial, narrow counterexamples to the particular experimental claims, but a full-fledged local realistic theory of quantum optics phenomena (stochastic electrodynamics; it falls short for massive particles, although Barut's self-field electrodynamics covers fermions as well as QED does, to the orders it was computed).

Regardless of the ultimate value of stochastic electrodynamics as an alternative theory (it is incomplete as it stands), the mere existence of a local fields model for the actual EPR-Bell experimental data plainly demonstrates that the claim that any local realistic mechanism is excluded by the experiments is false.

2. Bell's Inequalities: I did not take away from the Santos paper any real criticism of the Bell derivation.

The Santos-Marshall group makes a distinction between the QM dynamics, which they accept, and the "measurement theory" (the non-dynamical, mystical part, the projection postulate), which they reject. Bell's theorem needs a collapse of the remote state to achieve its locality violation. They reject such a collapse and point out that it hasn't been demonstrated by the experiments.

The problem nowadays with challenging the general state collapse hypothesis (projection postulate) is that it is a key ingredient necessary for Quantum Computing to work. If it is not true in full generality, QC won't work any better than a classical analog computer. Thus the challenge is not merely against ideas but against the funding draw QC has, a sure recipe to get yourself cut off from the leading journals and conferences. (Before the QC hype there was a healthy debate, and such papers were published in every major journal.)

There are some pretty big claims here, and I don't think they are warranted. Fair sampling is far from an absurd assumption.

In any deterministic hidden variable theory, the detection probability must by definition depend on some hidden variable value. The "fair sampling" hypothesis is thus an assumption that the hidden variable affecting the detection probability (the probability of triggering the avalanche and its timing when coincidence time-windows are used for pair detection) is independent from the hidden variables affecting the detected outcome (i.e. +/- choice).

Therefore that is all that the experiments exclude -- local theories for which the two sets of hidden variables are independent of each other. That is not true even for the most simple-minded classical electrodynamics models of polarization and detection (or for stochastic electrodynamics, or for Barut's self-field ED).

Thus the assumption is absurd, since it helps the experiments exclude something that isn't even included among the proposed alternatives. This is no different an "exclusion" than the "refinements" of the experiments to use randomly varying polarizer directions (which you brought up earlier) -- it topples its own strawman, not the actual theories being proposed by the opponents.
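To make this concrete, here is a toy Monte Carlo of my own -- emphatically not SED or any of the specific models cited, just an illustration in the spirit of known detection-loophole constructions -- showing how a hidden-variable-dependent detection probability, i.e. "unfair" sampling, lets a purely local model reproduce the singlet correlation, and hence a CHSH value near 2.8, on the detected coincidences alone:

```python
# Local toy model with lambda-dependent detection ("unfair" sampling).
# Each side uses only the shared hidden vector lam and its own setting,
# yet the correlation computed on coincidences only comes out as -cos(theta).
import numpy as np

rng = np.random.default_rng(2)
N = 400_000
lam = rng.normal(size=(N, 3))
lam /= np.linalg.norm(lam, axis=1, keepdims=True)    # hidden unit vectors, uniform on the sphere

def setting(angle):
    return np.array([np.cos(angle), np.sin(angle), 0.0])

def coincidence_correlation(a_angle, b_angle):
    a, b = setting(a_angle), setting(b_angle)
    A = np.sign(lam @ a)                              # Alice: outcome from lam, always detected
    B = -np.sign(lam @ b)                             # Bob's would-be outcome from lam
    fired = rng.uniform(size=N) < np.abs(lam @ b)     # Bob's detector fires with prob |b.lam|
    return np.mean((A * B)[fired])                    # keep only the "coincidences"

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = (coincidence_correlation(a, b) - coincidence_correlation(a, b2)
     + coincidence_correlation(a2, b) + coincidence_correlation(a2, b2))
print("CHSH on detected coincidences:", abs(S))       # ~2.83 from a manifestly local model
```

The model is local (nothing on one side depends on the other side's setting); only the post-selection on detected pairs produces the apparent violation, which is exactly what the fair-sampling assumption sweeps under the rug.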

There has never been a single experimental test of a quantum variable which has even slightly hinted at the existence of a deeper level of reality than is currently predicted by QM. Hardly what I would call "absurd".

QM doesn't offer any "reality" deeper or otherwise. If you believe in any reality, local or not, the quantum phenomena require explanation beyond the prescriptions on how to calculate the probabilities.

You might say that it is an unwarranted or burdensome requirement. But I don't even follow that line of reasoning. Clearly, the requirement is that a LHV theory otherwise provide identical predictions to QM. Fair sampling fits this naturally.

There is no need for "unwarranted" or "burdensome" attributes in order to analyze what is it exactly that the "fair sampling" (purely mathematically) excludes -- it is an ad hoc constraint on hidden variables, which hand-waves off the table several proposed alternatives, leaving only the strawman local theories (that no one has proposed) for the experiments to refute.

For more discussion on the "fair" sampling hypothesis and the proposed simple additional experiment to test it for the existent EPR-Bell setups check the paper by G. Adenier, A. Khrennikov. I haven't seen as yet any of the several active quantum optics groups, who are claiming to have established Bell inequality violations, checking the assumption on their setup. Since the additional tests proposed are quite simple on the existent setup, it is suprising that no one has yet picked the clear cut open challenge of the above paper, especially considering that the verification of the fair sampling as proposed would eliminate all known plausible LHV theories (they all rely on "unfair" sampling). Or maybe some have tried it and the data didn't come out the way they wished, and they didn't want to be the first with the "bad" news. We'll have to wait and see.

PS: After writing the above, I contacted the authors of the cited paper and the status is that even though they had contacted all the groups which have done or plan to do EPR-Bell experiments, oddly no one was interested in testing the 'fair sampling' hypothesis.

Clearly, the requirement is that a LHV theory otherwise provide identical predictions to QM. Fair sampling fits this naturally.

As pointed out by Santos, QM has two sharply divided components, the dynamics and the measurement theory. They reject the measurement theory (in its full generality) and some of its implications. That is precisely what the Bell EPR tests were supposed to clarify: does the world behave that way? The results so far have not produced the type of distant collapse (projection of the composite state) assumed by Bell for his inequalities.

The "fair sampling" is an assumption outside of QM (or any other theory or any experiment). The actually proposed alternative theories do not satisfy fair sampling, i.e. the hidden variables do not decouple into independent sets which separately control the detection timing and probability from variables controlling the +/- outcome.
 
  • #14
EPR without pool balls

Hi Tachyon son!

The problem with thinking of the EPR problem with pool balls is that there is a well-defined colour for the pool ball whether you measure it or not!

But in QM, a particle has a definite value for an observable only when you measure it!

In fact, this is what the original EPR paper is about! It doesn't say anything about pool balls, or about the usual story of two particles with total spin zero sent off in opposite directions (the usual stuff).
What the actual EPR paper says is this:

In QM, you can't have a particle in a state of definite momentum and position; this is the position-momentum uncertainty principle.
Now suppose you have an entangled pair (momentum entangled, i.e. total momentum is zero) of particles going off in opposite directions, and you decide to measure. If you measure the position of particle A (let us call them particles A and B), particle B goes to a state (an eigenstate) with a well-defined position.
(Particle A, on which you perform the measurement, also goes to an eigenstate of position.)

But suppose you decide to measure momentum instead; then particle B goes to a state with well-defined momentum!

So particle B goes to an eigenstate which depends on what you decide to measure! Suppose the particles are light years apart; then your choice of whether to measure position or momentum influences (instantaneously) a particle which is light years away, collapsing it into an eigenstate (of whatever you measure)!
Until you make the measurement, you cannot say that the particles are in a state of definite position or momentum.
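For reference, the state used in the original EPR paper can be written, in standard textbook form, as

```latex
\Psi(x_1,x_2) \;=\; \int_{-\infty}^{\infty} e^{(i/\hbar)\,p\,(x_1 - x_2 + x_0)}\,dp
\;\propto\; \delta(x_1 - x_2 + x_0),
```

a simultaneous eigenstate of the relative position x1 - x2 (eigenvalue -x0) and of the total momentum p1 + p2 (eigenvalue 0); measuring x1 therefore fixes x2 = x1 + x0, while measuring p1 instead fixes p2 = -p1.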

You can do the EPR experiment with spin as well... that version is due to Bohm... and in fact the Aspect experiment, which confirmed Bell's theorem, was performed with the polarization of photons! So it doesn't depend on which variable (or, in the language of QM, observable) you use!

That's all.
 
  • #15


Originally posted by venkat

...and in fact the Aspect experiment, which confirmed Bell's theorem, was performed with the polarization of photons!
That's all.
Aspect's experiment, like every other attempt in over three decades of trying, has not confirmed a violation of Bell's inequality. See the above discussion of the "fair sampling" hypothesis (which all such experiments assume up front) and what it means.
 
  • #16


Originally posted by nightlight
Aspect's experiment, like every other attempt in over three decades of trying, has not confirmed a violation of Bell's inequality. See the above discussion of the "fair sampling" hypothesis (which all such experiments assume up front) and what it means.

1. The predictions of QM are confirmed by Aspect's experiments, there is no question about this point. Period.

2. The only question - as nightlight argues to the negative - is whether ALL Local Hidden Variable theories are excluded as a result of Aspect's experiments. The reference paper cited (Santos) asserts there ARE at least some LHV theories which would yield predictions in line with the results of the Aspect experiments. (Personally, I question the conclusion but am still reviewing this.)

Nightlight is pushing a point of view which is not generally accepted. It may well be right, but remains to be seen.
 
  • #17


1. The predictions of QM are confirmed by Aspect's experiments, there is no question about this point. Period.

The QM prediction which violates Bell's inequality has not been confirmed by the measured data, by Aspect or any other experiment. Only the adjusted data under:

a) "fair" sampling hypothesis
b) subtraction of "accidental" coincidences

violate Bell's inequality. Both of these assumptions are outside of QM and even though there were proposals (for over a decade, see refs in Santos & Khrennikov) for experiments to verify them, no group has reported performing them.

The theoretical prediction itself requires, among other things, distant collapse of the composite state, a part of the "measurement theory" of QM which is not a generally accepted addition to the dynamical postulates of QM. The groups which reject assumptions (a) and (b) also question the "measurement theory," the distant instantaneous composite state collapse which Bell assumed. For them there is no such prediction (and everyone agrees that, so far, there is no _measured_ data confirming it).

2. The only question - as nightlight argues to the negative - is whether ALL Local Hidden Variable theories are excluded as a result of Aspect's experiments. The reference paper cited (Santos) asserts there ARE at least some LHV theories which would yield predictions in line with the results of the Aspect experiments. (Personally, I question the conclusion but am still reviewing this.)

All sides agree that not all LHV theories are excluded by the experiments. What Santos points out in the paper is that the LHVs which are excluded are the most absurd subset of the conceivable LHV theories (there is no actual theory constructed, not even a partial one, which satisfies the "fair sampling" hypothesis), i.e. the experiment topples merely a strawman made up by the experimenter.

The actual alternative LHV theories (or the QM extensions/completions) which exist (whether they are ultimately right or wrong in their full scope), such as stochastic electrodynamics (SED) and self-field electrodynamics, are not being addressed by these experiments -- these theories are waved off by hand upfront by an ad hoc "fair sampling" assumption, which is outside QM and which somehow no one wants to put to the test. These LHV theories agree perfectly with the EPR-Bell experiments (as Marshall, Santos and their students have shown in numerous papers).

Nightlight is pushing a point of view which is not generally accepted. It may well be right, but remains to be seen.

Among the people doing the experiments and their critics, there is no dispute as to what is being excluded by the experiments themselves. They all know what the assumptions (a) and (b) sweep away upfront, and they know that the actual alternatives from the opposition are not being tested. They all know they could test assumption (a), and that no one wants to report whether they have done it and what the result was.

The only disagreement on the experimental side is in the prediction what will happen as the technology improves -- the state collapse supporters believe Bell inequality will be ultimately violated as detectors improve (without "loopholes" i.e. without the need to adjust data via (a), (b) and such). The opponents believe it won't be violated.

On the theoretical side, the contention is the "measurement theory", specifically the postulate on the composite system state collapse, and there is no generally accepted single view on that. Nothing in day to day use of QM/QED depends on that postulate, so the vast majority of physicists ignore the subject altogether -- it doesn't affect their work either way. If it turns out falsified, there won't be any experimental consequences in anything anyone has done so far (the only experiment which could confirm it, excluding alternatives, would be a loophole free EPR-Bell test). The main effect would be on the EPR-Bell storyline and on the so-called Quantum Computing (which would lose its non-classical "magic" powers attributed to it by the present state collapse proponents, as being right around the corner, as soon as the 'decoherence' is taken care of and the detectors improve).

In summary, the only disagreement is in what will be measured/found in the future. What has actually been measured is known to those in the field and is not a matter of belief or taste. You only need to read carefully, and bracket out the hype, euphemisms and the unspoken or footnoted limitations (which have been largely known since the mid-1970s), to see that there is no actual disagreement between the Santos/Marshall group and the EPR-Bell experimenters as to what exactly has been excluded by the data and what by the additional assumptions. It is only what will happen in the future that they can really disagree about, and time is on the skeptics' side.
 
  • #18


Originally posted by nightlight
1. The predictions of QM are confirmed by Aspect's experiments, there is no question about this point. Period.

The QM prediction which violates Bell's inequality has not been confirmed by the measured data, by Aspect or any other experiment. Only the adjusted data under:

a) "fair" sampling hypothesis
b) subtraction of "accidentall" coincidences

violate Bell's inequality. Both of these assumptions are outside of QM and even though there were proposals (for over a decade, see refs in Santos & Khrennikov) for experiments to verify them, no group has reported performing them.

While I disagree with your characterization of the state of the current evidence, the above is just plain wrong.

Bell's Inequality has little or nothing to do with testing the predictions of quantum theory, although Aspect's experiments do confirm the predictions of QM as a by-product. The Bell Inequality requires only the essential beliefs in local reality and follows classical reasoning. If you accept that the two emitted photons carry a crossed polarization, the inequality can be deduced.

Quantum mechanics does not assume that the photons have definite polarization independent of their measurement. Classical reasoning requires this, and that is what leads to the Inequality, which is ultimately a reformulation of the idea that every measured permutation must have a likelihood of occurrence between 0 and 100%. If this were true (which is the point being debated and which the Aspect experiments indicate is in fact false) then QM would not be a complete theory. Maybe. But it would not indicate that QM is "wrong". That could never happen, any more than you might consider Newton's gravitational laws "wrong".

On the other hand, the reason some people are so emotional about the Aspect experiments is this: once all "objections" are dealt with, all LHV theories must be excluded from consideration. They would be rendered totally untenable, essentially "wrong". So the issue has different stakes depending on which side you are on.

Aspect must be getting rather tired of hearing that his experiments have shown nothing. At any rate, I can agree that all voices are not in agreement on the interpretation of the results at this time. The most common conclusion I have heard is that locality has been violated, although that is not a strict conclusion from the results. And some, such as yourself, are not comfortable with the experimental procedure.

Fine, perhaps there is a flaw. I don't see the angle of attack, but perhaps it is there.
 
  • #19


Bell's Inequality has little or nothing to do with testing the predictions of quantum theory, although Aspect's experiments do confirm the predictions of QM as a by-product. The Bell Inequality requires only the essential beliefs in local reality and follows classical reasoning. If you accept that the two emitted photons carry a crossed polarization, the inequality can be deduced.

Of course it has to do with it -- the whole point was to produce a prediction of QM which no local deterministic theory would be able to reproduce. The QM prediction asserted by Bell was that QM would violate an inequality that no local deterministic theory could violate. The whole exercise would have been pointless without the QM prediction falling on the opposite side of the Bell inequality from any LHV theory.

Quantum mechanics does not assume that the photons have definite polarization independent of their measurement.

That (the assumption of the lack of definite polarization) by itself doesn't imply violation of the Bell inequality. What does imply the violation is the projection postulate, part of the QM measurement theory, when applied to the entangled state.

Classical reasoning requires this, and that is what leads to the Inequality,

That alone, without also deducing a QM prediction which will violate inequality, would be pointless.

On the other hand, the reason some people are so emotional about the Aspect experiments is this: once all "objections" are dealt with, all LHV theories must be excluded from consideration.

Emotions have nothing to do with experimental facts. If you study this subject beyond the popular literature and hype, you can find out for yourself which class of LHV theories were excluded by the experimental data and which were excluded upfront (as not being the objective of the experiments). The status is as stated in my earlier posts (or as Santos states). If you find out that I have misclassified them (as described in previous posts), I would be glad to see the correction here.

The most common conclusion I have heard is that locality has been violated, although that is not a strict conclusion from the results. And some, such as yourself, are not comfortable with the experimental procedure.

Again, this is not a discussion of your or my inner "comfort". It is a simple, straightforward question as to what has been excluded by the experimental data and what was taken out of consideration upfront. The plain fact, known to everyone in the field (since the mid-1970s, although not emphasized equally by everyone), is that the "fair sampling" constraint restricts the LHVs under test to theories in which the local variables determining the detection probabilities are independent of the variables determining the +/- outcome. It just happens that no such theories were constructed, and that the actual LHV alternatives/extensions of QM (which can make predictions) do not satisfy the "fair sampling" constraint and their predictions agree with the experimental data.

You seem to be confusing the LHVs excluded by the experiments with those excluded by Bell's inequality -- indeed all LHVs are bound by the Bell inequality, i.e. all LHVs satisfy the inequality. The only problem is that what Bell claimed to be a QM prediction violating the inequality (deduced via the projection postulate and measurement "theory") has not panned out in the experiments -- no experimental data has violated the inequality despite over three decades of trying. Only the data filtered through the additional ad hoc assumptions (always the "fair sampling" and often some others), which are outside QM and are untested on their own, violate the inequalities.

The point I brought up in this thread (along with Santos, Marshall, Barut, Jaynes, Khrennikov,... and other skeptics) is that if one looks closer at the experiments and the "fair sampling" assumption, it turns out that all the actual LHV alternatives (those actually constructed and developed, the theories making concrete predictions) are excluded by the "fair sampling" hypothesis all by itself, before any laser was turned on and before any detector counted a single count.

If you wish to draw some other line between the LHVs excluded and those not excluded by the actual data, please go ahead (without the mix-up between the QM prediction asserted by Bell and the actual experimental data). Explain what kind of LHVs the "fair sampling" hypothesis excludes all by itself.

Let's hear your version, and how your separation line shows that the experimental data (and not the "fair sampling" hypothesis) exclude the "pool ball logic" which started this thread.
 
  • #20
Nightflight:

QM does not violate Bell's Inequality because the Inequality does not apply. QM makes predictions for actual experiments of photon beams with 2 polarizers. The QM prediction for a photon beam passing through both polarizers is a function only of the angle between the polarizers. The same formula applies whether you are talking about photons in an entangled state, such as the Aspect experiment measures, or a single beam passing through consecutive polarizers. In fact, the formula is the same in classical optics too, but only when light is treated like a wave.
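A minimal numerical restatement of that formula (my own illustration, ideal lossless case):

```python
# The same cos^2 law covers both cases mentioned above: the ideal coincidence
# fraction for the entangled pair and the two-polarizer transmission of an
# unpolarized single beam, as a function of the relative angle only.
import numpy as np

def pair_fraction(delta_deg):
    """Fraction passing both polarizers, ideal case: 0.5 * cos^2(relative angle)."""
    return 0.5 * np.cos(np.radians(delta_deg)) ** 2

for d in (0, 22.5, 45, 67.5, 90):
    print(f"{d:5.1f} deg -> {pair_fraction(d):.3f}")
# 0 deg -> 0.500, 45 deg -> 0.250, 90 deg -> 0.000
```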

The problem from a LHV perspective is that the beam is postulated to have (a) an orientation which exists independently of the measurement apparatus and which was (b) determined at the time the photon was created. These two conditions are too severe to survive. You don't need the Aspect setup to see that something is wrong with that anyway. It follows from experiments anyone can do with 2, 3 and more polarizers in a single beam too. I will explain in a separate post.

The Aspect experiments are simply the logical extension of the measurement process issues which were quickly evident as QM was being formulated, a la the double slit experiment. Clearly reality does not act as it does in the classical world, and I don't understand why this point is a topic of debate. Next you will be telling me that the double slit experiment does not prove anything, either. The fact is that any way you cut it, the Heisenberg Uncertainty Relations apply and there is no observable deeper level of local reality.
 
  • #21
Originally posted by DrChinese
The problem from a LHV perspective is that the beam is postulated to have (a) an orientation which exists independently of the measurement apparatus and which was (b) determined at the time the photon was created. ... It follows from experiments anyone can do with 2, 3 and more polarizers in a single beam too. I will explain in a separate post.
Can't wait for the separate post.

Originally posted by DrChinese
The fact is that any way you cut it, the Heisenberg Uncertainty Relations apply and there is no observable deeper level of local reality.
Can you give a little detail on how the Heisenberg Uncertainty Principle relates to the double slit experiment? Seems like there should be something else at work here.

Thanks
 
  • #22
Originally posted by DrChinese
Nightflight:
QM does not violate Bell's Inequality because the Inequality does not apply.
? It appears we are not in the same discussion any more. You even changed my screen name from the one in the rest of the thread.

Next you will be telling me that the double slit experiment does not prove anything, either.

If it did prove anything regarding LHVs, there would have been no need for the EPR-Bell theorem and the related tests. Anything with a double slit or simple polarizers, single or consecutive, has trivial LHV explanations. Outside of popular literature, mostly for non-physicists, no one seriously claims any more that there is some great mystery in such phenomena. There is a difference between being suggestive and being a proof. A $9.95 tape recorder would be suggestive of spirits or some other great mystery to a Stone Age tribe not familiar with the technology.
 
  • #23
Nightlight: My apologies for a mistake on your name.

Follow up to previous post:

1. In the view of local reality, by definition, a photon has spin orientation independent of its measurement ("reality") and the observation at one point cannot affect the measurement at another point which is spacelike separated ("locality"). Even in the case of a single photon passing through multiple polarizers, there are things that happen that really don't fit the concept of local reality.

2. Take the situation in which there are two polarizers, A and Z, which are crossed (90 degrees apart). 0% of a photon beam will pass through both. Presumably, any photon which passes through the first has an orientation which will not allow it to pass the second.

But what if we put a polarizer N at 45 degrees between them? In this situation a fraction 0.5·cos²(45°)·cos²(45°) of the beam, i.e. 12.5%, will be detected on the other side. The only reality here is that the angle between subsequent polarizers is relevant. If local reality held, 0% would appear on the other side: if two polarizers stopped all the light, three should not pass more through, yet that is what occurs in reality. After all, the middle polarizer is passive and should not affect the "intrinsic" and "determinate" spin of the photons.

3. This is an exact analog to the Aspect experiment, in which case we detect 25% of the pairs (ideally) as being at both sides if the orientation of the polarizers is 45 degrees apart. In fact, every permutation of the Aspect experiment essentially mimics the case described above... i.e that the only "reality" and the only relevant parameter for calculating the percentage of photons passing through ANY 2 polarizers is the angle between the 2 polarizers.

In other words, the photon pair acts like a single photon going through 2 polarizers. The only thing gained from the Aspect setup is the knowledge that a photon which failed to pass both polarizers failed at polarizer A or Z, or both. Regardless, the concept that the photon polarity is "determinate" and "intrinsic" makes no sense, so naturally the experimental results do not support this hypothesis.

After all, you could place an increasing number N of polarizers between A and Z... And if they were spaced evenly across 90 degrees, the fraction of photons which would ultimately pass through - ideally, of course - would approach 50% in the limit as N approaches infinity. (For example, if N=2 there are 4 polarizers in total, each 30 degrees from the previous one, forming 3 pairs: f(N) = 0.5·[cos²(90°/(N+1))]^(N+1) ≈ 0.21 for N=2; see the numerical sketch at the end of this post.)

4. Try to explain this result with a LHV theory. More blocking polarizers yields more hits. Go figure. Admittedly, this formulation is not as rigorous as the Aspect experiment in some respects, but highlights the issue more in others. Every polarizer either absorbs a photon or changes its spin orientation. Clearly, measurement and observation are somehow fundamental to the process if this occurs. Which violates what "local reality" implies in the first place, see point 1. (The spin orientation is independent of the measurement. Not.) Which is why the Aspect-like experiments always violate the reality constraints, while QM has no problem.

As stated previously, the Bell Inequality is a test of local reality and not a test of QM. I really don't follow the logic on this point; it is very clear from the literature (including Bell's original work) that the constraint of local reality is an additional requirement not present in QM.
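Here is the numerical sketch referred to in point 3 (ideal lossless polarizers, my own illustration of the numbers):

```python
# Unpolarized beam through crossed polarizers A and Z with n evenly spaced
# polarizers inserted between them: 0.5 times cos^2 of the step angle, once per gap.
import numpy as np

def stack_transmission(n_between):
    gaps = n_between + 1
    step = np.radians(90.0 / gaps)
    return 0.5 * np.cos(step) ** (2 * gaps)

for n in (0, 1, 2, 3, 10, 100):
    print(f"N = {n:3d} inserted polarizers -> {stack_transmission(n):.3f}")
# N = 0 -> 0.000 (crossed), N = 1 -> 0.125, N = 2 -> 0.211, ... rising toward 0.5
```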
 
  • #24
Originally posted by dlgoff
Can't wait for the separate post.


Can you give a little detail on how the Heisenberg Uncertainty Principle relates to the double slit experiment? Seems like there should be something else at work here.

Thanks

EPR contended that there would be violation of the HUP in the case of the entangled particles, as one could be used to gain "extra" (and "prohibited") information about the other. The HUP acts as a barrier to such knowledge, always foiling our perception of the underlying "reality" we strive to see - if there was such a thing.

If the photon spin is truly determinate (as postulated in LHV theories), then isn't the path of the particle in the double slit experiment determinate as well? If so, how does it interfere with itself? We already know the answer: neither is determined until observation/measurement.

Don't shoot me, I'm just the messenger.
 
  • #25
4. Try to explain this result with a LHV theory. More blocking polarizers yields more hits. Go figure.

There is nothing non-classical in any of this. The described behavior of polarizers is entirely within the classical EM theory and no one in the 19th century would have been surprised by the 1, 2 or N polarizer results. The EM wave leaving the polarizer has its E vector aligned with the polarizer's axis and amplitude E(a) = E*cos(a) (where 'a' is the angle between the incident E and the polarizer axis). The resulting EM wave energy density is rho(a) = const*E(a)^2. With 2 perpendicular polarizers, the 'a' at the 2nd polarizer is 90 degrees, leaving the outgoing wave energy = 0. The detector simply triggers with a probability proportional to the accumulated incident energy.

The problem you're having is due to your insistence on picturing throughout a marble-like photon which goes one way or the other (or gets absorbed by the polarizer). There is no reason for that, i.e. no empirical fact makes such a marble-like photon picture necessary.

In popular accounts of polarized light going through the Nicol prism (or a light through half-silvered mirror) the usual argument in support of marble-like photon is that if two detectors, A and B, are placed on each outgoing path, it is asserted that only one detector will trigger in each try, which indeed looks marble-like.

In fact that is not what happens -- in each try you get one among all 4 combinations of the (AB) triggers: (00),(01),(10),(11), and they are distributed exactly as if the energy of the incident packet was split into sin^2(a) and cos^2(a) outgoing wave packet fragments, and each fragment had triggered (or not triggered) its own detector, with probability proportional to the fragment's energy, entirely independently of the other, i.e. no collapse of fragment A occurs when fragment B triggers its detector. The AB trigger distribution is exactly what a 19th century physicist would predict -- each wave packet fragment travels along its own path, strikes its detector and, based on the energy of the fragment and the momentary state of fluctuations of the detector atoms and fields, it triggers or it doesn't trigger the avalanche regardless of what the other fragment does in its own detector.

The standard explanation for this behaviour is that the incoming photons are not produced one by one but in a Poisson distribution. Thus the case (11), where both detectors triggered, simply means there were two photons in that try. It has been shown (Sudarshan & Glauber theory of optical coherence in 1963, the foundation of modern quantum optics) that in these types of experiments the quantum optics predictions are precisely identical to the semi-classical predictions (as sketched above, i.e. each packet fragment reaching and triggering, or not triggering, independently its own detector). The "semi" prefix refers to the fact that atoms in the detectors are treated via QM (to deduce probability and intensity of the avalanche/trigger), while the light is treated entirely classically.
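Here is a bare-bones sketch of that semiclassical picture, with my own made-up numbers for the angle and the detector efficiency: the packet splits into cos^2 and sin^2 energy fragments, and each detector fires independently, so all four trigger patterns occur, including the "both fire" case.

```python
# Semiclassical toy: each detector triggers independently on its own wave-packet
# fragment, with probability proportional to that fragment's energy.
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
a = np.radians(30.0)                       # angle between packet polarization and splitter axis (assumed)
eta = 0.8                                  # arbitrary detector efficiency scale (assumed)
pA, pB = eta * np.cos(a) ** 2, eta * np.sin(a) ** 2

trials = 100_000
A = rng.uniform(size=trials) < pA          # detector A fires on its fragment alone
B = rng.uniform(size=trials) < pB          # detector B fires independently of A
counts = Counter(zip(A.tolist(), B.tolist()))
for pattern in [(False, False), (False, True), (True, False), (True, True)]:
    print(pattern, counts[pattern] / trials)
# The (True, True) rate is simply pA*pB, not zero as a strictly "marble-like"
# either/or photon picture would require.
```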

Obtaining the marble-like (either-or) AB triggers requires a source that produces pairs in a setup similar to Aspect's EPR-Bell experiment (e.g. see the P. Grangier, G. Roger, A. Aspect 1986 paper in Europhysics Letters, Vol. 1(4), pp. 173-179, "Experimental Evidence for a Photon Anticorrelation Effect on a Beam Splitter: A New Light on Single-Photon Interferences"). In that setup the pair source is an atomic cascade. One photon of the pair is branched into the A and B paths (via a polarizer or half-silvered mirror), while the other photon is detected separately. Only through the coincidence measurement and the post-selection with "accidental" coincidence subtractions (which together amount to removing, in effect, by hand the (11) case) do they get the "ideal" marble-like (sub-Poissonian) behavior, (10) and (01), on the "adjusted" data. The non-adjusted data is, of course, doing the same (classical) thing as before. Marshall, Santos and their grad students have over the years worked out purely classical models (based on stochastic optics, a subset of SED) for these (and for the newer PDC-pair-based) anticorrelation experiments, reproducing exactly the measured data.

Admittedly, this formulation is not as rigorous as the Aspect experiment in some respects, but highlights the issue more in others.

Agreed, the Bell inequality and its tests are the only sharp criteria which explicitly place limits on LHV theories. As explained before, so far the EPR-Bell experiments have failed to exclude the already existing LHVs (which reproduce exactly the measured data). While this fact is not disputed, it is not publicized much either (it is especially underplayed in the teaching of QM, and entirely ignored in the 'quantum mystery' popular science genre).

As stated previously, the Bell Inequality is a test of local reality and not a test of QM. I really don't follow the logic on this point; it is very clear from the literature (including Bell's original work) that the constraint of local reality is an additional requirement not present in QM.

The point of Bell's theorem was to repair the flaws of the earlier von Neumann and Kochen-Specker proofs, which claimed to show that QM predictions are incompatible with (cannot be reproduced by) any hidden variable theory. Since by that time there were already hidden variable theories (such as the de Broglie-Bohm pilot wave theory) which reproduced all QM predictions exactly, it was obvious that the earlier "proofs" were flawed -- the de Broglie-Bohm counter-example shows that QM predictions do not exclude all conceivable hidden variable theories.

Therefore Bell's objective was to find some subset of "all conceivable HV theories" (those that are in some way 'reasonable') plus a QM prediction which cannot be reproduced by any HV theory from this subset. The subset he came up with was "Local Hidden Variable theories". He then used the EPR-Bohm spin experiment and showed that all LHV theories had to satisfy his inequality and that the QM prediction for the same setup would violate it.

Without the QM prediction falling on the opposite side of the inequality from the LHV theories there would have been no point to the whole exercise. His objective was to show that QM makes empirical predictions which no LHV theory can reproduce. That was what his inequality was for -- QM fell on one side (labeled by convention as "violation of Bell's inequality") and LHV theories on the other ("satisfying Bell's inequality").

In order to deduce his QM prediction, Bell used the "measurement theory," specifically the projection postulate for the composite state. If that postulate had been experimentally verified by that time, the LHV theories would have been done for. It turns out that there was no experiment at the time which directly tested this postulate, even though the postulate goes back to the 1920s (von Neumann's measurement chain) -- nothing that QM was used for in all those decades depended in any essential way on that postulate -- it was a backwater add-on to QM without any consequences that mattered (other than the semi-philosophical QM measurement debate, which rested squarely on none other than the flawed von Neumann "HV impossibility" proof; i.e. without that "HV impossibility" there would have been no need for a "measurement theory" or "collapse", since objects could have had definite properties all along, just as they do in classical physics).

Therefore the EPR-Bell test had to be done to verify that Bell's QM "prediction" actually matches what nature does (this would also be an indirect test of the projection postulate). As explained at length in earlier posts, so far that QM prediction has not been reproduced by the experimental data. Only the "data" post-selected 'by hand' using the fair-sampling hypothesis (and often additional ones) can be stretched to fit Bell's QM prediction (thus made to violate the inequality).

Of course, an EPR-Bell experimenter could view his job as taking Bell's inequality and simply trying to get a result which violates it, so that the experimental data all on their own would exclude all LHV theories. Maybe this is what you're talking about when stating that the QM prediction wasn't being tested? In that sense, yes, the experiment all by itself will say whether LHVs can reproduce its data or not. And so far the existing LHVs do reproduce the measured data. Only when these LHV theories (which work) are excluded by the "fair sampling" hypothesis, and the data reshaped according to this hypothesis, does the reshaped data violate the inequality.

But this violation is no longer the same one that figures in Bell's paper (which, if violated, excludes all LHV theories) -- it is a much weaker exclusion, since the domain of LHVs tested is reduced from "all LHV theories" in Bell's paper to "a subset of LHV theories satisfying the fair-sampling hypothesis" (a subset which conveniently doesn't include any existing LHV theory).
 
Last edited:
  • #26
Originally posted by DrChinese
Don't shoot me, I'm just the messenger.
Not a chance.
Originally posted by DrChinese
The HUP acts as a barrier to such knowledge, always foiling our perception of the underlying "reality" we strive to see - if there was such a thing.
Do you think that we will ever understand the underlying "reality"?
Originally posted by nightlight
There is nothing non-classical in any of this. The described behavior of polarizers is entirely within the classical EM theory and no one in 19th century would have been surprised by the 1,2 or N polarizer results.
I see what you mean. So in the double slit experiment, you don't have just one photon at a time (particle description), you just have a really small-amplitude wave which can interfere with itself?

Thank you both for such a good discussion.
 
  • #27
I see what you mean. So in the double slit experiment, you don't have just one photon at a time (particle description), you just have a really small-amplitude wave which can interfere with itself?

That is correct. There is no empirical fact as yet requiring a marble-like photon. There are, of course, suggestive experiments (such as the anticorrelation experiments) but all of those on closer inspection have contained "loopholes" allowing purely wave model explanations.

There is one interesting and not very obvious mechanism in Stochastic Electrodynamics (SED) used to explain many marble-like photon behaviors. But first, a brief intro.

SED is plain classical electrodynamics with a particular set of boundary & initial conditions -- small background fluctuations, the Zero Point Field (ZPF), which is the classical counterpart of the Quantum Electrodynamics (QED) vacuum fluctuations (equivalent to an extra 1/2 photon per mode). The ZPF frequency distribution is Lorentz invariant, with energy density depending on the frequency w as w^3 (this is the only Lorentz-invariant class of distributions, i.e. the ZPF is quite a unique, highly constrained distribution).
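For reference, the ZPF spectral energy density usually quoted in the SED literature (per unit volume, per unit angular frequency) is

$$\rho_{\rm ZPF}(\omega)\,d\omega = \frac{\hbar\,\omega^{3}}{2\pi^{2}c^{3}}\,d\omega ,$$

which is just the mode density ω²/(π²c³) times an average energy of ħω/2 per mode; the ω³ dependence is what makes the spectrum look the same in every inertial frame.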

Photodetection in the SED model is merely a measurement of the excess EM energy over the ZPF level. This normalization to the ZPF level corresponds exactly to the 'normal ordering' procedure of QED, which amounts to subtracting precisely the divergent QED vacuum energy (the 1/2 add-on to the photon number operator).
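In QED notation, the subtraction being referred to is just the difference between the free-field mode energy and its normal-ordered version,

$$\hat H_\omega = \hbar\omega\left(\hat a^\dagger\hat a + \tfrac{1}{2}\right) \;\longrightarrow\; {:}\hat H_\omega{:} = \hbar\omega\,\hat a^\dagger\hat a ,$$

i.e. normal ordering removes exactly the ħω/2 vacuum term per mode whose role the ZPF plays on the classical side.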

The non-obvious mechanism by which SED produces an appearance of marble-like photons is the introduction of ZPF defects, i.e. wave packets which are regular EM wave packets except that the energy within them is below the ZPF level. This is a simple mathematical consequence of the EM field produced by photon sources superposing with the random ZPF waves -- some superpositions generate packets with above-ZPF energy, some with below-ZPF energy, depending on the exact phases & amplitudes of the superposed components. Since the ZPF is not controlled by the experimenter, the net results always need to be averaged over all possible ZPF amplitudes/phases (weighted by the ZPF w^3 density distribution).

The ZPF defects, produced via ZPF mixing (superposition) with the external wave packets on half-silvered mirrors and polarizers, propagate coherently with the matching ZPF excesses, i.e. they can interfere if their paths are again brought together. Since the ZPF defects are regular EM waves, any optical devices -- mirrors, prisms, polarizers, etc. -- affect them exactly the same way as they do the ZPF excesses.

The key observation leading to the marble-like (either-or) behavior of the wave packets is that ZPF defects do not trigger photodetectors -- the photodetector trigger thresholds are always calibrated to exclude the sub-ZPF fluctuations (as much as possible; detectors always have a spontaneous background trigger rate, even at a temperature of 0 K and without any external EM waves, the so-called "dark current" -- this background count rate is always explicitly subtracted from the measured count rates). The anticorrelation (either-or) phenomenon on a half-silvered mirror is then merely the case of a ZPF defect propagating along one path and a ZPF excess along the other. The defect doesn't trigger its detector while the excess does (statistically, of course), so it will superficially appear as if "the whole photon" went just one way (on the side of the excess). At the same time, even though seemingly "nothing" went the other way (no detector count produced on that side), this strange "nothing" is fully sensitive to any optical elements placed in its path (which manifests in the final interference, when the paths are brought together, being sensitive to the optical elements placed along one of the paths).
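Here is a minimal toy Monte Carlo sketch of that either-or mechanism (illustrative only: the signal amplitude, the Gaussian stand-in for the ZPF, and the threshold are arbitrary assumed values, not parameters taken from any of the cited papers):

```python
import random

def run_trials(n_trials=100_000, signal=1.0, zpf_sigma=0.4, threshold=0.45):
    """Toy either-or model: a classical packet of amplitude `signal` is split
    50/50 on a mirror; each output arm superposes with an independent random
    'ZPF' amplitude (Gaussian stand-in).  A detector fires only when the arm's
    energy, measured relative to the mean ZPF energy, exceeds a threshold."""
    counts = {"00": 0, "01": 0, "10": 0, "11": 0}
    for _ in range(n_trials):
        fired = []
        for _arm in ("A", "B"):
            amp = signal / 2**0.5 + random.gauss(0.0, zpf_sigma)  # arm amplitude + ZPF
            energy_above_zpf = amp**2 - zpf_sigma**2              # subtract mean ZPF energy
            fired.append("1" if energy_above_zpf > threshold else "0")
        counts["".join(fired)] += 1
    return {k: v / n_trials for k, v in counts.items()}

if __name__ == "__main__":
    # All four (A,B) outcomes occur, with rates set by the assumed threshold
    # and ZPF spread -- no marble-like either/or rule is put in by hand.
    print(run_trials())
```

Nothing in the sketch is quantum; the single-detection events and the occasional "both fired"/"neither fired" tries come entirely from the threshold plus the uncontrolled ZPF term.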

The popular accounts often embellish the story by claiming that in each try the photon "went just one way," so it must definitely be a particle. That doesn't actually happen in every try, but only in a fraction of tries, precisely as indicated by the ZPF distribution. Some superpositions produce no ZPF defect, some can produce two defects, and some a single defect. The total measured counts always match the average over all cases, weighted by the ZPF distribution.

Further refinements are obtained if one notices that the "dark rate" subtractions above, in the case of ZPF defects, result in negative above-ZPF energy, i.e. apparent negative detector counts. The actual counts are, of course, always positive or zero. The negative counts simply mean that the count rates are below the "dark current" rates. It has in fact been known since the 1930s that QM probabilities would behave just like classical ones if one could allow for negative probabilities (in the joint probability distributions for complementary observables, the so-called Wigner distributions; e.g. the joint distribution for simultaneously exact position and momentum). These "negative"/sub-ZPF counts are precisely the physical interpretation of the negative joint probability distributions (check the numerous papers by the Marshall-Santos group on the LANL server for computations of the Wigner distributions in this context).
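For reference, the Wigner quasi-probability distribution mentioned above is the standard construction

$$W(x,p) = \frac{1}{\pi\hbar}\int_{-\infty}^{\infty} \psi^{*}(x+y)\,\psi(x-y)\,e^{2ipy/\hbar}\,dy ,$$

which reproduces the correct position and momentum distributions as its marginals but can go negative for some states -- that is the "negative probability" being referred to.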
 
Last edited:
  • #28
Originally posted by nightlight
The problem you're having is due to your insistence on picturing throughout a marble-like photon which goes one way or the other (or gets absorbed by the polarizer). There is no reason for that, i.e. no empirical fact makes such a marble-like photon picture necessary.

In popular accounts of polarized light going through a Nicol prism (or light through a half-silvered mirror), the usual argument in support of a marble-like photon is the assertion that, with two detectors A and B placed on the two outgoing paths, only one detector will trigger in each try, which indeed looks marble-like.

In fact that is not what happens -- in each try you get one among all 4 combinations of the (AB) triggers: (00), (01), (10), (11), and they are distributed exactly as if the energy of the incident packet were split into sin^2(a) and cos^2(a) outgoing wave-packet fragments, and each fragment had triggered (or not triggered) its own detector, with probability proportional to the fragment's energy, entirely independently of the other, i.e. no collapse of fragment A occurs when fragment B triggers its detector. The AB trigger distribution is exactly what a 19th-century physicist would predict -- each wave-packet fragment travels along its own path, strikes its detector and, based on the energy of the fragment and the momentary state of the fluctuations of the detector atoms and fields, triggers or doesn't trigger the avalanche, regardless of what the other fragment does in its own detector.

...

Of course, an EPR-Bell experimenter could view his job as taking Bell's inequality and simply trying to get a result which violates it, so that the experimental data all on their own would exclude all LHV theories. Maybe this is what you're talking about when stating that the QM prediction wasn't being tested? In that sense, yes, the experiment all by itself will say whether LHVs can reproduce its data or not. And so far the existing LHVs do reproduce the measured data. Only when these LHV theories (which work) are excluded by the "fair sampling" hypothesis, and the data reshaped according to this hypothesis, does the reshaped data violate the inequality.

But this violation is no longer the same one that figures in Bell's paper (which, if violated, excludes all LHV theories) -- it is a much weaker exclusion, since the domain of LHVs tested is reduced from "all LHV theories" in Bell's paper to "a subset of LHV theories satisfying the fair-sampling hypothesis" (a subset which conveniently doesn't include any existing LHV theory).

Well-written, and I agree with most everything you are saying here. I guess if I accepted the objections to the Aspect experiment then I would agree with your conclusion that the measurement at one polarizer is independent of the measurement at the other polarizer. But I think the evidence clearly falls the other way.

BTW, I would be the last person to think of the photon as a marble. In fact, our thread started out talking about pool balls and I have pointed out my objections to using this analogy when discussing the Bell Inequality because I don't think it is appropriate. But I am not opposed to a good visualization when there is one.

Also, for those who are interested, you can read Aspect's paper at this link: Experimental Tests of Bell's Inequalities in PDF format. His "adjusted results" were as follows:

S(experimental) = 0.101 +/- 0.02

a. where the LHV-predicted bound (Bell's inequality), S <= 0, was violated by 5 standard deviations (see the quick check below).
b. where the QM-predicted value was 0.112, within the experimentally observed range.
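A quick check of the significance figure in (a), using only the two numbers above:

$$\frac{S_{\rm exp} - 0}{\sigma} = \frac{0.101}{0.02} \approx 5 ,$$

i.e. the adjusted result sits about 5 standard deviations above the LHV bound S <= 0.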

So nightlight, if I understand your position correctly: if the results were not adjusted and the raw data were used instead, would we have seen an experimental result for S below 0? Or are you saying the S <= 0 requirement of the inequality should be adjusted upward or thrown out altogether?

Thanks, and good discussion.
 
Last edited:
  • #29


Originally posted by DrChinese
2. The only question - as nightlight argues to the negative - is whether ALL Local Hidden Variable theories are excluded as a result of Aspect's experiments. The reference paper cited (Santos) asserts there ARE at least some LHV theories which would yield predictions in line with the results of the Aspect experiments. (Personally, I question the conclusion but am still reviewing this.)

Bell's inequality integrates over the domain of hidden variables -- this is only meaningful if the domain of said hidden variables is measurable. Someone named Pitowsky published a paper in the '80s about a model that uses non-measurable spin functions.
 
  • #30
His "adjusted results" were as follows:

S(experimental) = 0.101 +/- 0.02

a. where the LHV-predicted bound (Bell's inequality), S <= 0, was violated by 5 standard deviations.
b. where the QM-predicted value was 0.112, within the experimentally observed range.

So nightlight, if I understand your position correctly: if the results were not adjusted and the raw data were used instead, would we have seen an experimental result for S below 0? Or are you saying the S <= 0 requirement of the inequality should be adjusted upward or thrown out altogether?
Of course. The "S" above is based on less then 10 percent of the actual singlets (representing in turn just around 1-2 percent of pairs produced), the rest being thrown out or missed, or as they put it euphemistically (p 1806):
"At the end of each 400-s period, the raw data were stored for subsequent processing with the help of a computer.[...] The statistical accuracy is evaluated according to standard statistical methods for photon counting.
A student reading the paper would skim over these two bits of apologia/obfuscation with no clue as to how much data was thrown out or missed (almost all of it) or on what basis, and no sense of what the seemingly small error figure means. Even going back through their citations won't clarify much, unless you know upfront what you're looking for. A much clearer picture can be found in Aspect's actual PhD thesis (it's in French; I don't think it was translated into English), where all the details and raw data are given. Or you can seek the raw data from the recent PDC experiments; they are much more likely to be available.

If one were to plot the full data, their S(theta) function would have been entirely and safely within the classical/LHV region, with a barely visible (few percent) sinusoidal deviation (still well within the LHV bounds) corresponding to the variation of their filtered data (which matches the QM prediction). Aspect's cascade-based setup, due to the three-body dynamics, wasn't anywhere near closing the "loopholes."

And that is the problem with getting only one side of the story. In grad school I spent a couple of years on this (while doing a master's thesis on EPR-Bell and measurement problems; later I switched to gravity), and even after reading hundreds of papers, preprints and books, I came out with purely conventional conclusions. It was only a few years later, after I had left academia and gone to work in industry, while visiting an industrial lab where my wife (also a physicist) was working down-and-dirty with photon coincidence setups (for pollution-detection instruments), that it suddenly dawned on me: "wait a sec, that's not at all what I imagined all this time these detections and photons were about." I spent a few days going over my earlier notes, ran a few simulations in Mathematica and C, and rediscovered what, I soon found out, has been known but little publicized since the early 1970s (Pearle's missing-data model). Then I found and tracked down the rest of the trail of opposing views, with the Marshall-Santos, Jaynes and Barut groups being the three most articulate and productive. My advisor (basically a mathematical physicist, playing with mathematical properties of entangled states) never mentioned any of that substantial body of work.
 
Last edited:
  • #31
Nightlight:

Thanks for the information on your background in the subject, obviously very impressive. Like many, I had not looked at the counting methods in much detail, assuming others had visited this ground before. (Shows what happens when I assume...)

If I understand your response then, you believe that S(experimental) would have actually been in the (S<=0) region? I would be quite surprised if the raw data pointed in one direction and the adjusted data another, so I want to be sure I understand specifically what you are saying on this point. Thanks.
 
  • #32
Originally posted by DrChinese
If I understand your response then, you believe that S(experimental) would have actually been in the (S<=0) region? I would be quite surprised if the raw data pointed in one direction and the adjusted data another, so I want to be sure I understand specifically what you are saying on this point. Thanks.
Yes, definitely S(theta) <= 0 for all theta points on the full data. This was their second version (with varying polarizers; it was weaker regarding the other loopholes than Aspect's fixed-polarizer case).

Right after these experiments, Marshall and Santos, having examined the raw data, pointed out that the "accidental" coincidence subtractions, all by themselves (ignoring the even larger fair-sampling problem), were what took S into the non-LHV region. The detector thresholds had been tuned by Aspect's group just so: if one were to leave out the "accidentals" subtraction, the data would fit exactly the most simple-minded classical model (curiously, this seems to be overall the most efficient setting, the sweet spot for getting the best statistics in the shortest time). The debate between the two groups went on for a few rounds (mostly in Nuovo Cimento, until about 1985), resulting in Aspect's team redoing the fixed-polarizer experiment with different detector thresholding to get around the "accidentals" objection, which indeed they did.
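For readers unfamiliar with the term: the "accidentals" are estimated by the standard coincidence-counting rule -- with uncorrelated singles rates N1 and N2 at the two detectors and a coincidence window tau, the expected rate of purely accidental coincidences is approximately

$$R_{\rm acc} \approx N_1\,N_2\,\tau ,$$

and it is this estimated rate that gets subtracted from the measured coincidence rate. The dispute above is over whether that subtraction is legitimate when deciding which side of the inequality the data fall on.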

Their cascade-source-based setup, though, has a much more serious problem with the three-body dynamics, resulting in a very small fraction of the produced pairs being counted at all (<2%; which is where the "fair sampling" comes in). So the S(theta) curve is always well within the classical region, and only the adjustment based on the fair-sampling assumption (which allows extrapolation, or conjecture, as to what the missing pair data look like) raises it into the QM region.
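As a rough illustration of why the counted fraction is so small (the numbers here are assumed purely for illustration, not figures from the paper): if the combined collection-and-detection efficiency on each side is of order η ≈ 0.1, the detected-pair fraction scales roughly as

$$f_{\rm pairs} \sim \eta_1\,\eta_2 \approx 0.1 \times 0.1 = 10^{-2},$$

i.e. a percent-level fraction even before any angular-correlation losses from the cascade's three-body recoil.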

Marshall & Santos subsequently worked out a more detailed stochastic optics (LHV) model which reproduced exactly what the improved experiments of Aspect's group did. By that time, though, journal editors were no longer interested (replying, in effect, "everyone knows there is a loophole, nothing new here, nothing to see, move on"), which ended up leaving Aspect's group with the last word in the major journals.
 
Last edited:
  • #33
Originally posted by nightlight
Yes, definitely S(theta) <= 0 for all theta points on the full data. This was their second version (with varying polarizers; it was weaker regarding the other loopholes than Aspect's fixed-polarizer case).

...

Their cascade-source-based setup, though, has a much more serious problem with the three-body dynamics, resulting in a very small fraction of the produced pairs being counted at all (<2%; which is where the "fair sampling" comes in). So the S(theta) curve is always well within the classical region, and only the adjustment based on the fair-sampling assumption (which allows extrapolation, or conjecture, as to what the missing pair data look like) raises it into the QM region.

Sounds like the experiments will need updating. I am kind of surprised this experiment is not repeated regularly in university labs; it seems like the kind that would fall into that category. I will try to study some of your references more thoroughly so I can discuss the weaknesses of the testing methodology more intelligently.
 
  • #34
The AB trigger distribution is exactly what a 19th-century physicist would predict -- each wave-packet fragment travels along its own path, strikes its detector and, based on the energy of the fragment and the momentary state of the fluctuations of the detector atoms and fields, triggers or doesn't trigger the avalanche, regardless of what the other fragment does in its own detector.

How does this explain why all the packet fragments are exactly the same size (for a given wavelength)?
 
  • #35
How does this explain why all the packet fragments are exactly the same size (for a given wavelength)?
They are not of the same size (in total energy or in spatial extent) from try to try. Even in the case of a 50% half-silvered mirror and non-polarized light (or, similarly, polarized light at a Pi/4 angle to the polarizer axis), the A and B fragments are the same only on average. In each individual try the A and B wave-packet fragments are different, since they superpose/mix with the random ZPF. Note also that the ZPF add-ons may mutually cancel when the A-B paths cross again, since they are coherent with each other (as are the source "photon" fragments), i.e. on the separate paths you can have A+Z and B-Z, but when they cross later you can get (A+Z)+(B-Z) = A+B, as if there were no ZPF; note that Z is a random ZPF add-on, so the total statistics is obtained by averaging over the ZPF distribution. While the A/B paths are separate, though, the ZPF mixing has its maximum effect, since it can make the difference as to whether a fragment triggers a single-path detector or not (i.e. whether A+Z or B-Z is well below the detector threshold, which is normally calibrated to ignore the sub-ZPF fluctuations).
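A tiny numeric sketch of that cancellation point (again purely illustrative, with arbitrary assumed amplitudes): the per-arm energies fluctuate from try to try because of the random Z, yet recombining the arms removes the ZPF contribution exactly.

```python
import random

def one_try(signal=1.0, zpf_sigma=0.4):
    """Split `signal` 50/50; add a coherent random ZPF term +Z to one arm
    and -Z to the other (toy stand-in for the correlated ZPF add-ons)."""
    z = random.gauss(0.0, zpf_sigma)
    arm_a = signal / 2**0.5 + z
    arm_b = signal / 2**0.5 - z
    recombined = arm_a + arm_b  # the +Z/-Z terms cancel exactly
    return arm_a**2, arm_b**2, recombined

if __name__ == "__main__":
    for _ in range(3):
        e_a, e_b, rec = one_try()
        # Per-arm energies fluctuate from try to try, but the recombined
        # amplitude is identical every try because the ZPF add-ons cancel.
        print(f"E_A={e_a:.3f}  E_B={e_b:.3f}  recombined={rec:.3f}")
```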

See my post after the one you quoted, where more details are given on ZPF superposition and its consequences. When popular/didactic descriptions speak of same-size fragments which later interfere, they are omitting the details of the much more complicated real setup (more similar to the anticorrelation setup) and are presenting the net result of already heavily post-filtered statistics (where the effects of averaging, "accidental" and multi-photon coincidences, the Poissonian or super-Poissonian actual source, etc. are subtracted away), giving a misleading impression of what actually happens in each try or in the full unfiltered sequence. Check the Grangier et al. paper I cited earlier on the anticorrelations and you will realize how far the typical popular/didactic claim that in each try the "photon always goes just one way" is from the actual data.
 
