Cramer's Backward Causality Experiment

  • #1
sanman
Anyone read about this?

http://cosmiclog.msnbc.msn.com/archive/2007/07/17/274531.aspx

Now brace yourself for the backward-causality part: Because Signal B followed a shorter route to its detector, the fiddling in Signal A could theoretically show up in Signal B before Cramer actually fiddles with Signal A. It would be as if Cramer's actions had an effect that worked backward in time.

I'm not sure I agree with that part. I'd say that the effect in signal B would at best show up simultaneously with Cramer's tampering with signal A.

Creating that longer circuitous route for signal A could be a way to simulate non-locality. After all, if signal A has to travel a longer route, it's as if it ends up at some distant location. So I suppose, in a technical way, simultaneity across non-local distances is a facsimile of time travel, but it isn't really genuine time travel.

This is like when Stephen Hawking said, "I think we can look into the past, but we can't alter it."

The Cramer experiment would work along the lines of Hawking's statement. Agree or disagree? Comments?

(Gee, how come nobody thought to do this experiment before, given all the investigation into these issues?)
 
  • #2
IMHO, the most accurate statement is that the effect doesn't show up until after the experiment is performed; no effect is visible until you have access to both data sets and can analyze correlations.
 
  • #3
I actually wrote to Cramer, after the thread at https://www.physicsforums.com/showthread.php?t=166364 linked to specific details he has given for the new experiment. I basically said that his apparatus looks exactly the same as the Delayed Choice Quantum Eraser (an experiment already done, which he did not reference). He replied to me, saying the DCQE is irrelevant because of the coincidence correlation requirement: a requirement which, importantly, his experiment removes. He emphasised that the new experiment is a synthesis of a previous experiment by the DCQE group and an experiment from a particular German PhD thesis (which, strangely, seems not to have led to any journal publications).

I replied back to ask about a specific detail* of his apparatus, but also noted that the "ghost fringes" in the experiments he referenced were only observed in coincidence** measurements. He didn't respond again.

Anyway, his apparatus is conceptually clear (even if it fails as I expect), so once published the experiment should have pedagogical value (especially when students misinterpret the DCQE). It's good news that it may be finished later this year.

*If I knew what he was using as the optical combiner, I think I could show where the path-information goes, and hence that it doesn't disappear. His one reply sort-of gave the impression he thought the coincidence requirement basically came only from "noise" photon pairs that don't interact with the slits.
**A colleague thought Cramer may have mistakenly overlooked this aspect.
 
  • #4
Anyway, do you think that "long circuitous route" can be used to simulate "non-local scales"?

For instance, if you have a beam of light travel straight out for a distance of 1 light-year, then you know that's obviously a non-local scale of distance. But if you have your beam of light travel the same distance along some long circuitous route which merely ends up taking it only a few inches away from the original starting point, then is it still roughly the same thing?

It seems to me that the Cramer experiment could merely serve as a partial disproof. If you tamper with signal A, and then your signal B which has taken the long circuitous route exhibits the same tampering effect much later, then you have disproven that "spooky action" is instantaneous. But if signal B demonstrates a tampering effect immediately, that result may not be conclusive, because it might merely be due to the long circuitous route not truly faking the non-locality.
 
  • #5
How does this differ from what was done in http://arxiv.org/PS_cache/quant-ph/pdf/0106/0106078v1.pdf by Walborn? They used a coincidence circuit in that setup, but in principle it's not necessary, right?

I'm curious if Cramer's setup can collect enough photons, even with a 300 mW pump, to generate an interference pattern within the short period of time between S1 and S2 to demonstrate "backwards causality".
 
  • #6
ged94 said:
They used a coincidence circuit in that setup, but in principle it's not necessary, right?
Not right.

(Please let me know if you think there is any published record of entangled particles producing interference patterns outside of coincidence.)
 
  • #7
cesiumfrog said:
Not right.

(Please let me know if you think there is any published record of entangled particles producing interference patterns outside of coincidence.)

In virtually all of these sorts of experiments Bell inequality tests are performed to demonstrate that the photons are in fact entangled, and it's not possible to do that without coincidence counting. Also, using coincidence circuits is the easiest way to avoid noise, especially when Type I downconversion is used, where the signal-to-noise ratio is exceptionally low. But I don't see how the coincidence counting itself somehow retroactively changes the Bell state after both photons have already been registered. That seems even spookier than what Cramer is trying to demonstrate.

I do think that the "backwards causality" leg of the experiment will probably fail, since IMO they are misinterpreting the delayed erasure phenomenon; ie the wavefunction collapses at the interference pattern and coincidence counting is necessary to "pick and choose" the corresponding idler photons. But if the wavefunction collapses with the detection of the idler photons first then the signal photons should form the appropriate dispersion pattern irrespective of coincidence counting. And this positively demonstrates the "nonlocal communication" leg of the experiment.

Without the coincidence circuit they'll need to subtract the noise out after the fact, and this will surely require more delicate and expensive equipment, not to mention longer collection periods. But I don't see why it shouldn't work.

Or am I missing something?
 
  • #8
It seems you would agree with Cramer that coincidence counting is there just to reduce noise, whereas the mainstream community thinks coincidence counting is there to pick a fringe pattern out of a Gaussian distribution (leaving an anti-fringe pattern behind).

What you're missing is that regardless of how long the collection period is, and even with theoretically zero noise, there is no existing reason to expect the gross signal to show an interference pattern, ever. As far as we know, no such thing has ever been demonstrated, even though many such experiments have already been performed.
 
  • #9
I think we're talking about two different things here. With the 3 or 4 detector scheme outlined in Kim's "Delayed Choice Quantum Eraser" I agree that the fringes and antifringes are distinguished only when the which-way path has been determined after the fact. A classical communication channel is obviously necessary there. But with the setup I referenced above, the interference pattern forms no matter what the final state of the idler photon turns out to be. The only thing that matters is the simple fact that the idler has been detected -- accordingly, no additional information needs to be exchanged classically to determine the dispersion pattern.

If coincidence counting were necessary in this particular design, the coincidence circuit itself would be determining the Bell state that had already collapsed. How would that be possible?
 
  • #10
In the experiment that you mention (though it has not yet passed peer review; note that for several reasons it is preferable to link http://arxiv.org/abs/quant-ph/0106078 only), the which-path information for the signal photon can in principle be determined from its initial polarisation along a special axis (or equivalently, from the polarisation of its entangled idler photon "p" along that axis). Consequently, there is no gross interference (regardless of whether the signal photon is measured before or after the idler), as shown in figures 3 and 7.

If a polariser is placed in the idler beam path, only half of the idler photons will be detected (but by rotating the polariser, you can choose which half). By appropriate choice of rotation, you will only detect idler photons for which the corresponding signal partner-photon's initial polarisation (along the important axis) can not be determined, and this half of the signal photons will produce fringes (fig 4 and 8). However, the opposite half of the idler photons (which are blocked by the polariser) would have corresponded to signal partner-photons that produced *antifringes* (for proof see figures 5 and 9). There is no way to distinguish these two sets of signal photons without using the measurement results from the idler photons (ie. that's why correlations/coincidence-counting *is* necessary). So either the idler photons are not measured for polarisation (and you get fig. 7), or the idler photons are measured for polarisation (and you still get fig. 7, which is just the sum of a fringe pattern superimposed with an anti-fringe pattern), either way you can't tell the difference by only looking at the signal photons.
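The cancellation being described can be checked with a toy numerical sketch (the fringe spacing and Gaussian envelope here are purely illustrative parameters of my own, not taken from the referenced paper):

```python
import numpy as np

# Toy model: signal-photon position distribution on the screen.
x = np.linspace(-3, 3, 601)          # detector position (arbitrary units)
envelope = np.exp(-x**2)             # smooth Gaussian diffraction envelope

# The half of signal photons selected by coincidence shows fringes;
# the complementary (blocked) half would show antifringes.
fringes = envelope * (1 + np.cos(8 * x)) / 2
antifringes = envelope * (1 - np.cos(8 * x)) / 2

# What the signal arm alone records is the sum of both halves:
total = fringes + antifringes

# The oscillating terms cancel exactly, leaving only the envelope.
assert np.allclose(total, envelope)
```

So looking only at the signal photons you always get the smooth envelope; the fringe/antifringe split exists only relative to the idler measurement results.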

In summary, I'm saying you've not understood the experiment you cited, because it only lends further evidence to my argument. uh.. "Next!" :smile:
 
  • #11
Sorry to re-open this old thread but it's the most recent I could find on the subject.

What's the bottom line here? Referring to Cramer's (apparently dead) experiment:

http://faculty.washington.edu/jcramer/Nonlocal_2007.pdf

Will the photons passing through S1 always display an interference pattern to the naked eye / camera? The answer would seem to have to be yes, because despite the fact that members of the same pair will go through the same slit, which slit they go through is still governed by wave mechanics, and so as soon as more than one photon is used the interference pattern emerges.

Subsequently, when the switch is set to "0", it becomes impossible to correlate the VLP photon to the HLP photon, similar to the signal photon in the DCQE choosing the "eraser." If the switch is set to "1", however, it would be possible to correlate the photons, and thus extract the particle-like pattern from the observed interference pattern, just as DCQE does, _if_ there was a coincidence circuit.

In fact, the only difference between this and DCQE is that there are actually two erasers - the first, setting the switch to "0", is akin to the DCQE signal photon striking the eraser; but the second erasure occurs as soon as the photons at S1 strike the CCD without being recorded. Thus, even when the switch is set to "1", it is impossible to extract the particle-like image from the camera image because there is no record of photons from which to make the correlations. And consequently the camera always sees an interference pattern.

Is this not the flaw?

The only way to make this experiment work would be to force the photons at S2 to pass through one slit or the other without breaking the entanglement - not possible AFAIK.
 
  • #12
So does anyone know the final word on this? Has Cramer officially abandoned the project?

Also, doesn't the interference pattern ALWAYS show up without coincidence counting? That would make a helluva lot more sense than A. Zellinger's statement that the pattern disappears when the entangled twin's slit is determined. Should be the other way around.
 
  • #13
peter0302 said:
That would make a helluva lot more sense than A. Zellinger's statement that the pattern disappears when the entangled twin's slit is determined. Should be the other way around.
Exactly what statement are you talking about?
You’re describing a statement about a condition that uses correlation counts, and the pattern disappearing is just what is expected à la standard Quantum Erasure.

If you don’t quote or reference what you’re talking about, how can anyone follow what you’re saying?
 
  • #14
Gee, no need to be snippy about it.

A. Zellinger - Experiment and the foundations of quantum physics. Part III. A statement which you had quite a lot to say about in another thread!

My point is that if you believe Zellinger, Cramer's detector/camera will always register a Gaussian pattern. But I'm not clear as to why.
 
  • #15
peter0302 said:
Gee, no need to be snippy about it.

A. Zellinger - Experiment and the foundations of quantum physics. Part III. A statement which you had quite a lot to say about in another thread!
Don’t intend to be snippy,
just brief in asking you to make your posts more complete for the benefit of other readers, rather than assume everyone is current with what you are reading.
Those that wish to look won’t find anything under “Zellinger” if you mean Anton Zeilinger.

If you are referring to my thread https://www.physicsforums.com/showthread.php?t=144792, that was about a published statement by Anton Zeilinger assuming results without the correlation tests actually being made: only that they could be made, even though they were not.

Your comment was about a statement on actual correlations being made: “the pattern disappears when the entangled twin's slit is determined”.

If you don’t have the quote available now, I’ll wait till you can find it. You are in a much better position to find the complete statement you’re talking about.
 
  • #16
peter0302 said:
Gee, no need to be snippy about it.

A. Zellinger - Experiment and the foundations of quantum physics. Part III. A statement which you had quite a lot to say about in another thread!

My point is that if you believe Zellinger, Cramer's detector/camera will always register a Gaussian pattern. But I'm not clear as to why.

This (Cramer's experiment) seems to be pretty much the "double-double-slit" experiment which I was talking about in the other thread. A. Zeilinger describes it in the book "Einstein's Schleier" (currently available only in German, even though it was written in English, I think). I'll try once more to present his explanation, since I don't know whether it is available in English anywhere. If you want to keep looking for alternative takes on this question, do let me know if you find any.

The reason such experiments will always register an at-first-random-looking set of data, according to AZ, is, as I've said in the other thread, that for such an experiment the source of light requires a minimum size (in order to produce momentum-entangled particles, where the momentum of one is precisely enough the opposite of the other, so that one can conclude which-way info about one particle from the other). And this minimum size will always create overlapping patterns, in so far as one can say it creates patterns.

Then, depending on whether both particles are allowed to act wave-like or not, there will be an interference pattern hidden in the random-looking data set (or not), but available for analysis only with information from both measurement sites. This way it comes to the same situation as usual with entanglement: there is an instant effect, but it can't be used to send arbitrary messages since the measurement results are randomized, and the effect can only be discovered with the help of classically transmitted additional data which allows discovering the relationships between the two random-looking data sets (when there is any).

You might want to look into "quantum teleportation" (and perhaps "entanglement swapping"), since they go slightly further than "plain" entanglement, in transmitting qubits and such, and it becomes somewhat more obvious that there is 'something' being transmitted, which otherwise becomes more apparent only if one has an experiment using variable measurement angles, or such.
 
  • #17
colorSpace said:
This (Cramer's experiment) seems to be pretty much the "double-double-slit" experiment which I was talking about in the other thread.
NO, it’s not.
Cramer only uses a single double-slit in the path to the “Camera” area.

The “Image Slits” are locations where he intends to capture photons for his switching system. He is not subjecting those photons to a real double slit.
 
  • #18
Actually, given the nature of the discussions surrounding this topic, I should have been more cautious when saying the double-double-slit experiment would be like Cramer's.

I can only say that the intent textually described in the first link in the first post in this thread sounds very much like what I understand to be the purpose of A.Zeilinger's double-double-slit experiment (which has actually been performed).
 
  • #19
RandallB said:
NO its not.
Cramer only uses a single double-slit in the path to the “Camera” area.

The “Image Slits” are locations where he intends to capture photons for his switching system. He is not subjecting those photons to a real double slit.

Yes, I just posted a response saying this (before reading yours).

Otherwise my longer previous post still applies, IMHO.
 
  • #20
Ok - so - will the camera always see a Gaussian pattern or not??! :) Now I'm even more confused. :/
 
  • #21
colorSpace said:
... this thread sounds very much like what I understand to be the purpose of A.Zeilinger's double-double-slit experiment (which has actually been performed).
“Actually been performed”? Where and by whom?
Do you have a reference or link detailing the results of such a double-double-slit experiment where both “entangled” beams go through their own set of real double slits?
Did they give detailed results for non-correlated measurements on at least one side (that should answer Peter’s question), along with correlation results between the patterns produced in the two observation areas?

My main complaint about Zeilinger was how he extrapolated from a single double-slit experiment (a student thesis by Dopfer) using correlations to speculate about how, in his opinion, a double-double-slit should work. And from those speculations he then presumed to make definitive conclusions without any direct experimental evidence to support his claims.

I’ve not seen where Zeilinger has attempted such an experiment.
Let us know where you found this experiment actually performed by someone.
 
  • #22
RandallB said:
I’ve not seen where Zeilinger has attempted such an experiment.

My source is his book "Einstein's Schleier" (AFAIK not available in English). I'll check whether it mentions a specific implementation on a specific date. The description is rather detailed, but I don't remember a date or such right now. It sounds like an experiment simple to perform, if one has the correct source of light. So I'd assume that it should be easily reproducible.
 
  • #23
In his Foundations article he says the reason the pattern is always seen in the one photon, if you don't observe the other in a destructive way, is that otherwise it's possible in principle for someone else to reconstruct which-slit information using the photon you didn't observe. You can't just non-observe it; you have to destroy it. Then and only then is there an interference pattern.

It's an interesting explanation, and it jibes with the Copenhagen Interpretation, but I don't know of any experimental evidence confirming this.

Incidentally, I wrote to Cramer asking him what the deal with the experiment was and he hasn't replied...
 
  • #24
colorSpace said:
My source is his book "Einstein's Schleier" (AFAIK not available in English). I'll check whether it mentions a specific implementation on a specific date. The description is rather detailed, but I don't remember a date or such right now. It sounds like an experiment simple to perform, if one has the correct source of light. So I'd assume that it should be easily reproducible.

No, sorry, the book itself actually talks explicitly about 'thought experiments', in this chapter. I'm not sure whether I got the impression that there was 'real' experiment somewhere else, or whether that was simply a false impression on my part.

[Edit:] (Which surprises me since the book repeatedly states that the experiment would be simple.)
 
  • #25
Cramer Update

colorSpace said:
No, sorry, the book itself actually talks explicitly about 'thought experiments', in this chapter. I'm not sure whether I got the impression that there was 'real' experiment somewhere else, or whether that was simply a false impression on my part.

[Edit:] (Which surprises me since the book repeatedly states that the experiment would be simple.)
I figured something was wrong. For my own use I don’t find I can rely on Zeilinger information without carefully researching it for supporting detail, which I usually find missing. Much of that problem might be due to German-to-English translations. So, even though a lot of folks on this forum hold Zeilinger’s opinions in high regard, I do not, mostly because so many are led to that same "false impression" by his writings.

At the same time, many that hold Zeilinger in high regard consider Cramer (the subject of this thread) next to being a crackpot for his TI retrocausality ideas. To me Cramer seems very much the real scientist, willing to put his ideas to what could be a falsifiable scientific test. From Cramer’s own writing he is clear there is only a small chance his experiment might succeed.
Personally, I’m convinced, as are most, that there is no chance he can succeed; but that is based on my interpretation of existing results of other experiments that do not directly address his questions. Cramer is at least willing to back up his ideas by proposing and even attempting to run a real experiment.

Cramer Update

I was able to speak to Cramer for a few minutes yesterday and got an update on his experiment.
The experiment does continue but must be fit around other commitments – like his real job, relocating it to another lab in the building, travel commitments, etc.

The prep testing last Summer convinced him that the “Camera” he had obtained was not good enough, and he is changing to an array of detectors.

He expects that receiving as few as ten photons in the “Camera” area, with none in the “dark areas”, will be enough for a statistical confirmation of an interference pattern, but only if “noise” can first be confirmed to be at a low enough level.

He is confident the size of the down conversion (Type II) crystal and the distance used will avoid any “walkoff” issues. But producing an adequate volume of test photons is yet to be confirmed using the specially cut SPDC crystal. It is to produce the H and V photon pairs together in a single beam. I didn’t even know there was an option other than the double-cone (figure-8) distribution you find in most Type II down conversion experiments. That explains a lot as to how and why the prism will remove the remaining pump beam and the polarizing prism separates the H and V beams into separate testable areas. I had always assumed both types of PDC produced “cones” and had not been able to figure out how those prisms would work.

So, the experiment is still there, has a lot of work to do in confirming the set up is valid, but is not on a fast track as he works through the details mostly on his own with some other help as needed.
No projected time frame for when to expect results.
 
  • #26
I agree with your sentiments re: Zeilinger. He seems like a brilliant and well-respected scientist, but he says things in his articles that just don't have experimental support. Kudos to Cramer for actually putting his money where his mouth is and testing his ideas!

My money is on there always being an interference pattern.
 
  • #27
RandallB said:
I figured something was wrong. For my own use I don’t find I can rely on Zeilinger information without carefully researching it for supporting detail, which I usually find missing. Much of that problem might be due to German-to-English translations. So, even though a lot of folks on this forum hold Zeilinger’s opinions in high regard, I do not, mostly because so many are led to that same "false impression" by his writings.

At the same time, many that hold Zeilinger in high regard consider Cramer (the subject of this thread) next to being a crackpot for his TI retrocausality ideas. To me Cramer seems very much the real scientist, willing to put his ideas to what could be a falsifiable scientific test. From Cramer’s own writing he is clear there is only a small chance his experiment might succeed.
Personally, I’m convinced, as are most, that there is no chance he can succeed; but that is based on my interpretation of existing results of other experiments that do not directly address his questions. Cramer is at least willing to back up his ideas by proposing and even attempting to run a real experiment.

I haven't read many articles by Zeilinger, but mostly his two books and reports of his experiments. But the chapter on the double-double-slit experiment is actually very clear on being a thought experiment, and my impression that there was a real experiment might have been due to the many details that it discusses. His books usually reference many experiments, both from others and his own. Perhaps it is his writing style that sometimes isn't very clear on details. Still, his two books are the most informative writing on the subject of entanglement that I have found so far, beyond comparison.

Most of his work is based on conducting experiments, and in the field of entanglement, they are often cutting edge work. After all he is an experimental physicist, not a theoretical physicist.

As far as Cramer's experiment is concerned: it hasn't been performed yet, so in this regard he still has to earn the title you are already assigning to him.

The experiment itself:

I find Zeilinger's argument quite convincing: he argues that the source of light needs a minimum size to guarantee that the momenta are precisely enough opposite each other, so that path information about one photon also applies to the other. The reason is that Heisenberg uncertainty requires an uncertain position for the source point of the photon, in order to guarantee the precision of the opposite momentum.

Once the source of light has a certain minimum size, it is quite obvious that patterns from different points will overlap and eventually, if the size is large enough, produce only a blur.

If the source of light is smaller, according to this argument, then the entanglement in momentum isn't strong enough to cause an effect of one photon on the other in this regard, and the interference will always be there, not usable to send a signal.
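That blurring effect from an extended source is easy to illustrate numerically. Here is a toy phase-averaging sketch (my own idealized model, not Zeilinger's actual calculation): each emitting point of the source shifts the fringe phase, and averaging over shifts spanning a full period washes the fringes out completely.

```python
import numpy as np

x = np.linspace(-3, 3, 1201)   # screen position (arbitrary units)
k = 10.0                       # toy fringe wavenumber

# A single idealized point source: clean fringes with full contrast.
single = 1 + np.cos(k * x)

# An extended source: sum over source points, each contributing a
# phase-shifted copy of the same fringe pattern.
phases = np.linspace(0, 2 * np.pi, 200, endpoint=False)
blurred = np.mean([1 + np.cos(k * x + p) for p in phases], axis=0)

print(single.max() - single.min())   # ~2.0: strong fringe contrast
print(blurred.max() - blurred.min()) # ~0.0: contrast gone, a uniform blur
```

With a phase spread smaller than a full period the contrast would only be reduced, not eliminated, which is the trade-off colorSpace describes between source size and fringe visibility.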

RandallB said:
Cramer Update

I was able to speak to Cramer for a few minutes yesterday and got an update on his experiment.
The experiment does continue but must be fit around other commitments – like his real job, relocating it to another lab in the building, travel commitments, etc.

The prep testing last Summer convinced him that the “Camera” he had obtained was not good enough, and he is changing to an array of detectors.

He expects that receiving as few as ten photons in the “Camera” area, with none in the “dark areas”, will be enough for a statistical confirmation of an interference pattern, but only if “noise” can first be confirmed to be at a low enough level.

I don't know... 10 photons sounds like a very small number, given the interference images I've seen so far.

RandallB said:
He is confident the size of the down conversion (Type II) crystal and the distance used will avoid any “walkoff” issues. But producing an adequate volume of test photons is yet to be confirmed using the specially cut SPDC crystal. It is to produce the H and V photon pairs together in a single beam. I didn’t even know there was an option other than the double-cone (figure-8) distribution you find in most Type II down conversion experiments. That explains a lot as to how and why the prism will remove the remaining pump beam and the polarizing prism separates the H and V beams into separate testable areas. I had always assumed both types of PDC produced “cones” and had not been able to figure out how those prisms would work.

So, the experiment is still there, has a lot of work to do in confirming the set up is valid, but is not on a fast track as he works through the details mostly on his own with some other help as needed.
No projected time frame for when to expect results.

Well, I'm all in favor of exploring the possibility of non-random FTL effects, however it seems that requires a theoretical advancement first. Entanglement as described by current theory seems to always have the randomness built in, since entanglement seems to require Heisenberg Uncertainty where the wave-functions are dependent.

My impression (which, as has now been established :wink:, can be false) is that the success of this experiment depends on the possibility of building a source of light (or in this case, it seems, a crystal) which will strongly entangle the momentum, without causing a blur of always-overlapping patterns, when there are patterns (as described above).
 
  • #28
peter0302 said:
I agree with your sentiments re: Zeilinger. He seems like a brilliant and well-respected scientist, but he says things in his articles that just don't have experimental support. Kudos to Cramer for actually putting his money where his mouth is and testing his ideas!

My money is on there always being an interference pattern.

I agree with you, especially on the ALWAYS part.
Provided the PDC is positioned far enough away for a “far field” to avoid the “walk off” problem.
Meaning I disagree with Prof. Cramer about being able to lose that interference pattern in the signal branch (the “camera” test area) to a dispersion pattern, by doing something in the other leg (idler branch or switching area) of the experiment.
IMO only when correlations are done between the two areas to select a smaller set of photons in the "Camera area" can the interference pattern seem to be erased into the needed standard dispersion pattern "signal". And of course the whole point is to get the signal before idler information can be transmitted to the signal area to do such correlations to confirm his reverse causality.

I actually think there are some other experiments that would be more worthy of his time, and have suggested them to him. But certainly my conviction that his experiment at best can only falsify his TI ideas is no reason for him to change his mind about doing the work. As long as he feels there is some chance, however slim, that his idea might be provable, he should continue his work, even if only to get a firm falsification.

colorSpace said:
As far as Cramer 's experiment is concerned: it hasn't been performed yet, so in this regard he still has to earn the title you are already assigning to him.

I don't know... 10 photons sounds like a very small number, given the interference images I've seen so far.

What title? I didn't say he was right – the only title I assigned him is that of a scientist who does not overstate his case as fact, but proposes real experiments to test his ideas and then pursues having the tests performed. His experiment does not need to succeed or fail to merit that well-deserved “title”, while I have seen many others that IMO do not merit it.

As to “10 photons sounds like a very small number …”, isn’t that a rather seat-of-the-pants, common-sense evaluation on your part?
I’m sure the Prof. used the science of statistical probability – you know, in line with a “shut up and calculate” approach more in touch with the QM HUP. If you have doubts, why not try it yourself: for an interference pattern with 10 to 12 each of dark, gray and bright areas, overlay those thirty-some areas with a standard dispersion pattern. What do you figure the odds are that not one of ten randomly chosen photons in the dispersion pattern would land in one of the dark areas of the interference pattern?
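The odds question above can be checked with a quick Monte Carlo sketch. This is my own toy model, not anything from Cramer's actual setup: I assume an idealized pattern with 12 fringe periods whose dark bands cover a third of the screen, and that a featureless dispersion pattern drops photons uniformly at random.

```python
import random

# Toy model (assumptions mine): the screen is [0, 1); an idealized
# interference pattern has N_FRINGES periods, and the dark band covers
# DARK_FRAC of each period. A photon from a featureless dispersion
# pattern lands uniformly at random on the screen.
N_FRINGES = 12      # assumed fringe periods across the screen
DARK_FRAC = 1 / 3   # assumed dark fraction of each period
N_PHOTONS = 10      # photons per snapshot, as discussed in the thread

def in_dark_band(x):
    """True if position x falls inside a dark band of the toy pattern."""
    return (x * N_FRINGES) % 1.0 < DARK_FRAC

def none_in_dark(n_photons):
    """One snapshot: do all n uniform photons miss every dark band?"""
    return all(not in_dark_band(random.random()) for _ in range(n_photons))

random.seed(0)
trials = 100_000
est = sum(none_in_dark(N_PHOTONS) for _ in range(trials)) / trials
# Analytic answer: (1 - DARK_FRAC)**N_PHOTONS = (2/3)**10, about 0.017
print(f"P(no photon lands in a dark band) = {est:.3f}")
```

Under these assumptions roughly 98% of ten-photon dispersion snapshots would put at least one photon in a dark band, so even ten photons carry real statistical weight, though any single snapshot is far from conclusive.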

As to further comments on Zeilinger, I don’t have time now, but I’ll provide a link here back to the other thread you referred to and comment there. That way we do not clutter up this discussion of the Cramer experiment (which, if I get any other updates on it, I’ll add to here).
 
  • #29
RandallB said:
What title? I didn't say he was right – the only title I assigned him is that of a scientist who does not overstate his case as fact, but proposes real experiments to test his ideas and then pursues having the tests performed. His experiment does not need to succeed or fail to merit that well-deserved “title”, while I have seen many others that IMO do not merit it.

This would make sense as a general judgement only if Zeilinger were someone who never performs experiments. In reality he has performed tons of experiments regarding entanglement, so I don't see a big problem with him making a thought experiment once in a while. That Cramer merely 'promises' an experiment in this specific case doesn't outweigh the substantial record that Zeilinger has in actually performing experiments.

The promise of an experiment is not an experiment.

RandallB said:
As to “10 photons sounds like a very small number …”, isn’t that a rather seat-of-the-pants, common-sense evaluation on your part?
I’m sure the Prof. used the science of statistical probability – you know, in line with a “shut up and calculate” approach more in touch with the QM HUP. If you have doubts, why not try it yourself: for an interference pattern with 10 to 12 each of dark, gray and bright areas, overlay those thirty-some areas with a standard dispersion pattern. What do you figure the odds are that not one of ten randomly chosen photons in the dispersion pattern would land in one of the dark areas of the interference pattern?

Take a look at the image labelled "Electron buildup over time" at http://en.wikipedia.org/wiki/Double-slit_experiment

It shows five images "a"-"e", with progressively more particles. Image "b" already has a substantial number of particles in the dark areas. And even in image "a", where I count about 10 particles, it looks like the third and especially the fourth from the left are in dark areas.

One thing is for sure: I wouldn't believe any experiment using only 10 photons to make a statement.
 
  • #30
Small vs. large light source

Hello,

Zeilinger's use of a small versus a big light source in his book "Einstein's Schleier" always seemed a bit confusing to me. Here is why.

(1) CASE ONE: SMALL LIGHT SOURCE:

By a small light source, Zeilinger simply means a light source that is smaller than the distance between two interference fringes on the detection screen (assuming for simplicity that the distance between the light source and the double slit on the right side is roughly the same as the distance between the double slit and the detection screen).

In his book, Zeilinger in effect says that there are two reasons why a small source is necessary for interference to occur in the double-double-slit experiment:

(1a) A small source means that different points on the source are very close together. So patterns originating from different points on the source will NOT overlap on the detection screen and therefore will NOT produce a blur. So no washing out here, but interference. This is definitely true. As a side comment, this condition has nothing to do with entanglement per se. It must always be met, regardless of whether there is a second entangled photon around somewhere or not, that may or may not carry information about the original photon. For already in the simple double-slit experiment, as described by Feynman (where there is no entangled photon at all, but just a single light source emitting single photons towards a single double slit on the right side and then on to a detector) this condition must be met for interference to occur.

(1b) A small source means high uncertainty in the momentum of the emitted photons in either direction, making it impossible to determine the precise direction of photon 2 on the left hand side, and consequently the direction of photon 1 on the right hand side.

So far so good. This was the easy part.

(2) CASE TWO: LARGE LIGHT SOURCE

Here is where - at least in my opinion - the book is less clear and, perhaps, a bit misleading.

First, let's see what Zeilinger says about large light sources. Two things:

(2a) A large source means that patterns originating from different points on the source will overlap and produce a blur on the detection screen. That is, the interference patterns originating from different points on the source will wash each other out, producing a uniform grey pattern on the detection screen. So a large source can produce interference patterns from different points on the source, but they wash each other out (one can view all these different points on the source as a set of many "mini-sources", but let's not get into that).

(2b) A large source means small uncertainty in the momentum of the two entangled photons in either direction, making it POSSIBLE to determine the precise direction of photon 2 on the left hand side (by registering it in a detector on the left hand side), and thus the direction of photon 1 on the right hand side (because they are entangled). In other words, the entangled photon 2 on the left hand side carries the path information of photon 1 on the right hand side, and this is why there cannot be any interference pattern on the right side: in Zeilinger's words, the which-way information is still "present somewhere in the universe" – here in the form of the momentum carried by photon 2 on the left side.

He then describes a way to make the interference pattern re-appear, even in the case of a large light source. He says all one needs to do is to "destroy" the which-way information of photon 2 on the left side (possibly at a later time), simply by letting it also go through a double slit itself on the left hand side. A simple trick, he says, will allow the experimenter to "recover" the interference pattern of photon 1 on the right side. The trick consists in making precise measurements of time, allowing the experimenter to detect the twin pairs, i.e. which photon 1 on the right side belongs to which photon 2 on the left side.

MY QUESTION IS:

But what happened to reason (2a)? Even if the which-way information is destroyed using the method described in Zeilinger's book, there is still a large source emitting the entangled photon pairs. So, won't the different patterns originating from the different points on the large light source STILL overlap and wash each other out on the detection screen, producing a blur?

If what Zeilinger says can be done in an experiment, then recovering the various interference patterns ex post would in effect mean recovering the individual points on the (large) source, i.e. the interference patterns from the many mini-sources.
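The washing-out described in (2a) can be illustrated with a small numerical sketch. This is my own toy model, not from Zeilinger's book: each point on the source contributes an identical cos² fringe pattern shifted by a different phase, and summing shifts that span a full period drives the fringe visibility to zero.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1000)  # position across the detection screen
PERIOD = 0.1                     # assumed fringe period (arbitrary units)

def fringe(phase):
    """Idealized cos^2 fringe pattern shifted by `phase` (in periods)."""
    return np.cos(np.pi * (x / PERIOD + phase)) ** 2

def visibility(pattern):
    """(Imax - Imin) / (Imax + Imin): 1 = full fringes, 0 = uniform grey."""
    return (pattern.max() - pattern.min()) / (pattern.max() + pattern.min())

# Small source: effectively one emission point, full-contrast fringes.
v_small = visibility(fringe(0.0))

# Large source: 50 emission points whose phase shifts span a full period;
# the summed pattern flattens toward uniform grey.
phases = np.linspace(0.0, 1.0, 50, endpoint=False)
v_large = visibility(sum(fringe(p) for p in phases))

print(f"fringe visibility, small source: {v_small:.3f}")
print(f"fringe visibility, large source: {v_large:.6f}")
```

On this toy model the single-point visibility is essentially 1, while the 50-point sum is flat to machine precision — exactly the blur (2a) describes. Any recovery trick would indeed have to sort the detected photons back into the per-point sub-patterns.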

COMMENTS?
 
  • #31
UPDATE on Cramer's Backward Causality Experiment

A follow up on Prof. John Cramer's Backward Causality Experiment

First:
for anyone reading through this thread who is concerned about an expensive experiment basing conclusions on only 10 detected photons, thinking –
“I wouldn't believe any experiment using only 10 photons to make a statement.”

It should be obvious he needs to minimize the number of photons used in a “snapshot” to determine whether he has a pattern, YES or NO.
Whether it is 10 or 100 per snapshot, he still needs something like 90 out of 100 snapshots to come up YES in the ‘Alice’ test area, with 10 uncertain (I would think any “sure NO’s” in this setup would call for adjustment), just to confirm the sampling is within reason.

Then once those results are established, he needs to repeat the same with some change, external to Alice and the source path, in the time- and distance-separated ‘Bob’ test area that would have the “Backward Causality” effect of giving a NO result for the pattern – deciding how many “sure NO’s” in 100 snapshots are enough to indicate “Backward Causality”, and whether any “sure YES’s” should be allowed.
Remember, the Cramer setup has no “coincidence correlation requirement”, only a setting at B that should turn the pattern at A on or off, which A should be able to see without any info from B about the setting. It is not a DCQE setup.

Even with a small number of photons per snapshot, the complete experiment will run into tens if not hundreds of thousands of photons tested.

Amazes me how much work this guy is putting himself through; and at the end of the day he fully recognizes that his results may well confirm what I and others have predicted in this thread: that changes in the ‘Bob’ (“image slits” path) test area have absolutely no effect on the observed results in the ‘Alice’ (double-slit path) test area, i.e. no Backward Causality!

the update:

I spoke with Prof. Cramer today:

After discarding the “CCD Camera” in ’07 as not good enough for this experiment (Jan. 2008 update), progress this year has stalled on getting an array of photon detectors to replace it that performs to the standard he needs.

It is basically a technical issue that needs a practical, economical solution – short of supercooling the detectors, which could require putting the entire experiment in a vacuum to avoid frost fogging.

Thus no real new progress on the experiment itself has been made this year. But as far as John is concerned, he is pressing on to resolve the tech issues so he can get back to the real experiment, so you can't call it a dead effort yet.

I’ll check with him again after some time and update again here – maybe the Fall semester will bring in some fresh solutions to the tech issues.
 
  • #32
That doesn't make any sense. The experiment should generate a visible interference pattern on a screen just as easily as it can generate one on a CCD - if he's right.
 
  • #33
peter0302 said:
That doesn't make any sense. The experiment should generate a visible interference pattern on a screen just as easily as it can generate one on a CCD - if he's right.
But you're missing the point and purpose of Cramer’s work and the paper you referenced early on in this thread.
Of course the pattern will be produced to give his “0” signal on a screen instead of a ‘camera’ in the test area. And even if he were to change that “0” to a “1” signal continually displayed on the screen by changing something in the B area – what would that prove?
Nothing, to him, because he is trying to demonstrate BACKWARDS CAUSALITY.

And if he were to produce such a change on a SCREEN, I would immediately assume that some phase-interference communication was traveling back up the B path to where it split from the A path, affecting the display on the screen with an undetected time delay – i.e., normal forward-time causality.

If it were as simple as changing what is seen in a long-time-interval screen display, he wouldn’t be dealing with taking snapshots of patterns.
It is also why he needs to be concerned with the number of photons required to decide on a “0” or “1” pattern result in a snapshot. The longer it takes to collect enough photons, the greater the optical distance between A & B needs to be, to eliminate a hidden slower-than-light forward-time solution. This is not as simple as you make it sound, and this experiment has a long way to go.
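The tradeoff between snapshot size and required separation can be put in rough numbers. This is my own back-of-envelope sketch with assumed rates, not Cramer's actual figures: the snapshot must finish before light could carry the B-side setting to A, so the photon count divided by the detected-pair rate sets a minimum optical path difference.

```python
# Back-of-envelope sketch (assumed numbers, not Cramer's): a snapshot of
# n photons collected at `pair_rate_hz` detected pairs per second takes
# n / pair_rate_hz seconds; the extra optical path on the B side must be
# at least c times that, or a light-speed signal could beat the snapshot.
C = 299_792_458.0  # speed of light in vacuum, m/s

def min_path_difference(n_photons, pair_rate_hz):
    """Minimum extra optical path (meters) so the snapshot completes
    before light could carry B's setting to A."""
    return C * (n_photons / pair_rate_hz)

# Illustrative detected-pair rates (assumed): 1 kHz and 1 MHz.
for rate in (1e3, 1e6):
    d = min_path_difference(10, rate)
    print(f"10 photons at {rate:.0e} pairs/s -> path difference >= {d:,.0f} m")
```

Even at a generous 1 MHz pair rate a ten-photon snapshot needs roughly 3 km of extra path (e.g. a fiber delay), and at 1 kHz it balloons to about 3000 km — which is why keeping the per-snapshot photon count small matters.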

Sure, you and I (probably for different reasons) "know" he will never get a pattern to go blank without a real-time connecting link as used in a DCQE. But Cramer is intent on treating what we think we “know” as “opinions” that might be wrong, and testing them.
 
  • #34
What really needs to be proven first, before any more money is wasted on this, is whether a change at A can have any effect on the pattern at B. That's never been shown and would defy QM. Now, sure, you'd assume there was some phase interference with an undetected time delay, but if he can't do it my way, which is much cheaper, at all (and he can't!), then there's no reason to waste more money. If he can do it my way, then move on to eliminating loopholes as you suggest.

In other words, what inspired all this was Dopfer's experiment (she was Zeilinger's student), where Cramer couldn't understand why coincidence counting was required, so he decided to remove the coincidence counter. If he can make Dopfer's experiment work without a coincidence counter – even slower than light – I will be highly impressed. But he can't even do that, so why waste our time and a lot of other people's money trying to do more?
 
Last edited:
  • #35
Cramer makes a big mistake when he states that the coincidence circuit only filters out a little noise. Actually, the coincidence circuit is necessary to see any fringing at all. The Dopfer experiment is no different from all the previous quantum-eraser experiments: there is usually a fringe pattern and an equal antifringe pattern, and the coincidence circuit removes the antifringe pattern.

Nothing here that could be used as a non-local communicator.

Check some of the early work by Mandel and his associates at Rochester to understand this more thoroughly.
 
