Do we need a coincidence counter (for correlation) in delayed erasure?

In summary, the conversation discusses whether a coincidence counter is needed in delayed-erasure experiments. One argument holds that the counter is unnecessary because the experiment is repeated in the same way each time, so the photons need not be correlated pair by pair; instead, the noise could be removed by calculating the expected arrival time of the photons. Against this, it is noted that interference patterns appear only when looking at specific coincidence-matched subsets of photons at the detector. The conversation also touches on the positioning of the detectors and the possibility of using a large screen instead. The open question is whether the coincidence counter can be dispensed with when the use of the eraser is fixed in advance.
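For concreteness, the coincidence-matching step under discussion amounts to pairing up signal and idler detection timestamps that land within a narrow window of each other. Below is a minimal sketch in Python with entirely made-up numbers; the 5 ns offset, the 2 ns window, and the toy timestamp arrays are illustrative, not from any actual setup.

```python
# Toy sketch of software coincidence matching: pair signal/idler timestamps
# whose separation is close to a fixed (made-up) 5 ns offset.
import numpy as np

rng = np.random.default_rng(0)
signal_times = np.sort(rng.uniform(0.0, 1e6, 500))               # ns, toy data
idler_times = np.sort(signal_times + rng.normal(5.0, 0.5, 500))  # partners, ~5 ns later

def coincidences(signal, idler, offset, window):
    """Count signal/idler pairs whose separation is offset +/- window (ns)."""
    pairs, j = 0, 0
    for t in signal:
        # advance the idler pointer past idlers too early to match this signal
        while j < len(idler) and idler[j] < t + offset - window:
            j += 1
        if j < len(idler) and abs(idler[j] - (t + offset)) <= window:
            pairs += 1
            j += 1  # each idler pairs with at most one signal photon
    return pairs

print(coincidences(signal_times, idler_times, offset=5.0, window=2.0))
```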
  • #36
unusualname said:
The walborn setup doesn't allow any possibility of classical phase relationships being in any way significant
What "classical phase relationships" are significant in the delayed choice quantum eraser? Are you referring to the fact that the D0/D1 and D0/D2 interference patterns are out-of-phase so their sum is a non-interference pattern? If so I don't see what's "classical" about this, and in any case I said near the beginning of the thread that I would guess something similar would be true in the Walborn setup if you looked at the interference patterns at Ds for different specific positions of Dp...for each specific position of Dp you would see an interference pattern at Ds, but if you summed all the interference patterns for different positions of Dp I bet they would add up to a non-interference pattern.
 
  • #37
On your side about what? I don't like any interpretation that treats "collapse" as real; that seems absurd to me given what's known about decoherence.

Decoherence has been observed in exactly the way that an ontological probability evolving according to Schrödinger evolution would predict.
 
  • #38
unusualname said:
Decoherence has been observed in exactly the way that an ontological probability evolving according to Schrödinger evolution would predict.
I don't know what that means. Again, the Schrödinger equation gives the evolution of the quantum state vector, which only assigns complex amplitudes to different possible observable results; you can only turn those into real-valued probabilities by invoking a "collapse" due to measurement, together with the Born rule. And "collapse" does not follow from the deterministic evolution under the Schrödinger equation; it's an entirely separate assumption.
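To spell out the distinction in symbols (standard textbook form, nothing specific to this thread):

```latex
% Unitary (Schrödinger) evolution acts on the state vector; the Born rule is
% the separate postulate that turns amplitudes into probabilities.
\[
|\psi\rangle = \alpha\,|a\rangle + \beta\,|b\rangle ,
\qquad |\alpha|^2 + |\beta|^2 = 1 ,
\]
\[
P(a) = |\langle a|\psi\rangle|^2 = |\alpha|^2 ,
\qquad
P(b) = |\langle b|\psi\rangle|^2 = |\beta|^2 .
\]
```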
 
  • #39
JesseM said:
What "classical phase relationships" are significant in the delayed choice quantum eraser? Are you referring to the fact that the D0/D1 and D0/D2 interference patterns are out-of-phase so their sum is a non-interference pattern? If so I don't see what's "classical" about this, and in any case I said near the beginning of the thread that I would guess something similar would be true in the Walborn setup if you looked at the interference patterns at Ds for different specific positions of Dp...for each specific position of Dp you would see an interference pattern at Ds, but if you summed all the interference patterns for different positions of Dp I bet they would add up to a non-interference pattern.

eh? So what? You have to sum for the specific position of Dp in each case; have you actually studied the experiment? It wouldn't make sense to sum over different positions of Dp for a specific position of Ds.
geez, this is an undergrad experiment, it's not so hard to analyse.

This is becoming silly: you throw in some convoluted experiment which doesn't really test anything, and then use stupid arguments to claim that a really simple, good test isn't correct.
 
  • #40
JesseM said:
I don't know what that means. Again, the Schrödinger equation gives the evolution of the quantum state vector, which only assigns complex amplitudes to different possible observable results; you can only turn those into real-valued probabilities by invoking a "collapse" due to measurement, together with the Born rule. And "collapse" does not follow from the deterministic evolution under the Schrödinger equation; it's an entirely separate assumption.
There is no "collapse", trust me. The universe just evolves and we have to measure it to see what it evolved to. (Controversially , I suggest we can change the unitary evolution with free-will, but let's not go there now)
 
  • #41
unusualname said:
eh? So what? You have to sum for the specific position of Dp in each case; have you actually studied the experiment?
Yes, I understand that. My point is that there need not be any interference pattern in the total pattern of photons at Ds even with the polarizer in front of Dp, because it's only when you look at the subset of signal photons at Ds whose idlers were detected at a particular position that you get interference. If you were to add up all the different subsets in an expanded experiment where the Dp detector could detect idlers at a range of positions, all those interference patterns would add up to a non-interference pattern. If the total pattern of signal photons (without doing any coincidence matching) could show interference depending on whether a polarizer was in place in front of Dp, then this would allow for FTL signalling; but just as in the delayed choice quantum eraser, this is avoided by the fact that interference is only seen in coincidence-matched subsets.
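Here's a toy numerical version of the variant I'm describing (my own sketch with made-up numbers, where each idler position at Dp is modelled as tagging the signal-side fringe with a different phase):

```python
# Toy model: each idler position phi gives a phase-shifted fringe at Ds;
# averaging over all idler positions washes the fringes out completely.
import numpy as np

x = np.linspace(-5, 5, 1001)                            # position at Ds
phis = np.linspace(0, 2 * np.pi, 64, endpoint=False)    # toy Dp positions -> phases
subsets = [np.cos(2.0 * x + phi) ** 2 for phi in phis]  # each subset shows fringes
total = np.mean(subsets, axis=0)                        # pattern with no matching

print(float(total.std()))   # ~0: the unconditioned pattern at Ds is flat
```

Each subset shows fringes, but the unconditioned sum is flat, which is exactly why no FTL signal can be read off at Ds alone.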
unusualname said:
It wouldn't make sense to sum over different positions of Dp for a specific position of Ds.
geez, this is an undergrad experiment, it's not so hard to analyse.
No need to be condescending; I understand the experiment, but was making a point about a variant that would help explain why looking at the pattern of photons at Ds doesn't give you any FTL knowledge about whether a polarizer was placed in front of Dp.
 
  • #42
unusualname said:
There is no "collapse", trust me. The universe just evolves and we have to measure it to see what it evolved to. (Controversially , I suggest we can change the unitary evolution with free-will, but let's not go there now)
That doesn't really make any sense. If you treat the measurer as a quantum system and allow everything to evolve according to the Schrödinger equation, then measurement will simply entangle the measurer with the system being measured, causing the measurer to be in a macroscopic superposition like Schrödinger's cat. That would be OK if you were advocating a many-worlds type interpretation, but if you want there to be a single definite outcome each time you can't just have evolution according to the Schrödinger equation at all times.
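In symbols, this is the standard von Neumann measurement chain (the schematic labels like |ready⟩ are mine):

```latex
% Purely unitary evolution U entangles the apparatus with the system
% rather than picking out a single definite outcome.
\[
\bigl(\alpha\,|a\rangle + \beta\,|b\rangle\bigr)\otimes|\text{ready}\rangle
\;\xrightarrow{\,U\,}\;
\alpha\,|a\rangle|\text{saw }a\rangle + \beta\,|b\rangle|\text{saw }b\rangle
\]
```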
 
  • #43
JesseM said:
That doesn't really make any sense. If you treat the measurer as a quantum system and allow everything to evolve according to the Schrödinger equation, then measurement will simply entangle the measurer with the system being measured, causing the measurer to be in a macroscopic superposition like Schrödinger's cat. That would be OK if you were advocating a many-worlds type interpretation, but if you want there to be a single definite outcome each time you can't just have evolution according to the Schrödinger equation at all times.

Jesus Christ, the WHOLE universe (you, me, every friggin thing) evolves unitarily, seeded by RANDOM quantum "jumps".

By accepting an ontological probability we get an easy solution to the laws of the universe; we just have to add in that pesky free will, and the next stage of our scientific understanding regarding consciousness.
 
  • #44
unusualname said:
Jesus Christ, the WHOLE universe (you, me, every friggin thing) evolves unitarily, seeded by RANDOM quantum "jumps".
Calm down, man. How would you expect me to know that's what you meant, when you just got through saying everything evolves according to the Schrödinger equation? (You said your model involved 'an ontological probability evolving according to Schrödinger evolution'.) If there are random quantum jumps, then it's not actually true that the Schrödinger equation determines the dynamics at all times. And saying the jumps are "random" is not very specific: is there supposed to be a probability distribution determining the likelihood that another quantum jump will happen a given time after the previous one? Or are you invoking consciousness/free will to determine when jumps occur? What actually happens to the quantum state vector during the jumps? Does it involve jumping onto an eigenstate of some observable (or of a complete set of commuting observables)? If so, is it always a position eigenstate, or can the whole universe jump into a momentum eigenstate, where there is maximal uncertainty in position, and if so what would that be like? If it does involve jumping into an eigenstate, you need to realize this is exactly what is meant by "collapse", so you were wrong to say earlier 'There is no "collapse", trust me'. If it doesn't involve jumping into an eigenstate, what prevents the universe from just continually jumping into states that involve macroscopic superpositions like Schrödinger's cat? You need specific answers to these kinds of questions for your idea to be remotely well-defined.

Anyway, my critique about decoherence still applies. I find it very inelegant to assume objective "jumps" that just happen to behave exactly like the records you would get if you assumed no jumps until the very end of a long series of experiments: just continuous evolution of the quantum state vector according to the Schrödinger equation, with a final "jump"/collapse in which the measurement records go to a classical state. In that case the records would appear to show a series of previous collapses whenever the small quantum system interacted with a large measuring instrument, due to decoherence, even though none actually happened earlier in this model. The fact that decoherence would give "fake" collapses that happen to look just like real collapses seems too coincidental to me.
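A toy numerical illustration of that decoherence point (my own sketch, not a model of any particular experiment): entangle a system qubit with a single "environment" qubit and trace the environment out, and the off-diagonal interference terms of the system's density matrix vanish, so the records look just as if a collapse had occurred:

```python
# Decoherence in miniature: tracing out an entangled environment qubit
# leaves the system in a diagonal (classical-looking) density matrix.
import numpy as np

alpha = beta = 1 / np.sqrt(2)

# After a CNOT-like system-environment interaction:
#   (alpha|0> + beta|1>) |e0>  ->  alpha|0>|e0> + beta|1>|e1>
psi = np.zeros(4, dtype=complex)   # basis order |s,e>: 00, 01, 10, 11
psi[0] = alpha                     # |0>|e0>
psi[3] = beta                      # |1>|e1>

rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices (s, e, s', e')
rho_sys = np.trace(rho, axis1=1, axis2=3)            # partial trace over environment

print(np.round(rho_sys.real, 3))  # [[0.5 0.] [0. 0.5]]: off-diagonal coherence gone
```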
 
  • #45
JesseM said:
Calm down, man. How would you expect me to know that's what you meant, when you just got through saying everything evolves according to the Schrödinger equation? (You said your model involved 'an ontological probability evolving according to Schrödinger evolution'.) If there are random quantum jumps, then it's not actually true that the Schrödinger equation determines the dynamics at all times. And saying the jumps are "random" is not very specific: is there supposed to be a probability distribution determining the likelihood that another quantum jump will happen a given time after the previous one? Or are you invoking consciousness/free will to determine when jumps occur? What actually happens to the quantum state vector during the jumps? Does it involve jumping onto an eigenstate of some observable (or of a complete set of commuting observables)? If so, is it always a position eigenstate, or can the whole universe jump into a momentum eigenstate, where there is maximal uncertainty in position, and if so what would that be like? If it does involve jumping into an eigenstate, you need to realize this is exactly what is meant by "collapse", so you were wrong to say earlier 'There is no "collapse", trust me'. If it doesn't involve jumping into an eigenstate, what prevents the universe from just continually jumping into states that involve macroscopic superpositions like Schrödinger's cat? You need specific answers to these kinds of questions for your idea to be remotely well-defined.

Anyway, my critique about decoherence still applies. I find it very inelegant to assume objective "jumps" that just happen to behave exactly like the records you would get if you assumed no jumps until the very end of a long series of experiments: just continuous evolution of the quantum state vector according to the Schrödinger equation, with a final "jump"/collapse in which the measurement records go to a classical state. In that case the records would appear to show a series of previous collapses whenever the small quantum system interacted with a large measuring instrument, due to decoherence, even though none actually happened earlier in this model. The fact that decoherence would give "fake" collapses that happen to look just like real collapses seems too coincidental to me.

When Heisenberg had his "Helgoland moment" he didn't quite follow it through (impossible given the circumstances). I had a similar revelation which explained how the ontological probability truly works, and I have given it to you. Like Heisenberg, I am a bit thick about the whole mathematical implications; we just need a Dirac or Jordan figure to make it work. I'll do it myself if they don't appear.
 