Nobody understands quantum physics?

In summary, Feynman's statement that nobody understands quantum mechanics is often quoted as a witty remark, but it highlights the fact that quantum mechanics is not understood in the same way as classical mechanics: unlike classical mechanics, quantum mechanics does not assign values to all observables in the absence of measurement. It is nevertheless considered the best-understood and most rigorously tested theory ever. There have been many interpretations of quantum mechanics, and the "measurement problem" is still being debated, but for most practical applications the minimal interpretation is sufficient. It is possible that new observational facts may one day lead to a major revision of quantum theory, as happened with the development of quantum electrodynamics. Despite these debates, quantum mechanics remains our best description of the empirical facts.
  • #141
LittleSchwinger said:
That's handled with POVM tomography though, not randomised observables.
Any experiment must account for the possibility of noise and experimental error. You might devise some way to use POVM tomography to constrain the distribution of ##\eta##, but at the end of the day there will always be some statistical uncertainty.
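A minimal Monte-Carlo sketch of that residual uncertainty (toy numbers; assuming the unsharp spin POVM ##(\hat 1\pm\eta\hat\sigma_z)/2## and a known input state): the estimate of ##\eta## carries a statistical error that shrinks like ##1/\sqrt{N}## but never vanishes.

```python
import numpy as np
rng = np.random.default_rng(1)

# Estimate the sharpness eta of the unsharp POVM {(1 +- eta*sigma_z)/2}
# from a finite number of trials on a known input state |0>:
# p(+|0) = (1 + eta)/2, so eta_hat = 2 * (clicks/N) - 1. (Toy numbers.)
eta_true = 0.8
for N in [100, 10_000, 1_000_000]:
    clicks = rng.binomial(N, (1 + eta_true) / 2)
    eta_hat = 2 * clicks / N - 1
    stderr = 2 * np.sqrt(0.25 / N)   # binomial error bound on eta_hat
    print(N, round(eta_hat, 4), "+/-", round(stderr, 4))
# The uncertainty shrinks like 1/sqrt(N) but never vanishes.
```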
 
  • #142
Couchyam said:
Any experiment must account for the possibility of noise and experimental error. You might devise some way to use POVM tomography to constrain the distribution of ##\eta##, but at the end of the day there will always be some statistical uncertainty.
That's incorporated into the POVM.
 
  • Like
Likes vanhees71
  • #143
LittleSchwinger said:
That's incorporated into the POVM.
To my knowledge, POVM stands for "Projection Operator Valued Measure." Is the measure normalized to a particular number in the type of POVM that you have in mind?
 
  • #144
WernerQH said:
For me, this is just rationalization. Without realizing it, like so many physicists you have fallen victim to Bohr's tranquilizing philosophy ("Beruhigungsphilosophie", as Einstein put it). It's probably pointless to continue the discussion, if you refuse to even consider the possibility of a deeper understanding of quantum theory. Bell's qualms about the theory proved to be remarkably fertile, even leading to a kind of quantum information "industry", as you yourself have admitted. In his essay "Against Measurement" Bell argued that the axioms of such a fundamental theory should be formulated without vague notions like "measurement" or "system". But if you believe there is no better way, then nothing will convince you.
But Bell's work in particular shows that standard QT is all there is. There's no realism, i.e. there's precisely the randomness about the outcome of measurements that's observed in the many Bell tests based on Bell's work. For me physics is about what can be objectively observed and described by mathematical theories. Since QT does this with great accuracy I indeed don't know what's still to be "deeper understood". It's not Beruhigungsphilosophie but simply the best description of the empirical facts.

What's really not yet understood is quantum gravity, but that's a scientific and not a philosophical problem.
 
  • Like
Likes Paul Colby and LittleSchwinger
  • #145
Couchyam said:
To my knowledge, POVM stands for "Projection Operator Valued Measure." Is the measure normalized to a particular number in the type of POVM that you have in mind?
POVM = positive operator valued measure. It's a more general description of measurements that are not von Neumann filter measurements described by projection operators (PVMs = projection-valued measures).
 
  • Like
Likes LittleSchwinger
  • #146
vanhees71 said:
POVM = positive operator valued measure. It's a more general description of measurements that are not von Neumann filter measurements described by projection operators (PVMs = projection-valued measures).
And... are they typically normalized to something?
 
  • #147
Couchyam said:
And... are they typically normalized to something?
They sum up to the identity operator.
https://en.wikipedia.org/wiki/POVM

As per Naimark's dilation theorem one can explain the POVM from a PVM in a larger space. This in itself is easy to see, as it gives a distribution based on "ignorance" of some of the reduced information. Also there is no conceptual problem here as long as we stick to subatomic physics, i.e., a dominant observing context with a small quantum system.
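To make the normalization and the dilation concrete, here is a minimal numpy/scipy sketch (the sharpness value is illustrative, not tied to any particular experiment): a two-outcome unsharp qubit POVM, a check that its elements sum to the identity, and a Naimark dilation to a PVM on system ⊗ ancilla that reproduces the same statistics.

```python
import numpy as np
from scipy.linalg import sqrtm

I2 = np.eye(2)
sz = np.diag([1.0, -1.0]).astype(complex)

# A two-outcome unsharp spin-z POVM with sharpness eta (illustrative value)
eta = 0.8
E = [(I2 + eta * sz) / 2, (I2 - eta * sz) / 2]
assert np.allclose(E[0] + E[1], I2)        # elements sum to the identity

# Born rule for a POVM: p_i = tr(rho E_i)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |+x>
rho = np.outer(psi, psi.conj())
p_povm = [np.trace(rho @ Ei).real for Ei in E]

# Naimark dilation: with Kraus operators M_i = sqrt(E_i), the isometry
# V = sum_i M_i (x) |i> maps the system into system (x) ancilla, where the
# projectors P_i = 1 (x) |i><i| form a PVM with the same statistics.
M = [sqrtm(Ei) for Ei in E]
V = sum(np.kron(Mi, np.eye(2)[:, [i]]) for i, Mi in enumerate(M))
assert np.allclose(V.conj().T @ V, I2)     # V is an isometry

p_pvm = []
for i in range(2):
    ket = np.eye(2)[:, [i]]
    P = np.kron(I2, ket @ ket.T)
    p_pvm.append(np.trace(P @ (V @ rho @ V.conj().T)).real)

assert np.allclose(p_povm, p_pvm)          # POVM = PVM + ignored ancilla
print(p_povm)                              # [0.5, 0.5] for |+x> along z
```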

My objection was: what if the agent can't encode this larger space? The usual thinking doesn't consider the explicit context; you can just imagine enlarging the Hilbert space mathematically, as you don't care about the context. It's just math! The "problem" does not lie in the math, but in the information processing of the implied information, and where this "physically" takes place: in the system, in the environment, or where? In particle physics we know the answer: the application of QFT means all information processing takes place in the classical environment, in the lab. And then none of my points make any sense! But if one starts to include cosmological perspectives, where the observer does not surround an atomic-scale collision domain but is, on the contrary, immersed in the system of inquiry, then the method of enlarging the Hilbert space becomes problematic, to me at least.

The difference in relative size between the parts on each side of the Heisenberg cut seems critical to me. It's only when the observing side is dominant that the effective classical reference is in place, and this is also when QFT as it stands indeed makes sense. Here it is also not necessary to worry about "where information is processed", as the environment is so dominant that we can in principle do whatever we want and not be limited by computational speed or memory - until we start to speak about black holes (and include gravity).

/Fredrik

 
  • #148
A POVM is fairly mundane stuff. One doesn't need to talk about cosmology, black holes, Heisenberg cuts or anything else. A POVM models measurements that have errors, thermal noise, false positives, dark counts, indirect measurements via coupling to an ancilla, and so on. It's just as vanhees71 said above: a more general description of measurements than PVMs.

Thus, regarding Couchyam's earlier statement, we don't need to consider randomised operators when discussing imprecise, noisy, etc. measurements. We just use POVMs.
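For a concrete instance (a minimal sketch; efficiency and dark-count numbers are made up), here is the standard two-outcome POVM for a lossy photon detector, which encodes false positives and missed detections directly in the measurement operators:

```python
import numpy as np

# POVM for a lossy single-photon detector on span{|0>, |1>}: efficiency p
# (a photon causes a click) and dark-count probability d (a click with no
# photon). The numbers are purely illustrative.
p, d = 0.7, 0.01
proj0 = np.diag([1.0, 0.0])                  # |0><0| (vacuum)
proj1 = np.diag([0.0, 1.0])                  # |1><1| (one photon)
E_click = d * proj0 + p * proj1
E_no_click = (1 - d) * proj0 + (1 - p) * proj1
assert np.allclose(E_click + E_no_click, np.eye(2))   # normalization

rho = proj1                                  # a single photon arrives
print(np.trace(rho @ E_click).real)          # 0.7 = click probability
print(np.trace(proj0 @ E_click).real)        # 0.01 = dark count (false positive)
```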

vanhees71 said:
What's really not yet understood is quantum gravity, but that's a scientific and not a philosophical problem.
I agree.

My own take is that quantum theory was mostly sorted out conceptually by 1935. Heisenberg formulated the beginnings of the theory in 1925, but there were several conceptual points to be cleared up. These include details like the Mott paper on how the tracks observed in a cloud chamber were compatible with wave mechanics, von Neumann putting the theory on a sound mathematical footing, entanglement being first articulated, properly understanding scattering theory, the realization that non-commuting variables were not simply a matter of "disturbance", and so on.

What wasn't fully appreciated by 1935 were the deeper uses that could be made of entanglement and why the collective coordinates of macroscopic bodies, e.g. positions of planets, motion of a car, obey classical probability theory.

Entanglement has since been much better understood. For the latter, we can now show how coarse-graining, decoherence, the exponential size of the Hilbert space and many other effects suppress the coherences between macro-coordinates of a typical large object to below an inverse googolplex in magnitude.

Had there really been other problems they would have shown up in formulating QFT. The actual issues there however were completely separate: correctly formulating the Poincaré group for Hilbert spaces, renormalisation, the relation between particles and fields, treating couplings to massless states (i.e. gauge theories).
 
  • Like
Likes mattt, dextercioby and gentzen
  • #149
LittleSchwinger said:
A POVM is fairly mundane stuff. One doesn't need to talk about cosmology, black holes, Heisenberg cuts or anything else. A POVM models measurements that have errors, thermal noise, false positives, dark counts, indirect measurements via coupling to an ancilla, and so on. It's just as vanhees71 said above: a more general description of measurements than PVMs.
I think what @Fra was alluding to is that a POVM is more clearly relevant to open quantum systems whose dynamics are induced by a larger 'coherent' quantum system (although I may have misinterpreted Fra's comment). A POVM can always be expanded into a PVM theoretically, but practically there's a limit to the complexity of the Hilbert space of a typical laboratory or experimenter, and thus there may be some POVMs (for especially complex systems) that cannot be expanded into a controlled, laboratory-friendly, bona fide PVM (although a PVM may still exist in principle).
LittleSchwinger said:
Thus, regarding Couchyam's earlier statement, we don't need to consider randomised operators when discussing imprecise, noisy, etc. measurements. We just use POVMs.
I think I've found the source of confusion. My example of measuring ##S_{\hat x}## versus ##S_{\hat x+\vec\eta}## may have been misleading, because from a purely quantum mechanical perspective, the randomness in any measurement could just be absorbed either by the state information itself, or by the way in which the chosen observables partition state space (i.e. the POVM might not be a PVM).
There is an important point to note, however, which is loosely analogous to how a thermal average differs from a quenched average in statistical mechanics. At the end of the day, in a Stern-Gerlach style apparatus, there is a definite classical direction ##\hat x+\vec\eta## of the magnetic field that deflects the beam, and that direction needn't change (randomly) between trials; in fact, it would probably remain approximately constant throughout the experiment. The perturbation ##\vec\eta## is determined when the experiment is set up, and might even change over time depending on how rambunctious the undergraduates are, and the difference between ##\hat x+\vec\eta## and ##\hat x## might not be explicable through an appropriate POVM. Similarly, the actual Hamiltonian of a particular experimental realization might differ slightly from the idealized theoretical version (two fields that are supposed to be perpendicular might be slightly misaligned in a particular setup, etc.). Does that partially clarify where I was coming from?
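A small numpy sketch of that distinction (the tilt sizes are invented): a constant misalignment ##\vec\eta## still gives a sharp projective measurement, just along a slightly wrong axis, whereas a tilt redrawn at random on every trial averages into a genuinely unsharp POVM element.

```python
import numpy as np
rng = np.random.default_rng(0)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def proj_up(n):
    """Projector onto spin-up along the direction n: (1 + n.sigma)/2."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return (np.eye(2) + n[0] * sx + n[1] * sy + n[2] * sz) / 2

# (a) Fixed misalignment: the apparatus measures along x_hat + eta on every
# trial. This is still a sharp projective measurement, just along a
# slightly wrong axis (eta values are made up).
eta = np.array([0.0, 0.05, 0.08])
P_fixed = proj_up(np.array([1.0, 0, 0]) + eta)
print(np.linalg.eigvalsh(P_fixed))   # ~[0, 1]: sharp but rotated

# (b) Per-trial random tilt: averaging the projector over eta gives an
# unsharp POVM element (shrunk Bloch vector), which is what a POVM models.
E_avg = sum(proj_up(np.array([1.0, 0, 0]) + rng.normal(scale=0.05, size=3))
            for _ in range(20000)) / 20000
print(np.linalg.eigvalsh(E_avg))     # ~[eps, 1 - eps]: genuinely unsharp
```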
 
  • #150
vanhees71 said:
But Bell's work in particular shows that standard QT is all there is.
Does it? Hasn't it widened the field (quantum cryptography, quantum "teleportation", quantum computing)?
vanhees71 said:
There's no realism, i.e. there's precisely the randomness about the outcome of measurements that's observed in the many Bell tests based on Bell's work.
My conclusion is the exact opposite. I'd rather give up the "sacred" locality than realism. For me, realism means accepting the results of experiments as real; it does not mean we have to believe in the existence of photons with definite polarization states.
vanhees71 said:
For me physics is about what can be objectively observed and described by mathematical theories. Since QT does this with great accuracy I indeed don't know what's still to be "deeper understood". It's not Beruhigungsphilosophie but simply the best description of the empirical facts.
I agree that we are in possession of a very good description, but I doubt that we have found the best formulation. Obviously you can't conceive of the possibility that quantum theory (after almost a century!) may be in a situation similar to that of electrodynamics before 1905.
 
  • Like
Likes physika, Couchyam, kurt101 and 2 others
  • #151
WernerQH said:
Does it? Hasn't it widened the field (quantum cryptography, quantum "teleportation", quantum computing)?
Sure, but all this is just standard quantum theory, and it's possible because entanglement is the correct description of Nature and not "realism": all Bell experiments prove (together with the validity of locality in the sense of microcausality of relativistic QFT) that the observables don't take predetermined values, i.e., there's "true randomness" in Nature, which is not due to our ignorance of the exact state as in classical statistical physics. E.g., if you have a two-photon Bell state (say the polarization-singlet state), then the single-photon polarizations are maximally random, i.e., the reduced statistical operator describing them is ##\hat{1}/2##, i.e., the single photons are perfectly unpolarized.
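That last claim is a short partial-trace computation; here is a minimal numpy check (a sketch, using ##|H\rangle,|V\rangle## as the computational basis):

```python
import numpy as np

# Polarization-singlet Bell state |psi> = (|HV> - |VH>)/sqrt(2)
psi = np.zeros(4, dtype=complex)
psi[1], psi[2] = 1 / np.sqrt(2), -1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Reduced state of photon 1: partial trace over photon 2
rho1 = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho1.real)   # [[0.5, 0], [0, 0.5]] = 1/2: maximally mixed, unpolarized
```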
WernerQH said:
My conclusion is the exact opposite. I'd rather give up the "sacred" locality than realism. For me, realism means accepting the results of experiments as real; it does not mean we have to believe in the existence of photons with definite polarization states.
You can't give up locality without giving up the most successful quantum theory ever, i.e., local (microcausal) relativistic QFT, on which the Standard Model is based and which is more successful than the HEP community wishes!
WernerQH said:
I agree that we are in possession of a very good description, but I doubt that we have found the best formulation. Obviously you can't conceive of the possibility that quantum theory (after almost a century!) may be in a situation similar to that of electrodynamics before 1905.
Of course, the current understanding is incomplete, but not because of some philosophical quibbles about the interpretation of the present QT formalism; rather, because we don't have a satisfactory quantum theory of gravitation and/or a quantum theory of spacetime. It's quite possible that a future solution of this problem will result in a completely new paradigm, with the classical spacetime model(s) used in our contemporary physics being an "emergent phenomenon".
 
  • Haha
Likes WernerQH
  • #152
We're obviously talking past each other!
:frown:
 
  • Like
Likes Couchyam
  • #153
vanhees71 said:
After the magnet the spin component in direction of the field is (almost completely) entangled with position, i.e., in each of the two partial beams "selected" by the magnet you have a well-prepared spin component.
No. You have two well-prepared spin components, one in the up beam and one in the down beam. What is prepared is a single particle in a superposition, so immediately before the measurement is taken, it "is in" both partial beams. The measurement decides on one of the two spots where the particle can possibly be recorded. (More precisely, after - and not before - the measurement has been taken, it is known at which of the two spots the screen responded.)
A. Neumaier said:
The question is when, in a quantum description of the detector, the definite value is obtained.
Thus: in the quantum description of the measurement device, at which time is the up spot (or the down spot) obtained?
Morbert said:
We should be careful not to attribute a property like ##+1## to the object of measurement. Such a property would only be attributed to the classical datum post-measurement.
But after the measurement, this property is a definite property of the quantum measurement device. Thus it must have been somehow obtained dynamically!
 
Last edited:
  • Like
Likes Lord Jestocost and mattt
  • #154
A. Neumaier said:
No. You have two well-prepared spin components, one in the up beam and one in the down beam. What is prepared is a single particle in a superposition, so immediately before the measurement is taken, it "is in" both partial beams. The measurement decides on one of the two spots where the particle can possibly be recorded.
Yes, and if I select a particle from this beam, it's in a definite spin state.
A. Neumaier said:
Thus: in the quantum description of the measurement device, at which time is the up spot (or the down spot) obtained?
It's obtained when the detector registers the particle, when else?
A. Neumaier said:
But after the measurement, this property is a definite property of the quantum measurement device. Thus it must have been somehow obtained dynamically!
Sure, how else?
 
  • #155
A. Neumaier said:
The measurement decides on one of the two spots where the particle can possibly be recorded.
In which interpretation? Do you have a reference to back up that assertion (with respect to a specific interpretation)?
 
  • #156
vanhees71 said:
Yes, and if I select a particle from this beam, it's in a definite spin state.
The SG experiment has no selection. How do you select a particle without adding a filter before the screen?

vanhees71 said:
It's obtained when the detector registers the particle, when else?

Sure, how else?
vanhees71 said:
The classical behavior of macroscopic matter, including matter used for measurements, can be understood from quantum many-body theory.
What is to be explained is how the definite macroscopic measurement result read from the measurement device, interpreted by quantum many-body theory, is obtained through the interaction with the silver atom prepared in a superposition.
A. Neumaier said:
The measurement decides on one of the two spots where the particle can possibly be recorded.
gentzen said:
In which interpretation? Do you have a reference to back up that assertion (with respect to a specific interpretation)?
My statement just means that after (and not before) the measurement has been taken, it is known at which of the two spots (where the particle can possibly be recorded) the screen responded. This comes directly from the observational facts, and is independent of quantum physics, hence independent of any interpretation of it.
 
  • Like
Likes Lord Jestocost and mattt
  • #157
A. Neumaier said:
The SG experiment has no selection. How do you select a particle without adding a filter before the screen?
I select a particle simply by taking it from the region where the particles have the spin component I want. Of course, I can also block the other beam with some filter. That doesn't make a difference.
A. Neumaier said:
What is to be explained is how the definite macroscopic measurement result read from the measurement device, interpreted by quantum many-body theory, is obtained through the interaction with the silver atom prepared in a superposition.
It's a silver atom reacting with the detector material by, e.g., kicking out an electron, which is then electronically registered; or, in the original setup, it got stuck on the plate and could then be made visible by photographic development (or initially thanks to the smoke from cheap cigars ;-)).
A. Neumaier said:
My statement just means that after (and not before) the measurement has been taken, it is known at which of the two spots (where the particle can possibly be recorded) the screen responded. This comes directly from the observational facts, and is independent of quantum physics, hence independent of any interpretation of it.
Of course, you don't know where each individual silver atom will end up, but you know that if you select a silver atom from one of the regions, you have a particle with a determined spin component. That's always the case with probabilistic statements. This indeed has nothing specifically to do with quantum physics.
 
  • #158
A. Neumaier said:
My statement just means that after (and not before) the measurement has been taken, it is known at which of the two spots (where the particle can possibly be recorded) the screen responded. This comes directly from the observational facts, and is independent of quantum physics, hence independent of any interpretation of it.
That way to put it is fine for me. My worry with the initial statement that "the measurement decides" is that people tend to interpret that literally. And I was also unsure whether you did this too, with the intention to ridicule some specific interpretation. (That is why I asked for a reference.)
 
  • Like
Likes vanhees71
  • #159
vanhees71 said:
I select a particle simply by taking it from the region where the particles have the spin component I want. Of course, I can also block the other beam with some filter. That doesn't make a difference.
If you have only one particle prepared, you don't know which region to choose.
vanhees71 said:
It's a silver atom reacting with the detector material by, e.g., kicking out an electron, which is then electronically registered; or, in the original setup, it got stuck on the plate and could then be made visible by photographic development (or initially thanks to the smoke from cheap cigars ;-)).
But in the quantum description, the silver atom is in a superposition of position states, whose quantum evolution in the interacting system particle+detector does not lead to a definite macroscopic measurement result of the detector.
 
Last edited:
  • Like
Likes mattt
  • #160
A. Neumaier said:
If you have only one particle prepared, you don't know which region to choose.
I know which region to choose, given the spin state I want to investigate. Of course, I don't know beforehand whether I get something to experiment with or not; I only know the probabilities of whether I get a particle or not. It's a conditional probability: if I get a particle, then I know it's in the wanted spin state. If I don't get a particle, I know it's in the other region with the other possible spin state. As you said before, that's as with any probabilistic description, not specific to "quantum probabilities". Obviously, I don't understand where you think the problem is.
A. Neumaier said:
But in the quantum description, the silver atom is in a superposition of position states, whose quantum evolution in the interacting system particle+detector does not lead to a definite macroscopic measurement result of the detector.
Let's describe the particle with a wave function for simplicity. You can of course also describe it by a general mixed state, but the qualitative results we are discussing here are the same.

You prepare a silver atom to have a pretty well defined momentum in one direction and to be quite well localized in the transverse direction (you can use a Gaussian wave packet). When released, it runs through the magnetic field. The wave function develops such that after the magnet it's peaked around two positions, due to the deflection by the inhomogeneous magnetic field depending on the spin component in the direction of the field. This implies that a particle in the region around either peak has a well-determined spin component in the direction of the magnetic field, i.e., that the spin component and the position are entangled. That's all that's needed to select either of the spin states, i.e., you simply consider only particles in one region or the other.
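Schematically, in numpy (Gaussian packets in arbitrary units; all parameters invented), one can see that conditioning on the region around either peak leaves an essentially pure spin state:

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def packet(center, width=1.0):
    g = np.exp(-((x - center) ** 2) / (2 * width ** 2))
    return g / np.sqrt(np.sum(np.abs(g) ** 2) * dx)

# Schematic post-magnet state: the up component is deflected to +x0, the
# down component to -x0, |Psi> = (phi_+ |up> + phi_- |down>)/sqrt(2).
x0 = 4.0
Psi = np.stack([packet(+x0), packet(-x0)]) / np.sqrt(2)   # shape (spin, x)

# Conditional spin state for particles found in the region x > 0
upper = x > 0
block = Psi[:, upper]
rho_spin = (block @ block.conj().T) * dx
rho_spin = rho_spin / np.trace(rho_spin)
print(np.round(rho_spin.real, 6))   # ~[[1, 0], [0, 0]]: definite "up"
```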

A definite "outcome" at the detector is of course described by the interaction of the atom with the detector material, and it's random for each single atom whether the one or the other detector will detect the particle. I still don't get where the problem might be.
 
  • #161
vanhees71 said:
A definite "outcome" at the detector is of course described by the interaction of the atom with the detector material, and it's random for each single atom whether the one or the other detector will detect the particle. I still don't get where the problem might be.
I think A. Neumaier might be asking how the outcome being definite at the detector is modelled quantum mechanically, i.e. where in the model of the atom+detector is this definiteness seen.
 
  • Like
Likes dextercioby, Fra and mattt
  • #162
LittleSchwinger said:
I think A. Neumaier might be asking how the outcome being definite at the detector is modelled quantum mechanically, i.e. where in the model of the atom+detector is this definiteness seen.
It must be modelled by the Hamiltonian of the detector+atom, i.e., the process of measurement is a physical interaction in the Hamiltonian of the bigger Hilbert space. This is IMO a simple motivator for trying to unify physics and inference in a deeper way (which is what I keep nagging about).

At each level the Hamiltonian is somehow "given". But if we could understand the measurement process, shouldn't that help us with the emergent Hamiltonians of bigger systems?

It seems to me the definiteness of an outcome is defined relative to the original observer (to which the term "outcome" refers), which is the detector in this example. Thus in the quantum mechanical description of detector+atom, the notion of a definite outcome for an internal part is undefined.

/Fredrik
 
  • #163
But an "outcome" is due to a macroscopic observable, e.g., a visible spot on a photoplate, and as such, via coarse graining, a classical description is adequate. As with any classical phenomenon, it's the result of sufficient coarse graining and the associated decoherence, which leads to a "definite classical outcome".
 
  • Like
Likes LittleSchwinger
  • #164
But in the QM description of atom+detector, the "macroscopic variable" is now also a quantum system.

/Fredrik
 
  • #165
vanhees71 said:
As with any classical phenomenon, it's the result of sufficient coarse graining and the associated decoherence, which leads to a "definite classical outcome".
Taking this as the ultimate method of explanation, you are forced, in effect, to consider the whole universe as a quantum system. But how do you confirm that model by experiment (i.e., to the high standards you are used to from particle experiments)? THIS is my problem. I understand the idea of decoherence, but that does not solve the problem.

/Fredrik
 
  • #166
A. Neumaier said:
But after the measurement, this property is a definite property of the quantum measurement device. Thus it must have been somehow obtained dynamically!

"Therefore, the program of computing what the effect of the [measurement] disturbance was and correcting for it is, in general, impossible. Accordingly, the two basic tenets of the theory of macroscopic measurement are both violated. Either the interactions cannot be made arbitrarily weak because of the phenomenon of atomicity, or if we wish to accept this and correct for it, we cannot do so because we do not have a detailed, deterministic theory of each individual event" --Schwinger

Using QM to resolve the "somehow" in your message above is presumably impossible. We can't use quantum theory to predict in detail what events will occur. We can assign probabilities to possible alternative histories of events during the measurement process, but no dynamics will ever explain why one history actually occurs over other alternatives.
 
  • Like
Likes WernerQH
  • #167
vanhees71 said:
But an "outcome" is due to a macroscopic observable, e.g., a visible spot on a photoplate, and as such, via coarse graining, a classical description is adequate. As with any classical phenomenon, it's the result of sufficient coarse graining and the associated decoherence, which leads to a "definite classical outcome".
Please point to a paper describing this in a measurement context.

Standard coarse-graining would produce a definite classical outcome independent of the atomic input. Thus something more must be going on that turns the unitary dynamics into a bistable system with two macroscopic outcomes that depend stochastically on the state of the atom.
 
  • Like
Likes gentzen and mattt
  • #168
Morbert said:
Using QM to resolve the "somehow" in your message above is presumably impossible.
You presume this, but many don't. The quest for answering this is the measurement problem.
Morbert said:
We can't use quantum theory to predict in detail what events will occur. We can assign probabilities to possible alternative histories of events during the measurement process, but no dynamics will ever explain why one history actually occurs over other alternatives.
According to some interpretations that you take as being unquestionably assumed - but not in all!

In Bohmian mechanics, one can (at least in principle) predict in detail what events will occur, at the expense of introducing degrees of freedom that are nowhere used in practice.

In my thermal interpretation, one can also (at least in principle) predict in detail what events will occur, without introducing degrees of freedom that are nowhere used in practice.
 
  • Like
  • Skeptical
Likes PeterDonis, WernerQH and gentzen
  • #169
A. Neumaier said:
You presume this, but many don't. The quest for answering this is the measurement problem.
I'm not going to disparage any academic projects that attempt to ground QM's probabilistic character in something more deterministic, but I will defend QM against the charge of having a measurement problem insofar as I think a probabilistic interpretation can be discussed and applied in a consistent and unambiguous manner.
A. Neumaier said:
In my thermal interpretation, one can also (at least in principle) predict in detail what events will occur, without introducing degrees of freedom that are nowhere used in practice.
I will have a look at this again. IIRC, the thermal interpretation frames the response rate of a detector as an "event" in and of itself, as opposed to a statement about the likelihood of events?
 
  • #170
vanhees71 said:
Perhaps for lack of interest in dealing with the real problems?
You mean like world peace and cancer cure? :wink:
 
  • Like
  • Haha
Likes physika, gentzen and vanhees71
  • #171
I meant, of course, real problems in physics as a natural science... Looking for a cancer cure is also much more promising using biology and medicine rather than philosophy. Concerning world peace, I guess one can use philosophy to make it either better or worse... But that's way off-topic now!
 
  • #172
vanhees71 said:
But an "outcome" is due to a macroscopic observable, e.g., a visible spot on a photoplate, and as such, via coarse graining, a classical description is adequate. As with any classical phenomenon, it's the result of sufficient coarse graining and the associated decoherence, which leads to a "definite classical outcome".
Yeah I agree, just thought I'd clarify what I saw the question to be.
 
  • Like
Likes vanhees71
  • #173
Morbert said:
I will defend QM against the charge of having a measurement problem insofar as I think a probabilistic interpretation can be discussed and applied in a consistent and unambiguous manner.
There is no measurement problem as long as one treats the detector as a classical object, as in the Copenhagen interpretation.

But once one jointly claims (as vanhees71 does)
  • that there is no split between quantum and classical, and
  • that coarse-graining explains everything about the detector,
the measurement problem becomes unavoidable.

For then (and only then) one must explain in particular why when fed with a single particle in a spatial superposition, the detector produces a definite outcome that depends stochastically on the superposition. Decoherence is of no help here, as Max Schlosshauer (who wrote the definitive book about the matter) acknowledged.

Morbert said:
I will have a look at this again. IIRC, the thermal interpretation frames the response rate of a detector as an "event" in and of itself, as opposed to a statement about the likelihood of events?
No. Each single response is an event, and the rate of events is predicted by POVMs, as everywhere in quantum mechanics. What is different in the thermal interpretation is that the necessary link between measurement and eigenvalues is denied - POVMs work independently of any eigenvalue analysis.
 
Last edited:
  • Like
Likes gentzen
  • #174
(My emphasis)
A. Neumaier said:
But once one jointly claims (as vanhees71 does)
  • that there is no split between quantum and classical, and
  • that coarse-graining explains everything about the detector,
the measurement problem becomes unavoidable.

For then (and only then) one must explain in particular why when fed with a single particle in a spatial superposition, the detector produces a definite outcome that depends stochastically on the superposition. Decoherence is of no help here, as Max Schlosshauer (who wrote the definitive book about the matter) acknowledged.
Modelling the interaction between the particle in spatial superposition and the detector array with a quantum theory will result in a new superposition state that entangles the particle with the detector. Decoherence and quantum Darwinism would select the sample space of outcomes most robust to observation, but would not select a definite outcome.
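A toy numpy illustration of that point (the environment-overlap values are invented): as the overlap of the environment states goes to zero, the pointer's reduced density matrix becomes diagonal, i.e. a classical mixture of the two readings, but nothing in it singles out which reading occurs.

```python
import numpy as np

# Pointer reduced state after pre-measurement and decoherence:
# (|up>|P_up> + |down>|P_dn>)/sqrt(2) entangled with an environment whose
# states overlap by r = <E_up|E_dn>. Decoherence drives r -> 0.
c_up, c_dn = 1 / np.sqrt(2), 1 / np.sqrt(2)   # illustrative amplitudes
for r in [1.0, 0.1, 1e-12]:
    rho_pointer = np.array([
        [abs(c_up) ** 2,           c_up * np.conj(c_dn) * r],
        [np.conj(c_up) * c_dn * r, abs(c_dn) ** 2],
    ])
    print(r, np.round(rho_pointer, 13))
# As r -> 0 the state approaches diag(1/2, 1/2): a classical *mixture* of
# the two pointer positions. Nothing here picks out which spot occurs.
```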

My question: Why is this a problem? Why must we explain in particular the definite outcome? Why can't we accept QM as always treating all possible outcomes on equal footing apart from their probabilities? Perhaps vanhees71 has some stronger sense of "explains everything about the detector" in mind, or perhaps not.
A. Neumaier said:
Each single response is an event, and the rate of events is predicted by POVMs, as everywhere in quantum mechanics. What is different in the thermal interpretation is that the necessary link between measurement and eigenvalues is denied - POVMs work independently of any eigenvalue analysis.
Ok, but it is the rate of events that is explained/predicted, as opposed to the particular event of a single run, right?
 
Last edited:
  • #175
Morbert said:
Why must we explain in particular the definite outcome? Why can't we accept QM as always treating all possible outcomes on equal footing apart from their probabilities?
Because one of the goals of theoretical physics is to explain observations. We observe definite outcomes, we don't observe all possible outcomes on an equal footing.
 
  • Like
Likes vanhees71 and mattt
