Is quantum theory a microscopic theory?

In summary, the conversation discusses whether quantum theory is a theory of the microscopic world or not. While some interpretations of quantum theory explicitly deal with microscopic objects, the minimal instrumental view refrains from doing so and only focuses on predicting the probabilities of macroscopic measurement outcomes. The conversation also touches on the idea of microscopic objects being defined through their detection, which would make them not truly microscopic. Ultimately, the conversation suggests that quantum theory can only be considered a theory of the micro world if one adopts an ontic interpretation.
  • #176
vanhees71 said:
The more I listen to these philosophical debates about apparent problems of QT and its "ontology", the less I understand them. I come more and more to the conclusion that people who have such problems just cannot accept that nature behaves differently from what we expect based on our everyday experience with "classical phenomena", which are themselves only apparent, arising from a very coarse-grained observation of the relevant macroscopic degrees of freedom.
It's not a matter of acceptance. Even in the macroscopic world, before the gravitational lensing effect was recognized, people thought they were seeing twin or identical galaxies and stars when in fact they were looking at distorted images of a single object. This is an example of incomplete knowledge.

http://www.einstein-online.info/spotlights/grav_lensing_history.1.html
Fall-out of the article in "Science"

Right after Einstein's brief text had been published, it was followed by a number of articles by well-known scientists, who picked up where Einstein had left off.
Fritz Zwicky (1898-1974), an astronomer at the California Institute of Technology, discussed the possibility of observing the lensing effect in the case of the recently discovered extragalactic nebulae, in other words: other galaxies. The typical masses, sizes and mutual distances of galaxies are such that double images of a distant galaxy should be significantly more frequent than double images of stars: The necessary near-alignment of a closer object, a more distant object and an observer here on Earth is much more probable for galaxies than for stars.
Henry Norris Russell, the astronomer from Princeton, published an article in which he speculated about the inhabitants of a hypothetical planet orbiting the White Dwarf companion of the star Sirius. What would they see during a total eclipse - on the occasion when, from the point of view of these inhabitants, the White Dwarf star would move in front of the more distant Sirius? As White Dwarfs are very compact objects, light from Sirius passing close to the companion would be markedly deflected. Of course, astronomers on Earth would not be able to see this relativistic gala performance, and in fact Russell cites this scenario as a perfect test of relativity theory which, regrettably, is impossible to put into practice.
Indisputably, Einstein's little publication had lent credibility to the idea of gravitational lensing, and the concept became part of the general knowledge of theoretical astronomers.
 
  • #177
atyy said:
Reality is just a tool to predict the results of observations.
More properly, observations are just a tool for predicting reality.
 
  • Like
Likes Jimster41 and julcab12
  • #178
Auto-Didact said:
The uncomfortableness isn't a matter of interpretative human psychology but a matter of mathematical self-consistency; the fact that in the minimal interpretation of QM things cannot be defined without making references to macroscopic devices simply means that this theoretical construction is de facto fundamentally logically inconsistent.

The reason we don't have to talk about measurement in classical physics is the fact that we can always control and account for the influence of the measuring bodies on the objects under investigation. For example we can make the effect of the measuring bodies as small as we want, or if it is finite, we can control and take that finite effect into account in our description. This means that we can talk about the state of a system, for example the position of a particle, as something that exists independently of observation. This is not possible in quantum physics because the effect of the measuring bodies is uncontrollable. If a body is to serve as a clock, then there will be an uncontrollable exchange of energy with the clock, which cannot be separately taken into account in order to specify the state of the objects. Any attempt to do so would interfere with the capability of the body to serve its original purpose of functioning as a clock. Our inability to eliminate disturbances does not by itself imply a need to alter the classical concept of observation. The reason why we cannot talk about the behavior of the objects independently of their interaction with the apparatus is that these disturbances are uncontrollable.
 
  • Like
Likes Schwann
  • #179
Auto-Didact said:
This often repeated meme that humans have a cognitive bias against QM due to natural selection isn't actually an answer but a copout; even worse, it is an incoherent philosophical ideology parading as science. The very existence of Bohmian mechanics even reduces this meme into absurdity.
Bohmian mechanics only "exists" for the non-relativistic theory. For me that's the reason why I don't think it's a solution to any of the (pseudo-)problems discussed in philosophical circles about the "meaning" of QT.
 
  • Like
Likes weirdoguy
  • #180
atyy said:
That is precisely why QM is weird - it doesn't allow one to describe how nature behaves as she does without caring about the observer. QM does not describe nature. As Bohr said, "There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature."
There is no classical world, only an abstract classical description ;-)).

The classical description, however, is inaccurate. As far as we know today, the classical notion of state (a point in phase space, and thus determinism for all possible observables) is not in accordance with observations, whereas the quantum notion of state is; according to the latter, only those observables are determined for which the state has been prepared to be so.
 
  • #181
@vanhees71 why don't you explain how you understand complementarity, i.e. the fact that only the quantity being measured has a value. Is this just a "brute fact" that needs to be accepted, or do you consider it a consequence of some other principle?
I don't mean how this shows up in the mathematical formalism.
 
Last edited:
  • Like
Likes julcab12
  • #182
PrashantGokaraju said:
The reason we don't have to talk about measurement in classical physics is the fact that we can always control and account for the influence of the measuring bodies on the objects under investigation. For example we can make the effect of the measuring bodies as small as we want, or if it is finite, we can control and take that finite effect into account in our description. This means that we can talk about the state of a system, for example the position of a particle, as something that exists independently of observation. This is not possible in quantum physics because the effect of the measuring bodies is uncontrollable. If a body is to serve as a clock, then there will be an uncontrollable exchange of energy with the clock, which cannot be separately taken into account in order to specify the state of the objects. Any attempt to do so would interfere with the capability of the body to serve its original purpose of functioning as a clock. Our inability to eliminate disturbances does not by itself imply a need to alter the classical concept of observation. The reason why we cannot talk about the behavior of the objects independently of their interaction with the apparatus is that these disturbances are uncontrollable.
That still doesn't mean such an interaction-free description as a mathematical model is impossible; the existence of Bohmian mechanics as such a mathematical model proves the contrary. The uncontrollability argument is, frankly speaking, just an irrelevant artefact of orthodox QM, which arises from not taking the mathematical structure underlying the wave function seriously.

Because of that subjectively chosen approach to the mathematics of QM, when teaching the subject there tends to be far too strong a focus on other, less important (or even irrelevant) mathematical features underlying the theory, such as operator algebras, while the complex analytic structure of orthodox QM is not taken to heart. If instead the complex structure is taken seriously, a completely novel form of mechanics naturally arises, which directly derives the Born rule as a necessary and sufficient consequence of a new conservation law, instead of something to be assumed a priori or based a posteriori on experiment as is done in textbook QM.

Moreover, this new form of mechanics immediately leads to the establishment of a new complex analytic mathematical theory, which is of course immediately gained within mathematics itself through a lemma simply by deriving this new mechanics from first principles. This new mathematical theory directly replaces vector calculus in physics, introducing many new mathematical identities and structures unknown and unused in standard vector calculus and its covariant extensions.
vanhees71 said:
Bohmian mechanics only "exists" for the non-relativistic theory. For me that's the reason, why I don't think it's a solution to any of the (pseudo-)problems discussed in philosophical circles about the "meaning" of QT.
That argument isn't very strong, seeing that
  1. although all the equations were already there, BM wasn't formally discovered as a physical theory until the 1950s, largely because of the philosophical propaganda from Bohr, Heisenberg et al. about 'what physics is', while QM flourished for decades and still does to this very day,
  2. von Neumann's faulty no-hidden-variables proof was accepted almost universally among physicists and mathematicians until John Bell definitively demonstrated that von Neumann was wrong, with the very existence of BM as a mathematical model serving as a direct counterexample,
  3. Bohm's political affiliation with communism led the FBI to demonize him publicly and Princeton to abandon him, forcing him to flee the US for Brazil, where his work could not spread and be adopted as freely as in the US, i.e. the very definition of scientific censorship at work.
Given the heavy explicit bias levelled against Bohmian mechanics as a legitimate academic endeavour, due to these historico-sociological factors in the development and communication of the theory in scientific circles, it is something of a wonder that BM has become more widely known at all. And before you or anyone else objects that analysing historico-sociological factors behind the success of a theory is not really a scientific argument: this is in fact a thoroughly scientific methodology, better known as factor analysis of the path-dependent evolution of complex networks, and it is widely used in almost all the sciences; ironically, in my experience only physicists tend to be unfamiliar with such methods.

The existence of BM as a mathematical model which literally solves all of the foundational problems that plague orthodox QM is not something to be brushed off lightly, especially given that these foundational problems of QM are not solved by QFT, which is itself an effective mathematical model rife with its own set of consistency problems.

It should be clear that any perturbative numerical scheme is de facto inadequate as a fundamental theory of physics, regardless of how precisely its numerical predictions match observations. A prototypical example from the history of science is Ptolemaic epicycle theory (PET), which despite saying nothing about ontology - i.e. about what planets are - is exquisitely precise numerically and mathematically very sophisticated, with its perturbative predictions matching astronomical observations without known error up to arbitrary degrees of precision.

In other words, from a minimal statistical point of view of celestial mechanics there are no problems with PET whatsoever. Conceptually, of course, i.e. as a fundamental theory of celestial mechanics, PET is clearly pure nonsense: nothing but a mathematically elaborate perturbative calculational procedure for generating reliable statistical predictions, i.e. an epistemic effective theory. Yet it still took extreme effort to convince the astronomers and mathematicians of the day otherwise, because they had grown up with PET and cherished it for being so practical for their work.

This is exactly why QFT is at best an epistemic effective theory, based on QM - relativistic or not - which is itself plagued by foundational problems, i.e. just waiting to be superseded by a new fundamental theory. QFT, despite all of its empirical success, provides no justification whatsoever for obstructing progress in the foundations of physics towards new theories that could actually supersede QM; once QFT starts to be used that way, there is a strong case that it has begun to outlive its usefulness, as eventually happens to all epistemic effective theories.
 
Last edited:
  • #183
Yes, I think the basic postulates of QT, as for any theory, have to be accepted as they are. They grow out of observational and empirical investigations as well as mathematical analysis and model building, summarizing and compressing an amazingly large collection of such empirical knowledge into a page of clear mathematical statements.

If I understand Bohr right (and I doubt that it is possible to really be sure what Bohr really wanted to say), then complementarity simply means that the accurate measurement of one quantity excludes (generally) the accurate measurement of another quantity that is not compatible with the measured one.

As usual, the Stern-Gerlach experiment is a good example. It measures a component of the magnetic moment of a neutral atom (or some other uncharged "particle"), or equivalently the corresponding spin component. If you decide to measure the ##z## component of the spin accurately, you direct the magnetic field of the SG apparatus in this direction. The measurement then leads to a (very close to 100%) entanglement between position and the value of the ##z##-component of spin, so just looking at where a given atom ends up on a screen tells you with certainty the value of ##\sigma_z##. The spin component in any other direction, however, is more or less undetermined. At the extreme, in any direction perpendicular to the ##z##-direction the spin is maximally undetermined: e.g., the component ##\sigma_x## takes either of its two possible values with 50% probability when ##\sigma_x## is measured on a particle prepared in a state with a determined value of ##\sigma_z## (which is very easy to achieve by using the described spin-##z## SGA to filter out the unwanted particles and just keep the ones with, e.g., ##\sigma_z=\hbar/2##).
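A minimal numerical sketch of the 50% figure above (assuming only the standard Pauli matrices and a particle prepared in the ##\sigma_z = +\hbar/2## eigenstate; the script is purely illustrative):

```python
import numpy as np

# Pauli matrices (spin components in units of hbar/2)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

# Preparation: the +1 eigenvector of sigma_z, i.e. sigma_z = +hbar/2
vals_z, vecs_z = np.linalg.eigh(sigma_z)
psi = vecs_z[:, np.argmax(vals_z)]

# Born rule probabilities for a subsequent sigma_x measurement
vals_x, vecs_x = np.linalg.eigh(sigma_x)
for val, vec in zip(vals_x, vecs_x.T):
    prob = abs(np.vdot(vec, psi)) ** 2
    print(f"P(sigma_x = {val:+.0f} * hbar/2) = {prob:.2f}")

# Both outcomes print 0.50: sigma_x is maximally undetermined for a particle
# prepared with a definite value of sigma_z.
```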

It's simply impossible to measure and determine two incompatible spin components, because for the SGE to work (as both a measurement and a preparation device for pure spin states) you have to construct your magnet appropriately, i.e., pointing in the direction of the spin component you want to measure or prepare, and you simply have to choose which direction that is. You cannot have a magnetic field pointing accurately in two perpendicular directions at once, and thus you cannot measure two perpendicular spin components accurately with such a device; quantum theory tells you that you cannot expect to find any other clever device that could circumvent this "complementarity" of spin components in different directions.

Famously, Bohr and Einstein argued over this claim (though not so much about spin as about position and momentum; here you can use the original discussion of the Heisenberg microscope, but be sure to take into account Bohr's important correction of Heisenberg's original paper). Einstein invented more and more gedanken experiments to realize an accurate measurement of both position and momentum, and Bohr disproved Einstein's ideas one by one.
 
  • Like
Likes DarMM
  • #184
"..Erwin Schrödinger proposed that the stuff that makes up electrons might be smeared out in space and that this smeared electron essence might be what waves. If this idea was correct then we would expect to find all of the electron’s properties, spread out over a distance, but we never do. Every time we locate an electron, we find all of its mass and all of its charge concentrated in one tiny, point-like region.

Max Born came up with a different idea. He suggested that the wave is actually a probability wave. Einstein tinkered with a similar idea when he hypothesized that these waves were optical observations that refer to time averages rather than instantaneous values.
Inserting a probability wave (also called a state vector, or a wave function) as a fundamental aspect of Nature delivers another blow to our common-sense ideas about how things truly operate. It suggests that experiments with identical starting conditions do not necessarily lead to identical results because it claims that you can never predict exactly where an electron will be in a single instant. You can only define a probability that we will find it over here, or over there, at any given moment. Two situations with the same probabilistic starting conditions, say of a single particle, might not produce the same results, because the particle can be anywhere within that probability distribution. From a classical perspective, the discovery that the microscopic universe behaves this way is absolutely baffling. Nevertheless, it is how we have observed Nature to be."
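A minimal sketch of the probability-wave idea quoted above, assuming a concrete (and purely illustrative) one-dimensional Gaussian wavefunction: runs with identical preparation yield different detected positions, while the long-run frequencies follow ##|\psi|^2##.

```python
import numpy as np

# A hypothetical 1D wavefunction, chosen only for illustration: a Gaussian wavepacket.
x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)                 # unnormalized amplitude
prob = np.abs(psi)**2
prob /= prob.sum() * dx                 # normalize |psi|^2 to a probability density

rng = np.random.default_rng(0)

# "Identical preparations": each run samples one detected position from |psi|^2.
detections = rng.choice(x, size=5, p=prob * dx)
print("five runs with identical preparation:", np.round(detections, 2))

# Over many runs the empirical distribution approaches |psi|^2,
# even though no individual outcome is predictable.
many = rng.choice(x, size=100_000, p=prob * dx)
print("sample mean ~", round(many.mean(), 3), " sample std ~", round(many.std(), 3))
```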

Maybe my classical training (Euclidean intuition) is fundamentally wrong, or perhaps, as a bit of self-preservation, I'll say that 'time' is indeed weird in the quantum realm, which might account for the weirdness, much like Einstein's early sentiment.
 
  • #185
Auto-Didact said:
That still doesn't mean such an interaction-free description as a mathematical model is impossible; the existence of Bohmian mechanics as such a mathematical model proves the contrary. The uncontrollability argument is, frankly speaking, just an irrelevant artefact of orthodox QM, which arises from not taking the mathematical structure underlying the wave function seriously.

What mathematical structure is that? On the contrary, it seems like Bohmian mechanics wants to ignore many features of the mathematical structure of quantum mechanics, like transformation from one basis to another, etc.
 
Last edited:
  • Like
Likes dextercioby
  • #186
microsansfil said:
Moreover, the concept of existence is a concept from the field of metaphysics, but not from the field of physics.

I'm sure that there are formulations of science that neither prove nor assume existence of real things, but most of the science from Galileo to Einstein pretty much did and is summarized by this quote:

Three axioms presupposed by the scientific method are realism (the existence of objective reality), the existence of observable natural laws, and the constancy of observable natural law. Rather than depend on provability of these axioms, science depends on the fact that they have not been objectively falsified.

See: https://arxiv.org/ftp/arxiv/papers/0812/0812.4932.pdf
 
  • Like
Likes Mentz114, Schwann and Auto-Didact
  • #187
Well, nothing of this is in any way questioned by QT. On the contrary, QT is a paradigmatic example of the power of these ideas: the scientific method has brought the discovery that the classical worldview is inaccurate but has some approximate validity under certain circumstances and for a certain limited realm of phenomena.

E.g., the discovery of the atomistic structure of matter cannot in any way be made consistent with the observed stability of this very matter without the use of QT, and indeed all these very fundamental assumptions (objective reality, i.e., the independence of phenomena from individual beliefs, religions, and worldviews of scientists, and the reproducibility of observations reflecting the "existence and constancy of observable natural laws") are described by QT very well. E.g., the "reproducibility" of the properties of fundamental particles as completely indistinguishable entities is unexplainable within classical physics but very "natural" within quantum theory.
 
  • #188
vanhees71 said:
If I understand Bohr right (and I doubt that it is possible to really be sure what Bohr really wanted to say), then complementarity simply means that the accurate measurement of one quantity excludes (generally) the accurate measurement of another quantity that is not compatible with the measured one
This might be just phrasing but it's a bit more than just preventing an "accurate measurement" of another quantity. It's that the other quantity has no value at all. I think this is essentially what you are saying in the third paragraph.

Now we pass to the next level of difficulty, how does the classical device "create" (for lack of a better word) the value of the quantity you measure? QM doesn't really seem to give a "closed" derivation of this.
 
  • #189
DarMM said:
This might be just phrasing but it's a bit more than just preventing an "accurate measurement" of another quantity. It's that the other quantity has no value at all. I think this is essentially what you are saying in the third paragraph.

Exactly. The idea that you cannot measure both p and x at the same time was what Heisenberg originally said. As vanhees71 said, Bohr added a correction to Heisenberg's paper, which Heisenberg mentions at the end of his paper, where Bohr says that the uncertainty principle cannot be attributed to finite disturbances alone. The complementarity of concepts used to describe evidence means that p and x cannot both simultaneously have exact values.
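To make the last statement concrete, here is a minimal numerical sketch (assuming natural units ##\hbar = 1## and a Gaussian wavepacket chosen only for illustration): for such a state the product of the position and momentum spreads comes out at ##\hbar/2##, the minimum the uncertainty relation allows, so x and p can never both be sharp.

```python
import numpy as np

hbar = 1.0                                   # natural units, assumed only for this sketch
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]
sigma = 0.7                                  # arbitrary width, hypothetical choice
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# Position spread from |psi(x)|^2
px = np.abs(psi) ** 2
px /= px.sum() * dx
mean_x = np.sum(x * px) * dx
delta_x = np.sqrt(np.sum((x - mean_x) ** 2 * px) * dx)

# Momentum spread from the Fourier transform of psi
p = 2 * np.pi * np.fft.fftfreq(len(x), d=dx) * hbar
dp = p[1] - p[0]
pp = np.abs(np.fft.fft(psi)) ** 2
pp /= pp.sum() * dp
mean_p = np.sum(p * pp) * dp
delta_p = np.sqrt(np.sum((p - mean_p) ** 2 * pp) * dp)

print(f"Delta x * Delta p = {delta_x * delta_p:.4f}   (hbar/2 = {hbar / 2})")
# For a Gaussian wavepacket the product equals hbar/2: the bound cannot be beaten,
# so x and p never simultaneously have exact values.
```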
 
  • #190
PrashantGokaraju said:
What mathematical structure is that? On the contrary, it seems like Bohmian mechanics wants to ignore many features of the mathematical structure of quantum mechanics, like transformation from one basis to another, etc.
The mathematical structure underlying the wavefunction is not Hilbert space, but a multiply connected configuration space, which can be compactified into a Riemann surface. In other words, the underlying mathematical structure of the wavefunction is a Kähler manifold, whose proper analysis necessitates the use of index theory, algebraic topology and Hodge theory.
vanhees71 said:
Well, nothing of this is in any way questioned by QT.
On the contrary, the minimal statistical interpretation of QM instrumentally taken as a complete and sufficient physical theory - i.e. the Copenhagen interpretation - contradicts all of it: this is what it means to have no ontology! I'm somewhat surprised that you don't seem to recognize this.

As a mathematical framework, minimally interpreted QM (MIQM) is actually not a physical theory at all - i.e. it doesn't belong to the same class as all other physical theories do - but roughly speaking, MIQM is simply a more elaborate version of control theory for some yet unknown underlying physical theory.

This is not in any way an argument for a return to classical physics, far from it; it is instead an argument for a full departure from classical physics, whereas MIQM is only a half-baked departure, clinging onto both outdated concepts and archaic mathematics belonging to classical mechanics instead of generalizing those concepts as well as the mathematics.

BM on the other hand is a distinctly new non-classical theory of mechanics, which successfully generalizes all of classical mechanics in the form of Hamilton-Jacobi theory as a specific idealization of itself; BM may possibly even be the unique mathematical framework capable of achieving this feat which also resides within the class of physical theories.

The fact that BM has not achieved a relativistic formulation is due to the explicitly non-local mathematics involved. Seeing that the structure of configuration space is Kähler, I expect that the relativistic formulation of BM necessarily requires a full sheaf-cohomological formulation of a dynamical spacetime manifold using Riemann surfaces, so that this formulation will necessarily be at least generally relativistic.
 
Last edited:
  • Like
Likes Jimster41 and DarMM
  • #191
Dr. Courtney said:
I'm sure that there are formulations of science that neither prove nor assume existence of real things, but most of the science from Galileo to Einstein pretty much did and is summarized by this quote:

Three axioms presupposed by the scientific method are realism (the existence of objective reality), the existence of observable natural laws, and the constancy of observable natural law. Rather than depend on provability of these axioms, science depends on the fact that they have not been objectively falsified.

See: https://arxiv.org/ftp/arxiv/papers/0812/0812.4932.pdf
This is nothing more than a metaphysical choice made by humans, not by scientific theories, which themselves say nothing about ontology.

I can peremptorily affirm that any peremptory statement is false.

It is never Known but Is the Knower.

/Patrick
 
  • #192
Is there a definite, single "the" minimal instrumental view of quantum theory?

You say "the minimal instrumental view of quantum theory" as if saying that makes everyone know exactly what postulates and philosophy are used to define that term. But when I search for it, I get this thread and a bunch of unrelated results.

Is there a broad consensus in the physics community as to the exact meaning of "the minimal instrumental view of quantum theory" as you are using the term?

If not, what postulates and philosophy are used to define the term as you are using it here?
 
  • #193
DarMM said:
This might be just phrasing but it's a bit more than just preventing an "accurate measurement" of another quantity. It's that the other quantity has no value at all. I think this is essentially what you are saying in the third paragraph.

Now we pass to the next level of difficulty, how does the classical device "create" (for lack of a better word) the value of the quantity you measure? QM doesn't really seem to give a "closed" derivation of this.
It's hard to formulate this accurately. One has to strictly distinguish between the state, i.e., the description of how the measured object is prepared, and the measurement itself. The state is defined by the initial preparation procedure and is mathematically given by the statistical operator ##\hat{\rho}##. This tells you all the probabilistic properties of all observables. Of course, if some observable does not take one of its possible values with 100% probability, then it is undetermined. I did not mean to deny this simple fact, of course.

Another thing is the measurement. This is much more difficult to describe, because you always must describe a concrete measurement device. If you have two incompatible observables, it's impossible to measure both accurately. You have to decide which one you want to measure accurately, but this excludes accurate knowledge of the other observable on the same individual system. To test the probabilistic predictions of QT for these two observables you thus need an ensemble of sufficient size to measure both quantities, but on each single system you can only ever measure one. So you have to make a statistical analysis on one partial ensemble measuring the one observable, and then a statistical analysis on another partial ensemble measuring the other observable.
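A minimal sketch of this, assuming a concrete spin-1/2 preparation ##\hat{\rho}## chosen only for illustration: the same state assigns Born-rule probabilities to two incompatible observables, but each individual system can be measured for only one of them, so the two predictions have to be tested on separate sub-ensembles.

```python
import numpy as np

# Hypothetical example: a spin-1/2 preparation described by a density operator rho.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

# Preparation: a partially polarized state along z (chosen only for illustration).
rho = 0.5 * (np.eye(2) + 0.6 * sigma_z)

def outcome_probabilities(rho, observable):
    """Born rule: P(a) = Tr(rho P_a) for each eigenprojector P_a of the observable."""
    vals, vecs = np.linalg.eigh(observable)
    return {float(val): float(np.real(np.trace(rho @ np.outer(v, v.conj()))))
            for val, v in zip(vals, vecs.T)}

# The same rho predicts probabilities for both incompatible observables,
# but on each individual system only one of them can actually be measured;
# testing both predictions requires two separate sub-ensembles.
print("sigma_z outcomes:", outcome_probabilities(rho, sigma_z))
print("sigma_x outcomes:", outcome_probabilities(rho, sigma_x))
```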

Now, if you consider an ideal von Neumann filter measurement, there cannot occur any discrepancies as falsely claimed in the EPR paper. In their example you can, for each individual particle pair, measure the position of one particle accurately, implying that you don't know much about its momentum; due to the entanglement of the pair with respect to position and momentum, you then know the position of the other particle precisely for this pair. The same holds for accurately measuring momentum: you then know the other particle's momentum just as accurately, but not the position of either particle.

How you measure position or momentum accurately is a question of the specific measurement devices. As I said, a good example is the original Heisenberg microscope to measure either the position or the momentum of an electron through scattering of light (i.e., at least a single photon). There's a detailed analysis in a famous paper by v. Weizsäcker. I don't know of an English translation, though.
 
  • #194
Auto-Didact said:
On the contrary, the minimal statistical interpretation of QM instrumentally taken as a complete and sufficient physical theory - i.e. the Copenhagen interpretation - contradicts all of it: this is what it means to have no ontology! I'm somewhat surprised that you don't seem to recognize this.
You don't understand my argument. First of all, the minimal interpretation is not "Copenhagen". There's no single "Copenhagen interpretation"; there are many, all of them burying these issues under a mountain of only vaguely defined philosophical gibberish. The worst of all was Heisenberg. Usually Bohr corrected him on the physics, but Bohr himself also did not write very clearly, and that's where all these ancient debates come from.

My point is that the "ontology" is given by what's observable and quantitatively measurable. Anything else is just empty speculation in the sense of science, and if you just accept that nature is indeterministic on the fundamental level, there's no "ontological problem" whatsoever. If a system is prepared in a state ##\hat{\rho}## for which the observable ##A## doesn't take one of its values with 100% probability when measured, then the value of this observable is simply undetermined. That's it. There's no problem with that, as long as it is in accordance with observations, and that's indeed the empirical status right now: there's no known empirical discrepancy with the predictions of the outcomes of measurements given by QT.
 
  • Like
Likes Schwann
  • #195
vanhees71 said:
My point is that the "ontology" is given by what's observable and quantitatively measurable.
But nothing of that is in the formulas of quantum physics seen through the statistical interpretation.

The latter is not about what's observable in any particular case, but only about what's observable in the mean in a long run of similar experiments. Thus it cannot say what a detector made of quantum particles observes in a particular case, but only what it does on the average.

Nothing in the formalism answers DarMM's question of how the individual value of the detector result is obtained. Thus individual measurement results have no ontology in the theory. How many repetitions are needed before it gets real?
 
Last edited:
  • Like
Likes Jimster41, Auto-Didact and andrew s 1905
  • #196
Well, you can just as well say that the outcomes of measurements are unknown, because the observables do not take determined values in the state the system is prepared in. Simply accepting this resolves all pseudo-problems of "interpretation".

How the individual value of the detector result is obtained depends on the specific construction of the detector and is not part of a general theoretical description of systems. Also, I don't understand what you mean by the final (rhetorical?) question. An individual measurement always leads to a well-defined outcome (in an idealized world of an ideal measurement device). This is an observational fact, due to the construction of the apparatus. Ask the experimentalist/engineer who constructed it how he managed to construct it!
 
  • #197
The Bill said:
Is there a definite, single "the" minimal instrumental view of quantum theory?

You say "the minimal instrumental view of quantum theory" as if saying that makes everyone know exactly what postulates and philosophy are used to define that term. But when I search for it, I get this thread and a bunch of unrelated results.

Is there a broad consensus in the physics community as to the exact meaning of "the minimal instrumental view of quantum theory" as you are using the term?

If not, what postulates and philosophy are used to define the term as you are using it here?
Practically speaking, the minimal interpretation is textbook QM without further commitment to any other arguments, i.e. what is used 'in practice purely empirically' by applied and experimental physicists. Here is a second order quotation of what the minimal interpretation is from the literature:
Lord Jestocost said:
Hilary Putnam in “Philosophical Papers: Volume 1, Mathematics, Matter and Method”, Second Edition, 1979, p. 147:
“To put it another way, it is a part of quantum mechanics itself as it stands today that the proper interpretation of the wave is statistical in this sense: the square amplitude of the wave is the probability that the particle will be found in the appropriate place if a measurement is made (and analogously for representations other than position representation). We might call this much the minimal statistical interpretation of quantum mechanics, and what I am saying is that the minimal statistical interpretation is a contribution of the great founders of the CI— Bohr and Heisenberg, building, in the way we have seen, on the earlier idea of Born — and a part of quantum mechanical theory itself. However, the minimal statistical interpretation is much less daring than the full CI. It leaves it completely open whether there are any observables for which the principle ND is correct, and whether or not hidden variables exist. The full CI, to put it another way, is the minimal statistical interpretation plus the statement that hidden variables do not exist and that the wave representation gives a complete description of the physical system.”
[CI means “Copenhagen Interpretation”, italic in original, principle ND: see footnote **, LJ] [bold by LJ]
In other words, the Copenhagen interpretation is a second order interpretation which says that the minimal interpretation of QM is completely sufficient as is, with no further ontology necessary - nothing behind the curtain so to speak - and that this is all that is or can be necessary for physics, with somewhat fuzzy boundaries of what else is or isn't fundamental (wavefunctions, operator algebras, POVMs, etc), which is why there is so much confusion about it.
vanhees71 said:
You don't understand my argument. First of all, the minimal interpretation is not "Copenhagen". There's no single "Copenhagen interpretation"; there are many, all of them burying these issues under a mountain of only vaguely defined philosophical gibberish. The worst of all was Heisenberg. Usually Bohr corrected him on the physics, but Bohr himself also did not write very clearly, and that's where all these ancient debates come from.
Given that I actually give a definition above - one which can be transformed into a statement in predicate logic with modal operators - I would say that the 'vaguely defined' meme is a very weak argument: you and I and everyone else alive successfully deal with and navigate vague matters on a daily basis. Experience has taught us that one only needs reliable reproducibility to have an accurate grasp of vague matters, with precision being a secondary concern.

You also mischaracterize my point: I did not say that MI = CI; instead I quote again @Lord Jestocost, who quotes Putnam:
Lord Jestocost said:
As Hilary Putnam writes in “Philosophical Papers: Volume 1, Mathematics, Matter and Method”:

“The full CI [Copenhagen Interpretation], to put it another way, is the minimal statistical interpretation plus the statement that hidden variables do not exist and that the wave representation gives a complete description of the physical system.” [italics in the original]
vanhees71 said:
My point is that the "ontology" is given by what's observable and quantitatively measurable. Anything else is just empty speculation in the sense of science, and if you just accept that nature is indeterministic on the fundamental level, there's no "ontological problem" whatsoever. If a system is prepared in a state ##\hat{\rho}## for which the observable ##A## doesn't take one of its values with 100% probability when measured, then the value of this observable is simply undetermined. That's it. There's no problem with that, as long as it is in accordance with observations, and that's indeed the empirical status right now: there's no known empirical discrepancy with the predictions of the outcomes of measurements given by QT.
Ontology is never given by what is measurable; instead the theory defines what exists and whether or not it is measurable, while what is actually measured is phenomenology. You cannot just shoehorn in a new definition of ontology based on pragmatics; this stuff is not just up for grabs. If you want to seriously argue your point, you have to prove using (first-order) logic that your definition of ontology is self-consistent and that it accurately represents knowledge with respect to mathematical models and empirical data. The problem is that such attempts at redefinition have usually already been tried before and either disproven or shown to be a practical disaster; you just need to know how to search the literature in order to find all these failed attempts.

As it stands, your personal hermeneutics of what ontology is or should be is based on a severe miscomprehension, in fact on a premature reification of approximate models of phenomena taken from a pragmatic idealization; your misunderstanding of ontology is similar to confusing the labels of the elements of a set for the set itself.
 
  • #198
vanhees71 said:
An individual measurement always leads to a well-defined outcome (in an idealized world of an ideal measurement device).
But not if the measurement device is treated as a single quantum mechanical system.
 
  • #199
As a macroscopic object, it's not even possible to do so.
 
  • #200
Dr. Courtney said:
Three axioms presupposed by the scientific method are realism (the existence of objective reality), the existence of observable natural laws, and the constancy of observable natural law. Rather than depend on provability of these axioms, science depends on the fact that they have not been objectively falsified.

See: https://arxiv.org/ftp/arxiv/papers/0812/0812.4932.pdf

microsansfil said:
This is nothing more than a metaphysical choice made by humans, not by scientific theories, which themselves say nothing about ontology.

While it is important to distinguish the unproven axioms of a thought system from the results of that system, it would be inaccurate to classify those axioms as "nothing more than a metaphysical choice." Or would you also argue that the axioms of Euclidean geometry are "nothing more than a metaphysical choice"?

The axioms of Euclidean geometry are not merely a metaphysical choice, they are extremely useful for proving a wide array of things in mathematics that allowed us humans to learn lots of stuff with very close correspondence to measurements and observations.

Likewise, the axioms of natural science are more than a metaphysical choice, they are extremely useful for discovering a wide array of scientific findings and theories making predictions with close correspondence to measurements and observations.

Some metaphysical choices don't turn out to be very useful. The axioms of Euclidean geometry and natural science have turned out to be some of the most useful metaphysical choices humans have made.
 
  • Like
Likes Mentz114
  • #201
vanhees71 said:
As a macroscopic object, it's not even possible to do so.
Surely, any single macroscopic object is a single quantum mechanical system. Thus it must be possible in principle to treat it as such, even though in practice one can do it only approximately. But approximations are ubiquitous anyway; even a single photon state can be ascertained only approximately.
 
  • Like
Likes vanhees71 and Auto-Didact
  • #202
Dr. Courtney said:
Or would you also argue that the axioms of Euclidean geometry are "nothing more than a metaphysical choice"?
No, the axioms of Euclidean geometry are just syntax! They are neither true nor false. Theorems can be built and demonstrations carried out within the framework of mathematical proof theory. The link with semantics (model theory) is through the completeness theorem.

/Patrick
 
  • Like
Likes Auto-Didact and vanhees71
  • #203
A. Neumaier said:
Surely, any single macroscopic object is a single quantum mechanical system. Thus it must be possible in principle to treat it as such, even though in practice one can do it only approximately. But approximations are ubiquitous anyway; even a single photon state can be ascertained only approximately.
Sure, and that approximation leads to the classical behavior of the relevant observables of the macroscopic system.
 
  • #204
microsansfil said:
No, the axioms of Euclidean geometry are just syntax! They are neither true nor false. Theorems can be built and demonstrations carried out within the framework of mathematical proof theory. The link with semantics (model theory) is through the completeness theorem.

/Patrick
Indeed, as with any axiom system of mathematics, Euclidean geometry is an invention of human thought. Its application to the description of real-world phenomena is already physics, and it turns out to be a pretty good description of space for an inertial observer, in the approximation that gravitational interactions can be neglected or treated non-relativistically.
 
  • #205
Mentz114 said:
Earlier you deny that perceptions are relevant to physics
I never denied that. I only denied that only perceptions are relevant to physics.
 
  • Like
Likes Mentz114
  • #206
microsansfil said:
Moreover, the concept of existence is a concept from the field of metaphysics, but not from the field of physics.
I agree with the former but not with the latter. I don't think that there is a sharp borderline between physics and metaphysics.
 
  • Like
Likes Schwann, artis and Auto-Didact
  • #207
microsansfil said:
No, the axioms of Euclidean geometry are just syntax! They are neither true nor false. Theorems can be built and demonstrations carried out within the framework of mathematical proof theory. The link with semantics (model theory) is through the completeness theorem.

/Patrick
Indeed, as Poincaré said, 'fundamental principles are only conventions - adopted for reasons of convenience - and it is as unreasonable to ask whether they are true or false as it is to ask whether the metric system is true or false.'
 
  • #208
DarMM said:
As I said above there's nothing preventing you from treating the device quantum mechanically, thus it's not a problem with obtaining emergent classical behavior. It's a separate problem. It's that when you do model the device with QM you invoke a second device that is treated classically. You could treat this device with qm, but you invoke a third device and so on. This is sometimes known as the Von Neumann chain.

The presence of something not modeled with QM that selects a particular Boolean frame is always assumed in typical Copenhagen presentations of the theory.

This is not the case in classical theories, where the theory is not written with reference to a system lying outside the theory.
Thinking about this I think it needs to be supplemented by some realistic details about the Von Neumann chain.

For a device consisting of ##N## particles it can be shown that the second device required to measure it must contain a number of particles ##N'## given by:
$$N^{'} > K\exp\left(\frac{CN^{r}}{qs}\right)$$
with ##s## a constant related to how the resolution of the second device scales with its size, ##q## a constant related to how fluctuations in the results of the second device scale with its size, ##r## a constant depending on the type of material the first device is made from, and ##K## and ##C## constants depending on the exact chemical make-up of the first device.

##q## and ##s## turn out to essentially be universal with values of ##\frac{1}{2}## and ##\frac{1}{3}## respectively. ##r = \frac{2}{3}## for fermionic matter. ##C## and ##K## are of ##\mathcal{O}\left(1\right)## for the metals most devices are made of.

Considering that a real-life measuring device has around ##N \approx 10^{27}## particles, this gives:
$$N^{'} > 10^{10^{18}}$$
far larger than the number of particles in the observable universe (##\approx 10^{80}##). Thus the second device would either be so massive as to collapse into a black hole even for our smallest devices, or need to be so large (bigger than superclusters) that special relativity would prevent it from working within the timescales it needs to analyse the first device.
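The order of magnitude quoted above can be reproduced in a few lines, taking ##K = C = 1## (they are stated to be ##\mathcal{O}(1)##) and the quoted values of ##q##, ##s## and ##r##; this is only a sketch of the arithmetic, not a derivation of the bound.

```python
import math

# Constants as quoted above; K and C are taken to be exactly 1 for an
# order-of-magnitude estimate (they are only stated to be O(1)).
K, C = 1.0, 1.0
q, s, r = 0.5, 1.0 / 3.0, 2.0 / 3.0

N = 1e27  # particles in a typical real measuring device

# N' > K * exp(C * N**r / (q*s)); the exponent is far too large for floats,
# so work with log10(N') instead.
exponent = C * N**r / (q * s)               # ~ 6e18
log10_N_prime = math.log10(K) + exponent / math.log(10)

print(f"log10(N') > {log10_N_prime:.2e}")   # ~ 2.6e18, i.e. N' > 10^(10^18)
print("particles in the observable universe ~ 10^80, so no such second device can exist")
```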

Thus in reality the chain only has one step. The above calculation also means that interference terms in our devices have no operational meaning, since the observables ##\mathcal{A}## that would demonstrate them require devices that cannot exist. This links into how, in algebraic quantum field theory, not every abstract operator is actually part of the local observable C*-algebra, due to stress-energy constraints etc. Such interference observables ##\mathcal{A}## just don't exist physically.

It also probably means Wigner's friend isn't very meaningful for the actual classical world. (And this is ignoring how it requires macroscopic systems to basically be "frozen" to justify the pure state)
 
Last edited:
  • Like
Likes Mentz114, A. Neumaier, Auto-Didact and 1 other person
  • #210
A. Neumaier said:
@DarMM: Where is this formula proved?
I'm trying to order the original literature that gives this calculation to see where each of the constants is first calculated etc. However an essentially complete guide is given in:
Omnes, R., 1999. The Interpretation of Quantum Mechanics. 1st ed. Princeton: Princeton University Press.

Specifically Chapters 5-7. The formula itself is (7.32) in that book. Note that his derivation has gaps, e.g. he does sketch the calculation of ##C##, but ##K## is simply quoted to be of the right order.

EDIT: However, all other details are basically given, e.g. ##q## and ##s## are derived.
 
  • Like
  • Informative
Likes dextercioby, A. Neumaier and Auto-Didact
