Jürg Fröhlich on the deeper meaning of Quantum Mechanics

In summary, the paper by Jürg Fröhlich discusses the problems and shortcomings of the standard formulation of quantum mechanics. He had felt the subject had better remain a hobby until late in his career, but when he was approaching mandatory retirement he felt an urge to clarify his understanding of some of the subjects he had had to teach his students for thirty years. The paper presents a completion of QM, the "ETH-Approach to QM," which is too abstract to become popular. Interesting paper.
  • #71
Good luck. I don't think that you can ever achieve this for any physical theory which describes nature. It's simply too complicated.

I never understood what Bohr precisely wanted to say, because of his overly philosophical, enigmatic writing style, but one thing he is surely right about is that QT, as a description of what's observed in nature, is ultimately about observations made with macroscopic measurement devices, whose workings are well enough understood within classical physics. The validity of classical physics for macroscopic systems, as well as of quantum theory (in fact of any physical theory), is established by comparison with experiment and observation. I think the paper by Englert is brilliant, clearing away all the superfluous philosophical ballast of "solving" philosophical pseudo-problems that have nothing to do with physics and will most probably never have any merit in leading to new, better theories.
 
  • Like
Likes Lord Jestocost
  • #72
vanhees71 said:
QT, as a description of what's observed in nature, is ultimately about observations made with macroscopic measurement devices, whose workings are well enough understood within classical physics.
Thus you now endorse the Heisenberg cut (between a quantum system and classically treated detector results), for which you never before saw any use...
 
  • Like
Likes Auto-Didact
  • #73
vanhees71 said:
Good luck. I don't think that you can ever achieve this for any physical theory which describes nature. It's simply too complicated.
...
It seems that a general theory is impossible. It can be done for some simple cases, but each case has to be handled individually. The big problem is the probability amplitude, which cannot be physical as it stands, for dimensional reasons. How can there ever be a mapping from amplitude space to (say) pressure or any (thermo)dynamical variable?
 
  • #74
@vanhees71

To me it’s simply astonishing that Renninger-type measurements seem to be in every way equivalent to measurements in which something seems “actually” to happen. An English translation by W. De Baere of Renninger’s paper “Zum Wellen–Korpuskel–Dualismus” (Zeitschrift für Physik 136, 251-261 (1953)) can be found here: https://arxiv.org/abs/physics/0504043
 
  • #75
Mentz114 said:
It seems that a general theory is impossible. It can be done for some simple cases, but each case has to be handled individually. The big problem is the probability amplitude, which cannot be physical as it stands, for dimensional reasons. How can there ever be a mapping from amplitude space to (say) pressure or any (thermo)dynamical variable?
I don't understand any of this. Why should there be a problem with a "probability amplitude" for dimensional reasons? Of course the dimension of the probability, and thus also of the "probability amplitude", depends on the (continuous) quantity for which it is given. A distribution transforms as a distribution, which is why mathematically it's called a distribution. E.g., in the position representation the "probability amplitude" (usually simply called the wave function), ##\psi(\vec{x})##, of a single particle has dimension ##1/\text{length}^{3/2}##. No problem whatsoever.
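To spell out the dimensional statement (a standard normalization computation): the total probability must be dimensionless, which fixes the dimension of ##\psi##,
$$\int |\psi(\vec{x})|^2 \,\mathrm{d}^3 x = 1 \;\Longrightarrow\; \bigl[|\psi|^2\bigr] = \frac{1}{\text{length}^3} \;\Longrightarrow\; [\psi] = \frac{1}{\text{length}^{3/2}}.$$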

I've no clue what you mean concerning thermodynamics. Quantum statistics is well defined, and all the thermodynamical quantities you mention come out as ordinary thermodynamical quantities. What should they have to do with "amplitude space" (whatever that means)?
 
  • Informative
Likes Mentz114
  • #76
Lord Jestocost said:
@vanhees71

To me it’s simply astonishing that Renninger-type measurements seem to be in every way equivalent to measurements in which something seems “actually” to happen. An English translation by W. De Baere of Renninger’s paper “Zum Wellen–Korpuskel–Dualismus” (Zeitschrift für Physik 136, 251-261 (1953)) can be found here: https://arxiv.org/abs/physics/0504043
I've looked at the German original because I thought there must be errors in the translation. To my astonishment that's not the case. I'm puzzled that such a paper could ever appear in a serious physics journal like Zeitschrift für Physik. Nothing he says about "photons" makes any sense, nor have such things as a path of a single photon or a guiding wave ever been observed. Despite his claim, no convincing pilot-wave theory à la de Broglie and Bohm has been formulated for photons, nor for relativistic particles in general.
 
  • #77
vanhees71 said:
... of a single particle has dimension ##1/\text{length}^{3/2}##...
How many furlongs is that?
 
  • Like
Likes Auto-Didact
  • #78
Mentz114 said:
How many furlongs is that?
Eight furlongs per keel
 
  • Like
Likes Auto-Didact and Mentz114
  • #79
DarMM said:
so quantum observables are as much a property of the device as of the quantum system itself.
Yep. It seems to me that:

The measurement is a projection onto the basis defined by the measuring instrument. We measure a trace left by the system on the measuring device, which makes sense to our consciousness as human observers (through the mediation of our brains).

/Patrick
 
  • #80
PeterDonis said:
The issue with the minimal interpretation is that there is no rule that tells you when a measurement occurs. In practice the rule is that you treat measurements as having occurred whenever you have to in order to match the data. So in your example, since nobody actually observes observers to be in superpositions of pointer states, and observers always observe definite results, in practice we always treat measurements as having occurred by the time an observer observes a result.
stevendaryl said:
So, the minimal interpretation ultimately gives a preference to macroscopic quantities over other variables, but this preference is obfuscated by the use of the word "measurement". The inconsistency is that if you treat the macroscopic system as a huge quantum mechanical system, then no measurement will have taken place at all. The macroscopic system (plus the environment, and maybe the rest of the universe) will not evolve into a definite pointer state.
Is all of this not just a problem related to QM being a probabilistic theory?

For example, if I model a classical system such as a gas using statistical methods like Liouville evolution, I start with an initial state of the system ##\rho_0## and it evolves into a later one ##\rho_t##. Nothing in the formalism will tell me when a measurement occurs to allow me to reduce ##\rho## to a tighter state with smaller support (i.e. Bayesian updating). Just as nothing in a probabilistic model of a die will tell me when to reduce the uniform distribution over outcomes down to a single outcome. Nothing says what a measurement is.
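To make the die example concrete (a standard conditioning computation): probability theory supplies the rule for how to update, but nothing in it says when the update is to be performed,
$$P(X = x) = \tfrac{1}{6}\;\;(x=1,\dots,6) \quad\longrightarrow\quad P\bigl(X = x \mid X \in \{2,4,6\}\bigr) = \tfrac{1}{3}\;\;(x=2,4,6),$$
where the arrow marks the moment the agent "learns" that the outcome is even - a step taken outside the formalism.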

Similarly one could zoom out to a larger agent, who uses a distribution not only over the gas from the first example but also over the state space of the device used to measure it (staying within classical mechanics for now). His ##P## distribution will evolve under Liouville's equation to involve multiple detection states for the device, in contrast to my case where the device lies outside the probability model and is used to learn of an outcome.

Any probability model contains the notion of an "agent" who "measures/learns" the value of something. These ideas are primitives unexplained in probability theory (i.e. what "causes" Bayesian updating). Any "zoomed out" agent placing my devices within their probability model will not consider them to have an outcome when I do until they themselves "look".

So to me all of this is replicated in Classical probability models. It's not a privileged notion of macroscopic systems, but unexplained primitive notions of "agent" and "learning/updating" common to all probability models. Introduce an epistemic limit in the Classical case and it becomes even more similar to QM, with non-commutativity, no cloning of pure states, superdense coding, entanglement monogamy, Wigner's friend being mathematically identical to the quantum case, etc.

The major difference between QM and a classical probability model is the fact that any mixed state has a purification on a larger system, i.e. less than maximal knowledge of a system ##A## can always be seen as being induced by maximal knowledge on a larger system ##B## containing ##A## (D'Ariano, Chiribella, Perinotti axioms). This is essentially what is occurring in Wigner's friend. Wigner has a mixed state for his friend's experimental device because he has maximal possible knowledge (a pure state) for the Lab as a whole. The friend does not track the lab as a whole and thus he can have maximal knowledge (a pure state) for the device.
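For concreteness, here is the standard purification construction behind this statement: any mixed state ##\rho_A = \sum_i p_i |i\rangle\langle i|_A## arises as the partial trace of a pure state on a larger system,
$$|\Psi\rangle_{AB} \;=\; \sum_i \sqrt{p_i}\,|i\rangle_A |i\rangle_B, \qquad \rho_A \;=\; \operatorname{Tr}_B\, |\Psi\rangle\langle\Psi|_{AB} \;=\; \sum_i p_i\, |i\rangle\langle i|_A .$$
In the Wigner's friend setting, ##B## plays the role of the rest of the lab: Wigner's pure state for the whole lab induces the mixed state he assigns to the device.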

So as long as QM is based on probability theory viewed in the usual way, you will always have these odd notions of "when does a measurement occur/when do I update my probabilities" and "I consider event ##A## to have occurred, but somebody else might not". You could say this is a discomfort from having probability as a fundamental notion in your theory.

If one wishes, a way out of this would be @A. Neumaier's view, where he reads the formalism differently and not in the conventional statistical manner.
 
Last edited:
  • Like
Likes dextercioby, akvadrako, vanhees71 and 2 others
  • #81
DarMM said:
So to me all of this is replicated in Classical probability models. It's not a privileged notion of macroscopic systems, but unexplained primitive notions of "agent" and "learning/updating" common to all probability models.
Only in classical subjective (Bayesian) probability. Frequentist interpretations have neither such notions nor the associated problems.
 
  • Like
Likes Auto-Didact
  • #82
A. Neumaier said:
Only in classical subjective (Bayesian) probability. Frequentist interpretations have neither such notions nor the associated problems.
Do you mean Subjective Bayesianism (e.g. de Finetti) or are you using "Subjective" to denote Bayesianism in general?
 
Last edited:
  • Like
Likes Auto-Didact
  • #83
DarMM said:
Do you mean Subjective Bayesianism (e.g. de Finetti) or are you using "Subjective" to denote Bayesianism in general?
I call a probability theory subjective if the probabilities depend on an ill-defined agent (rather than on objective contexts only). Bayesian probability seems to me a synonym for that.
 
  • #84
A. Neumaier said:
Only in classical subjective (Bayesian) probability. Frequentist interpretations have neither such notions nor the associated problems.
They still have Bayesian updating and relativism about when that updating occurs. For example, in the case of the classical gas, in the actual model used by the observers and superobservers, the distributions behave the same way regardless of which view of probability theory one holds.

My post uses Bayesian language, but even in the frequentist case the superobserver will continue to use a mixture over outcomes where the observer will not, up until he views the system. That's just a feature of probability theory.

You'll still have the notion of what you don't include in the probability side of your model and updating/conditioning. I don't see what is different in the sense relevant here.

Basically you can still replicate Wigner's friend even under a frequentist view.
 
  • #85
A. Neumaier said:
I call a probability theory subjective if the probabilities depend on an ill-defined agent (rather than on objective contexts only). Bayesian probability seems to me a synonym for that.
That's not the conventional usage though, right? There is Objective Bayesianism. "Subjective" is usually reserved for views like those of de Finetti and Savage.
 
Last edited:
  • #86
DarMM said:
They still have Bayesian updating and relativism about when that updating occurs.
Not really. Frequentists just have conditional probability, i.e., probabilities relative to a subensemble of the original ensemble. Nobody is choosing or updating anything; it never occurs.
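Spelled out, frequentist conditioning is mere subensemble selection: with ##N(B)## the number of trials in an ensemble of ##N## in which ##B## occurs,
$$P(A\mid B) \;=\; \frac{P(A\cap B)}{P(B)} \;=\; \lim_{N\to\infty}\frac{N(A\cap B)}{N(B)},$$
a relative frequency within the subensemble picked out by ##B##; no act of updating appears anywhere in this.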
DarMM said:
even in the frequentist case the superobserver
Neither are there observers or superobservers. I have never seen the notion of superobservers in classical probability of any kind.

All subjective updating happens outside probability theory when some subject wants to estimate the true probabilities about which the theory is.
DarMM said:
Basically you can still replicate Wigner's friend even under a frequentist view.
No, because both Wigner and his friend only entertain subjective approximations of the objective situation. Subjectively everything is allowed. Even logical faults are subjectively permissible (and happen in real subjects quite frequently).
 
  • #87
DarMM said:
That's not the conventional usage though, right? There is Objective Bayesianism. "Subjective" is usually reserved for views like those of de Finetti and Savage.
What is Objective Bayesianism? Where is an authoritative classification? I am not an expert in classifying interpretations...
 
  • #88
A. Neumaier said:
Not really. Frequentists just have conditional probability, i.e., probabilities relative to a subensemble of the original ensemble. Nobody is choosing or updating anything; it never occurs.
Bayes' relationship leads to a symmetry noted by Laplace,
$$P(A\mid B)\,P(B) \;=\; P(B\mid A)\,P(A) \;=\; P(A\cap B).$$

Rewriting it as
$$P(H\mid D) \;=\; \frac{P(D\mid H)\,P(H)}{P(D)}$$
brings back a conditional probability, which makes it possible to calculate the probability of causes from events. Laplace called this the inverse probability (the plausibility of the hypothesis ##H## given the data ##D##). It has many applications in the theory of knowledge.

Frequentist statisticians refuse to reason about the plausibility of hypotheses. They work with hypothesis rejection instead: test a hypothesis by calculating the likelihood ##P(D\mid H)## of its results. Frequentists do not adhere to the concept of inverse probability because of the prior ##P(H)##, which is subjective.

Subjectivity also exists with the frequentist method. It is just hidden under the carpet.

/Patrick
 

  • #89
microsansfil said:
which makes it possible to calculate the probability of causes from events.
Nothing here is about causes. Bayes' formula just relates two different conditional probabilities.
 
  • Like
Likes vanhees71
  • #90
A. Neumaier said:
What is Objective Bayesianism? Where is an authoritative classification? I am not an expert in classifying interpretations...
Many books on the philosophy of Probability theory delve into the details, but the Wikipedia link here has the most pertinent details:
https://en.wikipedia.org/wiki/Bayesian_probability#Objective_and_subjective_Bayesian_probabilities
It's mostly about prior probabilities. Objective Bayesians build off of Cox's theorem and Subjective Bayesians off of de Finetti's work.

The best book I think on the Objective outlook is E.T. Jaynes's "Probability Theory: The Logic of Science"

For the Subjective Bayesian outlook I like J. Kadane's "Principles of Uncertainty" or de Finetti's "Theory of Probability: A Critical Introductory Treatment".
 
  • #91
A. Neumaier said:
Not really. Frequentists just have conditional probability, i.e., probabilities relative to a subensemble of the original ensemble. Nobody is choosing or updating anything; it never occurs
Well, you learn you are in a subensemble then. Does this change much? It's still the case that the theory doesn't specify when you "learn" you're in a given subensemble.

In all views you will update your probabilities; regardless of what meaning you give to this, it occurs across all views. The point is that the theory never gives a formal account of how this comes about. It's just a primitive of probability theory.

A. Neumaier said:
Neither are there observers or superobservers. I have never seen the notion of superobservers in classical probability of any kind
One person is including just the system in the probability model (observer), the other is including the system and the device (superobserver). That's all a superobserver is really. The notion can be introduced easily.

A. Neumaier said:
No, because both Wigner and his friend only entertain subjective approximations of the objective situation. Subjectively everything is allowed. Even logical faults are subjectively permissible (and happen in real subjects quite frequently).
I don't understand this, I have to say. The Bayesian view of probability does not permit logical faults either, under de Finetti's or Cox's constructions. Unless you mean something I don't understand by "logical faults". In fact the point of Cox's theorem is that Probability is Logic under uncertainty.

Regarding your reply to the last quote, can you be more specific about what you mean by Wigner's friend not being possible under a frequentist view? I really don't understand.
 
  • #92
DarMM said:
Wikipedia (Bayesian probability) said:
For objectivists, interpreting probability as extension of logic, probability quantifies the reasonable expectation everyone (even a "robot") sharing the same knowledge should share in accordance with the rules of Bayesian statistics, which can be justified by Cox's theorem.
What a robot finds reasonable depends on how it is programmed, hence is (in my view) subjective.
What should count as knowledge is conceptually very slippery and should not figure in good foundations.
Wikipedia (Cox's theorem) said:
Cox wanted his system to satisfy the following conditions:
  1. Divisibility and comparability – The plausibility of a proposition is a real number and is dependent on information we have related to the proposition.
  2. Common sense – Plausibilities should vary sensibly with the assessment of plausibilities in the model.
  3. Consistency – If the plausibility of a proposition can be derived in many ways, all the results must be equal.
Even though a unique, plausible concept of probability comes out after making the rules mathematically precise, I wouldn't consider this objective, since it depends on "information we have", hence on a subject.

Rather than starting with a complicated set of postulates that make recourse to subjects and deriving standard probability, it is much more elegant, intuitive, and productive to start directly with the rules for expectation values featured by Peter Whittle (and recalled in physicists' notation in Section 3.1 of my Part II). I regularly teach applied statistics on this basis, from scratch.
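For orientation, a sketch of the expectation-first rules in question (following Whittle's Probability via Expectation; the full list also includes a convergence axiom): expectation ##\langle\cdot\rangle## is taken as the primitive, subject to
$$\langle 1\rangle = 1, \qquad \langle \alpha x + \beta y\rangle = \alpha\langle x\rangle + \beta\langle y\rangle, \qquad x \ge 0 \;\Rightarrow\; \langle x\rangle \ge 0,$$
and probability is the derived quantity ##P(A) = \langle \chi_A\rangle##, the expectation of the indicator function of the event ##A##.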

DarMM said:
The best book I think on the Objective outlook is E.T. Jaynes's "Probability Theory: The Logic of Science"
There are no objective priors, and Jaynes' principle of maximum entropy (Chapter 11) gives completely wrong results for thermodynamics if one assumes knowledge of the wrong prior and/or the wrong expectation values (e.g., that of ##H^2## rather than that of ##H##). One needs to be informed by what actually works to produce the physically correct results from max entropy. A detailed critique is in Section 10.7 of my online book.
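To illustrate the dependence on which expectations are constrained (a standard Lagrange-multiplier computation): maximizing the entropy ##S(\rho) = -\operatorname{Tr}\rho\ln\rho## subject to ##\operatorname{Tr}\rho = 1## and a fixed value of ##\langle H\rangle## yields the canonical ensemble, while fixing ##\langle H^2\rangle## instead does not,
$$\operatorname{Tr}\rho H = E \;\Rightarrow\; \rho = \frac{e^{-\beta H}}{\operatorname{Tr} e^{-\beta H}}, \qquad \operatorname{Tr}\rho H^2 = c \;\Rightarrow\; \rho = \frac{e^{-\lambda H^2}}{\operatorname{Tr} e^{-\lambda H^2}} .$$
The second density operator is not the canonical one, so the thermodynamics derived from it comes out wrong; nothing inside the max-entropy formalism says which expectations must be constrained.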
 
Last edited:
  • Informative
Likes Mentz114
  • #93
A. Neumaier said:
There are no objective priors, and Jaynes' principle of maximum entropy (Chapter 11) gives completely wrong results for thermodynamics if one assumes knowledge of the wrong prior and/or the wrong expectation values (e.g., that of ##H^2## rather than that of ##H##). One needs to be informed by what actually works to produce the physically correct results from max entropy. A detailed critique is in Section 10.7 of my online book.
That's pretty much the argument most Subjective Bayesians use against Objective Bayesianism. Certainly I know you do not like Probability in the Foundations, thus the Thermal Interpretation. It is for this reason that I mentioned it in #80.
 
  • #94
DarMM said:
you learn you are in a subensemble then. Does this change much? It's still the case that the theory doesn't specify when you "learn" you're in a given subensemble.
No. You assume that you are in a subensemble. This assumption may be approximately correct, but human limitations in this assessment are irrelevant for the scientific part.

Theory never specifies which assumptions are made by one of its users. It only specifies what happens under which assumptions.
DarMM said:
In all views you will update your probabilities
I may update probabilities according to whatever rules seem plausible to me (never fully rational), or whatever rules are programmed into the robot who makes the decisions. But the updating is a matter of decision making, not of science.
DarMM said:
The point is that the theory never gives a formal account of how this comes about.
My point is that theory is never about subjective approximations to objective matters. It is about what is objective. How humans, robots, or automatic experiments handle it is a matter of psychology, artificial intelligence, and experimental planning, respectively, not of the theory.
DarMM said:
One person is including just the system in the probability model (observer), the other is including the system and the device (superobserver). That's all a superobserver is really.
The only observers of a classical Laplacian universe are Maxwell's demons, and they cannot be included in a classical dynamics. So their superobservers aren't describable classically.
DarMM said:
I don't understand this I have to say. The Bayesian view of probability does not permit logical faults
I was talking about my views on subjective and objective. A subject isn't bound by rules. This makes all Bayesian derivations very irrelevant, no matter how respectable the literature about it. They discuss what should be the case, not what is the case. But only the latter is the subject of science. Bayesian justifications are ethical injunctions, not scientific arguments.
DarMM said:
can you be more specific about what you mean by Wigner's friend not being possible under a frequentist view?
They are of course possible, but their assessment of the situation is (in my view) just subjective musings, approximations they make up based on what they happen to know. Thus there is no need for physics to explain their findings.

What would be of interest is a setting where Wigner and his friend are both quantum detectors, and their "knowledge" could be specified precisely in terms of properties of their state. Only then would the discussion about them become a matter of physics.
DarMM said:
I know you do not like Probability in the Foundations, thus the Thermal Interpretation.
I have nothing at all against probability in the frequentist sense. The only problem with having it in the foundations is that frequentist statements about systems that are unique are meaningless.
But the foundations must apply to our solar system, which is unique from the point of view of what physicists from our culture can measure.
 
  • Like
Likes Auto-Didact
  • #95
A. Neumaier said:
Theory never specifies which assumptions are made by one of its users. It only specifies what happens under which assumptions.
A. Neumaier said:
But the updating is a matter of decision making, not of science.
A. Neumaier said:
My point is that theory is never about subjective approximations to objective matters. It is about what is objective
A. Neumaier said:
I was talking about my views on subjective and objective. A subject isn't bound by rules. This makes all Bayesian derivations very irrelevant, no matter how respectable the literature about it
A. Neumaier said:
They are of course possible, but their assessment of the situation is (in my view) just subjective musings, approximations they make up based on what they happen to know. Thus there is no need for physics to explain their findings
A. Neumaier said:
I have nothing at all against probability in the frequentist sense. The only problem with having it in the foundations is that frequentist statements about systems that are unique are meaningless.
But the foundations must apply to our solar system, which is unique from the point of view of what physicists from our culture can measure
Just going by these, are you basically giving your reasons for not liking the typical statistical view (either Bayesian or Frequentist) of probability in the Foundations? Probability involves updating in both views, Bayesian or Frequentist.

You are basically saying you prefer a non-statistical reading of things in the Foundations as I mentioned as an alternative in #80.
 
Last edited:
  • #96
DarMM said:
are you basically giving your reasons for not liking the typical statistical view (either Bayesian or Frequentist) of probability in the Foundations? Probability involves updating in both views, Bayesian or Frequentist.
No.

I am perfectly happy with a frequentist view of classical probability as applying exactly to (fully or partially known) ensembles, to which any observer (human or not) assigns - usually as consistently as feasible - approximate values based on data, understanding, and guesswork.

But the theory (the science) is about the true, 100% exact frequencies, and not about how to assign approximate values. The latter is an applied activity, the subject of applied statistics, not of probability theory. Applied statistics is a mixture of science and art, and has - like any art - subjective aspects. I teach it regularly and without any metaphysical problems (never a student asking!) based on Peter Whittle's approach, Probability via Expectation. (Theoretical science also has its artistic aspects, but these are restricted to the exposition of the theory and preferences in the choice of material.)

The only reason I cannot accept probability in the foundations of physics is that the latter must apply to unique large systems, while the classical notion of probability cannot do this. By axiomatizing, instead of probability, the more basic notion of uncertainty, and treating probability as a derived concept, I found the way out - the thermal interpretation.

Bayesian thinking (including any updating - exact values need no updating) is not science but belongs 100% to the art of applied statistics, supported by a little, fairly superficial theory based on ugly and contentious axioms. I had studied these in some detail many years ago, starting with the multivolume treatise on the foundations of measurement by Suppes, and found this (and much more) of almost no value - except to teach me what I should avoid.
DarMM said:
That's pretty much the argument most Subjective Bayesians use against Objective Bayesianism.
They are driving out the devil with Beelzebub.
 
Last edited:
  • Like
Likes Auto-Didact
  • #97
A. Neumaier said:
The only reason I cannot accept probability in the foundations of physics is that the latter must apply to unique large systems, while the classical notion of probability cannot do this. By axiomatizing, instead of probability, the more basic notion of uncertainty, and treating probability as a derived concept, I found the way out - the thermal interpretation.
I appreciate your post, but this does seem to me to be about not liking Probability in the Foundations, Bayesian or Frequentist. My main point was that most of the issues people here seem to be having with the Minimal Statistical view, or with similar views like Neo-Copenhagen or QBism*, reduce to the issue of having a typical statistical view (again in either sense) of Probability.

As I said, understanding the probabilistic terms in a new way, detached from the typical views, is the only way out of these issues if one does not like this. Hence the final line of #80.

*They mainly differ only in whether they prefer Frequentist, Objective Bayesian, or Subjective Bayesian approaches. They agree with each other on virtually all other issues.
 
  • #98
DarMM said:
but this does seem to me to be about not liking Probability in the Foundations, Bayesian or Frequentist.
It is not about not liking it, but a specific argument for why having probability in the foundations makes the foundations invalid. I'd not mind having probability in the foundations if it appeared only in properties of tiny subsystems of large unique systems.
 
  • #99
A. Neumaier said:
It is not about not liking it, but a specific argument for why having probability in the foundations makes the foundations invalid. I'd not mind having probability in the foundations if it appeared only in properties of tiny subsystems of large unique systems.
Yes, but that's what I was talking about. The issues here seem to be issues with probability in the Foundations. The "liking" was not meant to imply you lacked an argument or were operating purely on whimsy. :smile:
 
  • #100
DarMM said:
Is all of this not just a problem related to QM being a probabilistic theory?

For example, if I model a classical system such as a gas using statistical methods like Liouville evolution, I start with an initial state of the system ##\rho_0## and it evolves into a later one ##\rho_t##. Nothing in the formalism will tell me when a measurement occurs to allow me to reduce ##\rho## to a tighter state with smaller support (i.e. Bayesian updating). Just as nothing in a probabilistic model of a die will tell me when to reduce the uniform distribution over outcomes down to a single outcome. Nothing says what a measurement is.

Maybe a good way for you to think about the difference is that, classically, the idea of preexisting hidden variables underlying measurements is very easy, natural, and intuitive, to the extent that everyone (who would want to be a realist/materialist) would simply adopt a HV interpretation of classical physics, which escapes all these issues around measurement.

In QM, HVs are highly constrained and unintuitive. In response, some people bite the bullet and try to still make them work, some go to many worlds, some change the physics itself (GRW). But other would-be realists decide to give up on realism, and thus face the issues with measurement and probability being fundamental.

So, I think you are right that there is a very similar set of philosophical problems for a classical antirealist as for a quantum antirealist, and ultimately part of being a true antirealist is not caring about this. The difference is that many quantum antirealists are not true antirealists. Many are just defeated realists who only dislike antirealism slightly less than they dislike the options in quantum realism, but still believe in all the downsides of antirealism, and think this should be broadcast. Others are okay with one or more of the quantum realist options, but are forced to learn the antirealist view in textbooks, and so will talk about the issues with antirealism to try to remedy this bias. Because of these cultural realities, this debate, which you correctly identify as being over antirealism writ large and not specific to QM, ends up being cashed out only in the context of quantum antirealism.
 
  • Like
Likes eloheim, Auto-Didact and DarMM
  • #101
A. Neumaier said:
Nothing here is about causes. Bayes' formula just relates two different conditional probabilities.
Well, yes, it has to do with the use we make of it, because otherwise it's only syntax.

To make sense you need semantics and therefore an interpretation/model.

/Patrick
 
  • #102
microsansfil said:
Well, yes, it has to do with the use we make of it, because otherwise it's only syntax.

To make sense you need semantics and therefore an interpretation/model.
Yes, but no semantics requires that one of ##A## and ##B## is the cause of the other. They can be two arbitrary statements. Taking the relative frequency of pregnancies as ##A## and the number of storks in the area as ##B## is valid semantics.
 
  • Like
Likes Auto-Didact
  • #103
https://bayes.wustl.edu/etj/articles/cmystery.pdf
The idea is that a conditional probability, depending on the context, can be used to express physical causality.

In the paper, see the example "Bernoulli's urn revisited" (page 13): in (18), the probability on the right expresses a physical causation, the one on the left only an inference.

A conditional probability can, depending on the context, express a "physical causality" or an inference.

/Patrick
 
  • #104
microsansfil said:
https://bayes.wustl.edu/etj/articles/cmystery.pdf
The idea is that a conditional probability, depending on the context, can be used to express physical causality.

In the paper, see the example "Bernoulli's urn revisited" (page 13): in (18), the probability on the right expresses a physical causation, the one on the left only an inference.

A conditional probability can, depending on the context, express a "physical causality" or an inference.
But only if you already know the causal connection. From probabilities alone one can never deduce a causal relation, only correlations.
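A small worked illustration: any joint distribution factors in both directions,
$$P(x,y) \;=\; P(x)\,P(y\mid x) \;=\; P(y)\,P(x\mid y),$$
so a model in which ##X## "causes" ##Y## and one in which ##Y## "causes" ##X## reproduce exactly the same observed frequencies; the causal reading has to be supplied from outside the probabilities.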
 
  • Like
Likes Auto-Didact
  • #105
DarMM said:
Many books on the philosophy of Probability theory delve into the details, but the Wikipedia link here has the most pertinent details:
https://en.wikipedia.org/wiki/Bayesian_probability#Objective_and_subjective_Bayesian_probabilities
It's mostly about prior probabilities. Objective Bayesians build off of Cox's theorem and Subjective Bayesians off of de Finetti's work.

The best book I think on the Objective outlook is E.T. Jaynes's "Probability Theory: The Logic of Science"

For the Subjective Bayesian outlook I like J. Kadane's "Principles of Uncertainty" or de Finetti's "Theory of Probability: A Critical Introductory Treatment".
Surely probability theory is no more a part of the foundations of QT than the Fourier transform?
They are both in the toolbox of many theories, including classical mechanics.
 
