Statistical ensemble interpretation done right

  • #211
A. Neumaier said:
No. An ideal closed quantum system cannot be observed from the outside, hence probabilities do not apply.
What I mean is a prepared system, which evolves as a closed system for a while, and then you measure some observable of it with a measurement device. The latter part is of course an extension of the closed system by the measurement device.
A. Neumaier said:
And for observations from the inside of a quantum system, current interpretations - with the sole exception of the thermal interpretation - have nothing to say. It would mean for them to give a mathematical definition of what it means for one part of a quantum system to observe another part!
We have the interaction of the measured system with the measurement device. This is a quantum system, with the measurement device being macroscopic, which needs some approximate treatment in the sense of quantum many-body theory. This is all obvious and doesn't touch the apparent problem you want to discuss, i.e., the idea that one has to causally explain the outcome of a single measurement by QT. This is, however, a contradiction in itself, because QT predicts that this outcome is random, except if the system is prepared in a state where the measured observable takes a determined value (e.g., if it's prepared in an eigenstate of the self-adjoint operator representing the measured observable). Otherwise it's "irreducibly random" and thus the outcome is not determined by the state preparation under consideration.
 
  • #212
vanhees71 said:
Otherwise it's "irreducibly random" and thus the outcome is not determined by the state preparation under consideration.
Can you mathematically define "irreducibly random"? Let me assume you try to do it by going with some sort of "maximal definition" (i.e. as perfect randomness as possible). Are you really sure that real world quantum randomness would manage to satisfy that "maximal condition"? But in the end, some formal mathematical definition of randomness would probably involve infinite sequences anyway, and hence not be operationally checkable.

So I guess some sort of game-theoretic informal definition of "irreducibly random" would be more helpful. And there, the typical subtleties of statistics would show up again, where the "causal order" of the predictions and decisions of the involved agents becomes important.
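
To see why an operational check of "irreducible randomness" can only ever be statistical, here is a minimal sketch (my own illustration, nothing from this thread): a NIST-style monobit frequency test on a finite bit string. It can reject a string as non-random, but a pass certifies nothing "irreducible" - a deterministic, seeded pseudorandom generator passes just as easily.

```python
import math
import random

def monobit_pvalue(bits):
    """Frequency (monobit) test: p-value under the hypothesis that
    0s and 1s are equally likely and independent."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # random walk of +/-1 steps
    s_obs = abs(s) / math.sqrt(n)           # normalized test statistic
    return math.erfc(s_obs / math.sqrt(2))  # two-sided p-value

# A deterministic (seeded) PRNG is indistinguishable from "true" randomness here:
random.seed(42)
bits = [random.getrandbits(1) for _ in range(10**5)]
print(monobit_pvalue(bits))  # typically well above 0.01 -> "looks random"
```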
 
  • Like
Likes physika, lodbrok and Fra
  • #213
martinbn said:
That is statistics, but you can use any interpretation. It seems that you don't always distinguish statistics from the statistical interpretation.

Perhaps it's:

initial state, i = infinite ensemble of identically prepared systems
transition probability = relative frequencies over the ensemble for that transition to occur
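
A minimal numerical sketch of this reading (my own illustration; the preparation angle and outcome labels are invented for the example): estimate a transition probability as the relative frequency of an outcome over ever larger finite stand-ins for the "infinite ensemble" of identically prepared systems.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = np.pi / 3                      # invented preparation angle of a qubit
p_born = np.cos(theta / 2) ** 2        # Born-rule probability of outcome "+": 0.75

for n in (10**2, 10**4, 10**6):        # finite stand-ins for the infinite ensemble
    outcomes = rng.random(n) < p_born  # n identically prepared systems, one measurement each
    print(n, outcomes.mean())          # relative frequency -> converges to p_born
```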
 
  • Like
Likes dextercioby
  • #214
A. Neumaier said:
Thus the naive approach does not give the physically observed answer, and one needs something more sophisticated, something unknown so far. My informal analysis of what is needed points to chaotic motion that (due to the environment) settles quickly to an equilibrium state. But the standard decoherence arguments always take an average somewhere, hence produce only an average answer, but not the required answer in almost every single case. Thus one needs better mathematical tools that apply to the single case. I am working on these, but progress is slow.
I sympathize with part of this thinking, but I think you have a different angle still. It is interesting, and I find it interesting to conceptually understand how it differs.

In my way of putting this "issue", I ask myself HOW - physically - an observer INSIDE the one world we have can "construct" the corresponding "average" that otherwise has an external statistical interpretation. This is at the core of what I also see as a key task.

By "construction" here I consider information flow, as well as mathematics. I.e., how can this observer inside the system, having limited resources, collect, process and compute the new "equivalent" object that motivates the statistical averages in _single cases_? I seek an answer in terms of information locally available to that observer, and an answer that is "computable" in terms of that observer's complexity. I.e., HOW does an inside observer get informed about the whole? It obviously can't, but it can attain some kind of "equilibrium" where it is "optimally" informed, given the constraints. I suspect this is maybe conceptually related to your "thermal picture".

A. Neumaier said:
But according to the thermal interpretation, a complete knowledge of the joint state of the detector, the atom, and the environment at the start of the experiment determines the joint state deterministically and unitarily. Thus one can in principle find out the properties of the detector at the end of the measurement. That one cannot do it in practice doesn't matter; one cannot calculate in practice the Newtonian dynamics of an N-particle system; nevertheless one can answer many qualitative questions.
But now the interesting part: do I understand you right here that your construction (which you still seek) is based on "information" that is in principle (in analogy with Newtonian mechanics) available in ALL of the universe, seen as a totally closed system? And you do not view this as required to be "observed"; it just "is"?

So given this - what I would call "information", but what you may call objective facts(?) (which aren't known, and are thus in a sense "hidden") - you seek a mathematical way to motivate the equivalent of the statistical foundations from the perspective of a single case, as seen from a part of the whole system that is the "inside observer"? Is this a reasonable description?

/Fredrik
 
  • #215
gentzen said:
So I guess some sort of game-theoretic informal definition of "irreducibly random" would be more helpful. And there, the typical subtleties of statistics would show up again, where the "causal order" of the predictions and decisions of the involved agents becomes important.
I agree.

It's random to me ~= I fail to distinguish it from noise.

Randomness is just another name an observer or agent uses when failing to distinguish something from noise. It does not exclude a hidden pattern we haven't decoded. This says as much about the agent's capacity for pattern matching and computation as about the source itself.

(The interesting thing in a gaming context is that, considering the decisions of the agent, it obviously does not matter if the randomness is fundamental or just a manifestation of the agent's limitations in pattern recognition, as the agent acts based on what it thinks it knows. Whether it's right or wrong as per someone else is irrelevant. This is, for me at least, directly related to the quest of motivating the foundation for probability in single cases, i.e., from the perspective of one particular observer inside the universe, who observes and acts here and now. Here fictional ensembles should not be used as motivation; that makes no sense and adds no explanatory value. This also, I think, suggests even more far-reaching things. We can disagree on that of course, but this is why the exact physical and operational "motivation" for the statistical methods is paramount. Everyone understands the fiction, but what about nature, which is not fiction?)

/Fredrik
 
  • Like
Likes gentzen
  • #216
gentzen said:
Can you mathematically define "irreducibly random"? Let me assume you try to do it by going with some sort of "maximal definition" (i.e. as perfect randomness as possible). Are you really sure that real world quantum randomness would manage to satisfy that "maximal condition"? But in the end, some formal mathematical definition of randomness would probably involve infinite sequences anyway, and hence not be operationally checkable.
Of course not. It's not a mathematical but a physical statement, and I can't prove it. It's just empirical evidence, i.e., for certain observations we don't have a cause. E.g., if we put a muon in a Penning trap it will decay after some time, but we cannot predict in any way when this individual muon will decay, and that's also what standard QFT predicts: The decay is due to the weak interaction, and all we can predict is the decay constant/mean lifetime.
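
A small Monte Carlo sketch of exactly this situation (my own illustration; the lifetime value is the textbook ##\approx 2.2\,\mu\text{s}## for the muon): each individual decay time is unpredictable, yet the mean lifetime - the only thing the theory predicts - is recovered from the ensemble.

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 2.2e-6                          # muon mean lifetime in seconds (approx.)

t = rng.exponential(tau, size=10**6)  # one decay time per trapped muon
print(t[:3])                          # individual times: scattered, unpredictable
print(t.mean())                       # ensemble mean -> ~2.2e-6 s, as predicted
```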

It's of course not logically excluded that there might be a cause which we just don't know, but there's no evidence for it. What's however pretty sure is that there's no local deterministic theory that leads to the same probabilistic predictions as Q(F)T, which is very successful with its predictions, particularly concerning how the Bell inequalities are violated in various Bell tests.
gentzen said:
So I guess some sort of game-theoretic informal definition of "irreducibly random" would be more helpful. And there, the typical subtleties of statistics would show up again, where the "causal order" of the predictions and decisions of the involved agents becomes important.
With "irreducibly random" I mean the implication of QT that there's no state of a sufficiently non-trivial quantum system in which all observables take definite values, i.e., when measuring such an observable on many equally prepared systems you'll get random results with probabilities predicted by QT, and this random behavior is, according to QT, not due to some ignorance (as in classical statistical mechanics, where I can't know the exact classical state because of its complexity, but according to classical mechanics all observables always take determined values) but because the values of these observables are truly random.
 
  • Like
Likes Lord Jestocost, gentzen and Fra
  • #217
A. Neumaier said:
It is not hidden since my observables are the N-point correlation functions of QFT, as they were always used, but without the ensemble interpretation - which does not make sense for quantum fields in spacetime, since a spacetime field cannot be prepared repeatedly.
But wouldn't you say that all the possible N-point correlation functions of the environment of the whole universe are de facto "hidden" from an actual real observer inside? At least similar to how the initial conditions are de facto hidden from the observer even in Newtonian mechanics?

In classical mechanics, the observer is a fiction: the set of all possible measurement devices, whose records you can store and process without any constraints on storage or computational time. In this sense one does not even consider this "super observer" as something important; you can just think that all this information "exists" and "is", without requiring that it be squeezed out of actual inferences.

Do you mean that, just as we do not label initial conditions in Newtonian mechanics as "hidden" - we usually think of them as impractical and leading to deterministic chaos, and leave it there - you want to apply the same to QM, as well as to the set of N-point correlation functions? (And then, from THAT, you seek some construction to motivate the foundations of statistics without referring to "sampling" of the whole universe?)

Does this sound right, or am I missing something subtle in your ideas?
(My objective here is try refine my understanding of your perspective)

/Fredrik
 
  • #218
vanhees71 said:
It's of course not logically excluded that there might be a cause which we just don't know, but there's no evidence for it. What's however pretty sure is that there's no local deterministic theory that leads to the same probabilistic predictions as Q(F)T, which is very successful with its predictions, particularly concerning how the Bell inequalities are violated in various Bell tests.

With "irreducibly random" I mean the implication of QT that there's no state of a sufficiently non-trivial quantum system in which all observables take definite values, i.e., when measuring such an observable on many equally prepared systems you'll get random results with probabilities predicted by QT, and this random behavior is, according to QT, not due to some ignorance (as in classical statistical mechanics, where I can't know the exact classical state because of its complexity, but according to classical mechanics all observables always take determined values) but because the values of these observables are truly random.
I think by "truly random" you mean simply that the uncertainty is NOT due to the physicist's ignorance.
If so, I fully agree.

But there is another logical option that is very suggestive, namely that the "randomness" is observer-dependent.

I think this is not as moronic as it may sound, nor is it just empty philosophy. In game-theoretic abstractions, this idea comes with lots of explanatory power. For example, consider how two agents interact: their knowledge or ignorance about the other player is key to understanding their optimal choice of strategies. The strategy is then not chosen at random; rather, the one strategy takes into account the actual uncertainty. In a sense, that is the "quantum strategy".

The equivalence of Bell's inequality and the Nash inequality in a quantum game-theoretic setting

If one objects to considering observers taking part in interactions: that is already what we do in all these Wigner's-friend thought experiments, and it is at the heart of QM foundations.
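
To make the game-theoretic point concrete, here is a minimal sketch of the standard CHSH setup (my own illustration, not taken from the cited paper): every deterministic classical strategy is bounded by ##|S| \le 2##, while the singlet-state correlations ##E(a,b) = -\cos(a-b)## reach ##2\sqrt{2}##.

```python
import itertools
import math

# Classical deterministic strategies: each player fixes an outcome +/-1 per setting.
best_classical = max(
    abs(a0*b0 - a0*b1 + a1*b0 + a1*b1)
    for a0, a1, b0, b1 in itertools.product((-1, 1), repeat=4)
)
print(best_classical)  # 2 (the local/classical CHSH bound)

# Quantum singlet correlations E(a,b) = -cos(a-b) at the optimal angles.
E = lambda a, b: -math.cos(a - b)
a0, a1, b0, b1 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S))          # 2*sqrt(2) ~ 2.828 (the Tsirelson bound)
```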

/Fredrik
 
  • #219
Fra said:
I think by "truly random" you mean simply that the uncertainty is NOT due to the physicist's ignorance.
If so, I fully agree.
Yes! It's hard to express this clearly ;-)).
Fra said:
But there is another logical option that is very suggestive, namely that the "randomness" is observer-dependent.
But that seems empirically not to be the case, because standard QT seems to work very well for all physicists, i.e., doing an experiment gives the same result in every lab and it's compatible with QT, independent of the individual experimentalist and/or theorists performing the experiment and/or analyzing it in terms of QT. Otherwise QT wouldn't be generally accepted (as far as the objective physical interpretation-independent content is concerned) in the physics community.
Fra said:
I think this is not as moronic as it may sound, nor is it just empty philosophy. In game-theoretic abstractions, this idea comes with lots of explanatory power. For example, consider how two agents interact: their knowledge or ignorance about the other player is key to understanding their optimal choice of strategies. The strategy is then not chosen at random; rather, the one strategy takes into account the actual uncertainty. In a sense, that is the "quantum strategy".

The equivalence of Bell's inequality and the Nash inequality in a quantum game-theoretic setting

If one objects to considering observers taking part in interactions: that is already what we do in all these Wigner's-friend thought experiments, and it is at the heart of QM foundations.

/Fredrik
Wigner's friend is a highly problematic issue, indeed. It's related to a very personal interpretation of QT according to Wigner. I'm not familiar enough with it to discuss it in a clear way. For me this interpretation is on the border of solipsism. It's a bit like the "Princeton interpretation" a la von Neumann, who claims that quantum measurements can only be understood when the measurement results are noticed by some "conscious being", and only then does the "quantum state" collapse.

The problem with this is that (a) it seems to contradict common practice, where the measurement results are stored by some machines first and only then analyzed by physicists, i.e., "conscious beings" ;-)), after the observed objects are long gone, and thus it's hard to make sense of what "collapse" should mean at all, and (b) it's not at all clear how to define "consciousness" in an objective way. As Bell put it ironically: When did the first state collapse occur? Was it enough that an amoeba took notice of some "measurement result", or does it need a PhD in physics?
 
  • Informative
Likes Lord Jestocost
  • #220
vanhees71 said:
But that seems empirically not to be the case, because standard QT seems to work very well for all physicists, i.e., doing an experiment gives the same result in every lab and it's compatible with QT, independent of the individual experimentalist and/or theorists performing the experiment and/or analyzing it in terms of QT.
Yes, of course. But "physicists" here are an idealized class of QT observers, and these observers by idealization "interact" classically (or rather, not at all).

By observer I mean any observer, even non-ideal real/physical observers embedded in the system. Of course QM doesn't handle these! That's both the problem and the opportunity.

My post was illustrating an interesting angle, with a supposedly intuitive conceptual handle on this.

/Fredrik
 
  • #221
vanhees71 said:
Wigner's friend is a highly problematic issue, indeed. It's related to a very personal interpretation of QT according to Wigner. I'm not familiar enough with it to discuss it in a clear way. For me this interpretation is on the border of solipsism. It's a bit like the "Princeton interpretation" a la von Neumann, who claims that quantum measurements can only be understood when the measurement results are noticed by some "conscious being", and only then does the "quantum state" collapse.

The problem with this is that (a) it seems to contradict common practice, where the measurement results are stored by some machines first and only then analyzed by physicists, i.e., "conscious beings" ;-)), after the observed objects are long gone, and thus it's hard to make sense of what "collapse" should mean at all, and (b) it's not at all clear how to define "consciousness" in an objective way. As Bell put it ironically: When did the first state collapse occur? Was it enough that an amoeba took notice of some "measurement result", or does it need a PhD in physics?
I by no means suggest that we need conscious observers. I have no objections to what you write here. I think I am as fed up with those confusing associations as you are o0) You surprised me and jumped into the wrong bin in response.

Trying again, without "Wigner"...

The problem arises when we make parts of the macroscopic domain (which qualify as observers in QM, being part of Bohr's classical side of the cut) part of the "quantum system", because then we in principle end up in a situation where we need to describe two interacting observers. I.e., we need a quantum-mechanical description of the interaction between two "macroscopic systems".

You probably would choose to describe this by decoherence, and consider the quantum system of two classical parts + some other stuff as one BIG quantum system, and suggest that QM applies to that. What I think is the problem with that approach is unification. How do you explain, for example, the gravity between these two systems in a way that scales well, starting from a microscopic description (in which there is no gravity)? Here is where we have problems of divergences, fine-tuning etc. And I think it's not JUST about gravity/TOE; it's a problem already in seeking a GUT. I think this is not a coincidence. The missing links here suggest to me that there is something missing in the theory.

When observers are LARGE macroscopic systems and the quantum system is small (atomic scale, or even a piece of metal, or a big piece of metal for that matter - still, it's "small"), the issues I mentioned go away and QFT is fine. But pressing the limits with these gedanken experiments seems to press on the foundations, suggesting something isn't logically coherent, don't you think? The physical basis or motivation for the "statistical treatment" seems to be one thing at the core of this issue.

I write this in words as the problem is first of all conceptual and our understanding has serious gaps.

The mathematical side of the same issue is that we have no unified, coherent theory of all forces that does not require fine-tuning - or worse, fine-tuning in hypothetical landscapes that we can't navigate without new data. The mathematical statement of the problem can also easily be misunderstood without the conceptual context.

/Fredrik
 
  • #222
Fra said:
I think by "truly random" you mean simply that the uncertainty is NOT due to the physicist's ignorance.
If so, I fully agree.
If it is reducible, then it is predictable and not random. If we don't know how to reduce it, then we are ignorant of its cause. It makes no sense to suggest that "uncertainty" is NOT due to "ignorance". The very definitions of the concepts of "uncertainty" and "ignorance" depend on each other.

vanhees71 said:
Yes! It's hard to express this clearly ;-)).
Because it is internally inconsistent.
 
  • #223
Why is a world with "irreducible randomness" inconsistent?

In QT, "complete knowledge" about a system means that you have prepared it in a pure state, e.g., by performing a von Neumann filter measurement of a complete set of compatible observables. This, however, does not imply sharp values for all observables. In general, observables that are not compatible with the determined complete set of compatible observables do not take determined values.
 
  • Like
Likes Lord Jestocost
  • #224
lodbrok said:
If it is reducible, then it is predictable and not random. If we don't know how to reduce it, then we are ignorant of its cause. It makes no sense to suggest that "uncertainty" is NOT due to "ignorance". The very definitions of the concepts of "uncertainty" and "ignorance" depend on each other.
Perhaps it's again about words, and I don't know what the correct definitions of the words are, but I will explain how I distinguish between uncertainty and ignorance.

Uncertainty ~ a measure of (an observer's/agent's) predictive ability, typically measured by some "statistical uncertainty" score, say a confidence interval at some confidence level, for predicting the future (for example the outcome of an experiment) or the responses from the environment - but without specifying WHY.

Ignorance ~ indicates that the lack of certainty/confidence is because the agent/observer is uninformed, but where it at least in principle could have been informed. I.e., the information it needs is in principle information-theoretically and computationally accessible on a given time scale.
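
To make the distinction concrete, a minimal sketch (my own, with invented numbers): an agent quantifies its uncertainty about an outcome frequency with a confidence interval, and this number says nothing about WHY the outcomes vary - ignorance and fundamental randomness look the same here.

```python
import math

n, k = 10_000, 7_498                              # invented trial counts
p_hat = k / n                                     # estimated outcome frequency
half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)  # 95% normal-approximation CI
print(f"p = {p_hat:.4f} +/- {half:.4f}")          # quantified uncertainty, no "WHY"
```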
lodbrok said:
Because it is internally inconsistent.
Wherein lies the inconsistency? Do you mean because the randomness is not absolute, or something else?

What I refer to as the relativity of randomness is nothing strange; I have in mind something similar to cryptographic classifications such as computational hardness, or even limits due to information-theoretic constraints. I.e., if the decrypting algorithm requires more memory and computational power than agent A possesses, then I would not label agent A as ignorant, because it may well perform optimally (given the constraints, which are physical) and yet fail to distinguish input from noise.

The presumed relevance of the "size of the observer", scaling from the microscopic to the macroscopic domain, is that it presumably implies a bound on its computational capacity. I don't have a simple reference that gets right to the point, but I am associating around this:
https://en.wikipedia.org/wiki/Bekenstein_bound
https://en.wikipedia.org/wiki/Landauer's_principle
Gia Dvali has interesting ideas and talks for a few minutes about information in black holes (what-fundamental-physics-behind-information-processing-black-holes), but the next logical step is: what about information storage in, say, any system?
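
For concreteness, the Landauer bound is easy to put numbers on (standard textbook formula, values mine): erasing one bit at temperature ##T## dissipates at least ##k_B T \ln 2##.

```python
import math

k_B = 1.380649e-23             # Boltzmann constant, J/K (exact SI value)
T = 300.0                      # room temperature, K
E_bit = k_B * T * math.log(2)  # Landauer limit: minimum heat per erased bit
print(E_bit)                   # ~2.87e-21 J
```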

The connection is that if we "ignore" the information-capacity limits and use "fictive" objects that really would imply creating black holes, then from the information-theoretic perspective we have an inconsistency. These problems are IMO rooted in the foundations of QM. The train of association may be long, but this isn't supposed to be easy; otherwise all this would have been solved already.

/Fredrik
 
  • Like
Likes vanhees71
  • #225
gentzen said:
Can you explain why "a spacetime field cannot be prepared repeatedly"? I don't see why the ensemble interpretation should not work for QFT. If I were better at QFT, I probably would simply object to that statement.
The state of a quantum field ##\phi(x)## determines all expectations and correlations at all spacetime points, but we can prepare only the part residing in a small spacetime region, which by far does not determine the state.
Morbert said:
Decoherent histories has been around for a good few decades at this stage, with one motivation for its development being the description of closed systems, and measurements as processes therein.
https://www.webofstories.com/play/murray.gell-mann/163
https://iopscience.iop.org/article/10.1088/1742-6596/2533/1/012011/pdf
https://arxiv.org/abs/1704.08725

It gives a clear account of what it means for a measurement to occur in a closed system.
I haven't seen this and related articles and will respond after I read them. But it is unlikely to change the picture.
vanhees71 said:
Of course, it works for QFT. It's how QFT is used in the lab: in scattering experiments you describe asymptotic free states in the initial state...
But only since QFT in these experiments does not model the measurement process (including the detector) but only the 2-particle scattering events. These have so few degrees of freedom that they can be prepared in large numbers.
vanhees71 said:
It's "irreducibly random" and thus the outcome is not determined by the state preparation under consideration.
Quantum theory has no well-defined notion of ''irreducibly random''; neither are there experiments that would distinguish ordinary randomness from ''irreducibly random''. Assuming the latter therefore is already an interpretation step.
vanhees71 said:
all we can predict is the decay constant/mean lifetime.
But this is not even an observable in Born's sense. It is something computed from the S-matrix elements....

vanhees71 said:
With "irreducibly random" I mean the implication of QT that there's no state of a sufficiently non-trivial quantum system in which all observables take definite values,
only in the minimal interpretation of QT.

In the thermal interpretation all observables take definite values (i.e., the values read from the instrument are deterministically determined by the state of the isolated system containing the experimental setting), though they are so sensitive to this state that randomness occurs. But this is ordinary randomness (like in classical mechanics), not irreducible randomness (which nobody is able to define).
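
A classical toy model of this kind of ordinary randomness (my own illustration, not the thermal interpretation itself): a chaotic map whose outcome is deterministically fixed by the state, yet so sensitive to it that any finite-precision knowledge leaves the outcome effectively random.

```python
def logistic_orbit(x, steps=100):
    """Deterministic chaotic map x -> 4x(1-x); the outcome is fixed by x."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

x0 = 0.123456789
for eps in (0.0, 1e-12):             # tiny uncertainty in the "known" state
    print(logistic_orbit(x0 + eps))  # after 100 steps the orbits are unrelated
```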
 
  • Like
Likes physika, lodbrok, mattt and 1 other person
  • #226
A. Neumaier said:
Quantum theory has no well-defined notion of ''irreducibly random''; neither are there experiments that would distinguish ordinary randomness from ''irreducibly random''. Assuming the latter therefore is already an interpretation step.
Of course it has. According to QT, even when you prepare a system in a pure state, which is the most complete preparation you can achieve, not all observables take determined values. Without additions outside of QT there is no cause for the outcome of a measurement of such observables, and this randomness is an inherent property of Nature and not merely due to some incompleteness of information about the state, as in classical statistics.
A. Neumaier said:
But this is not even an observable in Born's sense. It is something computed from the S-matrix elements....
Observables in Born's sense are the momentum, mass, charges etc. of the particles. The S-matrix elements squared are the transition probability rates for the process under consideration.
A. Neumaier said:
only in the minimal interpretation of QT.

In the thermal interpretation all observables take definite values (i.e., the values read from the instrument are deterministically determined by the state of the isolated system containing the experimental setting), though they are so sensitive to this state that randomness occurs. But this is ordinary randomness (like in classical mechanics), not irreducible randomness (which nobody is able to define).
Then this is more than a new interpretation of QT; it's a new theory. How it is related to the known empirical facts is not clear at all. E.g., can your new theory predict when a given radioactive nucleus decays, or where a single electron in the double-slit experiment will be registered? If so, how is this in accordance with the observations showing interference effects?
 
  • #227
Fra said:
I.e., how can this observer inside the system, having limited resources, collect, process and compute the new "equivalent" object that motivates the statistical averages in _single cases_?
The formalism is omniscient in the sense that it works assuming the infinitely precise state, and hence is able to predict everything.

On the other hand, an observer can know only much less than the formalism can handle. It can know only approximations to the state (whose accuracy can itself be estimated only approximately), in a small region of space (not ##\psi(x)## for all points ##x## in space!), hence must pretend that the reduced description available is close enough to the unknown truth to render the predictions approximately correct. Enough opportunities for ordinary randomness to creep in....
Fra said:
But now the interesting part: do I understand you right here that your construction (which you still seek) is based on "information" that is in principle (in analogy with Newtonian mechanics) available in ALL of the universe, seen as a totally closed system?
Yes. In an objective description you don't need to restrict the description to what is known to an observer, but view the system as if one were omniscient. Hence no concept of information is needed either.
 
  • Like
Likes lodbrok, mattt and Fra
  • #228
A. Neumaier said:
Yes. In an objective description you don't need to restrict the description to what is known to an observer, but view the system as if one were omniscient. Hence no concept of information is needed either.
I hope I understand you correctly. In my opinion, maybe, you can only hold this view if you understand Nature as a "purely physically operating machine".
 
  • #229
Also QT is an entirely objective description of Nature. There's no need to refer to observers any more than in classical physics. Of course, physics is about what's objectively observable by quantitative measurements.
 
  • Like
Likes Lord Jestocost
  • #230
Lord Jestocost said:
I hope I understand you correctly. In my opinion, maybe, you can only hold this view if you understand Nature as a "purely physically operating machine".
It is not Nature but a mathematical model of Nature. Like in Newtonian mechanics, except that the phase space is far bigger.
 
  • #231
vanhees71 said:
Also QT is an entirely objective description of Nature.
It is a description, but not a mathematical model of nature. There are interpretations of QT which provide such a mathematical model, but it is hard to see how those models can be made to work for QFT.
vanhees71 said:
There's no need to refer to observers any more than in classical physics. Of course, physics is about what's objectively observable by quantitative measurements.
The word "observer" can be misleading, but you still need to refer to the process by which the predictions of QT can be verified. And because of their statistical nature, all the subtle issues crop up.
 
  • #232
gentzen said:
It is a description, but not a mathematical model of nature.
Can conscious beings model “Nature” on the basis of a purely physically operating machine?
 
  • #233
Lord Jestocost said:
Can conscious beings model “Nature” on the basis of a purely physically operating machine?
The mathematical model only needs to satisfy the description of QT, not reproduce the behavior of nature herself. And already this can be challenging, as QFT demonstrates.
 
  • Like
Likes vanhees71
  • #234
Lord Jestocost said:
Can conscious beings model “Nature” on the basis of a purely physically operating machine?
I can, and I believe to be conscious.
 
  • Haha
Likes gentzen and vanhees71
  • #235
I personally do not believe that one will ever be able to describe experiential reality - let alone reality - on the basis of deterministic physical models.
To my mind, QT (in its orthodox interpretation) is thus the best objective description - available to us - of experiential reality.
 
  • Like
Likes vanhees71
  • #236
Lord Jestocost said:
I personally do not believe that one will ever be able to describe experiential reality - let alone reality - on the basis of deterministic physical models.
To my mind, QT (in its orthodox interpretation) is thus the best objective description - available to us - of experiential reality.
What about the more restricted domain of physical reality? It excludes subjective experiences.
 
  • Like
Likes physika, vanhees71 and Lord Jestocost
  • #237
A. Neumaier said:
What about the more restricted domain of physical reality? It excludes subjective experiences.
Not according to a physicalist. To a physicalist, "subjective experiences" are part of "physical reality" because everything is.
 
  • Like
Likes vanhees71
  • #238
Lord Jestocost said:
Can conscious beings model “Nature” on the basis of a purely physically operating machine?
Why not?
 
  • #239
A. Neumaier said:
What about the more restricted domain of physical reality? It excludes subjective experiences.
In case you count the radioactive decay of some isotopes as part of physical reality, I doubt that you can model it on the basis of deterministic physical models.
 
  • Like
Likes vanhees71
  • #240
Lord Jestocost said:
In case you count the radioactive decay of some isotopes as part of physical reality, I doubt that you can model it on the basis of deterministic physical models.
Why not? Because QFT is important to describe the radioactive decay of some isotopes in detail? Otherwise, Bohmian mechanics seems good enough to do the trick.
 
  • #241
And Bohmian mechanics tells you precisely when a given radium nucleus, which I put this evening (9:00 pm my time) into a storage ring, will decay? How?
 
  • #242
vanhees71 said:
And Bohmian mechanics tells you precisely when a given radium nucleus, which I put this evening (9:00 pm my time) into a storage ring, will decay? How?
The point is that often there can be a deterministic mathematical model that satisfies the description of QT, not that there would be a model which predicts the behavior of nature herself.
 
  • #243
Then it's not a deterministic model. If it were, I'd be able to know precisely when the nucleus will decay. That's what "deterministic" means!
 
  • Like
Likes dextercioby and Lord Jestocost
  • #244
vanhees71 said:
Then it's not a deterministic model. If it were, I'd be able to know precisely when the nucleus will decay. That's what "deterministic" means!
Of course it can be a deterministic model. If every deterministic model were required to exactly predict the behavior of nature herself, then there would exist no deterministic models at all. But then the notion of deterministic model would be vacuous.
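
A toy illustration of this point (mine, not from the thread): a seeded pseudorandom simulation is a fully deterministic model, and it satisfies a statistical description (exponential decay with a given mean lifetime) without predicting when any particular nucleus in the lab decays.

```python
import numpy as np

rng = np.random.default_rng(2024)     # deterministic: same seed, same "history"
tau = 1602.0                          # invented mean lifetime, in years

decays = rng.exponential(tau, 10**5)  # a deterministic model of the ensemble
print(decays.mean())                  # ~1602: matches the statistical description
# Rerunning with the same seed reproduces every "decay" exactly (determinism),
# yet none of these numbers predicts a real nucleus in the lab.
```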
 
  • Like
Likes mattt
  • #245
vanhees71 said:
Why is a world with "irreducible randomness" inconsistent?

In QT, "complete knowledge" about a system means that you have prepared it in a pure state, e.g., by performing a von Neumann filter measurement of a complete set of compatible observables. This, however, does not imply sharp values for all observables. In general, observables that are not compatible with the determined complete set of compatible observables do not take determined values.
You don't see the contradiction between "complete knowledge" and "uncertainty"? How exactly does uncertainty arise when you have "complete knowledge"?

Mathematically:
Certainty = Probability 1 or Probability 0 = Complete Knowledge
Uncertainty = 0 < Probability < 1
"Complete Knowledge" and "Certainty" mean the same thing. If you disagree, provide a consistent mathematical definition for both.

The second part of your statement illustrates the problem. A pure state does not mean "complete knowledge" of the system; it means complete information about the preparation procedure. That's why non-compatible observables are uncertain.
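
A standard textbook example of that last point: a qubit prepared in the pure state ##|{\uparrow_z}\rangle## carries complete information about the preparation, yet an incompatible observable remains maximally uncertain:
$$P(\sigma_z = +1) = |\langle \uparrow_z | \uparrow_z \rangle|^2 = 1, \qquad P(\sigma_x = +1) = |\langle \uparrow_x | \uparrow_z \rangle|^2 = \tfrac{1}{2}.$$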
 
  • Like
Likes physika and dextercioby
