Consistent Histories Interpretation - History

  • #1
EclogiteFacies
TL;DR Summary
Does the Consistent Histories interpretation nullify our ability to infer the existence of dinosaurs, and does it imply solipsism? Sounds like wooey nonsense to me... But I'm a geologist not a physicist so it's a bit beyond me. Would love some advice.
I have just finished reading the book 'Three Roads to Quantum Gravity' by Lee Smolin.

My question, interestingly, is connected to my geology background. Lee Smolin notes that Fay Dowker concludes that if Consistent Histories is true then we cannot deduce the existence of dinosaurs 100 million years ago.

I must say, as a geologist I can say that I have seen first-hand that the evidence proving their existence 100Ma is flawless.

Therefore, I was wondering what you think about this? I understand a lot of time has passed since Fay Dowker illustrated this point of view (about 20 years), and I was wondering if the interpretation still implies this. I personally feel this must be nonsense.

Furthermore, Fay Dowker seems to think this interpretation implies some form of solipsism. Again, a nonsense point of view.

A lot of these points of view are summarised in the essay by Dowker and Kent (1994). Again, this is quite old, so I'm wondering whether these criticisms have held up at all...

https://arxiv.org/abs/gr-qc/9412067

Would really appreciate hearing what you all think about this.

Thank you
 
  • #2
EclogiteFacies said:
Sounds like wooey nonsense to me... But I'm a geologist not a physicist so it's a bit beyond me. Would love some advice.

I have just finished reading the book 'Three Roads to Quantum Gravity' by Lee Smolin.

My question, interestingly, is connected to my geology background. Lee Smolin notes that Fay Dowker concludes that if Consistent Histories is true then we cannot deduce the existence of dinosaurs 100 million years ago.

We can readily deduce the existence of dinosaurs while adopting a Consistent Histories interpretation.

Dowker rightly points out that there are consistent sets of histories that permit the inference of dinosaurs from fossil records, and there are other consistent sets that do not. But she mistakes incompatibility for contradiction. It is not the case that we can decide whether or not dinosaurs roamed the Earth by selecting the appropriate set. Instead, if we want to learn whether or not dinosaurs roamed the Earth, we must use a set that contains both the propositions "Dinosaurs roamed the Earth" and "Fossils exist", so that we can identify the implication "If fossils exist then dinosaurs roamed the Earth".

There are sets that contain the proposition "Fossils exist" but do not contain the proposition "Dinosaurs roamed the Earth", and so would be useless to a geologist. But the presence of these sets in no way contradicts the conclusions the geologist draws using a set that does contain both propositions.
 
  • #3
Morbert said:
There are sets that contain the proposition "Fossils exist" but do not contain the proposition "Dinosaurs roamed the Earth", and so would be useless to a geologist. But the presence of these sets in no way contradicts the conclusions the geologist draws using a set that does contain both propositions.

This confuses me. As I understand it Consistent Histories is a single universe formulation. Therefore why must we adopt sets of histories?

Do sets not determine the actual history? Are sets instead tools we can use to explain the Earth's history?
So, we aren't denying that dinosaurs existed but instead in this framework saying that in order to deduce them we must consider these sets?

Thank you again for the answer. Would love some more input!
 
  • #4
EclogiteFacies said:
This confuses me. As I understand it Consistent Histories is a single universe formulation. Therefore why must we adopt sets of histories?

Do sets not determine the actual history? Are sets instead tools we can use to explain the Earth's history?
So, we aren't denying that dinosaurs existed but instead in this framework saying that in order to deduce them we must consider these sets?

Why are there multiple sets of histories? The short but unhelpful answer: There is no projective decomposition of the identity on the state space that is a common refinement of all other decompositions.

The more helpful answer: There are actually multiple sets in both classical and quantum mechanics (CM and QM respectively). For example, consider a six-sided die and a history of a single event (a single die roll). Say you are interested in computing the relative frequency of the die landing "on either 2 or 3". Some possible sample spaces of outcomes are

A: { (1 or 2), (3 or 4), (5 or 6) }
B: { (1 or 2 or 3), (4 or 5 or 6) }
C: { (1), (2 or 3), (4 or 5 or 6) }

Notice that only sample space C contains the outcome (2 or 3), and so it is the only suitable sample space of the three. This doesn't mean A and B are "wrong". They are just useless to the particular question you are trying to answer. If you were instead interested in the relative frequency of "4 or 5 or 6", you would use B or C but not A.

The novelty in QM is best seen by contrast with CM, where we can always write down a sample space that is a common refinement of the others. In this case it's

D: { (1), (2), (3), (4), (5), (6) }

We could always default to this sample space to compute probabilities for outcomes. But in QM there is no such common refinement. We are always left with a multiplicity of incompatible sets and we must choose one suitable for our purpose. And like the die roll, this does not mean that only one set is true and the others are false.
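To make the die-roll analogy concrete, here is a small sketch (my own construction, not from the thread) showing that the question "2 or 3" is only expressible in sample space C and in the common refinement D, while A and B simply cannot address it:

```python
# Coarse-grained sample spaces for one roll of a fair six-sided die.
# A question is answerable in a sample space only if the event is a
# union of that space's cells.
from fractions import Fraction

A = [{1, 2}, {3, 4}, {5, 6}]
B = [{1, 2, 3}, {4, 5, 6}]
C = [{1}, {2, 3}, {4, 5, 6}]
D = [{1}, {2}, {3}, {4}, {5}, {6}]  # common refinement of A, B and C

def prob(event, space):
    """Probability of `event`, provided it is a union of cells of `space`."""
    cells = [cell for cell in space if cell <= event]
    covered = set().union(*cells) if cells else set()
    if covered != event:
        raise ValueError("event not expressible in this sample space")
    return Fraction(len(event), 6)

print(prob({2, 3}, C))  # 1/3
print(prob({2, 3}, D))  # 1/3 (the refinement can answer everything)
# prob({2, 3}, A) would raise ValueError: A is useless for this question,
# though not "wrong".
```

In CM the refinement D always exists as a fallback; the quantum novelty described above is that no analogue of D exists across incompatible sets.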
 
  • #5
Morbert said:
We could always default to this sample space to compute probabilities for outcomes. But in QM there is no such common refinement. We are always left with a multiplicity of incompatible sets and we must choose one suitable for our purpose. And like the die roll, this does not mean that only one set is true and the others are false.

Can this really be applied to the macroscopic realm though?
Fossils wouldn't exist without previously existing animals. Therefore you can't have a history with just fossils and no dinosaurs. Dinosaurs are a prerequisite for fossils.

Also, any opinions on Dowker's reference to solipsism? I don't think this interpretation (or anything, really) implies that at all. But it's interesting regardless.

Thanks again for replying. Really interesting stuff.
 
  • #6
EclogiteFacies said:
Can this really be applied to the macroscopic realm though?
Fossils wouldn't exist without previously existing animals. Therefore you can't have a history with just fossils and no dinosaurs. Dinosaurs are a prerequisite for fossils.

Also, any opinions on Dowker's reference to solipsism? I don't think this interpretation (or anything, really) implies that at all. But it's interesting regardless.

Thanks again for replying. Really interesting stuff.

Ah, I think I see where the confusion lies. Consider the three propositions

P: "The fossil record exists"
Q: "Dinosaurs roamed the earth"
¬Q: "Dinosaurs did not roam the earth"

We might use a set ##\mathcal{F}_\mu## that reveals the logical implication ##P\rightarrow Q##: "If the fossil record exists then dinosaurs roamed the earth". We can then be confident that no alternative set will contradict this implication, i.e. there is no set where ##P\rightarrow \neg Q## holds. The incompatible sets will instead simply contain neither ##Q## nor ##\neg Q##, and so cannot be applied to decide whether or not fossils imply dinosaurs.
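For readers who want to see the formalism behind this kind of reasoning, here is a minimal numerical sketch (a toy example of my own, not from the thread): a single qubit with trivial dynamics and a consistent set of two-time histories built from z-basis projectors. Within this set the analogue of ##P\rightarrow Q## holds as a conditional probability of 1:

```python
# Decoherence functional D(a,b) = Tr(C_a rho C_b^dagger) for two-time
# histories of a qubit; consistency = vanishing off-diagonal terms.
import numpy as np

P_up = np.array([[1, 0], [0, 0]], dtype=complex)  # projector onto |0>
P_dn = np.array([[0, 0], [0, 1]], dtype=complex)  # projector onto |1>
U = np.eye(2, dtype=complex)                      # trivial time evolution
rho = 0.5 * np.eye(2, dtype=complex)              # maximally mixed initial state

def chain(p1, p2):
    """Chain operator C = P_{t2} U P_{t1} U for a two-time history."""
    return p2 @ U @ p1 @ U

histories = [(p1, p2) for p1 in (P_up, P_dn) for p2 in (P_up, P_dn)]
C = [chain(p1, p2) for p1, p2 in histories]

D = np.array([[np.trace(ca @ rho @ cb.conj().T) for cb in C] for ca in C])

assert np.allclose(D, np.diag(np.diag(D)))  # off-diagonals vanish: consistent set
p = np.real(np.diag(D))                     # probabilities of the four histories

# "up at t1 implies up at t2": the conditional probability is 1
print(p[0] / (p[0] + p[1]))  # 1.0
```

An incompatible set (e.g. x-basis projectors at the second time) would simply contain neither the "up at t2" proposition nor its negation, mirroring the point about sets that contain neither ##Q## nor ##\neg Q##.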
 
  • #7
Morbert said:
Ah, I think I see where the confusion lies. Consider the three propositions

P: "The fossil record exists"
Q: "Dinosaurs roamed the earth"
¬Q: "Dinosaurs did not roam the earth"

We might use a set ##\mathcal{F}_\mu## that reveals the logical implication ##P\rightarrow Q##: "If the fossil record exists then dinosaurs roamed the earth". We can then be confident that no alternative set will contradict this implication, i.e. there is no set where ##P\rightarrow \neg Q## holds. The incompatible sets will instead simply contain neither ##Q## nor ##\neg Q##, and so cannot be applied to decide whether or not fossils imply dinosaurs.

I think I finally understand! There are sets that are incompatible with the universe as it is, but these would be inconsistent. Therefore sets must provide consistent histories. As you said, ##P\rightarrow Q##.
 
  • #8
Let me use this analogy that hopefully won't be too strained.

Say you come home one day and find your sofa torn up and your dog looking guilty. To find out what happened, you would obviously open your quantum mechanics textbook and construct a set of histories with the relevant propositions

"Your dog attacks your sofa"
"Your dog looks guilty"
"Your sofa is torn"

You then conclude that the probability of your dog tearing up your sofa is close to 1, based on your observation of your dog and your sofa.

Then Dowker comes in and presents you with an alternative set containing the propositions

"Your phone rings"
"Your dog looks guilty"
"Your sofa is torn"

This set contains no reference to your dog attacking your sofa, and so cannot be used to conclude your dog attacked your sofa. But notice that it also does not include the proposition "Your dog did not attack your sofa". It simply makes no reference to your dog. It doesn't mean your dog didn't exist. Your dog attacking your sofa is just not a proposition in the logic.
 
  • #9
Morbert said:
Let me use this analogy that hopefully won't be too strained.

Say you come home one day and find your sofa torn up and your dog looking guilty. To find out what happened, you would obviously open your quantum mechanics textbook and construct a set of histories with the relevant propositions

"Your dog attacks your sofa"
"Your dog looks guilty"
"Your sofa is torn"

You then conclude that the probability of your dog tearing up your sofa is close to 1, based on your observation of your dog and your sofa.

Then Dowker comes in and presents you with an alternative set containing the propositions

"Your phone rings"
"Your dog looks guilty"
"Your sofa is torn"

This set contains no reference to your dog attacking your sofa, and so cannot be used to conclude your dog attacked your sofa. But notice that it also does not include the proposition "Your dog did not attack your sofa". It simply makes no reference to your dog. It doesn't mean your dog didn't exist. Your dog attacking your sofa is just not a proposition in the logic.

Thank you again
I think I am grasping it.
Sets are useful to deduce certain histories.
For example, the second set is useful to deduce that you had a phone call, not that the dog ripped up the sofa.

Thanks for being patient
We don't do physics in geology :p
 
  • #10
No problem.

For posterity, I should mention that in later papers Kent does go on to argue that different sets let you draw contradictory conclusions, but this is based on faulty reasoning on his part that I won't go into now.
 
  • #12
Morbert said:
No problem.

For posterity, I should mention that in later papers Kent does go on to argue that different sets let you draw contradictory conclusions, but this is based on faulty reasoning on his part that I won't go into now.

Lol so,
To conclude our discussion.
Supporters of CH do not deny the existence of dinosaurs millions of years ago.
I instead misunderstood the interpretation.

Thanks so much :)
 
  • #13
It seems to me that CH is like an ontological interpretation in which the ontology is left unspecified. To explain what I mean, let me compare it with Bohmian mechanics (BM). In one version of BM, the ontology is particle trajectories. In another version of BM, the ontology is a field configuration as a function of time. So someone confused by the multiplicity of Bohmian ontologies might say: OK, let me not specify one ontology; instead let me develop a frame of thinking which allows me to think about any possible ontology, be it particle position ontology, field configuration ontology, or anything else which makes sense. Then, by being maximally liberal about what "makes sense" can mean, one naturally arrives at the CH interpretation. @Morbert, does it make sense to you?
 
  • #14
Consistent Histories or not, the fossils might have been synthesized by some well-focused cosmic rays - yes, that's improbable - but then, the probability of the whole of evolution is itself "very very very small". The quote is from the attached James Hartle article, page 5:

"... the probability of the accidents of several billion years of biological evolution... whatever its value is, it is very, very, very small."
 

Attachments

  • ObserverStrikesBack.pdf
    625.5 KB
  • #15
Demystifier said:
It seems to me that CH is like an ontological interpretation in which the ontology is left unspecified. To explain what I mean, let me compare it with Bohmian mechanics (BM). In one version of BM, the ontology is particle trajectories. In another version of BM, the ontology is a field configuration as a function of time. So someone confused by the multiplicity of Bohmian ontologies might say: OK, let me not specify one ontology; instead let me develop a frame of thinking which allows me to think about any possible ontology, be it particle position ontology, field configuration ontology, or anything else which makes sense. Then, by being maximally liberal about what "makes sense" can mean, one naturally arrives at the CH interpretation. @Morbert, does it make sense to you?

There are actually a few different positions re/ ontology among CH proponents. On one end (probably the more controversial end) we have Robert Griffiths, who maintains that CH gives us an ontology directly represented by the formalism[1]. He maintains, for example, that measurements reveal pre-existing properties *and* QM is local, even going so far as to explicitly defend locality[2].

On the opposite end there is Roland Omnes. He believes CH does not assert facts at all. It instead provides a consistent language we can use to make predictions about facts. Quoting him:

"Physical reality is unique. It evolves in time in such a way that, when actual facts arise from identical antecedents, they occur randomly and their probabilities are the ones given by the theory."[3]

"It should first be stressed that the notion of fact does not belong properly to the domain of theory. A theory is a description. It belongs to the highest category of language, and as the saying goes, the word "dog" does not bite. What one must ask of a theory is that it provide a notion describing what we see as fact and that nothing in this theoretical notion should conflict with what we observe as facts."

Gell-Mann and Hartle never really explicitly discuss ontology in any formal capacity though based on what I've read I think they lean more towards Griffiths than Omnes. Especially re/ their extended probability ensemble version of CH, where they do actually construct a common refinement of all history sets[4]. And Hartle does discuss how language intersects with QM and what it means to say things "happen"[5].

[1] https://arxiv.org/abs/1105.3932
[2] https://journals.aps.org/pra/abstract/10.1103/PhysRevA.101.022117 arxiv: https://arxiv.org/abs/1901.07050
[3] https://www.amazon.com/dp/0691036691/?tag=pfamazon01-20
[4] https://journals.aps.org/pra/abstract/10.1103/PhysRevA.85.062120 arxiv: https://arxiv.org/abs/1106.0767
[5] https://iopscience.iop.org/article/10.1088/1751-8113/40/12/S13 arxiv: https://arxiv.org/abs/quant-ph/0610131
 
  • #16
Morbert said:
On the opposite end there is Roland Omnes. He believes CH does not assert facts at all. It instead provides a consistent language we can use to make predictions about facts.
Interesting! Does it mean that Omnes thinks that a standard formulation of QM, such as the minimal statistical interpretation à la Ballentine, does not provide a consistent language? If so, what exactly is inconsistent in standard QM, according to Omnes? What does Omnes think about the (non)locality of Bell correlations?
 
  • #17
Demystifier said:
Interesting! Does it mean that Omnes thinks that a standard formulation of QM, such as the minimal statistical interpretation à la Ballentine, does not provide a consistent language?
AFAIK he doesn't believe the statistical interpretation is inconsistent. Instead it's just a bit too narrow for his purposes. One of his primary motivations in developing CH is understanding the quantum-classical relationship, and the statistical interpretation might not be suitable for that. From his book "Understanding Quantum Mechanics":

"Some people believe [...] that the probabilistic character of quantum physics is best expressed if one considers conceptually infinite ensembles of identical systems, as in statistical mechanics. This approach, however, must be excluded when one intends to include classical physics within the quantum framework. In that case, considering an infinite collection of copies of the solar system would certainly then be odd."

What does Omnes think about the (non)locality of Bell correlations?
He presents CH as contrary to separable realist interpretations. He would say distantly entangled particles do not need to communicate, but if they are e.g. prepared in a state ##|\Phi^+\rangle## then a von Neumann property like ##|\Phi^+\rangle\langle\Phi^+|## is a property of the system of particles and cannot be expressed as separate properties of the individual particles.
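The claim that ##|\Phi^+\rangle\langle\Phi^+|## is a property of the pair and not a conjunction of single-particle properties can be checked numerically. A small sketch of my own (not from Omnes): the operator-Schmidt rank of this projector across the A|B cut is 4, whereas any product of single-particle projectors realigns to rank 1.

```python
# Operator-Schmidt rank of |Phi+><Phi+| across the bipartition.
# A product property P_A (x) P_B would have rank 1 under this realignment.
import numpy as np

phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)       # |Phi+> = (|00> + |11>)/sqrt(2)
P = np.outer(phi, phi.conj())          # the von Neumann property |Phi+><Phi+|

# Realign indices (iA iB, jA jB) -> (iA jA, iB jB); the matrix rank of the
# result is the operator-Schmidt rank across the A|B cut.
R = P.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
print(np.linalg.matrix_rank(R))        # 4: not expressible as P_A (x) P_B

# Sanity check: a genuine product projector realigns to rank 1.
PA = np.diag([1.0, 0.0]).astype(complex)
prod = np.kron(PA, PA)
Rp = prod.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
print(np.linalg.matrix_rank(Rp))       # 1
```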
 
  • #18
Demystifier said:
It seems to me that CH is like an ontological interpretation in which the ontology is left unspecified. To explain what I mean, let me compare it with Bohmian mechanics (BM). In one version of BM, the ontology is particle trajectories. In another version of BM, the ontology is a field configuration as a function of time. So someone confused by the multiplicity of Bohmian ontologies might say: OK, let me not specify one ontology; instead let me develop a frame of thinking which allows me to think about any possible ontology, be it particle position ontology, field configuration ontology, or anything else which makes sense. Then, by being maximally liberal about what "makes sense" can mean, one naturally arrives at the CH interpretation. @Morbert, does it make sense to you?
I used to think of the relation between CH and BM in those terms. That CH didn't fix an ontology didn't bother me. After all, CH stresses the importance of such an ontology (at least compared to MWI, i.e. as a mathematical structure in addition to the Hamiltonian and the quantum state), and gives consistency conditions for it. Somehow CH felt complementary to BM, being able to work with density matrices, being applicable to degrees of freedom living in finite-dimensional Hilbert spaces, and being explicitly stochastic.

Maybe most important, both CH and BM felt explicitly understandable to me, "if I invested the time to work through" the material trying to explain them. After I learned to appreciate how BM can treat degrees of freedom living in finite-dimensional Hilbert spaces, and how it can treat density matrices, my focus shifted from looking for similarities between CH and BM to looking for differences: BM can violate quantum mechanical predictions by violating the equilibrium hypothesis, thereby providing some approximation of QM for small closed systems (for short finite time durations). CH on the other hand is a sort of logic, and therefore should better exactly satisfy its consistency conditions. (I am still focused on small closed systems.) Which made me look closer at those conditions, and reduced my confidence that "one naturally arrives at CH".

Then I wanted to bound the dimension of the Hilbert space of CH, because I thought that might be relevant for Fra, if he wants to "evaluate" CH. While searching whether that had been done before, I found a paper with a disappointingly weak bound, but an insightful remark following that bound:
Fay Dowker and Adrian Kent said:
In other words, if the Hilbert space of the universe is finite-dimensional there is a strict bound on the number of probabilistic physical events. Once this number has occurred, the evolution of the universe continues completely deterministically. This is mathematically an unsurprising feature of the formalism but, as far as we are aware, physically quite new: no previous interpretation of quantum theory has suggested that quantum stochasticity is exhaustible in this way.
(A paper with the tight bound was also referenced, with the slightly misleading words: "A similar result has been obtained independently by Diósi [12].")

I like Mermin's view on consistent histories:
N. David Mermin said:
I view the consistent histories interpretation as a formalization and extension of Bohr’s doctrine of complementarity. The consistent historians liberate complementarity from the context of mutually exclusive experimental arrangements, by stating the restrictions in terms of the quantum mechanical formalism itself, without any reference to measurement. This enables one within a given framework to contemplate what is whether or not anything has actually been measured — indeed measurements in the consistent histories interpretation (as in the IIQM and the Everett interpretation) are simply a special case in which some of the subsystems function as apparatuses.

The price one pays for this liberation is that the paradoxical quality of complementarity is stripped of the protective covering furnished by Bohr’s talk of mutually exclusive experimental arrangements, and laid bare as a vision of a single reality about which one can reason in a variety of mutually exclusive ways, provided one takes care not to mix them up. Reality is, as it were, replaced by a set of complementary representations, each including a subset of the correlations and their accompanying correlata. In the consistent histories interpretation it is rather as if the representations have physical reality but the representata do not.
and again Mermin, many years later:
I realize that others have used their experience with quantum computation to make similar arguments on behalf of many worlds (David Deutsch) and consistent histories (Bob Griffiths). I would challenge them to make their preferred points of view the basis for a quick practical pedagogical approach to quantum computation for computer scientists who know no physics, as I have done with Copenhagen in my quantum-computation book. The approach to quantum mechanics via consistent histories in Griffiths’s book, while something of a tour de force, does not strike me as either quick or practical.
It is also the conclusion of consistent historians (see my answer to Question 16), but their apparent conservatism hides their ontologically radical insistence that all the explanations give correct accounts of the tests to which they apply, subject to the proviso that you cannot combine ingredients of one explanation with those of any other, since their validity is in general relative to different “frameworks.”
A simple nontrivial example of a history containing many different times that exactly satisfied the consistency conditions might persuade me to take another look at consistent histories (see my answer to Question 16).
Rather than concluding from this that the project is dead in the water, the consistent historians elevate it to a fundamental ontological principle. Reality is multifaceted. There can be this reality or there can be that reality, and provided you refrain from combining actualities from mutually inconsistent realities, all of the incompatible realities have an equally valid claim to actuality. This tangle of mutually incompatible candidates for actuality (associated with different “frameworks”) constitutes the no-collapse side of consistent histories. The collapse side lies in the fact that each of these peacefully coexisting mutually exclusive actualities is associated with what from the orthodox point of view (which consistent historians reject) would be a sequence of measurements and Born-rule collapses.

This multiplicity of incompatible realities reminds me of special relativity, where there is time in this frame of reference and time in that frame of reference, and provided only that you do not combine temporal statements valid in two different frames of reference, one set of temporal statements is as valid a description of reality as the other.

But I am disconcerted by the reluctance of some consistent historians to acknowledge the utterly radical nature of what they are proposing. The relativity of time was a pretty big pill to swallow, but the relativity of reality itself is to the relativity of time as an elephant is to a gnat. (Murray Gell Mann, in his talk of “demon worlds,” comes close to acknowledging this, yet he dismisses much less extravagant examples of quantum mysteries as so much “flapdoodle.”)
 
  • #19
Dowker and Kent said:
In other words, if the Hilbert space of the universe is finite-dimensional there is a strict bound on the number of probabilistic physical events. Once this number has occurred, the evolution of the universe continues completely deterministically. This is mathematically an unsurprising feature of the formalism but, as far as we are aware, physically quite new: no previous interpretation of quantum theory has suggested that quantum stochasticity is exhaustible in this way

@gentzen They presumably infer this from this lemma
[Attached image "lemma12.png": the lemma from Dowker and Kent's paper bounding the number of histories with nonzero probability]

This limit is a bound on the fine-graining of ##\mathcal{S}##. But there is also a complementary set ##\mathcal{S}'## that can return probabilities for events ##\mathcal{S}## can't address. I.e., this is less a bound on the probabilistic events that can occur in the universe, and more a bound on the universe's ability to have some observable ##O = \sum_i^k \lambda_i \Pi_i## capable of recording a history.
 
  • #20
gentzen said:
This multiplicity of incompatible realities reminds me of special relativity, where there is time in this frame of reference and time in that frame of reference, and provided only that you do not combine temporal statements valid in two different frames of reference, one set of temporal statements is as valid a description of reality as the other.

But I am disconcerted by the reluctance of some consistent historians to acknowledge the utterly radical nature of what they are proposing. The relativity of time was a pretty big pill to swallow, but the relativity of reality itself is to the relativity of time as an elephant is to a gnat. (Murray Gell Mann, in his talk of “demon worlds,” comes close to acknowledging this, yet he dismisses much less extravagant examples of quantum mysteries as so much “flapdoodle.”)
I am sympathetic to this writing of Mermin's. Exactly what this "relativity" of reality means, and how it should be handled from the perspective of theory building, is what I see as one core challenge in general, talking about CH or not.

/Fredrik
 
  • #21
Dowker and Kent said:
In other words, if the Hilbert space of the universe is finite-dimensional there is a strict bound on the number of probabilistic physical events. Once this number has occurred, the evolution of the universe continues completely deterministically. This is mathematically an unsurprising feature of the formalism but, as far as we are aware, physically quite new: no previous interpretation of quantum theory has suggested that quantum stochasticity is exhaustible in this way.
Morbert said:
@gentzen They presumably infer this from this lemma
[Attached image: the lemma from Dowker and Kent's paper]
This limit is a bound on the fine-graining of ##\mathcal{S}##. But there is also a complementary set ##\mathcal{S}'## that can return probabilities for events ##\mathcal{S}## can't address. I.e., this is less a bound on the probabilistic events that can occur in the universe, and more a bound on the universe's ability to have some observable ##O = \sum_i^k \lambda_i \Pi_i## capable of recording a history.

I now tried to understand this issue better: both the insightful remark by Dowker and Kent, and how I can think about the sharp bound itself. (I am still focused on small closed systems.) The remark goes against one of Griffiths' explicit goals (at least it does so for small closed systems):
Griffiths said:
Second, quantum time development is treated as an inherently stochastic process under all circumstances, not just when measurements take place. The time-dependent Schrödinger equation provides probabilities, not a deterministic time development of the world.
CH does avoid wavefunction collapse (and its apparent nonlocality), but the remark raises the suspicion that it might not succeed in treating quantum time development as an inherently stochastic process. CH's randomness no longer feels very different from the deterministic pseudo-randomness of Bohmian mechanics. It is different in that hidden variables are completely absent; CH's "objective" variables are as plainly visible as the measurement results in the Copenhagen interpretation.

For thinking about the sharp bound itself, the time-symmetric formulation of CH with two Hermitian positive semidefinite matrices ##\rho_i## and ##\rho_f## satisfying ##\operatorname{Tr}(\rho_i \rho_f)=1## seems well suited to me. The decoherence functional then reads ##D(\alpha,\beta)=\operatorname{Tr}(C_\alpha\rho_i C_\beta^\dagger\rho_f)## and the bound on the number ##m## of histories ##\alpha## with non-zero probability becomes ##\operatorname{rank}(\rho_i)\operatorname{rank}(\rho_f)\geq m##. Interpreting ##\rho_i## as corresponding to pre-selection ("preparation") and ##\rho_f## as post-selection ("measurement") gives at least some intuition for why there is that unexpected product in the bound.
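The bound can be checked in a toy case. A sketch of my own (assuming only the formulas above): a single qubit with pure pre-selection ##\rho_i = |+\rangle\langle +|## and trivial post-selection ##\rho_f = I## gives a consistent z-basis set whose two nonzero-probability histories saturate ##\operatorname{rank}(\rho_i)\operatorname{rank}(\rho_f) \geq m##:

```python
# Time-symmetric decoherence functional D(a,b) = Tr(C_a rho_i C_b^dagger rho_f)
# and the bound rank(rho_i) * rank(rho_f) >= m (m = histories with p > 0).
import numpy as np

P_up = np.diag([1.0, 0.0]).astype(complex)
P_dn = np.diag([0.0, 1.0]).astype(complex)
plus = np.array([[1], [1]], dtype=complex) / np.sqrt(2)

rho_i = plus @ plus.conj().T        # pure pre-selection, rank 1
rho_f = np.eye(2, dtype=complex)    # trivial post-selection, rank 2
assert np.isclose(np.trace(rho_i @ rho_f), 1)  # normalization Tr(rho_i rho_f) = 1

U = np.eye(2, dtype=complex)        # trivial dynamics
chains = [p2 @ U @ p1 @ U for p1 in (P_up, P_dn) for p2 in (P_up, P_dn)]

D = np.array([[np.trace(ca @ rho_i @ cb.conj().T @ rho_f)
               for cb in chains] for ca in chains])

assert np.allclose(D, np.diag(np.diag(D)))    # the set is consistent
m = int(np.sum(np.real(np.diag(D)) > 1e-12))  # histories with nonzero probability
bound = np.linalg.matrix_rank(rho_i) * np.linalg.matrix_rank(rho_f)
print(m, bound)  # 2 2 -> the bound is saturated here
```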

I had planned to look into different formulations of CH for some time, for a completely different reason: vanhees71 believes that the minimal statistical interpretation can be applied to the time evolution of a single quantum system over an indeterminate time. I would like to understand whether this is indeed possible. Trying to analyse this scenario with the standard formulation of CH doesn't work well, but I knew that there were different formulations of CH, some of which seemed to incorporate features relevant for that scenario. The bound itself rather reduces my confidence that CH will convince me that this is possible. But fixing an infinite Hilbert space and only increasing ##\operatorname{rank}(\rho_f)## as time goes by at least feels somewhat well behaved and analyzable to me (at least better than trying to work without any bounds, or trying to modify the Hilbert space on the fly). And ##\operatorname{rank}(\rho_f)## also seems to allow modelling the limited information processing capability of an agent (which is probably important to Fra), as opposed to the impression given by the standard formulation that there would be some agent with access to all the information hidden somewhere in the Hilbert space.

There is also a third topic I thought about in connection with that sharp bound. What I don't like is that even histories with a negligible but positive probability will contribute to enforcing a ridiculously huge Hilbert space. So CH risks running into similar "unrealistic modeling" issues as described here for MWI, but for other reasons. (I am still focused on small closed systems.) One reason is that dephasing is a randomization effect, and the modeling of randomness by CH is slightly unrealistic (as hinted by the insightful remark which triggered my thoughts). Another reason is that Griffiths' hope that backward error analysis might resolve all issues of approximate consistency is slightly too optimistic. Some "manual" tweaking, like setting negligible values of the decoherence functional explicitly to zero (including values on the diagonal, i.e. probabilities), seems beneficial. Maybe working with ##\rho_i## and ##\rho_f## could help with this, by using their singular values as some guidance for how to set reasonable thresholds.
 
  • #22
N. David Mermin said:
This multiplicity of incompatible realities reminds me of special relativity, where there is time in this frame of reference and time in that frame of reference, and provided only that you do not combine temporal statements valid in two different frames of reference, one set of temporal statements is as valid a description of reality as the other.

But I am disconcerted by the reluctance of some consistent historians to acknowledge the utterly radical nature of what they are proposing. The relativity of time was a pretty big pill to swallow, but the relativity of reality itself is to the relativity of time as an elephant is to a gnat. (Murray Gell Mann, in his talk of “demon worlds,” comes close to acknowledging this, yet he dismisses much less extravagant examples of quantum mysteries as so much “flapdoodle.”)
Fra said:
I am sympathetic to this writing of Mermin's. It's exactly what this "relativity" of reality means, and it should be handled from the perspective of theory building, which I see as one core challenge in general, talking about CH or not.
I think both Griffiths and Omnès fully acknowledge the utterly radical nature of what they are proposing. Whether Gell-Mann does is not important; his role in that game was of a different nature. Whether Hartle, Halliwell, or other proponents (like Hohenberg or Friedberg) did certainly is important, because they are the ones who keep CH alive.

For me, this special role of the framework is the most important message of CH. My impression from what I read from Hartle, Hohenberg, and Friedberg is that they treat the framework in a way similar to the initial condition. The initial condition is not always known exactly, yet you need it for making definite predictions. But you don't need to know it exactly: bounds or a probability distribution are already good enough to get some testable predictions.

But of course, this "initial value" perspective on the framework is a much smaller pill to swallow than interpreting it as "the relativity of reality itself". Edit: A "boundary value" perspective on the framework might explain even better what is going on. This has the added advantage of recovering one of Heisenberg's perspectives on the Copenhagen interpretation.
 
Last edited:
  • #23
I enjoy your ponderings about this, you touch upon what I also see as some key things.
gentzen said:
What I don't like is that even histories with a negligible but positive probability will contribute to enforce a ridiculously huge Hilbert space. So CH risks to run into similar "unrealistic modeling" issues as described here for MWI, but for other reasons.

gentzen said:
But of course, this "initial value" perspective on the framework is a much smaller pill to swallow than interpreting it as "the relativity of reality itself".
I think you are right that the initial value problem, and thus the fine tuning problem, is what to expect from the current paradigm. But once you see how ridiculous this gets, if you actually expect to predict something based on incomplete information [which is the natural situation we are in], I have after careful consideration found that the former pill is much larger than it seems, and will not fit in the agent's mouth. The latter pill is smaller, but in motion, and therefore less well defined, but it can be chewed.

"Difficult to see. Always in motion is the future.”
– Yoda

/Fredrik
 
  • #24
Dowker and Kent said:
In other words, if the Hilbert space of the universe is finite-dimensional there is a strict bound on the number of probabilistic physical events. Once this number has occurred, the evolution of the universe continues completely deterministically. This is mathematically an unsurprising feature of the formalism but, as far as we are aware, physically quite new: no previous interpretation of quantum theory has suggested that quantum stochasticity is exhaustible in this way.
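The quoted feature can be seen from a short counting argument (my paraphrase, not Dowker and Kent's exact wording): for a pure initial state ##|\psi\rangle## and history chain operators ##C_\alpha##, consistency requires
$$D(\alpha,\beta)=\langle\psi|C_\beta^\dagger C_\alpha|\psi\rangle=p(\alpha)\,\delta_{\alpha\beta},$$
so the vectors ##C_\alpha|\psi\rangle## with ##p(\alpha)>0## are nonzero and mutually orthogonal. A ##d##-dimensional Hilbert space therefore admits at most ##d## positive-probability histories in any one consistent family; once a family has branched that many times, further extension can only proceed deterministically.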
gentzen said:
CH does avoid wavefunction collapse (and its apparent nonlocality), but the remark raises the suspicion that it might not succeed to treat quantum time development as an inherently stochastic process. CH's randomness no longer feels very different from the deterministic-pseudo-randomness of Bohmian mechanics.
There is still a subtlety that I think Dowker and Kent miss, and that needs to be considered. Hopefully it can be teased out with this simple example. Consider this experiment: a particle is prepared in the state ##|z^+\rangle## and its spin-x is measured in a lab with measuring device ##M## and environment ##\epsilon##. The measuring device is prepared in a ready state, and has two pointer states ##M^+## and ##M^-##. A sensible framework would have a support like the following$$\left[z^+,M_\mathbf{ready},\epsilon_0\right]
\otimes\left\{\begin{array}{ll}
\left[M^+\right]\\ \left[M^-\right]
\end{array}\right.$$ Here, we see a stochastic time development: one of the two branches occurs, with 50/50 probability. Alternatively, we could write down a framework with no stochastic time development
$$\left[z^+,M_\mathbf{ready},\epsilon_0\right]
\otimes\left[U^+\right]$$where ##|U^+\rangle=\frac{1}{\sqrt{2}}(|M^+,\epsilon^+\rangle+|M^-,\epsilon^-\rangle)##. This framework has one history, which occurs with certainty. Do we, as Dowker and Kent suggest, therefore have competing theories of the system? One stochastic and one deterministic? Which one is right?

The resolution: Neither framework concerns "the system", unqualified. They instead concern complementary properties of the system. The first concerns the usual pointer states the experimenter is interested in. The second concerns the property ##|U^+\rangle\langle U^+|##, inaccessible to any human. So if a physicist uses the 2nd framework, they can only conclude that this property, and not the entire system, evolves deterministically. If they want to make statements about ##M^+## and ##M^-##, they have to use the other framework with its inescapable stochastic branches.

Bringing this back to the universe as a whole and Dowker and Kent's original statement: What their lemma shows is that if we have some maximally fine-grained framework and a finite Hilbert space, such that, after some point in the future, the framework can only be extended with properties that evolve unitarily, then the framework cannot make predictions about most properties of the universe after this point. However, it does not mean the universe as a whole evolves "completely deterministically". The stochastic properties of the universe after this point in time can be addressed with an alternative, complementary framework.
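The two frameworks in the spin example above can be checked numerically. Below is a minimal sketch of my own construction: a spin qubit plus a pointer qubit, with the environment omitted, since single-time pointer histories are already consistent in this reduced model; all names are mine.

```python
import numpy as np

def proj(v):
    """Projector |v><v| onto a normalized vector v."""
    return np.outer(v, v.conj())

# Spin states
zp = np.array([1, 0], dtype=complex)                # |z+>
xp = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |x+>
xm = np.array([1, -1], dtype=complex) / np.sqrt(2)  # |x->

# Pointer qubit: |0> = ready (also serving as the M+ record), |1> = M-
e0 = np.array([1, 0], dtype=complex)
e1 = np.array([0, 1], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

# Pre-measurement unitary: flip the pointer iff the spin is |x->
U = np.kron(proj(xp), np.eye(2)) + np.kron(proj(xm), X)

psi0 = np.kron(zp, e0)                  # |z+, ready>
rho_t = U @ proj(psi0) @ U.conj().T     # state after the measurement interaction

# Framework 1: pointer projectors [M+], [M-] at the later time
P = [np.kron(np.eye(2), proj(e0)), np.kron(np.eye(2), proj(e1))]
# Decoherence functional D(a,b) = Tr[P_a U rho0 U^dag P_b]
D = np.array([[np.trace(Pa @ rho_t @ Pb) for Pb in P] for Pa in P])
# D is diagonal (a consistent family) with probabilities 0.5 and 0.5

# Framework 2: the single projector [U+] onto U|z+, ready>
PU = proj(U @ psi0)
p_U = np.trace(PU @ rho_t).real         # one history, occurring with certainty
```

The numbers bear out the resolution: the pointer framework has two stochastic branches at 50/50, while the ##\left[U^+\right]## framework has a single certain history, and nothing forces us to call either one "the" time development of the system.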

gentzen said:
For me, this special role of the framework is the most important message of CH. My impression from what I read from Hartle, Hohenberg, and Friedberg is that they treat the framework in a way similar to the initial condition. Just like the initial condition is not always known exactly, you do need it for making definite predictions.
There is one important distinction: Hartle defines the cosmological theory input as the initial state and the dynamics (see II.A here), and so changing initial conditions would be changing the cosmological theory under evaluation, but changing the framework wouldn't be. Dowker and Kent, on the other hand, present different frameworks as different theories, which I don't think is correct.
 
Last edited:
  • #25
Morbert said:
Bringing this back to the universe as a whole and Dowker and Kent's original statement. What their lemma shows is that if we have some maximally fine-grained framework and a finite Hilbert space, such that, after some point in the future, the framework can only be extended with properties that evolve unitarily, then the framework cannot make predictions about most properties of the universe after this point. However, it does not mean the universe as a whole evolves "completely deterministically". The stochastic properties of the universe after this point in time can be addressed with an alternative, complementary framework.
gentzen said:
But of course, this "initial value" perspective on the framework is a much smaller pill to swallow than interpreting it as "the relativity of reality itself". Edit: A "boundary value" perspective on the framework might explain even better what is going on. This has the added advantage of recovering one of Heisenberg's perspectives on the Copenhagen interpretation.
Both the "initial value" and the "boundary value" perspectives slightly miss the fact that any framework containing the projectors relevant to the histories we want to talk about is "good enough". But Dowker and Kent's remark is independent of this. Whatever their "goal", it hints at just how closely related CH is to the Copenhagen interpretation, and that it has a different "feeling" than realist interpretations like Bohmian mechanics.

I once wrote that Omnès book (from 1999) felt like "let me calculate and explain" to me. A better characterization of CH more generally might be "let me calculate and talk". From a "let me calculate" perspective, there is little reason to include more projectors in the history or framework than we are actually interested in. The sharp bound on the Hilbert space dimension then tells us how difficult that calculation is going to be (at least if we use CH). We see that the history in time significantly adds to that calculation cost.

I should be honest with myself: my personal focus is on small closed systems. Infinite dimensional Hilbert spaces without appropriate structure feel misguided to me. For me, any actual infinities should arise via appropriate limits from finite descriptions. In that context, the sharp bound tells me that I should not just successively increase the spatial domain under consideration, but also the time interval. And while I am at it, I should probably similarly successively increase the allowed energy range under consideration.

Given those elaborations, it still makes sense to me to look at frameworks as similar to some sort of boundary conditions. And depending on where the atoms or quantum system under investigation is located (for example in a gas, in the bulk of a solid, or at the surface of a solid), indeed different boundary conditions are appropriate. So the lesson from CH about the importance of the framework, and the incompatibility between different frameworks remains an important one.
 
  • #26
The Dowker and Kent paper discusses this quotation of Gell-Mann and Hartle's:

“If two essentially distinct quasiclassical domains exist, they may overlap, in the sense of having some features in common so that sets of histories possess a common coarse graining. It is then possible that IGUSes in different but overlapping domains could make use of some of the common features and thus communicate by observing and manipulating alternatives of this common coarse graining.”


The paragraph of Gell-Mann and Hartle's actually sounds superficially like something that would mate well with how I think of things. I haven't read their whole background, but if we take IGUS ~ agent, then they are talking about interacting agents, and this is similar to my preferred thinking. The big issue here is: how do you (other IGUSes? or what) _describe_ this interaction of other IGUSes?

These are, I think, good questions to ask. But I would say that trying to answer them is immediately more than just an interpretation.

Dowker and Kent write:

"It is important to distinguish between genuine communication and the IGUSes' beliefs about the matter. For instance, either of the interpretations we have described allows a situation in which we can describe two IGUSes — perhaps both working on the interpretation of the consistent histories formalism — each of which believes itself to be corresponding with the other, but in which there is no agreement about the contents of the correspondence. No one, though, would describe this as communication. We would like to translate the statement that two IGUSes are communicating to mean that the formalism allows us simultaneously to describe both their experiences, including some correlated pieces of information, and gives a probability distribution for the experiences which respects the correlations. It seems to us that anything weaker can hardly be referred to as communication."

I take communication to mean physical interaction, which gets us right to a conjecture that the rules of IGUS communication are nothing but the laws of physics. If this is what they actually mean, that is exactly the key of my own thinking. But that's just the idea; the problem is still: how do we make sense out of this? How do we describe such a mess? And from which perspective?

/Fredrik
 
  • Like
Likes gentzen
  • #27
Morbert said:
Dowker and Kent, on the other hand, present different frameworks as different theories, which I don't think is correct.
In light of the content of post #26, plus associating the "framework" with one IGUS, seeing them as different "theories" is the way that makes most sense to me!

But this certainly feeds even more questions, and takes us to "relativity of reality", and any interpretation that feeds good questions serves a purpose.

/Fredrik
 

