Realism from Locality? Bell's Theorem & Nonlocality in QM

In summary, Bricmont, Goldstein, and Hemmick wrote two papers presenting a Bell-like theorem that involves only perfect correlations and does not involve any inequalities. They claim that this version proves nonlocality and that the theorem cannot be interpreted as a disproof of realism. The authors define realism as non-contextual value-maps and state that such value-maps cannot exist. Therefore, it is not a choice between locality and realism, as both are incompatible. The authors are Bohmians and accept contextual realism. This view fits with the anti-philosophical attitude, as the minimal interpretation is not complete enough for those inclined towards philosophy. However, it is not new and has been discussed on forums like PF many times.
  • #176
zonde said:
There is more than one point on which your position is unscientific.
First, you believe in one true explanation. Just because you have an explanation that fits observations does not mean that there can't be other explanations.
Second, the process of gaining scientific knowledge is ... well, a process, a dynamical story as you call it. What is the point of denying the value of the dynamical approach and then seeking justification for that denial from the perspective of the dynamical approach? It's a stolen-concept fallacy.
So answering your question, "Must all scientific explanations be dynamical?": yes, all scientific explanations must be dynamical, because only testable explanations are scientific and the process of testing is dynamical: you have initial conditions, then you observe what happens and check whether your observations agree with predictions.

There may be other explanations, but as a physicist I have to stake my approach on just one. It took my math colleague and me three months to modify and apply Regge calculus to the SCP Union2 data, and it took us four months to modify and program a fit to the Planck 2015 CMB power spectrum data. These are just two examples of the many papers I have written based on my one approach. Maybe a philosopher can write one paper on a particular approach this month and turn around and write another paper on another approach next month, but we don't have that luxury in physics.

It is true that physics is done dynamically, but that doesn't mean an explanation of what we find has to be ultimately dynamical. It is also true that we do astrophysics and cosmology from Earth, seeing the sky rotate about us, but we long ago abandoned geocentricism.

Changing from dynamical to adynamical explanation is revolutionary. As Skow said in his review, "It really is necessary to understand how radical this idea is. ... You can't explain A because B and B because A." But, in adynamical explanation (such as Einstein's equations of GR), it is precisely the case that "A (the spacetime metric) because B (the stress-energy tensor) and B because A." You can't input the SET to solve EE's for the metric unless you already know how to make spatiotemporal measurements, i.e., you already have the metric. And vice-versa of course. Solutions to EE's are self-consistent sets of the spacetime metric, energy, momentum, force, etc. on the spacetime manifold, where "self-consistent" means "satisfies the constraint, i.e., EE's." That's why our proposed approach constitutes a Kuhnian revolution. When I started in foundations 25 years ago, I too was convinced that GR and/or QM were flat out wrong. Now, I believe (base my research approach on the fact that) they are in fact both right and beautifully self-consistent.

Every physicist has to stake their research on a particular model of "the real external world." I'm very happy now with mine because it shows modern physics is in fact complete (minus QG) and consistent, i.e., it is amazingly comprehensive and coherent ... as long as you're willing to give up your anthropocentric dynamical bias.
 
  • #178
vanhees71 said:
Causality is not vague

Not if you give it a precise definition, such as "operators at spacelike separated events commute", no. Which is why I've repeatedly suggested that we should all stop using vague ordinary language and start using precise math, or at least precise definitions that refer to precise math, as the one I just gave in quotes does. Then we can stop arguing about words and start talking about physics.

vanhees71 said:
Locality is another case

If you define "locality" as "does not violate the Bell inequalities", then QFT is not local. If you define "locality" as meaning the same thing as "causality" does above, then QFT is local, but you've also used two ordinary language words to refer to the same physical concept, plus you've thrown away the usual way of referring in ordinary language to another physical concept.

In any case, once again, can we please stop using vague ordinary language?

vanhees71 said:
the S-matrix provides a time ordering. You define an initial state (usually two asymptotic free particles) and then look for the transition probability rate to a given final state. This reflects how we can do experiments

In other words, it reflects how we experience things. But QFT does not explain why we experience things that way. We don't fully understand why we experience things that way.

vanhees71 said:
Of course QFT admits entangled states. We write them down all the time discussing about photons

Sure, you can write down such states, but they include operators at different spacetime events (which can be spacelike separated events). So they're not "local" in the ordinary sense of the term.

vanhees71 said:
Measurements are just usual interactions between entities described by the fields, and due to microcausality they are local

With your preferred definition of "local", yes. But they violate the Bell inequalities, which means they are not "local" in that sense of the term "local". Which, again, is why I keep saying we should stop using vague ordinary language.

vanhees71 said:
there cannot be any causal influence of one measurement event on another measurement event that is space-like separated

With your preferred definition of "causal influence", yes. But not everybody shares your preferences for definitions of ordinary language terms. Which is why we should stop using them.
 
  • #179
vanhees71 said:
a lot of confusion arises from the fact that too often people don't distinguish between causal effects and (predetermined) correlations.

Correlations that violate the Bell inequalities can't be "predetermined" locally. That's what Bell's Theorem shows.
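For readers who want to see the numbers behind this, here is a minimal sketch (my own illustration, not from the thread) that evaluates the CHSH combination for the spin singlet with one standard choice of measurement angles; the angles and the state are illustrative assumptions.
[CODE=python]
import numpy as np

# Pauli matrices and the two-qubit singlet state |psi-> = (|01> - |10>)/sqrt(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):
    """Spin observable along a direction at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def corr(a, b):
    """Quantum correlation E(a,b) = <psi-| A(a) (x) B(b) |psi-> = -cos(a-b)."""
    return np.real(singlet.conj() @ np.kron(spin(a), spin(b)) @ singlet)

# One standard CHSH setting choice (Alice: 0, pi/2; Bob: pi/4, 3*pi/4)
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = corr(a1, b1) - corr(a1, b2) + corr(a2, b1) + corr(a2, b2)
print(f"|S| = {abs(S):.4f}  (local hidden-variable bound 2, quantum bound {2*np.sqrt(2):.4f})")
[/CODE]
Any assignment of predetermined local outcomes ±1 to the four settings gives |S| ≤ 2, so the computed value 2√2 is exactly the sense in which these correlations cannot be predetermined locally.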
 
  • #180
PeterDonis said:
Correlations that violate the Bell inequalities can't be "predetermined" locally. That's what Bell's Theorem shows.
They can only fail to be predetermined Bell-locally, by the latter's definition, as in your previous posts. But this is a tautology and means nothing.
 
  • #181
vanhees71 said:
Hm, in my understanding, causality implies a specific time-ordering. In fact, it's the only sense you can give to a specific time-ordering, and thus causally connected events cannot be space-like separated. In other words, event A can only be the cause of event B if it is time- or light-like separated from B and B is within or on the future light cone of A. It's not clear to me how you define causality to begin with.
PeterDonis said:
The same way you've been defining it (you sometimes use the term "microcausality", but sometimes not): that spacelike separated measurements commute. But now you seem to be shifting your ground and giving a different definition of "causality" (the one I quoted above), the basis of which in QFT I don't understand (which is why I asked about it).
PeterDonis said:
Not if you give it a precise definition, such as "operators at spacelike separated events commute", no. Which is why I've repeatedly suggested that we should all stop using vague ordinary language and start using precise math, or at least precise definitions that refer to precise math, as the one I just gave in quotes does. Then we can stop arguing about words and start talking about physics.

If you define "locality" as "does not violate the Bell inequalities", then QFT is not local. If you define "locality" as meaning the same thing as "causality" does above, then QFT is local, but you've also used two ordinary language words to refer to the same physical concept, plus you've thrown away the usual way of referring in ordinary language to another physical concept.

In any case, once again, can we please stop using vague ordinary language?
Causality is generally defined as the universal law that causes must precede effects in every frame of reference, and it entails nothing more. In contrast to locality, a notion with multiple, partially conflicting uses, one cannot tamper with the content of the concept of causality without misrepresenting much of classical and quantum physics.

Causality is a precise notion to the extent that cause and effect are precise notions. In physics, cause and effect are made fully precise in the context of dynamical systems. Here changes in the initial conditions of a differential equation at some time ##t_0## are the causes, and the resulting changes in the trajectory for times ##t>t_0## are the effects caused by these changes. Lorentz invariance then implies that the causes of an effect at some spacetime position ##x## must lie in the past cone of ##x##, and that the causes of an effect whose definition involves the spacetime positions from some set ##X## must lie in the union of the past cones of the points in ##X##.

The operational content of causality is expressed in terms of response functions - which embody the notion of causality in their definition - through the so-called Kramers–Kronig relations and the resulting dispersion relations. In this form, the notion of causality extends to dynamical systems with memory. In quantum field theory, causality is rigorously implemented through [URL='https://www.physicsforums.com/insights/causal-perturbation-theory/']causal perturbation theory[/URL], where the dispersion relations are the essential tool that ensures a perturbatively well-defined, finite, and manifestly covariant renormalization process. In the operator approach to quantum field theory, causality is implemented through the causal commutation relations of fields - i.e., the commutativity or anticommutativity of the field operators at spacelike separation. The fact that causal commutation rules hold is called microcausality.
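To make the dispersion-relation statement concrete, here is a small numerical sketch (my own illustration, not part of the original post): for the causal response function of a damped oscillator, the real part of the susceptibility is recovered from its imaginary part by a principal-value Kramers–Kronig integral. The oscillator parameters and the frequency grid are arbitrary choices.
[CODE=python]
import numpy as np

# Damped-oscillator susceptibility chi(w) = 1/(w0^2 - w^2 - i*gamma*w).  Causality
# (no response before the kick) makes chi analytic in the upper half plane, which
# is what the Kramers-Kronig relations encode.
w0, gamma = 1.0, 0.3   # illustrative parameters

def chi(w):
    return 1.0 / (w0**2 - w**2 - 1j * gamma * w)

# Frequency grid for the principal-value integral; the test frequencies below lie
# exactly midway between grid points, so the 1/(w'-w) singularity is sampled
# symmetrically and its divergent contributions cancel (a crude PV prescription).
dw = 1e-3
wp = np.arange(-200.0, 200.0, dw) + dw / 2

for w in (0.5, 1.0, 2.0):
    kk = np.sum(chi(wp).imag / (wp - w)) * dw / np.pi   # (1/pi) PV int Im chi(w')/(w'-w) dw'
    print(f"w = {w}:  KK integral = {kk:+.5f},  exact Re chi(w) = {chi(w).real:+.5f}")
[/CODE]
The two columns agree up to the discretization and truncation error of the integral, which is the dispersion-relation statement in its simplest setting.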

Operationally, the causal commutation relations assert (roughly) that states with prescribed field expectations at ##x_1,\ldots,x_n## can be prepared independently by causes near ##x_1,\ldots,x_n## whenever ##x_1,\ldots,x_n## are mutually spacelike separated. This is what is meant by causal independence.
Simultaneously, they express certain local independence properties. This is the reason why causal commutation relations are - in my opinion somewhat misleadingly - also referred to as local commutation relations, though there is nothing local about them as the relations involve two distinct spacetime points.
 
  • #182
PeterDonis said:
Correlations that violate the Bell inequalities can't be "predetermined" locally. That's what Bell's Theorem shows.
No, I didn't say "locally". In QT a state is given by a statistical operator. That's it. It's neither local nor non-local. It's just given by the preparation procedure. The most intuitive picture of time evolution is the Heisenberg picture, where also in the mathematical formulation the state is time-independent (or only time-dependent if there's explicit time-dependence).
 
  • #183
vanhees71 said:
The most intuitive picture of time evolution is the Heisenberg picture, where also in the mathematical formulation the state is time-independent (or only time-dependent if there's explicit time-dependence).
Small correction: In the Heisenberg picture, the state is never time-dependent, as any explicit time dependence is necessarily in the terms of the Hamiltonian.
 
  • #184
PeterDonis said:
Not if you give it a precise definition, such as "operators at spacelike separated events commute", no. Which is why I've repeatedly suggested that we should all stop using vague ordinary language and start using precise math, or at least precise definitions that refer to precise math, as the one I just gave in quotes does. Then we can stop arguing about words and start talking about physics.
If you define "locality" as "does not violate the Bell inequalities", then QFT is not local. If you define "locality" as meaning the same thing as "causality" does above, then QFT is local, but you've also used two ordinary language words to refer to the same physical concept, plus you've thrown away the usual way of referring in ordinary language to another physical concept.

In any case, once again, can we please stop using vague ordinary language?
In other words, it reflects how we experience things. But QFT does not explain why we experience things that way. We don't fully understand why we experience things that way.
Sure, you can write down such states, but they include operators at different spacetime events (which can be spacelike separated events). So they're not "local" in the ordinary sense of the term.
With your preferred definition of "local", yes. But they violate the Bell inequalities, which means they are not "local" in that sense of the term "local". Which, again, is why I keep saying we should stop using vague ordinary language.
With your preferred definition of "causal influence", yes. But not everybody shares your preferences for definitions of ordinary language terms. Which is why we should stop using them.
In my field, i.e., relativistic QFT, these words have a clear meaning: Causality is implemented by the microcausality constraint, and locality means the locality of interactions, i.e., the Hamiltonian density is a local polynomial of the fields and their canonical momenta, i.e., it depends on only one space-time argument. Also, proper orthochronous Poincare transformations are realized on the field operators locally, i.e., the field operators transform under the unitary representations of the proper orthochronous Poincare group as the analogous classical fields do. Together with microcausality, this makes the standard QFTs successfully used to describe real-world observations consistent with the space-time structure of special relativity and thus with the meaning of causality it induces.

It's also clear that we have states which describe non-local correlations, i.e., entanglement. Nothing in the formalism described above prevents this, and it's necessary to describe the very experiments we are discussing here. I've never stated otherwise. The only thing which clearly contradicts the locality of interactions, which is a very clear concept defined above, is the nonsensical assumption that a local measurement on one part of an entangled system leads to acausal interactions or causal influences at a distance. That is read into what some people call interpretation only through unclear language, and it simply contradicts the sharply defined mathematical concepts underlying the mathematical construction of the theory.

Concerning the violation of Bell's inequalities I always stressed as the main point that this doesn't prove non-locality but just the existence of long-range correlations.

I don't understand why you think we don't understand something. We never understand why things are as they are; we can only investigate how things are and describe our findings as accurately as possible. So far everything is described by relativistic QFT (except for the unsolved question of how to describe gravitation). You cannot expect more from physics.
 
  • #185
[Edit: Corrected the equation of motion for the Heisenberg-picture stat. op.]

What I meant is the partial time derivative in the Heisenberg-picture equation of motion,
$$\frac{\mathrm{d}}{\mathrm{d} t} \hat{\rho}(t)=\frac{1}{\mathrm{i} \hbar} [\hat{\rho}(t),\hat{H}(t)]+\partial_t \hat{\rho}(t)=0.$$
Here the first term refers to the dependence of ##\hat{\rho}## on the "fundamental operators" of the theory, which by definition are not explicitly time-dependent. In QFT these are the field operators, from which all the other operators are built as appropriate functions/functionals. The partial derivative includes a possible explicit time dependence.

An example is that sometimes you like to describe a QFT system with some time-dependent "classical background field" present. This background field brings in explicit time-dependence.

[The following is WRONG as @A. Neumaier pointed out in #187]
Another important example for an explicitly time-dependent state is local thermal equilibrium. In the grand-canonical description it reads
$$\hat{\rho}=\frac{1}{Z} \exp[-(\hat{p} \cdot u(x)-\mu(x))/T(x)], \quad Z=\mathrm{Tr} \exp[-(\hat{p} \cdot u(x)-\mu(x))/T(x)].$$
Here the explicit time dependence comes from the dependence of the local temperature and chemical potential(s) and the four-flow field ##u=\gamma(1,\vec{v})## on ##x=(t,\vec{x})##; ##\hat{p}## is the operator for total four-momentum, which as a functional of the field operators is by definition not explicitly time dependent.
 
  • #186
RUTA said:
There may be other explanations, but as a physicist I have to stake my approach on just one. It took my math colleague and I three months to modify and apply Regge calculus to the SCP Union2 data and it took us four months to modify and program a fit to the Planck 2015 CMB power spectrum data. These are just two examples of the many papers I have written based on my one approach. Maybe a philosopher can write one paper on a particular approach this month and turn around and write another paper on another approach next month, but we don't have that luxury in physics.
In physics you always have to be prepared for your model to be falsified by observation in some domain where it has not yet been tested.

RUTA said:
It is true that physics is done dynamically, but that doesn't mean an explanation of what we find has to be ultimately dynamical.
Any scientific explanation has to give predictions that are testable within a dynamical process. So even if you believe that an adynamical view can explain observations better, you still have to be able to translate your adynamical view into a dynamical story and point out unique features that show up in that dynamical story.

RUTA said:
It is also true that we do astrophysics and cosmology from Earth, seeing the sky rotate about us, but we long ago abandoned geocentricism.
Almost all observations are still geocentric. So a non-geocentric model still has to express its predictions for a geocentric observer.

RUTA said:
Changing from dynamical to adynamical explanation is revolutionary. As Skow said in his review, "It really is necessary to understand how radical this idea is. ... You can't explain A because B and B because A." But, in adynamical explanation (such as Einstein's equations of GR), it is precisely the case that "A (the spacetime metric) because B (the stress-energy tensor) and B because A." You can't input the SET to solve EE's for the metric unless you already know how to make spatiotemporal measurements, i.e., you already have the metric. And vice-versa of course. Solutions to EE's are self-consistent sets of the spacetime metric, energy, momentum, force, etc. on on the spacetime manifold, where "self-consistent" means "satisfies the constraint, i.e., EE's." That's why our proposed approach constitutes a Kuhnian revolution. When I started in foundations 25 years ago, I too was convinced that GR and/or QM were flat out wrong. Now, I believe (base my research approach on the fact that) they are in fact both right and beautifully self-consistent.
There is a big difference between facts A and B and model components A and B. It seems you are mixing them together.

RUTA said:
Every physicist has to stake their research on a particular model of "the real external world." I'm very happy now with mine because it shows modern physics is in fact complete (minus QG) and consistent, i.e., it is amazingly comprehensive and coherent ... as long as you're willing to give up your anthropocentric dynamical bias.
Every scientist has to operate within a common, generally accepted framework. Within that framework you have a lot of freedom with your explanations, but there is one condition: your explanation has to produce predictions that are testable even for those who do not believe in your explanation. So your condition "as long as you're willing to give up your anthropocentric dynamical bias" takes you out of that framework.
 
  • #187
vanhees71 said:
An example is that sometimes you like to describe a QFT system with some time-dependent "classical background field" present. This background field brings in explicit time-dependence.
No. It changes only the Hamiltonian by a time-dependent term.

vanhees71 said:
Another important example for an explicitly time-dependent state is local thermal equilibrium. In the grand-canonical description it reads
$$\hat{\rho}=\frac{1}{Z} \exp[-(\hat{p} \cdot u(x)-\mu(x))/T(x)], \quad Z=\mathrm{Tr} \exp[-(\hat{p} \cdot u(x)-\mu(x))/T(x)].$$
No. The local equilibrium density operator is not in the Heisenberg picture, which would result in a covariant formula. The formula you give is covariant but incorrect, since the right hand side depends on ##x##, not on ##t##. You need to integrate over a Cauchy surface to get a valid exponent to which to apply the standard cumulant expansion. This shows that the expression is frame-dependent, hence constitutes a Schrödinger-picture expression.
 
  • #188
The right-hand side depends on ##t## and ##x##, and it's manifestly covariant, ##u## is a four-vector field, ##\hat{p}## is a four-vector, ##T## and ##\mu## are four-vector fields.

How else would you write this standard statistical operator?
 
  • #189
vanhees71 said:
The right-hand side depends on ##t## and ##x##, and it's manifestly covariant, ##u## is a four-vector field, ##\hat{p}## is a four-vector, ##T## and ##\mu## are four-vector fields.

How else would you write this standard statistical operator?
I admitted that your right hand side is covariant, but it does not have the correct form, hence is nonsense.

In nonrelativistic QFT, local equilibrium is given by the Schrödinger picture density operator ##\rho(t)=Z^{-1}e^{-S(t)/\hbar}##, where, with 3-position ##x##, 3-momentum operator density ##\hat p(t,x)##, Hamiltonian density ##\hat H(t,x)##, and number operator density ##\hat N(t,x)##,
$$S(t)=\int dx \frac{\hat p(t,x) \cdot u(t,x)+\hat H(t,x)-\hat N(t,x) \mu(t,x)}{T(t,x)}$$
and ##Z=Tr~e^{-S(t)/\hbar}##.

This does not become your formula when transformed to the Heisenberg picture.
 
  • #190
Ok, I have to think about this, though I don't understand why your ##\hat{\rho}## describes local thermal equilibrium.
 
  • #191
vanhees71 said:
Ok, I have to think about this, though I don't understand why your ##\hat{\rho}## describes local thermal equilibrium.
I corrected my formula, which was also nonsense. The new formula describes local equilibrium since if you discretize the integral into a sum over a number of mesoscopic cells, you get the formula that you would get from regarding the cells as independent and in equilibrium by taking a tensor product.

You can also get this formula from Jaynes' maximum entropy principle, assuming the local densities at fixed time to be the set of relevant variables.
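A toy illustration of this cell picture (my own sketch, not from the post): when the per-cell exponents commute, exponentiating their sum is exactly the tensor product of independent Gibbs states, one per mesoscopic cell. Here the two "cells" are single qubits at different local temperatures, chosen purely for illustration.
[CODE=python]
import numpy as np
from scipy.linalg import expm

# Two "cells", each a single qubit with Hamiltonian sz, at different local temperatures.
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
beta = [0.5, 2.0]   # inverse temperatures of the two cells (illustrative)

def gibbs(M):
    """Normalized exp(-M)."""
    G = expm(-M)
    return G / np.trace(G)

# Local-equilibrium state from the *sum* of the per-cell exponents ...
S_total = beta[0] * np.kron(sz, I2) + beta[1] * np.kron(I2, sz)
rho_sum = gibbs(S_total)

# ... versus the tensor product of independent per-cell Gibbs states
rho_prod = np.kron(gibbs(beta[0] * sz), gibbs(beta[1] * sz))

print("max |rho_sum - rho_prod| =", np.max(np.abs(rho_sum - rho_prod)))   # numerically zero
[/CODE]
This is of course only the commuting, discretized idealization of the continuum integral above, but it shows in what sense the cells are treated as independent equilibrium subsystems.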
 
  • #192
Argh. Aside from the maybe wrong idea of a local-thermal-equilibrium stat. op., I quoted a wrong equation of motion for the statistical operator. Of course, in the Heisenberg picture the covariant time derivative is the usual total time derivative, and thus it must read (von Neumann equation)
$$\mathring{\hat{\rho}}(t)=\frac{\mathrm{d}}{\mathrm{d} t} \hat{\rho}(t)=\frac{1}{\mathrm{i} \hbar} [\hat{\rho}(t),\hat{H}(t)]+\partial_t \hat{\rho}(t)=0.$$
So in the Heisenberg picture the correct EoM for the statistical operator reads
$$\partial_t \hat{\rho}(t)=-\frac{1}{\mathrm{i} \hbar} [\hat{\rho}(t),\hat{H}(t)]=\frac{1}{\mathrm{i} \hbar} [\hat{H}(t),\hat{\rho}(t)].$$
That's valid also for time-dependent Hamiltonians (that's why I wrote ##\hat{H}(t)##).
 
  • #193
A. Neumaier said:
I corrected my formula, which was also nonsense. The new formula describes local equilibrium since if you discretize the integral into a sum over a number of mesoscopic cells, you get the formula that you would get from regarding the cells as independent and in equilibrium by taking a tensor product.

You can also get this formula from Jaynes' maximum entropy principle, assuming the local densities at fixed time to be the set of relevant variables.
Sure, you are right. But if I write the statistical operator for relativistic quantum fields in the Heisenberg picture, why should it then be the statistical operator in the Schrödinger picture all of a sudden? I'd say one can just take your formula and write it down using the relativistic (canonical) energy-momentum tensor ##\hat{\mathcal{T}}^{\mu \nu}(x)##. Then
$$\hat{\rho}=\frac{1}{Z} \exp(-\hat{S}),$$
where
$$\hat{S}=\int \mathrm{d}^3 \Sigma_{\mu} [u_{\nu}(x) \hat{\mathcal{T}}^{\mu \nu}(x)-\mu(x) \hat{\mathcal{J}}^{\mu}(x)]/T(x),$$
where ##\hat{\mathcal{J}}^{\mu}## is the four-current operator of a conserved charge (baryon number or electric charge, for instance). The integral is over some spacelike hypersurface. With the operators in the Heisenberg picture, this should be the stat. op. in the Heisenberg picture too, right?

However, I must admit I've never seen this idea used in non-equilibrium QFT. As you well know, there one usually works with a general ansatz, derives the Kadanoff-Baym equations, and makes further approximations from there.
 
  • #194
vanhees71 said:
With the operators in the Heisenberg picture this should be the stat. op. in the Heisenberg picture too, right?
No. Apply your recipe to the Schrödinger state of a free relativistic scalar particle instead of a quantum field, and you'll see that one cannot transform from the Schrödinger to the Heisenberg picture by a relabeling of the kind you do. It still remains the Schrödinger state. You need to apply the usual unitary transformation to mediate between the pictures.

Moreover, in your ansatz there is a redundancy, in that multiplying ##u,\mu## and ##T## by the same field doesn't change the result...
 
  • #195
vanhees71 said:
I've never seen this idea used in non-equilibrium QFT.
Well, you'd never claim something you had never seen...
vanhees71 said:
As you well know, there usually one works with a general ansatz, derives the Kadanoff-Baym equations und does further approximations from there.
This is needed in the covariant case since due to renormalization, there are no sensible dynamical equations in interacting QFTs, so one needs to find the hydrodynamic equations from the functional integral rather than via a projection operator formalism.

There should be something covering the above in older nonrelativistic work on nonequilibrium statistical mechanics, perhaps in the book by de Groot and Mazur, but I don't have it available to check.
 
  • #196
In the Schrödinger picture the field operators would be time-independent, and I always have a hard time seeing the Poincare transformation properties of the Schrödinger-picture operators.

The non-relativistic case is a bit simpler, but if I use Heisenberg fields, why is the stat. op. then all of a sudden in another picture of time evolution? In the non-relativistic case, I'd write it in the form
$$\hat{S}=\int \mathrm{d}^3 x \left [\hat{\mathcal{H}}(t,\vec{x}) - \vec{\beta}(t,\vec{x}) \cdot \vec{\mathcal{G}}(t,\vec{x})-\mu(t,\vec{x}) \hat{\rho}(t,\vec{x}) \right]/T(t,\vec{x}).$$

I didn't understand your remark about redundancy. Note that the script T (energy-momentum tensor operator) is different from the usual T (the temperature, which is a c-number field).
 
  • #197
vanhees71 said:
In the Schrödinger picture the field operators would be time-independent,
Except possibly for ##H##. I was lazy and wrote everywhere a dependence on ##t##.
vanhees71 said:
The non-relativistic case is a bit simpler, but if I use Heisenberg fields, why is then the Stat. Op. all of a sudden in another picture of time evolution?
OK, I see now what you mean. Need to think about this...
vanhees71 said:
In the non-relativistic case, I'd write it in the form
$$\hat{S}=\int \mathrm{d}^3 x \left [\hat{\mathcal{H}}(t,\vec{x}) - \vec{\beta}(t,\vec{x}) \cdot \vec{\mathcal{G}}(t,\vec{x})-\mu(t,\vec{x}) \hat{\rho}(t,\vec{x}) \right]/T(t,\vec{x}).$$
What is ##\cal G##?
vanhees71 said:
I didn't understand your remark about redundancy.
Sorry; corrected. ##u,\mu,T## are coefficient fields but only the quotients are well-determined. Thus the redundancy. To fix this, there should be no denominator ##T##, and ##T## should be computed from your ##u## as its time component.

But your covariant formula also does not have enough intensive fields since the energy-momentum tensor has more coordinates than the momentum vector in my formula. There must be a multiplier field for every field operator component.
 
  • #198
vanhees71 said:
I don't understand, why you think, we don't understand something.

We don't understand why we experience things a certain way. More precisely, we don't understand how our experiences are produced by our brains (which are in turn connected to the rest of the universe through our senses). But that's not a question of physics; it's a question of neuroscience, cognitive science, etc.

We also don't understand why we experience time to have a particular direction even though the underlying physical laws are time-symmetric (with the minor exception of weak interactions that don't play a part in the operation of our brains and bodies anyway). To some extent that is a question of physics, in that if physics can give an explanation for how time asymmetry can be produced from underlying laws that are time symmetric, we might not need to understand all the details of neuroscience, cognitive science, etc. to understand why we experience time to have a particular direction.

I am aware of only one hypothesis from physics to explain time asymmetry, namely asymmetry of initial conditions: time has an arrow in our universe because our universe started in a state with a very high degree of symmetry and uniformity. But we don't really have any way of testing this hypothesis since we can't run controlled experiments on universes.
 
  • #199
A. Neumaier said:
Except possibly for ##H##. I was lazy and wrote everywhere a dependence on ##t##.

OK, I see now what you mean. Need to think about this...

What is ##\cal G##?

Sorry; corrected. ##u,\mu,T## are coefficient fields but only the quotients are well-determined. Thus the redundancy. To fix this, there should be no denominator ##T##, and ##T## should be computed from your ##u## as its time component.

But your covariant formula also does not have enough intensive fields since the energy-momentum tensor has more coordinates than the momentum vector in my formula. There must be a multiplier field for every field operator component.
Well, how you choose your thermodynamical parameters is a matter of convention. I also usually prefer ##\alpha=\mu/T## (I don't even know a name for it, but calculation-wise it's often more convenient when it comes to certain quantities like susceptibilities of (conserved) charges and things like that).

It's also certain that what I wrote down is the statistical operator in the Heisenberg picture. One has to distinguish between dynamical and explicit time dependence. In my idea of a local-thermal-equilibrium stat. op., ##\hat{\mathcal{H}}## is the Hamilton density, ##\hat{\vec{\mathcal{G}}}## the momentum density, and ##\hat{\rho}## the density of some conserved charge (if there are more, then you need a separate chemical potential for each of them).

Now concerning the time dependence. That's something not covered well in most QM textbooks. The best one I know in this respect is

E. Fick, Einführung in die Grundlagen der Quantenmechanik, Aula-Verlag

It's the only book which clearly writes all the equations in terms of an arbitrary picture of time evolution, and of course you must stay clearly within one picture or cleanly go from one picture to the other by the appropriate time-dependent unitary transformation.

You find this formalism also in my (German) QM manuscript (however in much shorter form):

https://itp.uni-frankfurt.de/~hees/faq-pdf/quant.pdf
Now concerning dynamical and explicit time dependence: You start to develop a specific quantum theory from a set of "fundamental observables". E.g., in the non-relativistic quantum mechanics of one scalar particle you usually start in the QM 1 lecture with position and momentum, ##\hat{\vec{x}}## and ##\hat{\vec{p}}##. These "fundamental operators" are by definition not explicitly time-dependent but get their time dependence from the choice of the picture of time evolution. In the Schrödinger picture, which is usually taught first, these operators are completely time-independent. All observables are then built as functions of these fundamental operators. Of course you need some algebra, which in this case is motivated by the fact that momentum should be the generator of spatial translations, leading to the assumption of the usual Heisenberg algebra. Everything else is then built as functions of these by educated guesses from classical mechanics, including the Hamiltonian of the system, which is the operator defining dynamical time evolution.

Now an observable ##\hat{O}## is a function of ##\hat{\vec{x}}## and ##\hat{\vec{p}}## and, maybe, explicitly of ##t## (note that ##t## is not an observable but a parameter in QT, in order to have a stable ground state, i.e., the possibility to write down Hamiltonians bounded from below, an argument brought forward by Pauli in his famous encyclopedia articles on wave mechanics, which are still among the best expositions of the theory ever written): ##\hat{O}=\hat{O}(\hat{x},\hat{p},t)##.

Now one must distinguish different "time derivatives". First of all there's the mathematical time dependence of the "fundamental operators", defined by an equation of motion
$$\mathrm{d}_t \hat{\vec{x}}(t)=\frac{1}{\mathrm{i} \hbar} [\hat{\vec{x}}(t),\hat{H}_0(\hat{\vec{x}},\hat{\vec{p}},t)]$$
and analogously for ##\hat{\vec{p}}(t)##.

For the "state kets" one has the Schrödinger-like equation
$$\mathrm{d}_t |\psi(t) \rangle = \frac{1}{\mathrm{i} \hbar} \hat{H}_1(\hat{\vec{x}},\hat{\vec{p}}) |\psi(t) \rangle.$$
The relation to the Hamiltonian of the system is
$$\hat{H}=\hat{H}_0 + \hat{H}_1.$$
The extension to the most general case of a statistical operator (including also the case of pure states of course, for which ##\hat{\rho}=|\psi \rangle \langle \psi|##) is
$$\mathrm{d}_t \hat{\rho}(\hat{\vec{x}},\hat{\vec{p}},t) = \frac{1}{\mathrm{i} \hbar} [\hat{H}_1(\hat{\vec{x}},\hat{\vec{p}}),\hat{\rho}(\hat{\vec{x}},\hat{\vec{p}},t)].$$
For an arbitrary observable you get
$$\mathrm{d}_t \hat{O}(\hat{\vec{x}},\hat{\vec{p}},t)= \frac{1}{\mathrm{i} \hbar} [\hat{O}(\hat{\vec{x}},\hat{\vec{p}},t),\hat{H}_0(\hat{\vec{x}},\hat{\vec{p}},t)]+ \partial_t \hat{O}(\hat{\vec{x}},\hat{\vec{p}},t).$$
The partial time derivative refers to the explicit time dependence only.

Of course, the same must hold for the statistical operator as an operator depending on the fundamental operators and explicitly on time,
$$\mathrm{d}_t \hat{\rho}(\hat{\vec{x}},\hat{\vec{p}},t)= \frac{1}{\mathrm{i} \hbar} [\hat{\rho}(\hat{\vec{x}},\hat{\vec{p}},t),\hat{H}_0(\hat{\vec{x}},\hat{\vec{p}},t)]+ \partial_t \hat{\rho}(\hat{\vec{x}},\hat{\vec{p}},t).$$
Together with the above equation for the time dependence of ##\hat{\rho}## one gets the (picture independent!) von Neumann-Liouville equation of motion
$$\frac{1}{\mathrm{i} \hbar} [\hat{\rho},\hat{H}]+\partial_t \hat{\rho}=0.$$
This describes just the time derivative of mathematical formal objects, and you can choose an arbitrary picture of time evolution, just for convenience of calculational treatment of some given problem (e.g., for scattering theory the interaction picture is most convenient, where ##\hat{H}_0## is the Hamiltonian of non-interacting particles, and ##\hat{H}_1## the interaction part of the full Hamiltonian).

Of course, on the other hand, there should also be a description of the "physical time dependence" of observables and a corresponding "covariant time derivative". This answers the question of which operator describes, for a given observable ##O##, the observable ##\dot{O}##, i.e., the time derivative of ##O##. That's given by the picture-independent equation
$$\mathring{\hat{O}}(\hat{\vec{x}},\hat{\vec{p}},t) = \frac{1}{\mathrm{i} \hbar} [\hat{O}(\hat{\vec{x}},\hat{\vec{p}},t),\hat{H}(\hat{\vec{x}},\hat{\vec{p}},t)]+\partial_t \hat{O}(\hat{\vec{x}},\hat{\vec{p}},t).$$
Note that the von Neumann-Liouville equation for the stat. op. then reads in this notation
$$\mathring{\hat{\rho}}=0.$$
The same scheme holds of course for (relativistic or non-relativistic) QFT. There the fundamental operators, from which all observables, stat. ops., etc. are built up, are the fields and the canonical field momenta, and the observables can be functionals rather than simple functions of the field operators and their canonical momenta. Formally there's not much difference.
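A finite-dimensional sketch of this picture freedom (my own check, not from the post): split a qubit Hamiltonian as ##\hat{H}=\hat{H}_0+\hat{H}_1##, evolve the observables with ##\hat{H}_0## and the state with ##\hat{H}_1## expressed through the evolved operators, and confirm that expectation values agree with the plain Schrödinger-picture result. The Hamiltonian, initial state, and time grid are arbitrary choices.
[CODE=python]
import numpy as np
from scipy.linalg import expm

hbar = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H0 = 0.7 * sz          # part driving the operators (illustrative)
H1 = 0.3 * sx          # part driving the states
H = H0 + H1

psi0 = np.array([1, 0], dtype=complex)   # initial state |0>
A = sx                                   # observable to track
T, n = 5.0, 5000
dt = T / n

# Schroedinger picture: the full Hamiltonian acts on the state
psi_S = expm(-1j * H * T / hbar) @ psi0
ev_S = (psi_S.conj() @ A @ psi_S).real

# General ("interaction") picture: d_t|psi> = (1/(i*hbar)) H1(evolved operators) |psi>
psi_I = psi0.copy()
for k in range(n):
    U0 = expm(-1j * H0 * (k + 0.5) * dt / hbar)
    H1_I = U0.conj().T @ H1 @ U0                  # H1 written through the H0-evolved operators
    psi_I = expm(-1j * H1_I * dt / hbar) @ psi_I
U0_T = expm(-1j * H0 * T / hbar)
A_I = U0_T.conj().T @ A @ U0_T                    # observable evolved with H0 alone
ev_I = (psi_I.conj() @ A_I @ psi_I).real

print(f"<A> Schroedinger picture: {ev_S:.6f}")
print(f"<A> interaction picture : {ev_I:.6f}")    # agrees up to the O(dt^2) stepping error
[/CODE]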
 
  • #200
PeterDonis said:
We don't understand why we experience things a certain way. More precisely, we don't understand how our experiences are produced by our brains (which are in turn connected to the rest of the universe through our senses). But that's not a question of physics; it's a question of neuroscience, cognitive science, etc.

We also don't understand why we experience time to have a particular direction even though the underlying physical laws are time-symmetric (with the minor exception of weak interactions that don't play a part in the operation of our brains and bodies anyway). To some extent that is a question of physics, in that if physics can give an explanation for how time asymmetry can be produced from underlying laws that are time symmetric, we might not need to understand all the details of neuroscience, cognitive science, etc. to understand why we experience time to have a particular direction.

I am aware of only one hypothesis from physics to explain time asymmetry, namely asymmetry of initial conditions: time has an arrow in our universe because our universe started in a state with a very high degree of symmetry and uniformity. But we don't really have any way of testing this hypothesis since we can't run controlled experiments on universes.
Well, as already said above, natural sciences are one (on purpose limited!) aspect of human knowledge. They restrict themselves to describing what can be accurately observed. For the most simple systems (which are usually described by physics) there's the surprising discovery that we can describe our observations by mathematical theories and understand a lot of phenomena from a very few fundamental principles, which finally cannot be derived from even more fundamental principles and which are an abstraction from our experience. Finally physics, as all of natural science, is an empirical science (on purpose). To find descriptions of ever more complex systems (many-body systems) from the fundamental theories is a creative act. Though there's a big hype about AI, machine learning, and all that, I don't think there's an automatic way to find such descriptions, and thus this will stay a human art for a long time to come.

Now, the more and more complex things get, the more difficult it is to get descriptions based on fundamental theories. Fortunately nature is kind enough to be describable very well also by "effective theories", i.e., approximations to the fundamental theories just cutting out all "irrelevant" details. To describe a container of gas under usual conditions you don't need to describe a mole of molecules in all detail, but some thermodynamic quantities will do. If it's moving, you'd need fluid dynamics or Boltzmann transport theory.

Also, some quite humble-looking "complex systems" pretty easily become very complicated. E.g., in my own field of heavy-ion collisions, just smashing two large nuclei together leads to a plethora of phenomena one has to describe with a lot of different effective theories, ranging from relativistic hydrodynamics and transport models to lattice QCD and all that.

As said in some textbook on QFT, physics teaches humility.

Now, when it comes to biological systems, it's even more difficult to get a fundamental description. Though there's a lot of progress in describing some subjects with a fundamental approach using effective models (e.g., ion transport through membranes in cells, protein folding), it's still far from "understood" in the sense of tracing life as such back to the fundamental physical theories, as far as they are known anyway.

Considering our own human brains, it may even become philosophical whether it's possible to finally "understand" them in this scientific sense at all, because after all it's the human brain itself which processes all the empirical information we can get about it, and the description of a system within itself is already mathematically a quite mind-boggling thing. Maybe, you can argue, it's not only one brain which studies itself but the collective endeavor of many scientists to understand it better, maybe one day in a satisfactory way from fundamental physical principles.
 
  • #201
@vanhees71 : I would be interested in an explicit reference to a place where, as you claim, a time dependent Heisenberg state is actually used. I have never seen it anywhere.
 
  • #202
With respect to the total time dependence a Heisenberg state is of course time-independent, but it can be explicitly time-dependent. The reason is simply that in the Heisenberg picture the "covariant" time derivative coincides with the total mathematical time derivative, i.e., you have
$$\mathrm{d}_t \hat{\rho}_H=\mathring{\hat{\rho}}_H=\frac{1}{\mathrm{i} \hbar} [\hat{\rho}_H,\hat{H}_H]+\partial_t \hat{\rho}_H=0.$$
Obviously in the Heisenberg picture the stat. op. is NOT explicitly time dependent if and only if it's an equilibrium state, i.e., if it is a function of the operators of conserved quantities only. Thus any non-equilibrium state is explicitly time-dependent.

In the Schrödinger picture you get on the one hand
$$\mathrm{d}_t \hat{\rho}_S=\partial_t \hat{\rho}_S,$$
because the fundamental operators are by definition time-independent. On the other hand, from this and the von Neumann equation you get
$$\mathrm{d}_t \hat{\rho}_S=-\frac{1}{\mathrm{i} \hbar} [\hat{\rho}_S,\hat{H}_S].$$
As I said, the only book I know, where this is all explained in clear detail for an arbitrary picture is

E. Fick, Einführung in die Grundlagen der Quantentheorie, Akademische Verlagsgesellschaft Wiesbaden (1979)
 
  • #203
Can you give an example of a one- or two-particle system where the Heisenberg state is time dependent?
 
  • #204
zonde said:
In physics you always have to be ready that your model will be falsified by observation in some domain where your model has not yet been tested.

But, you can't allow that fact to freeze you into inaction. You have to pick a direction to do your work.

zonde said:
Any scientific explanation has to give predictions that are testable within dynamical process. So even if you believe that adynamical view can explain observation better you still have to be able to translate your adynamical view into dynamical story and point out unique features that show up in dynamical story.

Of course, the story of the experimental procedure is dynamical, but the explanation of the outcomes doesn't have to be. You're conflating those two aspects of an experiment.

zonde said:
Almost all observations are still geocentric. So non geocentric model still have to express it's predictions for geocentric observer.

Ibid.

zonde said:
There is big difference between facts A and B and components of the model A and B. It seems you are mixing them together.

Facts are necessarily context dependent, otherwise your perceptions are meaningless.

zonde said:
Every scientist has to operate within common generally accepted framework. Within that framework you have a lot of freedom with your explanations, but there is one condition - your explanation has to produce predictions that are testable even for those who do not believe in your explanation. So your condition "as long as you're willing to give up your anthropocentric dynamical bias" takes you out of that framework.

You've completely missed the point: adynamical constraints are already accepted and understood in conventional physics; it's simply a matter of whether or not you believe such constraints to be explanatory in and of themselves. If you watch my talk, it should be clear.
 
  • #205
A. Neumaier said:
Can you give an example of a one- or two-particle system where the Heisenberg state is time dependent?
Not so easily. Do you have an example of an arbitrary non-equilibrium state? Express this in the Heisenberg picture, and the stat. op. will be explicitly time-dependent.
 
  • #206
In fact it's pretty easy to find a nice example.

Take a single free particle, for simplicity in 1 dimension. In the Heisenberg picture we have, due to
$$\hat{H}=\frac{1}{2m} \hat{p}^2$$
the solution for the operator equations of motion
$$\hat{x}=\frac{1}{m} \hat{p}_0 t + \hat{x}_0, \quad \hat{p}=\hat{p}_0.$$
Here ##\hat{x}_0## and ##\hat{p}_0## are the operators representing the observables at the initial time ##t=0##. At this initial time you may assume without loss of generality that all pictures of time evolution coincide at this time, but we won't need this here, because I stick to the Heisenberg picture.

Now assume that at time ##t=0## you know that the average position of the particle is ##\langle x_0 \rangle## and its momentum is ##\langle p_0 \rangle##. Also the standard deviations may be given as ##\Delta x_0## and ##\Delta p_0##. Then at ##t=0## the statistical operator according to the maximum-entropy principle can be written in the form
$$\hat{\rho}_0=\hat{\rho}(t=0)=\frac{1}{Z} \exp [-\lambda_1 (\hat{x}_0-\langle x_0 \rangle)^2 - \lambda_2 (\hat{p}_0 - \langle p_0 \rangle)^2],$$
where ##\lambda_1## and ##\lambda_2## are Lagrange parameters. The other 3 Lagrange parameters necessary to obey the constraints of the given information and normalization of the statistical op. are lumped into ##\langle x_0 \rangle##, ##\langle p_0 \rangle##, and ##Z##.

Now in the Heisenberg picture we have (von-Neumann-Liouville equation)
$$\mathrm{d}_t \hat{\rho}(t) = \mathring{\hat{\rho}}=0,$$
and thus
$$\hat{\rho}(\hat{x},\hat{p},t)=\hat{\rho}_0(\hat{x}_0,\hat{p}_0).$$
To get ##\hat{\rho}(t)## we have to express ##\hat{x}_0## and ##\hat{p}_0## simply by ##\hat{x}## and ##\hat{p}##:
$$\hat{x}_0=\hat{x}-\frac{t}{m}\hat{p}_0=\hat{x}-\frac{t}{m}\hat{p}, \quad \hat{p}_0=\hat{p}.$$
Thus the stat. op. at time ##t## is explicitly time-dependent in the Heisenberg picture. Of course as a whole the statistical operator is time-independent due to the von-Neumann-Liouville equation.
 
  • #207
In general one has a time-dependent Hamiltonian ##H(t)##. In the Schrödinger picture there is no doubt that the dynamics of the state is ##i\hbar \dot\rho(t)=[\rho(t),H(t)]##. If ##U(t)## denotes the solution of ##i\hbar\dot U(t)=H(t)U(t)## with ##U(0)=1##, one obtains the unique solution ##\rho(t)=U(t)\rho_0U(t)^*##.

According to what I think is standard practice (perhaps only Fick deviates from it?), the Heisenberg picture is defined by applying to everything a unitary transformation with ##U(t)##, changing all Schrödinger observables ##A(t)## to Heisenberg observables ##A_H(t)=U(t)^*A(t)U(t)##. This necessarily requires that the Schrödinger state ##\rho(t)## changes to the Heisenberg state ##\rho_H(t)=U(t)^*\rho(t)U(t)=\rho_0##, in order that one preserves the formula ##\langle A\rangle_t=Tr~\rho(t)A(t)=Tr~\rho_H(t)A_H(t)## for expectation values. Thus ##\rho_H(t)=\rho_0## is independent of ##t##. Thus the density operator in the Heisenberg picture is time-independent. Period.

In particular, the Schrödinger Hamiltonian ##H(t)## turns into the Heisenberg Hamiltonian ##H_H(t)=U(t)^*H(t)U(t)##. If we denote by ##\partial_t## the derivative with respect to explicit occurrences of ##t##, then the product rule and the Hermiticity of ##H(t)## give
$$i\hbar\dot A_H(t)
=i\hbar\dot U(t)^*A(t)U(t)+U(t)^*i\hbar\dot A(t)U(t)+U(t)^*A(t)i\hbar\dot U(t)\\
=-U(t)^*H(t)A(t)U(t)+U(t)^*i\hbar\dot A(t)U(t)+U(t)^*A(t)H(t)U(t)\\
=-U(t)^*H(t)U(t)A_H(t)+i\hbar\partial_tA_H(t)+A_H(t)U(t)^*H(t)U(t)\\
=-H_H(t) A_H(t)+i\hbar\partial_tA_H(t)+A_H(t)H_H(t)\\
=[A_H(t),H_H(t)]+i\hbar\partial_tA_H(t).
$$
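As a numerical sanity check of this transformation (my own sketch, not part of the post): for a qubit with an explicitly time-dependent Hamiltonian, propagate ##U(t)## from ##i\hbar\dot U(t)=H(t)U(t)## and confirm that ##\mathrm{Tr}\,\rho(t)A## in the Schrödinger picture equals ##\mathrm{Tr}\,\rho_0 A_H(t)## with the constant Heisenberg state. The Hamiltonian, initial state, and step size are arbitrary choices.
[CODE=python]
import numpy as np
from scipy.linalg import expm

hbar = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):   # explicitly time-dependent Hamiltonian (illustrative)
    return 1.0 * sz + 0.5 * np.cos(2.0 * t) * sx

rho0 = np.array([[0.8, 0.3], [0.3, 0.2]], dtype=complex)   # some initial density matrix
A = sz                                                     # a Schroedinger-picture observable

# Propagate U(t) from i*hbar*dU/dt = H(t) U, U(0) = 1, with midpoint-exponential steps
T, n = 4.0, 4000
dt = T / n
U = np.eye(2, dtype=complex)
for k in range(n):
    U = expm(-1j * H((k + 0.5) * dt) * dt / hbar) @ U

rho_S = U @ rho0 @ U.conj().T      # Schroedinger picture: the state carries the time dependence
A_H = U.conj().T @ A @ U           # Heisenberg picture: the observable does, rho_H = rho0 for all t

print("Tr[rho_S(T) A]    =", np.trace(rho_S @ A).real)
print("Tr[rho_0  A_H(T)] =", np.trace(rho0 @ A_H).real)    # identical, by cyclicity of the trace
[/CODE]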
vanhees71 said:
Of course as a whole the statistical operator is time-independent
I don't understand what you mean. Your definition of the Heisenberg picture is apparently different from what I wrote. Does your version have two density operators at each time, the special one that you constructed, and what you call "the statistical operator as a whole", which is time-independent?
 
  • #208
vanhees71 said:
Thus the stat. op. at time ##t## is explicitly time-dependent in the Heisenberg picture. Of course as a whole the statistical operator is time-independent due to the von-Neumann-Liouville equation.
A. Neumaier said:
Does your version have two density operators at each time, the special one that you constructed, and what you call ''the statistical operator as a whole'', which is time-independent?
Ah, you write the same density operator, first deduced in the time-independent form,
in an equivalent explicitly time-dependent way by writing the time-independent ##x(0)## in the explicitly time-dependent form ##x(t)-p_0 t/m=x(t)-p(t)\,t/m##. Of course one may introduce such artificial time-dependence, but this doesn't change the fact that the one and only density operator is independent of time.

In particular, in such a rewrite you can substitute an arbitrary time and get the same time-independent result.

But your formula
vanhees71 said:
$$\hat{S}=\int \mathrm{d}^3 x \left [\hat{\mathcal{H}}(t,\vec{x}) - \vec{\beta}(t,\vec{x}) \cdot \vec{\mathcal{G}}(t,\vec{x})-\mu(t,\vec{x}) \hat{\rho}(t,\vec{x}) \right]/T(t,\vec{x}).$$
is not of this kind, as ##\hat S## produces different operators when you insert different values of ##t##. Thus it cannot be a valid formula in the Heisenberg picture!
 
  • #209
Ok, obviously my attempt to write a statistical operator for local thermal equilibrium is flawed. I'm not sure whether it's possible to construct something like this.

Concerning the single-particle QM example, I'm sure that the statistical operator is correct.

In the Heisenberg picture the total time derivative for ##\hat{\rho}## (as for any operator) is the same as the covariant time derivative and thus it must vanish. So
$$\hat{\rho}(\hat{x},\hat{p},t)=\hat{\rho}(\hat{x}_0,\hat{p}_0)=\text{const. in time}.$$
You have to substitute ##\hat{x}## and ##\hat{p}## (time dependent in the Heisenberg picture) instead of ##\hat{x}_0## and ##\hat{p}_0##. Thus ##\hat{\rho}## is explicitly time dependent, and this explicit time-dependence precisely cancels the commutator term in the total time derivative (by construction).

From the Heisenberg statistical operator you get the Schrödinger statistical operator via the unitary transformation
$$\hat{U}(t)=\exp \left (-\frac{\mathrm{i} t}{\hbar} \hat{H} \right).$$
Note that ##\hat{H}_S=\hat{H}## (no subscript=Heisenberg picture, subscript S=Schrödinger picture).
Thus
$$\hat{\rho}_S=\hat{U} \hat{\rho} \hat{U}^{\dagger}.$$
Then you get
$$\mathrm{d}_t \hat{\rho}_S = (\mathrm{d}_t \hat{U}) \hat{\rho} \hat{U}^{\dagger} + \hat{U} \hat{\rho} \mathrm{d}_t \hat{U}^{\dagger} = \frac{1}{\mathrm{i} \hbar} \hat{U} [\hat{H},\hat{\rho}] \hat{U}^{\dagger} = \frac{1}{\mathrm{i} \hbar} [\hat{H}_S,\hat{\rho}_S],$$
as I already derived above from the general formalism.

If you have an explicitly time dependent Hamiltonian the only difference is in the solution of the equation of motion for the unitary transformation,
$$\mathrm{d}_t \hat{U}(t) = -\frac{\mathrm{i}}{\hbar} \hat{H}(t) \hat{U}(t),$$
which then formally reads
$$\hat{U}(t)=\mathcal{T}_c \exp \left (-\frac{\mathrm{i}}{\hbar} \int_0^ t \mathrm{d}t' \hat{H}(t') \right).$$
The equation of motion in the Schrödinger picture and the Heisenberg picture stays the same.
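A small numerical illustration of the time-ordered exponential (my own sketch; the Hamiltonian and parameters are arbitrary): when ##\hat{H}(t_1)## and ##\hat{H}(t_2)## do not commute, the product of short-time propagators, which is what ##\mathcal{T}_c \exp## denotes, converges as the step is refined and differs from the unordered exponential of ##\int_0^t \mathrm{d}t'\,\hat{H}(t')##.
[CODE=python]
import numpy as np
from scipy.linalg import expm

hbar = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
a, b, w, T = 1.0, 0.8, 3.0, 2.0   # illustrative parameters

def H(t):
    return a * sz + b * np.cos(w * t) * sx   # H(t1) and H(t2) do not commute

def U_ordered(n):
    """Time-ordered exponential as a product of short-time propagators."""
    dt = T / n
    U = np.eye(2, dtype=complex)
    for k in range(n):
        U = expm(-1j * H((k + 0.5) * dt) * dt / hbar) @ U
    return U

# Unordered ("naive") exponential of the integrated Hamiltonian
H_int = a * T * sz + (b / w) * np.sin(w * T) * sx
U_naive = expm(-1j * H_int / hbar)

U1, U2 = U_ordered(2000), U_ordered(4000)
print("step-halving change  :", np.linalg.norm(U2 - U1))       # tiny: the ordered product converges
print("ordered vs unordered :", np.linalg.norm(U2 - U_naive))  # not small: the time ordering matters
[/CODE]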
 
  • #210
vanhees71 said:
You have to substitute ##\hat{x}## and ##\hat{p}## (time dependent in the Heisenberg picture) instead of ##\hat{x}_0## and ##\hat{p}_0##.
Why do I have to? ##x_0=x(0)## is as good an operator in the Heisenberg picture as ##x(t)##, and unlike the latter, the former is time-independent. Suppressing the time dependence of the operators is dangerous in the Heisenberg picture, as operators without it are meaningless.

For a given relativistic quantum field ##\phi(x)##, you surely consider expressions like ##\int dx\, \phi(x) e^{ip\cdot x}## as valid operators in the Heisenberg picture. Why then not their integral over ##p##, which gives ##\phi(0)## (up to a factor)? For a 1-dimensional field, this is just ##x(0)=x_0##.
 
