How can entropy increase in a deterministic universe?

In summary, the conversation discusses the concept of a deterministic universe, one in which the laws of classical physics apply and there is no intrinsic randomness. However, because our description of the universe is coarse-grained, there is an appearance of randomness and unpredictability: we cannot know the exact location and state of each particle, and this uncertainty leads to an increase in entropy over time. The idea of macroscopic state variables also plays a role in explaining this increase in entropy, as they describe a range of possible microscopic states rather than a single one.
  • #1
IvicaPhysics
Let's imagine a deterministic universe. One where quantum mechanics simply doesn't apply. Ok.
This was the universe of classical physics. Atoms exist, and they behave deterministically. Fine. Now, how can entropy increase in this universe, although it has the same laws of physics? In a deterministic universe, the probability of all microstates is not equal: one is 100%, and the others have a 0% chance. Since the universe is deterministic, that holds true. So the normal entropy explanation doesn't hold. Then the question arises again: why does entropy increase and not decrease, if all microstates are not equally likely?
 
  • #2
Since the particles can exchange energy, the microstate is not fixed.
 
  • #3
IvicaPhysics said:
Let's imagine a deterministic universe. One where quantum mechanics simply doesn't apply. Ok.
This was the universe of classical physics. Atoms exist, and they behave deterministically. Fine. Now, how can entropy increase in this universe, although it has the same laws of physics? In a deterministic universe, the probability of all microstates is not equal: one is 100%, and the others have a 0% chance. Since the universe is deterministic, that holds true. So the normal entropy explanation doesn't hold. Then the question arises again: why does entropy increase and not decrease, if all microstates are not equally likely?
https://www.amazon.com/dp/9812832254/?tag=pfamazon01-20
 
  • #4
DrClaude said:
Since the particles can exchange energy, the microstate is not fixed.
How does that cause the entropy to go in the direction of increasing?
 
  • #5
IvicaPhysics said:
How does that cause the entropy to go in the direction of increasing?

Randomness, even within deterministic laws:

 
  • #6
PeroK said:
Randomness, even within deterministic laws:


How can randomness exist in deterministic laws? That is contradictory.
 
  • #7
IvicaPhysics said:
How can randomness exist in deterministic laws? That is contradictory.

Here's the way I think about it:

In classical physics, the state of the system is determined by a point in phase space. For each particle [itex]j[/itex], you give an initial position [itex]\vec{r_j}[/itex] and an initial momentum [itex]\vec{p_j}[/itex]. That takes 6 numbers for each particle, so phase space is 6N dimensional, where N is the number of particles. If you know the system's location in phase space at one time, then you know it at all future times, so there is no nondeterminism (in classical physics, at least).

However, if you're dealing with [itex]10^{23}[/itex] particles, it's impossible to actually know the positions and locations of each particle. What you know instead is some coarse-grained state. For example, instead of knowing the location in phase space, you might know that the system is in some region [itex]R_1[/itex] in phase space. This region contains many different points in phase space. So suppose we ask the question: If we wait [itex]t[/itex] seconds, will the system be in a second region, [itex]R_2[/itex]? Well, some of the points in [itex]R_1[/itex] will make the transition to [itex]R_2[/itex] in that time, and some will not. So all we know is that the system is somewhere in region [itex]R_1[/itex], then we can make at best probabilistic predictions about whether the system will be in region [itex]R_2[/itex] at time [itex]t[/itex].

So the coarse-graining gives the appearance of nondeterminism. Now, that would not be a problem if the spread in phase space remained small for all time (or for long periods of time). But for many systems of interest, the uncertainty of where the system is located in phase space grows rapidly with time, so that quickly you get into the situation where you don't know anything about where the system is in phase space other than that it is at some point with known values of the conserved quantities: total energy, total momentum, total angular momentum, total charge, total number of particles of various types.
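A toy numerical sketch of this coarse-graining effect (everything here is an illustrative assumption: a two-dimensional stand-in for phase space, evolved by Arnold's cat map, which is deterministic, invertible and area-preserving; the regions R1 and R2 and the grid size are arbitrary choices):

[code=python]
import numpy as np

rng = np.random.default_rng(0)

# Toy "phase space": the unit square, evolved by Arnold's cat map,
# a deterministic, invertible, area-preserving (and chaotic) map.
def cat_map(x, y):
    return (2 * x + y) % 1.0, (x + y) % 1.0

# Coarse-grained knowledge: all we know is that the system starts somewhere
# in the region R1 = [0, 0.1) x [0, 0.1).  Sample many candidate microstates.
n = 100_000
x = rng.uniform(0.0, 0.1, n)
y = rng.uniform(0.0, 0.1, n)

def occupied_cells(x, y, bins=20):
    """How many of the bins*bins coarse cells contain at least one candidate point."""
    h, _, _ = np.histogram2d(x, y, bins=bins, range=[[0, 1], [0, 1]])
    return int((h > 0).sum())

for t in range(11):
    p_R2 = np.mean((x >= 0.5) & (x < 0.6) & (y >= 0.5) & (y < 0.6))
    print(f"t={t:2d}  P(in R2) ~ {p_R2:.3f}  occupied cells: {occupied_cells(x, y)}/400")
    x, y = cat_map(x, y)
[/code]

Initially we can say with certainty that the system is not in R2; after a handful of steps the cloud of candidate microstates has spread over essentially the whole square, and the best we can do is a probabilistic statement (roughly the area of R2), even though every individual trajectory is perfectly deterministic.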
 
  • #8
IvicaPhysics said:
How can randomness exist in deterministic laws? That is contradictory.
Have you ever flipped a coin? Was its behavior deterministic or random?
 
  • #9
stevendaryl said:
if you're dealing with ##10^{23}## particles
Even with a small number of degrees of freedom one can lose apparent determinism when the system is sensitive to small changes in the initial conditions. That's how coin flipping works.
 
  • #10
IvicaPhysics said:
How can randomness exist in deterministic laws? That is contradictory.

No. Here's another example. Imagine you have a large tray of dice (let's say just 10 dice to make it easier). They all start showing "1". Then you shake the tray. If you had a perfect deterministic theory you would be able to predict at any point what each die reads. You might predict at time ##t## you have:

1-6-2-2-5-3-1-4-1-2

And, a time later you would predict:

2-4-5-5-1-6-3-3-2-5

But, even though you have predicted this precisely (and let's say you are correct), regarding the values shown by the dice you have randomness. In other words, you deterministically predicted randomness.

Or, perhaps to be slightly more precise, you have deterministically predicted disorder.
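A tiny sketch of "deterministically predicted disorder" (the update rule below is invented purely for illustration; it is not a physical model of shaking a tray):

[code=python]
# Ten dice, all starting at "1", updated by a fixed deterministic rule:
# the next reading of each die is a function of the current readings,
# its position on the tray, and the step number.
def shake(faces, step):
    n = len(faces)
    return [((faces[i] * 5 + faces[(i + 1) % n] * 3 + i + step) % 6) + 1
            for i in range(n)]

faces = [1] * 10
for t in range(1, 6):
    faces = shake(faces, t)
    print(f"t={t}: {'-'.join(map(str, faces))}")
[/code]

Running this twice gives exactly the same printout, so the prediction is fully deterministic, yet the sequence of readings quickly looks as disordered as real dice.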
 
  • #11
The answers to the OP, so far, are divided between two different explanations.
1) Imprecise knowledge of the system's state(s) explains entropy increase
2) Deterministic evolution of the system's microscopic state results in a situation where the same macroscopic state variables describe more and more "possible" microscopic states.

It seems to me that explanation 2) either needs to appeal to explanation 1) or it needs to make sure the macroscopic state variables are defined as averages over time or as averages over a collection of different systems. If a macroscopic state variable is defined as a function of the instantaneous microscopic state(s) of the individual particles in a system then the value of the macroscopic state (at time t) would not describe more than one "possible" state of the system if the "at time t" is included as part of the information in the macroscopic state. The "at time t" alone would specify only one possible state of a deterministic system whose initial condition was precisely known.
 
  • #12
Stephen Tashi said:
The answers to the OP, so far, are divided between two different explanations.
1) Imprecise knowledge of the system's state(s) explains entropy increase
2) Deterministic evolution of the system's microscopic state results in a situation where the same macroscopic state variables describe more and more "possible" microscopic states.

It seems to me that explanation 2) either needs to appeal to explanation 1) or it needs to make sure the macroscopic state variables are defined as averages over time or as averages over a collection of different systems. If a macroscopic state variable is defined as a function of the instantaneous microscopic state(s) of the individual particles in a system then the value of the macroscopic state (at time t) would not describe more than one "possible" state of the system if the "at time t" is included as part of the information in the macroscopic state. The "at time t" alone would specify only one possible state of a deterministic system whose initial condition was precisely known.

I'm not sure I understand that, but to borrow Mr Feynman's example. If you have plain and blue dyed water kept separate, then allowed to mix, the mixture will always become a roughly even shade of blue everywhere. There are no initial conditions that would allow the water to remain in defined blue and non-blue segments.

Chaotic behaviour makes a deterministic solution impossible. But, even if we assume we can determine the outcome, it is always a roughly even mixture.

Or, to put it another way, even in a deterministic system, once the waters have started to mix, there is no going back.
 
  • #13
PeroK said:
Chaotic behaviour makes a deterministic solution impossible. But, even if we assume we can determine the outcome, it is always a roughly even mixture.

The hypothesis in the original post is that we have a deterministic system. I think the OP intends to assume we know the initial conditions of the system. Of course, one way to answer the OP is to say that those assumptions are false. That seems to be what you assert. However, I'm curious if the OP's question can be answered if we assume those assumptions are true.

Which definition of entropy are we using? If we use a definition of entropy that employs the concept of probability, we must force probability into the situation under consideration. I agree that Bayesian probability can be introduced into a deterministic situation by considering imperfect knowledge of initial conditions. Sensitivity of the trajectory of the system to initial conditions makes it attractive to introduce Bayesian probability into a model of the system.

If we are going to exclude probability from the situation under consideration then (as far as I can see) we are talking about a measure of entropy defined in terms of the number of "possible" microstates that exist for a given value of some macrostate description of the system.

It isn't clear to me whether in your post #10, you are asserting that we can't exclude probability from our model or whether your post #10 assumes a completely deterministic (and completely known) situation.
 
  • #14
Stephen Tashi said:
The hypothesis in the original post is that we have a deterministic system. I think the OP intends to assume we know the initial conditions of the system. Of course, one way to answer the OP is to say that those assumptions are false. That seems to be what you assert. However, I'm curious if the OP's question can be answered if we assume those assumptions are true.

Which definition of entropy are we using? If we use a definition of entropy that employs the concept of probability, we must force probability into the situation under consideration. I agree that Bayesian probability can be introduced into a deterministic situation by considering imperfect knowledge of initial conditions. Sensitivity of the trajectory of the system to initial conditions makes it attractive to introduce Bayesian probability into a model of the system.

If we are going to exclude probability from the situation under consideration then (as far as I can see) we are talking about a measure of entropy defined in terms of the number of "possible" microstates that exist for a given value of some macrostate description of the system.

It isn't clear to me whether in your post #10, you are asserting that we can't exclude probability from our model or whether your post #10 assumes a completely deterministic (and completely known) situation.

We can exclude both probability and chaos. Take a simpler system along the same lines as the water: perhaps 6 red and 6 blue cubes on a tray, initially arranged at rest with the blue on one side and the red on the other.

Now, if we assume the tray is going to be moved according to some known but irregular pattern, then we can deterministically predict disorder, in the sense that we will end up with blue and red cubes mixed up.

Without any unknowns or any chaotic behaviour we predict deterministically that disorder results.

In a simple system you may be able to find precise sequences of movements that maintain order, but these movements have to be chosen precisely to match the existing conditions. There are games that are based on this: trying to steer balls into holes by tilting the game in just the right way. But, unless the movements are coordinated with the current conditions, disorder results.

In the water example, however, there would be no way to shake the container in order to tilt all the blue water to one side. There need be no randomness, per se, as any movement will only speed the disorder.
 
  • #15
PeroK said:
We can exclude both probability and chaos. Take a simpler system along the same lines as the water: perhaps 6 red and 6 blue cubes on a tray, initially arranged at rest with the blue on one side and the red on the other.

Now, if we assume the tray is going to be moved according to some known but irregular pattern, then we can deterministically predict disorder, in the sense that we will end up with blue and red cubes mixed up.
How are you defining the entropy of that system? If the entropy at time n (step n) is defined as a function of the state of the system at step n then there are certain configurations of the system that have the maximum entropy. Are you asserting that no matter what deterministic laws the system follows, it will always move from a state of maximum entropy to another state of maximum entropy? I see no reason to believe that.

In order to get your model to work, you have to introduce the notion of disorderliness into how the cubes move. So you are showing that entropy increases with time in a deterministic system whose deterministic laws are "disorderly". Now, can we define "disorderly" in a non-circular way? I.e., can we define "disorderly" deterministic laws to be distinct from "those deterministic laws that cause entropy to increase"?
 
  • #16
Stephen Tashi said:
How are you defining the entropy of that system? If the entropy at time n (step n) is defined as a function of the state of the system at step n then there are certain configurations of the system that have the maximum entropy. Are you asserting that no matter what deterministic laws the system follows, it will always move from a state of maximum entropy to another state of maximum entropy? I see no reason to believe that.

In order to get your model to work, you have to introduce the notion of disorderliness into how the cubes move. So you are showing that entropy increases with time in a deterministic system whose deterministic laws are "disorderly". Now, can we define "disorderly" in a non-circular way? I.e., can we define "disorderly" deterministic laws to be distinct from "those deterministic laws that cause entropy to increase"?

We may be talking at cross purposes. To expand:

By randomness I mean that an object may do one thing or another and what it will do under certain conditions is not known.

By chaos I mean that the initial conditions cannot be determined accurately enough to predict what would happen to an object (especially after a passage of time where the state of the system has many interacting objects).

The OP's point, I believe, is that if the laws of nature are deterministic there is no reason to suppose an evolution to a disordered state.

In the water example, you could still have a completely deterministic model, with no randomness or chaos. The movement of every water molecule is known and there is a completely deterministic outcome.

It would be possible, in theory, to specify initial conditions where the water did not mix, but these would have to be very precisely coordinated. Left to itself, and totally deterministically, (almost) any set of initial conditions for the water results in a mixing of the blue. There is no need to assume randomness or chaos for this. It happens naturally and deterministically for (almost) any initial conditions.

Another example is mixing hot and cold water. Even with everything known and deterministic, it is impossible to specify initial conditions where the temperature did not equalise. You can't specify initial conditions for a litre of hot water and a litre of cold water and a precise, coordinated means of mixing them, even molecule by molecule, which does not result in temperature equalisation.
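A crude sketch along these lines (not a model of water: just equal-mass particles in 1D whose "collisions" swap velocities, with the whole collision sequence fixed in advance by a seed, so the run is fully deterministic and reproducible):

[code=python]
import numpy as np

rng = np.random.default_rng(42)    # fixed seed: the collision schedule is decided in advance

N = 1000
# 1D toy: equal-mass particles, the "hot" half fast and the "cold" half slow.
v = np.concatenate([np.full(N // 2, 2.0), np.full(N // 2, 0.5)])
hot = np.arange(N) < N // 2        # labels of the originally hot particles

def avg_kinetic(v, mask):
    return float(np.mean(0.5 * v[mask] ** 2))

for sweep in range(6):
    print(f"sweep {sweep}: hot group <E> = {avg_kinetic(v, hot):.3f}, "
          f"cold group <E> = {avg_kinetic(v, ~hot):.3f}")
    # one sweep = N head-on elastic collisions between equal masses,
    # each of which simply exchanges the two velocities (energy and momentum conserved)
    for _ in range(N):
        i, j = rng.integers(0, N, size=2)
        v[i], v[j] = v[j], v[i]
[/code]

Whatever schedule is picked, the two group averages converge toward a common value within a few sweeps: the "temperatures" equalize without any randomness in the collision law itself.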

This aligns with the definition of entropy as a measure of thermodynamic equilibrium. Even without chaos or randomness, thermodynamic equilibrium naturally and deterministically results.
 
  • #17
PeroK said:
It would be possible, in theory, to specify initial conditions where the water did not mix, but these would have to be very precisely coordinated.
And it would be possible to have deterministic laws where the water mixed and then unmixed - analogous to how repeated "perfect shuffles" bring a deck of cards back to its original state. (I think it would also be possible to have probabilistic laws where this has some probability of happening.)

Left to itself, and totally deterministically, (almost) any set of initial conditions for the water results in a mixing of the blue.
That is empirically true and the OP doesn't appear to be a second law doubter.

(Your reference to "almost any" seems to introduce probability into the picture - as if we are no longer considering a deterministic physical system with given initial conditions, but instead are "randomly selecting" some initial conditions from a set of possible initial conditions.)

Can the OP's question be answered in any non-empirical way? Can we give an answer of the form: "Yes, some mathematically possible deterministic laws would result in an up-and-down variation of entropy with respect to time, but the deterministic laws followed by nature are such that ..[?]... and, as a consequence, the entropy of real physical systems does not decrease with respect to time"?

Another example is mixing hot and cold water. Even with everything known and deterministic, it is impossible to specify initial conditions where the temperature did not equalise.
I agree - since the deterministic laws of heat transfer tell the system to equalize the temperature. This is what I mean by a circular explanation of how determinism can lead to an increase in entropy. If we assert that the deterministic laws of nature are defined by rules that tell them to increase entropy, then they obviously increase entropy.
 
  • #18
Stephen Tashi said:
I agree - since the deterministic laws of heat transfer tell the system to equalize the temperature. This is what I mean by a circular explanation of how determinism can lead to an increase in entropy. If we assert that the deterministic laws of nature are defined by rules that tell them to increase entropy, then they obviously increase entropy.

The deterministic laws don't know they are increasing entropy: when two molecules collide they follow some deterministic law of molecular collision. What is not possible is to use these laws of molecular collision to avoid heat transfer. You would need different laws than conservation of energy and momentum. You would need a law that meant when two particles collided the one with the least energy would lose energy and the one with more energy would gain energy (or, at least, that no energy exchange was possible).

But, the simple laws governing particle collisions are not in themselves defined by increasing entropy. Most obviously, if one particle is at rest and another is moving, there is no law of collision that can maintain this. Any collision must result in a gain of energy for the at-rest particle and a loss for the moving particle.
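For concreteness, the standard 1D elastic-collision result (conservation of momentum and kinetic energy alone fixes the outcome) shows the at-rest particle always gains energy; the masses and velocities below are arbitrary example values:

[code=python]
def elastic_1d(m1, v1, m2, v2):
    """Final velocities of a 1D elastic collision (momentum and kinetic energy conserved)."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# A moving particle (mass 1, v = 3) hits a particle at rest (mass 2, v = 0).
v1f, v2f = elastic_1d(1.0, 3.0, 2.0, 0.0)
print(v1f, v2f)                                # -1.0 2.0: the struck particle moves off
print(0.5 * 1 * 3**2,                          # kinetic energy before: 4.5
      0.5 * 1 * v1f**2 + 0.5 * 2 * v2f**2)     # kinetic energy after:  4.5 (conserved)
[/code]

For any finite masses and any nonzero incoming speed, the struck particle picks up a nonzero velocity, which is the point being made here.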

My suggestion is to watch the Feynman lecture.
 
  • #19
PeroK said:
What is not possible is to use these laws of molecular collision to avoid heat transfer. You would need different laws than conservation of energy and momentum.

Then, returning to my question:

Can we give an answer of the form: "Yes, some mathematically possible deterministic laws would result in an up-and-down variation of entropy with respect to time, but the deterministic laws followed by nature are such that ..[?]... and, as a consequence, the entropy of real physical systems does not decrease with respect to time"?

I agree that your suggestion:
You would need a law that meant when two particles collided the one with the least energy would lose energy and the one with more energy would gain energy (or, at least, that no energy exchange was possible).

is sufficient to decrease entropy, but couldn't other types of deterministic laws also do it? For example, there could be laws where there was no general policy about which particle gained or lost energy. There could be a deterministic result, but a result that depended on each possible collision.

A possible answer to the OP is:

"Yes, some mathematically possible deterministic laws would result in an up-and-down variation of entropy with respect to time, but the deterministic laws followed by nature are such that the distribution of kinetic energy in the component parts of a system equalizes as time passes and, as a consequence, the entropy of real physical systems does not decrease with respect to time."

To refine that explanation we would have to be clear about the relation between "entropy" of a system and "the distribution of kinetic energy" over that system.
 
  • #20
I was only really making the point that although we have uncertainty at the quantum level and chaos at the macro level, neither of these is necessary for the increase in entropy.

Quite what laws of nature would be required to prevent the increase in entropy I'm not sure.

The other aspect, which is perhaps more important than the laws of particle collisions, would be that thermally different substances would need to be physically kept apart. In the water example it's not enough that individual molecules retain their energy; something must also prevent high-energy and low-energy particles from getting mixed up.
 
  • #21
Consider a classical gas made up of little balls (I'll call them atoms) that only interact through collision, and put them in a sufficiently big box, with the walls of the box acting only on the balls through perfectly elastic collisions. Start with the atoms all bunched up in a corner, with each ball assigned an initial velocity. Let the system evolve.

This system is purely deterministic, and for the purpose of my point, we can even assume that we have an infinite precision on the position and velocity of these atoms and neglect chaos. We can consider the distribution of density of the gas as a measure of entropy: a uniform distribution corresponds to a high entropy state, and the initial state is a very low entropy one. So when the system evolves from its initial state, entropy will increase.

Now if the system is made up of 10 atoms, we will, after a not too long time, observe a decrease in entropy: there is a high probability that the atoms will bunch together again. If the system is made up of 100 atoms, it will take longer, but we still expect to be able to measure an entropy close to the initial value.

But if you take Avogadro's number of atoms, the system will quickly reach a maximum entropy and the fluctuations around that maximum value will be so tiny as to be, in essence, impossible to measure. The time scale for even measuring a significant variation of the entropy (and nowhere close to the initial entropy) is orders of magnitude longer than the age of the universe. And that's just for one mole of the stuff!

So on the scale of the universe, the number of particles involved is so huge that entropy will only ever be measured as increasing. The 2nd law of thermodynamics is only a statistical statement ("we more often measure more probable things"), but the numbers involved are so big that we are confident that it is a "law".
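A rough numerical sketch of this, with simplifications: the atoms here don't collide with each other, only with the walls; the seed is used only to pick one definite initial microstate, and the "entropy" is the Shannon entropy of the coarse-grained density over a 10x10 grid. All sizes and step counts are arbitrary illustrative choices.

[code=python]
import numpy as np

def coarse_entropy(x, y, bins=10):
    """Shannon entropy of the coarse-grained density over a bins x bins grid."""
    h, _, _ = np.histogram2d(x, y, bins=bins, range=[[0, 1], [0, 1]])
    p = h.ravel() / h.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def run(n_atoms, steps=200, dt=0.01, seed=1):
    rng = np.random.default_rng(seed)      # only used to pick one definite initial microstate
    x = rng.uniform(0.0, 0.1, n_atoms)     # bunched up in a corner of the unit box
    y = rng.uniform(0.0, 0.1, n_atoms)
    vx = rng.normal(0.0, 1.0, n_atoms)
    vy = rng.normal(0.0, 1.0, n_atoms)
    entropies = []
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        # perfectly elastic walls: reflect the position, flip the velocity
        for pos, vel in ((x, vx), (y, vy)):
            low, high = pos < 0, pos > 1
            pos[low] *= -1
            pos[high] = 2 - pos[high]
            vel[low | high] *= -1
        entropies.append(coarse_entropy(x, y))
    return entropies

for n in (10, 1000, 100_000):
    S = run(n)
    print(f"N={n:6d}  S(start)={S[0]:.2f}  S(end)={S[-1]:.2f}  "
          f"late-time spread={np.std(S[100:]):.3f}")
[/code]

With 10 atoms the coarse-grained entropy fluctuates wildly and occasionally drops well below its maximum; with 100,000 atoms it climbs to near its maximum and the late-time fluctuations are already tiny, which is the trend described above.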
 
  • #22
DrClaude said:
So on the scale of the universe, the number of particles involved is so huge that entropy will only ever be measured as increasing. The 2nd law of thermodynamics is only a statistical statement ("we more often measure more probable things"), but the numbers involved are so big that we are confident that it is a "law".

There are a few posts on here where people say something like: "if you wait long enough, all the air molecules in a room will spontaneously move to one corner, as the probability of this is non-zero".

First, even if you take a raw calculation that the probability of any molecule being in, say, a given 10% region of the room is 10% and multiply this up by the number of molecules, you get an extraordinarily low number.

But, thinking classically, this would need to happen in stages. Once, say, 50% of the molecules have got themselves into one 10% region, the physical difficulty increases (hence the probability decreases) of another additional molecule getting there. So, the calculation must take this into account and the probability of each additional molecule making it becomes lower and lower.
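The raw (independent-molecule) version of that calculation is easy to do in logarithms; the staged argument above only makes the true probability smaller still. The 10% region and the molecule counts below are just illustrative:

[code=python]
import math

# Naive estimate: treat molecules as independent, each with probability 0.1 of
# being found in the chosen 10% of the room, so P = 0.1 ** N.  Work in logs.
def log10_prob_all_in_fraction(n_molecules, fraction=0.1):
    return n_molecules * math.log10(fraction)

for n in (100, 10**6, 6 * 10**23):      # up to roughly a mole of molecules
    print(f"N = {n:.1e}: P ~ 10^({log10_prob_all_in_fraction(n):.3g})")
[/code]

Even for a mere million molecules the probability is of order 10^(-10^6); for anything of everyday size the number is, as the next paragraph says, physically irrelevant.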

In any case, these probabilities are so low as to be irrelevant (IMO). And, the real issue for me is that, once someone has found this to be "possible" in the sense that it has some non-zero probability, they then imbue the statement with some physical significance. That, IMO, is the big mistake. That it's somehow physically significant that the air might behave like this, or that you might walk through a solid wall, when the probability is so low as to be physically irrelevant.
 
  • #23
Stephen Tashi said:
Can we give an answer of the form: "Yes, some mathematically possible deterministic laws would result in an up-and-down variation of entropy with respect to time, but the deterministic laws followed by nature are such that ..[?]... and, as a consequence, the entropy of real physical systems does not decrease with respect to time"?

Hmm. To me, that's too weak. It's not just that our laws of physics happen to be entropy-increasing; it's hard to imagine laws that don't increase entropy.

Let's look at a particularly simple model.

[Attached image: entropy.jpg, a drawing of the space of possible states, with some states colored blue and others yellow]
This drawing represents the space of possible states for some hypothetical system. Let's assume that there are finitely many possible states. Some of these states are blue states and some are yellow states. (It occurred to me after making the drawing that I should have chosen red and blue). Let's assume that there are 100 times as many blue states as yellow states. This is before we've said anything about what the "laws of physics" are. Let's assume that the "laws of physics" are deterministic transition rules (we'll let time be discrete) saying that if the system is currently in such-and-such state, then in the next time interval, it will be in such-and-such state. Let's also assume that the rules are reversible: every state has at most one state it could have come from.

The statistical mechanics definition of entropy tells us that the blue region has higher entropy than the yellow region, because entropy is just a counting of the number of states. (Technically, it's proportional to the log of this number.) Without saying anything more about the laws of physics, we know that 99% of the blue states must make transitions back into the blue region. So no matter what transition rules we come up with, for most states, entropy will be non-decreasing. The only way that we could have entropy decreasing, on the average, is if most blue states made a transition to yellow states. And that's not possible with reversible transition rules.

Now, we could come up with rules such that entropy remained constant: For example, if yellow states only made transitions to other yellow states, then entropy would remain constant. But microscopic reversibility implies that macroscopically, entropy is (usually) non-decreasing.
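The counting argument can be checked directly on a toy version of this model. Here the reversible rule is simply a permutation of the 10,000 states; which permutation we pick is an arbitrary choice, but once picked the dynamics is deterministic:

[code=python]
import numpy as np

rng = np.random.default_rng(0)     # only used to pick one example transition rule

n_states = 10_000
yellow = np.zeros(n_states, dtype=bool)
yellow[:100] = True                # 100 yellow (low-entropy) states, 9900 blue

# A reversible deterministic rule: a permutation, so every state has
# exactly one successor and exactly one predecessor.
successor = rng.permutation(n_states)

blue_to_yellow = int(np.sum(~yellow & yellow[successor]))
yellow_to_blue = int(np.sum(yellow & ~yellow[successor]))
print(f"blue -> yellow (entropy-decreasing): {blue_to_yellow} of 9900 blue states")
print(f"yellow -> blue (entropy-increasing): {yellow_to_blue} of 100 yellow states")
[/code]

However the permutation is chosen, at most 100 blue states can map into the yellow region, because only 100 yellow slots exist and no two states share a successor; so entropy-decreasing transitions are rare by construction.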
 
  • #24
@PeroK: I completely agree. I purposefully mentioned the probability of a very large system significantly deviating from the equilibrium value, not even going back to the initial state, as being too small to care about.
 
  • #25
Some comments on the previous answers:

DrClaude said:
This system is purely deterministic, and for the purpose of my point, we can even assume that we have an infinite precision on the position and velocity of these atoms and neglect chaos.

Now if the system is made up of 10 atoms, we will, after a not too long time, observe a decrease in entropy: there is a high probability that the atoms will bunch together again.

That explanation begins by assuming determinism and then refers to probability.

I agree with a probabilistic model. However, the OP asks about a deterministic scenario. If we take a deterministic scenario seriously (literally) then can we formulate a coherent argument that introduces probability into the picture? We can say that things get all complicated, so we have to talk about probability. However, that contradicts the literal interpretation of the deterministic scenario.
PeroK said:
And, the real issue for me is that, once someone has found this to be "possible" in the sense that it has some non-zero probability, they then imbue the statement with some physical significance. That, IMO, is the big mistake. That it's somehow physically significant that the air might behave like this, or that you might walk through a solid wall, when the probability is so low as to be physically irrelevant.

That comments on a probabilistic model without mentioning determinism. A discussion of whether events with small non-zero probabilities are physically significant would be a digression from the OP, which asks about a non-probabilistic scenario. (I agree it would be an interesting digression.)

stevendaryl said:
Let's assume that the "laws of physics" are deterministic transition rules (we'll let time be discrete) saying that if the system is currently in such-and-such state, then in the next time interval, it will be in such-and-such state. Let's also assume that the rules are reversible: every state has at most one state it could have come from.

The statistical mechanics definition of entropy tells us that the blue region has higher entropy than the yellow region, because entropy is just a counting of the number of states.

I don't understand how the number of states is related to the picture. Does the blue region have a higher number of states because it is a larger area? The scenario involves a number of different states and also a "system" in one of the states or "parts of a system" that are in various states. What does a point in the diagram represent? - a state? or a state and also a part of a system that is "in" that state?

Without saying anything more about the laws of physics, we know that 99% of the blue states must make transitions back into the blue region.

How do we know that? Where does "99%" come from in a deterministic setting? Are we imagining a flow so a "thing" at a point moves to a nearby point as time passes? I can see how that would imply that most things at blue points would move to other blue points over a small time increment. But that assumes more than reversibility: it assumes the "things" have a continuous trajectory. So we should make that assumption explicit.

The only way that we could have entropy decreasing, on the average, is if most blue states made a transition to yellow states. And that's not possible with reversible transition rules.

Why isn't it possible? It isn't clear what property the two colors distinguish. You mentioned 99% of the blue states must make a transition back into the blue region, so I assume 1% of the blue states make a transition from the blue state to a yellow state.

As I understand "reversibility" there can't be any "absorbing states". We can't have a transition rule like ##b_1 \rightarrow b_2 \rightarrow b_3 \rightarrow b_1 \rightarrow b_1## because if we are in state ##b_1## we wouldn't know if we got there from previous state ##b_3## or previous state ##b_1##. So if we have a finite number of states, we must have cyclic transition rules like ##b_1 \rightarrow b_2 \rightarrow b_3 \rightarrow b_1 \rightarrow b_2 ...## Do the colors in the diagram have something to do with these cycles?

Now, we could come up with rules such that entropy remained constant: For example, if yellow states only made transitions to other yellow states, then entropy would remain constant. But microscopic reversibility implies that macroscopically, entropy is (usually) non-decreasing.

"Usually" sounds like an attempt to put probability back into the scenario. I thought the purpose of discussing the diagram was to prove somehow that "microscopic reversibility implies macroscopic entropy is non-decreasing". So, in the end, must we resort to assuming that is the case in order to keep yellow states from only transitioning to other yellow states?
 
  • #26
Stephen Tashi said:
I don't understand how the number of states is related to the picture. Does the blue region have a higher number of states because it is a larger area?

Yes, that's the intention. My description used a finite number of states, so that it's possible to count numbers of states. For a more realistic model with an infinite number of states, you would need a notion of "volume" in state space.

The scenario involves a number of different states and also a "system" in one of the states or "parts of a system" that are in various states. What does a point in the diagram represent? - a state? or a state and also a part of a system that is "in" that state?

It's just states. The system is in some location at every moment in time. Some of the locations are colored "yellow" and some of the locations are colored "blue".

How do we know that? Where does "99%" come from in a deterministic setting?

In my simple model, it's just counting. There are, say, 10,000 possible states. 100 of them are labeled "yellow" and 9900 are labeled "blue". Whatever transition rule you come up with for determining which states evolve into which other states, most blue states will evolve into blue states. For a typical state, entropy will not decrease.

Are we imagining a flow so a "thing" at a point moves to a nearby point as time passes? I can see how that would imply that most things at blue points would move to other blue points over a small time increment. But that assumes more than reversibility: it assumes the "things" have a continuous trajectory. So we should make that assumption explicit.

No, there is no assumption about continuity (you can't really have continuity with only finitely many states, which is what I was assuming). You don't need that. If there are only 100 possible yellow states, then that means that at most 100 out of the 9900 blue states can make transitions to yellow states.

Why isn't it possible?

If the transition relation is reversible, then it means that there is a unique predecessor state for each state. So if there are 100 yellow states, then there are at most 100 states that are predecessor states to yellow states. That means that out of 10,000 states, only 100 of them make transitions to yellow states. The rest make transitions to blue states.

It isn't clear what property the two colors distinguish.

It doesn't matter. For concreteness, assume that you have a machine that has two internal counters, [itex]i[/itex] and [itex]j[/itex]. It has two lights, one colored blue, and one colored yellow. For some combinations of [itex]i[/itex] and [itex]j[/itex], the blue light will be on. For all other combinations, the yellow light will be on. Which combinations produce which light colors can be represented by a diagram such as the one I gave: [itex](i,j)[/itex] results in a yellow light if the pixel with coordinates [itex](i,j)[/itex] is colored yellow, and otherwise, [itex](i,j)[/itex] results in a blue light.

You mentioned 99% of the blue states must make a transition back into the blue region

At least 99%.

so I assume 1% of the blue states make a transition from the blue state to a yellow state.

Possibly.

As I understand "reversibility" there can't be any "absorbing states". We can't have a transition rule like ##b_1 \rightarrow b_2 \rightarrow b_3 \rightarrow b_1 \rightarrow b_1## because if we are in state ##b_1## we wouldn't know if we got there from previous state ##b_3## or previous state ##b_1##.

Right. Or the simpler way to think about it is that every state has a unique predecessor state: For every state [itex]b_1[/itex] there is at most one state [itex]b_2[/itex] such that [itex]b_2 \rightarrow b_1[/itex].

So if we have a finite number of states, we must have cyclic transition rules like ##b_1 \rightarrow b_2 \rightarrow b_3 \rightarrow b_1 \rightarrow b_2 ...## Do the colors in the diagram have something to do with these cycles?

No, it's just a partition of the states into states that are observationally distinguishable. Like I say, imagine a machine whose only visible characteristic is which of two lights, yellow or blue, is on. There is more going on inside the machine, but we don't have access to that.

I thought the purpose of discussing the diagram was to prove somehow that "microscopic reversibility implies macroscopic entropy is non-decreasing".

That was the purpose.

"Usually" sounds like an attempt to put probability back into the scenario.

My way of understanding entropy is that it is inherently about lack of knowledge, so it's inherently about subjective probability. You can have subjective probability in a deterministic setting, because even though the system is in some definite state and evolves deterministically, we don't know which state it is in, so we can at best make probabilistic predictions about the future.

So, in the end, must we resort to assuming that is the case in order to keep yellow states from only transitioning to other yellow states?

No, I didn't assume that that is the case. I only assumed microscopic reversibility. It's possible that some blue states make transitions to yellow states. But at most 1% of them make such transitions.
 
  • #27
Stephen Tashi said:
That explanation begins by assuming determinism and then refers to probability.

I agree with a probabilistic model. However, the OP asks about a deterministic scenario. If we take a deterministic scenario seriously (literally) then can we formulate a coherent argument that introduces probability into the picture? We can say that things get all complicated, so we have to talk about probability. However, that contradicts the literal interpretation of the deterministic scenario.

I think you are misunderstanding the role of probability here. We are talking about uncertain initial conditions and deterministic laws.

We have deterministic laws but we cannot know the initial conditions for every particle. So, we have to analyse the system assuming an unknown initial state. But, this probabilistic analysis (using deterministic laws) leads to general conclusions about all initial states, namely that they all result in an increase in entropy (*).

A probabilistic law would say that the result of an individual interaction cannot be known. A probabilistic analysis can be based on deterministic laws.

(*) For simple systems there may be initial conditions that would avoid entropy increase, but for large, complex systems the probability of having initial conditions that defied entropy increase is so low as to be physically irrelevant: we never and can never see that behaviour.

So:

Proposition 1: if you start with a litre of hot water and a litre of cold water and allow them to mix then they cannot inevitably reach a temperature equilibrium through deterministic laws of particle collisions.

I am saying proposition 1 is false.

Proposition 2: if you start with a litre of hot water and a litre of cold water and assume a plausible set of initial conditions (from previous observations of hot and cold water), then using deterministic laws the prediction is that the temperature will always reach equilibrium.

I am saying proposition 2 is true. Further, if you repeat the prediction for as many sets of initial conditions as possible, in all cases equilibrium results.

Proposition 3: you cannot specify initial conditions whereby the water temperature does not reach equilibrium. Obviously, unless you prevent the water from mixing.

I am saying proposition 3 is true.

The conclusion from propositions 2-3 is that in practice the temperature of water always reaches equilibrium, even with deterministic laws. This conclusion applies for any initial conditions and does not rely on water particles themselves following non-deterministic laws.

The problem we have avoiding the use of probability is that in all complex cases, we cannot specify the initial conditions completely. But, it's the initial conditions that are probabilistic, not the laws themselves. That's the key point.
 
  • #28
stevendaryl said:
It's just states. The system is in some location at every moment in time.

Then, as I asked in a previous post, how does the entropy "of a system" involve a "number of states" if the system is in only one state at a time? It seems to me that if we define entropy so it can involve a "number of states" ##\gt 1## then the definition of entropy must encompass several different systems or several different times.

For a typical state, entropy will not decrease.
Is it a "state" that has entropy, or is it a "system" (in a particular state) that has entropy? It's not clear to me how entropy "of a state" or "of a system" is defined vis-a-vis the picture.
No, there is no assumption about continuity (you can't really have continuity with only finitely many states, which is what I was assuming). You don't need that. If there are only 100 possible yellow states, then that means that at most 100 out of the 9900 blue states can make transitions to yellow states.
If the transition relation is reversible, then it means that there is a unique predecessor state for each state. So if there are 100 yellow states, then there are at most 100 states that are predecessor states to yellow states. That means that out of 10,000 states, only 100 of them make transitions to yellow states. The rest make transitions to blue states.
I understand what you are saying, but I don't understand what those facts have to do with entropy because it isn't yet clear how entropy is defined vis-a-vis the diagram.
 
  • #29
PeroK said:
I think you are misunderstanding the role of probability here. We are talking about uncertain initial conditions and deterministic laws.

We have deterministic laws but we cannot know the initial conditions for every particle. So, we have to analyse the system assuming an unknown initial state. But, this probabilistic analysis (using deterministic laws) leads to general conclusions about all initial states, namely that they all result in an increase in entropy (*).

As I said, I don't object to a probabilistic analysis. But, this only answers the OP in an empirical sense. You are saying that, in practice, we cannot do a deterministic analysis, so we resort to probability.

The empirical treatment of the problem is not a solution to the theoretical question of why deterministic laws would imply that entropy does not decrease with time. As far as I can see, there is no guarantee that arbitrarily selected deterministic transition laws will have such a result.

Can we show that deterministic transition laws that have particular properties imply that entropy increases with time? I think stevendaryl is attempting to show that it is sufficient to require that the transition laws are reversible.
 
  • #30
Stephen Tashi said:
Then, as I asked in a previous post, how does the entropy "of a system" involve a "number of states" if the system is in only one state at a time?

It's only one state at a time, but all that we know about that state is that it is a yellow state or a blue state. If it's yellow, then the system is in one of 100 possible microstates. If it's blue, it's in one of 9900 possible microstates.

Entropy is (in the statistical mechanics view) a measure of how uncertain we are about the microstate, given the observable macrostate. The more realistic system is a tank full of gas. The macrostate--what we know about the state of the gas--is completely described by:
  1. The type of gas.
  2. The quantity of gas.
  3. The temperature of the gas.
  4. The volume of the gas.
The microstate classically would be the positions and momenta of every single molecule of the gas. Naturally, there is a huge number of microstates that are consistent with our macrostate.
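To make the bookkeeping concrete, here is one way to phrase the two levels of description in code (the field names and toy sizes are purely illustrative):

[code=python]
from dataclasses import dataclass
import numpy as np

@dataclass
class Macrostate:
    gas: str              # 1. type of gas
    amount_mol: float     # 2. quantity of gas
    temperature_K: float  # 3. temperature
    volume_m3: float      # 4. volume

@dataclass
class Microstate:
    positions: np.ndarray  # shape (N, 3): a position for every molecule
    momenta: np.ndarray    # shape (N, 3): a momentum for every molecule

# A macrostate is a handful of numbers...
macro = Macrostate(gas="argon", amount_mol=1.0, temperature_K=300.0, volume_m3=0.0224)

# ...while even a tiny toy microstate already carries 6N numbers, and
# astronomically many different microstates are consistent with `macro`.
N = 5
micro = Microstate(positions=np.zeros((N, 3)), momenta=np.zeros((N, 3)))
print(macro)
print(micro.positions.size + micro.momenta.size, "numbers in this toy microstate")
[/code]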

Is it a "state" that has entropy, or is it a "system" (in a particular state) that has entropy?

In the statistical mechanics notion of entropy, there are two kinds of states:
  1. The microstate, which in my drawing would be represented by a pixel in the picture.
  2. The macrostate, which is an observable property of the microstate. In my example, it would be the color of the pixel.
The entropy of a microstate is proportional to the log of the number of microstates with the same macrostate. So in my example, the entropy of a yellow state would be the log of the area that is colored yellow. The entropy of a blue state would be the log of the area that is colored blue.

With the numbers that I made up, the entropy of a yellow state is proportional to log(100) = 2 (using base 10). The entropy of a blue state is proportional to log(9900) = 3.996. So blue states have about twice the entropy of yellow states.

(The choice of using log base 10 or some other base just changes the constant of proportionality.)
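A two-line check of those numbers, which also shows that changing the base of the logarithm only rescales the entropy (the proportionality constant is set to 1 here):

[code=python]
import math

W_yellow, W_blue = 100, 9900
for base in (10, math.e):
    S_y, S_b = math.log(W_yellow, base), math.log(W_blue, base)
    print(f"base {base:.4g}: S_yellow = {S_y:.3f}, S_blue = {S_b:.3f}, "
          f"S_blue/S_yellow = {S_b / S_y:.3f}")
[/code]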
 
  • #31
stevendaryl said:
It's only one state at a time, but all that we know about that state is that it is a yellow state or a blue state. If it's yellow, then the system is in one of 100 possible microstates. If it's blue, it's in one of 9900 possible microstates.

In the statistical mechanics notion of entropy, there are two kinds of states:
  1. The microstate, which in my drawing would be represented by a pixel in the picture.
  2. The macrostate, which is an observable property of the microstate. In my example, it would be the color of the pixel.

So far, that's clear.

The entropy of a microstate is proportional to the log of the number of microstates with the same macrostate. So in my example, the entropy of a yellow state would be the log of the area that is colored yellow. The entropy of a blue state would be the log of the area that is colored blue.
Then "Entropy" as a property of a state doesn't change as a function of time, correct? A state is a state. It doesn't become a different state as time passes.

A system can change from one state to another, so we can define the entropy "of a system" at time t as the entropy of the state it is in at time t.

With the numbers that I made up, the entropy of a yellow state is proportional to log(100) = 2 (using base 10). The entropy of a blue state is proportional to log(9900) = 3.996. So blue states have about twice the entropy of yellow states.

That clarifies the interpretation of the picture. But it isn't clear how the picture shows that the entropy of a system never decreases with time. That would say that if a system in the yellow patch ever moves into the blue then it must stay there. If the intended meaning is not that this be strictly true, but that it be true "most of the time" then we are introducing probability or time averages into the model, aren't we? We are saying something like "Over a long period of time, a system that has moved into the blue area will spend most of its time in the blue area?"
 
  • #32
Stephen Tashi said:
As I said, I don't object to a probabilistic analysis. But, this only answers the OP in an empirical sense. You are saying that, in practice, we cannot do a deterministic analysis, so we resort to probability.

The empirical treatment of the problem is not a solution to the theoretical question of why deterministic laws would imply that entropy does not decrease with time. As far as I can see, there is no guarantee that arbitrarily selected deterministic transition laws will have such a result.

Can we show that deterministic transition laws that have particular properties imply that entropy increases with time? I think stevendaryl is attempting to show that it is sufficient to require that the transition laws are reversible.

I'm not talking about arbitrarily selected deterministic laws. I'm talking about the laws of motion and particle collisions as they exist in our universe.
 
  • #33
Stephen Tashi said:
Then "Entropy" as a property of a state doesn't change as a function of time, correct? A state is a state. It doesn't become a different state as time passes.

I'm not sure I understand the question, but let's imagine a machine that has two internal counters, [itex]i[/itex] and [itex]j[/itex], each of which can range over values from 1 to 100. The machine has an internal representation of my drawing. It behaves this way: every time one of the counters changes, it looks up the color of the pixel with coordinates [itex](i,j)[/itex] and turns on the yellow light or blue light as appropriate.

The state is the pair [itex](i,j)[/itex], and it's changing with time according to some unspecified rule. The mapping from coordinates to colors is recorded in the drawing, and that mapping is not changing with time.

A system can change from one state to another, so we can define the entropy "of a system" at time t as the entropy of the state it is in at time t.

Yes.

That clarifies the interpretation of the picture. But it isn't clear how the picture shows that the entropy of a system never decreases with time.

It's not necessarily true that the entropy never decreases. But in my example, with 100 yellow states and 9900 blue states, if the evolution is reversible, at most 100 blue states out of 9900 will experience decreasing entropy. So entropy-decreasing transitions are rare.

In a more realistic situation, such as one involving [itex]10^{23}[/itex] particles, the number of states that will experience decreasing entropy will be microscopically tiny compared to the total number of states.

That would say that if a system in the yellow patch ever moves into the blue then it must stay there.

No, the rule that entropy always increases is a statistical statement: for the vast majority of states, it's true. There may be (and will be) a small number of states that will violate this rule.

If the intended meaning is not that this be strictly true, but that it be true "most of the time" then we are introducing probability or time averages into the model, aren't we?

Yes, in the statistical mechanics interpretation of entropy, it necessarily involves uncertainty about what the actual (microscopic) state is. So probability is involved. But the probability does not reflect nondeterminism in the laws of physics.
 
  • #34
stevendaryl said:
The state is the pair [itex](i,j)[/itex], and it's changing with time according to some unspecified rule.
I agree that the state being measured is changing. That's because the thing being measured is changing states, not because a state itself is being redefined as a different state. It is the vocabulary distinction that we'd make in saying a person's mass changed from 80 kg to 85 kg. It's the person who changed, not the definition of an 80 kg mass. In the diagram, a point on the diagram remains fixed and we imagine the system moving from one point to another. The entropy of a point (e.g. a point in the blue area) is constant with respect to time.

It's not necessarily true that the entropy never decreases.

I agree. So part of answering the OP's question:
Then the question arises again: why does entropy increase and not decrease, if all microstates are not equally likely?
is to say that entropy may sometimes decrease.

( I think the OP is implying that deterministic transition laws might result in microstates not being equally likely.)

Another part of answering the OP is to determine what "equally likely" or "not equally likely" would mean. These phrases only make sense if we are considering a model that involves probability or a model where we can interpret "equally likely" to mean "an equal fraction of the time" or "an equal number of things" out of the total number of things.
But in my example, with 100 yellow states and 9900 blue states, if the evolution is reversible, at most 100 blue states out of 9900 will experience decreasing entropy. So entropy-decreasing transitions are rare.

In what sense "rare"? Since we are not using a probabilistic model, "rare" with respect to a single given system must refer to some sort of frequency of occurrence of its states (and entropies) measured over an interval of time - correct? And "rare" with respect to a state (or a color of a state) would refer to a small fraction of states out of the total number of states.

You explained that reversibility of transitions implies that at most 100 blue states can change to yellow states in a "step" of the transition process. So such instances are rare with respect to a fraction of states. What we wish to show is that entropy decrease is rare for a system. So to connect "rare for a system" with "rare for a state", it seems to me that we must consider the behavior of a system over time, as it passes through different states.

Unless we stipulate that a system can pass through each of the states in the diagram, we also have to consider more than one system in our definition of "rare for a system".

The argument based on the diagram depends on making the yellow area smaller than the blue area - which we interpret as making the number of yellow states smaller than the number of blue states. How do we relate this to the physical definition of entropy? Crudely put, why would it be necessary to color the microstates ( i.e. the points) so the areas are different?

In some sense microstates are "equally likely" - that is a requirement enforced by how microstates are defined, isn't it? The problem is to say what "equally likely" means in a completely deterministic scenario. As a fraction of states, each unique microstate is 1/(total number of microstates). That would hold no matter how we define the microstates.

If we visualize the diagram as discrete points instead of a continuum then we impose the restriction that in one "step" of time, a system must move from one point to another or stay on the point where it is and remain there forever after. E.g. it is impossible for a system to remain at a point for 3 steps and then move off of it. There are no sub-microstates within the microstate represented by a point. I think the implementation of microstates in physics is such that this concept is a good approximation. However, what technical part of the definition of a microstate guarantees this?


Yes, in the statistical mechanics interpretation of entropy, it necessarily involves uncertainty about what the actual (microscopic) state is. So probability is involved. But the probability does not reflect nondeterminism in the laws of physics.

It seems to me that probability can be avoided if entropy can be defined in terms of numbers of states. One may then introduce probability by saying "Suppose we pick a microstate at random, giving each microstate an equal probability of being selected". I agree that introducing probability in this manner does not require that the population being sampled was generated by some random process.
 
  • #35
Stephen Tashi said:
In what sense "rare"? Since we are not using a probabilistic model, "rare" with respect to a single given system must refer to some sort of frequency of occurrence of its states (and entropies) measured over an interval of time - correct?

Well in systems that are "ergodic", the system visits every state (or gets arbitrarily close to every state), so the entropy does reflect the amount of time spent in each macroscopic state.

But I wasn't talking about time, I was talking about uncertainty. If all you know is that you're in some blue state, then there is a 1/9900 chance that you're in any particular blue state. That's subjective probability.

You explained that reversibility of transitions implies that at most 100 blue states can change to yellow states in a "step" of the transition process. So such instances are rare with respect to a fraction of states. What we wish to show is that entropy decrease is rare for a system. So to connect "rare for a system" with "rare for a state", it seems to me that we must consider the behavior of a system over time, as it passes through different states.

I just meant that it is improbable (using a subjective notion of probability) that the next state will be lower entropy, if all we know about the current state is that it is a blue state. The statement about the fraction of time spent in blue versus yellow states may also be true, but we would have to make more complicated assumptions about the nature of the transition relation to make that conclusion.

The argument based on the diagram depends on making the yellow area smaller than the blue area - which we interpret as making the number of yellow states smaller than the number of blue states. How do we relate this to the physical definition of entropy? Crudely put, why would it be necessary to color the microstates (i.e., the points) so that the areas are different?

I'm assuming that the color is a macroscopically observable property of the state. The point of using an example where the two colors correspond to different numbers of states is just that otherwise the entropy would always be constant. Entropy is only a useful concept when it differs from state to state.

In some sense microstates are "equally likely" - that is a requirement enforced by how microstates are defined, isn't it? The problem is to say what "equally likely" means in a completely deterministic scenario.

It's a subjective notion of probability. If somebody shuffles a deck of cards and you pick a card from within the deck, there is a subjective probability of 1/13 that your card will be an ace. You have no basis for assuming anything else. It's possible that someone who has studied shuffling and has studied your preferences in picking a card could make a better prediction, but given no more information than that the deck has been shuffled and you have picked a card, there is no basis for any choice other than 1/13.
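
As a quick sanity check on that subjective 1/13, here is a small simulation (my own sketch; the trial count and the fixed card position are arbitrary choices): knowing nothing beyond "the deck was shuffled and a card was picked", the long-run frequency of drawing an ace comes out near 1/13.

[code]
import random

RANKS = list(range(13)) * 4          # 52 cards: 13 ranks x 4 suits; rank 0 represents the ace
trials, aces = 100_000, 0
for _ in range(trials):
    deck = RANKS[:]
    random.shuffle(deck)             # all we know: the deck was shuffled
    if deck[17] == 0:                # pick a fixed position; which one doesn't matter
        aces += 1
print(aces / trials)                 # close to 1/13 ~ 0.0769
[/code]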

If we visualize the diagram as discrete points instead of a continuum, then we impose the restriction that in one "step" of time, a system must move from one point to another or stay on the point where it is and remain there forever after. E.g. it is impossible for a system to remain at a point for 3 steps and then move off of it.

Right, in a deterministic system where the next state is a function of the current state.
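
A tiny illustration of that point (the update rule below is a made-up toy, not anything from the thread): when the next state is a pure function of the current state, a state that maps to itself is a fixed point, so a trajectory cannot linger at a point for a few steps and then move off of it.

[code]
def next_state(s):
    # hypothetical toy rule on states 0..10: everything drifts toward 5
    return s + 1 if s < 5 else (s - 1 if s > 5 else 5)

s = 0
for step in range(10):
    s_next = next_state(s)
    if s_next == s:
        print(f"fixed point reached at state {s} on step {step}; the system stays there forever")
        break
    s = s_next
[/code]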

There are no sub-microstates within the microstate represented by a point. I think the implementation of microstates in physics is such that this concept is a good approximation. However, what technical part of the definition of a microstate guarantees this?

I'm not sure what you mean by "sub-microstate". Do you mean that the microstate itself may actually be a macrostate, with even more microscopic details?

It seems to me that probability can be avoided if entropy can be defined in terms of numbers of states. One may then introduce probability by saying "Suppose we pick a microstate at random, giving each microstate an equal probability of being selected". I agree that introducing probability in this manner does not require that the population being sampled was generated by some random process.

Boltzmann introduced the purely discrete (counting) notion of entropy: [itex]S = k \log W[/itex], where [itex]S[/itex] is the entropy of a macrostate, [itex]k[/itex] is Boltzmann's constant, and [itex]W[/itex] is the number of microstates corresponding to that macrostate.
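
As a numerical illustration (the microstate counts below are just the ones from the toy blue/yellow example, and [itex]\log[/itex] is taken as the natural logarithm):

[code]
import math

k = 1.380649e-23                     # Boltzmann's constant in J/K

def boltzmann_entropy(W):
    """Entropy of a macrostate comprising W microstates."""
    return k * math.log(W)

print(boltzmann_entropy(9900))       # the "blue" macrostate
print(boltzmann_entropy(100))        # the "yellow" macrostate: fewer microstates, lower entropy
print(boltzmann_entropy(1))          # a macrostate with a single microstate has zero entropy
[/code]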
 

Related to How can entropy increase in a deterministic universe

1. How is entropy defined in a deterministic universe?

In a deterministic universe, entropy is defined as the measure of the disorder or randomness within a closed system. It is a quantitative measure of the number of possible microstates that a system can have, given its macroscopic properties.

2. Why does entropy increase in a deterministic universe?

Entropy increases in a deterministic universe due to the second law of thermodynamics, which states that the total entropy of a closed system never decreases over time. This is because in a closed system, energy will naturally disperse and become more evenly distributed, leading to an increase in disorder and entropy.

3. Can entropy ever decrease in a deterministic universe?

In a deterministic universe, entropy can decrease in a local subsystem, but the total entropy of the overall closed system will not decrease. A local decrease is possible because the subsystem can pass entropy to its surroundings, for example by transferring heat out, while the entropy of the closed system as a whole still increases.

4. How does entropy relate to the arrow of time?

The increase of entropy in a deterministic universe is closely related to the concept of the arrow of time, which refers to the unidirectional flow of time from the past to the future. The second law of thermodynamics and the increase of entropy explain why we perceive time as moving in one direction and not the other.

5. What are the implications of entropy increasing in a deterministic universe?

The increase of entropy in a deterministic universe has several implications, including the inevitability of the eventual heat death of the universe, the limitations on perpetual motion machines, and the irreversibility of certain processes. It also plays a crucial role in fields such as thermodynamics, cosmology, and information theory.
