Questions about the derivation of the gibbs entropy

  • #1
V0ODO0CH1LD
In statistical mechanics the macro-state of a system corresponds to a whole region in the microscopic phase space of that same system; classically, that means an infinity of micro-states relates to a single macro-state. Similarly, given a Hamiltonian, a whole surface in the microscopic phase space of a system can correspond to a single energy level (a surface of constant energy).

I've seen different derivations of the Gibbs entropy: one in which a region of the microscopic phase space corresponding to a macro-state of a system is divided into subregions of constant energy, and another where regions of constant energy are divided into subregions that correspond to different macro-states.

Although the procedure is always to find the configuration of micro-states in that region that maximizes the number of ways in which the micro-states can be organized into the subregions, I don't see how that makes sense in either case.

In the case where the outer region corresponds to a macro-state, does organizing the micro-states into subregions of constant energy mean changing the Hamiltonian? Because given a Hamiltonian, a point in phase space has a definite energy, right? Also, how does it even make sense to rearrange the micro-states into the subregions? For example, given a Hamiltonian, if all micro-states but one have the same energy, there is still only one way I can arrange those micro-states in that configuration, because the Hamiltonian dictates that configuration.

I don't even know how to think about the other approach.

My other question comes later, and it has to do with the "transformation" of the expression that represents the number of ways you can organize the micro-states into the subregions
$$ \frac{N!}{\prod_i n_i!} $$
into the expression that later gets simplified into the gibbs entropy
$$ k_B \ln\!\left(\frac{N!}{\prod_i n_i!}\right). $$
Is that an empirical adaptation, or is there a logical step involved? I'm okay if it's just an empirical thing, but I would like to know what that "thing" is.

EDIT: In the expressions above, ##N## represents the number of micro-states in the outer region, and the ##n_i## represent the number of micro-states in each subregion. I know that those numbers are all infinite classically, but this derivation is an adaptation of a quantum-mechanical derivation where those numbers are finite because the states of the system are discrete. In this case those numbers are just representations.
 
  • #2
Traditionally, one divides the phase space into regions the size of minimum position-momentum uncertainty boxes, since this is thought to be the smallest volume of phase space a single system can occupy due to the Heisenberg uncertainty principle. Historically, the size of these discrete regions was considered arbitrary, since the coarse-graining only sets the zero of the entropy; maximizing the entropy to find equilibrium conditions is largely independent of where the zero of entropy is. In this case, all formulas for entropy and so forth would be defined with respect to "standard values".
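To make that concrete (just a sketch, with ##v## an arbitrary cell volume): if a macro-state occupies a phase-space volume ##\Gamma##, the number of cells is ##\Omega = \Gamma/v## and
$$ S = k_B\ln\Omega = k_B\ln\Gamma - k_B\ln v, $$
so changing the cell size only shifts ##S## by an additive constant, leaving entropy differences, and hence the maximization, untouched.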

The reasons for taking the logarithm are twofold (see the sketch after this list):
1.) We define the entropy as an extensive parameter; only the logarithm of the number of microstates is extensive (i.e., grows linearly with the "extent" of the system).
2.) The entropy without dimensions is proportional to the average number of bits it would take to unambiguously describe the particular microstate the system is in; it is an information entropy, which is very useful in statistical calculations.
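For point 1, a one-line sketch: if two independent subsystems have ##\Omega_A## and ##\Omega_B## accessible microstates, the combined system has ##\Omega_A\Omega_B##, so
$$ \ln(\Omega_A\Omega_B) = \ln\Omega_A + \ln\Omega_B; $$
the count of microstates is multiplicative over subsystems, while its logarithm is additive, i.e. extensive.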

Boltzmann's constant is a historical artifact of having different units for temperature and energy. Entropy can easily be thought of as a dimensionless quantity, and a thermodynamic temperature can be defined as Boltzmann's constant times the usual temperature.
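In symbols (just restating that convention, nothing new): with a dimensionless entropy ##\sigma = S/k_B## and a temperature in energy units ##\tau = k_B T##, the thermodynamic combination is unchanged:
$$ T\,dS = \frac{\tau}{k_B}\,d(k_B\sigma) = \tau\,d\sigma. $$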
 
  • #3
jfizzix said:
Traditionally, one divides the phase space into regions the size of minimum position-momentum uncertainty boxes, since this is thought to be the smallest volume of phase space a single system can occupy due to the Heisenberg uncertainty principle.

So the volume occupied by a state of the system in phase space is the size of one of these uncertainty boxes? And those volumes are the subregions that I divide the outer region defined by the macro-state of the system into? Still, what does the rearranging of the system's micro-states look like in that picture? What do the ##n_i## in those equations represent in that case?
 
  • #4
V0ODO0CH1LD said:
So the volume occupied by a state of the system in phase space is the size of one of these uncertainty boxes? And those volumes are the subregions that I divide the outer region defined by the macro-state of the system into? Still, what does the rearranging of the system's micro-states look like in that picture? What do the ##n_i## in those equations represent in that case?

Yes, the volume of phase space occupied by a microstate of the system is the size of one of these uncertainty boxes.

One of these boxes in the phase space of a single 1D particle would have area ##dq\,dp=\frac{\hbar}{2}##.
If we have a single particle in 3D, the box will have volume ##(\frac{\hbar}{2})^{3}##.
If we have ##N## particles in 3D, the box will have volume ##(\frac{\hbar}{2})^{3N}##.
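So, with the box size above (keeping in mind that other conventions use ##h## per position-momentum pair instead of ##\hbar/2##), a phase-space volume ##\Gamma## accessible to the ##N##-particle system would contain roughly
$$ \Omega \approx \frac{\Gamma}{(\hbar/2)^{3N}} $$
microstates, and it is the logarithm of this number that enters the entropy.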

Again, if you define entropy relative to a "standard state" you don't need to worry about the size of your boxes.

Forgive me, I think I need to know more about exactly what you are asking. Is the Gibbs entropy you refer to the entropy of the grand canonical ensemble (system connected to a temperature and particle-number reservoir) or just the canonical ensemble (system connected to a temperature reservoir)?
 
  • #5


There are multiple ways to derive the Gibbs entropy, and different approaches may be more suitable for different systems. The derivation you have described is the Boltzmann counting derivation, and it is based on the idea that the entropy of a system is related to the number of micro-states that correspond to a macro-state.

In this approach, the micro-states are organized into subregions of constant energy or into different macro-states. This does not mean that the Hamiltonian is being changed; rather, the micro-states are being grouped together based on their energy levels. This grouping allows for a more organized and systematic way of counting the number of micro-states than considering each individual micro-state separately.

The expression for the number of ways to organize the micro-states into subregions is not an empirical adaptation but a logical step based on a combinatorial principle: the number of ways to arrange a set of objects into subgroups is the factorial of the total number of objects divided by the product of the factorials of the numbers of objects in each subgroup (the multinomial coefficient). This is the basis for the expression you have mentioned; a small numerical check follows below.
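As a toy numerical check (just an illustrative sketch; the total ##N = 4## and the occupations ##(2, 1, 1)## are chosen arbitrarily), the multinomial formula can be compared against brute-force enumeration:
[CODE=python]
from math import factorial
from itertools import product
from collections import Counter

def multinomial(ns):
    """N! / (n_1! n_2! ... n_k!) for occupation numbers ns."""
    result = factorial(sum(ns))
    for n in ns:
        result //= factorial(n)
    return result

# Toy example: N = 4 labeled micro-states distributed over
# 3 subregions with occupation numbers (2, 1, 1).
ns = (2, 1, 1)
N = sum(ns)

# Brute force: assign each of the N micro-states to one of the
# subregions and count assignments realizing exactly (2, 1, 1).
count = 0
for assignment in product(range(len(ns)), repeat=N):
    occupations = Counter(assignment)
    if tuple(occupations.get(i, 0) for i in range(len(ns))) == ns:
        count += 1

print(multinomial(ns), count)  # both print 12 = 4!/(2! 1! 1!)
[/CODE]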

The simplification of this expression into the Gibbs entropy is based on the fact that in statistical mechanics we are interested in the logarithm of the number of micro-states rather than the number itself, because the logarithm is additive over subsystems and makes the large factorials tractable; the standard route is sketched below.
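That route (a sketch, valid for large occupation numbers) uses Stirling's approximation ##\ln n! \approx n\ln n - n##. With ##p_i = n_i/N## and ##\sum_i n_i = N##,
$$ k_B\ln\!\left(\frac{N!}{\prod_i n_i!}\right) \approx k_B\left(N\ln N - \sum_i n_i\ln n_i\right) = -N k_B\sum_i p_i\ln p_i, $$
which is ##N## times the Gibbs entropy ##-k_B\sum_i p_i\ln p_i## of the occupation probabilities.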

In conclusion, the derivation of the Gibbs entropy involves mathematical and logical steps based on fundamental principles of statistical mechanics. Different approaches may be used, but they all lead to the same result. It is important to understand the underlying principles and assumptions in each approach to fully grasp the concept of entropy in statistical mechanics.
 

Related to Questions about the derivation of the Gibbs entropy

1. What is the Gibbs Entropy?

The Gibbs Entropy, also known as the thermodynamic entropy, is a measure of the disorder or randomness of a system. It is denoted as S and is a state function that depends on the temperature, pressure, and composition of the system.

2. How is the Gibbs Entropy derived?

The Gibbs Entropy is derived from the second law of thermodynamics, which states that the total entropy of a closed system always increases over time. It can also be derived from the statistical mechanics concept of microstates and macrostates.

3. What is the significance of the Gibbs Entropy?

The Gibbs Entropy is a fundamental concept in thermodynamics and is used to understand the behavior of systems at a molecular level. It is also a key factor in determining the direction of chemical reactions and phase transitions.

4. How is the Gibbs Entropy related to the Boltzmann Entropy?

The Boltzmann Entropy is the statistical interpretation of entropy, while the Gibbs Entropy is the thermodynamic interpretation. They are related through the equation ##S = k_B\ln\Omega##, where ##k_B## is the Boltzmann constant and ##\Omega## is the number of microstates of the system.

5. Can the Gibbs Entropy be negative?

The entropy change of a system can be negative if the disorder of the system decreases, as it does for some spontaneous chemical reactions; the statistical entropy ##k_B\ln\Omega## itself is never negative, since ##\Omega \geq 1##. The total entropy of the universe must always increase, so any decrease in entropy in one system must be accompanied by a larger increase in another system.
