Understanding Thermodynamic Entropy: Common Questions Answered

In summary, the first law of thermodynamics states that the change in a system's internal energy during a process equals the total heat added to the system minus the total work done by the system. If a process is irreversible, the temperature and pressure within the system are non-uniform and cannot be uniquely defined, but the pressure and temperature at the interface can be measured and controlled using the surroundings.
  • #1
Chemer
Hi, I've a few questions about entropy.
Entropy is a measure of disorder in the universe, and the entropy of the universe always increases. So is it correct to say that entropy is the unusable energy causing the disorder, and that the amount of usable energy decreases every time the entropy increases in a process?
How was entropy derived from the Carnot cycle?
When defining entropy, why do we use the term "reversible process" even though entropy is a state function?
What happens to entropy when the temperature changes? The mathematical expression for entropy seems to show an inverse relation, but I think I'm getting it wrong.
Why do we use the isothermal condition to define it?
Sorry for the long list of questions, but I'm really confused about these points. I'd appreciate your help.
Thanks.
 
  • #2
Entropy decreases with increase in temperature.
##S=\frac{Q}{T}##
Obviously 2/3 is larger than 2/9 (take Q=2 with T=3 versus T=9).
In order to approach the Carnot efficiency, the processes involved in the heat engine cycle must be reversible and involve no change in entropy. This means that the Carnot cycle is an idealization, since no real engine processes are reversible and all real physical processes involve some increase in entropy.
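For a quick numerical illustration of that bound (a sketch of mine, with reservoir temperatures chosen purely for the example), the Carnot efficiency depends only on the two reservoir temperatures:

[code]
# Carnot efficiency: the maximum fraction of heat drawn from the hot
# reservoir that any engine can convert into work. It depends only on
# the reservoir temperatures (in kelvin): eta = 1 - T_cold / T_hot.
def carnot_efficiency(T_hot, T_cold):
    return 1.0 - T_cold / T_hot

# Illustrative reservoirs at 500 K and 300 K: at most 40% of Q_hot
# can become work; any real (irreversible) engine does worse.
print(carnot_efficiency(500.0, 300.0))  # 0.4
[/code]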
 
  • #3
AdityaDev said:
Entropy decreases with increase in temperature.

This is definitely not correct. The entropy of a body or a gas increases with temperature.

Chet
 
  • #4
Chemer said:
Hi, I've a few questions about entropy.
Entropy is a measure of disorder in the universe, and the entropy of the universe always increases. So is it correct to say that entropy is the unusable energy causing the disorder, and that the amount of usable energy decreases every time the entropy increases in a process?
A useful interpretation of entropy is that it is a measure of the number of quantum mechanical states available to a system under the constraint that the system internal energy is constant. The entropy will typically increase with the temperature and decrease with the pressure.
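To make the state-counting picture concrete, here is a minimal Python sketch (a toy example of my own, assuming a hypothetical system of N two-state spins) that evaluates the Boltzmann entropy [itex]S=k_B\ln\Omega[/itex], where [itex]\Omega[/itex] is the number of microstates consistent with a given macrostate:

[code]
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(n_up, n_total):
    # Omega: number of microstates with exactly n_up spins "up"
    # (a binomial coefficient); S = k_B * ln(Omega).
    omega = math.comb(n_total, n_up)
    return K_B * math.log(omega)

# The evenly mixed macrostate has vastly more microstates, and hence
# higher entropy, than the perfectly ordered one.
print(boltzmann_entropy(50, 100))   # ~9.2e-22 J/K
print(boltzmann_entropy(100, 100))  # 0.0 (a single microstate)
[/code]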

For a pure material, it is best to think of the entropy (per unit mass) as a physical property of the material (which it actually is) that uniquely varies with the temperature and pressure (or density) of the material. Thus, once the temperature and pressure are specified, its entropy (per unit mass) is uniquely determined. It is best not to think of the entropy as something specifically related to this process or that process, but rather as a material property.

When defining entropy, why do we use the term "reversible process" even though entropy is a state function?
If we want to determine the change in entropy of a material or system (comprised of a combination of materials, say) from thermodynamic equilibrium state A to thermodynamic equilibrium state B, the only way we have for doing this is to dream up a reversible path between the two equilibrium states, and measure or calculate the integral of dQ/T for that (or any other) reversible path.
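A classic worked example of this (my own sketch, assuming an ideal gas): in a free expansion into vacuum, no heat flows and no work is done along the actual, irreversible path. To get ΔS, we substitute a reversible isothermal expansion between the same two equilibrium states and integrate dQ/T along it:

[code]
import math

R = 8.314  # universal gas constant, J/(mol K)

def delta_S_free_expansion(n_mol, V1, V2):
    # Along the substitute reversible isothermal path, dQ_rev = P dV,
    # and the integral of dQ_rev/T for an ideal gas is n R ln(V2/V1).
    return n_mol * R * math.log(V2 / V1)

# 1 mol doubling its volume: delta S = R ln 2, about +5.76 J/K,
# even though Q = 0 along the actual irreversible path.
print(delta_S_free_expansion(1.0, 1.0, 2.0))
[/code]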

Why do we use the isothermal condition to define it?
Who says?

I have prepared a write up to help students understand the first and second laws of thermodynamics, including the concept of entropy. I hope this helps give you a better perspective on entropy:

FIRST LAW OF THERMODYNAMICS

Suppose that we have a closed system that at initial time [itex]t_i[/itex] is in an initial equilibrium state, with internal energy [itex]U_i[/itex], and at a later time [itex]t_f[/itex], it is in a new equilibrium state with internal energy [itex]U_f[/itex]. The transition from the initial equilibrium state to the final equilibrium state is brought about by imposing a time-dependent heat flow across the interface between the system and the surroundings, and a time-dependent rate of doing work at the interface between the system and the surroundings. Let [itex]\dot{q}(t)[/itex] represent the rate of heat addition across the interface between the system and the surroundings at time t, and let [itex]\dot{w}(t)[/itex] represent the rate at which the system does work on the surroundings at the interface at time t. According to the first law (basically conservation of energy),
[tex]\Delta U=U_f-U_i=\int_{t_i}^{t_f}{(\dot{q}(t)-\dot{w}(t))dt}=Q-W[/tex]
where Q is the total amount of heat added and W is the total amount of work done by the system on the surroundings at the interface.
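As a numerical sketch of this bookkeeping (with rate profiles for [itex]\dot{q}(t)[/itex] and [itex]\dot{w}(t)[/itex] invented purely for illustration), the first law amounts to integrating the two rate curves over the process path:

[code]
import numpy as np

# Hypothetical rate profiles over a process path from t_i = 0 to t_f = 10 s.
t = np.linspace(0.0, 10.0, 1001)
q_dot = 5.0 * np.exp(-t / 3.0)             # heat addition rate, W
w_dot = 2.0 * np.sin(np.pi * t / 10.0)**2  # rate of doing work, W

def integrate(y, x):
    # Simple trapezoidal rule for the cumulative integral over the path.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

Q = integrate(q_dot, t)  # total heat added, J
W = integrate(w_dot, t)  # total work done by the system, J
print(Q, W, Q - W)       # first law: delta U = Q - W
[/code]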

The time variation of [itex]\dot{q}(t)[/itex] and [itex]\dot{w}(t)[/itex] between the initial and final states uniquely characterizes the so-called process path. There are an infinite number of possible process paths that can take the system from the initial to the final equilibrium state. The only constraint is that Q-W must be the same for all of them.

If a process path is irreversible, then the temperature and pressure within the system are inhomogeneous (i.e., non-uniform, varying with spatial position), and one cannot define a unique pressure or temperature for the system (except at the initial and the final equilibrium state). However, the pressure and temperature at the interface can be measured and controlled using the surroundings to impose the temperature and pressure boundary conditions that we desire. Thus, [itex]T_I(t)[/itex] and [itex]P_I(t)[/itex] can be used to impose the process path that we desire. Alternately, and even more fundamentally, we can directly control, by well established methods, the rate of heat flow and the rate of doing work at the interface, [itex]\dot{q}(t)[/itex] and [itex]\dot{w}(t)[/itex].

Both for reversible and irreversible process paths, the rate at which the system does work on the surroundings is given by:
[tex]\dot{w}(t)=P_I(t)\dot{V}(t)[/tex]
where [itex]\dot{V}(t)[/itex] is the rate of change of system volume at time t. However, if the process path is reversible, the pressure P within the system is uniform, and

[itex]P_I(t)=P(t)[/itex] (reversible process path)

Therefore, [itex]\dot{w}(t)=P(t)\dot{V}(t)[/itex] (reversible process path)
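For instance (a standard textbook case, stated here as my own sketch): for an ideal gas expanded reversibly at constant temperature, [itex]P(t)=nRT/V(t)[/itex], and integrating [itex]\dot{w}(t)=P(t)\dot{V}(t)[/itex] over the path gives [itex]W=nRT\ln(V_2/V_1)[/itex]:

[code]
import math

R = 8.314  # universal gas constant, J/(mol K)

def reversible_isothermal_work(n_mol, T, V1, V2):
    # W = integral of P dV with P = n R T / V (ideal gas, uniform
    # pressure, constant T), which evaluates to n R T ln(V2/V1).
    return n_mol * R * T * math.log(V2 / V1)

# 1 mol at 300 K doubling its volume does about 1729 J of work
# on the surroundings.
print(reversible_isothermal_work(1.0, 300.0, 1.0, 2.0))
[/code]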

Another feature of reversible process paths is that they are carried out very slowly, so that [itex]\dot{q}(t)[/itex] and [itex]\dot{w}(t)[/itex] are both very close to zero over the entire process path. However, the amount of time between the initial equilibrium state and the final equilibrium state, [itex](t_f-t_i)[/itex], becomes exceedingly large. In this way, Q-W remains constant and finite.

SECOND LAW OF THERMODYNAMICS

In the previous section, we focused on the infinite number of process paths that are capable of taking a closed thermodynamic system from an initial equilibrium state to a final equilibrium state. Each of these process paths is uniquely determined by specifying the heat transfer rate [itex]\dot{q}(t)[/itex] and the rate of doing work [itex]\dot{w}(t)[/itex] as functions of time at the interface between the system and the surroundings. We noted that the cumulative amount of heat transfer and the cumulative amount of work done over an entire process path are given by the two integrals:
[tex]Q=\int_{t_i}^{t_f}{\dot{q}(t)dt}[/tex]
[tex]W=\int_{t_i}^{t_f}{\dot{w}(t)dt}[/tex]
In the present section, we will be introducing a third integral of this type (involving the heat transfer rate [itex]\dot{q}(t)[/itex]) to provide a basis for establishing a precise mathematical statement of the Second Law of Thermodynamics.

The discovery of the Second Law came about in the 19th century, and involved contributions by many brilliant scientists. There have been many statements of the Second Law over the years, couched in complicated language and multi-word sentences, typically involving heat reservoirs, Carnot engines, and the like. These statements have been a source of unending confusion for students of thermodynamics for over a hundred years. What has been sorely needed is a precise mathematical definition of the Second Law that avoids all the complicated rhetoric. The sad part about all this is that such a precise definition has existed all along. The definition was formulated by Clausius back in the 1800's.

Clausius wondered what would happen if he evaluated the following integral over each of the possible process paths between the initial and final equilibrium states of a closed system:
[tex]I=\int_{t_i}^{t_f}{\frac{\dot{q}(t)}{T_I(t)}dt}[/tex]
where [itex]T_I(t)[/itex] is the temperature at the interface with the surroundings at time t. He carried out extensive calculations on many systems undergoing a variety of both reversible and irreversible paths and discovered something astonishing. He found that, for any closed system, the values calculated for the integral over all the possible reversible and irreversible paths (between the initial and final equilibrium states) were not arbitrary; instead, there was a unique upper bound (maximum) to the value of the integral. Clausius also found that this result was consistent with all the "word definitions" of the Second Law.

Clearly, if there was an upper bound for this integral, this upper bound had to depend only on the two equilibrium states, and not on the path between them. It must therefore be regarded as a point function of state. Clausius named this point function Entropy.

But how could the value of this point function be determined without evaluating the integral over every possible process path between the initial and final equilibrium states to find the maximum? Clausius made another discovery. He determined that, out of the infinite number of possible process paths, there existed a well-defined subset, each member of which gave the same maximum value for the integral. This subset consisted of what we call today the reversible process paths. So, to determine the change in entropy between two equilibrium states, one must first dream up a reversible path between the states and then evaluate the integral. Any other process path will give a value for the integral lower than the entropy change.

So, mathematically, we can now state the Second Law as follows:

[tex]I=\int_{t_i}^{t_f}{\frac{\dot{q}(t)}{T_I(t)}dt}\leq\Delta S=\int_{t_i}^{t_f} {\frac{\dot{q}_{rev}(t)}{T(t)}dt}[/tex]
where [itex]\dot{q}_{rev}(t)[/itex] is the heat transfer rate for any of the reversible paths between the initial and final equilibrium states, and T(t) is the system temperature at time t (which, for a reversible path, is equal to the temperature at the interface with the surroundings). This constitutes a precise mathematical statement of the Second Law of Thermodynamics.
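To see the inequality at work numerically (a standard example with numbers of my own choosing): place a solid of constant heat capacity C, initially at T1, in direct contact with a reservoir at T2 > T1. Along the actual, irreversible path, the interface temperature is pinned at [itex]T_I(t)=T_2[/itex], while ΔS is evaluated along a reversible path that heats the body gradually:

[code]
import math

C = 100.0              # heat capacity of the body, J/K (assumed constant)
T1, T2 = 300.0, 400.0  # initial body temperature, reservoir temperature, K

Q = C * (T2 - T1)  # total heat absorbed by the body, J

# Irreversible path: T_I(t) = T2 throughout, so the Clausius integral
# is simply Q / T2.
I_irrev = Q / T2

# Reversible path: heat the body through a sequence of reservoirs only
# infinitesimally hotter than the body; integral of dQ/T = C ln(T2/T1).
delta_S = C * math.log(T2 / T1)

print(I_irrev, delta_S)    # 25.0 vs about 28.77 J/K
assert I_irrev <= delta_S  # the Clausius inequality holds
[/code]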

Chet
 
  • #5
AdityaDev said:
Entropy decreases with increase in temperature.
##S=\frac{Q}{T}##
It should be [itex]dS=\frac{dQ}{T}[/itex], which does not mean entropy decreases with increased temperature. It just means that entropy increases by less (for a given dQ) at higher temperatures.
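A quick numeric check of that point (numbers of my own choosing):

[code]
dQ = 100.0  # J, the same small heat addition in both cases

# dS = dQ/T: the increase is smaller at higher temperature, but it is
# still an increase; nothing here says S falls as T rises.
print(dQ / 300.0)  # ~0.333 J/K added at 300 K
print(dQ / 600.0)  # ~0.167 J/K added at 600 K
[/code]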
 
  • #6
Chemer said:
Entropy is a measure of disorder in the universe,

That's one of those things that's repeated over and over again in the pop-sci press. There's nothing wrong with it as long as you remember that it's a math-free simplification intended for people who want to know what entropy is about but don't need to build further understanding on top of the concept.

If you want real answers to the questions you're asking, you'll have to set aside what you've already heard and start digging into a decent textbook on statistical mechanics. The good news is that there are many people here who can and will guide you through that.
 
  • #7
Chestermiller said:
I have prepared a write up to help students understand the first and second laws of thermodynamics, including the concept of entropy. I hope this helps give you a better perspective on entropy:

FIRST LAW OF THERMODYNAMICS . . .

I am thrilled with the clarity of this explanation. There are some points within that I will pick out because I could use some elaboration.

Chestermiller said:
The definition [of entropy] was formulated by Clausius back in the 1800's.

Clausius wondered what would happen if he evaluated the following integral over each of the possible process paths between the initial and final equilibrium states of a closed system:
[tex]I=\int_{t_i}^{t_f}{\frac{\dot{q}(t)}{T_I(t)}dt}[/tex]
where [itex]T_I(t)[/itex] is the temperature at the interface with the surroundings at time t. He carried out extensive calculations on many systems undergoing a variety of both reversible and irreversible paths and discovered something astonishing. He found that, for any closed system, the values calculated for the integral over all the possible reversible and irreversible paths (between the initial and final equilibrium states) were not arbitrary; instead, there was a unique upper bound (maximum) to the value of the integral. Clausius also found that this result was consistent with all the "word definitions" of the Second Law.

Clearly, if there was an upper bound for this integral, this upper bound had to depend only on the two equilibrium states, and not on the path between them. It must therefore be regarded as a point function of state. Clausius named this point function Entropy.

Some kind of physical thinking must have led to this formulation (so what was it?), or else Clausius must have been fiddling with mathematical formulas rather arbitrarily to come up with it (which leaves us empty). Is the entropy measure defined with this equation an observed physical or mathematical phenomenon which is inherent in nature, an invention that Clausius constructed to idealize, organize, and/or intellectualize something we do not fully understand, or nothing more than a mathematical curiosity based on the application and premises of the ideal gas law? Whichever answer, where exactly does it come from (i.e., what observations led to it), and how was it derived, invented, or developed from observations? What real physical meaning does this equation directly imply, and how do we see and derive the physical meaning from this equation in our thinking?

My personal belief at this stage of my reflections on the subject has led me to believe that the ratio is (or should be) the rate at which energy packets (a unitless real number) are introduced into the system as a function of time, with the area under the curve then calculated. Still a somewhat baffling concept, but a bit more physically concrete than the ratio of Clausius. In my mind, I see the temperature T in the denominator as replaced with the exact average per-particle linear kinetic energy (it is proportional to T in an ideal gas), and the numerator as the rate of total energy introduced into the system.

I see the total energy introduced into the system (I like the case where the initial state is absolute 0, so the system starts with 0 energy) as separating into linear k.e. over all the particles and everything else, which I think of as potential energy of the interaction forces present in the gas cloud. In an ideal gas there is no potential energy, so it is all linear k.e., and the ratio should exactly equal the number of gas particles. At equilibrium, I see the total linear k.e. over all particles in the gas as being in dynamic equilibrium with the potential energy, and the ratio of the two is constant. Hence, the ratio of two energies (total energy over k.e. per particle) is the number of energetic particles (or packets or particle equivalents). I have been thinking that each energy packet represents an equivalent to a particle in an ideal gas.

I am still reflecting on these ideas to gain further insight, as even this physical picture is marred by some technical difficulties and unanswered questions. Based on these ideas, I wonder if entropy could have been better defined using such a unitless ratio. At least this way it wouldn't get confused with the concept of a physical energy of some kind, or be misinterpreted as some form of energy or ability to do useful work.

Chestermiller said:
There have been many statements of the Second Law over the years, couched in complicated language and multi-word sentences, typically involving heat reservoirs, Carnot engines, and the like. These statements have been a source of unending confusion for students of thermodynamics for over a hundred years.

I know it is asking a lot, but what are all the different (and provably equivalent, presumably) accepted published statements or ways to state (or otherwise specify) the 2nd law of thermodynamics, and how do they relate to each other (proof or derivation is preferable over general explanation, unless such an explanation rises to the quality of a proof)?

Chestermiller said:
So, mathematically, we can now state the Second Law as follows:

[tex]I=\int_{t_i}^{t_f}{\frac{\dot{q}(t)}{T_I(t)}dt}\leq\Delta S=\int_{t_i}^{t_f} {\frac{\dot{q}_{rev}(t)}{T(t)}dt}[/tex]
where [itex]\dot{q}_{rev}(t)[/itex] is the heat transfer rate for any of the reversible paths between the initial and final equilibrium states, and T(t) is the system temperature at time t (which, for a reversible path, is equal to the temperature at the interface with the surroundings). This constitutes a precise mathematical statement of the Second Law of Thermodynamics.

What is the logical derivation and/or fully explicit physical explanation of the equivalence between the statement of the 2nd law specified with Clausius' entropy equation and these other statements of the 2nd law?

Rising Eagle
 
  • #8
Rising Eagle said:
I am thrilled with the clarity of this explanation. There are some points within that I will pick out because I could use some elaboration.

Some kind of physical thinking must have led to this formulation (so what was it?), or else Clausius must have been fiddling with mathematical formulas rather arbitrarily to come up with it (which leaves us empty). Is the entropy measure defined with this equation an observed physical or mathematical phenomenon
Entropy is a physical property of each material and a unique function of the state of the material. It is not a mathematical curiosity. Its physical reality and pertinence were recognized during the evolution of our understanding of the second law.

which is inherent in nature, an invention that Clausius constructed to idealize, organize, and/or intellectualize something we do not fully understand, or nothing more than a mathematical curiosity based on the application and premises of the ideal gas law?
It has nothing to do with the ideal gas law; the second law and entropy apply to all materials including non-ideal gases, liquids, and solids. Clausius' thinking must have started with the word statements of the second law and evolved into the concept of entropy and the Clausius Inequality.
Whichever answer, where exactly does it come from (i.e., what observations led to it), and how was it derived, invented, or developed from observations? What real physical meaning does this equation directly imply, and how do we see and derive the physical meaning from this equation in our thinking?
The gory details of the development are in virtually every Thermodynamics textbook, and are too lengthy to repeat here. See, for example, Denbigh, Principles of Chemical Equilibrium or Smith and Van Ness, Introduction to Chemical Engineering Thermodynamics.

My goal in what I wrote was to provide students with a mathematically understandable and concise statement of the second law that they could apply to solving practical problems. I have seen so much confusion by students about these concepts because they are presented so poorly and imprecisely in most of the texts that are out there. It all becomes much simpler when we say that the change in entropy of a system between two equilibrium states is the maximum value of the integral of dQ/T over all the possible process paths between the two equilibrium states, including both reversible and irreversible paths.
My personal belief at this stage of my reflections on the subject has led me to believe that the ratio is (or should be) the rate at which energy packets (a unitless real number) are introduced into the system as a function of time . . .
I don't understand any of this. I am a continuum mechanics guy, and don't relate very well to molecular explanations. But you should also get yourself a textbook on Statistical Thermodynamics which provides a different and deeper perspective.

I know it is asking a lot, but what are all the different (and provably equivalent, presumably) accepted published statements or ways to state (or otherwise specify) the 2nd law of thermodynamics, and how do they relate to each other (proof or derivation is preferable over general explanation, unless such an explanation rises to the quality of a proof)? What is the logical derivation and/or fully explicit physical explanation of the equivalence between the statement of the 2nd law specified with Clausius' entropy equation and these other statements of the 2nd law?
Again, see the standard textbooks on Thermodynamics for all the gory details of the development.

Chet
 

Related to Understanding Thermodynamic Entropy: Common Questions Answered

What is thermodynamics?

Thermodynamics is the branch of science that deals with the study of energy and its transformation from one form to another.

What is entropy?

Entropy is a measure of the disorder or randomness in a system. It is a thermodynamic property that reflects the amount of energy that is unavailable for work in a system.

How is entropy related to thermodynamics?

Entropy is a fundamental concept in thermodynamics, as it helps us understand the direction of energy flow and the efficiency of energy conversion processes.

What is the second law of thermodynamics?

The second law of thermodynamics states that the total entropy of an isolated system will always increase over time, or at best, remain constant.

How does entropy affect our daily lives?

Entropy plays a crucial role in many natural processes, such as heat transfer, chemical reactions, and even our own metabolism. It also has implications in fields like engineering, environmental science, and economics.
