Calculating the number of energy states using momentum space

In summary, the conversation discusses how to deduce the number of possible energy states within a certain momentum in momentum space. The formula used is Ns = Lx * px/h * Ly * py/h * pi, where L is the length of the box in a given dimension and p is momentum. The conversation also touches on the assumption of a square box, the dependence of number density on length in a rectangular box, and the fact that the number of states along a momentum direction depends on the length of the container in that same direction. The conversation ends with a discussion of the calculation of the total number of states in a 3D spherical momentum space and the addition of a factor of 2.
  • #71
BvU said:
What makes you think they should all have the same kinetic energy ?
The collisions themselves !

I thought that perfectly elastic collisions among identical particles, which is assumed for the derivation, would keep a particle's kinetic energy more or less constant. Please elaborate if this is incorrect.
 
  • #72
JohnnyGui said:
Please elaborate if this is incorrect
Very incorrect !
Experiment with sliding coins over a smooth table
 
  • #73
BvU said:
Very incorrect !
Experiment with sliding coins over a smooth table

I think it depends on the starting scenario of the system at a certain equilibrium temperature. If each particle has the same kinetic energy at the very start, then I can't conclude anything other than that the kinetic energy of each particle stays constant, because of perfectly elastic collisions. If, however, the particles differ in kinetic energy at the very start (the temperature has yet to reach equilibrium), then I would understand why particles can have different kinetic energies in the system, even in the presence of perfectly elastic collisions.
 
  • #74
Did you try the coins? Did the kinetic energy of each and every coin remain constant?
Did you ever have to do an exercise with hard-ball elastic collisions? What is conserved?
 
  • #75
JohnnyGui said:
If each particle has the same kinetic energy initially at the very start

Which they won't. A given equilibrium temperature only means the average kinetic energy of the particles is a certain value. It does not mean that every single particle has that kinetic energy.

I think you need to read the article on the kinetic theory of gases more carefully.
 
  • #76
PeterDonis said:
Which they won't. A given equilibrium temperature only means the average kinetic energy of the particles is a certain value. It does not mean that every single particle has that kinetic energy.

Two questions arise from this.

1. So if each particle does have the same kinetic energy initially at the very start, is it correct that each particle's kinetic energy stays constant after perfectly elastic collisions?

2. The reason that they don't have the same kinetic energy at the very start is that the final equilibrium temperature has yet to be reached?
 
  • #77
JohnnyGui said:
if each particle does have the same kinetic energy initially at the very start

This is much, much too improbable to have any chance of being observed. Remember we're talking about something like ##10^{23}## particles in a typical container of gas.

JohnnyGui said:
is it correct that each particle's kinetic energy stays constant after perfectly elastic collisions?

In the center of mass frame of the collision, yes, this will be true. But kinetic energy is frame-dependent, so it will not, in general, be true in the rest frame of the gas as a whole.

JohnnyGui said:
The reason that they don't have the same kinetic energy at the very start is that the final equilibrium temperature has yet to be reached?

No. Go read my post #75 again, carefully.
 
  • #78
PeterDonis said:
No. Go read my post #75 again, carefully.

I did, but I don't see how this post answers my question. It states that a characteristic of an equilibrium temperature is having an average kinetic energy and not every particle having that same kinetic energy. This is clear to me.

My question is more directed towards why particles don't have the same kinetic energy at the very start even if perfect elastic collisions are considered. I have a hard time grasping "rest frame of the gas as a whole" because a gas consists of particles going in different directions and thus each particle having its own rest frame.
 
  • #80
BvU said:
Experiment with sliding coins over a smooth table

My posted conclusion and question are deduced from this experiment. I have difficulty choosing the starting scenario: in the case of 2 coins, should I give both coins the same velocity before the collision, or should one stay still? If it's the latter case, then I would conclude that the reason particles don't have the same kinetic energy at equilibrium temperature is that the particles had different kinetic energies before that equilibrium temperature was reached.
 
  • #81
Either. Only precisely head-on collisions of equal coins with equal but opposite velocities conserve the kinetic energies of both coins. Chance of one in very, very many.
 
  • #82
JohnnyGui said:
It states that a characteristic of an equilibrium temperature is having an average kinetic energy and not every particle having that same kinetic energy. This is clear to me.

Ok, good.

JohnnyGui said:
My question is more directed towards why particles don't have the same kinetic energy at the very start even if perfect elastic collisions are considered.

Because elastic collisions conserve the total kinetic energy of the two colliding particles. They don't conserve the kinetic energies of the two particles individually except in the very rare case where the combined momentum of the two particles is zero.
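This point is easy to see numerically. Below is a quick sketch (my own illustration, not from the thread) of a head-on 1D elastic collision between two equal masses, using the standard elastic-collision formulas; the masses and velocities are made-up numbers:

```python
# Illustrative sketch: a 1D elastic collision between two equal-mass
# particles. Total kinetic energy is conserved, but each particle's
# individual kinetic energy generally changes.

def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities for a perfectly elastic 1D collision."""
    v1_new = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_new = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_new, v2_new

def ke(m, v):
    return 0.5 * m * v * v

# Equal masses, particle 2 initially at rest: the velocities swap.
v1_new, v2_new = elastic_collision_1d(1.0, 3.0, 1.0, 0.0)
print(v1_new, v2_new)                     # 0.0 3.0
print(ke(1.0, 3.0) + ke(1.0, 0.0))        # total KE before: 4.5
print(ke(1.0, v1_new) + ke(1.0, v2_new))  # total KE after:  4.5
```

Each particle's kinetic energy changed completely (4.5 → 0 and 0 → 4.5) even though the total is conserved, which is why individual kinetic energies are not constant under elastic collisions.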

JohnnyGui said:
I have a hard time grasping "rest frame of the gas as a whole" because a gas consists of particles going in different directions and thus each particle having its own rest frame.

You're confused about frames. I can pick any frame I like to analyze the situation; there is no need to use a different frame for every particle just because each particle has a different velocity. The rest frame of the gas as a whole is the frame in which the center of mass of the gas as a whole is at rest. When we talk about the temperature of a gas being the average kinetic energy of its particles, we mean the average kinetic energy in that frame, the frame in which the center of mass of the gas is at rest. And in that frame, virtually all collisions will change the kinetic energies of both particles.
 
  • #83
@PeterDonis : Thank you for the clear explanation. I think I understand it now.

BvU said:
Either. Only precisely head-on collisions of equal coins with equal but opposite velocities conserve the kinetic energies of both coins. Chance of one in very, very many.

Ah, this explains it for me. I was not aware of this.

So, the number of states ##n_s(p)## for a particular momentum ##p## is given by:
$$n_s(p) =\frac{2\pi p^2 \cdot L^2}{h^2}$$
I have read about Boltzmann's and Maxwell's derivations for the number of particles with a particular momentum when the allowed momenta are discrete. If the allowed momenta are very closely packed together, is it also correct to deduce the number of particles ##n_i## having a particular momentum ##p_i## to be:
$$n_i = F(p_i) \cdot n_s(p_i) = F(p_i) \cdot \frac{2\pi p_i^2 \cdot L^2}{h^2}$$
Where ##F(p_i)## is the number of particles at a particular momentum ##p_i##, but per single microstate.
I am aware it is usually written in the form of a State Density, but I was wondering if this approach is also correct.
 
  • #84
BvU said:
You'll have a hard time finding solutions for the Schroedinger equation in this funny case !

I don't see why that would be difficult in many cases.

First, solve the Schrodinger equation for a box of dimensions Lx2 by Ly2, and record the wavenumber constants 'k' in x and y; e.g., record k for the lowest state n in each direction.

So long as the differences in length, Lx - Lx2 and Ly - Ly2, are integer multiples of the recorded wavelength (for each respective axis), then I think the same wavelength must correctly solve the extended box along each axis.

The reason is simple: sine-wave solutions for standing waves are zero at the walls, and they happen to be zero at the points where the walls "might" have existed if the box were reduced to dimensions Lx2 by Ly2.

Therefore, I'm sure any infinite well/rigid box can be extended by an integer multiple of wavelengths at points where the sine waves are naturally zero, without making a solution to the Schrodinger equation impossible.

The notation of 'n' can be confounded by the different lengths of the box, but the ability to solve the Schrodinger equation is not made impossible just because traditional notation can be confounded.

vanhees71 said:
To the contrary! In this container (i.e., the one with rigid boundary conditions) the position is well defined as a self-adjoint operator, but momentum is not. There are thus also no momentum eigenstates.

vanhees71, I pointed out in another thread that your proof appears to depend on over-specifying the number of boundary conditions used to compute the domain of a function. I suspect your complaint is probably a mathematical fiction caused by over-specifying the boundary conditions.

When we only require that the value at the wall be the same as at the opposing wall, [itex]\psi(+a,y)=\psi(-a,y)[/itex], we have already given enough boundary conditions to determine that the momentum is a self-adjoint operator for the specified axis. The boundary condition can be repeated for each axis, showing each one to be independently self-adjoint. That is to say, when we only *require* that the wave function be periodic, and not that it is also zero, we get a bigger domain than if we try to restrict the wave function to having a specific value at a periodic boundary. When we solve the *general* case for *any* value at the periodic boundary (the wall is one such boundary), the proof will come out with the momentum operator being self-adjoint. But the proof will fail if we try to specify a particular value at the wall (even if we *know* what it should be).

Again, by analogy: we *know* that in any test of Young's double-slit experiment, if we try to specify mathematically that the particle must have a *probability* of *zero* to be found in one of the slits, we would destroy the solution's interference pattern that is the well-known result of the experiment. E.g., you can put in mathematical boundary conditions that you are *sure* are true (when tested) that will destroy the ability of the Schrodinger equation to produce results consistent with experiment.

My understanding of the idea of self-adjointness is essentially that it proves the imaginary part of psi is canceled out when computing expectation values.

Operators work on psi by multiplication after differentiation, and self-adjointness is required for the final product(s) to sum up to a purely real expectation value.

If only a single point's product (somewhere on psi) is computed, the idea of self-adjointness is demonstrated, given real constants a, b, by the complex product on the left side of this next equation always being real:

eg: [itex]( a - ib )^{1/2}* ( a + ib )^{1/2}= ( a^2 + b^2 )^{1/2}[/itex]

I've chosen to represent psi as the square root of a complex number because, in some sense, psi is the momentum of the particle, and its square is the kinetic energy in classical physics: [itex]p^2 / 2m = T[/itex]

For self-adjointness of functions, it is not required that the result of the multiplication be purely real at every point, but only that the *sum* (or integral) of the results cancels out the imaginary portion. However, the condition of self-adjointness is trivially met when b=0 everywhere.

Since I can give a time-invariant solution to the Schrodinger equation with a psi that is purely *real* (b=0) in the case of an infinite-well box, where exactly does your claim of failure to be self-adjoint come from?

If I naively compute the momentum operator on an infinite well and get an integral of a product that has a purely real result when evaluated, why should I believe that self-adjointness is not true? E.g., as opposed to believing you've over-specified a problem, and thereby made it insoluble by a mathematical proof that is perhaps flawed in cases having more boundary conditions than there are unknowns that *must* be solved for?

To solve for N unknowns in linear equations, I only need N independent equations. If I put in N+1 equations, then depending on the textbook, the proofs for an algorithm solving a linear set of equations may or may not be valid. We need to know the chain of reasoning used in the proofs whenever working with more equations than we have unknowns to solve for, in order to know the proof is valid.
 
  • #85
I have a question about calculating the number of particles at a particular energy level using Boltzmann Statistics in case of discrete energy levels.

For the number of particles ##n_i## at a particular discrete energy level ##E_i##, I understand that according to Boltzmann this is given by:
$$n_i = \frac{N}{\sum_{i=0}^\infty e^{\frac{-E_i}{k_B T}}} \cdot e^{\frac{-E_i}{k_B T}}$$
My question is: does this formula take into account the number of possible quantum states at that particular energy level ##E_i##, or does it only give the number of particles for just 1 quantum state at that energy level?
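As a quick numerical sketch of how this formula behaves (my own illustration: the energy levels and temperature below are hypothetical, chosen only so the Boltzmann factor ##e^{-E_i/(k_B T)}## is of order one):

```python
import math

# Illustrative sketch: occupation numbers n_i = (N/Z) * exp(-E_i/(kB*T))
# for a hypothetical set of discrete energy levels. The levels and the
# temperature here are made up for illustration.

kB = 1.380649e-23  # Boltzmann constant, J/K

def occupations(energies, N, T):
    """Number of particles in each level under Boltzmann statistics."""
    weights = [math.exp(-E / (kB * T)) for E in energies]
    Z = sum(weights)                  # partition function
    return [N * w / Z for w in weights]

levels = [0.0, 1e-21, 2e-21, 3e-21]  # J, evenly spaced (hypothetical)
n = occupations(levels, N=1000, T=300.0)
print([round(x, 1) for x in n])      # monotonically decreasing with E_i
print(sum(n))                        # sums back to N = 1000 (up to float rounding)
```

Note that by construction the occupations sum to ##N##; the formula by itself says nothing about how many quantum states share each ##E_i##.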
 
  • #86
JohnnyGui said:
I have a question about calculating the number of particles at a particular energy state using Boltzmann Statistics

Boltzmann statistics are classical, not quantum.

JohnnyGui said:
does this formula take into account the number of possible quantum states at that particular energy state ##E_i##?

No; it can't, because, as above, Boltzmann statistics are classical, not quantum.
 
  • #87
PeterDonis said:
Boltzmann statistics are classical, not quantum.
No; it can't, because, as above, Boltzmann statistics are classical, not quantum.

Does this mean that the mentioned formula for ##n_i## can be multiplied by the number of quantum states at that energy level in order to get the "true" number of particles at that energy level?
 
  • #88
JohnnyGui said:
Does this mean that the mentioned formula for ##n_i## can be multiplied by the number of quantum states at that energy level in order to get the "true" number of particles at that energy level?

No. Apparently you didn't grasp what "Boltzmann statistics are classical, not quantum" means. Not only that, but ##n_i## is, by definition, the number of particles with energy ##E_i##, as you yourself said in your previous post, so I have no idea why you would think you can get a "true" number of particles by multiplying it by something else.
 
  • #89
PeterDonis said:
No. Apparently you didn't grasp what "Boltzmann statistics are classical, not quantum" means. Not only that, but ##n_i## is, by definition, the number of particles with energy ##E_i##, as you yourself said in your previous post, so I have no idea why you would think you can get a "true" number of particles by multiplying it by something else.

Because you said it can't take into account the number of quantum states at a particular energy level, letting me think that the classical approach would give an erroneous number of particles in the case of a quantum approach, for which it should somehow be corrected. Furthermore, the Boltzmann factor is combined with the number of quantum states to derive a formula when energy levels are considered continuous, making me think that perhaps ##n_i(E_i)## should be corrected that way.

This video shows that (part of) the Boltzmann formula is multiplied by the number of states at a particular energy level ##\rho(\epsilon)## (the ##\rho(\epsilon)## is discussed in his previous video).
 
  • #90
JohnnyGui said:
Because you said it can't take into account the number of quantum states at a particular energy level

Can you give a specific quote? It's been a while.

JohnnyGui said:
letting me think that the classical approach would give an erroneous number of particles

If by "erroneous" you mean "different than the number that quantum statistics would give", of course it does. That's why we don't use Boltzmann statistics when the difference between them and the correct quantum statistics is important.

JohnnyGui said:
for which it should be corrected somehow

You don't "correct" Boltzmann statistics if you want correct answers when quantum effects are important. You just use the correct quantum statistics instead.

JohnnyGui said:
the Boltzmann factor is combined with the number of quantum states to derive a formula when energylevels are considered continuous

Can you give a reference? (Preferably a written one, not a video; it takes a lot more time to extract the relevant information from a video than it does from a written article or paper.)
 
  • #91
PeterDonis said:
Can you give a specific quote? It's been a while
I was referring to your answer "No, it can't" in your previous post #86, when I asked "Does this formula take into account the number of possible quantum states at that particular energy state ##E_i##?"

PeterDonis said:
Can you give a reference? (Preferably a written one, not a video; it takes a lot more time to extract the relevant information from a video than it does from a written article or paper.)

Ok, I couldn't find on paper the exact way the lecturer did it, but I'll try to write a summary of what he did, since I'm curious whether his method is correct or not. His method does result in the correct Maxwell distribution formula.

Boltzmann derived classically that the number of particles ##n_i## with a particular discrete energy level ##E_i## is:
$$n_i = \frac{N}{\sum_{i=0}^\infty e^{\frac{-E_i}{k_B T}}} \cdot e^{\frac{-E_i}{k_B T}}$$
I was able to derive this one.

Furthermore, I tried to derive by myself the number of particles if energy is considered continuous; let's call this number ##n## to separate it from Boltzmann's ##n_i##, which is used for discrete energy levels. I deduced that ##n## is equal to the density of quantum states function ##D(E)## times ##dE##, multiplied by some function ##F'(E)## times ##dE##. The ##F'(E)## is the number of particles per 1 quantum state per 1 ##E##; so it's basically the particle number density at a particular ##E##, per 1 quantum state of that ##E##. Both ##D(E)## and ##F'(E)## are derivatives of cumulative functions.
We already discussed that ##D(E) \cdot dE = \frac{V \cdot 2^{2.5}\cdot \pi \cdot m^{1.5}}{h^3} \cdot \sqrt{E} \cdot dE##. So that ##n## would be:
$$n = D(E) \cdot dE \cdot F'(E) \cdot dE = \frac{V \cdot 2^{2.5}\cdot \pi \cdot m^{1.5}}{h^3} \cdot \sqrt{E} \cdot dE \cdot F'(E) \cdot dE$$
Here comes the part that I don't get. The lecturer in the video states all of a sudden that:
$$F'(E) \cdot dE = \frac{N}{\sum_{i=0}^\infty e^{\frac{-E_i}{k_B T}}} \cdot e^{\frac{-E_i}{k_B T}}$$
So according to him, the number of particles in a continuous energy spectrum is given by:
$$n = \frac{V \cdot 2^{2.5}\cdot \pi \cdot m^{1.5}}{h^3} \cdot \sqrt{E} \cdot \frac{N}{\sum_{i=0}^\infty e^{\frac{-E_i}{k_B T}}} \cdot e^{\frac{-E_i}{k_B T}} \cdot dE$$
Notice how he basically combined Boltzmann's classical formula (with discrete energy levels) with the density of quantum states function ##D(E)##.
You can also see http://hep.ph.liv.ac.uk/~hock/Teaching/StatisticalPhysics-Part3-Handout.pdf (on sheet number 8) that this is done more or less the same way, combining the Boltzmann factor with the density of states.

I have continued working with that formula nonetheless. Integrating it to infinity gives me a constant ##C## that should be equal to the total number of particles ##N##. The probability of finding a particle with energy between ##E## and ##E + dE## is equal to ##\frac{n}{N}##. Writing ##n## in terms of the previous formula and ##N## in terms of ##C##, and then simplifying, gives me a probability density as a function of ##E## that is exactly the same as Wiki states:

[Attached image: Distribution.png]
I'd really like to understand how it is allowed to substitute the continuous function ##F'(E)## with the classical Boltzmann formula, in which energy levels are considered discrete, combine it with the quantum density of states formula, and then get a valid formula out of it. Is there a way to explain this?
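As a sanity check on that last step (my own sketch, not the lecturer's code): all the constants in front (##V##, ##m##, ##h##) cancel when the expression is normalized, so numerically normalizing ##\sqrt{E}\,e^{-E/(k_B T)}## should reproduce the Wikipedia form of the Maxwell-Boltzmann energy distribution, ##f(E) = \frac{2}{\sqrt{\pi}}(k_B T)^{-3/2}\sqrt{E}\,e^{-E/(k_B T)}##. The temperature below is an arbitrary illustrative choice.

```python
import math

# Illustrative check (assuming an ideal gas and Boltzmann statistics):
# normalizing sqrt(E)*exp(-E/(kB*T)) over E in [0, inf) should reproduce
# the Maxwell-Boltzmann energy distribution
#   f(E) = (2/sqrt(pi)) * (kB*T)**-1.5 * sqrt(E) * exp(-E/(kB*T)).
# All constants in front (V, m, h) cancel in the normalization.

kB = 1.380649e-23  # J/K
T = 300.0

def unnormalized(E):
    return math.sqrt(E) * math.exp(-E / (kB * T))

# Crude midpoint-rule normalization over a range wide enough to capture
# essentially all of the distribution (~30 kB*T).
n_steps = 200000
E_max = 30 * kB * T
dE = E_max / n_steps
norm = sum(unnormalized((i + 0.5) * dE) for i in range(n_steps)) * dE

E_test = kB * T  # probe the density at E = kB*T
f_numeric = unnormalized(E_test) / norm
f_exact = (2 / math.sqrt(math.pi)) * (kB * T) ** -1.5 \
          * math.sqrt(E_test) * math.exp(-E_test / (kB * T))
print(f_numeric, f_exact)  # the two values closely agree
```

The agreement shows that the prefactors really do drop out once the expression is normalized to the total particle number.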
 

  • #92
JohnnyGui said:
was referring to your answer "No, it can't" in your previous post #86 when I asked "Does this formula take into account the number of possible quantum states at that particular energy state ##E_i##?"

Ok, but that's just because the Boltzmann formula is classical. Obviously a classical formula can't take into account a quantum phenomenon. But you also can't get a correct answer by just multiplying the classical formula by the number of quantum states; why would you expect that to work?
 
  • #93
PeterDonis said:
Ok, but that's just because the Boltzmann formula is classical. Obviously a classical formula can't take into account a quantum phenomenon. But you also can't get a correct answer by just multiplying the classical formula by the number of quantum states; why would you expect that to work?

Perhaps you are already reading and replying, but as for your last question, please see the second part of my previous post. Also, perhaps my question is better formulated as: is the number of particles at a particular energy level that is calculated by the Boltzmann formula distributed over the possible quantum states of that energy level?
 
  • #94
JohnnyGui said:
You can also see http://hep.ph.liv.ac.uk/~hock/Teaching/StatisticalPhysics-Part3-Handout.pdf(on sheet number 8) that this is done more or less the same way, combining the Boltzmann factor with the States Density.

That's not what is being done. The continuous state density is substituted for the discrete Boltzmann factor, not multiplied by it. That's what the right arrow in equation (13) means. Basically the assumption is that the energies of the states are close enough together that they can be approximated by a continuum. This is a common assumption for systems with very large numbers of particles (for example, a box of gas one meter on a side at room temperature has something like ##10^{25}## particles in it).
 
  • #95
JohnnyGui said:
Is the number of particles at a particular energy level, calculated by the Boltzmann formula, distributed over the possible quantum states of that energy level?

No. The two numbers have nothing to do with each other. One is a classical approximation. The other is a quantum result. You can't just mix them together. As I said before, if you want a correct quantum answer, you should not be using the classical Boltzmann formula at all. You should be using the correct quantum distribution (Bose-Einstein or Fermi-Dirac, depending on what kind of particles you are dealing with).
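For reference, the three mean occupation numbers per state can be compared side by side in a short sketch (these are the standard textbook formulas, written in terms of the dimensionless energy ##x = (\epsilon - \mu)/(k_B T)##; the sample values of ##x## are arbitrary):

```python
import math

# Illustrative comparison of the mean occupation number per state under
# Maxwell-Boltzmann (classical), Bose-Einstein, and Fermi-Dirac statistics.
# x = (E - mu)/(kB*T) is the dimensionless energy above the chemical potential.

def f_mb(x):
    return math.exp(-x)            # classical Boltzmann factor

def f_be(x):
    return 1.0 / (math.exp(x) - 1.0)  # bosons

def f_fd(x):
    return 1.0 / (math.exp(x) + 1.0)  # fermions

for x in (0.5, 2.0, 10.0):
    print(x, f_mb(x), f_be(x), f_fd(x))
# For large x all three agree: the classical Boltzmann formula is the
# dilute (low-occupancy) limit of both quantum distributions.
```

This is why Boltzmann statistics work well for ordinary gases: in the dilute limit the quantum corrections to the occupation per state become negligible.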
 
  • #96
JohnnyGui said:
Boltzmann derived classically that the number of particles ##n_i## with a particular discrete energy level ##E_i## is:

$$
n_i = \frac{N}{\sum_{i=0}^\infty e^{\frac{-E_i}{k_B T}}} \cdot e^{\frac{-E_i}{k_B T}}
$$

I was able to derive this one.

How did you derive it? And what makes you think the derivation is classical? Discrete energy levels indicate a quantum system (more precisely, a quantum system that is bound, i.e., confined to a finite region of space), not a classical one.
 
  • #97
PeterDonis said:
That's not what is being done. The continuous state density is substituted for the discrete Boltzmann factor, not multiplied by it. That's what the right arrow in equation (13) means. Basically the assumption is that the energies of the states are close enough together that they can be approximated by a continuum. This is a common assumption for systems with very large numbers of particles (for example, a box of gas one meter on a side at room temperature has something like ##10^{25}## particles in it).

A part of the continuous state density is substituted by the Boltzmann factor (see also my previous post, in which ##F'(E) \cdot dE## is substituted). The Boltzmann factor is then multiplied by the density of states within the integration. I can't see how a part of a classical approach can be mixed with a part of a quantum approach (the density of states) when you said that it is not possible to mix them.

Edit: Typing a reply to your latest post, just a moment..
 
  • #98
PeterDonis said:
How did you derive it? And what makes you think the derivation is classical? Discrete energy levels indicate a quantum system (more precisely, a quantum system that is bound, i.e., confined to a finite region of space), not a classical one.

This is the Boltzmann formula that I was talking about the whole time. You made me think it was classical since you said that Boltzmann statistics are classical in your post #86. I'm not sure now which Boltzmann statistics you were referring to as classical.
 
  • #99
JohnnyGui said:
A part of the continuous state density is substituted by the Boltzmann factor (see also my previous post in which ##F'(E) \cdot dE## is substituted). The Boltzmann factor is then multiplied by the density of states within the integration.

That's not what's being done in the reference you linked to. You need to read it more carefully. See below.

JohnnyGui said:
This is the Boltzmann formula that I was talking about the whole time.

And that formula does not appear at all in the reference you linked to after equation (13). Equation (13) in that reference describes removing that formula, which involves a sum over discrete energy levels, and putting in its place a continuous integral; this amounts to ignoring quantum effects (which are what give rise to discrete energy levels) and assuming the energy per particle is continuous. There is no "Boltzmann factor" involving a sum over discrete energy levels anywhere in the distribution obtained from the integral.

JohnnyGui said:
You made me think it was classical since you said that Boltzmann statistics are classical in your post #86. I'm not sure now which Boltzmann statistics you were referring to as classical.

That's because we've been using the term "Boltzmann" to refer to multiple things. To be fair, that is a common thing to do, but it doesn't help with clarity.

Go back to this statement of yours:

JohnnyGui said:
Boltzmann derived classically that the number of particles ##n_i## with a particular discrete energy level ##E_i## is

This can't be right as you state it, because, as I've already said, classically there are no discrete energy levels. The only way to get discrete energy levels is to assume a bound system and apply quantum mechanics. So any derivation that results in the formula you give cannot be classical.

Here's what the reference you linked to is doing (I've already stated some of this before, but I'll restate it from scratch for clarity):

(1) Solve the time-independent Schrodinger Equation for a gas of non-interacting particles in a box of side ##L## to obtain an expression for a set of discrete energy levels (equations 10 and 11).

(2) Write down the standard partition function for the system with those discrete energy levels in terms of temperature (equation 12).

(3) Realize that that partition function involves a sum that is difficult to evaluate, and replace the sum with an integral over a continuous range of energies (equation 13 expresses this intent, but equation 22 is the actual partition function obtained, including the integral, after the density of states function ##g(\varepsilon)## is evaluated).

Step 3 amounts to partly ignoring quantum effects; but they're not being completely ignored, because the density of states ##g(\varepsilon)## is derived assuming that the states in momentum space (##k## space) are a discrete lattice of points, which is equivalent to assuming discrete energies. But the replacing of the sum by the integral does require that the energies are close enough together that they can be approximated by a continuum, which, again, amounts to at least partly ignoring quantum effects.
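The quality of that sum-to-integral replacement is easy to check numerically. The sketch below (my own illustration; the mass and box size are hypothetical, roughly a helium atom in a one-micron 1D box) compares the discrete partition sum over particle-in-a-box levels with the continuum result ##Z = L\sqrt{2\pi m k_B T}/h##:

```python
import math

# Illustrative check of step (3): for a 1D particle in a box the discrete
# partition sum Z = sum_n exp(-eps_n/(kB*T)), with eps_n = h^2 n^2/(8 m L^2),
# is well approximated by the continuum integral Z = L*sqrt(2*pi*m*kB*T)/h
# when the level spacing is tiny compared to kB*T. The mass and box size
# are hypothetical (roughly a helium atom in a 1-micron box).

h = 6.62607015e-34   # Planck constant, J s
kB = 1.380649e-23    # Boltzmann constant, J/K
m = 6.64e-27         # kg (about one helium atom)
L = 1e-6             # m
T = 300.0            # K

a = h**2 / (8 * m * L**2 * kB * T)   # eps_n/(kB*T) = a * n^2

# Terms beyond n ~ 200000 are utterly negligible here (a*n^2 >> 1).
Z_sum = sum(math.exp(-a * n * n) for n in range(1, 200000))
Z_integral = L * math.sqrt(2 * math.pi * m * kB * T) / h

print(Z_sum, Z_integral)  # the sum and the integral closely agree
```

With these numbers the level spacing near the thermally occupied states is many orders of magnitude smaller than ##k_B T##, which is exactly the regime in which replacing the sum by the integral is justified.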

However, note equation 25 in the reference, which is an equation for the number of particles with a particular energy:

$$
n_j = \frac{N}{Z} e^{\frac{- \varepsilon_j}{kT}}
$$

This formula actually does not require the energies to be discrete; the subscript ##j## is just a way of picking out some particular value of ##\varepsilon## to plug into the formula. The formula can just as easily be viewed as defining a continuous function ##n(\varepsilon)## for the number of particles as a function of energy; or, as is often done, we can divide both sides by ##N##, the total number of particles, to obtain the fraction of particles with a particular energy, which can also be interpreted as the probability of a particle having a particular energy:

$$
f(\varepsilon) = \frac{1}{Z} e^{\frac{- \varepsilon}{kT}}
$$

Then you can just plug in whatever you obtain for ##Z## (for example, equation 24 in the reference). This kind of function is what Boltzmann worked with in his original derivation, and he did not know how to derive a specific formula for ##Z## from quantum considerations, as is done in the reference you give, because, of course, QM had not even been invented yet when he was doing his work. As far as I know, he and others working at that time used the classical formula for ##Z## in terms of the free energy ##F##:

$$
Z = e^{\frac{-F}{kT}}
$$

which of course looks quite similar to the above; in fact, you can use this to rewrite the function ##f## from above as:

$$
f(\varepsilon) = e^{\frac{F - \varepsilon}{kT}}
$$

which is, I believe, the form in which it often appears in the literature from Boltzmann's time period. Note that this form is purely classical, requiring no quantum assumptions; you just need to know the free energy ##F## for the system, which classical thermodynamics had ways of deriving for various types of systems based on other thermodynamic variables.
 
  • #100
I will further read on the detailed second part of your post about the method, thanks for that. I wanted to clear the following out of the way first:

PeterDonis said:
And that formula does not appear at all in the reference you linked to after equation (13).

I never referenced anything after equation (13). My formula appears on the very first sheet in the link, and equation (13) was the equation I was asking about.

PeterDonis said:
That's because we've been using the term "Boltzmann" to refer to multiple things. To be fair, that is a common thing to do, but it doesn't help with clarity.

The first time you said that Boltzmann statistics are classical (post #86) is in response to my question about the formula for discrete energy levels shown in post #85, hence me thinking that formula is classical.

PeterDonis said:
This can't be right as you state it, because, as I've already said, classically there are no discrete energy levels.

Again, I called it "classical" as a consequence of you calling it classical.

PeterDonis said:
There is no "Boltzmann factor" involving a sum over discrete energy levels anywhere in the distribution obtained from the integral.

The "Boltzmann factor" I'm referring to is the ##e^{\frac{-E}{kT}}## which is contained within the integral of equation (13). This factor is also present in the Boltzmann formula for discrete energy values, hence my wondering about how it can be used for a continuous approach. But perhaps you have already explained that in the second part of your post, which I will read now.
 
  • #101
JohnnyGui said:
I never referred to anything after equation (13).

Yes, I know; that's part of my point. The part after equation (13) can't be left out, because that's where the actual derivation of the partition function is done. The discrete formula given prior to that is not used at all.

JohnnyGui said:
The first time you said that Boltzmann statistics are classical (post #86) was in response to my question about the formula for discrete energy levels shown in post #85, which is why I thought that formula was classical.

Yes, sorry for the confusion. I didn't catch at that point that you were using a discrete formula.

JohnnyGui said:
wondering about how it can be used for a continuous approach. But perhaps you have already explained that in the second part of your post.

Yes, read on!
 
  • #102
I have read your explanation; it brought up two more questions before I can understand this better.

Question 1

PeterDonis said:
This formula actually does not require the energies to be discrete; the subscript ##j## is just a way of picking out some particular value of ##\varepsilon## to plug into the formula.

If energy is considered continuous, doesn't this mean that the formula for ##n_j## must be replaced with the derivative of a cumulative function of the number of particles? After all, this is what is done for the states: the density of states ##g(\epsilon)## times ##d\epsilon## is used within the integral, which gives the number of states between ##\epsilon## and ##\epsilon + d\epsilon##. Why isn't it done like that for ##n_j##?

Question 2

I just noticed that sheet 18 in my http://hep.ph.liv.ac.uk/~hock/Teaching/StatisticalPhysics-Part3-Handout.pdf shows a relevant part about the formula I mentioned, so it is not shown only on the first sheet; right above equation 35 it says that the formula... $$n(\epsilon) = \frac{N}{Z} \cdot e^{-\frac{\epsilon}{kT}}$$
...is actually the number of particles per state, which partly answers my question in post #93. However, since the formula in that sheet treats energy as continuous (notice the ##\epsilon##), is this exact interpretation also valid for a discrete energy level ##\epsilon_j##? If not, how does the interpretation of the very same formula change merely by treating energy as continuous rather than discrete?
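To make my question concrete, here is a small numerical sketch of the discrete version of that formula; the energy levels and particle count below are made up purely for illustration:

```python
import math

# Hypothetical discrete energy levels, in units of kT (illustration only).
levels = [0.0, 1.0, 2.0, 3.0]
N = 1000.0  # total number of particles (made-up value)

# Partition function: sum of Boltzmann factors over the discrete levels.
Z = sum(math.exp(-e) for e in levels)

# Occupation of each level: n_j = (N/Z) * exp(-eps_j / kT), with kT = 1.
n = [N / Z * math.exp(-e) for e in levels]

# The occupations add back up to the total particle number.
print(sum(n))  # ≈ 1000.0
```

By construction the sum over levels returns ##N##, since ##Z## is exactly the normalizing sum of the Boltzmann factors.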
 
  • #103
JohnnyGui said:
If energy is considered continuous, doesn't this mean that the formula for ##n_j## must be replaced with a derivative of a cumulative function of the number of particles

What formula for ##n_j## are you talking about? Also, you do understand that evaluating the integral gives you a continuous function for the number of particles as a function of the energy?

JohnnyGui said:
is this exact interpretation of the formula also valid for a discrete energy level ##\epsilon_j##?

Why wouldn't it be?
 
  • #104
PeterDonis said:
What formula for ##n_j## are you talking about? Also, you do understand that evaluating the integral gives you a continuous function for the number of particles as a function of the energy?

I made a typo; I am referring to ##n(\epsilon) = \frac{N}{Z} \cdot e^{-\frac{\epsilon}{kT}}##, which is multiplied by ##g(\epsilon) \cdot d\epsilon## to give the number of particles between ##\epsilon## and ##\epsilon + d\epsilon##, as the link and the video show:
$$n(\epsilon) \cdot g(\epsilon) \cdot d\epsilon = \frac{N}{Z} \cdot e^{\frac{-\epsilon}{kT}} \cdot g(\epsilon) \cdot d\epsilon$$
From what I understand, an integral gives a continuous function of energy if the derivative of a cumulative function is integrated. This is indeed done for the number of states: the derivative of the volume of a sphere in energy space, ##g(\epsilon)##, sits inside the integral.
But since energy is continuous, why isn't ##g(\epsilon) \cdot d\epsilon## multiplied by the number density per ##\epsilon## instead of ##n(\epsilon)## within the integral?
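For concreteness, here is a numerical sketch of that continuous formula, using a hypothetical density of states ##g(\epsilon) \propto \sqrt{\epsilon}## (the 3D free-particle form with constants dropped) and ##kT = 1##; the grid and particle count are made up purely for illustration:

```python
import math

# Hypothetical density of states: g(eps) ∝ sqrt(eps), constants dropped.
def g(eps):
    return math.sqrt(eps)

N = 1000.0   # total number of particles (made-up value)
de = 1e-4    # energy step, in units of kT
grid = [i * de for i in range(1, 200000)]  # eps from ~0 up to ~20 kT

# Z here is the integral of g(eps) * exp(-eps) d(eps); analytically it
# equals Gamma(3/2) = sqrt(pi)/2 for this choice of g.
Z = sum(g(e) * math.exp(-e) * de for e in grid)

# Number of particles between eps and eps + d(eps): (N/Z) e^{-eps} g(eps) d(eps).
# Summed over all eps, it must return N, since Z normalizes the integral.
total = sum(N / Z * math.exp(-e) * g(e) * de for e in grid)
print(Z, total)  # ≈ 0.8862 and ≈ 1000.0
```

The point the numbers illustrate: the Boltzmann factor alone is a per-state weight; only after multiplying by ##g(\epsilon)\,d\epsilon## does it count particles in an energy interval.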

PeterDonis said:
Why wouldn't it be?

Because you denied that statement in post #95, and I wanted to make sure that denial was part of the earlier misconception as well.
Furthermore, I noticed that the link and the video do not give this interpretation when deriving Boltzmann's formula for discrete energy levels, hence me wanting to make sure.
 
Last edited:
  • #105
JohnnyGui said:
why isn't ##g(\epsilon) \cdot d\epsilon## multiplied by the number density per ##\epsilon## instead of ##n(\epsilon)## within the integral?

It depends on whether you want the number of particles or the fraction of particles. You could just as easily divide by the total number of particles ##N## and have the fraction of particles instead of the number. The math is the same either way (since ##N## is a constant so it doesn't affect how you do the integral). And none of this has anything to do with the continuous vs. discrete question.
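A few lines of arithmetic make this explicit (a sketch with made-up levels): dividing each occupation number by ##N## turns the distribution of particle numbers into a distribution of fractions, and the constant ##N## plays no role in the normalization.

```python
import math

# Hypothetical energy levels in units of kT (illustration only).
levels = [0.0, 0.5, 1.3, 2.0]
N = 500.0  # made-up total particle number
Z = sum(math.exp(-e) for e in levels)

numbers = [N / Z * math.exp(-e) for e in levels]  # number of particles per level
fractions = [x / N for x in numbers]              # fraction of particles per level

print(sum(numbers), sum(fractions))  # ≈ 500.0 and ≈ 1.0
```

Note that `fractions` would come out the same for any value of `N`; the continuous-vs-discrete question never enters.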

JohnnyGui said:
Because you denied that statement in post 95

No, I didn't. I denied a different statement, which is not part of what we are currently talking about.

JohnnyGui said:
I wanted to make sure that denial was part of the earlier misconception as well.

I guess the answer to this would be "yes" given the above.

JohnnyGui said:
I noticed that the link and the video do not tell this interpretation when deriving Boltzmann's formula for discrete energy levels

The link you give doesn't derive Boltzmann's formula for discrete energy levels (equation 12) at all. It just assumes it.
 
