Archive for July, 2009

Physics Friday 83

July 31, 2009

Consider a crystal of N atoms. Each atom has two accessible states: a ground state, of energy 0; and an excited state of energy ε. We note that the total energy of the crystal is $E_n = n\varepsilon$, where there are n atoms in the excited state and N−n in the ground state. The number of different ways to achieve this, which gives the degeneracy of the energy level $E_n = n\varepsilon$, is just the binomial coefficient $\binom{N}{n}$. One could use this, along with the Boltzmann entropy equation $S = k\ln\binom{N}{n}$, to obtain a formula for entropy as a function of energy, and using $\frac{1}{T} = \frac{\partial S}{\partial U}$ (along with Stirling’s approximation), one could find the energy U for equilibrium at temperature T. Go read this post to see exactly how this is done (let $\varepsilon_a = 0$ and $\varepsilon_b = \varepsilon$).

Instead, this time let’s compute the canonical partition function $Z = \sum_s e^{-\beta E_s}$. Now, the energy is $E_n = n\varepsilon$, and the degeneracy is $\binom{N}{n}$. Thus:

$$Z = \sum_{n=0}^{N} \binom{N}{n} e^{-\beta n\varepsilon} = \sum_{n=0}^{N} \binom{N}{n} \left(e^{-\beta\varepsilon}\right)^n$$

Note, however, that by the binomial theorem, this is just the expansion of $\left(1 + e^{-\beta\varepsilon}\right)^N$, and so $Z = \left(1 + e^{-\beta\varepsilon}\right)^N$. Note that the partition function for one of the atoms is $z = 1 + e^{-\beta\varepsilon}$, so that the total partition function $Z = z^N$ is just the product of the partition functions of the individual atoms.
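As a quick numerical sanity check (a Python sketch with arbitrary illustrative values of N, β, and ε), the direct sum over levels matches the closed form:

```python
from math import comb, exp

def Z_direct(N, beta, eps):
    """Sum over levels E_n = n*eps with degeneracy C(N, n)."""
    return sum(comb(N, n) * exp(-beta * n * eps) for n in range(N + 1))

def Z_closed(N, beta, eps):
    """Closed form (1 + e^(-beta*eps))^N, the product of N single-atom z's."""
    return (1.0 + exp(-beta * eps)) ** N

N, beta, eps = 50, 2.0, 0.7
print(Z_direct(N, beta, eps))
print(Z_closed(N, beta, eps))   # the two agree
```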

This is an example of a more general property of the canonical partition function. If a system can be broken into elements, where the total energy is the sum of the energies of the elements, and where each element may occupy any of its particular states independently of the states of the other elements (such as our two-state crystal atoms above), then the partition function of the total system is the product of the partition functions of the individual elements. Considering a more general set of elements with more general states lets us prove this in a manner mathematically similar to the above: let the energy of the lth element in its mth particular state be denoted $\varepsilon_{lm}$. Note that these elements need not be the same, and need not have the same set of individual states. So then we have energies of the form $E = \varepsilon_{1m_1} + \varepsilon_{2m_2} + \varepsilon_{3m_3} + \cdots$, where $m_1$ is the index of individual states for the first element, $m_2$ is the index of individual states for the second element, and so on.
Then our partition function is the sum of the Boltzmann factors over all combinations of allowed values for $m_1$, $m_2$, $m_3$, etc.:

$$Z = \sum_{m_1}\sum_{m_2}\sum_{m_3}\cdots e^{-\beta\left(\varepsilon_{1m_1} + \varepsilon_{2m_2} + \varepsilon_{3m_3} + \cdots\right)} = \left(\sum_{m_1} e^{-\beta\varepsilon_{1m_1}}\right)\left(\sum_{m_2} e^{-\beta\varepsilon_{2m_2}}\right)\cdots = \prod_l Z_l$$

where $Z_l = \sum_m e^{-\beta\varepsilon_{lm}}$ is the partition function of the lth element.

Note that since the Helmholtz free energy is given by $F = -kT\ln Z$, this means that the Helmholtz free energy of the system is the sum of the Helmholtz free energies of the elements: $F = -kT\ln\prod_l Z_l = \sum_l\left(-kT\ln Z_l\right) = \sum_l F_l$.
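This product/sum property is easy to verify by brute force. The following sketch (with hypothetical, arbitrary level sets) enumerates every combined state of three non-identical elements and checks that Z factors and F adds:

```python
from itertools import product
from math import exp, log

kT = 1.5                     # k*T in arbitrary units (k = 1)
beta = 1.0 / kT

# Hypothetical level sets for three independent, non-identical elements
levels = [
    [0.0, 1.0],              # a two-state element
    [0.0, 0.4, 1.3],         # a three-state element
    [0.2, 0.9],
]

# Partition function as a sum over all combined states
Z_total = sum(exp(-beta * sum(es)) for es in product(*levels))

# Product of the individual elements' partition functions
Z_l = [sum(exp(-beta * e) for e in lv) for lv in levels]
Z_prod = 1.0
for z in Z_l:
    Z_prod *= z

# Helmholtz free energies: F = -kT ln Z
F_total = -kT * log(Z_total)
F_sum = sum(-kT * log(z) for z in Z_l)

print(Z_total, Z_prod)   # equal
print(F_total, F_sum)    # equal
```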

When we consider a molecular gas, as in many previous posts, we usually assume (as it makes a good approximation under normal conditions) that the translational, rotational, and vibrational modes of the molecules are independent, and that intermolecular collisions couple only to the translational modes. This means that we can separate the partition sum:

$$Z = Z_{\text{trans}} Z_{\text{rot}} Z_{\text{vib}}$$

This allows us to compute more clearly how properties such as heat capacity are affected by these internal modes than simply assuming a constant f ≥ 3 accessible degrees of freedom with equipartition of energy, as done here.


Monday Math 82

July 27, 2009

Consider writing a proper fraction (a positive rational number less than 1), in lowest terms, as a decimal. As many people learn in their mathematics education, there are three possible outcomes:
I). Terminating Decimal
One has a finite sequence of digits after the decimal point (followed by a non-written infinite sequence of zeroes).
Examples: 1/2=0.5, 1/8=0.125, 1/200=0.005
II). Pure Repeating Decimal
One has a finite sequence of digits which repeats infinitely many times after the decimal point.
Examples: 1/3=0.333333…, 1/7=0.142857142857…, 1/33=0.03030303…
III). Mixed Repeating Decimal
One has a finite sequence of non-repeating digits followed by infinite repeats of a different digit sequence.
Examples: 1/6=0.166666…, 1/22=0.0454545…, 43/180=0.23888888…

How do we determine which of these a given fraction will have without performing the division to actually compute the result? Let’s first examine the terminating decimals.
The digits of a decimal represent multiples of negative powers of 10. For our examples:
0.5=(5/10)=1/2, 0.125=(1/10)+(2/100)+(5/1000)=125/1000=1/8, and 0.005=(0/10)+(0/100)+(5/1000)=5/1000=1/200.
So we see the common denominator of the fractions represented by the digits of the decimal is that of the rightmost digit; if we have n digits, the common denominator is 10^n. Thus, the denominator of our fraction must divide this power of 10: 2|10, 8|1000, 200|1000. In fact, we see that 10^n is the smallest power of 10 which is divisible by our fraction’s denominator.
From this, we then obtain the condition for a fraction to give a terminating decimal: there must be a power of 10 divisible by the denominator of the fraction (in lowest terms). This occurs only if the denominator’s prime factors are 2 and/or 5: 2=2, 8=2^3, 200=2^3·5^2. Note that the larger of the exponents of 2 and 5 in the prime factorization of the denominator is the number of digits n.
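This criterion translates directly into code. Here is a small Python sketch (the function name is my own) that reduces the fraction, strips factors of 2 and 5 from the denominator, and reports the digit count n, or None if the decimal repeats:

```python
from math import gcd

def terminating_digits(p, q):
    """If p/q terminates as a decimal, return the number of digits after
    the decimal point; otherwise (repeating decimal) return None."""
    q //= gcd(p, q)            # reduce to lowest terms
    twos = fives = 0
    while q % 2 == 0:
        q //= 2
        twos += 1
    while q % 5 == 0:
        q //= 5
        fives += 1
    # terminating iff the reduced denominator is a product of 2s and 5s;
    # the digit count n is the larger of the two exponents
    return max(twos, fives) if q == 1 else None

print(terminating_digits(1, 2))    # 1  (0.5)
print(terminating_digits(1, 8))    # 3  (0.125)
print(terminating_digits(1, 200))  # 3  (0.005)
print(terminating_digits(1, 3))    # None
```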

Examination of the prime factorizations of the denominators of pure repeating decimals versus mixed repeating decimals will give us the distinguishing element:
the denominators of fractions that give pure repeating decimals are coprime with 10; they are divisible by neither 2 nor 5. For the mixed repeating decimals, the denominator is divisible by 2 and/or 5, but the prime factorization also contains other primes. From our examples:
6=2·3, 22=2·11, 180=2^2·3^2·5.

Note that for the mixed repeating decimals, the fraction denominators can be factored into the product of a number which divides a power of 10 and a number coprime with 10. For example, 6=2·3, 22=2·11, 180=20·9. If we call the number which divides the power of 10 a, and the number coprime with 10 b, then our mixed repeating decimal has as many non-repeating digits as the terminating decimal 1/a, and a repeated sequence of the same length as the repeated sequence of 1/b:
1/6: 1/2=0.5 has one digit, 1/3=0.333… has a one-digit repeat, so 1/6=0.1666… has one non-repeating digit and one repeating.
1/22: 1/2=0.5 has one digit, 1/11=0.090909… has a two-digit repeat, so 1/22=0.0454545… has one non-repeating digit and two repeating.
43/180: 1/20=0.05 has two digits, 1/9=0.1111… has a one-digit repeat, so 43/180=0.23888… has two non-repeating digits and one repeating.
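The full classification can be automated. In this Python sketch (names are my own), the non-repeating length is the larger exponent of 2 or 5 in the denominator, and the repeating length uses the standard fact that the period of 1/b, for b coprime to 10, is the multiplicative order of 10 modulo b:

```python
from math import gcd

def decimal_type(p, q):
    """Classify p/q (0 < p/q < 1) and return (non-repeating digits, period).
    period == 0 means the decimal terminates."""
    q //= gcd(p, q)                 # reduce to lowest terms
    twos = fives = 0
    while q % 2 == 0:
        q //= 2
        twos += 1
    while q % 5 == 0:
        q //= 5
        fives += 1
    pre = max(twos, fives)          # length of the non-repeating part
    if q == 1:
        return pre, 0               # terminating decimal
    # period = multiplicative order of 10 modulo the coprime part b
    period, r = 1, 10 % q
    while r != 1:
        r = (r * 10) % q
        period += 1
    return pre, period

print(decimal_type(1, 8))     # (3, 0): 0.125 terminates
print(decimal_type(1, 7))     # (0, 6): 0.(142857) pure repeating
print(decimal_type(43, 180))  # (2, 1): 0.23(8) mixed
```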

I’d Watch It…

July 25, 2009

Physics Friday 82

July 24, 2009

One might recall my previous post, where I considered a system in contact with a thermal reservoir with which it could exchange energy; by considering the entropy of the system+reservoir combination as a function of the system’s energy, we derived the Boltzmann factor. This method of analysing the statistical system, by considering the probabilities of the different states of a system in contact with a thermal reservoir with which it can exchange energy, is known as statistical mechanics in “canonical formalism” or in “Helmholtz representation.”
In particular, we noted that for system energy $E_1$ and total energy E, we could expand the entropy of the reservoir in Taylor series about E and solve for the number of reservoir microstates $N_2$:

$$N_2 = e^{S_2(E - E_1)/k} = e^{S_2(E)/k} e^{-E_1/(kT)}$$

or in terms of the thermodynamic beta $\beta = \frac{1}{kT}$,

$$N_2 = e^{S_2(E)/k} e^{-\beta E_1}$$

where $S_2(E)$ is the reservoir entropy when the total energy is entirely in the reservoir. The latter exponential factor, the only one which is a function of our system’s energy, is the Boltzmann factor.
We noted that the probability of the system having energy $E_1$ is proportional to $N_2$. In particular, if we let N be the total number of microstates of the system+reservoir at total energy E, then the probability that the system is in a state of energy $E_1$ is

$$P = \frac{N_2}{N}$$

and from the Boltzmann definition of entropy, the total entropy is $S(E) = k\ln N$.
Let U be the average value of the energy of our system. Then $S(E) = S_1(U) + S_2(E-U)$, and expanding our reservoir entropy in Taylor series about E−U (our equilibrium point) instead of E,

$$S_2(E - E_1) \approx S_2(E-U) + (U - E_1)\frac{\partial S_2}{\partial E} = S_2(E-U) + \frac{U - E_1}{T}$$

so that

$$N_2 = e^{S_2(E - E_1)/k} = e^{S_2(E-U)/k} e^{\beta(U - E_1)}$$

And so

$$P = \frac{N_2}{N} = e^{\left[S_2(E-U) - S(E)\right]/k} e^{\beta(U - E_1)}$$

and due to the additivity of the entropy of the system and reservoir, $S(E) - S_2(E-U) = S_1(U)$, so the probability becomes

$$P = e^{-S_1(U)/k} e^{\beta(U - E_1)} = e^{\beta\left[U - TS_1(U)\right]} e^{-\beta E_1} = e^{\beta F} e^{-\beta E_1}$$

where $F = U - TS_1(U)$ is the Helmholtz free energy of our system, and $e^{-\beta E_1}$ is, again, the Boltzmann factor.
We do not yet know the Helmholtz free energy, but we can compute it from the above. We note that the exponential it appears in plays the role of a normalization factor: summing the probability over all allowed states of our system, where in state s the system has energy $E_s$, we get:

$$\sum_s P_s = e^{\beta F} \sum_s e^{-\beta E_s} = 1 \quad\Rightarrow\quad e^{-\beta F} = \sum_s e^{-\beta E_s}$$

We denote this sum, called the “canonical partition function,” by Z. If we consider instead the sum over energy levels $E_i$, then we have

$$Z = \sum_i g_i e^{-\beta E_i}$$

where $g_i$ is the degeneracy of the energy level $E_i$.
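The two forms of the sum are equivalent, as a tiny numerical example (with an arbitrary made-up spectrum, in units where k = 1) illustrates:

```python
from math import exp

beta = 1.2   # arbitrary illustrative inverse temperature

# A hypothetical spectrum: six states, some sharing an energy level
state_energies = [0.0, 1.0, 1.0, 1.0, 2.5, 2.5]

# Sum over individual states s
Z_states = sum(exp(-beta * E) for E in state_energies)

# Equivalent sum over levels E_i with degeneracies g_i
degeneracies = {0.0: 1, 1.0: 3, 2.5: 2}
Z_levels = sum(g * exp(-beta * E) for E, g in degeneracies.items())

print(Z_states, Z_levels)   # equal
```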
[In classical mechanics, the parameters of a particle are continuous, and we can’t actually express the partition function as a sum, having to replace it with an integral via a “coarse graining” procedure (see here). In quantum mechanics, however, energy levels are discrete, and the above summation makes sense.]

Now, there are a number of thermodynamic variables we can extract from Z. First, as $e^{-\beta F} = Z$, we see $F = -kT\ln Z$.
Next, consider the average energy U. This is the expectation value of the energy, and will be the internal energy of our system (thus the choice to name it U):

$$U = \langle E \rangle = \sum_s E_s P_s = \frac{1}{Z}\sum_s E_s e^{-\beta E_s}$$

Now, note that

$$\frac{\partial Z}{\partial \beta} = -\sum_s E_s e^{-\beta E_s}$$

so that

$$U = -\frac{1}{Z}\frac{\partial Z}{\partial \beta} = -\frac{\partial \ln Z}{\partial \beta}$$

or, using the chain rule to rewrite β in terms of T, we have

$$U = kT^2\frac{\partial \ln Z}{\partial T}$$

Consider the second derivative of $\ln Z$ with respect to β:

$$\frac{\partial^2 \ln Z}{\partial \beta^2} = \frac{1}{Z}\frac{\partial^2 Z}{\partial \beta^2} - \left(\frac{1}{Z}\frac{\partial Z}{\partial \beta}\right)^2 = \langle E^2 \rangle - \langle E \rangle^2$$

the variance of the energy.
Now, the heat capacity at constant volume is

$$C_V = \left(\frac{\partial U}{\partial T}\right)_V = -k\beta^2\frac{\partial U}{\partial \beta} = k\beta^2\frac{\partial^2 \ln Z}{\partial \beta^2}$$

Using the latter expression, we can find the dimensionless specific heat capacity (at constant volume):

$$\hat{c}_V = \frac{C_V}{Nk} = \frac{\beta^2}{N}\left(\langle E^2 \rangle - \langle E \rangle^2\right)$$

Since $\langle E^2 \rangle - \langle E \rangle^2 \ge 0$, the heat capacity is always non-negative.
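These identities are easy to check numerically. The sketch below (an arbitrary made-up spectrum, in units where k = 1) compares the directly computed mean and variance of the energy against central-difference derivatives of ln Z with respect to β:

```python
from math import exp, log

def stats(levels, beta):
    """Return (mean energy, energy variance, ln Z) for a discrete spectrum."""
    Z = sum(exp(-beta * E) for E in levels)
    probs = [exp(-beta * E) / Z for E in levels]
    U = sum(p * E for p, E in zip(probs, levels))
    var = sum(p * E * E for p, E in zip(probs, levels)) - U * U
    return U, var, log(Z)

levels = [0.0, 0.3, 1.0, 2.2]   # hypothetical energy spectrum
beta, h = 0.8, 1e-5

U, var, lnZ = stats(levels, beta)

# U = -d(ln Z)/d(beta), by central difference
dlnZ = (stats(levels, beta + h)[2] - stats(levels, beta - h)[2]) / (2 * h)

# <E^2> - <E>^2 = d^2(ln Z)/d(beta)^2; non-negative, so C_V = k beta^2 var >= 0
d2lnZ = (stats(levels, beta + h)[2] - 2 * lnZ
         + stats(levels, beta - h)[2]) / h**2

print(U, -dlnZ)     # agree
print(var, d2lnZ)   # agree
```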

One small step

July 20, 2009

Today is the 40th anniversary of the Apollo 11 moon landing.

Monday Math 81

July 20, 2009

Recall from here that
Letting and solving for the sum, we see
Now, , so the above becomes

Now, along with our formula that
for positive integer n (see here), we can derive a couple of interesting results.

First, consider . Expanding the sine in its Maclaurin series:

Now, suppose we instead expanded the denominator of the integrand of I(x) via a geometric series as here:
Via multiple integration by parts or a table of integrals,

and thus
And we also can get:

Physics Friday 81

July 17, 2009

For a thermodynamic system consisting of n different species of particles, we can define for each species a chemical potential, which is the increase in internal energy of the system with the addition of a single particle of that species, with volume, entropy, and the numbers of particles of the other species held constant. Thus, for the i-th species, the chemical potential $\mu_i$ is defined as

$$\mu_i = \left(\frac{\partial U}{\partial N_i}\right)_{S,V,N_{j\ne i}}$$

To change from constant volume and entropy to constant pressure and temperature, we use the Gibbs free energy $G = U + PV - TS$, so that

$$\mu_i = \left(\frac{\partial G}{\partial N_i}\right)_{T,P,N_{j\ne i}}$$

(we have exact differentials, so one can use the chain rule to prove the above).

For an ideal gas of a single species, the internal energy U at constant volume is $U = \hat{c}_V NkT$, where $\hat{c}_V = \frac{f}{2}$ is the dimensionless specific heat capacity at constant volume, and f is the number of available degrees of freedom for a molecule of the gas (see here). We previously found that (for high enough temperatures) the entropy of an ideal gas can be expressed as $S = Nk\ln\left(\frac{\phi V T^{\hat{c}_V}}{N}\right)$ for some positive undetermined constant φ. To rewrite this in terms of pressure instead of volume, we use the ideal gas law $PV = NkT$, so that $\frac{V}{N} = \frac{kT}{P}$, and thus

$$S = Nk\ln\left(\frac{\phi k T^{\hat{c}_P}}{P}\right)$$

where $\hat{c}_P = \hat{c}_V + 1$ is the dimensionless specific heat capacity at constant pressure. Then

$$G = U + PV - TS = \hat{c}_V NkT + NkT - NkT\ln\left(\frac{\phi k T^{\hat{c}_P}}{P}\right) = NkT\left[\hat{c}_P - \ln\left(\frac{\phi k T^{\hat{c}_P}}{P}\right)\right]$$

and so

$$\mu = \left(\frac{\partial G}{\partial N}\right)_{T,P} = kT\left[\hat{c}_P - \ln\left(\frac{\phi k T^{\hat{c}_P}}{P}\right)\right] = \frac{G}{N}$$

Thus, for a single-species ideal gas, G=μN.
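As a numerical illustration (a sketch in units where k = 1, with arbitrary illustrative values for φ, f, N, T, and P), adding one particle at constant T and P increases G by exactly G/N, confirming G = μN:

```python
from math import log

# Units where k = 1; phi, f, N, T, P are arbitrary illustrative values
k, phi, f = 1.0, 2.3, 3
cV = f / 2                  # dimensionless specific heat at constant volume
cP = cV + 1                 # ... at constant pressure

def G(N, T, P):
    """Gibbs free energy G = U + PV - TS for a single-species ideal gas,
    using S = N k ln(phi k T**cP / P)."""
    U = cV * N * k * T
    PV = N * k * T          # ideal gas law
    S = N * k * log(phi * k * T**cP / P)
    return U + PV - T * S

N, T, P = 1000.0, 300.0, 101.3
mu = G(N + 1, T, P) - G(N, T, P)   # energy cost of one added particle
print(mu * N, G(N, T, P))          # mu * N equals G
```

G here is linear in N at fixed T and P, which is exactly why the finite difference recovers G/N.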

To the Moon

July 16, 2009

40 years ago today, the Apollo 11 spacecraft was launched from Cape Kennedy.

Monday Math 80

July 13, 2009

And now, the answer to last week’s challenge: here are two solutions, one using the polylogarithm, and one without.

Physics Friday 80

July 10, 2009

In several past posts, we explored the properties of a classical ideal gas. Next, we consider the entropy of an ideal gas. Using the differential dS and the chain rule, we can express it in terms of the temperature and volume differentials as

$$dS = \left(\frac{\partial S}{\partial T}\right)_V dT + \left(\frac{\partial S}{\partial V}\right)_T dV$$

(with volume and temperature both possibly functions of particle number N).
For the first term on the right, we recall our discussion of heat capacity, and that $C_V = T\left(\frac{\partial S}{\partial T}\right)_V$. This means our entropy expression becomes

$$dS = \frac{C_V}{T}dT + \left(\frac{\partial S}{\partial V}\right)_T dV$$

For the latter, we use one of the Maxwell relations, specifically the one (derived from the mixed second partial derivatives of the Helmholtz free energy) which states:

$$\left(\frac{\partial S}{\partial V}\right)_T = \left(\frac{\partial P}{\partial T}\right)_V$$

Now, solving the ideal gas equation for pressure,

$$P = \frac{NkT}{V}$$

we can take the partial derivative to get

$$\left(\frac{\partial P}{\partial T}\right)_V = \frac{Nk}{V}$$

Applying these to the entropy differential, we get:

$$dS = \frac{C_V}{T}dT + \frac{Nk}{V}dV$$

Now, recalling our definition of the dimensionless specific heat capacity $\hat{c}_V = \frac{C_V}{Nk}$, we can change the above to

$$dS = Nk\left(\hat{c}_V\frac{dT}{T} + \frac{dV}{V}\right)$$

Integrating this, we get:

$$S = Nk\left(\hat{c}_V\ln T + \ln V\right) + C(N)$$

where C(N) is our constant of integration, an as yet unknown function of the particle number. Using our properties of logarithms, this may be restated as

$$S = Nk\ln\left(\frac{V T^{\hat{c}_V}}{f(N)}\right)$$

where the function f(N) has the same units as $V T^{\hat{c}_V}$.

Now, we need to use the fact that entropy is an extensive property. This means that if the extensive parameters, here V and N, are scaled by a constant, the entropy will be multiplied by that same constant:

$$S(cV, T, cN) = cS(V, T, N)$$

Plugging this into our above, we get:

$$cNk\ln\left(\frac{cV T^{\hat{c}_V}}{f(cN)}\right) = cNk\ln\left(\frac{V T^{\hat{c}_V}}{f(N)}\right)$$

which, solving for f(cN), tells us $f(cN) = cf(N)$. This, in turn, tells us that f(N) must be a constant multiple of N. Thus

$$S = Nk\ln\left(\frac{\phi V T^{\hat{c}_V}}{N}\right)$$

where φ is a (positive) constant with the same units as $\frac{N}{V T^{\hat{c}_V}}$.
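The extensivity that forced this form can be confirmed numerically. In this sketch (arbitrary illustrative values for φ and the dimensionless specific heat, in units where k = 1), scaling V and N by a common factor c scales S by c:

```python
from math import log

k, phi, cV = 1.0, 0.04, 1.5   # arbitrary illustrative values (k = 1 units)

def S(N, V, T):
    """Entropy of the classical ideal gas: S = N k ln(phi V T**cV / N)."""
    return N * k * log(phi * V * T**cV / N)

N, V, T, c = 2.0, 100.0, 400.0, 7.0
print(S(c * N, c * V, T))   # scaling the extensive parameters by c...
print(c * S(N, V, T))       # ...multiplies the entropy by c
```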

This is the limit of where classical thermodynamics takes us. However, we note that this formula cannot be valid for lower temperatures: the above, for any value of φ, gives entropy going to negative infinity as the temperature approaches absolute zero, with zero entropy at some positive temperature $T_0 = \left(\frac{N}{\phi V}\right)^{1/\hat{c}_V}$.

For a monatomic ideal gas (so $\hat{c}_V = \frac{3}{2}$), quantum mechanical arguments can be used to give a value for φ which yields an entropy equation valid for a wide range of states in the classical regime. The result is the Sackur-Tetrode equation.