Archive for April, 2009

Monday Math 69

April 27, 2009

Consider a set of n positive real numbers , with for all i=1,2,…,n. Show then that the inequality
holds.
Solution:


Physics Friday 69

April 24, 2009

Consider a particle of mass m moving freely in a rectangular box, and undergoing perfectly elastic collisions with the walls of the box. Let us consider the motion in the x direction. Let the x-component of the particle’s velocity be vx, and the length of the box in the x direction be Lx. The time it takes to travel from one of the x-bounding walls to the other is \frac{L_x}{|v_x|}, and so the number of collisions with a particular x-bounding wall per unit time is \frac{|v_x|}{2L_x}. When it collides with the wall, the x-component of the velocity changes from vx to –vx, and so, due to conservation of momentum, it imparts a momentum of 2m|vx| to the wall. The product of the momentum imparted per collision and the rate of collisions per unit time gives the (time-)average force exerted on the wall by the particle:
\bar{F}_x=2m|v_x|\cdot\frac{|v_x|}{2L_x}=\frac{mv_x^2}{L_x}.
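To get a feel for the numbers (a sketch of my own; the particle, speed, and box size are arbitrary, made-up values), one can evaluate this average force for a single helium atom at a roughly thermal speed in a 10 cm box:

# Average force on one wall from a single helium atom bouncing in a 0.1 m box.
m = 6.6e-27      # mass of a helium atom in kg
v_x = 1100.0     # x-component of velocity in m/s (roughly thermal at room temperature)
L_x = 0.1        # box length in the x direction, in meters

collision_rate = abs(v_x) / (2 * L_x)            # collisions with one wall per second
impulse_per_collision = 2 * m * abs(v_x)         # momentum transferred per collision
F_avg = impulse_per_collision * collision_rate   # equals m * v_x**2 / L_x

print(F_avg)     # about 8e-20 newtons, tiny for a single atom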

Now, let us replace the particle with N identical non-interacting particles (an ideal gas). Then the force on one of the x-bounding walls is
F_x=\frac{Nm\overline{v_x^2}}{L_x}, where the bar represents the average over the N particles. Dividing this by the area A of the wall gives us the pressure:
P=\frac{F_x}{A}=\frac{Nm\overline{v_x^2}}{AL_x}=\frac{Nm\overline{v_x^2}}{V},
where V=AL_x is the volume of the box.
If our gas is isotropic, then \overline{v_x^2}=\overline{v_y^2}=\overline{v_z^2}=\frac{1}{3}\overline{v^2}=\frac{1}{3}v_{rms}^2, where v_{rms}=\sqrt{\overline{v^2}} is the root-mean-square speed. This means that the pressure is the same on all sides of the box, and
P=\frac{Nmv_{rms}^2}{3V}.

Now, the Maxwell speed distribution tells us that for an ideal gas molecule, the average kinetic energy is given by \overline{KE}=\frac{1}{2}mv_{rms}^2=\frac{3}{2}kT. This allows us to rewrite the above equation for pressure as
P=\frac{NkT}{V}, or, multiplying both sides by the volume,
PV=NkT.

This is the ideal gas law. Often, one is introduced to the law in terms of the number of moles, n, instead of the number of molecules N. As N=nN_A, where NA is Avogadro’s number, this gives us the form more commonly seen in introductory courses:
PV=nRT,
where R=N_Ak is the ideal gas constant.
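As a quick numerical illustration (my own sketch, not part of the original post), the following Python snippet evaluates both forms of the ideal gas law for one mole of gas in a one-cubic-meter box at 300 K, using values from scipy.constants; the choice of temperature and volume is arbitrary.

# Compare PV = NkT with PV = nRT for one mole of gas at 300 K in 1 m^3.
from scipy.constants import k, N_A, R

T = 300.0        # temperature in kelvin
V = 1.0          # volume in cubic meters
n = 1.0          # amount of gas in moles
N = n * N_A      # number of molecules

P_from_molecules = N * k * T / V   # PV = NkT form
P_from_moles = n * R * T / V       # PV = nRT form, with R = N_A * k

print(P_from_molecules, P_from_moles)   # both give the same pressure (about 2.5 kPa)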

Monday Math 68

April 20, 2009

Show that for positive real numbers x_1,x_2,\ldots,x_n such that x_1x_2\cdots{x_n}=1, we have
(1+x_1)(1+x_2)\cdots(1+x_n)\ge2^n.

Consider the average of the positive numbers x and y: \frac{x+y}{2}. The AM-GM inequality tells us that
\frac{x+y}{2}\ge\sqrt{xy},
and so x+y\ge2\sqrt{xy} for any positive numbers x and y. Letting x=1 and y=x_i, we then see that 1+x_i\ge2\sqrt{x_i} for all i=1,2,…,n. Multiplying these n inequalities together:
(1+x_1)(1+x_2)\cdots(1+x_n)\ge2^n\sqrt{x_1x_2\cdots{x_n}}=2^n,
where the last equality is because x_1x_2\cdots{x_n}=1.
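As a numerical spot-check of this inequality (my own sketch; the sampling ranges and tolerance below are arbitrary choices), one can rescale random positive numbers so that their product is 1 and verify the bound in Python:

# Check that if x_1*...*x_n = 1 with all x_i > 0, then (1+x_1)*...*(1+x_n) >= 2^n.
import random
from math import prod

for _ in range(10000):
    n = random.randint(2, 8)
    xs = [random.uniform(0.01, 10.0) for _ in range(n)]
    scale = prod(xs) ** (1.0 / n)
    xs = [x / scale for x in xs]           # rescale so the product is 1 (up to rounding)
    lhs = prod(1 + x for x in xs)
    assert lhs >= 2 ** n - 1e-9            # the AM-GM-based bound, with a small tolerance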

Thinking of Robots

April 19, 2009

I finally got a chance to see “The Animatrix” this weekend. One of the thoughts that came to mind when watching both parts of “The Second Renaissance” is that, despite the “humans mistreat their robots, fail to recognise them as sentient, etc., until the machines fight back” plot being one of the more common forms of cybernetic revolt in fiction, modern experience indicates that it is an increasingly unlikely scenario. In particular, this story mode fails to take into account the strength of the human tendency toward anthropomorphism.
We have a strong tendency to read human-like thoughts and motivations into non-intelligent creatures, and even inanimate objects. Robots don’t appear to be any different; see this Washington Post article from about two years ago, about robots used by military troops, and the way the troops treat these machines. An example:

Ted Bogosh recalls one day in Camp Victory, near Baghdad, when he was a Marine master sergeant running the robot repair shop.
That day, an explosive ordnance disposal technician walked through his door. The EODs, as they are known, are the people who — with their robots — are charged with disabling Iraq’s most virulent scourge, the roadside improvised explosive device. In this fellow’s hands was a small box. It contained the remains of his robot. He had named it Scooby-Doo.
“There wasn’t a whole lot left of Scooby,” Bogosh says. The biggest piece was its 3-by-3-by-4-inch head, containing its video camera. On the side had been painted “its battle list, its track record. This had been a really great robot.”
The veteran explosives technician looming over Bogosh was visibly upset. He insisted he did not want a new robot. He wanted Scooby-Doo back.
“Sometimes they get a little emotional over it,” Bogosh says. “Like having a pet dog. It attacks the IEDs, comes back, and attacks again. It becomes part of the team, gets a name. They get upset when anything happens to one of the team. They identify with the little robot quickly. They count on it a lot in a mission.”
The bots even show elements of “personality,” Bogosh says. “Every robot has its own little quirks. You sort of get used to them. Sometimes you get a robot that comes in and it does a little dance, or a karate chop, instead of doing what it’s supposed to do.” The operators “talk about them a lot, about the robot doing its mission and getting everything accomplished.” He remembers the time “one of the robots happened to get its tracks destroyed while doing a mission.” The operators “duct-taped them back on, finished the mission and then brought the robot back” to a hero’s welcome.
Near the Tigris River, operators even have been known to take their bot fishing. They put a fishing rod in its claw and retire back to the shade, leaving the robot in the sun.

We’re far more likely to see awareness where it’s not than to fail to see awareness where it is. (Once again, another area where humans are far more likely to make a type I error than a type II error, like this, this, and this.)
If (or when) we develop AI, it’s likely that some will be mistreated by some people; consider some of the truly unnecessary cruelty to animals that still goes on. However, it’s unlikely to be the sort of systematic, widespread thing we see B166er and its fellows suffer under. Perhaps because of the long exploration of such scenarios in fiction, people have been considering for years (with varying degrees of earnestness) the legal and ethical ramifications of machine intelligence; see here, here, and here (this last is a bit out of date, being over 20 years old). So, despite being a pessimist, I strongly suspect that anthropomorphism will win over anthropodenial, and that most of us will welcome our new robot overlords friends. 😉

Physics Friday 68

April 17, 2009

Maxwell Speed Distribution

Previously, we described the Boltzmann factor. Multiplying by a normalization constant to give an actual probability, we obtain the Boltzmann distribution:
P(E_i)=\frac{e^{-\frac{E_i}{kT}}}{\sum_j{e^{-\frac{E_j}{kT}}}}.
Now, suppose that we have a system of tiny non-interacting particles (an ideal gas), with the energy being purely kinetic. Thus, the probability that a particle of mass m has a speed v is proportional to e^{-\frac{mv^2}{2kT}}.
However, in 3-dimensional velocity space, the velocity vectors that give the same speed form a sphere with radius v; the higher the speed v, the larger the number of possible velocity vectors there are. Thus, the distribution of speeds is also proportional to 4\pi{v^2}, the surface area of the sphere in velocity space. Combining, we have
f(v)\propto4\pi{v^2}e^{-\frac{mv^2}{2kT}}. Normalizing, we have
\int_0^\infty4\pi{v^2}e^{-\frac{mv^2}{2kT}}\,dv=4\pi\cdot\frac{1}{4}\sqrt{\pi\left(\frac{2kT}{m}\right)^3}=\left(\frac{2\pi{kT}}{m}\right)^{3/2}
(where we’ve used the fact that \int_0^\infty{x^2}e^{-ax^2}\,dx=\frac{1}{4}\sqrt{\frac{\pi}{a^3}}),
so
f(v)=4\pi\left(\frac{m}{2\pi{kT}}\right)^{3/2}v^2e^{-\frac{mv^2}{2kT}}.
This is the Maxwell speed distribution. Now, we can find three important speeds with this:
I. Most probable speed:
This is the vp>0 where \left.\frac{df}{dv}\right|_{v=v_p}=0. The derivative is
\frac{df}{dv}=4\pi\left(\frac{m}{2\pi{kT}}\right)^{3/2}\left(2v-\frac{mv^3}{kT}\right)e^{-\frac{mv^2}{2kT}},
and this is zero for vp>0 when
v_p=\sqrt{\frac{2kT}{m}}.
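One can also check this critical point symbolically (a sketch of my own using sympy; the symbol names, including kT as a single symbol, are my own choices):

# Symbolically confirm that the Maxwell speed distribution peaks at sqrt(2*kT/m).
import sympy as sp

v, m, kT = sp.symbols('v m kT', positive=True)
f = 4 * sp.pi * (m / (2 * sp.pi * kT)) ** sp.Rational(3, 2) * v**2 * sp.exp(-m * v**2 / (2 * kT))

critical_points = sp.solve(sp.diff(f, v), v)
print(critical_points)   # the positive root is sqrt(2*kT/m), the most probable speed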

II. Mean speed:
\bar{v}=\int_0^\infty{v}f(v)\,dv=4\pi\left(\frac{m}{2\pi{kT}}\right)^{3/2}\int_0^\infty{v^3}e^{-\frac{mv^2}{2kT}}\,dv=\sqrt{\frac{8kT}{\pi{m}}}.

III. Root-mean-square speed:
v_{rms}=\sqrt{\overline{v^2}}=\sqrt{\int_0^\infty{v^2}f(v)\,dv}=\sqrt{\frac{3kT}{m}}.

The last of these gives us the average kinetic energy of ideal gas molecules:
\overline{KE}=\frac{1}{2}m\overline{v^2}=\frac{1}{2}mv_{rms}^2=\frac{3}{2}kT.
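These results can be cross-checked numerically (my own sketch, not from the original post; nitrogen at 300 K is an arbitrary example) by integrating the distribution directly:

# Compare direct integrals of the Maxwell distribution with the closed forms
# v_mean = sqrt(8kT/(pi*m)) and v_rms = sqrt(3kT/m), for N2 at 300 K.
import numpy as np
from scipy.constants import k
from scipy.integrate import quad

m, T = 28 * 1.66054e-27, 300.0   # approximate N2 molecular mass (kg) and temperature (K)
a = m / (2 * k * T)
f = lambda v: 4 * np.pi * (a / np.pi) ** 1.5 * v**2 * np.exp(-a * v**2)

norm = quad(f, 0, np.inf)[0]                          # should be very close to 1
v_mean_num = quad(lambda v: v * f(v), 0, np.inf)[0]
v_rms_num = np.sqrt(quad(lambda v: v**2 * f(v), 0, np.inf)[0])

print(norm)
print(v_mean_num, np.sqrt(8 * k * T / (np.pi * m)))   # mean speed, about 475 m/s
print(v_rms_num, np.sqrt(3 * k * T / m))              # rms speed, about 517 m/s
print(0.5 * m * v_rms_num**2, 1.5 * k * T)            # average kinetic energy, (3/2)kT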

Monday Math 67

April 13, 2009

Suppose that x, y, and z are positive real numbers with x+y+z=1. Then find (without calculus) the minimum value of \frac{1}{x}+\frac{1}{y}+\frac{1}{z}.

Here, we can use the AM-GM inequality:
\frac{\frac{1}{x}+\frac{1}{y}+\frac{1}{z}}{3}\ge\sqrt[3]{\frac{1}{xyz}}
and
\frac{x+y+z}{3}\ge\sqrt[3]{xyz},
but x+y+z=1, so
\sqrt[3]{xyz}\le\frac{1}{3},
and so
\sqrt[3]{\frac{1}{xyz}}\ge3
\frac{\frac{1}{x}+\frac{1}{y}+\frac{1}{z}}{3}\ge\sqrt[3]{\frac{1}{xyz}}\ge3,
and so \frac{1}{x}+\frac{1}{y}+\frac{1}{z}\ge9, with equality when x=y=z=\frac{1}{3}. Thus, the minimum value is 9.
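A quick numerical spot-check of this minimum (my own sketch; the random sampling is just one arbitrary way to probe the constraint region):

# Sample x, y, z > 0 with x + y + z = 1 and track the smallest value of 1/x + 1/y + 1/z.
import random

best = float('inf')
for _ in range(100000):
    x = random.uniform(0.001, 0.998)
    y = random.uniform(0.001, 0.999 - x)
    z = 1 - x - y
    best = min(best, 1/x + 1/y + 1/z)

print(best)   # stays at or above 9, approaching 9 near x = y = z = 1/3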

Physics Friday 67

April 10, 2009

The Boltzmann Factor

Consider a macroscopic physical system, made up of a very large number of microscopic parts; the domain of statistical mechanics. In particular, we note that most macroscopic states correspond to many possible microstates, and that a macrostate becomes more likely the more distinct microstates correspond to it. This insight is encapsulated in the Second Law of Thermodynamics, which can be stated as: An isolated physical system will tend toward the allowable macrostate with the largest number of possible microstates. This form is reflected in the Boltzmann definition of entropy:
S=k\ln{N}, where S is the entropy of a macrostate, N is the number of microstates, and k is Boltzmann’s constant. In particular, this helps us to express entropy as a function of energy, S(E).

Now, suppose you have two systems, and place them into contact; if the systems have energies E1 and E2, these energies may change, but their sum, the total energy E=E1+E2, is constant. While the two individual systems are not isolated systems, the larger system of their combination is, and so the second law can be applied. If system 1 and system 2 are in macrostates with N1 and N2 microstates, respectively, then the combination system has N=N1N2 microstates, so the total entropy is S=k\ln{N}=k(\ln{N_1}+\ln{N_2})=S_1+S_2; this should be maximized. Now, taking these as a function of energy, and noting that as E=E1+E2 is conserved, we have
S=S_1(E_1)+S_2(E_2)=S_1(E_1)+S_2(E-E_1),
so that we have the entropy of the total system as a function of the energy of the first sub-system. Now, the maximization of entropy tells us that \frac{\partial{S}}{\partial{E_1}}=0. This, in turn, tells us that
0=\frac{\partial{S}}{\partial{E_1}}=\frac{\partial{S_1}}{\partial{E_1}}(E_1)+\frac{\partial{S_2}}{\partial{E_1}}(E-E_1)
or, noting that \frac{\partial}{\partial{E_1}}S_2(E-E_1)=-\frac{\partial{S_2}}{\partial{E_2}} by the chain rule,
\frac{\partial{S_1}}{\partial{E_1}}=\frac{\partial{S_2}}{\partial{E_2}}.
Thus, we expect that \frac{\partial{S}}{\partial{E}} is equal to some property which must be the same for two systems in equilibrium. Considering the older thermodynamic definition of entropy given by dS=\frac{dQ_{rev}}{T}, we then see that \frac{\partial{S}}{\partial{E}}=\frac{1}{T}, the reciprocal of temperature, and the above confirms that systems in thermodynamic equilibrium have the same temperature (see here for a previous post using this relation). In fact, this can be used as a definition of temperature (see here).

So now, let us consider a system (system 1), in contact with a heat bath (system 2): a system large enough that any heat exchanges will not significantly change its temperature, T2; namely, that E_1\ll{E}=E_1+E_2, and S2(E2) is a smooth function.

Now, the entropy of our heat bath is
S_2(E_2)=S_2(E-E_1)=k\ln{N_2},
where N2 is the number of microstates. From our above assumptions, we can approximate S2(E−E1) by its Taylor series to first order about E:
S_2(E-E_1)\approx{S_2}(E)-\frac{\partial{S_2}}{\partial{E}}(E)\,E_1. However, we note from our previous work that the derivative in the linear term is simply \frac{\partial{S_2}}{\partial{E}}=\frac{1}{T}, and so we get
k\ln{N_2}\approx{S_2}(E)-\frac{E_1}{T}. Solving this for the number of microstates, we get:
N_2\approx e^{\frac{S_2(E)}{k}}e^{-\frac{E_1}{kT}}.
Thus, the probability of any microstate of system 1 with energy E1 will be proportional to the above, and thus, as the first exponential is a constant for all possible states of our system, the probability is proportional to e^{-\frac{E_1}{kT}}; this is the Boltzmann factor, which is key to deriving a number of statistics used in both classical and quantum statistical mechanics.
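To make the Boltzmann factor concrete, here is a small sketch of my own (the three energy levels and the temperature are arbitrary, made-up values) computing the relative populations of a toy three-level system:

# Boltzmann-factor populations for a toy three-level system at 300 K.
import numpy as np
from scipy.constants import k, eV

T = 300.0
energies = np.array([0.0, 0.01, 0.1]) * eV   # level energies in joules

weights = np.exp(-energies / (k * T))        # Boltzmann factors e^(-E/kT)
probs = weights / weights.sum()              # normalized probabilities
print(probs)   # the lowest level is most populated; the 0.1 eV level is rare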

Monday Math 66

April 6, 2009

Today, I present a pair of useful inequalities:

1. Chebyshev’s inequality:
Given two sequences of real numbers a_1,a_2,\ldots,a_n and b_1,b_2,\ldots,b_n, with at least one of them consisting entirely of positive numbers,
A. If a_1\ge{a_2}\ge\cdots\ge{a_n} and b_1\ge{b_2}\ge\cdots\ge{b_n}, then
\frac{a_1+a_2+\cdots+a_n}{n}\cdot\frac{b_1+b_2+\cdots+b_n}{n}\le\frac{a_1b_1+a_2b_2+\cdots+a_nb_n}{n}
(the product of the averages is less than or equal to the average of the products).
B. If a_1\ge{a_2}\ge\cdots\ge{a_n} and b_1\le{b_2}\le\cdots\le{b_n}, then
\frac{a_1+a_2+\cdots+a_n}{n}\cdot\frac{b_1+b_2+\cdots+b_n}{n}\ge\frac{a_1b_1+a_2b_2+\cdots+a_nb_n}{n}
(the product of the averages is greater than or equal to the average of the products).
In both cases, equality holds if and only if either a_1=a_2=\cdots=a_n or b_1=b_2=\cdots=b_n. (A proof may be found here.)

2. The inequality of arithmetic and geometric means (AM-GM inequality):
For any list of non-negative real numbers, the arithmetic mean is greater than or equal to the geometric mean, with equality only when all of the numbers are the same:
For x_1,x_2,\ldots,x_n\ge0,
\frac{x_1+x_2+\cdots+x_n}{n}\ge\sqrt[n]{x_1x_2\cdots{x_n}}, with equality only when x_1=x_2=\cdots=x_n.
(Several proofs may be found here.)
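As a quick numerical illustration of both inequalities (my own sketch; the ranges and number of trials are arbitrary), similarly sorted sequences and random non-negative lists can be tested in Python:

# Spot-check Chebyshev's inequality (similarly sorted case) and the AM-GM inequality.
import random
from math import prod

for _ in range(1000):
    n = random.randint(2, 6)
    a = sorted(random.uniform(0, 10) for _ in range(n))
    b = sorted(random.uniform(0, 10) for _ in range(n))
    avg_of_products = sum(ai * bi for ai, bi in zip(a, b)) / n
    product_of_avgs = (sum(a) / n) * (sum(b) / n)
    assert product_of_avgs <= avg_of_products + 1e-9   # Chebyshev, case A

    x = [random.uniform(0, 10) for _ in range(n)]
    assert sum(x) / n >= prod(x) ** (1 / n) - 1e-9     # AM-GM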

 

For an example of usage, consider trying to prove that for x,y,z>0,
\frac{x}{y+z}+\frac{y}{x+z}+\frac{z}{x+y}\ge\frac{3}{2}.
First, we note that \frac{x}{y+z}+\frac{y}{x+z}+\frac{z}{x+y}=x\cdot\frac{1}{y+z}+y\cdot\frac{1}{x+z}+z\cdot\frac{1}{x+y}. Now, due to the symmetry of the above (with respect to permutation of x, y, z), we can assume without loss of generality that x\ge{y}\ge{z}. Then it is obvious that \frac{1}{y+z}\ge\frac{1}{x+z}\ge\frac{1}{x+y}, and thus by Chebyshev’s inequality,
\frac{x\cdot\frac{1}{y+z}+y\cdot\frac{1}{x+z}+z\cdot\frac{1}{x+y}}{3}\ge\frac{x+y+z}{3}\cdot\frac{\frac{1}{y+z}+\frac{1}{x+z}+\frac{1}{x+y}}{3}
and so
\frac{x}{y+z}+\frac{y}{x+z}+\frac{z}{x+y}\ge\frac{x+y+z}{3}\left(\frac{1}{y+z}+\frac{1}{x+z}+\frac{1}{x+y}\right).
Now let us consider the second term in the right-hand-side product:
\frac{1}{y+z}+\frac{1}{x+z}+\frac{1}{x+y}, and the AM-GM inequality tells us:
\frac{(y+z)+(x+z)+(x+y)}{3}\ge\sqrt[3]{(y+z)(x+z)(x+y)} and \frac{\frac{1}{y+z}+\frac{1}{x+z}+\frac{1}{x+y}}{3}\ge\frac{1}{\sqrt[3]{(y+z)(x+z)(x+y)}}.
So \frac{1}{y+z}+\frac{1}{x+z}+\frac{1}{x+y}\ge\frac{9}{(y+z)+(x+z)+(x+y)}=\frac{9}{2(x+y+z)},
and so
\frac{x}{y+z}+\frac{y}{x+z}+\frac{z}{x+y}\ge\frac{x+y+z}{3}\cdot\frac{9}{2(x+y+z)}=\frac{3}{2}.
Q.E.D.
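A brief numerical spot-check of the inequality just proven (my own sketch; the sampling range is arbitrary):

# Sample random positive x, y, z and confirm x/(y+z) + y/(x+z) + z/(x+y) >= 3/2.
import random

worst = float('inf')
for _ in range(100000):
    x, y, z = (random.uniform(0.001, 10.0) for _ in range(3))
    s = x / (y + z) + y / (x + z) + z / (x + y)
    worst = min(worst, s)

print(worst)   # never drops below 1.5; equality is approached when x = y = z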

Physics Friday 66

April 3, 2009

Last time, we showed that a plane loop carrying current I placed in a uniform magnetic field \vec{B} experiences a torque \vec{\tau}=\vec{\mu}\times\vec{B}, where \vec{\mu}=I\vec{A} is the magnetic moment of the loop, and \vec{A} is the area vector for the loop, as given using the right-hand rule.

Now, let us consider an electron moving in a circular orbit of radius r and angular velocity ω0. Then the period of the orbit is \frac{2\pi}{\omega_0}, and so the charge passing through any point on the orbit per unit time is \frac{e\omega_0}{2\pi}, and so we can treat the circular orbit as a current loop of current I=-\frac{e\omega_0}{2\pi} (the minus sign indicates that the current is opposite in direction to the motion of the electron, as the electron has a negative charge). Thus, the orbit has a magnetic moment, \mu=IA=-\frac{e\omega_0}{2\pi}\cdot\pi{r^2}=-\frac{e\omega_0r^2}{2}.

Unlike the previous case of a current loop, here we also have to consider the angular momentum of our electron’s orbit. As our orbit is circular, the angular momentum is just L=m_er^2\omega_0, where me is the mass of the electron. Thus, we can rewrite the magnetic moment in terms of the angular momentum:
\vec{\mu}=-\frac{e}{2m_e}\vec{L},
and the magnetic moment is proportional to the angular momentum of the orbit, with the constant of proportionality dependent only on the properties of the electron.

Note here that since the magnetic moment is proportional to the angular momentum, the torque due to an external magnetic field, \vec{\tau}=\vec{\mu}\times\vec{B}, is perpendicular to both \vec{L} and \vec{B}. Thus, as seen in previous work, the orbit, if not perpendicular to the magnetic field, will precess about the field. This is an example of Larmor precession, which occurs whenever a magnetic moment proportional to an angular momentum is exposed to an external magnetic field. The frequency of this precession, the Larmor frequency, for this problem is \omega_L=\frac{eB}{2m_e}. In the more general form, the constant of proportionality between the magnetic moment and the angular momentum is called the gyromagnetic ratio (or sometimes the magnetogyric ratio), and is usually denoted by γ: \vec{\mu}=\gamma\vec{L}. Then the Larmor frequency is \omega_L=|\gamma|B. These frequencies, when applied to a charged particle with spin, are important in spin transitions, and play an important role in systems such as nuclear magnetic resonance.
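As a small numerical illustration (my own sketch, using a 1 tesla field as an arbitrary example), the classical Larmor angular frequency of an orbiting electron can be computed directly from the constants:

# Larmor angular frequency omega_L = e*B/(2*m_e) for an electron orbit in a 1 T field.
from scipy.constants import e, m_e

B = 1.0                   # magnetic field in tesla
gamma = e / (2 * m_e)     # magnitude of the orbital gyromagnetic ratio
omega_L = gamma * B       # Larmor angular frequency in rad/s

print(omega_L)            # roughly 8.8e10 rad/s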