Part 9: Maxwell’s Laws

Summing up so far, electrostatics uses Gauss’ law, which in differential form states

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}.$$

For magnetostatics, we had $\nabla \cdot \mathbf{B} = 0$ (Gauss’ law of magnetism, equivalent to saying that magnetic monopoles do not exist), and Ampère’s Law, which in differential form states

$$\nabla \times \mathbf{B} = \mu_0 \mathbf{J}.$$

Lastly, we began electrodynamics with Faraday’s Law, which in differential form is

$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$$

(we have used SI units for all of these).

However, these four equations, as written here, are inconsistent; one cannot expect the static equations to hold for dynamic situations. Recognizing this, and providing the correction, was James Clerk Maxwell’s great achievement.

Specifically, the problem is in Ampère’s law. Taking the divergence of both sides,

$$\nabla \cdot (\nabla \times \mathbf{B}) = \mu_0 \nabla \cdot \mathbf{J},$$

but the divergence of a curl is always zero, so the left-hand side must be zero, and Ampère’s Law as formulated for magnetostatics requires $\nabla \cdot \mathbf{J} = 0$. But by the continuity equation, $\nabla \cdot \mathbf{J} = -\frac{\partial \rho}{\partial t}$; so this condition only holds when the charge density is fixed.
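As a quick sanity check, the identity that the divergence of a curl vanishes can be verified symbolically; here is a sketch in Python using `sympy`, with `Bx`, `By`, `Bz` standing in for the components of any smooth field (the names are just placeholders):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
# a generic smooth vector field B(x, y, z, t)
Bx, By, Bz = (sp.Function(name)(x, y, z, t) for name in ('Bx', 'By', 'Bz'))

# curl of B, component by component
curl = (sp.diff(Bz, y) - sp.diff(By, z),
        sp.diff(Bx, z) - sp.diff(Bz, x),
        sp.diff(By, x) - sp.diff(Bx, y))

# divergence of the curl: the mixed partials cancel pairwise
div_curl = sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z)
print(sp.simplify(div_curl))  # 0
```

The cancellation relies only on the equality of mixed partial derivatives, which is why it holds for any smooth field.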

According to J.D. Jackson in his *Classical Electrodynamics* textbook (my textbook for E&M at Caltech), Maxwell’s repair can be reasoned as follows: One begins with the continuity equation $\nabla \cdot \mathbf{J} + \frac{\partial \rho}{\partial t} = 0$. One then notes that Gauss’ law says $\rho = \varepsilon_0 \nabla \cdot \mathbf{E}$, and so, taking its time derivative, one sees that

$$\frac{\partial \rho}{\partial t} = \varepsilon_0 \nabla \cdot \frac{\partial \mathbf{E}}{\partial t};$$

plugging this into the continuity equation, the result is

$$\nabla \cdot \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right) = 0,$$

so the quantity $\mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$ must always be solenoidal (divergence-free); thus, one replaces **J** in the magnetostatic form of Ampère’s law with this quantity; then both sides are always divergence-free:

$$\nabla \times \mathbf{B} = \mu_0 \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right).$$

Maxwell dubbed this added term the displacement current. While it has units of current density, it is not an actual current of flowing charges, but instead indicates that, just as a current generates a magnetic field, so does a time-varying electric field.
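The classic illustration is a charging parallel-plate capacitor: no charge crosses the gap, but the growing field between the plates carries a displacement current equal to the conduction current in the wire. A minimal numeric sketch (the plate area and current values here are assumed purely for illustration):

```python
# Displacement current in a charging parallel-plate capacitor.
# Between the plates E = Q/(eps0*A), so eps0*d(E*A)/dt = dQ/dt = I_wire.
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
A = 1e-2                 # plate area, m^2 (assumed)
I_wire = 0.5             # charging current, A (assumed)

dE_dt = I_wire / (EPS0 * A)   # rate of change of the field between the plates
I_disp = EPS0 * A * dE_dt     # displacement "current": eps0 * dPhi_E/dt
print(I_disp)                 # ~0.5 A, equal to the conduction current
```

The algebra is circular by construction; the point is that $\varepsilon_0 \, d\Phi_E/dt$ across the gap reproduces exactly the current flowing in the wire, so Ampère’s corrected law gives the same **B** around either.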

The four equations,

$$\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} \\
\nabla \cdot \mathbf{B} &= 0 \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t},
\end{aligned}$$

are collectively known as Maxwell’s equations, and are the basis of all classical electrodynamics.

From the last two, we see that a time-varying electric field generates a magnetic field (Ampère’s Law), and a time-varying magnetic field generates an electric field (Faraday’s Law); the combination of these two makes possible electromagnetic waves.
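In SI units, the wave speed that falls out of the two curl equations is $c = 1/\sqrt{\mu_0 \varepsilon_0}$; a one-line numeric check (using the pre-2019 defined value of $\mu_0$):

```python
import math

# Wave speed implied by Maxwell's equations: c = 1/sqrt(mu0 * eps0).
MU0 = 4e-7 * math.pi      # vacuum permeability, H/m
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

c = 1.0 / math.sqrt(MU0 * EPS0)
print(c)  # ~2.9979e8 m/s: the speed of light
```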

## Archive for July, 2010

### Physics Friday 129

July 30, 2010

### Monday Math 128

July 26, 2010

We have previously discussed the Fourier transform (here and here, especially). In this post, we noted that (using the symmetric angular convention) the space transform for an *n* dimensional space is

$$F(\mathbf{k}) = \frac{1}{(2\pi)^{n/2}} \int f(\mathbf{x}) \, e^{-i \mathbf{k} \cdot \mathbf{x}} \, d^n x,$$

and the inverse is

$$f(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}} \int F(\mathbf{k}) \, e^{i \mathbf{k} \cdot \mathbf{x}} \, d^n k.$$

We can also do the same for a vector field:

$$\mathbf{F}(\mathbf{k}) = \frac{1}{(2\pi)^{n/2}} \int \mathbf{f}(\mathbf{x}) \, e^{-i \mathbf{k} \cdot \mathbf{x}} \, d^n x$$

and

$$\mathbf{f}(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}} \int \mathbf{F}(\mathbf{k}) \, e^{i \mathbf{k} \cdot \mathbf{x}} \, d^n k.$$

We note from vector calculus then that for a vector field, the components of the transform are the transforms of the components:

$$\left( \mathcal{F}\{\mathbf{f}\} \right)_i = \mathcal{F}\{ f_i \}.$$

We also used integration by parts here to show that for a one-dimensional function *f*(*x*), with $f(x) \to 0$ as $x \to \pm\infty$, the derivative has Fourier transform:

$$\mathcal{F}\{f'(x)\}(k) = ik \, \mathcal{F}\{f(x)\}(k).$$

Similarly, we can use vector integration by parts for our multi-dimensional transforms. Working in three dimensions from here:

First, one form of the divergence theorem states:

$$\iiint_V \nabla f \, d^3 r = \oint_S f \, \hat{n} \, dA,$$

where *S* is the boundary of the volume *V*, with outward normal $\hat{n}$.

Letting *f*=*φψ*, and using the gradient product rule $\nabla(\varphi\psi) = \varphi \nabla\psi + \psi \nabla\varphi$,

$$\iiint_V \left( \varphi \nabla\psi + \psi \nabla\varphi \right) d^3 r = \oint_S \varphi \psi \, \hat{n} \, dA.$$

Letting $\psi = e^{-i \mathbf{k} \cdot \mathbf{x}}$, we see

$$\iiint_V \left( \varphi \, \nabla e^{-i \mathbf{k} \cdot \mathbf{x}} + e^{-i \mathbf{k} \cdot \mathbf{x}} \nabla\varphi \right) d^3 r = \oint_S \varphi \, e^{-i \mathbf{k} \cdot \mathbf{x}} \, \hat{n} \, dA,$$

and since $\nabla e^{-i \mathbf{k} \cdot \mathbf{x}} = -i \mathbf{k} \, e^{-i \mathbf{k} \cdot \mathbf{x}}$, we have

$$\iiint_V e^{-i \mathbf{k} \cdot \mathbf{x}} \nabla\varphi \, d^3 r = i \mathbf{k} \iiint_V \varphi \, e^{-i \mathbf{k} \cdot \mathbf{x}} \, d^3 r + \oint_S \varphi \, e^{-i \mathbf{k} \cdot \mathbf{x}} \, \hat{n} \, dA.$$

Now, if as $r \to \infty$, *φ*(**x**) goes to zero faster than $r^{-2}$, then, as we expand the volume *V* to cover all space, the surface integral will go to zero (the surface area grows only as $r^2$), and we obtain

$$\iiint e^{-i \mathbf{k} \cdot \mathbf{x}} \nabla\varphi \, d^3 r = i \mathbf{k} \iiint \varphi \, e^{-i \mathbf{k} \cdot \mathbf{x}} \, d^3 r,$$

which means

$$\mathcal{F}\{\nabla\varphi\}(\mathbf{k}) = i \mathbf{k} \, \mathcal{F}\{\varphi\}(\mathbf{k}),$$

in analogy to our one-dimensional rule.
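The one-dimensional rule is easy to check numerically with a discrete Fourier transform; a sketch (the grid size and Gaussian test function are arbitrary choices, and the DFT’s overall normalization differs from the symmetric convention but cancels from both sides):

```python
import numpy as np

# Check F{f'}(k) = i k F{f}(k) numerically for a rapidly decaying function.
N, L = 512, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
f = np.exp(-x**2)                  # Gaussian: decays faster than any power
df = -2.0 * x * np.exp(-x**2)      # its analytic derivative

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
err = np.max(np.abs(np.fft.fft(df) - 1j * k * np.fft.fft(f)))
print(err)  # essentially machine precision
```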

### Physics Friday 128

July 23, 2010

Part 8: Work, Induction, and Magnetic Energy Density

Previously, we derived the energy density of the electric field by considering the work done in assembling a charge distribution from infinite separation, with the assembly done slowly enough as to be a quasi-static system. While we can find the energy density of the magnetic field by considering the work done in forming a particular current density, we cannot do so using magnetostatics; we must use Faraday’s Law of induction.

First, consider a circuit with a constant current *I*. If the flux *Φ* through the circuit changes, then an electromotive force will be generated around the circuit. This will change the current in the circuit; to oppose this, and keep the current constant, the current source must do work. As we found here, the power delivered to a current (work per unit time) by a voltage *V* is *P*=*IV*. Thus, the emf does work per unit time of $I\mathcal{E}$, and so the work per unit time needed to oppose this and keep the current constant is $-I\mathcal{E}$. Now, Faraday’s Law tells us that (using SI units) $\mathcal{E} = -\frac{d\Phi}{dt}$. Thus, our current source delivers power $P = I \frac{d\Phi}{dt}$; or, in terms of differentials, a small change in flux *δΦ* is countered by work $\delta W = I \, \delta\Phi$.

Next, consider a system of *n* circuits, with respective currents $I_i$, *i*=1,2,…,*n*. Then the flux in the *i*th circuit is

$$\Phi_i = \iint_{S_i} \mathbf{B} \cdot d\mathbf{S},$$

where *d***S** is the vector surface element for a surface $S_i$ bounded by the *i*th circuit (I’ve used *d***S** rather than the usual *d***A** to avoid confusion with the magnetic vector potential **A**).

Now, as we noted here, the definition of the vector potential combined with the Kelvin-Stokes theorem tell us that

$$\Phi_i = \oint_{C_i} \mathbf{A} \cdot d\mathbf{s},$$

where *d***s** is a vector line element of the *i*th circuit $C_i$.

Thus, by the above single-circuit case, when there is a change in the magnetic field, and thus the fluxes, the current source of current $I_i$ must deliver power

$$P_i = I_i \frac{d\Phi_i}{dt}$$

to maintain the current. So, then, the total work necessary to take these *n* circuits from zero current to some final values a time *T* later is

$$W = \int_0^T \sum_{i=1}^{n} I_i \frac{d\Phi_i}{dt} \, dt.$$

Now, the result should be independent of the particular “path” through intermediate values, so to simplify, we ramp up the currents proportionally, so that there is some increasing function of time *f*(*t*), with *f*(0)=0 and $I_i(t) = c_i f(t)$, with some constants of proportionality $c_i$, for all *i*=1,2,…,*n*. The key, then, is to note that the magnetic field generated by a current has magnitude proportional to that current, and so **B** will be linearly proportional to *f*(*t*), and thus all the $\Phi_i$ will be proportional to *f*(*t*). Thus, dubbing the constants of this latter proportionality $k_i$,

$$\Phi_i(t) = k_i f(t),$$

and so

$$W = \int_0^T \sum_{i=1}^{n} I_i \frac{d\Phi_i}{dt} \, dt = \sum_{i=1}^{n} c_i k_i \int_0^T f \frac{df}{dt} \, dt = \frac{1}{2} f(T)^2 \sum_{i=1}^{n} c_i k_i.$$

Since the final values of the current and flux are $I_i = c_i f(T)$, and $\Phi_i = k_i f(T)$, respectively, this says that for proportional ramp-up, $c_i k_i f(T)^2 = I_i \Phi_i$, and so the work in setting up these currents is

$$W = \frac{1}{2} \sum_{i=1}^{n} I_i \Phi_i;$$

thus, for *n* circuits with currents $I_i$ and fluxes $\Phi_i$, this is the energy stored, and using our expression for flux in terms of the line integral of the vector potential,

$$W = \frac{1}{2} \sum_{i=1}^{n} I_i \oint_{C_i} \mathbf{A} \cdot d\mathbf{s}.$$

Now, let us instead consider a continuous current distribution, with current density **J**. As we did in our argument here, we break up the distribution into elemental current loops. An elemental loop will have a path *C* with line element *d***s** parallel to the local current density, so that $J \, d\mathbf{s} = \mathbf{J} \, ds$; and we have a small perpendicular cross-section *Δσ*, so that the current in the loop is *I*=*JΔσ*. Thus, the contribution to the total stored energy by this element is

$$\delta W = \frac{1}{2} J \Delta\sigma \oint_C \mathbf{A} \cdot d\mathbf{s};$$

but, as we noted in our argument here, $J \Delta\sigma \, d\mathbf{s} = \mathbf{J} \, \Delta\sigma \, ds = \mathbf{J} \, d^3 r$, and so the sum over all of these elemental loops becomes a volume integral:

$$W = \frac{1}{2} \iiint \mathbf{A} \cdot \mathbf{J} \, d^3 r.$$

Now, recall that Ampère’s Law states that

$$\nabla \times \mathbf{B} = \mu_0 \mathbf{J},$$

and so, using this to replace the current density in the above, we see

$$W = \frac{1}{2\mu_0} \iiint \mathbf{A} \cdot (\nabla \times \mathbf{B}) \, d^3 r.$$

Now, the product rule for the divergence of a cross product states that for two vector fields **v** and **w**,

$$\nabla \cdot (\mathbf{v} \times \mathbf{w}) = \mathbf{w} \cdot (\nabla \times \mathbf{v}) - \mathbf{v} \cdot (\nabla \times \mathbf{w}).$$

Letting **v** be **B** and **w** be **A**, and solving for the first term on the right-hand side, we see

$$\mathbf{A} \cdot (\nabla \times \mathbf{B}) = \mathbf{B} \cdot (\nabla \times \mathbf{A}) + \nabla \cdot (\mathbf{B} \times \mathbf{A}),$$

and so

$$W = \frac{1}{2\mu_0} \iiint \mathbf{B} \cdot (\nabla \times \mathbf{A}) \, d^3 r + \frac{1}{2\mu_0} \iiint \nabla \cdot (\mathbf{B} \times \mathbf{A}) \, d^3 r.$$

This second term is the volume integral of a divergence; thus, by the divergence theorem,

$$\iiint_V \nabla \cdot (\mathbf{B} \times \mathbf{A}) \, d^3 r = \oint_S (\mathbf{B} \times \mathbf{A}) \cdot \hat{n} \, dA,$$

where *S* is the surface bounding our volume of integration. Now, a realistic current distribution can be expected to be of finite spatial extent; thus, as we expand our volume, the surface will eventually come to be far from the current distribution. As we noted here, for distant fields, the dipole term dominates, and the vector potential goes as $r^{-2}$, and the field goes as $r^{-3}$; thus, their cross product will have a magnitude that goes as $r^{-5}$, while the surface of integration has area that goes as $r^{2}$; thus, as the volume is expanded to all space, this surface integral will go to zero, and we get

$$W = \frac{1}{2\mu_0} \iiint \mathbf{B} \cdot (\nabla \times \mathbf{A}) \, d^3 r.$$

But from the definition of the vector potential, $\mathbf{B} = \nabla \times \mathbf{A}$, so the above is

$$W = \frac{1}{2\mu_0} \iiint |\mathbf{B}|^2 \, d^3 r,$$

and we identify the quantity being integrated over all space as the energy density of the magnetic field: $u_B = \frac{B^2}{2\mu_0}$.

Compare this to the energy density of the electric field, $u_E = \frac{\varepsilon_0 E^2}{2}$.
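As a consistency check, for an ideal solenoid the field-energy density integrated over the interior volume should reproduce the circuit-theory energy $\frac{1}{2} L I^2$; a numeric sketch (all parameter values assumed for illustration):

```python
import math

# Check u = B^2/(2 mu0) against the circuit energy (1/2) L I^2 for an
# ideal (long, tightly wound) solenoid; parameter values are assumed.
MU0 = 4e-7 * math.pi
n = 1000.0   # turns per metre
I = 2.0      # current, A
A = 1e-3     # cross-sectional area, m^2
ell = 0.5    # length, m

B = MU0 * n * I                          # uniform interior field
U_field = (B**2 / (2.0 * MU0)) * A * ell # energy density times volume
L_ind = MU0 * n**2 * A * ell             # ideal-solenoid self-inductance
U_circuit = 0.5 * L_ind * I**2
print(U_field, U_circuit)  # agree (to rounding)
```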

### I write like

July 20, 2010

According to the algorithm here, which has been making the rounds in the blogs I frequent, I write like

Edgar Allan Poe


I’m not sure how it arrives at that result.

### Monday Math 127

July 19, 2010

Consider Poisson’s equation in three dimensions:

$$\nabla^2 \varphi = f(\mathbf{r}),$$

where $f(\mathbf{r})$ is some function. Let us investigate the solution on a volume *V*.

Suppose there are two solutions, *φ*_{1} and *φ*_{2}. We can then define $\psi = \varphi_1 - \varphi_2$. Hence,

$$\nabla^2 \psi = \nabla^2 \varphi_1 - \nabla^2 \varphi_2 = f - f = 0.$$

Now, recall that Green’s first identity states that for scalar fields *f* and *g*,

$$\iiint_V \left( f \, \nabla^2 g + \nabla f \cdot \nabla g \right) dV = \oint_S f \, \nabla g \cdot \hat{n} \, dA,$$

where *S* is the surface bounding the volume *V*, with outward normal $\hat{n}$. Letting *f*=*g*=*ψ*, we have

$$\iiint_V |\nabla \psi|^2 \, dV = \oint_S \psi \, \frac{\partial \psi}{\partial n} \, dA,$$

where $\frac{\partial \psi}{\partial n} = \nabla \psi \cdot \hat{n}$ is the normal derivative of the function, and we have used the fact that $\nabla^2 \psi = 0$.

Now, suppose we have the Dirichlet boundary condition, so that the value of *φ* is specified on our boundary *S*. Thus, *φ*_{1}=*φ*_{2}, and *ψ*=0, on this surface; so then $\psi \frac{\partial \psi}{\partial n} = 0$ on the surface *S*, and so

$$\iiint_V |\nabla \psi|^2 \, dV = 0;$$

but the norm squared is positive definite, so the left-hand integral is zero if and only if $\nabla \psi = 0$ on all of *V*, which requires that *ψ* be constant on *V*; and since *ψ* is zero on the boundary of *V*, we have *ψ*=0 for all points in *V*, and so *φ*_{1}=*φ*_{2}: the solution to Poisson’s equation with the Dirichlet boundary condition is unique.

Suppose instead that we have the Neumann boundary condition, where $\frac{\partial \varphi}{\partial n}$ is specified on all of the boundary *S*. Then

$$\frac{\partial \psi}{\partial n} = \frac{\partial \varphi_1}{\partial n} - \frac{\partial \varphi_2}{\partial n} = 0$$

on *S*. Thus $\psi \frac{\partial \psi}{\partial n} = 0$ on the surface *S*, and, as in the previous case, $\nabla \psi = 0$, and so *ψ* is constant on *V*, and so our solutions are unique up to addition by a constant.

In fact, there are other boundary conditions that lead to similar results, where $\nabla \varphi$ is unique (and *φ* is thus either unique, or unique up to a constant), including in an infinite domain with an appropriate boundary condition at infinity (see more here).
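The Dirichlet uniqueness result can also be seen numerically: solving a discretized Poisson problem by Jacobi iteration from two completely different starting guesses converges to the same solution. A one-dimensional sketch (the grid size, source term, and boundary values here are arbitrary choices):

```python
import numpy as np

# 1D Poisson problem phi'' = -rho on [0,1] with Dirichlet data
# phi(0)=0, phi(1)=1; two different initial guesses converge to
# the same (unique) solution.
N = 32
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
rho = np.sin(np.pi * x)  # arbitrary source term

def jacobi(phi0, iters=20000):
    phi = phi0.astype(float).copy()
    phi[0], phi[-1] = 0.0, 1.0  # enforce the Dirichlet boundary values
    for _ in range(iters):
        # Jacobi update for the discrete Laplacian (RHS evaluated fully
        # before assignment, so this is a true Jacobi sweep)
        phi[1:-1] = 0.5 * (phi[:-2] + phi[2:] + h**2 * rho[1:-1])
    return phi

a = jacobi(np.zeros(N + 1))
b = jacobi(np.random.default_rng(0).standard_normal(N + 1))
print(np.max(np.abs(a - b)))  # ~0: both guesses give the same solution
```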

### Physics Friday 127

July 16, 2010

Part 7: Electromagnetic Induction

Moving beyond electrostatics and magnetostatics into the study of time-varying fields, we begin with Faraday’s Law of Induction. Developed independently by Faraday and Henry (though Faraday was first to publish; see here), in simple terms, it states that a change in magnetic flux through a closed circuit will induce a current in that circuit. Specifically, given a circuit with magnetic flux *Φ*, in SI units one has an electromotive force $\mathcal{E} = -\frac{d\Phi}{dt}$ (in Gaussian/cgs units, the equation is $\mathcal{E} = -\frac{1}{c}\frac{d\Phi}{dt}$).

Note that electromotive force is a misnomer; the quantity it refers to is not a force, but has units of electric potential (energy per unit charge, or, alternately, electric field times distance). The minus sign is given by Lenz’s law, which holds that the current induced by the change in flux is in a direction such that the field it produces (via the Biot-Savart law) opposes the change in flux.

An important part of Faraday’s law is that the change in flux may be due to motion of the circuit through a spatially-varying field, or due to a change in the field at a stationary circuit. Classically, these are very different phenomena. The identical mathematical description of these was one of the steps that led Einstein to develop special relativity (PDF).

In terms of vector calculus, if our circuit is along the curve *C*, the magnetic flux is

$$\Phi = \iint_S \mathbf{B} \cdot d\mathbf{A},$$

where *S* is any simple surface bounded by the curve *C*; and the orientation of the surface normal is determined by the orientation of *C* via the right-hand rule. Similarly, the electromotive force for induction is the line integral over the circuit of the electric field *in the frame of the circuit*, denoted here by **E**′:

$$\mathcal{E} = \oint_C \mathbf{E}' \cdot d\mathbf{s};$$

and so Faraday’s law is

$$\oint_C \mathbf{E}' \cdot d\mathbf{s} = -\frac{d}{dt} \iint_S \mathbf{B} \cdot d\mathbf{A},$$

where the time derivative on the right-hand side is a total derivative.

Let’s consider a frame where the circuit is stationary. Then **E**′=**E**, and so we are defining the electric and magnetic field in the same frame. Further, the surface *S* is stationary, so the total time derivative is equivalent here to the partial derivative, and will commute with the surface integral. So then we have

$$\oint_C \mathbf{E} \cdot d\mathbf{s} = -\iint_S \frac{\partial \mathbf{B}}{\partial t} \cdot d\mathbf{A}.$$

Using the Kelvin-Stokes theorem, we can convert the line integral into a surface integral:

$$\iint_S (\nabla \times \mathbf{E}) \cdot d\mathbf{A} = -\iint_S \frac{\partial \mathbf{B}}{\partial t} \cdot d\mathbf{A}.$$

Since this must hold for any stationary surface in space, we see the integrands must be equal, and we obtain the differential form of Faraday’s Law:

$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$$

in SI units, or

$$\nabla \times \mathbf{E} = -\frac{1}{c} \frac{\partial \mathbf{B}}{\partial t}$$

in Gaussian units.

Note that this means that, in electrodynamics, the electric field is no longer irrotational, and thus we can no longer find a scalar field *φ* such that $\mathbf{E} = -\nabla \varphi$.
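For a concrete case of the integral form, consider a stationary circular loop in a spatially uniform, sinusoidally oscillating field; the emf is just $-d\Phi/dt$, which we can confirm with a numerical derivative (loop radius, field amplitude, and frequency are assumed values):

```python
import numpy as np

# Faraday's law for a stationary circular loop in a uniform field
# B(t) = B0 sin(w t); all parameter values are assumed for illustration.
a, B0, w = 0.1, 0.05, 2.0 * np.pi * 60.0  # radius (m), amplitude (T), 60 Hz
area = np.pi * a**2

def flux(t):
    return area * B0 * np.sin(w * t)

t, dt = 1e-3, 1e-8
emf_numeric = -(flux(t + dt) - flux(t - dt)) / (2.0 * dt)  # -dPhi/dt
emf_exact = -area * B0 * w * np.cos(w * t)
print(emf_numeric, emf_exact)  # agree to many digits
```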

### Monday Math 126

July 12, 2010

From the generalized Stokes’ theorem, which generalizes the fundamental theorem of calculus to higher dimensional differential forms on manifolds, one may derive a number of useful theorems of vector calculus, such as the gradient theorem, Kelvin-Stokes theorem (also frequently known as “Stokes’ theorem” or the “curl theorem”), the divergence theorem, and Green’s Theorem. One may also derive from it the formula for vector integration by parts: for a region *Ω* of $\mathbb{R}^n$ with piecewise smooth boundary *Γ*, with outward surface normal $\hat{n}$, then for scalar function *φ*(**r**) and vector function **v**(**r**), one has

$$\int_\Omega \left( \varphi \, \nabla \cdot \mathbf{v} + \nabla \varphi \cdot \mathbf{v} \right) dV = \oint_\Gamma \varphi \, \mathbf{v} \cdot \hat{n} \, dA,$$

or, rearranging,

$$\int_\Omega \varphi \, \nabla \cdot \mathbf{v} \, dV = \oint_\Gamma \varphi \, \mathbf{v} \cdot \hat{n} \, dA - \int_\Omega \nabla \varphi \cdot \mathbf{v} \, dV,$$

or

$$\int_\Omega \nabla \varphi \cdot \mathbf{v} \, dV = \oint_\Gamma \varphi \, \mathbf{v} \cdot \hat{n} \, dA - \int_\Omega \varphi \, \nabla \cdot \mathbf{v} \, dV.$$

Using the second form, and letting *φ*=1, we get

$$\int_\Omega \nabla \cdot \mathbf{v} \, dV = \oint_\Gamma \mathbf{v} \cdot \hat{n} \, dA,$$

the divergence theorem.

Letting our vector field be the gradient of a scalar function, $\mathbf{v} = \nabla \psi$, in the first form, we obtain

$$\int_\Omega \left( \varphi \, \nabla^2 \psi + \nabla \varphi \cdot \nabla \psi \right) dV = \oint_\Gamma \varphi \, \nabla \psi \cdot \hat{n} \, dA,$$

which is Green’s first identity, often written as

$$\int_\Omega \left( \varphi \, \nabla^2 \psi + \nabla \varphi \cdot \nabla \psi \right) dV = \oint_\Gamma \varphi \, \frac{\partial \psi}{\partial n} \, dA,$$

and usually used in three dimensions:

$$\iiint_V \left( \varphi \, \nabla^2 \psi + \nabla \varphi \cdot \nabla \psi \right) dV = \oint_S \varphi \, \frac{\partial \psi}{\partial n} \, dA.$$

Exchanging *φ* and *ψ*,

$$\iiint_V \left( \psi \, \nabla^2 \varphi + \nabla \psi \cdot \nabla \varphi \right) dV = \oint_S \psi \, \frac{\partial \varphi}{\partial n} \, dA,$$

and subtracting this from the previous, the dot product of gradients terms cancel, giving Green’s second identity:

$$\iiint_V \left( \varphi \, \nabla^2 \psi - \psi \, \nabla^2 \varphi \right) dV = \oint_S \left( \varphi \, \frac{\partial \psi}{\partial n} - \psi \, \frac{\partial \varphi}{\partial n} \right) dA.$$

Taking Green’s first identity in the form

$$\iiint_V \left( \varphi \, \nabla^2 \psi + \nabla \varphi \cdot \nabla \psi \right) dV = \oint_S \varphi \, \frac{\partial \psi}{\partial n} \, dA,$$

and setting *ψ*=*φ*, we get

$$\iiint_V \left( \varphi \, \nabla^2 \varphi + |\nabla \varphi|^2 \right) dV = \oint_S \varphi \, \frac{\partial \varphi}{\partial n} \, dA.$$

Letting $\mathbf{v} = \nabla \times \mathbf{w}$ in the first form, we see

$$\int_\Omega \nabla \varphi \cdot (\nabla \times \mathbf{w}) \, dV = \oint_\Gamma \varphi \, (\nabla \times \mathbf{w}) \cdot \hat{n} \, dA,$$

since the curl of a vector field always has zero divergence.
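The pointwise identity underlying the integration-by-parts formula, $\nabla \cdot (\varphi \mathbf{v}) = \varphi \, \nabla \cdot \mathbf{v} + \nabla \varphi \cdot \mathbf{v}$, can be checked symbolically; a sketch with `sympy` (the function names `phi`, `vx`, `vy`, `vz` are placeholders for arbitrary smooth fields):

```python
import sympy as sp

# Symbolic check of div(phi*v) = phi*div(v) + grad(phi).v, the product
# rule that, integrated over a region, gives vector integration by parts.
x, y, z = sp.symbols('x y z')
phi = sp.Function('phi')(x, y, z)
v = [sp.Function(name)(x, y, z) for name in ('vx', 'vy', 'vz')]

coords = (x, y, z)
div = lambda F: sum(sp.diff(F[i], coords[i]) for i in range(3))
grad = lambda f: [sp.diff(f, c) for c in coords]

lhs = div([phi * vi for vi in v])
rhs = phi * div(v) + sum(g * vi for g, vi in zip(grad(phi), v))
print(sp.simplify(lhs - rhs))  # 0
```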

### Physics Friday 126

July 9, 2010

Part 6: Work, Electric Potential, and Energy Density

In electrostatics, the electric potential can be interpreted as the potential energy per unit charge of a test charge at the point in question. In particular, if we have a localized charge distribution creating fields with a scalar potential *φ*, which goes to zero at infinite distance from the distribution, and if we bring (sufficiently slowly) a point charge *q* from infinity to the point **r**, then the work done on this charge, and thus the potential energy (relative to zero potential energy at infinite separation), is simply *W*=*qφ*(**r**).

If our potential is due to a charge *q*_{2} at point **r**_{2}, and we bring in from infinity the charge *q*_{1} to point **r**_{1}, then the potential due to *q*_{2} is $\varphi_2(\mathbf{r}_1) = \frac{q_2}{4\pi\varepsilon_0 |\mathbf{r}_1 - \mathbf{r}_2|}$, and so the potential energy is

$$W = \frac{q_1 q_2}{4\pi\varepsilon_0 |\mathbf{r}_1 - \mathbf{r}_2|}.$$

This can also be seen as the charge *q*_{2} times the potential due to charge *q*_{1} at point **r**_{2}.

For a set of *n* charges $q_i$ at points $\mathbf{r}_i$, the total potential energy is thus

$$W = \frac{1}{4\pi\varepsilon_0} \sum_{i<j} \frac{q_i q_j}{|\mathbf{r}_i - \mathbf{r}_j|},$$

summed for each distinct pair of charges $q_i$ and $q_j$. If we denote by $\varphi_i(\mathbf{r})$ the potential due to all charges *except* $q_i$, then we see that

$$W = \frac{1}{2} \sum_{i=1}^{n} q_i \, \varphi_i(\mathbf{r}_i)$$

is the total potential energy; the 1/2 term out front is because the above sum counts each pair of charges twice.

Extending to a localized continuous charge distribution, for a small element $d^3 r$ at the point **r**, the charge is $\rho(\mathbf{r}) \, d^3 r$, and the potential is $\varphi(\mathbf{r})$; so, the above sum is replaced with an integral over space:

$$W = \frac{1}{2} \iiint \rho(\mathbf{r}) \, \varphi(\mathbf{r}) \, d^3 r.$$

Now, the differential form of Gauss’ law (or Coulomb’s law) is

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0},$$

and since, by definition of the potential, we have

$$\mathbf{E} = -\nabla \varphi,$$

substituting this latter into the former tells us that

$$\rho = -\varepsilon_0 \nabla^2 \varphi.$$

Substituting this last into our integral for energy, we obtain

$$W = -\frac{\varepsilon_0}{2} \iiint \varphi \, \nabla^2 \varphi \, d^3 r.$$

Using vector integration by parts (see here), one can see that

$$\iiint_V \varphi \, \nabla^2 \varphi \, d^3 r = \oint_S \varphi \, \nabla \varphi \cdot \hat{n} \, dA - \iiint_V |\nabla \varphi|^2 \, d^3 r.$$

Now, letting the volume increase, we recall that our charge distribution is localized; examining the surface integral, we note that far from the distribution, the potential *φ* goes like $r^{-1}$, and the magnitude of $\nabla \varphi$ goes as $r^{-2}$; their product has a magnitude that goes as $r^{-3}$. In contrast, the area of the surface goes as $r^{2}$; thus, as the volume is expanded to encompass all space, the surface integral vanishes, and so

$$\iiint \varphi \, \nabla^2 \varphi \, d^3 r = -\iiint |\nabla \varphi|^2 \, d^3 r.$$

Substituting this into our potential energy formula, this leads to the result that the potential energy, the work needed to assemble the charge distribution from infinite separation, is

$$W = \frac{\varepsilon_0}{2} \iiint |\nabla \varphi|^2 \, d^3 r = \frac{\varepsilon_0}{2} \iiint |\mathbf{E}|^2 \, d^3 r.$$

This is the integral over all space of the quantity $\frac{\varepsilon_0 E^2}{2}$; this quantity thus has units of energy density. We identify this integrand as the energy density of the electric field.
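As a check, for a uniformly charged spherical shell the field-energy integral can be compared against the closed form $U = \frac{q^2}{8\pi\varepsilon_0 R}$ (the field vanishes inside the shell, so only $r > R$ contributes). A numeric sketch, with assumed charge and radius, truncating the integral at $1000R$:

```python
import numpy as np

# Field energy of a uniformly charged spherical shell: integrate
# u = eps0*E^2/2 over r > R and compare with U = q^2/(8*pi*eps0*R).
# Charge and radius values are assumed for illustration.
EPS0 = 8.8541878128e-12
q, R = 1e-9, 0.05  # 1 nC shell, radius 5 cm

r = np.linspace(R, 1000.0 * R, 2_000_001)     # truncate at 1000 R
E = q / (4.0 * np.pi * EPS0 * r**2)           # Coulomb field outside the shell
integrand = 0.5 * EPS0 * E**2 * 4.0 * np.pi * r**2
U_numeric = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
U_exact = q**2 / (8.0 * np.pi * EPS0 * R)
print(U_numeric / U_exact)  # ~0.999 (the missing 0.1% lies beyond 1000 R)
```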

### Should I even bother? A Poll

July 8, 2010

### A Schoolhouse Rock 4th of July

July 4, 2010

Ed Morrissey over at Hot Air has up a post with several Schoolhouse Rock! videos about the founding of this nation.