Find a non-summation expression for the value of the sum .
Posts Tagged ‘Calculus’
From the generalized Stokes' theorem, which generalizes the fundamental theorem of calculus to higher-dimensional differential forms on manifolds, one may derive a number of useful theorems of vector calculus, such as the gradient theorem, the Kelvin-Stokes theorem (also frequently known as "Stokes' theorem" or the "curl theorem"), the divergence theorem, and Green's theorem. One may also derive from it the formula for vector integration by parts: for a region Ω of $\mathbb{R}^n$ with piecewise smooth boundary Γ, with outward surface normal $\hat{\mathbf{n}}$, then for scalar function φ(**r**) and vector function **v**(**r**), one has

$$\int_\Omega (\nabla\varphi)\cdot\mathbf{v}\,dV = \oint_\Gamma \varphi\,(\mathbf{v}\cdot\hat{\mathbf{n}})\,dS - \int_\Omega \varphi\,(\nabla\cdot\mathbf{v})\,dV,$$

or, equivalently,

$$\int_\Omega \varphi\,(\nabla\cdot\mathbf{v})\,dV = \oint_\Gamma \varphi\,(\mathbf{v}\cdot\hat{\mathbf{n}})\,dS - \int_\Omega (\nabla\varphi)\cdot\mathbf{v}\,dV.$$
Using the second form, and letting φ = 1, we get

$$\int_\Omega \nabla\cdot\mathbf{v}\,dV = \oint_\Gamma \mathbf{v}\cdot\hat{\mathbf{n}}\,dS,$$

the divergence theorem.
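For a quick numerical sanity check of the divergence theorem, one can compare both sides for a concrete field; the field **v** = (x², y², z²) on the unit cube below is an arbitrary choice for illustration, not anything from the derivation above:

```python
# Numerical check of the divergence theorem on the unit cube [0,1]^3
# for the arbitrarily chosen field v = (x^2, y^2, z^2).
# Here div v = 2x + 2y + 2z, so the volume integral is 3.

N = 60                                   # grid resolution
h = 1.0 / N
pts = [(i + 0.5) * h for i in range(N)]  # midpoints of the cells

# Volume integral of div v, by the midpoint rule.
vol = sum((2 * x + 2 * y + 2 * z) * h**3
          for x in pts for y in pts for z in pts)

# Outward flux of v: each component of v vanishes on its coordinate
# plane, so only the three far faces (x=1, y=1, z=1) contribute, and
# on each of those v·n = 1.
flux = 3 * sum(1.0 * h**2 for _ in pts for _ in pts)

print(vol, flux)  # both ≈ 3
```

The midpoint rule is exact here because div **v** is linear in each cell, so the two sides agree to rounding error.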
Letting our vector field be the gradient of a scalar function, $\mathbf{v} = \nabla\psi$, in the first form, we obtain

$$\int_\Omega \nabla\varphi\cdot\nabla\psi\,dV = \oint_\Gamma \varphi\,(\nabla\psi\cdot\hat{\mathbf{n}})\,dS - \int_\Omega \varphi\,\nabla^2\psi\,dV,$$

which is Green's first identity, often written as

$$\int_\Omega \left(\varphi\,\nabla^2\psi + \nabla\varphi\cdot\nabla\psi\right)dV = \oint_\Gamma \varphi\,(\nabla\psi\cdot\hat{\mathbf{n}})\,dS,$$

and usually used in three dimensions:

$$\iiint_\Omega \left(\varphi\,\nabla^2\psi + \nabla\varphi\cdot\nabla\psi\right)dV = \oint_\Gamma \varphi\,\frac{\partial\psi}{\partial n}\,dS.$$
Exchanging φ and ψ,

$$\int_\Omega \left(\psi\,\nabla^2\varphi + \nabla\psi\cdot\nabla\varphi\right)dV = \oint_\Gamma \psi\,(\nabla\varphi\cdot\hat{\mathbf{n}})\,dS,$$

and subtracting this from the previous, the dot-product-of-gradients terms cancel, giving Green's second identity:

$$\int_\Omega \left(\varphi\,\nabla^2\psi - \psi\,\nabla^2\varphi\right)dV = \oint_\Gamma \left(\varphi\,\nabla\psi - \psi\,\nabla\varphi\right)\cdot\hat{\mathbf{n}}\,dS.$$
Taking Green's first identity in the form

$$\int_\Omega \nabla\varphi\cdot\nabla\psi\,dV = \oint_\Gamma \varphi\,(\nabla\psi\cdot\hat{\mathbf{n}})\,dS - \int_\Omega \varphi\,\nabla^2\psi\,dV,$$

and setting ψ = φ, we get

$$\int_\Omega \left|\nabla\varphi\right|^2 dV = \oint_\Gamma \varphi\,(\nabla\varphi\cdot\hat{\mathbf{n}})\,dS - \int_\Omega \varphi\,\nabla^2\varphi\,dV.$$
Letting $\mathbf{v} = \nabla\times\mathbf{u}$ in the first form, we see

$$\int_\Omega \nabla\varphi\cdot(\nabla\times\mathbf{u})\,dV = \oint_\Gamma \varphi\,(\nabla\times\mathbf{u})\cdot\hat{\mathbf{n}}\,dS,$$

since the curl of a vector field always has zero divergence.
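That the curl of a smooth vector field is divergence-free is easy to spot-check numerically with central differences; the field **u** below, and the sample point, are arbitrary choices for illustration:

```python
import math

# Finite-difference check that div(curl u) = 0, for the arbitrarily
# chosen field u = (y*z^2, sin(x)*z, x^2*y).

h = 1e-3  # step size for central differences

def u(x, y, z):
    return (y * z**2, math.sin(x) * z, x**2 * y)

def curl(x, y, z):
    def d(comp, axis):
        # central-difference partial of u[comp] along the given axis
        e = [0.0, 0.0, 0.0]
        e[axis] = h
        plus = u(x + e[0], y + e[1], z + e[2])[comp]
        minus = u(x - e[0], y - e[1], z - e[2])[comp]
        return (plus - minus) / (2 * h)
    return (d(2, 1) - d(1, 2),   # ∂u3/∂y − ∂u2/∂z
            d(0, 2) - d(2, 0),   # ∂u1/∂z − ∂u3/∂x
            d(1, 0) - d(0, 1))   # ∂u2/∂x − ∂u1/∂y

def div_curl(x, y, z):
    dx = (curl(x + h, y, z)[0] - curl(x - h, y, z)[0]) / (2 * h)
    dy = (curl(x, y + h, z)[1] - curl(x, y - h, z)[1]) / (2 * h)
    dz = (curl(x, y, z + h)[2] - curl(x, y, z - h)[2]) / (2 * h)
    return dx + dy + dz

print(div_curl(0.7, -1.2, 2.5))  # ≈ 0
```

Because central-difference operators commute, the discrete divergence of the discrete curl cancels identically, leaving only rounding noise.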
Very few of the calculus textbooks I have used give a rigorous derivation of the derivatives of sine and cosine, instead using little more than the graphs of the functions as justification. Here, I will demonstrate a rigorous derivation, starting with a clear derivation of the limit $\lim_{\theta\to 0}\frac{\sin\theta}{\theta} = 1$.
Start with the unit circle diagram above, with small, positive θ.
Now, the area of the triangle ▵OAB is $\frac{1}{2}\sin\theta$. Similarly, the area of the circular sector between OA and OB is $\frac{1}{2}\theta$. Lastly, the right triangle ▵OAD has area $\frac{1}{2}\tan\theta$.
Comparing these areas, we have the inequality $\frac{1}{2}\sin\theta \le \frac{1}{2}\theta \le \frac{1}{2}\tan\theta$, which means $\sin\theta \le \theta \le \tan\theta$.
Multiplying this by the positive quantity $\frac{1}{\sin\theta}$, we get

$$1 \le \frac{\theta}{\sin\theta} \le \frac{\tan\theta}{\sin\theta},$$

which, using $\tan\theta = \frac{\sin\theta}{\cos\theta}$, becomes

$$1 \le \frac{\theta}{\sin\theta} \le \frac{1}{\cos\theta}.$$
Therefore, the inverses obey the inequality

$$\cos\theta \le \frac{\sin\theta}{\theta} \le 1,$$

and since $\lim_{\theta\to 0^+}\cos\theta = 1$, by the squeeze theorem, we see

$$\lim_{\theta\to 0^+}\frac{\sin\theta}{\theta} = 1;$$

and since $\frac{\sin\theta}{\theta}$ is an even function, this must also be the left-hand limit, and so we have

$$\lim_{\theta\to 0}\frac{\sin\theta}{\theta} = 1.$$
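Numerically, one can watch $\frac{\sin\theta}{\theta}$ get squeezed toward 1; this short snippet just tabulates the quantities in the squeeze:

```python
import math

# sin(θ)/θ for θ shrinking toward 0: the middle column approaches 1,
# staying between cos θ and 1, as the squeeze argument predicts.
for theta in (0.5, 0.1, 0.01, 0.001):
    print(theta, math.cos(theta), math.sin(theta) / theta)
```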
Now, consider $\lim_{\theta\to 0}\frac{1-\cos\theta}{\theta}$. We can find this limit using the above limit and a little trigonometry:

$$\lim_{\theta\to 0}\frac{1-\cos\theta}{\theta} = \lim_{\theta\to 0}\frac{1-\cos^2\theta}{\theta\,(1+\cos\theta)} = \lim_{\theta\to 0}\frac{\sin\theta}{\theta}\cdot\frac{\sin\theta}{1+\cos\theta} = 1\cdot\frac{0}{2} = 0.$$
Using these two limits and the addition formulas for sine and cosine, we compute the derivatives from the definition $f'(x) = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$:

$$\frac{d}{dx}\sin x = \lim_{h\to 0}\frac{\sin x\cos h + \cos x\sin h - \sin x}{h} = \sin x\lim_{h\to 0}\frac{\cos h - 1}{h} + \cos x\lim_{h\to 0}\frac{\sin h}{h} = \cos x,$$

$$\frac{d}{dx}\cos x = \lim_{h\to 0}\frac{\cos x\cos h - \sin x\sin h - \cos x}{h} = \cos x\lim_{h\to 0}\frac{\cos h - 1}{h} - \sin x\lim_{h\to 0}\frac{\sin h}{h} = -\sin x.$$
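A difference quotient with small h gives a quick numerical check of these two derivatives (the sample point x = 0.8 is arbitrary):

```python
import math

# Difference quotients (f(x+h) - f(x))/h at a sample point: for small
# h they approach cos x for f = sin, and -sin x for f = cos.
x, h = 0.8, 1e-6
dsin = (math.sin(x + h) - math.sin(x)) / h
dcos = (math.cos(x + h) - math.cos(x)) / h
print(dsin, math.cos(x))    # nearly equal
print(dcos, -math.sin(x))   # nearly equal
```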
Watch this video: “I Will Derive”
We would like to find an exact, non-series formula, in terms of α and β, for the integral . Attempting to find the antiderivative of the integrand shows that it cannot be expressed in elementary functions (it can be expressed in terms of a polylogarithm; see here). However, we do have a method that can evaluate the definite integral: 'differentiation under the integral sign.'
Now, let us take the derivative of this function. As the limits of the integral are constants, we see:
, with C some constant.
Now, we see
For a second example, let us consider
As before, we find the derivative of this function:
Using a table of integrals (such as #21 here with a=1, p=1, and q=cosφ), we see that for 0≤θ≤π,
And this tells us
Now, when , we have
, thus giving , and thus for ,
This method can also be used to find some definite integrals without parameters by adding one; for example, the integral
To use the method, we define:
Our original integral is thus
Taking the derivative of this function,
Now, we see
which is –α times the integrand we obtained for . Thus:
And thus is a constant.
And thus our original integral is also zero.
Similarly, one can find the integral from 0 to infinity of the sine cardinal function $\operatorname{sinc} x = \frac{\sin x}{x}$ by using this method upon

$$F(\alpha) = \int_0^\infty e^{-\alpha x}\,\frac{\sin x}{x}\,dx, \qquad \alpha \ge 0.$$

And so we find

$$F'(\alpha) = -\int_0^\infty e^{-\alpha x}\sin x\,dx = -\frac{1}{1+\alpha^2},$$

so that

$$F(\alpha) = C - \arctan\alpha$$

for some constant C.
Now, note that as $\alpha\to\infty$, we have $e^{-\alpha x}\to 0$ for all x > 0, so that

$$\left|F(\alpha)\right| \le \int_0^\infty e^{-\alpha x}\,dx = \frac{1}{\alpha},$$

and thus $\lim_{\alpha\to\infty}F(\alpha) = 0$. As $\lim_{\alpha\to\infty}\arctan\alpha = \frac{\pi}{2}$, we thus find $C = \frac{\pi}{2}$, and so

$$F(\alpha) = \frac{\pi}{2} - \arctan\alpha,$$

and our original integral is:

$$\int_0^\infty \frac{\sin x}{x}\,dx = F(0) = \frac{\pi}{2}.$$
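Since the integrand decays slowly, a direct numerical check of the Dirichlet integral needs a large cutoff; a plain midpoint sum out to X = 400 (an arbitrary cutoff, leaving a tail of order 1/X) already lands near π/2:

```python
import math

# Midpoint-rule estimate of the integral of sin(x)/x from 0 to a large
# cutoff X. The neglected tail beyond X is of order 1/X, so the result
# should be close to pi/2.
X, N = 400.0, 400_000
h = X / N
total = h * sum(math.sin((k + 0.5) * h) / ((k + 0.5) * h)
                for k in range(N))
print(total, math.pi / 2)
```

Note that sampling at cell midpoints also sidesteps the removable singularity at x = 0.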