Posts Tagged ‘Probability’

Monday Math 160

September 8, 2014

Suppose we have four identical-looking coins. Three are fair, but one is biased, with a probability of coming up heads of 3/5. We select one of the four coins at random.

1. If we flip the selected coin twice, and it comes up heads both times, what is the probability that our coin is the biased one?

2. If we flip the selected coin three times, and it comes up heads all three times, what, then, is the probability that our coin is the biased one?

3. Generalize: We have m fair coins and one identical-looking biased coin with probability p of getting heads. If we select one coin at random, and obtain k heads in n flips, what is the probability P(m,p,n,k) that we have the biased coin?

Solution:
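(The worked solution is behind the link above; as a quick numerical sanity check, here is a short Python sketch of my own applying Bayes' theorem directly. The function name and structure are mine, not part of the original post.)

```python
from math import comb

def prob_biased(m, p, n, k):
    """P(biased coin | k heads in n flips), with m fair coins and one biased coin."""
    # Likelihood of the observed flips for each type of coin:
    biased = comb(n, k) * p**k * (1 - p)**(n - k)
    fair = comb(n, k) * 0.5**n
    # Bayes' theorem, with a uniform prior of 1/(m+1) on each coin:
    return biased / (biased + m * fair)

print(prob_biased(3, 3/5, 2, 2))  # part 1: two heads in two flips
print(prob_biased(3, 3/5, 3, 3))  # part 2: three heads in three flips
```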


Monday Math 149

January 3, 2011

What is the probability that two independently randomly chosen integers are mutually prime (have no common factor greater than 1)? The probability for four random integers? For n integers in general?
Solution:
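(The solution is in the linked post; in the meantime, here is a quick Monte Carlo sketch of my own. It samples uniformly from 1 to 10^9 as a stand-in for "random integers," and checks whether the collective gcd is 1; the estimates should land near the classical answers 1/ζ(2) = 6/π² ≈ 0.6079 and 1/ζ(4) = 90/π⁴ ≈ 0.9239.)

```python
import random
from math import gcd
from functools import reduce

def coprime_fraction(count, trials=100_000, limit=10**9):
    """Estimate the probability that `count` random integers have gcd 1."""
    hits = 0
    for _ in range(trials):
        nums = [random.randint(1, limit) for _ in range(count)]
        if reduce(gcd, nums) == 1:
            hits += 1
    return hits / trials

print(coprime_fraction(2))  # ≈ 6/pi^2  ≈ 0.6079
print(coprime_fraction(4))  # ≈ 90/pi^4 ≈ 0.9239
```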

Monday Math 145

December 6, 2010

Here’s a problem I’ve seen in a number of places: Consider a bowl of spaghetti with N strands, all tangled. We randomly select two ends from the bowl, and tie them together. We then randomly draw two more ends, and tie them together; this repeats until the last two free ends are joined. This process will have created a number of closed loops; specifically, a minimum of one loop and a maximum of N loops. What, then, is the expected number of loops?
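(The full derivation is in the linked post; here is a simulation sketch of my own for comparison. It uses the observation that when s strands remain, the second end we draw closes a loop exactly when it is the other end of the strand we just picked, which happens with probability 1/(2s−1).)

```python
import random

def simulate_loops(n, trials=20_000):
    """Average number of closed loops formed from n tangled strands."""
    total = 0
    for _ in range(trials):
        strands, loops = n, 0
        while strands:
            # 2*strands free ends remain; after picking one end, exactly one
            # of the other 2*strands - 1 ends closes a loop.
            if random.randrange(2 * strands - 1) == 0:
                loops += 1
            strands -= 1  # either way, one fewer strand/chain remains
        total += loops
    return total / trials

print(simulate_loops(100))
print(sum(1 / (2 * s - 1) for s in range(1, 101)))  # the sum this argument suggests
```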

Monday Math 125

June 28, 2010

Consider two bins, each with N items, with N a large number. Let us randomly select one of the two bins, with equal probability (such as by flipping a fair coin), and remove an item from the selected bin. We repeat this procedure of removing items from the bins by random selection until one bin is empty. What, then, is the expected value n of the number of items remaining in the non-empty bin?
Solution:
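(Again, the worked answer is behind the link; here is a small Monte Carlo sketch of my own to compare against. N is kept modest so pure Python stays fast.)

```python
import random

def mean_remaining(n, trials=5_000):
    """Average number of items left in the non-empty bin, starting at n each."""
    total = 0
    for _ in range(trials):
        a = b = n
        while a and b:            # draw from a random bin until one is empty
            if random.random() < 0.5:
                a -= 1
            else:
                b -= 1
        total += a + b            # one of the two is now zero
    return total / trials

print(mean_remaining(400))  # compare with the closed form in the linked solution
```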

Physics Friday 57

January 30, 2009

Consider a single-particle wavefunction (in the position basis) $\psi(\mathbf{r},t)$. Then for a volume V, the probability P that the particle will be measured to be in V is
$$P=\int_V|\psi|^2\,dV.$$
The time derivative of this is:
$$\frac{dP}{dt}=\int_V\frac{\partial}{\partial t}\left(|\psi|^2\right)dV.$$

Now, we recall that $|\psi|^2=\psi^*\psi$, where * indicates the complex conjugate. Thus, via the product rule,
$$\frac{\partial}{\partial t}(\psi^*\psi)=\psi^*\frac{\partial\psi}{\partial t}+\psi\frac{\partial\psi^*}{\partial t}.$$
So
$$\frac{dP}{dt}=\int_V\left(\psi^*\frac{\partial\psi}{\partial t}+\psi\frac{\partial\psi^*}{\partial t}\right)dV.$$
Now, consider the time-dependent Schrödinger equation:
$$i\hbar\frac{\partial\psi}{\partial t}=-\frac{\hbar^2}{2m}\nabla^2\psi+V\psi.$$
Solving for the time derivative, we get
$$\frac{\partial\psi}{\partial t}=\frac{i\hbar}{2m}\nabla^2\psi-\frac{i}{\hbar}V\psi.$$
And taking the complex conjugate of that:
$$\frac{\partial\psi^*}{\partial t}=-\frac{i\hbar}{2m}\nabla^2\psi^*+\frac{i}{\hbar}V\psi^*.$$
(Note that the potential is real).
Thus
$$\psi^*\frac{\partial\psi}{\partial t}=\frac{i\hbar}{2m}\psi^*\nabla^2\psi-\frac{i}{\hbar}V\psi^*\psi,$$
and
$$\psi\frac{\partial\psi^*}{\partial t}=-\frac{i\hbar}{2m}\psi\nabla^2\psi^*+\frac{i}{\hbar}V\psi\psi^*.$$
Adding these, we get
$$\frac{\partial}{\partial t}(\psi^*\psi)=\frac{i\hbar}{2m}\left(\psi^*\nabla^2\psi-\psi\nabla^2\psi^*\right)$$
(the potential energy terms cancel).
So
$$\frac{dP}{dt}=\frac{i\hbar}{2m}\int_V\left(\psi^*\nabla^2\psi-\psi\nabla^2\psi^*\right)dV.$$

Here, we need to use some vector calculus; namely, the product rule for the divergence operator $\nabla\cdot$: for scalar-valued function φ and vector field $\mathbf{F}$, the divergence of their product is given by
$$\nabla\cdot(\varphi\mathbf{F})=(\nabla\varphi)\cdot\mathbf{F}+\varphi(\nabla\cdot\mathbf{F}).$$
Now, if our vector field is itself the gradient of a scalar function ψ, then
$$\nabla\cdot(\varphi\nabla\psi)=\nabla\varphi\cdot\nabla\psi+\varphi\nabla^2\psi.$$
Swapping φ and ψ,
$$\nabla\cdot(\psi\nabla\varphi)=\nabla\psi\cdot\nabla\varphi+\psi\nabla^2\varphi.$$
And taking the difference, we find
$$\nabla\cdot(\varphi\nabla\psi-\psi\nabla\varphi)=\varphi\nabla^2\psi-\psi\nabla^2\varphi.$$
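(This identity is easy to machine-check. Here is a small SymPy sketch of my own, verifying it component-by-component in Cartesian coordinates:)

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
phi = sp.Function('phi')(x, y, z)
psi = sp.Function('psi')(x, y, z)

def grad(f):
    """Gradient as a list of partial derivatives."""
    return [sp.diff(f, v) for v in (x, y, z)]

def div(F):
    """Divergence of a 3-component vector field."""
    return sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z)))

lhs = div([phi * gp - psi * gf for gp, gf in zip(grad(psi), grad(phi))])
rhs = phi * div(grad(psi)) - psi * div(grad(phi))
print(sp.simplify(lhs - rhs))  # prints 0
```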
Putting in our wavefunction and its conjugate for φ and ψ,
$$\psi^*\nabla^2\psi-\psi\nabla^2\psi^*=\nabla\cdot(\psi^*\nabla\psi-\psi\nabla\psi^*).$$
So
$$\frac{dP}{dt}=\frac{i\hbar}{2m}\int_V\nabla\cdot(\psi^*\nabla\psi-\psi\nabla\psi^*)\,dV=-\int_V\nabla\cdot\left[\frac{\hbar}{2mi}(\psi^*\nabla\psi-\psi\nabla\psi^*)\right]dV.$$
Let us call the vector-valued function that is the argument of the divergence in the integrand $\mathbf{j}$:
$$\mathbf{j}=\frac{\hbar}{2mi}(\psi^*\nabla\psi-\psi\nabla\psi^*).$$
Then
$$\frac{dP}{dt}=-\int_V\nabla\cdot\mathbf{j}\,dV.$$
Now, recalling that the probability P is the integral over V of the probability density $\rho=\psi^*\psi=|\psi|^2$, in terms of the probability density we have
$$\int_V\frac{\partial\rho}{\partial t}\,dV=-\int_V\nabla\cdot\mathbf{j}\,dV,$$
and as this holds for all V, the integrand must vanish:
$$\frac{\partial\rho}{\partial t}+\nabla\cdot\mathbf{j}=0.$$
Now, this should look familiar to some of you. For any conserved quantity ρ with a flux given by the vector function $\mathbf{j}$, and no sources or sinks, the quantity and flux obey the continuity equation
$$\frac{\partial\rho}{\partial t}+\nabla\cdot\mathbf{j}=0.$$
[For example, if ρ is electric charge density, then the conservation of electric charge gives
$$\frac{\partial\rho}{\partial t}+\nabla\cdot\mathbf{J}=0,$$
where $\mathbf{J}$ is the electric current density.]
Now, as the probability density for the particle must always integrate over all space to unity, we similarly expect it to be a conserved quantity. Thus the above result is our continuity equation for quantum probability, and so $\mathbf{j}$ as defined above is our probability current (or probability flux). It has units of probability/(area × time), or probability density times velocity.

Note that the continuity equation tells us that for a stationary state, the divergence of the probability current must be zero. However, this does not mean that the current itself must be zero. Consider the three-dimensional plane wave $\psi(\mathbf{r},t)=Ae^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}$.
Then
$$\nabla\psi=i\mathbf{k}\psi\quad\text{and}\quad\nabla\psi^*=-i\mathbf{k}\psi^*.$$
So
$$\psi^*\nabla\psi-\psi\nabla\psi^*=2i\mathbf{k}\,\psi^*\psi=2i\mathbf{k}|A|^2.$$
So
$$\mathbf{j}=\frac{\hbar}{2mi}\left(2i\mathbf{k}|A|^2\right)=\frac{\hbar\mathbf{k}}{m}|A|^2.$$
Note that as $\frac{\hbar\mathbf{k}}{m}=\frac{\mathbf{p}}{m}=\mathbf{v}$ is the particle's velocity, the probability current of the plane wave, a stationary state, is the amplitude squared times the particle velocity.
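(The plane-wave result is easy to check numerically. Here is a small sketch of my own, in one dimension with ℏ = m = 1, computing j from finite differences and comparing it with ℏk|A|²/m; the parameter values are arbitrary choices.)

```python
import numpy as np

hbar, m, k, A = 1.0, 1.0, 2.0, 0.5
x = np.linspace(0.0, 10.0, 100_001)
psi = A * np.exp(1j * k * x)

# j = (hbar/2mi)(psi* dpsi/dx - psi dpsi*/dx), with a finite-difference derivative
dpsi = np.gradient(psi, x)
j = (hbar / (2j * m)) * (np.conj(psi) * dpsi - psi * np.conj(dpsi))

print(j.real[50_000])       # current at the midpoint
print(hbar * k * A**2 / m)  # expected value: 0.5
```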

Lastly, consider a wavefunction which has the same complex phase for all locations at any given time; that is, $\psi(\mathbf{r},t)=f(\mathbf{r},t)e^{i\theta(t)}$, where $f(\mathbf{r},t)$ is a real-valued function. Then $\nabla\psi=e^{i\theta}\nabla f$ and $\nabla\psi^*=e^{-i\theta}\nabla f$, so $\psi^*\nabla\psi-\psi\nabla\psi^*=f\nabla f-f\nabla f=0$, and we see the probability current is zero for such wavefunctions (examples of which are the stationary states of the particle in a one-dimensional box).

Physics Friday 45

November 7, 2008

Quantum Mechanics and Momentum
Part 2: Wavefunctions, Operators, and Observables

In quantum mechanics, the state of a particle or system is represented by a wavefunction, which is a complex-valued function over some space. In more particular, mathematical terms, the state of a quantum system is a vector in some complex Hilbert space.
Usually, we represent the wavefunction as a function over some space, most often our standard position space. For one dimension, we have ψ(x). The probability density is given by the squared norm of the wavefunction; the probability $P_{ab}$ of finding the particle's position in the interval (a,b) is $P_{ab}=\int_a^b|\psi(x)|^2\,dx$. We see that to have a valid wavefunction, the probability for all of the space must be unity: $\int_{-\infty}^{\infty}|\psi(x)|^2\,dx=1$; such a wavefunction is said to be normalized.
We have a similar situation for spaces of higher dimensionality; with $\mathbf{r}$ the n-dimensional position vector, the probability of the particle being in a region V is
$$P=\int_V|\psi(\mathbf{r})|^2\,d^n r,$$
and normalization requires that
$$\int|\psi(\mathbf{r})|^2\,d^n r=1.$$
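(As a concrete illustration of normalization, here is a short numerical sketch of my own, using a one-dimensional Gaussian as the example state; the prefactor $(\pi\sigma^2)^{-1/4}$ is what makes the integral come out to one.)

```python
import numpy as np

sigma = 1.3
x = np.linspace(-20.0, 20.0, 200_001)
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

# Integrate the probability density over (effectively) all of space:
print(np.trapz(np.abs(psi)**2, x))  # ≈ 1.0, so psi is normalized
```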
Note that the space over which the function is defined need not be physical space. For example, one can define a wavefunction ψ(p) over momentum space. In some situations, the wavefunction can be a vector of countably infinite dimension, or even a vector of finite dimension; the wavefunction for a spin-1/2 particle (ignoring spatial freedom) can be represented as a two-dimensional complex vector (see here).

Of particular importance is how we treat observable quantities of a system. Each observable property corresponds to a linear operator on the wavefunction, whose eigenvalues are the allowed values of the observable, with the corresponding eigenfunction for each value being the wavefunction for the state where the observable has that value. In other words, if we have an observable corresponding to the operator Â, then the observable has value a when the wavefunction for the system is ψa, where Âψa=a.
In particular, the operators corresponding to physical quantities are hermitian. This, amongst other things, ensures that the eigenvalues are real.
(Note that for an operator with discrete eigenvalues, eigenvectors corresponding to different eigenvalues are orthogonal in our vector space. Observables with a continuum of allowed values, however, give rise to eigenfunctions that are Dirac delta distributions, and thus not in the Hilbert space.)

For example, if our wavefunctions are defined on a one-dimensional position space (ψ(x)), then the observable corresponding to the position is just multiplication by the position variable x:
$$\hat{x}\psi(x)=x\psi(x).$$

One last important point to take away is the expectation value of an observable. In classical probability, the expected value of some function g of a random variable X is given by $E[g(X)]=\sum_i g(x_i)P_i$ for a discrete random variable with probabilities $P_i$, and by $E[g(X)]=\int g(x)P(x)\,dx$ for a continuous random variable with probability density function P(x).
Similarly, in quantum mechanics, we define:
$$\langle A\rangle=\int\psi^*\hat{A}\psi\,dx,$$
where * represents the complex conjugate. Note that for the position operator $\hat{x}$, we see
$$\langle x\rangle=\int\psi^*x\psi\,dx=\int x|\psi(x)|^2\,dx,$$
which, given that $|\psi(x)|^2$ gives our probability density, matches our standard definition.
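(As a quick worked example of my own: for the ground state of a particle in a one-dimensional box of width L, $\psi(x)=\sqrt{2/L}\sin(\pi x/L)$, symmetry gives $\langle x\rangle=L/2$, which a direct numerical integration confirms.)

```python
import numpy as np

L = 2.0
x = np.linspace(0.0, L, 100_001)
psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)    # ground state; real-valued

exp_x = np.trapz(np.conj(psi) * x * psi, x).real  # <x> = integral of psi* x psi
print(exp_x)  # ≈ L/2 = 1.0
```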

An example of these concepts in action can be seen in the proof of the Heisenberg uncertainty principle in this past post.

Monday Math 39

September 29, 2008

Suppose we have a group of n people (n≥2). The name of each person is written on a separate slip of paper. These n slips are put in a box, and each person then draws a slip from the box. What, then, is the probability that nobody draws their own name? What happens to this probability when n becomes large?
Solution:
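(The analysis is in the linked solution; here is a quick simulation sketch of my own, which already shows the probability settling near 1/e ≈ 0.3679 for modest n.)

```python
import math
import random

def no_self_draw_fraction(n, trials=200_000):
    """Fraction of random drawings in which nobody draws their own name."""
    hits = 0
    slips = list(range(n))
    for _ in range(trials):
        random.shuffle(slips)
        if all(slip != person for person, slip in enumerate(slips)):
            hits += 1
    return hits / trials

print(no_self_draw_fraction(10), 1 / math.e)
```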

Monday Math 31

August 4, 2008

The binomial distribution, the discrete probability distribution of obtaining n successes out of N Bernoulli trials, each with probability p of success, is
$$P(n)=\binom{N}{n}p^n(1-p)^{N-n}.$$
We see that the probabilities sum to unity due to the binomial theorem:
$$\sum_{n=0}^{N}\binom{N}{n}p^n(1-p)^{N-n}=\left(p+(1-p)\right)^N=1.$$
Now, consider the expected number of successes:
$$\langle n\rangle=\sum_{n=0}^{N}n\binom{N}{n}p^n(1-p)^{N-n}.$$
Now, note that the n=0 term is zero, and may be dropped; using the identity $n\binom{N}{n}=N\binom{N-1}{n-1}$, we get
$$\langle n\rangle=\sum_{n=1}^{N}n\binom{N}{n}p^n(1-p)^{N-n}=Np\sum_{n=1}^{N}\binom{N-1}{n-1}p^{n-1}(1-p)^{N-n}.$$
Shifting the index by defining k=n−1 and M=N−1, we have
$$\langle n\rangle=Np\sum_{k=0}^{M}\binom{M}{k}p^k(1-p)^{M-k}=Np\left(p+(1-p)\right)^M=Np.$$
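(A quick numerical check of my own, using NumPy's binomial sampler: the sample mean should sit right at Np.)

```python
import numpy as np

N, p = 50, 0.3
samples = np.random.binomial(N, p, size=1_000_000)
print(samples.mean(), N * p)  # both ≈ 15
```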

Now, suppose we rewrite the distribution in terms of the expected number of successes λ=Np in place of p:
$$P(n)=\binom{N}{n}\left(\frac{\lambda}{N}\right)^n\left(1-\frac{\lambda}{N}\right)^{N-n}.$$

Let us consider what happens if we take the limit as the sample size $N\to\infty$, while holding λ fixed:
$$P(n)=\frac{N!}{(N-n)!\,N^n}\cdot\frac{\lambda^n}{n!}\cdot\left(1-\frac{\lambda}{N}\right)^N\left(1-\frac{\lambda}{N}\right)^{-n}.$$
Now, in that first fraction, the numerator has n terms:
$$\frac{N!}{(N-n)!}=N(N-1)(N-2)\cdots(N-n+1),$$
and so, dividing by the $N^n$ in the denominator, each of the n resulting factors $\frac{N-i}{N}$ approaches unity as N increases without bound. The next fraction, $\frac{\lambda^n}{n!}$, is independent of N.
In the last term, $\left(1-\frac{\lambda}{N}\right)^{-n}$, the base of the exponent approaches unity, while the exponent is fixed, so that term in turn approaches unity.
Lastly, we recall that
$$\lim_{N\to\infty}\left(1+\frac{x}{N}\right)^N=e^x,$$
and so we see that the penultimate term fits this form with x = −λ, and so
$$\lim_{N\to\infty}P(n)=\frac{\lambda^n}{n!}e^{-\lambda}.$$
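(A quick numerical illustration of my own: holding λ = 4 fixed and letting N grow, the binomial probability of n = 6 successes converges to the Poisson value.)

```python
from math import comb, exp, factorial

lam, n = 4.0, 6
for N in (10, 100, 1_000, 10_000):
    p = lam / N
    print(N, comb(N, n) * p**n * (1 - p)**(N - n))   # binomial P(n)
print('Poisson:', lam**n * exp(-lam) / factorial(n))  # limit ≈ 0.1042
```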

This distribution is known as the Poisson distribution, which gives the count of events from a Poisson process, and which models a number of real-world processes, such as radioactive decay or shot noise.