Suppose we have four identical-looking coins. Three are fair, but one is biased, with a probability of coming up heads of 3/5. We select one of the four coins at random.

1. If we flip the selected coin twice, and it comes up heads both times, what is the probability that our coin is the biased one?

2. If we flip the selected coin three times, and it comes up heads all three times, what, then, is the probability that our coin is the biased one?

3. Generalize: We have *m* fair coins and one identical-looking biased coin with probability *p* of getting heads. If we select one coin at random, and obtain *k* heads in *n* flips, what is the probability P(*m*,*p*,*n*,*k*) that we have the biased coin?

1. Let us denote selecting a biased coin by $B$, selecting an unbiased coin by $U$, and getting two heads in two flips by $H^2$. Then we are looking for $P(B \mid H^2)$, the probability that our coin is biased given that we got two heads in two flips. According to Bayes’ Theorem,

$$P(B \mid H^2) = \frac{P(H^2 \mid B)\,P(B)}{P(H^2)},$$

and the law of total probability tells us that

$$P(H^2) = P(H^2 \mid B)\,P(B) + P(H^2 \mid U)\,P(U).$$

Now, we know that $P(B) = \frac{1}{4}$ and $P(U) = \frac{3}{4}$. The probability of getting two heads in a row with the biased coin is $P(H^2 \mid B) = \left(\frac{3}{5}\right)^2 = \frac{9}{25}$, and the probability with an unbiased coin is $P(H^2 \mid U) = \left(\frac{1}{2}\right)^2 = \frac{1}{4}$.

Thus, we see that the total probability of getting two heads in a row is given by the law of total probability as:

$$P(H^2) = \frac{9}{25}\cdot\frac{1}{4} + \frac{1}{4}\cdot\frac{3}{4} = \frac{9}{100} + \frac{3}{16} = \frac{111}{400}.$$

Thus, Bayes’ theorem tells us:

$$P(B \mid H^2) = \frac{P(H^2 \mid B)\,P(B)}{P(H^2)} = \frac{9/100}{111/400} = \frac{36}{111} = \frac{12}{37} \approx 0.324.$$

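The arithmetic above is easy to verify with exact rational arithmetic (a small Python sketch; the variable names are ours):

```python
from fractions import Fraction

# Priors: one biased coin among four
prior_b = Fraction(1, 4)          # P(B)
prior_u = Fraction(3, 4)          # P(U)

# Likelihoods of two heads in two flips
like_b = Fraction(3, 5) ** 2      # P(H^2 | B) = 9/25
like_u = Fraction(1, 2) ** 2      # P(H^2 | U) = 1/4

total = like_b * prior_b + like_u * prior_u     # P(H^2) = 111/400
posterior = like_b * prior_b / total            # P(B | H^2)
print(posterior)                                # 12/37
```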
2. Now, with $H^3$ being the outcome of getting three heads in three flips, we see that the probability of $H^3$ with the biased coin is $P(H^3 \mid B) = \left(\frac{3}{5}\right)^3 = \frac{27}{125}$, and the probability with an unbiased coin is $P(H^3 \mid U) = \left(\frac{1}{2}\right)^3 = \frac{1}{8}$.

Thus the total probability of getting three heads in a row is

$$P(H^3) = \frac{27}{125}\cdot\frac{1}{4} + \frac{1}{8}\cdot\frac{3}{4} = \frac{27}{500} + \frac{3}{32} = \frac{591}{4000},$$

and the probability that our coin is biased is

$$P(B \mid H^3) = \frac{27/500}{591/4000} = \frac{216}{591} = \frac{72}{197} \approx 0.365.$$

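As in part (1), this fraction can be checked exactly (Python sketch; variable names are ours):

```python
from fractions import Fraction

prior_b, prior_u = Fraction(1, 4), Fraction(3, 4)
like_b = Fraction(3, 5) ** 3      # P(H^3 | B) = 27/125
like_u = Fraction(1, 2) ** 3      # P(H^3 | U) = 1/8

posterior = like_b * prior_b / (like_b * prior_b + like_u * prior_u)
print(posterior)                  # 72/197
```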
3. Let $H[n,k]$ denote the outcome where we have $k$ heads in $n$ flips. The probability of $k$ successes in $n$ successive Bernoulli trials, each with probability $p$, is given by the binomial distribution as:

$$P(k; n, p) = \binom{n}{k} p^k (1-p)^{n-k}.$$

So, we see that the probability of $H[n,k]$ given that our coin is biased is

$$P(H[n,k] \mid B) = \binom{n}{k} p^k (1-p)^{n-k},$$

and given an unbiased coin,

$$P(H[n,k] \mid U) = \binom{n}{k} \left(\frac{1}{2}\right)^n.$$

Now, with $P(B) = \frac{1}{m+1}$ and $P(U) = \frac{m}{m+1}$, we see that the total probability of getting $k$ heads in $n$ flips is

$$P(H[n,k]) = \frac{1}{m+1}\binom{n}{k} p^k (1-p)^{n-k} + \frac{m}{m+1}\binom{n}{k}\left(\frac{1}{2}\right)^n.$$

Plugging these into Bayes’ theorem, we obtain:

$$P(m,p,n,k) = \frac{P(H[n,k] \mid B)\,P(B)}{P(H[n,k])} = \frac{p^k (1-p)^{n-k}}{p^k (1-p)^{n-k} + m\left(\frac{1}{2}\right)^n} = \frac{1}{1 + m\,\dfrac{(1/2)^n}{p^k (1-p)^{n-k}}}.$$

(Or, to put it in a form with more visible parallelism between the probability $p$ of the biased coin and the $1/2$ probability of the fair coins,

$$P(m,p,n,k) = \frac{1}{1 + m\left(\frac{1/2}{p}\right)^{k}\left(\frac{1/2}{1-p}\right)^{n-k}}.$$

)
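The general formula is simple enough to code up directly; here is a minimal Python sketch (the function name `prob_biased` is ours), checked against the answers to parts (1) and (2). Note that the binomial coefficient $\binom{n}{k}$ cancels between numerator and denominator, so it never needs to be computed:

```python
from fractions import Fraction

def prob_biased(m, p, n, k):
    """P(m, p, n, k): probability that the selected coin is the biased one,
    given k heads in n flips, with m fair coins and heads-probability p.
    The binomial coefficient C(n, k) cancels, so it is omitted."""
    p = Fraction(p)
    biased = p**k * (1 - p)**(n - k)      # p^k (1-p)^(n-k)
    fair = m * Fraction(1, 2)**n          # m (1/2)^n
    return biased / (biased + fair)

print(prob_biased(3, Fraction(3, 5), 2, 2))   # 12/37, as in part (1)
print(prob_biased(3, Fraction(3, 5), 3, 3))   # 72/197, as in part (2)
```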

Using our original $m=3$, $p=3/5$ example, we obtain

$$P\!\left(3, \tfrac{3}{5}, n, k\right) = \frac{(3/5)^k\,(2/5)^{n-k}}{(3/5)^k\,(2/5)^{n-k} + 3\left(\frac{1}{2}\right)^n}.$$

So, if we were to flip our coin 100 times, and we get 59 heads, then the probability that we have the biased coin is ≈0.625. If instead we get 55 heads, the probability drops to ≈0.247, and if we get 51 heads, then the probability that we have the biased coin is only ≈0.061.
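These figures can be reproduced directly with exact rational arithmetic, which sidesteps any float-underflow worries for the tiny terms involved (Python sketch; the helper name is ours):

```python
from fractions import Fraction

def prob_biased_100(k):
    """P(3, 3/5, 100, k), computed exactly with rationals."""
    biased = Fraction(3, 5)**k * Fraction(2, 5)**(100 - k)
    fair = 3 * Fraction(1, 2)**100
    return biased / (biased + fair)

for k in (59, 55, 51):
    # yields approximately 0.625, 0.247, 0.061, matching the text
    print(k, round(float(prob_biased_100(k)), 3))
```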

Considering our getting all heads, as in (1) and (2), we have in the case $k=n$ that

$$P\!\left(3, \tfrac{3}{5}, n, n\right) = \frac{(3/5)^n}{(3/5)^n + 3\,(1/2)^n} = \frac{(6/5)^n}{(6/5)^n + 3}.$$

We can, by solving for $n$, see that to obtain $P \ge \frac{1}{2}$, we must have $n \ge 7$; to get $P \ge \frac{3}{4}$ requires $n \ge 13$; and to get $P \ge \frac{9}{10}$ requires $n \ge 19$ flips, all heads!
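The thresholds can also be found by brute force rather than by solving for $n$ (Python sketch; `min_flips` is our name):

```python
from fractions import Fraction

def min_flips(target):
    """Smallest n such that n heads in n flips gives P(3, 3/5, n, n) >= target."""
    n = 1
    while True:
        r = Fraction(6, 5) ** n           # (6/5)^n
        if r / (r + 3) >= target:         # P(3, 3/5, n, n) = (6/5)^n / ((6/5)^n + 3)
            return n
        n += 1

print([min_flips(t) for t in (Fraction(1, 2), Fraction(3, 4), Fraction(9, 10))])
# [7, 13, 19]
```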

One final note: when we divided both the numerator and denominator by $p^k(1-p)^{n-k}$, we were assuming that $0 < p < 1$, so that this term is not zero. If the coin is fixed so as to always come up heads, that is to say, if $p=1$, then obviously if our coin ever comes up tails, we have a fair coin: $P(m,1,n,k) = 0$ for any $k < n$. If, however, we have $n$ flips all heads, then we can’t be sure we have the biased coin (rather than an unlikely “streak” on one of the fair coins):

$$P(m,1,n,n) = \frac{1}{1 + m\,(1/2)^n} = \frac{2^n}{2^n + m}.$$

For example, if we have 99 fair coins, one “fixed” coin, and get ten flips all heads, we still have a $1 - P(99,1,10,10) \approx 8.8\%$ chance of having a fair coin.
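A last quick check of this example (Python; variable names are ours):

```python
from fractions import Fraction

m, n = 99, 10                          # 99 fair coins, 10 heads in a row
p_fixed = Fraction(2**n, 2**n + m)     # P(99, 1, 10, 10) = 1024/1123
print(1 - p_fixed)                     # 99/1123, about 8.8%
```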

Tags: Bayes' Theorem, Coin Flip, Conditional Probability, Probability, Total Probability
