Steven R. Dunbar
Department of Mathematics
203 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466
Topics in
Probability Theory and Stochastic Processes
Steven R. Dunbar
__________________________________________________________________________
Binomial Distribution
_______________________________________________________________________
Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.
_______________________________________________________________________________________________
Mathematically Mature: may contain mathematics beyond calculus with proofs.
_______________________________________________________________________________________________
Consider a family with 5 children. What is the probability of having all five children be boys? How many children must a couple have for at least a $0.95$ probability of at least one girl? What is a proper and general mathematical framework for answering these and similar questions?
_______________________________________________________________________________________________
The binomial distribution with parameters $n$ and $p$ assigns the probabilities $$\mathbb{P}_{n}\left[{S}_{n}=k\right]=\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p}^{k}{\left(1-p\right)}^{n-k}$$ for $k=0,1,2,\dots ,n$.
__________________________________________________________________________
An elementary experiment in this section consists of an experiment with two outcomes. An elementary experiment is also called a Bernoulli trial. Label the outcomes of the elementary experiment $1$, occurring with probability $p$, and $0$, occurring with probability $q$, where $p+q=1$. Often we name $1$ as success and $0$ as failure. For example, a coin toss would be a physical experiment with two outcomes, say with “heads” labeled as success, and “tails” as failure.
A composite experiment consists of repeating an elementary experiment $n$ times. The sample space, denoted ${\Omega}_{n}$, is the set of all possible sequences of $n$ 0’s and 1’s representing all possible outcomes of the composite experiment. We denote an element of ${\Omega}_{n}$ as $\omega =\left({\omega}_{1},\dots ,{\omega}_{n}\right)$, where each ${\omega}_{k}=0$ or $1$. That is, ${\Omega}_{n}={\left\{0,1\right\}}^{n}$. We assign a probability measure ${\mathbb{P}}_{n}\left[\cdot \right]$ on ${\Omega}_{n}$ by multiplying probabilities of each Bernoulli trial in the composite experiment according to the principle of independence. Thus, for $k=1,\dots ,n$,
$${\mathbb{P}}_{n}\left[{\omega}_{k}=1\right]=p,\qquad {\mathbb{P}}_{n}\left[{\omega}_{k}=0\right]=q,$$
and inductively for each $\left({e}_{1},{e}_{2},\dots ,{e}_{n}\right)\in {\left\{1,0\right\}}^{n}$
$$\begin{array}{c}{\mathbb{P}}_{n+1}\left[{\omega}_{n+1}=1\text{ and }\left({\omega}_{1},\dots ,{\omega}_{n}\right)=\left({e}_{1},\dots ,{e}_{n}\right)\right]=\\ {\mathbb{P}}_{1}\left[{\omega}_{n+1}=1\right]\times {\mathbb{P}}_{n}\left[\left({\omega}_{1},\dots ,{\omega}_{n}\right)=\left({e}_{1},\dots ,{e}_{n}\right)\right].\end{array}$$
Additionally, let ${S}_{n}\left(\omega \right)$ be the number of 1’s in $\omega \in {\Omega}_{n}$. Note that ${S}_{n}\left(\omega \right)={\sum}_{k=1}^{n}{\omega}_{k}$. We also say ${S}_{n}\left(\omega \right)$ is the number of successes in the composite experiment. Then
$${\mathbb{P}}_{n}\left[\omega \right]={p}^{{S}_{n}\left(\omega \right)}{q}^{n-{S}_{n}\left(\omega \right)}.$$
We can also define a unified sample space $\Omega $ that is the set of all infinite sequences of 0’s and 1’s. We sometimes write $\Omega ={\left\{0,1\right\}}^{\infty}$. Then ${\Omega}_{n}$ is the projection onto the first $n$ entries of $\Omega $.
A random variable is a function from a set called the sample space to the real numbers $\mathbb{R}$. For example, as a frequently used special case, for $\omega \in \Omega $ let
$${X}_{k}\left(\omega \right)={\omega}_{k};$$
then ${X}_{k}$ is an indicator random variable taking on the value $1$ or $0$. ${X}_{k}$ (the dependence on the sequence $\omega $ is usually suppressed) indicates success or failure at trial $k$. Then as above,
$${S}_{n}={X}_{1}+{X}_{2}+\cdots +{X}_{n}$$
is a random variable indicating the number of successes in a composite experiment.
Proposition 1. The random variable ${S}_{n}$ takes only integer values between $0$ and $n$ inclusive and
$${\mathbb{P}}_{n}\left[{S}_{n}=k\right]=\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p}^{k}{\left(1-p\right)}^{n-k},\qquad k=0,1,\dots ,n.$$
Remark. The notation ${\mathbb{P}}_{n}\left[\cdot \right]$ indicates that we are considering a family of probability measures indexed by $n$ on the sample space $\Omega $.
Proof. From the inductive definition
$${\mathbb{P}}_{1}\left[{\omega}_{1}=1\right]=p,\qquad {\mathbb{P}}_{1}\left[{\omega}_{1}=0\right]=q,$$
and inductively for each $\left({e}_{1},{e}_{2},\dots ,{e}_{n}\right)\in {\left\{1,0\right\}}^{n}$
$$\begin{array}{c}{\mathbb{P}}_{n+1}\left[{\omega}_{n+1}=1\text{ and }\left({\omega}_{1},\dots ,{\omega}_{n}\right)=\left({e}_{1},\dots ,{e}_{n}\right)\right]=\\ {\mathbb{P}}_{1}\left[{\omega}_{n+1}=1\right]\times {\mathbb{P}}_{n}\left[\left({\omega}_{1},\dots ,{\omega}_{n}\right)=\left({e}_{1},\dots ,{e}_{n}\right)\right],\end{array}$$
the probability assigned to an $\omega $ having $k$ 1’s and $n-k$ 0’s is ${p}^{k}{\left(1-p\right)}^{n-k}={p}^{{S}_{n}\left(\omega \right)}{\left(1-p\right)}^{n-{S}_{n}\left(\omega \right)}$. The sample space ${\Omega}_{n}$ has precisely $\left(\genfrac{}{}{0.0pt}{}{n}{k}\right)$ such points. By the additive property of probabilities of disjoint events,
$${\mathbb{P}}_{n}\left[{S}_{n}=k\right]=\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p}^{k}{\left(1-p\right)}^{n-k},$$
and the proof is complete. □
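Proposition 1 can be sanity-checked by brute-force enumeration. The following Python sketch (the choice $n=5$, $p=0.3$ is arbitrary) sums the product-measure weights ${p}^{{S}_{n}\left(\omega \right)}{q}^{n-{S}_{n}\left(\omega \right)}$ over all outcomes with exactly $k$ successes and compares against the binomial formula:

```python
from itertools import product
from math import comb

# Arbitrary small case for the check.
n, p = 5, 0.3
q = 1 - p

for k in range(n + 1):
    # Sum the product-measure weights p^{S_n(w)} q^{n - S_n(w)}
    # over all outcomes w in Omega_n = {0,1}^n with exactly k ones.
    total = sum(p**sum(w) * q**(n - sum(w))
                for w in product((0, 1), repeat=n)
                if sum(w) == k)
    # Compare against the binomial probability C(n,k) p^k q^(n-k).
    assert abs(total - comb(n, k) * p**k * q**(n - k)) < 1e-12
```

The enumeration over ${\left\{0,1\right\}}^{n}$ is exponential in $n$, so this only serves as a check for small cases.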
Proposition 2. If ${X}_{1},{X}_{2},\dots ,{X}_{n}$ are independent, identically distributed random variables with distribution $\mathbb{P}\left[{X}_{i}=1\right]=p$ and $\mathbb{P}\left[{X}_{i}=0\right]=q$, then the sum ${X}_{1}+\cdots +{X}_{n}$ has the distribution of a binomial random variable ${S}_{n}$ with parameters $n$ and $p$.
Proof. First Proof: By the binomial expansion
$${\left(p+q\right)}^{n}=\sum _{k=0}^{n}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p}^{k}{q}^{n-k}.$$
Differentiate with respect to $p$ and multiply both sides of the derivative by $p$:
$$np{\left(p+q\right)}^{n-1}=\sum _{k=0}^{n}k\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p}^{k}{q}^{n-k}.$$
Now choosing $q=1-p$,
$$np=\sum _{k=0}^{n}k\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p}^{k}{\left(1-p\right)}^{n-k}=\mathbb{E}\left[{S}_{n}\right].$$
For the variance, differentiate the binomial expansion with respect to $p$ twice:
$$n\left(n-1\right){\left(p+q\right)}^{n-2}=\sum _{k=0}^{n}k\left(k-1\right)\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p}^{k-2}{q}^{n-k}.$$
Multiply by ${p}^{2}$, substitute $q=1-p$, and expand:
$$n\left(n-1\right){p}^{2}=\sum _{k=0}^{n}k\left(k-1\right)\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p}^{k}{\left(1-p\right)}^{n-k}=\mathbb{E}\left[{S}_{n}^{2}\right]-\mathbb{E}\left[{S}_{n}\right]=\mathbb{E}\left[{S}_{n}^{2}\right]-np.$$
Therefore,
$$\mathrm{Var}\left[{S}_{n}\right]=\mathbb{E}\left[{S}_{n}^{2}\right]-{\left(\mathbb{E}\left[{S}_{n}\right]\right)}^{2}=n\left(n-1\right){p}^{2}+np-{n}^{2}{p}^{2}=np\left(1-p\right).$$
□
Proof. Second proof: Use that the expectation of a sum is the sum of the expectations, applied to ${S}_{n}={X}_{1}+\cdots +{X}_{n}$ with $\mathbb{E}\left[{X}_{i}\right]=p$, so $\mathbb{E}\left[{S}_{n}\right]=np$.
Similarly, use that the variance of a sum of independent random variables is the sum of the variances, applied to ${S}_{n}={X}_{1}+\cdots +{X}_{n}$ with $\mathrm{Var}\left[{X}_{i}\right]=p\left(1-p\right)$, so $\mathrm{Var}\left[{S}_{n}\right]=np\left(1-p\right)$. □
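The values $\mathbb{E}\left[{S}_{n}\right]=np$ and $\mathrm{Var}\left[{S}_{n}\right]=np\left(1-p\right)$ derived above can be confirmed numerically straight from the probability mass function; a short Python sketch (the choice $n=10$, $p=0.25$ is arbitrary):

```python
from math import comb

# Arbitrary parameters for the check.
n, p = 10, 0.25
# Binomial pmf: P[S_n = k] = C(n,k) p^k (1-p)^(n-k).
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

# First and second moments computed from the pmf.
mean = sum(k * pk for k, pk in enumerate(pmf))
var = sum(k**2 * pk for k, pk in enumerate(pmf)) - mean**2

assert abs(mean - n * p) < 1e-12           # E[S_n] = np
assert abs(var - n * p * (1 - p)) < 1e-12  # Var[S_n] = np(1-p)
```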
Example. The following example appeared in the January 20, 2017 “Riddler” puzzler on the website fivethirtyeight.com.
You and I find ourselves indoors one rainy afternoon, with nothing but some loose change in the couch cushions to entertain us. We decide that we’ll take turns flipping a coin, and that the winner will be whoever flips 10 heads first. The winner gets to keep all the change in the couch! Predictably, an enormous argument erupts: We both want to be the one to go first.
What is the first flipper’s advantage? In other words, what percentage of the time does the first flipper win this game?
First solve an easier version of the puzzle in which the first person to flip a head wins. Let the person who flips first be A, and let ${P}_{A}$ be the probability that A wins by being the first to obtain a head. Then ${P}_{A}$ is the sum of the probabilities of the disjoint events that the sequence of flips is H, or TTH, or TTTTH, and so forth:
$$\begin{array}{llll}\hfill {P}_{A}& =\frac{1}{2}+{\left(\frac{1}{2}\right)}^{2}\left(\frac{1}{2}\right)+{\left(\frac{1}{2}\right)}^{4}\left(\frac{1}{2}\right)+\dots \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\frac{1}{2}\left(1+\frac{1}{4}+{\left(\frac{1}{4}\right)}^{2}+\dots \right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\frac{1}{2}\cdot \frac{1}{1-1/4}=\frac{1}{2}\cdot \frac{4}{3}=\frac{2}{3}.\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$
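The geometric series is easy to check numerically; the $j$th term, for the sequence with $2j$ tails followed by a head, has probability ${\left(\frac{1}{2}\right)}^{2j}\cdot \frac{1}{2}$. A quick Python sketch:

```python
# Partial sum of P(A wins): H, TTH, TTTTH, ...
# The j-th term is (1/2)^(2j) * (1/2); the tail beyond j = 60 is negligible.
p_a = sum(0.5**(2 * j) * 0.5 for j in range(60))
assert abs(p_a - 2 / 3) < 1e-12   # agrees with the closed form 2/3
```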
Another way to do this problem is to use first-step analysis from Markov chain theory. The probability ${P}_{A}$ of the first player winning is the probability of winning on the first flip plus the probability of both players each losing their first flip, at which point the game is essentially starting over:
$${P}_{A}=\frac{1}{2}+\frac{1}{2}\cdot \frac{1}{2}\cdot {P}_{A}.$$
Solving, $\frac{3}{4}{P}_{A}=\frac{1}{2}$ or
$${P}_{A}=\frac{2}{3}.$$
Now extend the same reasoning as in the first approach to the case of the first player to get 10 heads winning. The first case in which A wins is to get $9$ heads in flips $1,3,5,\dots ,17$ and the $10$th head on flip $19$, while player B gets anywhere from $0$ to $9$ heads on flips $2,4,6,\dots ,18$. These events have probability $\left(\genfrac{}{}{0.0pt}{}{9}{9}\right){\left(\frac{1}{2}\right)}^{9}\cdot \frac{1}{2}$ and cumulative binomial probability $\sum _{k=0}^{9}\left(\genfrac{}{}{0.0pt}{}{9}{k}\right){\left(\frac{1}{2}\right)}^{9}$ respectively. The next disjoint case in which A wins is to get $9$ heads in flips $1,3,5,\dots ,19$ and the $10$th head on flip $21$, while player B gets anywhere from $0$ to $9$ heads on flips $2,4,6,\dots ,20$. These events have probability $\left(\genfrac{}{}{0.0pt}{}{10}{9}\right){\left(\frac{1}{2}\right)}^{10}\cdot \frac{1}{2}$ and cumulative binomial probability $\sum _{k=0}^{9}\left(\genfrac{}{}{0.0pt}{}{10}{k}\right){\left(\frac{1}{2}\right)}^{10}$ respectively. In general, the disjoint case in which A wins is to get $9$ heads in flips $1,3,5,\dots ,2j-1$ and the $10$th head on flip $2j+1$, while player B gets anywhere from $0$ to $9$ heads on flips $2,4,6,\dots ,2j$. These events have probability $\left(\genfrac{}{}{0.0pt}{}{j}{9}\right){\left(\frac{1}{2}\right)}^{j}\cdot \frac{1}{2}$ and cumulative binomial probability $\sum _{k=0}^{9}\left(\genfrac{}{}{0.0pt}{}{j}{k}\right){\left(\frac{1}{2}\right)}^{j}$ respectively.
Then multiplying the independent probabilities for A and B in each case and adding all these disjoint probabilities gives
$${P}_{A}=\sum _{j=9}^{\infty}\left(\genfrac{}{}{0.0pt}{}{j}{9}\right){\left(\frac{1}{2}\right)}^{j+1}\cdot \sum _{k=0}^{9}\left(\genfrac{}{}{0.0pt}{}{j}{k}\right){\left(\frac{1}{2}\right)}^{j}.$$
There does not seem to be an exact analytic or closed form expression for this probability as in the case of winning with a single head, so we need to approximate it. In the case of winning with 10 heads, ${P}_{A}\approx 0.53278$.
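A partial sum of the series, with the $j$th term $\left(\genfrac{}{}{0.0pt}{}{j}{9}\right){\left(\frac{1}{2}\right)}^{j+1}\cdot \sum _{k=0}^{9}\left(\genfrac{}{}{0.0pt}{}{j}{k}\right){\left(\frac{1}{2}\right)}^{j}$, gives the approximation; a Python sketch (the truncation point $j=300$ is a generous choice, since the terms decay geometrically):

```python
from math import comb

# Partial sum of the series for P_A: the j-th term is
# C(j,9) * (1/2)^(j+1) * sum_{k=0}^{9} C(j,k) * (1/2)^j.
p_a = sum(comb(j, 9) * 0.5**(j + 1)
          * sum(comb(j, k) for k in range(10)) * 0.5**j
          for j in range(9, 300))
print(round(p_a, 5))   # should be close to the quoted 0.53278
```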
Example. The following example appeared in the August 5, 2018 “Riddler” puzzler on the website fivethirtyeight.com.
I flip a coin. If it is heads, I win the game. If it is tails, then I have to flip again, now needing to get two heads in a row to win. If, on my second toss, I get another tails instead of a heads, then I now need three heads in a row to win. If, instead, I get a heads on my second toss (having flipped a tails on the first toss) then I still need to get a second heads to have two heads in a row and win, but if my next toss is a tails (having thus tossed tails-heads-tails), I now need to flip three heads in a row to win, and so on. The more tails you’ve tossed, the more heads in a row you’ll need to win this game.
I may flip a potentially infinite number of times, always needing to flip a series of $N$ heads in a row to win, where $N$ is $T+1$ and $T$ is the number of cumulative tails tossed. I win when I flip the required number of heads in a row.
What are my chances of winning this game?
Some winning sequences of flips are H, THH, TTHHH, THTHHH. From here the winning sequences get more numerous and more complicated. For winning with $4$ heads in a row, the sequences range from the shortest, TTTHHHH, to the longest, THTHHTHHHH. But instead of considering the possibility of winning, consider instead the complementary probability of losing.
Let ${P}_{T}$ be the probability of losing when we have flipped our $T$th tail. Two possibilities can happen from here. First, we could flip the $T+1$ heads in a row needed to win, with probability ${\left(\frac{1}{2}\right)}^{T+1}$. Second, the complementary event is that we flip another tails before getting the necessary heads to win. Then the probability ${P}_{T}$ of losing is
$${P}_{T}=\left(1-{\left(\frac{1}{2}\right)}^{T+1}\right)\cdot {P}_{T+1}.$$
In other words, if we don’t flip the required $T+1$ heads in a row, we are in the situation where we lose with probability ${P}_{T+1}$. Then we can use this to form the product
$${P}_{0}=\prod _{T=0}^{\infty}\left(1-{\left(\frac{1}{2}\right)}^{T+1}\right).$$
The probability of winning this game is $1-{P}_{0}$. This can be obtained by multiplying all the chances throughout the game that we don’t win, and subtracting from $1$:
$$1-{P}_{0}=1-\prod _{T=0}^{\infty}\left(1-{\left(\frac{1}{2}\right)}^{T+1}\right).$$
This expression is approximately $0.711$, so there is about a $71.1\%$ chance of winning. This value is easily approximated and verified with a script; see the exercises.
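A few lines of Python approximate the truncated product (a sketch; the factors tend to $1$ so quickly that sixty of them are plenty for double precision):

```python
# Approximate P_0 = prod_{T>=0} (1 - (1/2)^(T+1)) by truncating the product.
p0 = 1.0
for k in range(1, 61):
    p0 *= 1 - 0.5**k
win = 1 - p0       # probability of winning the game
print(win)         # about 0.711
```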
This is the value $1-\varphi \left(1/2\right)$ using the Euler function
$$\varphi \left(q\right)=\prod _{k=1}^{\infty}\left(1-{q}^{k}\right).$$
This function is a basic example of a relation between combinatorics and analysis. The coefficient $p\left(k\right)$ in the formal power series expansion of $1/\varphi \left(q\right)$ gives the number of partitions of $k$.
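As an illustration of this relation, the series coefficients of $1/\varphi \left(q\right)$ can be computed by multiplying in the factors $1/\left(1-{q}^{j}\right)=1+{q}^{j}+{q}^{2j}+\cdots $ one at a time; a Python sketch:

```python
# Expand 1/phi(q) = prod_{j>=1} 1/(1 - q^j) as a formal power series up to q^N.
N = 20
coeffs = [1] + [0] * N          # the series 1 = q^0
for j in range(1, N + 1):
    # In-place forward pass multiplies the series by 1/(1 - q^j),
    # i.e. allows the part j to appear any number of times.
    for i in range(j, N + 1):
        coeffs[i] += coeffs[i - j]
print(coeffs[:6])   # [1, 1, 2, 3, 5, 7] -- the partition numbers p(0), ..., p(5)
```

Only factors with $j\le N$ affect coefficients up to ${q}^{N}$, so the truncation is exact.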
This section is adapted from: Heads or Tails, by Emmanuel Lesigne, Student Mathematical Library Volume 28, American Mathematical Society, Providence, 2005, Section 1.2 and Chapter 4 [3]. The first example is heavily adapted from the weekly “Riddler” column of January 20, 2017 from the website fivethirtyeight.com. The second example is adapted from the weekly “Riddler” column of August 3, 2018, “Where on Earth is the Riddler?”.
_______________________________________________________________________________________________
The following Octave code is inefficient in the sense that it generates far more trials than it needs. However, writing code that captures exactly the number of flips needed on each trial would probably take more lines, so it is easy to be inefficient here.
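The Octave script itself is left to the exercises; the same brute-force idea, sketched here in Python for illustration (the cap of 200 flips per trial, the seed, and the trial count are arbitrary choices, and a capped, unresolved play is counted as a loss), might look like:

```python
import random

def wins(max_flips=200, rng=random):
    """Simulate one play of the heads-in-a-row game, capped at max_flips."""
    need, run = 1, 0              # need = current target run of heads (T+1)
    for _ in range(max_flips):
        if rng.random() < 0.5:    # heads
            run += 1
            if run == need:
                return True       # achieved the required run: win
        else:                     # tails: target grows, current run resets
            need += 1
            run = 0
    return False                  # treat a capped, undecided play as a loss

random.seed(7)
trials = 100_000
estimate = sum(wins() for _ in range(trials)) / trials
print(estimate)   # close to the exact value 1 - phi(1/2) = 0.7112...
```

The bias from capping is negligible: a play still undecided after 200 flips has accumulated so many tails that its winning probability is vanishingly small.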
_______________________________________________________________________________________________
__________________________________________________________________________
[1] Leo Breiman. Probability. SIAM, 1992.
[2] William Feller. An Introduction to Probability Theory and Its Applications, Volume I. John Wiley and Sons, third edition, 1973.
[3] Emmanuel Lesigne. Heads or Tails: An Introduction to Limit Theorems in Probability, volume 28 of Student Mathematical Library. American Mathematical Society, 2005.
__________________________________________________________________________
Steve Dunbar’s Home Page, http://www.math.unl.edu/~sdunbar1
Email to Steve Dunbar, sdunbar1 at unl dot edu
Last modified: Processed from LaTeX source on September 11, 2018