From Pascal's Triangle to the Bell-shaped Curve

Posted May 2010. Tony Phillips
Pascal's Triangle

Blaise Pascal (1623-1662) did not invent his triangle. It was at least 500 years old when he wrote it down, in 1654 or just after, in his Traité du triangle arithmétique. Its entries C(n, k) appear in the expansion of (a + b)^{n} when like powers are grouped together, giving C(n, 0)a^{n} + C(n, 1)a^{n-1}b + C(n, 2)a^{n-2}b^{2} + ... + C(n, n)b^{n}; hence the name binomial coefficients. The triangle now bears Pascal's name mainly because he was the first to investigate its properties systematically. For example, I believe that he discovered the formula for calculating C(n, k) directly from n and k, without working recursively through the table. Pascal also pioneered the use of the binomial coefficients in the analysis of games of chance, launching modern probability theory. In this column we will explore this interpretation of the coefficients, and how they are related to the normal distribution represented by the ubiquitous "bell-shaped curve."
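Pascal's rule for generating the triangle and the binomial theorem can be checked against each other numerically. Here is a minimal Python sketch (the function names are my own, for illustration) that builds rows by the recursion C(n, k) = C(n-1, k-1) + C(n-1, k) and compares them with the coefficients of (a + b)^{n} obtained by repeatedly multiplying out by (a + b):

```python
from math import comb

def pascal_row(n):
    """Row n of Pascal's triangle, built by the recursion
    C(n, k) = C(n-1, k-1) + C(n-1, k)."""
    row = [1]
    for _ in range(n):
        row = [1] + [row[i] + row[i + 1] for i in range(len(row) - 1)] + [1]
    return row

def binomial_expansion_coeffs(n):
    """Coefficients of (a + b)^n, by repeated polynomial
    multiplication by (a + b) -- i.e. convolution with [1, 1]."""
    coeffs = [1]
    for _ in range(n):
        coeffs = [x + y for x, y in zip([0] + coeffs, coeffs + [0])]
    return coeffs

for n in range(8):
    row = pascal_row(n)
    assert row == binomial_expansion_coeffs(n)
    assert row == [comb(n, k) for k in range(n + 1)]   # closed form agrees
```

The three computations agree row by row, which is exactly the statement that the triangle's entries are the binomial coefficients.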
Factorial representation of binomial coefficients

The entries in Pascal's triangle can be calculated recursively using the relation described in the caption above; in our notation this reads C(n, k) = C(n-1, k-1) + C(n-1, k). Unwinding the recursion leads to the factorial formula C(n, k) = n! / (k! (n-k)!), which gives each entry directly from n and k.

The shape of the rows in Pascal's triangle

The numbers in Pascal's triangle grow exponentially fast as we move down the middle of the table: the element C(2k, k) in an even-numbered row is approximately 2^{2k} / (πk)^{1/2}. The following graphs, generated by Excel, give C(n, k) plotted against k for n = 4, 16, 36, 64 and 100. They show both the growth of the central elements and a general pattern to the distribution of values, which suggests that a linear rescaling could make the middle portions of the graphs converge to a limit.

[Graphs of C(4, k), C(16, k), C(36, k), C(64, k) and C(100, k).]

As k varies, the maximum value of C(n, k) occurs at k = n/2 (n is even in these examples). For the graphs of C(n, k) to be compared as n goes to infinity their centers must be lined up; otherwise they would drift off to infinity. Our first step in uniformizing the rows is to shift the graph of C(n, k) leftward by n/2; the centers will then all be at 0. The estimate mentioned above for the central elements (it comes from Stirling's formula) suggests that for uniformity the vertical dimension in the plot of C(n, k) should be scaled down by 2^{n} / n^{1/2}. Now 2^{n} = (1+1)^{n} = C(n, 0) + C(n, 1) + ... + C(n, n), which approximates the area under the graph; to keep the areas constant (and equal to 1) in the limit we stretch the graphs horizontally by a factor of n^{1/2}. With these translations and rescalings, the convergence of the central portions of the graphs becomes graphically evident:

[Graphs of the normalized rows C(n, k)·n^{1/2} / 2^{n}, plotted against (k - n/2) / n^{1/2}: C(4, k)·2 / 2^{4} against (k-2)/2; C(16, k)·4 / 2^{16} against (k-8)/4; C(36, k)·6 / 2^{36} against (k-18)/6; C(64, k)·8 / 2^{64} against (k-32)/8; C(100, k)·10 / 2^{100} against (k-50)/10.]
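The convergence can also be seen numerically rather than graphically. The following Python sketch (my own code, not part of the column) evaluates the normalized values C(n, k)·n^{1/2} / 2^{n} at the points x = (k - n/2) / n^{1/2} for n = 100, and compares them with the normal density of mean 0 and standard deviation 1/2, f(x) = (2/(2π)^{1/2}) e^{-2x^{2}}, which the column identifies in the next section; already at n = 100 the two agree to within about 0.01.

```python
import math

def normalized_row(n):
    """(x, y) pairs for row n after the column's rescaling:
    x = (k - n/2) / sqrt(n),  y = C(n, k) * sqrt(n) / 2^n."""
    s = math.sqrt(n)
    return [((k - n / 2) / s, math.comb(n, k) * s / 2 ** n)
            for k in range(n + 1)]

def limit_density(x):
    """Normal density with mean 0 and standard deviation 1/2."""
    return 2 / math.sqrt(2 * math.pi) * math.exp(-2 * x * x)

n = 100
max_err = max(abs(y - limit_density(x)) for x, y in normalized_row(n))
print(f"max deviation from the limit curve at n = {n}: {max_err:.4f}")
```

Repeating the experiment with larger n shows the maximum deviation shrinking, which is the convergence the rescaled plots display.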
Probabilistic interpretation

Our experiment is an example of the Central Limit Theorem, a fundamental principle of probability theory (which brings us back to Pascal). The theorem is stated in terms of random variables. In this case, the basic random variable X has values 0 or 1, each with probability 1/2 (this could be the outcome of flipping a coin). So half the time, at random, X = 0, and the rest of the time X = 1. The mean or expected value of X is E(X) = μ = (1/2)(0) + (1/2)(1) = 1/2. Its variance is defined by σ^{2} = E(X^{2}) - [E(X)]^{2} = 1/4, so its standard deviation is σ = 1/2. The n-th row in Pascal's triangle corresponds to the sum X_{1} + ... + X_{n} of n random variables, each identical to X. The possible values of the sum are 0, 1, ..., n, and the value k is attained with probability C(n, k)/2^{n}. [So, for example, if we toss a coin four times, and count 1 for each head, 0 for each tail, then the probabilities of the sums 0, 1, 2, 3, 4 are 1/16, 1/4, 3/8, 1/4, 1/16 respectively.] This set of values and probabilities is called the binomial distribution with p = (1-p) = 1/2. It has mean μ_{n} = n/2 and standard deviation σ_{n} = n^{1/2}/2. In our normalizations we have shifted the means to 0 and stretched or compressed the axis to achieve uniform standard deviation 1/2. The Central Limit Theorem is more general, but in this case it states that the limit of these normalized probability distributions, as n goes to infinity, will be the normal distribution with mean zero and standard deviation 1/2. This distribution is represented by the function f(x) = (2/(2π)^{1/2}) e^{-2x^{2}} (its graph is a "bell-shaped curve") in the sense that the probability of the limit random variable lying in the interval [a, b] is equal to the integral of f over that interval.
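The mean and standard deviation just quoted can be verified directly from the distribution C(n, k)/2^{n}. A small Python sketch (my own, illustrative only), using the four-toss example from the text:

```python
import math

def binomial_half(n):
    """Distribution of X1 + ... + Xn for fair-coin variables:
    P(sum = k) = C(n, k) / 2^n."""
    return [math.comb(n, k) / 2 ** n for k in range(n + 1)]

n = 4
probs = binomial_half(n)
# Four tosses, as in the text: probabilities 1/16, 1/4, 3/8, 1/4, 1/16.
assert probs == [1/16, 1/4, 3/8, 1/4, 1/16]

mean = sum(k * p for k, p in enumerate(probs))                 # mu_n = n/2
var = sum(k * k * p for k, p in enumerate(probs)) - mean ** 2  # sigma_n^2
assert mean == n / 2
assert abs(math.sqrt(var) - math.sqrt(n) / 2) < 1e-12          # sigma_n = sqrt(n)/2
```

The same check with any other n confirms μ_{n} = n/2 and σ_{n} = n^{1/2}/2, which is why the column's shift by n/2 and horizontal stretch by n^{1/2} produce a uniform standard deviation of 1/2.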
Suppose you want to know the probability of between 4995 and 5005 heads in 10,000 coin tosses. The calculation with binomial coefficients would be tedious; but it amounts to calculating the area under the graph of C(10000, k) between k = 4995 and k = 5005, relative to the total area. This is equivalent to computing the relative area under the normalized curve between (4995-5000)/100 = -.05 and (5005-5000)/100 = .05; to a very good approximation this is the integral of the normal distribution function f(x) between -.05 and .05, i.e. 0.0797. [Correction, May 2017: Three instances of 4005 were changed to 4995. Thanks to a careful reader for the correction--Tony.] [Correction, December 2018: The value of the last integral has been corrected. Thanks to Art Benis--Tony.]

Tony Phillips
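This comparison can be reproduced in a few lines, since Python's exact integers make the "tedious" binomial sum painless. In the sketch below (my own code, not the column's), the integral of f over [-.05, .05] is evaluated with the error function:

```python
import math

n, lo, hi = 10_000, 4995, 5005

# Exact probability: sum of C(10000, k) / 2^10000 for k = 4995 .. 5005.
exact = sum(math.comb(n, k) for k in range(lo, hi + 1)) / 2 ** n

# Integral of f(x) = (2 / sqrt(2*pi)) * exp(-2 x^2) over [-0.05, 0.05],
# i.e. P(-a < X < a) for X normal with mean 0, standard deviation 1/2.
sigma, a = 0.5, 0.05
approx = math.erf(a / (sigma * math.sqrt(2)))

print(f"normal integral over [-.05, .05]: {approx:.4f}")
print(f"exact binomial sum:               {exact:.4f}")
```

The integral comes out to 0.0797 as in the text. The exact sum is somewhat larger, because the eleven lattice values k = 4995, ..., 5005 span a bit more than the interval of width 0.1 used above; widening the integral to [-.055, .055] (the usual continuity correction) closes most of that gap.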