The
(standard) normal variable plays a key role in sampling theory, due to the Central Limit Theorem, which we study below. In this paragraph we describe how the standard normal variable arises from a limit process applied to a sequence of binomial variables. Let
\(Y\) be a Bernoulli variable with
\(P(Y=1)=1/2=P(Y=0)\text{.}\) It is easy to check that
\(\mu_Y=1/2\) and
\(\sigma_Y=1/2\text{.}\)
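Indeed, both values follow directly from the definitions of mean and variance:
\begin{equation*}
\mu_Y = 0\cdot\tfrac{1}{2} + 1\cdot\tfrac{1}{2} = \tfrac{1}{2},
\qquad
\sigma_Y^2 = E(Y^2)-\mu_Y^2 = \tfrac{1}{2}-\tfrac{1}{4} = \tfrac{1}{4}.
\end{equation*}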
Let \(Y_1,Y_2,Y_3,\ldots\) be an infinite sequence of samples of
\(Y\text{,}\) and let
\(S_n\) be the binomial variable
\(S_n=Y_1+Y_2+\cdots+Y_n\text{.}\) Using the sampling formulas
(6.12) and
(6.13), we have
\(E(S_n)=n\mu_Y\) and
\(\var(S_n)=n\sigma_Y^2\text{.}\) Let
\(T_n=(S_n-n\mu_Y)/(\sigma_Y\sqrt{n})\) be the normalized version of \(S_n\text{.}\)
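Concretely, since \(\mu_Y=\sigma_Y=1/2\) for the Bernoulli variable above, these formulas specialize to
\begin{equation*}
E(S_n)=\frac{n}{2}, \qquad \var(S_n)=\frac{n}{4}, \qquad
T_n = \frac{S_n-n/2}{\sqrt{n}/2} = \frac{2S_n-n}{\sqrt{n}}.
\end{equation*}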
It turns out that there exists a limit function
\(\Phi = \lim_{n\to\infty}F_{T_n}\text{,}\) which means that
\(\Phi(x) = \lim_{n\to\infty} F_{T_n}(x)\) for every real number
\(x\text{.}\)
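To see this convergence numerically, one can estimate \(F_{T_n}(x)=P(T_n\le x)\) by simulation and compare the result with \(\Phi(x)\text{.}\) The following sketch is only an illustration, not part of the text; it assumes Python with NumPy and SciPy, and the name empirical_F_Tn is ours.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def empirical_F_Tn(n, x, trials=100_000):
    """Estimate F_{T_n}(x) = P(T_n <= x), where S_n is a sum of n
    Bernoulli(1/2) samples and T_n = (S_n - n*mu_Y) / (sigma_Y*sqrt(n))
    with mu_Y = sigma_Y = 1/2."""
    s_n = rng.binomial(n, 0.5, size=trials)      # samples of the binomial variable S_n
    t_n = (s_n - n * 0.5) / (0.5 * np.sqrt(n))   # normalized values T_n
    return np.mean(t_n <= x)                     # fraction of samples with T_n <= x

x = 1.0
for n in (10, 100, 1000):
    print(n, empirical_F_Tn(n, x), norm.cdf(x))  # F_{T_n}(x) should approach Phi(x)
```

For \(x=1\text{,}\) the estimates settle near \(\Phi(1)\approx 0.8413\) as \(n\) grows.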
The limit function
\(\Phi\) satisfies the properties of a distribution function. By the fact alluded to in the opening paragraph of
Subsection 5.1, it follows that there exists a random variable
\(Z\) such that
\(\Phi\) is the distribution function
\(F_Z\) of
\(Z\text{.}\) The variable
\(Z\) is called the
(standard) normal variable. It is also called a
Gaussian variable, in honor of C.F. Gauss, who discovered it. The standard normal distribution has mean zero, standard deviation 1, and probability density function
\begin{equation}
f(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}.\tag{6.22}
\end{equation}
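Since \(\Phi=F_Z\) and \(f\) is the density of \(Z\text{,}\) the limit function can equivalently be written as an integral of this density:
\begin{equation*}
\Phi(x) = F_Z(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\,dt.
\end{equation*}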