In Chapter 1 we stated that statistics is the science of estimating the probability distribution of a random variable on the basis of repeated observations drawn from the same random variable. If we denote the random variable in question by X, the n repeated observations in mathematical terms mean a sequence of n mutually independent random variables X1, X2, . . . , Xn, each of which has the same distribution as X. (We say that {Xi} are i.i.d.)

For example, suppose we want to estimate the probability (p) of heads for a given coin. We can define X = 1 if a head appears and X = 0 if a tail appears. Then Xi represents the outcome of the ith toss of the same coin. If X is the height of a male Stanford student, Xi is the height of the ith student randomly chosen.
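The coin example can be sketched in a few lines of code. This is an illustrative simulation, not part of the text: the true p is an assumption chosen for the demonstration, and the sample proportion of heads serves as the estimate of p.

```python
import random

def simulate_tosses(p, n, seed=0):
    """Simulate n i.i.d. tosses of a coin with P(heads) = p.
    As in the text, Xi = 1 if the ith toss is heads, 0 if tails."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

# Hypothetical true probability p = 0.6; with many tosses the
# sample proportion of heads should be close to it.
tosses = simulate_tosses(p=0.6, n=10_000)
p_hat = sum(tosses) / len(tosses)
print(p_hat)
```

With a large n, p_hat will typically lie close to the true p, which is the intuition behind using repeated observations to estimate a distribution.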

We call the basic random variable X, whose probability distribution we wish to estimate, the population, and we call (X1, X2, . . . , Xn) a sample of size n. Note that (X1, X2, . . . , Xn) are random variables before we observe them. Once we observe them, they become a sequence of numbers, such as (1, 1, 0, 0, 1, . . .) or (5.9, 6.2, 6.0, 5.8, . . .). These observed values will be denoted by lowercase letters (x1, x2, . . . , xn). They are also referred to by the same name, sample.
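The distinction between the random variables (X1, . . . , Xn) and their observed values (x1, . . . , xn) can be illustrated by drawing the same sample twice. The population below is modeled, purely for illustration, as a normal distribution of heights; the parameters are assumptions, not values from the text.

```python
import random

def draw_sample(n, seed):
    """Draw a sample of size n from a hypothetical population
    X = height of a student, modeled as Normal(70, 3) inches."""
    rng = random.Random(seed)
    return [round(rng.gauss(70.0, 3.0), 1) for _ in range(n)]

# Before observation, (X1, ..., Xn) are random variables; each
# realized draw gives a different sequence of numbers (x1, ..., xn).
first = draw_sample(5, seed=1)
second = draw_sample(5, seed=2)
print(first)
print(second)
```

Both draws come from the same population, yet the observed sequences differ, which is exactly why the sample is treated as random before observation.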
