Sample Moments

In Chapter 4 we defined population moments of various kinds. Here we shall define the corresponding sample moments. Sample moments are “natural” estimators of the corresponding population moments. We define

Sample mean

$\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$

Sample variance

$S^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2 = \frac{1}{n} \sum_{i=1}^{n} X_i^2 - \bar{X}^2$

Sample kth moment around zero

$\frac{1}{n} \sum_{i=1}^{n} X_i^k$

Sample kth moment around the mean

$\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^k$

If $(X_i, Y_i)$, $i = 1, 2, \ldots, n$, are mutually independent in the sense of Definition 3.5.4 and have the same distribution as $(X, Y)$, we call $\{(X_i, Y_i)\}$ a bivariate sample of size n on a bivariate population $(X, Y)$. We define

Sample covariance

$\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y}) = \frac{1}{n} \sum_{i=1}^{n} X_i Y_i - \bar{X}\bar{Y}$

Sample correlation

$\dfrac{\text{Sample covariance}}{\sqrt{(\text{Sample variance of } X)(\text{Sample variance of } Y)}}$
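The bivariate definitions can be sketched the same way (again a hypothetical illustration under the 1/n convention; the names are mine, not the book's notation). The shortcut form of the covariance mirrors the identity above:

```python
import math

def sample_covariance(xs, ys):
    """(1/n) * sum of (x_i - xbar)(y_i - ybar), via the shortcut formula."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return sum(x * y for x, y in zip(xs, ys)) / n - xbar * ybar

def sample_correlation(xs, ys):
    """Sample covariance divided by the square root of the product of variances."""
    var_x = sample_covariance(xs, xs)  # covariance of X with itself = sample variance
    var_y = sample_covariance(ys, ys)
    return sample_covariance(xs, ys) / math.sqrt(var_x * var_y)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
print(sample_correlation(xs, ys))  # exactly linear data, so this is 1 up to rounding
```

Because the correlation divides the covariance by both standard deviations, the 1/n versus 1/(n − 1) convention cancels out of the correlation, though not out of the covariance itself.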

The observed values of the sample moments are also called by the same names. They are defined by replacing the capital letters in the definitions above by the corresponding lowercase letters. The observed values of the sample mean and the sample variance are denoted, respectively, by $\bar{x}$ and $s^2$.


The following way of representing the observed values of the sample moments is instructive. Let $(x_1, x_2, \ldots, x_n)$ be the observed values of a sample and define a discrete random variable $X^*$ such that $P(X^* = x_i) = 1/n$, $i = 1, 2, \ldots, n$. We shall call $X^*$ the empirical image of $X$ and its probability distribution the empirical distribution of $X$. Note that $X^*$ is always discrete, regardless of the type of $X$. Then the moments of $X^*$ are the observed values of the sample moments of $X$.
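This equivalence can be checked mechanically. The sketch below (my own illustration, using exact rational weights so no rounding intrudes) builds the empirical distribution with mass 1/n at each observed value and verifies that its moments coincide with the observed sample moments:

```python
from fractions import Fraction

xs = [1, 2, 2, 5]              # observed values of a sample, n = 4
n = len(xs)
weights = [Fraction(1, n)] * n  # empirical distribution: mass 1/n at each x_i
# (a repeated value such as 2 simply receives mass 1/n twice, i.e. 2/n in total)

# moments of X* computed from its discrete probability distribution
e_xstar = sum(p * x for x, p in zip(xs, weights))
e_xstar2 = sum(p * x ** 2 for x, p in zip(xs, weights))

# observed sample moments computed directly from the data
xbar = Fraction(sum(xs), n)
m2_zero = Fraction(sum(x ** 2 for x in xs), n)

print(e_xstar == xbar)      # True: E[X*] is the observed sample mean
print(e_xstar2 == m2_zero)  # True: E[(X*)^2] is the observed 2nd moment about zero
```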

We have mentioned that sample moments are “natural” estimators of population moments. Are they good estimators? This question cannot be answered precisely until we define the term “good” in Section 7.2. But let us concentrate on the sample mean and see what we can ascertain about its properties.

(1) Using Theorem 4.1.6, we know that $E\bar{X} = EX$, which means that the population mean is a “center” of the distribution of the sample mean.


(2) Suppose that $VX = \sigma^2$ is finite. Then, using Theorem 4.3.3, we know that $V\bar{X} = \sigma^2/n$, which shows that the degree of dispersion of the distribution of the sample mean around the population mean is inversely proportional to the sample size n.

(3) Using Theorem 6.2.1 (Khinchine’s law of large numbers), we know that $\operatorname{plim}_{n \to \infty} \bar{X} = EX$. If $VX$ is finite, the same result also follows from (1) and (2) above because of Theorem 6.1.1 (Chebyshev).
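Properties (1)–(3) are easy to see in a simulation (a sketch of my own, assuming Uniform(0, 1) draws, for which $EX = 1/2$ and $VX = 1/12$):

```python
import random
import statistics

random.seed(0)

def sample_mean(n):
    """Mean of n independent Uniform(0, 1) draws."""
    return sum(random.random() for _ in range(n)) / n

# Draw many sample means for each n: their average stays near EX = 0.5 (property 1),
# and their variance shrinks roughly like (1/12)/n (property 2). Property (3), the
# law of large numbers, appears as each individual sample mean settling near 0.5
# as n grows.
for n in (10, 100, 1000):
    means = [sample_mean(n) for _ in range(2000)]
    print(n, statistics.mean(means), statistics.variance(means))
```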

On the basis of these results, we can say that the sample mean is a “good” estimator of the population mean, using the term “good” in its loose everyday meaning.
