DISTRIBUTION FUNCTION

As we have seen so far, a discrete random variable is characterized by specifying the probability with which the random variable takes each single value, but this cannot be done for a continuous random variable. Con­versely, a continuous random variable has a density function but a discrete random variable does not. This dichotomy can sometimes be a source of inconvenience, and in certain situations it is better to treat all the random variables in a unified way. This can be done by using a cumulative distri­bution function (or, more simply, a distribution function), which can be defined for any random variable.

DEFINITION 3.5.1 The (cumulative) distribution function of a random variable X, denoted by F(·), is defined by

(3.5.1) F(x) = P(X < x) for every real x.

From the definition and the axioms of probability it follows directly that F is a monotonically nondecreasing function, is continuous from the left, F(−∞) = 0, and F(∞) = 1. Some textbooks define the distribution function instead as F(x) = P(X ≤ x); under that convention the distribution function can be shown to be continuous from the right.

Let X be a finite discrete random variable such that P(X = xi) = pi, i = 1, 2, . . . , n. Then its distribution function is a step function with a jump of height pi at xi, as shown in Figure 3.8. At each point of jump, the value of the distribution function is at the solid point rather than the empty point, indicating the fact that the function is continuous from the left.
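The step function just described can be computed directly. The following Python sketch (not part of the text; the example values are hypothetical) sums the probabilities of all support points strictly below x, which gives the left-continuous F(x) = P(X < x):

```python
def cdf_discrete(support, probs, x):
    """Left-continuous CDF F(x) = P(X < x) of a finite discrete
    random variable with P(X = support[i]) = probs[i]."""
    return sum(p for xi, p in zip(support, probs) if xi < x)

# Hypothetical example: X takes the values 1, 2, 3 with
# probabilities 0.2, 0.3, 0.5.
support, probs = [1, 2, 3], [0.2, 0.3, 0.5]
print(cdf_discrete(support, probs, 2))    # P(X < 2) = 0.2 (jump point: x = 2 excluded)
print(cdf_discrete(support, probs, 2.5))  # P(X < 2.5) = 0.5
print(cdf_discrete(support, probs, 10))   # 1.0
```

Note that at a jump point the strict inequality excludes the mass at that point, which is exactly the left-continuity shown in Figure 3.8.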

The distribution function of a continuous random variable X with density function f(·) is given by

(3.5.2) F(x) = ∫_−∞^x f(t) dt.

From (3.5.2) we can deduce that the density function is the derivative of the distribution function and that the distribution function of a continu­ous random variable is continuous everywhere.

The probability that a random variable falls into a closed interval can be easily evaluated if the distribution function is known, because of the following relationship:

(3.5.3) P(x1 ≤ X ≤ x2) = P(x1 ≤ X < x2) + P(X = x2)

= P(X < x2) − P(X < x1) + P(X = x2)

= F(x2) − F(x1) + P(X = x2).

If X is a continuous random variable, P(X = x2) = 0; hence it may be omitted from the terms in (3.5.3).
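Relation (3.5.3) translates directly into code. In this Python sketch (the function names are illustrative, not from the text), the continuous case uses the distribution function 1 − e^(−x/2) of Example 3.5.1, where every point mass is zero and the last term of (3.5.3) drops out:

```python
import math

def interval_prob(F, point_mass, x1, x2):
    """P(x1 <= X <= x2) = F(x2) - F(x1) + P(X = x2), relation (3.5.3),
    with F(x) = P(X < x) continuous from the left."""
    return F(x2) - F(x1) + point_mass(x2)

# Continuous case: F(x) = 1 - e^(-x/2) for x > 0; all point masses are 0.
F = lambda x: 1 - math.exp(-x / 2) if x > 0 else 0.0
no_mass = lambda x: 0.0
p = interval_prob(F, no_mass, 1.0, 3.0)
print(p)  # e^(-1/2) - e^(-3/2), about 0.3834
```

For a discrete or mixture random variable one would supply a nonzero point_mass function, and the P(X = x2) term would matter.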

The following two examples show how to obtain the distribution func­tion using the relationship (3.5.2), when the density function is given.

EXAMPLE 3.5.1 Suppose

(3.5.4) f(x) = (1/2)e^(−x/2) for x > 0,

= 0 otherwise.

Then F(x) = 0 if x ≤ 0. For x > 0 we have

(3.5.5) F(x) = ∫_0^x (1/2)e^(−t/2) dt = 1 − e^(−x/2).
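The closed form in (3.5.5) can be verified numerically. This Python sketch (illustrative only; the midpoint rule and step count are arbitrary choices) integrates the density from 0 to x and compares the result with 1 − e^(−x/2):

```python
import math

def f(x):
    """Density of Example 3.5.1: (1/2) e^(-x/2) for x > 0, 0 otherwise."""
    return 0.5 * math.exp(-x / 2) if x > 0 else 0.0

def F_numeric(x, n=100_000):
    """Approximate F(x) as the integral of f from 0 to x (midpoint rule)."""
    if x <= 0:
        return 0.0
    h = x / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

x = 3.0
print(F_numeric(x))          # close to 0.77687
print(1 - math.exp(-x / 2))  # closed form 1 - e^(-x/2)
```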

EXAMPLE 3.5.2 Suppose

(3.5.6) f(x) = 2(1 − x) for 0 ≤ x ≤ 1,

= 0 otherwise.

Clearly, F(x) = 0 for x < 0 and F(x) = 1 for x ≥ 1. For 0 ≤ x ≤ 1 we have

(3.5.7) F(x) = ∫_0^x 2(1 − t) dt = 2x − x^2.
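The earlier claim that the density is the derivative of the distribution function can be checked on Example 3.5.2, whose distribution function is 2x − x^2 on [0, 1]. A minimal Python sketch (names are illustrative) differentiates F numerically and compares with the density 2(1 − x):

```python
def F(x):
    """Distribution function of Example 3.5.2: 2x - x^2 on [0, 1]."""
    if x < 0:
        return 0.0
    if x > 1:
        return 1.0
    return 2 * x - x * x

def density_from_F(x, h=1e-6):
    """Recover f(x) as the central-difference derivative of F."""
    return (F(x + h) - F(x - h)) / (2 * h)

# Inside (0, 1) the derivative should equal the density 2(1 - x).
print(density_from_F(0.25))  # close to 2(1 - 0.25) = 1.5
```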

Example 3.5.3 gives the distribution function of a mixture of a discrete and a continuous random variable.

EXAMPLE 3.5.3 Consider

(3.5.8) F(x) = 0, x ≤ 0,

= 0.5, 0 < x ≤ 0.5,

= x, 0.5 < x ≤ 1,

= 1, x > 1.

This function is graphed in Figure 3.9. The random variable in question takes the value 0 with probability 1/2, and takes a continuum of values between 1/2 and 1 according to the uniform density of height 1 over that interval.
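The mixture in Example 3.5.3 is easy to simulate. In this Python sketch (the sampler and sample size are illustrative choices, not from the text), each draw is 0 with probability 1/2 and otherwise a uniform point in (0.5, 1); the empirical P(X < x) should then track the stated distribution function, with a jump of 0.5 at 0 and the segment F(x) = x on (0.5, 1]:

```python
import random

def sample_mixture(rng):
    """One draw: 0 with probability 1/2, else uniform on (0.5, 1)."""
    if rng.random() < 0.5:
        return 0.0
    return 0.5 + 0.5 * rng.random()

rng = random.Random(0)
draws = [sample_mixture(rng) for _ in range(100_000)]

def emp_F(x):
    """Empirical distribution function, P(X < x)."""
    return sum(d < x for d in draws) / len(draws)

print(emp_F(0.25))  # close to 0.5 (the jump at 0)
print(emp_F(0.75))  # close to 0.75 (the uniform segment, F(x) = x)
```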

A mixture random variable is quite common in economic applications. For example, the amount of money a randomly chosen person spends on the purchase of a new car in a given year is such a random variable because we can reasonably assume that it is equal to 0 with a positive probability and yet takes a continuum of values over an interval.

We have defined pairwise independence (Definitions 3.2.3 and 3.4.6)


FIGURE 3.9 Distribution function of a mixture random variable

and mutual independence (Definitions 3.2.5 and 3.4.7) first for discrete random variables and then for continuous random variables. Here we shall give the definition of mutual independence that is valid for any sort of random variable: discrete or continuous or otherwise. We shall not give the definition of pairwise independence, because it is merely a special case of mutual independence. As a preliminary we need to define the multivariate distribution function F(x1, x2, . . . , xn) for n random variables X1, X2, . . . , Xn by F(x1, x2, . . . , xn) = P(X1 < x1, X2 < x2, . . . , Xn < xn).

DEFINITION 3.5.2 Random variables X1, X2, . . . , Xn are said to be mutually independent if for any points x1, x2, . . . , xn,

(3.5.9) F(x1, x2, . . . , xn) = F(x1)F(x2) . . . F(xn).

Equation (3.5.9) is equivalent to saying

(3.5.10) P(X1 ∈ S1, X2 ∈ S2, . . . , Xn ∈ Sn) = P(X1 ∈ S1)P(X2 ∈ S2) . . . P(Xn ∈ Sn)

for any subsets S1, S2, . . . , Sn of the real line for which the probabilities in (3.5.10) make sense. Written thus, its connection to Definition 2.4.3 concerning the mutual independence of events is more apparent. Definitions 3.2.5 and 3.4.7 can be derived as theorems from Definition 3.5.2.
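The factorization in (3.5.10) can be probed by simulation. This Python sketch (the sets S1 and S2 are arbitrary illustrative choices) generates two independently drawn uniform samples and compares the joint relative frequency with the product of the marginal frequencies:

```python
import random

rng = random.Random(42)
n = 200_000
xs = [rng.random() for _ in range(n)]
ys = [rng.random() for _ in range(n)]  # drawn independently of xs

# S1 = [0, 0.3), S2 = [0.5, 1): compare P(X in S1, Y in S2)
# with P(X in S1) * P(Y in S2).
joint = sum(1 for x, y in zip(xs, ys) if x < 0.3 and y >= 0.5) / n
product = (sum(1 for x in xs if x < 0.3) / n) * \
          (sum(1 for y in ys if y >= 0.5) / n)
print(abs(joint - product))  # small, as independence requires
```

Of course a simulation can only fail to reject independence for the particular sets tried; the definition requires the factorization for all such sets.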

We still need a few more definitions of independence, all of which pertain to general random variables.

DEFINITION 3.5.3 An infinite set of random variables are mutually in­dependent if any finite subset of them are mutually independent.

DEFINITION 3.5.4 Bivariate random variables (X1, Y1), (X2, Y2), . . . , (Xn, Yn) are mutually independent if for any points x1, x2, . . . , xn and y1, y2, . . . , yn,

(3.5.11) F(x1, y1, x2, y2, . . . , xn, yn) = F(x1, y1)F(x2, y2) . . . F(xn, yn).

Note that in Definition 3.5.4 nothing is said about the independence or nonindependence of Xi and Yi. Definition 3.5.4 can be straightforwardly generalized to trivariate random variables and so on, or even to the case where the groups (the terms inside parentheses) contain varying numbers of random variables. We shall not state such generalizations here. Note also that Definition 3.5.4 can be straightforwardly generalized to the case of an infinite sequence of bivariate random variables.

Finally, we state without proof:

THEOREM 3.5.1 Let φ and ψ be arbitrary functions. If a finite set of random variables X, Y, Z, . . . is independent of another finite set of random variables U, V, W, . . . , then φ(X, Y, Z, . . .) is independent of ψ(U, V, W, . . .).
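Theorem 3.5.1 can be illustrated (though certainly not proved) by simulation. This Python sketch (the particular functions φ and ψ are illustrative choices) forms φ(X, Y) = X + Y and ψ(U, V) = UV from two independently generated sets of uniforms and checks the necessary condition that their sample covariance is near zero:

```python
import random

rng = random.Random(7)
n = 200_000
x = [rng.random() for _ in range(n)]
y = [rng.random() for _ in range(n)]
u = [rng.random() for _ in range(n)]
v = [rng.random() for _ in range(n)]

phi = [a + b for a, b in zip(x, y)]  # phi(X, Y) = X + Y
psi = [a * b for a, b in zip(u, v)]  # psi(U, V) = U V

mean = lambda s: sum(s) / len(s)
cov = mean([a * b for a, b in zip(phi, psi)]) - mean(phi) * mean(psi)
print(abs(cov))  # near zero: independence implies zero covariance
```

Zero covariance is only a consequence of independence, not equivalent to it, so this is a one-sided check.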

Just as we have defined conditional probability and conditional density, we can define the conditional distribution function.

DEFINITION 3.5.5 Let X and Y be random variables and let S be a subset of the (x, y) plane. Then the conditional distribution function of X given S, denoted by F(x | S), is defined by

(3.5.12) F(x | S) = P(X < x | (X, Y) ∈ S).

Note that the conditional density f(x | S) defined in Definition 3.4.3 may be derived by differentiating (3.5.12) with respect to x.
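Definition 3.5.5 can be made concrete by simulation. In this Python sketch (the conditioning set S = {x + y < 1} and the uniform sampling are illustrative choices, not from the text), X and Y are uniform on the unit square; a direct area calculation then gives F(x | S) = (x − x^2/2)/(1/2) = 2x − x^2, which the empirical conditional frequencies should reproduce:

```python
import random

rng = random.Random(1)
n = 200_000
pairs = [(rng.random(), rng.random()) for _ in range(n)]

# Conditioning set S = {(x, y) : x + y < 1} in the unit square.
selected = [x for x, y in pairs if x + y < 1.0]

def cond_F(x):
    """Empirical F(x | S) = P(X < x | (X, Y) in S), as in (3.5.12)."""
    return sum(xi < x for xi in selected) / len(selected)

# The area calculation predicts F(x | S) = 2x - x^2 on [0, 1].
print(cond_F(0.5))  # close to 2(0.5) - 0.25 = 0.75
```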
