INTRODUCTION TO STATISTICS AND ECONOMETRICS

Marginal Density

When we are considering a bivariate random variable (X, Y), the probability pertaining to one of the variables, such as $P(x_1 < X \le x_2)$ or $P(y_1 < Y \le y_2)$, is called the marginal probability. The following relationship between a marginal probability and a joint probability is obviously true.

[Figure 3.5: Domain of a double integral for Example 3.4.4]

(3.4.18)  $P(x_1 < X \le x_2) = P(x_1 < X \le x_2,\ -\infty < Y < \infty)$.

More generally, one may replace $x_1 < X \le x_2$ on both sides of (3.4.18) by $X \in S$, where $S$ is an arbitrary subset of the real line.

Similarly, when we are considering a bivariate random variable (X, Y), the density function of one of the variables is called the marginal density. Theorem 3.4.1 shows how a marginal density is related to a joint density.

THEOREM 3.4...
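Although the statement of the theorem is truncated in this excerpt, the standard result it refers to is that a marginal density is obtained by integrating the joint density over the other variable, $f(x) = \int_{-\infty}^{\infty} f(x, y)\,dy$. A minimal numerical sketch of this relation, using an invented joint density (not one from the text):

```python
# Sketch: recover a marginal density by integrating a joint density over y.
# The joint density f(x, y) = x + y on the unit square is invented for
# illustration; it is not an example from the text.
from scipy import integrate

def f_joint(x, y):
    """Joint density f(x, y) = x + y on [0, 1] x [0, 1]."""
    return x + y

def f_marginal(x):
    """Marginal density of X: integrate the joint density over y."""
    value, _ = integrate.quad(lambda y: f_joint(x, y), 0.0, 1.0)
    return value

# Analytically the marginal is f_X(x) = x + 1/2; check one point.
print(f_marginal(0.3))  # ~0.8
```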


NORMAL RANDOM VARIABLES


The normal distribution is by far the most important continuous distribution used in statistics. Many reasons for its importance will become apparent as we study its properties below. We should mention that the binomial random variable X defined in Definition 5.1.1 is approximately normally distributed when n is large. This is a special case of the so-called central limit theorem, which we shall discuss in Chapter 6. Examples of the normal approximation of the binomial are given in Section 6.3.
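The quality of this approximation is easy to check numerically. A minimal sketch (the parameters $n = 100$, $p = 0.5$ and the interval $[45, 55]$ are arbitrary choices, not from the text):

```python
# Sketch: compare an exact binomial probability with its normal approximation.
# The parameters n, p and the interval are arbitrary illustrations.
import math
from scipy import stats

n, p = 100, 0.5
mu = n * p                           # mean of the binomial
sigma = math.sqrt(n * p * (1 - p))   # standard deviation of the binomial

# P(45 <= X <= 55): exact, and via the normal with a continuity correction.
exact = stats.binom.cdf(55, n, p) - stats.binom.cdf(44, n, p)
approx = stats.norm.cdf(55.5, mu, sigma) - stats.norm.cdf(44.5, mu, sigma)
print(exact, approx)  # the two values agree closely for large n
```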

When X has the density

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[ -\frac{(x - \mu)^2}{2\sigma^2} \right], \qquad -\infty < x < \infty,$$

we write symbolically $X \sim N(\mu, \sigma^2)$.

We can verify $\int_{-\infty}^{\infty} f(x)\,dx = 1$ for all $\mu$ and all positive $\sigma$ by a rather complicated procedure using polar coordinates. See, for example, Hoel (1984, p. 78)...
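Short of the polar-coordinate argument, the claim can at least be verified numerically. A minimal sketch over a few arbitrary $(\mu, \sigma)$ pairs:

```python
# Sketch: numerically confirm that the N(mu, sigma^2) density integrates to 1.
# The (mu, sigma) pairs below are arbitrary.
import math
from scipy import integrate

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2)."""
    coef = 1.0 / (math.sqrt(2 * math.pi) * sigma)
    return coef * math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

for mu, sigma in [(0.0, 1.0), (-2.5, 0.4), (10.0, 3.0)]:
    total, _ = integrate.quad(normal_pdf, -math.inf, math.inf, args=(mu, sigma))
    print(mu, sigma, total)  # each total equals 1 up to numerical error
```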


Bayes’ Theorem

Bayes’ theorem follows easily from the rules of probability but is listed separately here because of its special usefulness.

THEOREM 2.4.2 (Bayes)  Let events $A_1, A_2, \ldots, A_n$ be mutually exclusive such that $P(A_1 \cup A_2 \cup \cdots \cup A_n) = 1$ and $P(A_i) > 0$ for each $i$. Let $E$ be an arbitrary event such that $P(E) > 0$. Then

$$P(A_i \mid E) = \frac{P(E \mid A_i)\,P(A_i)}{\sum_{j=1}^{n} P(E \mid A_j)\,P(A_j)}.$$

Proof. From Theorem 2.4.1, we have

(2.4.3)  $P(A_i \mid E) = \dfrac{P(E \cap A_i)}{P(E)}.$

Since $E \cap A_1, E \cap A_2, \ldots, E \cap A_n$ are mutually exclusive and their union is equal to $E$, we have, from axiom (3) of probability,

(2.4.4)  $P(E) = \sum_{j=1}^{n} P(E \cap A_j).$

Thus the theorem follows from (2.4.3) and (2.4.4) and by noting that $P(E \cap A_j) = P(E \mid A_j)P(A_j)$ by Theorem 2.4.1. □
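A short numerical sketch of the theorem, with an invented partition and invented probabilities (none of these numbers come from the text):

```python
# Sketch: Bayes' theorem with invented numbers. A_1, A_2, A_3 are mutually
# exclusive with P(A_1 u A_2 u A_3) = 1; E is an event with known P(E | A_j).
prior = [0.5, 0.3, 0.2]            # P(A_j); sums to 1
likelihood = [0.10, 0.40, 0.80]    # P(E | A_j)

# (2.4.4): P(E) = sum_j P(E n A_j) = sum_j P(E | A_j) P(A_j)
p_e = sum(l * q for l, q in zip(likelihood, prior))

# Theorem 2.4.2: P(A_i | E) = P(E | A_i) P(A_i) / P(E)
posterior = [l * q / p_e for l, q in zip(likelihood, prior)]
print(posterior)  # the posterior probabilities sum to 1
```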


Conditional Density

We shall extend the notion of conditional density in Definitions 3.3.2 and 3.3.3 to the case of bivariate random variables. We shall consider first the situation where the conditioning event has a positive probability and second the situation where the conditioning event has zero probability. Under the first situation we shall define both the joint conditional density and the conditional density involving only one of the variables. A generalization of Definition 3.3.3 is straightforward:

DEFINITION 3.4.2  Let $(X, Y)$ have the joint density $f(x, y)$ and let $S$ be a subset of the plane such that $P[(X, Y) \in S] > 0$. Then the conditional density of $(X, Y)$ given $(X, Y) \in S$, denoted by $f(x, y \mid S)$, is defined by

(3.4.21)  $f(x, y \mid S) = \dfrac{f(x, y)}{P[(X, Y) \in S]}$  for $(x, y) \in S$,

$\phantom{(3.4.21)  f(x, y \mid S)} = 0$  otherwise.

We are...
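As a concrete illustration of (3.4.21), here is a minimal numerical sketch; the joint density (uniform on the unit square) and the region $S$ (the triangle $y < x$) are invented, not taken from the text:

```python
# Sketch of (3.4.21): condition a joint density on a region S of the plane.
from scipy import integrate

def f_joint(x, y):
    """Uniform joint density on [0, 1] x [0, 1] (an invented example)."""
    return 1.0 if 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 else 0.0

def in_S(x, y):
    """S = {(x, y) : y < x}."""
    return y < x

# P[(X, Y) in S]: integrate the joint density over S. dblquad's integrand
# takes (y, x); the inner limits 0 and x trace out the triangle. Here P = 1/2.
p_S, _ = integrate.dblquad(lambda y, x: f_joint(x, y), 0.0, 1.0,
                           lambda x: 0.0, lambda x: x)

def f_conditional(x, y):
    """f(x, y | S) = f(x, y) / P[(X, Y) in S] on S, and 0 otherwise."""
    return f_joint(x, y) / p_S if in_S(x, y) else 0.0

print(p_S, f_conditional(0.8, 0.3))  # 0.5 and 2.0
```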


MULTIVARIATE NORMAL RANDOM VARIABLES

In this section we present results on multivariate normal variables in matrix notation. The student unfamiliar with matrix analysis should read Chapter 11 before this section. The results of this section will not be used directly until Section 9.7 and Chapters 12 and 13.

Let $\mathbf{x}$ be an $n$-dimensional column vector with $E\mathbf{x} = \boldsymbol{\mu}$ and $V\mathbf{x} = \boldsymbol{\Sigma}$. (Throughout this section, a matrix is denoted by a boldface capital letter and a vector by a boldface lowercase letter.) We write their elements explicitly as follows:

$\mathbf{x} = (x_1, x_2, \ldots, x_n)', \qquad \boldsymbol{\mu} = (\mu_1, \mu_2, \ldots, \mu_n)', \qquad \boldsymbol{\Sigma} = \{\sigma_{ij}\}, \quad i, j = 1, 2, \ldots, n.$

Note that $\sigma_{ij} = \mathrm{Cov}(x_i, x_j)$, $i, j = 1, 2, \ldots, n$, and, in particular, $\sigma_{ii} = Vx_i$, $i = 1, 2, \ldots, n$. We sometimes write $\sigma_i^2$ for $\sigma_{ii}$.
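To make the notation concrete, a small sketch with numpy; the data are simulated, not from the text:

```python
# Sketch: a variance-covariance matrix Sigma estimated from simulated data.
# Rows of `data` are observations; columns are the components x_1, x_2, x_3.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 3))    # 1000 draws of a 3-dimensional vector

sigma = np.cov(data, rowvar=False)   # Sigma, with sigma_ij = Cov(x_i, x_j)
print(np.allclose(sigma, sigma.T))   # True: Sigma is symmetric
print(np.diag(sigma))                # the sigma_ii = Vx_i on the diagonal
```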

DEFINITION 5.4.1  We say $\mathbf{x}$ is multivariate normal with mean $\boldsymbol{\mu}$ and variance-covariance matrix $\boldsymbol{\Sigma}$, denoted $N(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, if its density is given by

$$f(\mathbf{x}) = (2\pi)^{-n/2}\, |\boldsymbol{\Sigma}|^{-1/2} \exp\left[ -\tfrac{1}{2} (\mathbf{x} - \boldsymbol{\mu})' \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right].$$
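A minimal sketch evaluating this density directly from the formula and checking it against scipy's implementation; the particular $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ are invented:

```python
# Sketch: evaluate the N(mu, Sigma) density from the formula above and
# compare with scipy. The mu and Sigma below are invented examples.
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

def mvn_pdf(x, mu, Sigma):
    """(2 pi)^(-n/2) |Sigma|^(-1/2) exp[-(1/2)(x - mu)' Sigma^(-1) (x - mu)]."""
    n = len(mu)
    d = x - mu
    quad_form = d @ np.linalg.solve(Sigma, d)
    return (2 * np.pi) ** (-n / 2) * np.linalg.det(Sigma) ** (-0.5) \
        * np.exp(-0.5 * quad_form)

x = np.array([0.5, 0.0])
print(mvn_pdf(x, mu, Sigma))
print(multivariate_normal.pdf(x, mean=mu, cov=Sigma))  # same value
```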