INTRODUCTION TO STATISTICS AND ECONOMETRICS

BINOMIAL AND NORMAL RANDOM VARIABLES

5.1 BINOMIAL RANDOM VARIABLES

Let X be the number of successes in n independent trials of some experiment whose outcome is “success” or “failure” when the probability of success in each trial is p. Such a random variable often appears in practice (for example, the number of heads in n tosses) and is called a binomial random variable. More formally we state

DEFINITION 5.1.1 Let {Yi}, i = 1, 2, . . . , n, be mutually independent

with the probability distribution given by

(5.1.1) Yi = 1 with probability p
           = 0 with probability 1 − p = q.

Then the random variable X defined by

(5.1.2) X = Σ_{i=1}^{n} Yi

is called a binomial random variable. Symbolically we write X ~ B(n, p).

Note that Yi defined in (5.1.1) is distributed as B(1, p), which is called a binary or Bernoulli random variable.
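The definition can be checked by simulation; a minimal Python sketch (the function names and parameter choices are mine, not from the text):

```python
import random

def bernoulli(p, rng):
    # One draw of Yi ~ B(1, p): 1 with probability p, 0 with probability q = 1 - p.
    return 1 if rng.random() < p else 0

def binomial_draw(n, p, rng):
    # One draw of X ~ B(n, p) as the sum in (5.1.2) of n independent B(1, p) variables.
    return sum(bernoulli(p, rng) for _ in range(n))

rng = random.Random(0)
n, p, trials = 10, 0.3, 20000
draws = [binomial_draw(n, p, rng) for _ in range(trials)]
mean = sum(draws) / trials   # should be close to n * p = 3
```

Each draw is an integer between 0 and n, and the sample mean settles near np, as one expects for a binomial random variable.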


CONDITIONAL PROBABILITY AND INDEPENDENCE

1.2.2 Axioms of Conditional Probability

The concept of conditional probability is intuitively easy to understand. For example, it makes sense to talk about the conditional probability that the number one will show in a roll of a die given that an odd number has occurred. In the frequency interpretation, this conditional probability can be regarded as the limit of the ratio of the number of times one occurs to the number of times an odd number occurs. In general we shall consider the “conditional probability of A given B,” denoted by P(A | B), for any pair of events A and B in a sample space, provided P(B) > 0, and establish axioms that govern the calculation of conditional probabilities.
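The frequency interpretation above can be checked with a short simulation; a minimal Python sketch of the die example (the variable names are mine, not from the text):

```python
import random

# Simulate many rolls of a fair die; estimate P(one | odd) as the ratio of
# counts described in the text: (# times one occurs) / (# times odd occurs).
rng = random.Random(1)
rolls = [rng.randint(1, 6) for _ in range(60000)]

odd_count = sum(1 for r in rolls if r % 2 == 1)   # event B: an odd number occurs
one_count = sum(1 for r in rolls if r == 1)       # event A ∩ B: a one occurs

cond = one_count / odd_count   # estimate of P(A | B); the exact value is 1/3
```

As the number of rolls grows, the ratio converges to 1/3, the limit described in the frequency interpretation.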

Axioms of Conditional Probability

(In the following axioms it is assumed that P(B) > 0.)


Marginal Density

When we are considering a bivariate random variable (X, Y), the probability pertaining to one of the variables, such as P(x1 < X ≤ x2) or P(y1 < Y ≤ y2), is called the marginal probability. The following relationship between a marginal probability and a joint probability is obviously true.

figure 3.5 Domain of a double integral for Example 3.4.4

(3.4.18) P(x1 < X ≤ x2) = P(x1 < X ≤ x2, −∞ < Y < ∞).

More generally, one may replace x1 < X ≤ x2 on both sides of (3.4.18) by X ∈ S, where S is an arbitrary subset of the real line.

Similarly, when we are considering a bivariate random variable (X, Y), the density function of one of the variables is called the marginal density. Theorem 3.4.1 shows how a marginal density is related to a joint density.
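Numerically, a marginal density is obtained by integrating the joint density over the other variable; a minimal Python sketch using a hypothetical joint density f(x, y) = x + y on the unit square (this example density and the function names are mine, not from the text):

```python
def joint_density(x, y):
    # A hypothetical joint density: f(x, y) = x + y on the unit square.
    return x + y if 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 else 0.0

def marginal_density(x, steps=10000):
    # Marginal density of X: integrate the joint density over y,
    # here via a midpoint Riemann sum over [0, 1].
    h = 1.0 / steps
    return sum(joint_density(x, (j + 0.5) * h) for j in range(steps)) * h

# For this joint density the exact marginal is f(x) = x + 1/2.
approx = marginal_density(0.25)   # exact value: 0.75
```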


NORMAL RANDOM VARIABLES

A random variable X is said to be normally distributed with mean μ and variance σ² if its density is given by

f(x) = [1/(σ√(2π))] exp[−(x − μ)²/(2σ²)], −∞ < x < ∞.

The normal distribution is by far the most important continuous distribution used in statistics. Many reasons for its importance will become apparent as we study its properties below. We should mention that the binomial random variable X defined in Definition 5.1.1 is approximately normally distributed when n is large. This is a special case of the so-called central limit theorem, which we shall discuss in Chapter 6. Examples of the normal approximation of the binomial are given in Section 6.3.

When X has the above density, we write symbolically X ~ N(μ, σ²).

We can verify ∫_{−∞}^{∞} f(x) dx = 1 for all μ and all positive σ by a rather complicated procedure using polar coordinates. See, for example, Hoel (1984, p. 78).
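The normalization can also be checked numerically for particular values of μ and σ; a minimal Python sketch (the parameter values and function names are mine), truncating the integral at μ ± 10σ, which captures essentially all of the mass:

```python
import math

def normal_density(x, mu, sigma):
    # The N(mu, sigma^2) density.
    return math.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def integrate(f, a, b, steps=100000):
    # Midpoint Riemann sum of f over [a, b].
    h = (b - a) / steps
    return sum(f(a + (j + 0.5) * h) for j in range(steps)) * h

mu, sigma = 1.5, 2.0
total = integrate(lambda x: normal_density(x, mu, sigma), mu - 10 * sigma, mu + 10 * sigma)
# total equals 1 to within numerical error, for any choice of mu and positive sigma
```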


Bayes’ Theorem

Bayes’ theorem follows easily from the rules of probability but is listed separately here because of its special usefulness.


THEOREM 2.4.2 (Bayes) Let events A1, A2, . . . , An be mutually exclusive such that P(A1 ∪ A2 ∪ . . . ∪ An) = 1 and P(Ai) > 0 for each i. Let E be an arbitrary event such that P(E) > 0. Then

(2.4.3) P(Ai | E) = P(E | Ai)P(Ai) / P(E).

Proof. From Theorem 2.4.1, we have

P(Ai | E) = P(E ∩ Ai) / P(E).

Since E ∩ A1, E ∩ A2, . . . , E ∩ An are mutually exclusive and their union is equal to E, we have, from axiom (3) of probability,

(2.4.4) P(E) = Σ_{j=1}^{n} P(E ∩ Aj).

Thus the theorem follows from (2.4.3) and (2.4.4) and by noting that P(E ∩ Aj) = P(E | Aj)P(Aj) by Theorem 2.4.1. □
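The computation in the theorem is mechanical once the priors P(Ai) and likelihoods P(E | Ai) are given; a minimal Python sketch (the function name and the two-urn numbers are mine, not from the text):

```python
def bayes_posterior(priors, likelihoods):
    # priors[i] = P(Ai), likelihoods[i] = P(E | Ai); the Ai are assumed
    # mutually exclusive with priors summing to one.
    p_e = sum(p * l for p, l in zip(priors, likelihoods))       # (2.4.4)
    return [p * l / p_e for p, l in zip(priors, likelihoods)]   # (2.4.3)

# Hypothetical two-urn example: an urn is chosen with equal probability;
# E is drawing a red ball, with P(red | urn 1) = 0.8 and P(red | urn 2) = 0.2.
post = bayes_posterior([0.5, 0.5], [0.8, 0.2])   # P(urn 1 | red) = 0.8
```

The posterior probabilities always sum to one, since P(E) in the denominator is exactly the sum of the numerators.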
