7.3.1 Discrete Sample

Suppose we want to estimate the probability (p) that a head will appear for a particular coin; we toss it ten times and a head appears nine times. Call this event A. Then we suspect that the coin is loaded in favor of heads: in other words, we conclude that p = 1/2 is not likely. If p were 1/2, event A would be expected to occur only once in a hundred times, since we have P(A | p = 1/2) = C(10,9)(1/2)^10 ≈ 0.01. In the same situation p = 3/4 is more likely, because P(A | p = 3/4) = C(10,9)(3/4)^9(1/4) ≈ 0.19, and p = 9/10 is even more likely, because P(A | p = 9/10) = C(10,9)(9/10)^9(1/10) ≈ 0.39. Thus it makes sense to call P(A | p) = C(10,9) p^9 (1 − p) the likelihood function of p given event A...
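As a quick check of these numbers, the likelihood P(A | p) = C(10,9) p^9 (1 − p) can be evaluated directly. A minimal sketch in plain Python (the helper name `likelihood` is ours, not from the text) reproduces the three values quoted above:

```python
from math import comb

def likelihood(p, n=10, k=9):
    """Likelihood of p given k heads in n tosses: C(n,k) p^k (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

for p in (0.5, 0.75, 0.9):
    print(f"P(A | p = {p}) = {likelihood(p):.2f}")
# 0.01, 0.19, 0.39 — the values given in the text
```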



In Section 10.2.3 we briefly discussed the problem of choosing between two bivariate regression equations with the same dependent variable. We stated that, other things being equal, it makes sense to choose the equation with the higher R². Here, we consider choosing between two multiple regression equations

(12.5.1) y = Xβ + u₁ and

(12.5.2) y = Sγ + u₂,

where each equation satisfies the assumptions of model (12.1.3). Suppose the vectors β and γ have K and H elements, respectively. If H ≠ K, it no longer makes sense to choose the equation with the higher R², because the greater the number of regressors, the larger R² tends to be. In the extreme case where the number of regressors equals the number of observations, R² = 1...
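The claim that R² can only grow as regressors are added, reaching 1 when their number equals the number of observations, is easy to verify numerically. The sketch below (assuming NumPy; the data and regressors are arbitrary simulated values, chosen only for illustration) regresses a random y on successively more irrelevant regressors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
y = rng.normal(size=n)

def r_squared(X, y):
    """R^2 from OLS of y on X (intercept column already included in X)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)

# Start with an intercept, then add pure-noise regressors one at a time.
X = np.ones((n, 1))
r2 = []
for _ in range(n - 1):
    X = np.hstack([X, rng.normal(size=(n, 1))])
    r2.append(r_squared(X, y))

# R^2 never decreases as columns are added, and hits 1 when X is square
# (number of regressors = number of observations).
print(r2)
```

Adding a column can never increase the residual sum of squares in least squares, which is exactly why R² alone cannot compare equations with different numbers of regressors.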



In the preceding sections we have studied the theory of hypothesis testing. In this section we shall apply it to various practical problems.

EXAMPLE 9.6.1 (mean of binomial) It is expected that a particular coin is biased in such a way that a head is more probable than a tail. We toss this coin ten times and a head comes up eight times. Should we conclude that the coin is biased at the 5% significance level (more precisely, size)? What if the significance level is 10%?

From the wording of the question we know we must put

(9.6.1) H₀: p = 1/2 and H₁: p > 1/2.

From Example 9.4.2, we know that we should use X ~ B(10,p), the number of heads in ten tosses, as the test statistic, and the critical region should be of the form

(9.6.2) R = {c, c + 1, . . . , 10},

where c (the critical value) should...
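The critical value c can be found by computing the size of each candidate region {c, . . . , 10} under H₀. A minimal sketch in plain Python (the helper `tail_prob` is ours, not from the text):

```python
from math import comb

def tail_prob(c, n=10, p=0.5):
    """P(X >= c) for X ~ B(n, p): the size of the region {c, ..., n}."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))

# Smallest c whose rejection region {c, ..., 10} has size <= alpha,
# then the decision for the observed value X = 8.
for alpha in (0.05, 0.10):
    c = next(c for c in range(11) if tail_prob(c) <= alpha)
    print(alpha, c, round(tail_prob(c), 4), "reject H0" if 8 >= c else "accept H0")
```

Since P(X ≥ 8 | p = 1/2) ≈ 0.0547, the region {8, 9, 10} is admissible at the 10% level but not at the 5% level, where c must rise to 9; hence the observed eight heads leads to rejection at 10% but not at 5%.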


Multinomial Model

We illustrate the multinomial model by considering the case of three alternatives, which for convenience we associate with three integers 1, 2, and 3. One example of the three-response model is the commuter’s choice of mode of transportation, where the three alternatives are private car, bus, and train. Another example is the worker’s choice of three types of employment: being fully employed, partially employed, and self-employed.

We extend (13.5.2) to the case of three alternatives as

(13.5.8) U₁ᵢ = x′₁ᵢβ + u₁ᵢ

U₂ᵢ = x′₂ᵢβ + u₂ᵢ

U₃ᵢ = x′₃ᵢβ + u₃ᵢ,

where (u₁ᵢ, u₂ᵢ, u₃ᵢ) are i.i.d. It is assumed that the individual chooses the alternative with the largest utility. Therefore, if we represent the ith person's discrete choice by the variable yᵢ, our model is defined by

P(yᵢ = 1) = ...
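The text leaves the common distribution of the errors unspecified. One convenient choice, used here purely for illustration, is the Type I extreme value (Gumbel) distribution, under which the choice probabilities take the closed multinomial logit form exp(vⱼ)/Σₖ exp(vₖ). The sketch below (NumPy; the systematic utilities are made-up numbers) checks Monte Carlo choice frequencies from the "pick the largest utility" rule against that closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical systematic utilities x_j' beta for one individual,
# three alternatives (e.g. car, bus, train); illustrative values only.
v = np.array([1.0, 0.5, 0.2])

# Draw i.i.d. Gumbel errors and choose the alternative with largest utility.
draws = 100_000
u = rng.gumbel(size=(draws, 3))
choice = np.argmax(v + u, axis=1)
mc_probs = np.bincount(choice, minlength=3) / draws

# Under Gumbel errors the probabilities have the multinomial logit form.
logit = np.exp(v) / np.exp(v).sum()
print(mc_probs, logit)  # the two vectors should be close
```

With normal errors instead, the same simulation would approximate the multinomial probit model, which has no closed-form probabilities.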


Tests for Structural Change

Suppose we have two regression regimes



(10.3.7) y₁ₜ = α₁ + β₁x₁ₜ + u₁ₜ, t = 1, 2, . . . , T₁

(10.3.8) y₂ₜ = α₂ + β₂x₂ₜ + u₂ₜ, t = 1, 2, . . . , T₂,

where each equation satisfies the assumptions of the model (10.1.1). We denote Vu₁ₜ = σ₁² and Vu₂ₜ = σ₂². In addition, we assume that {u₁ₜ} and {u₂ₜ} are normally distributed and independent of each other. This two-regression model is useful to analyze the possible occurrence of a structural change from one period to another. For example, (10.3.7) may represent a relationship between y and x in the prewar period and (10.3.8) in the postwar period.

First, we study the test of the null hypothesis H₀: β₁ = β₂, assuming σ₁² = σ₂² under either the null or the alternative hypothesis...
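One standard way to carry out such a test under the equal-variance assumption is a t statistic for the difference of the two OLS slopes, using a pooled variance estimate; under H₀ it follows Student's t with T₁ + T₂ − 4 degrees of freedom. The sketch below (NumPy; the data are simulated with H₀ true, and the helper `ols_slope` is ours) is an illustration under these assumptions, not the book's exact derivation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(x, y):
    """Slope, residual sum of squares, and S = sum((x - xbar)^2) from bivariate OLS."""
    xd = x - x.mean()
    b = (xd @ (y - y.mean())) / (xd @ xd)
    a = y.mean() - b * x.mean()
    rss = np.sum((y - a - b * x) ** 2)
    return b, rss, xd @ xd

# Simulate two regimes with a common slope (H0 true) and sigma1 = sigma2.
T1, T2, beta = 40, 50, 1.5
x1, x2 = rng.normal(size=T1), rng.normal(size=T2)
y1 = 0.3 + beta * x1 + rng.normal(size=T1)
y2 = -0.1 + beta * x2 + rng.normal(size=T2)

b1, rss1, S1 = ols_slope(x1, y1)
b2, rss2, S2 = ols_slope(x2, y2)

# Pooled variance estimate (equal-variance assumption), then the t statistic.
s2 = (rss1 + rss2) / (T1 + T2 - 4)
t = (b1 - b2) / np.sqrt(s2 * (1 / S1 + 1 / S2))
print(t)  # small in absolute value when H0 holds
```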


Continuous Sample

For the continuous case, the principle of the maximum likelihood estimator is essentially the same as for the discrete case, and we need to modify Definition 7.3.1 only slightly.

DEFINITION 7.3.2 Let (X₁, X₂, . . . , Xₙ) be a random sample on a continuous population with a density function f(· | θ), where θ = (θ₁, θ₂, . . . , θ_K), and let xᵢ be the observed value of Xᵢ. Then we call L = ∏ᵢ₌₁ⁿ f(xᵢ | θ) the likelihood function of θ given (x₁, x₂, . . . , xₙ) and the value of θ that maximizes L, the maximum likelihood estimator.

EXAMPLE 7.3.3 Let {Xᵢ}, i = 1, 2, . . . , n, be a random sample on N(μ, σ²) and let {xᵢ} be their observed values. Then the likelihood function is given by

(7.3.13) L = ∏ᵢ₌₁ⁿ (2πσ²)^(−1/2) exp[−(xᵢ − μ)²/(2σ²)],

so that

(7.3.14) log L = −(n/2) log(2π) − (n/2) log σ² − (1/(2σ²)) Σᵢ₌₁ⁿ (xᵢ − μ)².
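As a numerical illustration of this log likelihood, the sketch below (NumPy; the data are simulated, and the function name is ours) evaluates log L at the maximum likelihood estimates μ̂ = x̄ and σ̂² = (1/n) Σ(xᵢ − x̄)², and confirms that nearby parameter values do no better:

```python
import numpy as np

def log_likelihood(mu, sigma2, x):
    """log L for a normal sample: -(n/2)log(2*pi) - (n/2)log(sigma2)
    - sum((x - mu)^2) / (2*sigma2)."""
    n = len(x)
    return (-n / 2 * np.log(2 * np.pi) - n / 2 * np.log(sigma2)
            - np.sum((x - mu) ** 2) / (2 * sigma2))

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=200)

# The maximizing values are the sample mean and the (biased) sample variance.
mu_hat = x.mean()
sig2_hat = np.mean((x - mu_hat) ** 2)

# log L at the MLE is at least as large as at perturbed parameter values.
print(log_likelihood(mu_hat, sig2_hat, x))
```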

