# Some Test Principles Suggested in the Statistics Literature

We start by introducing some notation and concepts. Suppose we have $n$ independent observations $y_1, y_2, \ldots, y_n$ on a random variable $Y$ with density function $f(y; \theta)$, where $\theta$ is a $p \times 1$ parameter vector with $\theta \in \Theta \subseteq \mathbb{R}^p$. It is assumed that $f(y; \theta)$ satisfies the regularity conditions stated in Rao (1973, p. 364) and Serfling (1980, p. 144). The likelihood function is given by

$$L(\theta; y) = \prod_{i=1}^{n} f(y_i; \theta),$$

where $y = (y_1, y_2, \ldots, y_n)'$ denotes the sample.
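As a numerical illustration (not from the text), the likelihood of an i.i.d. sample is simply the product of the density evaluated at each observation. The choice of a $N(\theta, 1)$ density and the sample values below are hypothetical, made only to keep the sketch concrete.

```python
import math

def likelihood(y, theta, density):
    """L(theta; y) = product of f(y_i; theta) over the i.i.d. sample y."""
    value = 1.0
    for yi in y:
        value *= density(yi, theta)
    return value

# Hypothetical choice of f(y; theta): a N(theta, 1) density with scalar theta.
def normal_density(yi, theta):
    return math.exp(-0.5 * (yi - theta) ** 2) / math.sqrt(2.0 * math.pi)

sample = [0.3, -0.1, 0.7]  # hypothetical observations
print(likelihood(sample, 0.0, normal_density))
```

In practice one usually works with the log-likelihood, replacing the product with a sum to avoid numerical underflow for large $n$.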

Suppose we are interested in testing a simple hypothesis $H_0: \theta = \theta_0$ against another simple hypothesis $H_1: \theta = \theta_1$. Let $S$ denote the sample space. In standard test procedures, $S$ is partitioned into two regions, $\omega$ and its complement $\omega^c$. We reject the null hypothesis if the sample $y \in \omega$; otherwise, we do not reject $H_0$. Let us define a test function $\phi(y)$ as $\phi(y) = 1$ when we reject $H_0$ and $\phi(y) = 0$ when we do not reject $H_0$. Then,

$$\phi(y) = \begin{cases} 1 & \text{if } y \in \omega, \\ 0 & \text{if } y \in \omega^c. \end{cases}$$

Therefore, the probability of rejecting $H_0$ is given by

$$\gamma(\theta) = E_\theta[\phi(y)],$$

where $E_\theta$ denotes expectation when $f(y; \theta)$ is the probability density function. The type-I and type-II error probabilities are given by, respectively,

$$\Pr(\text{Reject } H_0 \mid H_0 \text{ is true}) = E_{\theta_0}[\phi(y)] = \gamma(\theta_0) \tag{2.4}$$

and

$$\Pr(\text{Accept } H_0 \mid H_1 \text{ is true}) = E_{\theta_1}[1 - \phi(y)] = 1 - \gamma(\theta_1). \tag{2.5}$$
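The size $\gamma(\theta_0)$ and power $\gamma(\theta_1)$ of a given test can be estimated by Monte Carlo simulation. The sketch below (not from the text) uses a hypothetical setting: i.i.d. $N(\theta, 1)$ data with $\theta_0 = 0$, $\theta_1 = 0.5$, $n = 20$, and a test that rejects $H_0$ when the sample mean exceeds a critical value $c$ calibrated to give size $\alpha \approx 0.05$.

```python
import math
import random

def phi(y, c):
    """Test function: reject H0 (return 1) when the sample mean exceeds c."""
    return 1 if sum(y) / len(y) > c else 0

def gamma(theta, c, n=20, reps=20000, seed=0):
    """Monte Carlo estimate of gamma(theta) = E_theta[phi(y)] under N(theta, 1)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        y = [rng.gauss(theta, 1.0) for _ in range(n)]
        rejections += phi(y, c)
    return rejections / reps

# Calibrate c for size alpha = 0.05: c = z_{0.95} / sqrt(n), with z_{0.95} ~ 1.645.
n = 20
c = 1.645 / math.sqrt(n)
size = gamma(0.0, c, n)   # estimates gamma(theta_0), the type-I error probability
power = gamma(0.5, c, n)  # estimates gamma(theta_1), the power
print(size, power)
```

The estimated size should be close to 0.05, while the power depends on how far $\theta_1$ lies from $\theta_0$, illustrating the trade-off discussed next.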

Note that $\gamma(\theta_1)$ is the probability of making a correct decision, and it is called the power of the test. An ideal situation would be if we could simultaneously minimize $\gamma(\theta_0)$ and maximize $\gamma(\theta_1)$. However, because of the inverse relationship between the type-I and type-II error probabilities, and because Neyman and Pearson (1933) wanted to avoid committing the error of the first kind and did not want $\gamma(\theta_0)$ to exceed a preassigned value, they suggested maximizing the power $\gamma(\theta_1)$ while keeping $\gamma(\theta_0)$ at a low value, say $\alpha$; that is, maximize $E_{\theta_1}[\phi(y)]$ subject to $E_{\theta_0}[\phi(y)] = \alpha$ [see also Neyman, 1980, pp. 4-5]. A test $\phi^*(y)$ is called a most powerful (MP) test if $E_{\theta_1}[\phi^*(y)] \geq E_{\theta_1}[\phi(y)]$ for any $\phi(y)$ satisfying $E_{\theta_0}[\phi(y)] = \alpha$. If an MP test maximizes power uniformly in $\theta \in \Theta_1 \subset \Theta$, the test is called a uniformly most powerful (UMP) test. A UMP test, however, rarely exists, and it is therefore necessary to restrict optimal tests to a suitable subclass by requiring the test to satisfy other criteria, such as local optimality, unbiasedness, and invariance. For the Neyman-Pearson (N-P) lemma, there is only one side condition, namely the size ($\alpha$) of the test. Once the test is restricted further, there will be more than one side condition in addition to the size, and one must use the generalized N-P lemma given in Neyman and Pearson (1936).
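By the N-P lemma, the MP test of $H_0: \theta = \theta_0$ against $H_1: \theta = \theta_1$ rejects when the likelihood ratio $L(\theta_1; y)/L(\theta_0; y)$ exceeds a constant $k$ chosen to make the size equal to $\alpha$. A minimal sketch, assuming i.i.d. $N(\theta, 1)$ data (a hypothetical choice, not a model from the text), where the log ratio has a simple closed form:

```python
def log_likelihood_ratio(y, theta0, theta1):
    """log[ L(theta1; y) / L(theta0; y) ] for i.i.d. N(theta, 1) observations."""
    n = len(y)
    return (theta1 - theta0) * sum(y) - 0.5 * n * (theta1 ** 2 - theta0 ** 2)

def mp_test(y, theta0, theta1, log_k):
    """phi*(y) of the N-P lemma: reject H0 iff the likelihood ratio exceeds k."""
    return 1 if log_likelihood_ratio(y, theta0, theta1) > log_k else 0
```

Note that for $\theta_1 > \theta_0$ the log ratio is increasing in $\sum_i y_i$, so in this example the MP test reduces to rejecting for large sample means; since the same rejection region results for every $\theta_1 > \theta_0$, the test is in fact UMP against one-sided alternatives in this model.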