INTRODUCTION TO STATISTICS AND ECONOMETRICS

Estimators in General

We may sometimes want to estimate a parameter of a distribution other than a moment. An example is the probability (p) that the ace will turn up in a roll of a die. A “natural” estimator in this case is the ratio of the number of times the ace appears in n rolls to n—denote it by p̂. In general, we estimate a parameter θ by some function of the sample. Mathematically we express it as

(7.1.1) θ̂ = φ(X1, X2, . . . , Xn).

We call any function of a sample a statistic. Thus an estimator is a statistic used to estimate a parameter. Note that an estimator is a random variable. Its observed value is called an estimate.

The p̂ just defined can be expressed as a function of the sample. Let Xi be the outcome of the ith roll of a die and define Yi = 1 if Xi = 1 and Yi = 0 otherwise...
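As a concrete illustration of (7.1.1), the sketch below simulates n rolls of a fair die and computes p̂ as the sample mean of the indicator variables Yi; the function name and the choice of n are illustrative assumptions, not taken from the text.

import random

def estimate_ace_probability(n, seed=0):
    """Roll a fair die n times and return p-hat, the proportion of aces.

    p-hat = (1/n) * (Y_1 + ... + Y_n), where Y_i = 1 if the i-th roll is an ace.
    """
    rng = random.Random(seed)
    rolls = [rng.randint(1, 6) for _ in range(n)]        # X_1, ..., X_n
    indicators = [1 if x == 1 else 0 for x in rolls]     # Y_1, ..., Y_n
    return sum(indicators) / n

# p-hat is a random variable; a single observed value of it is an estimate.
print(estimate_ace_probability(n=6000))   # should be close to 1/6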


DETERMINANTS AND INVERSES

Throughout this section, all the matrices are square and n × n.

Before we give a formal definition of the determinant of a square matrix, let us give some examples. The determinant of a 1 × 1 matrix, or a scalar, is the scalar itself. Consider a 2 × 2 matrix

A = | a11  a12 |
    | a21  a22 |

Its determinant, denoted by |A| or det A, is defined by

(11.3.1) |A| = a11a22 − a21a12.

The determinant of a 3 × 3 matrix

A = | a11  a12  a13 |
    | a21  a22  a23 |
    | a31  a32  a33 |

is given by

(11.3.2) |A| = a11 | a22  a23 | − a21 | a12  a13 | + a31 | a12  a13 |
                   | a32  a33 |       | a32  a33 |       | a22  a23 |

= a11a22a33 − a11a32a23 − a21a12a33 + a21a32a13 + a31a12a23 − a31a22a13.
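As a sketch of how (11.3.1) and (11.3.2) generalize, the function below computes a determinant by cofactor expansion along the first column; the recursion mirrors the inductive definition that follows, and the function name is our own.

def det(A):
    """Determinant of a square matrix (a list of rows) by cofactor expansion
    along the first column: |A| is the sum over i of (-1)**i * A[i][0] times
    the determinant of the minor that deletes row i and column 0.
    A 1 x 1 determinant is the scalar itself.
    """
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for i in range(n):
        minor = [row[1:] for k, row in enumerate(A) if k != i]
        total += (-1) ** i * A[i][0] * det(minor)
    return total

# Agrees with the six-term expansion in (11.3.2):
print(det([[1, 2, 3],
           [4, 5, 6],
           [7, 8, 10]]))   # -3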

Now we present a formal definition, given inductively on the assumption that the determinant of an (n − 1) × (n − 1) matrix has already been defined.

DEFINITION 11...


CONFIDENCE INTERVALS

We shall assume that confidence is a number between 0 and 1 and use it in statements such as “a parameter θ lies in the interval [a, b] with 0.95 confidence,” or, equivalently, “a 0.95 confidence interval for θ is [a, b].” A confidence interval is constructed using some estimator of the parameter in question. Although some textbooks define it in a more general way, we shall define a confidence interval mainly when the estimator used to construct it is either normal or asymptotically normal. This restriction is not a serious one, because most reasonable estimators are at least asymptotically normal. (An exception occurs in Example 8.2.5, where a chi-square distribution is used to construct a confidence interval concerning a variance...
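As a small illustration of the construction described above, the sketch below forms a 0.95 confidence interval around an asymptotically normal estimator, here the sample proportion from the die example; the numbers and names are illustrative assumptions, not taken from the text.

import math
from statistics import NormalDist

def normal_confidence_interval(estimate, std_error, confidence=0.95):
    """[estimate - z*se, estimate + z*se] for a normal or asymptotically normal estimator."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)   # about 1.96 when confidence = 0.95
    return estimate - z * std_error, estimate + z * std_error

# Hypothetical example: p-hat after n die rolls, with estimated variance p-hat(1 - p-hat)/n.
n, aces = 6000, 1013
p_hat = aces / n
std_error = math.sqrt(p_hat * (1 - p_hat) / n)
print(normal_confidence_interval(p_hat, std_error))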


Error Components Model

The error components model is useful when we wish to pool time-series and cross-section data. For example, we may want to estimate production functions using data collected on the annual inputs and outputs of many firms, or demand functions using data on the quantities and prices collected monthly from many consumers. By pooling time-series and cross-section data, we hope to be able to estimate the parameters of a relationship such as a production function or a demand function more efficiently than by using the two sets of data separately. Still, we should not treat time-series and cross-section data homogeneously. At the least, we should try to account for the difference by introducing the specific effects of time and cross-section into the error term of the regression, as follows:

(1...
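The equation is truncated in this excerpt; a common error components specification, assumed here only for illustration, writes the error for cross-section unit i in period t as u_it = μ_i + λ_t + ε_it. The sketch below simulates pooled data with that structure; every name and parameter value is our own choice.

import random

def simulate_error_components(n_units, n_periods, beta=1.0,
                              sigma_mu=1.0, sigma_lam=0.5, sigma_eps=0.2, seed=0):
    """Simulate y_it = beta * x_it + u_it with u_it = mu_i + lam_t + eps_it,
    where mu_i is a cross-section-specific effect and lam_t a time-specific effect.
    """
    rng = random.Random(seed)
    mu = [rng.gauss(0, sigma_mu) for _ in range(n_units)]
    lam = [rng.gauss(0, sigma_lam) for _ in range(n_periods)]
    data = []
    for i in range(n_units):
        for t in range(n_periods):
            x = rng.gauss(0, 1)
            y = beta * x + mu[i] + lam[t] + rng.gauss(0, sigma_eps)
            data.append((i, t, x, y))
    return data

panel = simulate_error_components(n_units=50, n_periods=10)
print(len(panel))   # 500 pooled time-series/cross-section observations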


Properties of α̂ and β̂

First, we obtain the means and the variances of the least squares estimators α̂ and β̂. For this purpose it is convenient to use the formulae (10.2.12) and (10.2.16) rather than (10.2.4) and (10.2.5).

Inserting (10.1.1) into (10.2.12) and using (10.2.17) yields

(10.2.19) β̂ − β = Σ x_t*u_t / Σ (x_t*)².

Since Eu_t = 0 and {x_t*} are constants by our assumptions, we have from (10.2.19) and Theorem 4.1.6,

(10.2.20) Eβ̂ = β.

In other words, β̂ is an unbiased estimator of β. Similarly, inserting (10.1.1) into (10.2.16) and using (10.2.18) yields

(10.2.21) α̂ − α = (1/T) Σ u_t − (β̂ − β) x̄,

which implies

(10.2.22) Eα̂ = α.

Using (10.2.19), the variance of β̂ can be evaluated as follows:

(10.2.23) Vβ̂ = [1 / (Σ (x_t*)²)²] V(Σ x_t*u_t)   by Theorem 4.2.1

= [1 / (Σ (x_t*)²)²] σ² Σ (x_t*)²   by Theorem 4.3.3

= σ² / Σ (x_t*)².

Similarly, we obtain from (10.2.21)

(10.2.24) Vα̂ = σ² Σ x_t² / [T Σ (x_t*)²].
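A minimal Monte Carlo check of (10.2.20) and (10.2.23), under assumed values of α, β, σ and a fixed set of regressors of our own choosing: repeated samples from y_t = α + βx_t + u_t give a simulated mean and variance of β̂ that should be close to β and σ²/Σ(x_t*)².

import random
from statistics import mean, pvariance

alpha, beta, sigma = 2.0, 0.5, 1.0
x = [float(t) for t in range(1, 21)]              # fixed regressors x_1, ..., x_T
x_bar = mean(x)
x_star = [xt - x_bar for xt in x]                 # x_t* = x_t - x-bar
s_xx = sum(xs ** 2 for xs in x_star)              # sum of (x_t*)^2

rng = random.Random(0)
beta_hats = []
for _ in range(20000):
    u = [rng.gauss(0, sigma) for _ in x]
    y = [alpha + beta * xt + ut for xt, ut in zip(x, u)]
    y_bar = mean(y)
    beta_hats.append(sum(xs * (yt - y_bar) for xs, yt in zip(x_star, y)) / s_xx)

print(mean(beta_hats))         # close to beta = 0.5          (10.2.20)
print(pvariance(beta_hats))    # close to sigma^2 / s_xx      (10.2.23)
print(sigma ** 2 / s_xx)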

How good are the least squares est...


Nonparametric Estimation

In parametric estimation we can use two methods.

(1) Distribution-specific method. In the distribution-specific method, the distribution is assumed to belong to a class of functions that are characterized by a fixed and finite number of parameters—for example, normal—and these parameters are estimated.

(2) Distribution-free method. In the distribution-free method, the distribution is not specified and the first few moments are estimated.
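As a concrete contrast between the two methods, the sketch below applies each to the same hypothetical sample: the distribution-specific route assumes normality and estimates its two parameters, while the distribution-free route estimates the first few moments without naming a family. All names and numbers are illustrative.

from statistics import mean, pvariance

sample = [2.3, 1.9, 2.8, 2.1, 2.6, 2.4, 1.7, 2.5]    # hypothetical observations

# (1) Distribution-specific: assume N(mu, sigma^2) and estimate its parameters.
mu_hat = mean(sample)
sigma2_hat = pvariance(sample)

# (2) Distribution-free: estimate the first few moments, with no family assumed.
m1 = mean(sample)
m2 = mean(xi ** 2 for xi in sample)                  # second moment about the origin

# For the normal family the two routes happen to rest on the same sample quantities;
# other families would map the estimated moments into parameters differently.
print(mu_hat, sigma2_hat, m1, m2)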

In nonparametric estimation we attempt to estimate the probability distribution itself. The estimation of a probability distribution is simple for a discrete random variable taking only a few values but poses problems for a continuous random variable...
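The following sketch illustrates the contrast drawn above, using our own simulated data: for a discrete variable the empirical probabilities estimate the distribution directly, while for a continuous variable the observations rarely repeat, so one crude nonparametric estimate of the density groups them into histogram bins.

import random
from collections import Counter

rng = random.Random(0)

# Discrete case: the empirical pmf of die rolls estimates the distribution directly.
rolls = [rng.randint(1, 6) for _ in range(600)]
counts = Counter(rolls)
empirical_pmf = {face: counts[face] / len(rolls) for face in range(1, 7)}

# Continuous case: estimate a density by counting observations in bins.
data = [rng.gauss(0, 1) for _ in range(1000)]
lo, hi, n_bins = -4.0, 4.0, 16
width = (hi - lo) / n_bins
hist = [0] * n_bins
for xi in data:
    if lo <= xi < hi:
        hist[int((xi - lo) // width)] += 1
density = [c / (len(data) * width) for c in hist]    # histogram density estimate

print(empirical_pmf)
print(density)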
