Introduction to the Mathematical and Statistical Foundations of Econometrics

Limsup and Liminf

Let $a_n$ ($n = 1, 2, \ldots$) be a sequence of real numbers, and define the sequence $b_n$ as

$$b_n = \sup_{m \ge n} a_m. \tag{II.1}$$

Then $b_n$ is a nonincreasing sequence: $b_n \ge b_{n+1}$ because, if $a_n$ is greater than the smallest upper bound of $a_{n+1}, a_{n+2}, a_{n+3}, \ldots$, then $a_n$ is the maximum of $a_n, a_{n+1}, a_{n+2}, a_{n+3}, \ldots$; hence, $b_n = a_n \ge b_{n+1}$ and, if not, then $b_n = b_{n+1}$. Nonincreasing sequences always have a limit, although the limit may be $-\infty$. The limit of $b_n$ in (II.1) is called the limsup of $a_n$:

$$\limsup_{n \to \infty} a_n = \lim_{n \to \infty}\left(\sup_{m \ge n} a_m\right). \tag{II.2}$$

Note that because $b_n$ is nonincreasing, the limit of $b_n$ is equal to the infimum of $b_n$. Therefore, the limsup of $a_n$ may also be defined as

$$\limsup_{n \to \infty} a_n = \inf_{n \ge 1}\left(\sup_{m \ge n} a_m\right). \tag{II.3}$$

Note that the limsup may be $+\infty$ or $-\infty$, for example, in the cases $a_n = n$ and $a_n = -n$, respectively.

Similarly, the liminf of $a_n$ is defined by

$$\liminf_{n \to \infty} a_n = \lim_{n \to \infty}\left(\inf_{m \ge n} a_m\right) \tag{II.4}$$

and, equivalently,

$$\liminf_{n \to \infty} a_n = \sup_{n \ge 1}\left(\inf_{m \ge n} a_m\right). \tag{II.5}$$
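These definitions can be made concrete numerically: approximating the tail supremum $b_n = \sup_{m \ge n} a_m$ over a long finite tail shows that $b_n$ is nonincreasing and settles at the limsup. A minimal Python sketch (the example sequence $a_n = (-1)^n(1 + 1/n)$ and the tail length are illustrative choices, not from the text):

```python
def tail_sup(a, n, tail=10_000):
    """Approximate b_n = sup_{m >= n} a_m over a finite tail of the sequence."""
    return max(a(m) for m in range(n, n + tail))

def tail_inf(a, n, tail=10_000):
    """Approximate inf_{m >= n} a_m over a finite tail of the sequence."""
    return min(a(m) for m in range(n, n + tail))

# Example: a_n = (-1)^n (1 + 1/n) has no limit, but
# limsup a_n = 1 and liminf a_n = -1.
a = lambda n: (-1) ** n * (1 + 1 / n)

# b_n is nonincreasing in n, per (II.1), and tends to the limsup, per (II.2):
for n in (1, 10, 100, 1000):
    print(n, tail_sup(a, n), tail_inf(a, n))
```

The printed tail suprema decrease toward 1 while the tail infima increase toward $-1$, matching (II.2)-(II.5) for this sequence.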


Distributions Related to the Standard Normal Distribution

The standard normal distribution generates, via various transformations, a few other distributions such as the chi-square, t, Cauchy, and F distributions. These distributions are fundamental in testing statistical hypotheses, as we will see in Chapters 5, 6, and 8.

4.6.1. The Chi-Square Distribution

Let $X_1, \ldots, X_n$ be independent $N(0, 1)$-distributed random variables, and let

$$Y_n = \sum_{j=1}^{n} X_j^2. \tag{4.30}$$


The distribution of $Y_n$ is called the chi-square distribution with $n$ degrees of freedom and is denoted by $\chi_n^2$ or $\chi^2(n)$. Its distribution and density functions can be derived recursively, starting from the case $n = 1$:

$$G_1(y) = P[Y_1 \le y] = P\!\left[X_1^2 \le y\right] = P\!\left[-\sqrt{y} \le X_1 \le \sqrt{y}\,\right] = \int_{-\sqrt{y}}^{\sqrt{y}} f(x)\,dx = 2\int_{0}^{\sqrt{y}} f(x)\,dx \quad \text{for } y > 0,$$

$$G_1(y) = 0 \quad \text{for } y \le 0,$$

where $f(x)$ is defined by (4.28); hence,

$$g_1(y) = G_1'(y) = f\!\left(\sqrt{y}\,\right)\big/\ldots$$
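The transformation in (4.30) and the case $n = 1$ above can be checked by simulation. A hedged sketch (sample sizes and seed are arbitrary choices): sums of $n$ squared standard normals should have the chi-square moments $E[Y_n] = n$ and $\operatorname{var}(Y_n) = 2n$, and the derivation above gives $G_1(y) = 2\Phi(\sqrt{y}) - 1$ for $y > 0$, with $\Phi$ the standard normal distribution function.

```python
import math
import random

random.seed(0)

n, reps = 3, 200_000
# Y_n = X_1^2 + ... + X_n^2 with X_i ~ N(0, 1) should follow chi^2(n),
# which has mean n and variance 2n.
draws = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n)) for _ in range(reps)]
mean = sum(draws) / reps
var = sum((y - mean) ** 2 for y in draws) / reps
print(mean, var)  # close to 3 and 6

# Case n = 1: G_1(y) = 2 * Phi(sqrt(y)) - 1 for y > 0, via math.erf.
def G1(y):
    return 2 * 0.5 * (1 + math.erf(math.sqrt(y) / math.sqrt(2))) - 1

# Compare with the empirical distribution function of X^2 at y = 1.
draws1 = [random.gauss(0.0, 1.0) ** 2 for _ in range(reps)]
emp = sum(1 for y in draws1 if y <= 1.0) / reps
print(emp, G1(1.0))  # both close to 0.6827
```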


Testing Parameter Restrictions

8.5.1. The Pseudo t-Test and the Wald Test

In view of Theorem 8.2 and Assumption 8.3, the matrix $H$ can be estimated consistently by the matrix $\hat{H}$ defined in (8.53).

If we denote the $i$th column of the unit matrix $I_m$ by $e_i$, it now follows from (8.53), Theorem 8.4, and the results in Chapter 6 that

Theorem 8.5: (Pseudo t-test) Under Assumptions 8.1-8.3, $t_i = \sqrt{n}\, e_i^{\mathrm{T}} \hat{\theta} \big/ \sqrt{e_i^{\mathrm{T}} \hat{H}^{-1} e_i} \to_d N(0, 1)$ if $e_i^{\mathrm{T}} \theta_0 = 0$.

Thus, the null hypothesis $H_0\!: e_i^{\mathrm{T}} \theta_0 = 0$, which amounts to the hypothesis that the $i$th component of $\theta_0$ is zero, can now be tested by the pseudo $t$-value $t_i$ in the same way as for M-estimators.

Next, consider the partition

$$\theta_0 = \left(\theta_{1,0}^{\mathrm{T}}, \theta_{2,0}^{\mathrm{T}}\right)^{\mathrm{T}}, \quad \theta_{1,0} \in \mathbb{R}^{m-r}, \quad \theta_{2,0} \in \mathbb{R}^{r}, \tag{8.54}$$

and suppose that we want to test the null hypothesis $\theta_{2,0} = 0$...
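The mechanics of the pseudo $t$-test can be illustrated in the simplest scalar case. The sketch below is an illustrative assumption, not the book's example: for the $N(\theta, 1)$ likelihood the ML estimator is the sample mean and the Fisher information is $H = 1$, so the statistic reduces to $t = \sqrt{n}\,\hat{\theta}/\sqrt{H^{-1}}$, and rejecting when $|t| > 1.96$ should have size about 5% under the null.

```python
import math
import random

random.seed(1)

def pseudo_t(x):
    """Pseudo t-value for H_0: theta_0 = 0 under the N(theta, 1) model."""
    n = len(x)
    theta_hat = sum(x) / n  # ML estimator of theta (the sample mean)
    H_hat = 1.0             # Fisher information of N(theta, 1), variance known
    return math.sqrt(n) * theta_hat / math.sqrt(1.0 / H_hat)

# Under H_0: theta_0 = 0 the statistic is asymptotically N(0, 1), so the
# two-sided test |t| > 1.96 should reject in about 5% of replications.
reps, n = 5_000, 200
rejections = 0
for _ in range(reps):
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    if abs(pseudo_t(x)) > 1.96:
        rejections += 1
print(rejections / reps)  # close to 0.05
```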


Confidence Intervals

Because estimators are approximations of unknown parameters, the question arises of how close they are. I will answer this question for the sample mean and the sample variance in the case of a random sample $X_1, X_2, \ldots, X_n$ from the $N(\mu, \sigma^2)$ distribution.

It is almost trivial that $\bar{X} \sim N(\mu, \sigma^2/n)$; hence,

$$\sqrt{n}(\bar{X} - \mu)/\sigma \sim N(0, 1). \tag{5.19}$$

Therefore, for a given $\alpha \in (0, 1)$ there exists a $\beta > 0$ such that

$$P\!\left[|\bar{X} - \mu| \le \beta\sigma/\sqrt{n}\right] = P\!\left[\left|\sqrt{n}(\bar{X} - \mu)/\sigma\right| \le \beta\right] = \int_{-\beta}^{\beta} \frac{\exp(-u^2/2)}{\sqrt{2\pi}}\,du = 1 - \alpha. \tag{5.20}$$

For example, if we choose $\alpha = 0.05$, then $\beta = 1.96$ (see Appendix IV, Table IV.3), and thus in this case

$$P\!\left[\bar{X} - 1.96\sigma/\sqrt{n} \le \mu \le \bar{X} + 1.96\sigma/\sqrt{n}\right] = 0.95.$$

The interval $\left[\bar{X} - 1.96\sigma/\sqrt{n},\ \bar{X} + 1.96\sigma/\sqrt{n}\right]$ is called the 95% confidence interval of $\mu$...
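The coverage claim above can be verified by simulation. A minimal sketch (the values of $\mu$, $\sigma$, $n$, and the replication count are arbitrary choices): draw many samples, build the interval $\bar{X} \pm 1.96\sigma/\sqrt{n}$ each time, and count how often it contains $\mu$.

```python
import math
import random

random.seed(2)

# Coverage check for the 95% interval derived above; mu, sigma, n,
# and the number of replications are illustrative choices.
mu, sigma, n, reps = 3.0, 2.0, 25, 20_000
half = 1.96 * sigma / math.sqrt(n)  # half-width of the interval

hits = 0
for _ in range(reps):
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    if xbar - half <= mu <= xbar + half:
        hits += 1

print(hits / reps)  # should be close to 0.95
```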


Inner Product, Orthogonal Bases, and Orthogonal Matrices

It follows from (I.10) that the cosine of the angle $\gamma$ between the vectors $x$ in (I.2) and $y$ in (I.5) is

$$\cos(\gamma) = \frac{\sum_{j=1}^{n} x_j y_j}{\|x\| \cdot \|y\|} = \frac{x^{\mathrm{T}} y}{\|x\| \cdot \|y\|}. \tag{I.41}$$


Figure I.5. Orthogonalization.

Definition I.13: The quantity $x^{\mathrm{T}} y$ is called the inner product of the vectors $x$ and $y$.

If $x^{\mathrm{T}} y = 0$, then $\cos(\gamma) = 0$; hence, $\gamma = \pi/2$ or $\gamma = 3\pi/2$. This corresponds to angles of 90° and 270°, respectively; hence, $x$ and $y$ are perpendicular. Such vectors are said to be orthogonal.

Definition I.14: Conformable vectors $x$ and $y$ are orthogonal if their inner product $x^{\mathrm{T}} y$ is zero. Moreover, they are orthonormal if, in addition, their lengths are 1: $\|x\| = \|y\| = 1$.

In Figure I...
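The inner product, the angle formula (I.41), and Definitions I.13-I.14 are easy to exercise on small concrete vectors; the sketch below uses illustrative vectors of my own choosing.

```python
import math

def inner(x, y):
    """Inner product x'y of two conformable vectors."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Length ||x|| = sqrt(x'x)."""
    return math.sqrt(inner(x, x))

# Orthogonal pair: inner product zero, per Definition I.14.
x, y = [1.0, 0.0], [0.0, 2.0]
print(inner(x, y))  # 0.0

# Angle via (I.41): cos(gamma) = x'y / (||x|| ||y||).
u, v = [1.0, 1.0], [1.0, 0.0]
cos_gamma = inner(u, v) / (norm(u) * norm(v))
print(round(math.degrees(math.acos(cos_gamma))))  # 45

# Orthonormal pair: orthogonal and both of unit length.
e1 = [1 / math.sqrt(2), 1 / math.sqrt(2)]
e2 = [1 / math.sqrt(2), -1 / math.sqrt(2)]
print(abs(inner(e1, e2)) < 1e-12, abs(norm(e1) - 1) < 1e-12)
```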


Conditioning on Increasing Sigma-Algebras

Consider a random variable $Y$ defined on the probability space $\{\Omega, \mathscr{F}, P\}$ satisfying $E[|Y|] < \infty$, and let $\mathscr{F}_n$ be a nondecreasing sequence of sub-$\sigma$-algebras of $\mathscr{F}$: $\mathscr{F}_n \subset \mathscr{F}_{n+1} \subset \mathscr{F}$. The question I will address is, What is the limit of $E[Y|\mathscr{F}_n]$ for $n \to \infty$? As will be shown in the next section, the answer to this question is fundamental for time series econometrics.

We have seen in Chapter 1 that the union of $\sigma$-algebras is not necessarily a $\sigma$-algebra itself. Thus, $\bigcup_{n=1}^{\infty} \mathscr{F}_n$ may not be a $\sigma$-algebra. Therefore, let

$$\mathscr{F}_\infty = \sigma\!\left(\bigcup_{n=1}^{\infty} \mathscr{F}_n\right);$$

that is, $\mathscr{F}_\infty$ is the smallest $\sigma$-algebra containing $\bigcup_{n=1}^{\infty} \mathscr{F}_n$. Clearly, $\mathscr{F}_\infty \subset \mathscr{F}$ because the latter also contains $\bigcup_{n=1}^{\infty} \mathscr{F}_n$.

The answer to our question is now as follows:

Theorem 3...
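A finite analogue, entirely of my own construction, conveys the idea of conditioning on increasing $\sigma$-algebras: let $\mathscr{F}_n$ be generated by the first $n$ of $N$ coin flips and let $Y$ be the total number of heads. Then $E[Y|\mathscr{F}_n]$ equals the heads observed so far plus the expected count of the unseen flips, and once all information is revealed the conditional expectation recovers $Y$ itself.

```python
import random

random.seed(3)

# F_n = sigma-algebra generated by the first n flips; Y = total heads in N flips.
N = 10
flips = [random.randint(0, 1) for _ in range(N)]
Y = sum(flips)

def cond_exp(n):
    """E[Y | F_n]: heads seen in the first n flips + 1/2 per remaining flip."""
    return sum(flips[:n]) + 0.5 * (N - n)

# The conditional expectations march from E[Y] = N/2 toward Y as information grows.
for n in range(N + 1):
    print(n, cond_exp(n))

print(cond_exp(N) == Y)  # True: conditioning on all information recovers Y
```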
