Introduction to the Mathematical and Statistical Foundations of Econometrics

Conditioning on Increasing Sigma-Algebras

Consider a random variable Y defined on the probability space $\{\Omega, \mathcal{F}, P\}$ satisfying $E[|Y|] < \infty$, and let $\mathcal{F}_n$ be a nondecreasing sequence of sub-$\sigma$-algebras of $\mathcal{F}$: $\mathcal{F}_n \subset \mathcal{F}_{n+1} \subset \mathcal{F}$. The question I will address is, What is the limit of $E[Y \mid \mathcal{F}_n]$ for $n \to \infty$? As will be shown in the next section, the answer to this question is fundamental for time series econometrics.

We have seen in Chapter 1 that the union of $\sigma$-algebras is not necessarily a $\sigma$-algebra itself. Thus, $\bigcup_{n=1}^{\infty}\mathcal{F}_n$ may not be a $\sigma$-algebra. Therefore, let

$$\mathcal{F}_\infty = \sigma\!\left(\bigcup_{n=1}^{\infty}\mathcal{F}_n\right); \tag{3.23}$$

that is, $\mathcal{F}_\infty$ is the smallest $\sigma$-algebra containing $\bigcup_{n=1}^{\infty}\mathcal{F}_n$. Clearly, $\mathcal{F}_\infty \subset \mathcal{F}$ because the latter also contains $\bigcup_{n=1}^{\infty}\mathcal{F}_n$.

The answer to our question is now as follows:

Theorem 3...
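The question posed above can be illustrated numerically. In the sketch below (my own construction, assuming NumPy; the coin-flip setup and all names are illustrative), Y is the average of N fair coin flips and $\mathcal{F}_n$ is the $\sigma$-algebra generated by the first n flips, so $E[Y \mid \mathcal{F}_n] = (X_1 + \cdots + X_n)/N$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
x = rng.choice([-1.0, 1.0], size=N)  # i.i.d. fair coin flips X_1, ..., X_N
Y = x.mean()                         # Y is measurable w.r.t. the full history

# E[Y | F_n] = (X_1 + ... + X_n)/N, since E[X_i] = 0 for i > n
cond_exp = np.cumsum(x) / N

# The conditional expectations approach Y as n grows; at n = N they coincide
print(abs(cond_exp[99] - Y))   # small
print(abs(cond_exp[-1] - Y))   # zero up to rounding
```

The sequence of conditional expectations is a martingale, and its pathwise convergence to $E[Y \mid \mathcal{F}_\infty]$ is exactly what the theorem asserts.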


Convergence of Characteristic Functions

Recall that the characteristic function of a random vector X in $\mathbb{R}^k$ is defined as

$$\varphi(t) = E[\exp(\mathrm{i}\, t^{\mathrm{T}} X)] = E[\cos(t^{\mathrm{T}} X)] + \mathrm{i}\cdot E[\sin(t^{\mathrm{T}} X)]$$

for $t \in \mathbb{R}^k$, where $\mathrm{i} = \sqrt{-1}$. The last equality obtains because $\exp(\mathrm{i}\cdot x) = \cos(x) + \mathrm{i}\cdot\sin(x)$.

Also recall that distributions are the same if and only if their characteristic functions are the same. This property can be extended to sequences of random variables and vectors:

Theorem 6.22: Let $X_n$ and X be random vectors in $\mathbb{R}^k$ with characteristic functions $\varphi_n(t)$ and $\varphi(t)$, respectively. Then $X_n \to_d X$ if and only if $\varphi(t) = \lim_{n\to\infty}\varphi_n(t)$ for all $t \in \mathbb{R}^k$.

Proof: See Appendix 6.C for the case k = 1.

Note that the “only if” part of Theorem 6.22 follows from Theorem 6.18: $X_n \to_d X$ implies that, for any $t \in \mathbb{R}^k$,

$$\lim_{n\to\infty} E[\cos(t^{\mathrm{T}} X_n)] = E[\cos(t^{\mathrm{T}} X)]; \qquad \lim_{n\to\infty} E[\sin(t^{\mathrm{T}} X_n)] = E[\sin(t^{\mathrm{T}} X)]\ldots$$
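The convergence asserted by Theorem 6.22 can be checked by simulation. In the sketch below (my own, assuming NumPy; the uniform-sum example and all names are illustrative), the empirical characteristic function of standardized sums of uniforms, which converge in distribution to N(0, 1) by the central limit theorem, is compared with the limit $\exp(-t^2/2)$:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(-3.0, 3.0, 13)

def ecf(sample, t):
    # empirical characteristic function: sample average of exp(i * t * x)
    return np.exp(1j * np.outer(t, sample)).mean(axis=1)

# X_n: standardized sums of n uniforms on (-1/2, 1/2); var of each is 1/12
for n in [1, 4, 64]:
    u = rng.uniform(-0.5, 0.5, size=(100_000, n))
    xn = u.sum(axis=1) / np.sqrt(n / 12.0)
    err = np.max(np.abs(ecf(xn, t) - np.exp(-t**2 / 2)))
    print(n, err)  # err shrinks toward Monte Carlo noise as n grows
```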


Inner Product, Orthogonal Bases, and Orthogonal Matrices

It follows from (I.10) that the cosine of the angle $\gamma$ between the vectors x in (I.2) and y in (I.5) is

$$\cos(\gamma) = \frac{\sum_{j=1}^{n} x_j y_j}{\|x\| \cdot \|y\|} = \frac{x^{\mathrm{T}} y}{\|x\| \cdot \|y\|}. \tag{I.41}$$

Figure I.5. Orthogonalization.

Definition I.13: The quantity $x^{\mathrm{T}} y$ is called the inner product of the vectors x and y.

If $x^{\mathrm{T}} y = 0$, then $\cos(\gamma) = 0$; hence, $\gamma = \pi/2$ or $\gamma = 3\pi/2$. This corresponds to angles of 90° and 270°, respectively; hence, x and y are perpendicular. Such vectors are said to be orthogonal.

Definition I.14: Conformable vectors x and y are orthogonal if their inner product $x^{\mathrm{T}} y$ is zero. Moreover, they are orthonormal if, in addition, their lengths are 1: $\|x\| = \|y\| = 1$.

In Figure I...
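Definitions I.13 and I.14, and the orthogonalization step depicted in Figure I.5, can be illustrated numerically (a sketch of my own, assuming NumPy; the vectors are arbitrary examples):

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([-4.0, 3.0])

# cos(gamma) = x'y / (||x|| * ||y||); zero means x and y are orthogonal
cos_gamma = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print(cos_gamma)  # 0.0

# Orthogonalization (cf. Figure I.5): subtract from z its projection on x,
# leaving a component perpendicular to x
z = np.array([1.0, 2.0])
z_perp = z - (x @ z) / (x @ x) * x
print(x @ z_perp)  # 0.0 up to rounding
```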


The Student’s t Distribution

Let $X \sim N(0, 1)$ and $Y_n \sim \chi^2_n$, where X and $Y_n$ are independent. Then the distribution of the random variable

$$T_n = \frac{X}{\sqrt{Y_n/n}}$$

is called the (Student's) t distribution with n degrees of freedom and is denoted by $t_n$.

The conditional density $h_n(x \mid y)$ of $T_n$ given $Y_n = y$ is the density of the $N(0, n/y)$ distribution; hence, the unconditional density of $T_n$ is

$$h_n(x) = \int_0^{\infty} \frac{\exp\!\left(-(x^2/n)\, y/2\right)}{\sqrt{n/y}\,\sqrt{2\pi}} \cdot \frac{y^{n/2-1}\exp(-y/2)}{\Gamma(n/2)\, 2^{n/2}}\, dy = \frac{\Gamma((n+1)/2)}{\sqrt{n\pi}\,\Gamma(n/2)\,(1 + x^2/n)^{(n+1)/2}}.$$

The expectation of $T_n$ does not exist if n = 1, as we will see in the next subsection, and is zero for $n \geq 2$ by symmetry. Moreover, the variance of $T_n$ is infinite for n = 2, whereas for $n \geq 3$,

$$\mathrm{var}(T_n) = E[T_n^2] = \frac{n}{n-2}. \tag{4.38}$$

See Appendix 4.A.

The moment-generating function of the tn distribution does not exist, but its characteristic fun...
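Both the definition of $T_n$ and the variance formula (4.38) can be checked by simulation (a sketch of my own, assuming NumPy; the sample size, seed, and degrees of freedom are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5                        # degrees of freedom
N = 1_000_000

x = rng.standard_normal(N)   # X ~ N(0, 1)
y = rng.chisquare(n, N)      # Y_n ~ chi-square(n), independent of X
t = x / np.sqrt(y / n)       # T_n = X / sqrt(Y_n / n)

# Sample variance should be close to n/(n-2) = 5/3 for n = 5
print(t.var(), n / (n - 2))
```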


A.2. A Hilbert Space of Random Variables

Let $U_0$ be the vector space of zero-mean random variables with finite second moments defined on a common probability space $\{\Omega, \mathcal{F}, P\}$, endowed with the inner product $\langle X, Y\rangle = E[X \cdot Y]$, norm $\|X\| = \sqrt{E[X^2]}$, and metric $\|X - Y\|$.

Theorem 7.A.2: The space $U_0$ defined above is a Hilbert space.

Proof: To demonstrate that $U_0$ is a Hilbert space, we need to show that every Cauchy sequence $X_n$, $n \geq 1$, has a limit in $U_0$. Because, by Chebyshev's inequality,

$$P[|X_n - X_m| > \varepsilon] \leq E[(X_n - X_m)^2]/\varepsilon^2 = \|X_n - X_m\|^2/\varepsilon^2 \to 0 \quad \text{as } n, m \to \infty$$

for every $\varepsilon > 0$, it follows that $|X_n - X_m| \to_p 0$ as $n, m \to \infty$. In Appendix 6.B of Chapter 6, we have seen that convergence in probability implies convergence a.s. along a subsequence. Therefore, there exists a subsequence $n_k$ such that $X_{n_k} - X_{n_m} \to 0$ a.s. as $k, m \to \infty$...
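The Hilbert-space geometry of $U_0$ can be made concrete by Monte Carlo (a sketch of my own, assuming NumPy; the linear model and all names are illustrative). The inner product $E[XY]$ is estimated by a sample average, and Y is decomposed into its orthogonal projection on the span of X plus a residual orthogonal to X:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 500_000

# Two zero-mean random variables in U_0, simulated by Monte Carlo
x = rng.standard_normal(N)
y = 2.0 * x + rng.standard_normal(N)  # correlated with x

inner = np.mean(x * y)    # <X, Y> = E[X * Y]
norm_x2 = np.mean(x * x)  # ||X||^2 = E[X^2]

# Projection of Y on span{X}: beta*X chosen so that <Y - beta*X, X> = 0
beta = inner / norm_x2
resid = y - beta * x
print(beta)                # close to 2
print(np.mean(resid * x))  # residual orthogonal to X, up to rounding
```

This orthogonal-projection structure is exactly what makes best linear prediction in $U_0$ well defined.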


Continuity of Concave and Convex Functions

A real function $\varphi$ on a subset of a Euclidean space is convex if, for each pair of points a, b and every $\lambda \in [0, 1]$, $\varphi(\lambda a + (1-\lambda)b) \leq \lambda\varphi(a) + (1-\lambda)\varphi(b)$. For example, $\varphi(x) = x^2$ is a convex function on the real line, and so is $\varphi(x) = \exp(x)$. Similarly, $\varphi$ is concave if, for each pair of points a, b and every $\lambda \in [0, 1]$, $\varphi(\lambda a + (1-\lambda)b) \geq \lambda\varphi(a) + (1-\lambda)\varphi(b)$.

I will prove the continuity of convex and concave functions by contradiction. Suppose that $\varphi$ is convex but not continuous at a point a. Then

$$\varphi(a+) = \lim_{b \downarrow a} \varphi(b) \neq \varphi(a) \tag{II.6}$$

or

$$\varphi(a-) = \lim_{b \uparrow a} \varphi(b) \neq \varphi(a), \tag{II.7}$$

or both. In the case of (II.6) we have

$$\varphi(a+) = \lim_{b \downarrow a} \varphi(a + 0.5(b - a)) = \lim_{b \downarrow a} \varphi(0.5a + 0.5b) \leq 0.5\varphi(a) + 0.5\lim_{b \downarrow a}\varphi(b) = 0.5\varphi(a) + 0.5\varphi(a+);$$

hence, $\varphi(a+) \leq \varphi(a)$, and therefore by (II.6), $\varphi(a+) < \varphi(a)$...
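The defining inequality can be checked numerically for the two convex examples given above (a quick sketch of my own, assuming NumPy; the endpoints a, b and the grid are arbitrary):

```python
import numpy as np

# Check phi(l*a + (1-l)*b) <= l*phi(a) + (1-l)*phi(b) on a lambda grid
# for the convex functions phi(x) = x^2 and phi(x) = exp(x)
a, b = -1.0, 2.0
lam = np.linspace(0.0, 1.0, 101)
results = {}
for phi in (np.square, np.exp):
    lhs = phi(lam * a + (1 - lam) * b)
    rhs = lam * phi(a) + (1 - lam) * phi(b)
    results[phi.__name__] = bool(np.all(lhs <= rhs + 1e-12))
print(results)  # both inequalities hold on the grid
```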
