Introduction to the Mathematical and Statistical Foundations of Econometrics

Moment-Generating Functions and Characteristic Functions

2.8.1. Moment-Generating Functions

The moment-generating function of a bounded random variable X (i.e., P[|X| < M] = 1 for some positive real number M < ∞) is defined as the function

m(t) = E[exp(t·X)], t ∈ ℝ, (2.31)

where the argument t is nonrandom. More generally:

Definition 2.15: The moment-generating function of a random vector X in ℝᵏ is defined by m(t) = E[exp(tᵀX)] for t ∈ T ⊂ ℝᵏ, where T is the set of nonrandom vectors t for which the moment-generating function exists and is finite.

For bounded random variables the moment-generating function exists and is finite for all values of t. In particular, in the univariate bounded case we can write

m(t) = E[exp(t·X)] = E[∑_{k=0}^∞ tᵏXᵏ/k!] = ∑_{k=0}^∞ (tᵏ/k!)·E[Xᵏ].

It is easy to verify that the jth derivative of m(t) is

m⁽ʲ⁾(t) = dʲm(t)/(dt)ʲ = ∑_{k=j}^∞ (t^{k−j}/(k−j)!)·E[Xᵏ]; (2.32)

hence, th...
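Setting t = 0 in (2.32) leaves only the k = j term, so m⁽ʲ⁾(0) = E[Xʲ]: the jth moment is the jth derivative of the MGF at zero. Below is a minimal symbolic check of this (a sketch of mine, assuming sympy is available), using a Bernoulli(p) variable, whose MGF E[exp(t·X)] = 1 − p + p·eᵗ follows directly from (2.31):

```python
# Verify m^(j)(0) = E[X^j] for X ~ Bernoulli(p), whose MGF is (1 - p) + p*exp(t).
import sympy as sp

t, p = sp.symbols('t p')
m = (1 - p) + p * sp.exp(t)                    # MGF of X ~ Bernoulli(p)

for j in (1, 2, 3):
    moment = sp.diff(m, t, j).subs(t, 0)       # j-th derivative of m at t = 0
    print(f"E[X^{j}] =", sp.simplify(moment))  # prints p each time
```

Since Xʲ = X for a Bernoulli variable, every moment equals p, which is exactly what the printed derivatives return.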

Read More

The Uniform Law of Large Numbers and Its Applications

6.4.1. The Uniform Weak Law of Large Numbers

In econometrics we often have to deal with means of random functions. A random function is a function that is a random variable for each fixed value of its argument. More precisely,

Definition 6.4: Let {Ω, ℱ, P} be the probability space. A random function f(θ) on a subset Θ of a Euclidean space is a mapping f(ω, θ): Ω × Θ → ℝ such that for each Borel set B in ℝ and each θ ∈ Θ, {ω ∈ Ω : f(ω, θ) ∈ B} ∈ ℱ.

Usually random functions take the form of a function g(X, θ) of a random vector X and a nonrandom vector θ. For such functions we can extend the weak law of large numbers for i.i.d. random variables to a uniform weak law of large numbers (UWLLN):

Theorem 6.10: (UWLLN)...
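The theorem statement is cut off here, but its conclusion — roughly, that sup_{θ∈Θ} |(1/n)∑_{j=1}^n g(X_j, θ) − E[g(X₁, θ)]| converges in probability to zero under suitable conditions on g and a compact Θ — can be illustrated by simulation. A small sketch (my illustration, not the book's): take g(X, θ) = cos(θ·X) with X ~ N(0, 1), for which E[g(X, θ)] = exp(−θ²/2):

```python
# Uniform convergence of the sample mean of a random function over a grid
# approximating the compact parameter set Theta = [0, 2].
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0, 41)          # grid approximating Theta = [0, 2]
expected = np.exp(-theta**2 / 2)           # E[cos(theta*X)] for X ~ N(0, 1)

for n in (100, 10_000, 100_000):
    x = rng.standard_normal(n)
    sample_mean = np.cos(np.outer(theta, x)).mean(axis=1)  # (1/n) sum_j g(X_j, theta)
    print(n, np.abs(sample_mean - expected).max())         # sup_theta deviation
```

The printed supremum shrinks as n grows, which is the uniform (rather than pointwise) convergence the UWLLN asserts.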

Read More

The Gauss-Jordan Iteration for Inverting a Matrix

The Gaussian elimination of the matrix A in the first example in the previous section suggests that this method can also be used to compute the inverse of A as follows. Augment the matrix A in (I.22) to a 3 × 6 matrix by augmenting the columns of A with the columns of the unit matrix I₃:

B = (A, I₃) =
[  2  4  2 | 1  0  0 ]
[  1  2  3 | 0  1  0 ]
[ −1  1 −1 | 0  0  1 ]

Now follow the same procedure as in Example 1, up to (I.25), with A replaced by B. (A pivot is an element on the diagonal to be used to wipe out the elements below that diagonal element.) Then (I.25) becomes

P₂,₃E₃,₁(1/2)E₂,₁(−1/2)B = (P₂,₃E₃,₁(1/2)E₂,₁(−1/2)A, P₂,₃E₃,₁(1/2)E₂,₁(−1/2))

= [ 2  4  2 |  1    0  0 ]
  [ 0  3  0 |  0.5  0  1 ]  = (U*, C),    (I.30)
  [ 0  0  2 | −0.5  1  0 ]

for inst...
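Continuing the sweep past (I.30) — dividing each row by its pivot and then eliminating the entries above the pivots — turns the left block into I₃, at which point the right block is A⁻¹. A compact sketch of the full procedure (my code, not the book's), applied to the matrix A from (I.22):

```python
# Gauss-Jordan inversion: augment A with the identity, sweep to reduced
# echelon form, and read the inverse off the right-hand block.
import numpy as np

def gauss_jordan_inverse(a):
    n = a.shape[0]
    b = np.hstack([a.astype(float), np.eye(n)])   # B = (A, I_n)
    for j in range(n):
        pivot = j + np.argmax(np.abs(b[j:, j]))   # partial pivoting (role of P_{2,3})
        b[[j, pivot]] = b[[pivot, j]]             # swap rows j and pivot
        b[j] /= b[j, j]                           # scale pivot row so the pivot is 1
        for i in range(n):
            if i != j:
                b[i] -= b[i, j] * b[j]            # wipe out column j in every other row
    return b[:, n:]                               # right block is now A^{-1}

A = np.array([[2, 4, 2], [1, 2, 3], [-1, 1, -1]])
print(gauss_jordan_inverse(A) @ A)                # ~ the 3x3 identity
```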

Read More

Transformations of Absolutely Continuous Random Vectors

4.4.1. The Linear Case

Let X = (X₁, X₂)ᵀ be a bivariate random vector with distribution function

F(x) = ∫_{−∞}^{x₁} ∫_{−∞}^{x₂} f(u₁, u₂) du₁ du₂ = ∫_{(−∞,x₁]×(−∞,x₂]} f(u) du,

where x = (x₁, x₂)ᵀ and u = (u₁, u₂)ᵀ.

In this section I will derive the joint density of Y = AX + b, where A is a (nonrandom) nonsingular 2 × 2 matrix and b is a nonrandom 2 × 1 vector.

Recall from linear algebra (see Appendix I) that any square matrix A can be decomposed into

A = R⁻¹·L·D·U, (4.19)

where R is a permutation matrix (possibly equal to the unit matrix I), L is a lower-triangular matrix with diagonal elements all equal to 1, U is an upper-triangular matrix with diagonal elements all equal to 1, and D is a diagonal matrix. The transformation Y = AX + b can therefore be conducted in five steps:

Z₁ = UX,
Z₂ = DZ₁,
Z₃ = LZ₂,    (4.20)
Z₄ = R⁻¹Z₃,
Y = Z₄ + b.

Therefore, I will consider...
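The five steps can be traced numerically. A sketch (my illustration; the matrices A, b and the draw x are arbitrary choices) using scipy.linalg.lu, which factors A = P·L·U₀ with unit lower-triangular L; splitting U₀ = D·U with D = diag(U₀) gives exactly the form (4.19) with R⁻¹ = P:

```python
# Trace the five steps in (4.20) and confirm they reproduce Y = AX + b.
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(1)
A = np.array([[2.0, 1.0], [1.0, 3.0]])    # a nonsingular 2x2 matrix
b = np.array([0.5, -1.0])
x = rng.standard_normal(2)                # a draw of X

P, L, U0 = lu(A)                          # A = P @ L @ U0
D = np.diag(np.diag(U0))                  # diagonal part of U0
U = np.linalg.solve(D, U0)                # unit upper-triangular factor: U0 = D @ U

z1 = U @ x                                # Z1 = U X
z2 = D @ z1                               # Z2 = D Z1
z3 = L @ z2                               # Z3 = L Z2
z4 = P @ z3                               # Z4 = R^{-1} Z3
y = z4 + b                                # Y  = Z4 + b
print(np.allclose(y, A @ x + b))          # True
```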

Read More

Mixing Conditions

Inspection of the proof of Theorem 7.5 reveals that the independence assumption can be relaxed. We only need independence of an arbitrary set A in ℱ_t^∞ and an arbitrary set C in ℱ_{−∞}^{t−k} = σ(X_{t−k}, X_{t−k−1}, X_{t−k−2}, …) for k ≥ 1. A sufficient condition for this is that the process X_t is α-mixing or φ-mixing:

Definition 7.5: Let ℱ_{−∞}^t = σ(X_t, X_{t−1}, X_{t−2}, …), ℱ_t^∞ = σ(X_t, X_{t+1}, X_{t+2}, …), and

α(m) = sup_t sup_{A∈ℱ_t^∞, B∈ℱ_{−∞}^{t−m}} |P(A ∩ B) − P(A)·P(B)|,

φ(m) = sup_t sup_{A∈ℱ_t^∞, B∈ℱ_{−∞}^{t−m}, P(B)>0} |P(A|B) − P(A)|.

If lim_{m→∞} α(m) = 0, then the time series process X_t involved is said to be α-mixing; if lim_{m→∞} φ(m) = 0, X_t is said to be φ-mixing.

Note in the α-mixing case that

sup_{A∈ℱ_t^∞, B∈ℱ_{−∞}^{t−k}} |P(A ∩ B) − P(A)·P(B)|

≤ lim sup_{m→∞} sup_t sup_{A∈ℱ_t^∞, B∈ℱ_{t−k−m}^{t−k}} |P(A ∩ B) − P(A)·P(B)| …
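For any specific pair of events A and B separated by m periods, |P(A ∩ B) − P(A)·P(B)| is a lower bound on α(m), so simulation can at least show the decay. A sketch (my illustration; the Gaussian AR(1) process and the events {X ≤ 0} are arbitrary choices):

```python
# Estimate |P(A n B) - P(A)P(B)| for A = {X_{t+m} <= 0}, B = {X_t <= 0}
# in a Gaussian AR(1); this lower bound on alpha(m) decays with the lag m.
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.9, 500_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0] / np.sqrt(1 - rho**2)       # start in the stationary distribution
for t in range(1, n):
    x[t] = rho * x[t - 1] + eps[t]        # X_t = rho * X_{t-1} + eps_t

for m in (1, 5, 10, 20):
    a = x[m:] <= 0                        # event m periods ahead
    b = x[:-m] <= 0                       # event in the past
    print(m, abs(np.mean(a & b) - np.mean(a) * np.mean(b)))
```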

Read More

Generalized Eigenvalues and Eigenvectors

The concepts of generalized eigenvalues and eigenvectors play a key role in cointegration analysis. Cointegration analysis is an advanced econometric time series topic and will therefore not likely be covered in an introductory Ph.D.-level econometrics course for which this review of linear algebra is intended.

Nevertheless, to conclude this review I will briefly discuss what generalized eigenvalues and eigenvectors are and how they relate to the standard case.

Given two n × n matrices A and B, the generalized eigenvalue problem is to find the values of λ for which

det(A − λB) = 0. (I.63)

Given a solution λ, which is called the generalized eigenvalue of A relative to B, the corresponding generalized eigenvector (relative to B) is a vector x in ℝⁿ such that Ax = λBx.
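When B is nonsingular, Ax = λBx is equivalent to (B⁻¹A)x = λx, so the generalized eigenvalues coincide with the standard eigenvalues of B⁻¹A. A quick numerical sketch (my illustration; the matrices are arbitrary) using scipy.linalg.eig, which accepts a second matrix argument for the generalized problem:

```python
# Solve det(A - lambda*B) = 0 directly and via the standard problem for B^{-1}A.
import numpy as np
from scipy.linalg import eig

A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.array([[2.0, 0.0], [0.0, 1.0]])    # nonsingular in this example

lam, V = eig(A, B)                         # generalized eigenvalues/eigenvectors
print(np.sort(lam.real))
print(np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real))  # same values

x0 = V[:, 0]                               # check A x = lambda B x for one pair
print(np.allclose(A @ x0, lam[0] * (B @ x0)))
```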

However, if B is singular,...

Read More