Introduction to the Mathematical and Statistical Foundations of Econometrics

The Multivariate Normal Distribution and Its Application to Statistical Inference

5.1. Expectation and Variance of Random Vectors

Multivariate distributions employ the concepts of the expectation vector and variance matrix. The expected “value,” or, more precisely, the expectation vector (sometimes also called the “mean vector”) of a random vector $X = (X_1, \ldots, X_n)^{\mathrm{T}}$ is defined as the vector of expected values:


$$E(X) = (E(X_1), \ldots, E(X_n))^{\mathrm{T}}.$$

Adopting the convention that the expectation of a random matrix is the matrix of the expectations of its elements, we can define the variance matrix of X as

$$\mathrm{Var}(X) = E\left[(X - E(X))(X - E(X))^{\mathrm{T}}\right] = \begin{pmatrix} \mathrm{cov}(X_1, X_1) & \mathrm{cov}(X_1, X_2) & \cdots & \mathrm{cov}(X_1, X_n) \\ \mathrm{cov}(X_2, X_1) & \mathrm{cov}(X_2, X_2) & \cdots & \mathrm{cov}(X_2, X_n) \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{cov}(X_n, X_1) & \mathrm{cov}(X_n, X_2) & \cdots & \mathrm{cov}(X_n, X_n) \end{pmatrix} \tag{5.1}$$

Recall that the diagonal elements of the matrix (5.1) are variances: $\mathrm{cov}(X_j, X_j) = \mathrm{var}(X_j)$...
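As an illustration (my own sketch, not from the text; the dimension, sample size, and covariance values are arbitrary choices), the sample analogues of the expectation vector and the variance matrix (5.1) can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate draws of a 3-dimensional random vector X with a known
# covariance structure (the numbers here are arbitrary illustrations).
true_mean = np.array([1.0, -2.0, 0.5])
true_cov = np.array([[2.0, 0.5, 0.0],
                     [0.5, 1.0, 0.3],
                     [0.0, 0.3, 1.5]])
X = rng.multivariate_normal(true_mean, true_cov, size=100_000)

# Sample analogue of the expectation vector E(X) = (E(X_1),...,E(X_n))^T.
mean_vec = X.mean(axis=0)

# Sample analogue of Var(X) = E[(X - E(X))(X - E(X))^T], cf. (5.1).
centered = X - mean_vec
var_matrix = centered.T @ centered / (X.shape[0] - 1)

# The diagonal elements of (5.1) are the variances cov(X_j, X_j) = var(X_j).
print(np.allclose(np.diag(var_matrix), X.var(axis=0, ddof=1)))  # True
```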


Likelihood Functions

There are many cases in econometrics in which the distribution of the data is neither absolutely continuous nor discrete. The Tobit model discussed in Section 8.3 is such a case. In these cases we cannot construct a likelihood function in the way I have done here, but we can still define a likelihood function indirectly, using the properties (8.4) and (8.7):

Definition 8.1: A sequence $L_n(\theta)$, $n \geq 1$, of nonnegative random functions on a parameter space $\Theta$ is a sequence of likelihood functions if the following conditions hold:

(a) There exists an increasing sequence $\mathscr{F}_n$, $n \geq 0$, of $\sigma$-algebras such that for each $\theta \in \Theta$ and $n \geq 1$, $L_n(\theta)$ is measurable $\mathscr{F}_n$.

(b) There exists a $\theta_0 \in \Theta$ such that for all $\theta \in \Theta$, $P\left(E[L_1(\theta)/L_1(\theta_0) \mid \mathscr{F}_0] \leq 1\right) = 1$, and, for $n \geq 2$,

$$P\left(E\left[\left.\frac{L_n(\theta)/L_{n-1}(\theta)}{L_n(\theta_0)/L_{n-1}(\theta_0)} \,\right|\, \mathscr{F}_{n-1}\right] \leq 1\right) = 1.$$

(c) Fo...
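In the familiar case in which the data are i.i.d. with density $f(x|\theta)$, condition (b) holds with equality, because $E[f(X|\theta)/f(X|\theta_0)] = \int f(x|\theta)\,dx = 1$ when $X$ is drawn from $f(\cdot|\theta_0)$. The following sketch (my own illustration, not from the text, using normal densities with arbitrarily chosen parameters) checks this numerically:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
theta, theta0 = 1.5, 0.0   # arbitrary parameter values for the illustration

# Draw from the true density f(.|theta0) and average the likelihood ratio:
# E[f(X|theta)/f(X|theta0)] = integral of f(x|theta) dx = 1,
# which is condition (b) with equality.
x = rng.normal(loc=theta0, size=1_000_000)
ratio = norm.pdf(x, loc=theta) / norm.pdf(x, loc=theta0)
print(ratio.mean())  # approximately 1.0
```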


Taylor’s Theorem

The mean value theorem implies that if, for two points $a < b$, $f(a) = f(b)$, then there exists a point $c \in [a, b]$ such that $f'(c) = 0$. This fact is the core of the proof of Taylor's theorem:

Theorem II.9(a): Let $f(x)$ be an $n$-times continuously differentiable real function on an interval $[a, b]$ with the $n$th derivative denoted by $f^{(n)}(x)$. For any pair of points $x, x_0 \in [a, b]$ there exists a $\lambda \in [0, 1]$ such that

$$f(x) = f(x_0) + \sum_{k=1}^{n-1} \frac{(x - x_0)^k}{k!} f^{(k)}(x_0) + R_n, \tag{II.12}$$

where $R_n$ is the remainder term. Now let $a \leq x_0 < x \leq b$ be fixed, and consider the function

$$g(u) = f(x) - f(u) - \sum_{k=1}^{n-1} \frac{(x - u)^k}{k!} f^{(k)}(u) - \frac{(x - u)^n}{(x - x_0)^n} R_n$$

with derivative

$$g'(u) = -\frac{(x - u)^{n-1}}{(n-1)!} f^{(n)}(u) + n\,\frac{(x - u)^{n-1}}{(x - x_0)^n} R_n.$$

Then $g(x) = g(x_0) = 0$; hence, there exists a point $c \in [x_0, x]$ such that $g'(c) = 0$:

$$0 = -\frac{(x - c)^{n-1}}{(n-1)!} f^{(n)}(c) + n\,\frac{(x - c)^{n-1}}{(x - x_0)^n} R_n.$$

Therefore,

$$R_n = \frac{(x - x_0)^n}{n!} f^{(n)}(c) = \frac{(x - x_0)^n}{n!} f^{(n)}(x_0 + \lambda(x - x_0)),$$ (II...
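As a quick numerical check of Theorem II.9(a) (my own illustration; the function $f(x) = e^x$ and the points $x_0$, $x$, and order $n$ are arbitrary choices), one can compute the remainder $R_n$ directly and verify that the mean-value point $c$ lies in $[x_0, x]$:

```python
import numpy as np
from math import factorial

# Check Taylor's theorem (II.12) for f(x) = exp(x), whose k-th derivative
# is again exp(x); the points and order below are arbitrary choices.
f = np.exp
x0, x, n = 0.0, 1.0, 4

# Taylor polynomial of order n-1 around x0, then the exact remainder R_n.
# Since every derivative of exp equals exp, f^(k)(x0) = f(x0) for all k.
poly = sum((x - x0)**k / factorial(k) * f(x0) for k in range(n))
R_n = f(x) - poly

# Solve R_n = (x - x0)^n / n! * f^(n)(c) for c; here f^(n)(c) = exp(c).
c = np.log(R_n * factorial(n) / (x - x0)**n)
print(x0 <= c <= x)  # True: the mean-value point of Theorem II.9(a)
```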


Mathematical Expectation

With these new integrals introduced, we can now answer the second question stated at the end of the introduction: How do we define the mathematical expectation if the distribution of X is neither discrete nor absolutely continuous?

Definition 2.12: The mathematical expectation of a random variable $X$ is defined as $E(X) = \int X(\omega)\,dP(\omega)$ or, equivalently, as $E(X) = \int x\,dF(x)$ (cf. (2.15)), where $F$ is the distribution function of $X$, provided that the integrals involved are defined. Similarly, if $g(x)$ is a Borel-measurable function on $\mathbb{R}^k$ and $X$ is a random vector in $\mathbb{R}^k$, then, equivalently, $E[g(X)] = \int g(X(\omega))\,dP(\omega) = \int g(x)\,dF(x)$, provided that the integrals involved are defined.

Note that the latter part of Definition 2.12 covers both examples (2.1) and (2.3).
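As an illustration of Definition 2.12 (my own example, not from the text), consider $X = \max(0, Z)$ with $Z$ standard normal: its distribution function has a jump of size $1/2$ at zero and a density elsewhere, so $X$ is neither discrete nor absolutely continuous, yet $E(X) = \int x\,dF(x)$ is well defined as the jump contribution plus the integral over the continuous part:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# X = max(0, Z) with Z ~ N(0,1) is neither discrete nor absolutely
# continuous: F jumps by 1/2 at 0 and has a density above 0.
# (The Tobit model mentioned elsewhere in the text has this flavor.)

# E(X) = 0 * P(X = 0) + integral of x dF(x) over the continuous part.
continuous_part, _ = quad(lambda z: z * norm.pdf(z), 0, np.inf)
expectation = 0.0 * norm.cdf(0.0) + continuous_part
print(expectation, 1 / np.sqrt(2 * np.pi))  # both ~ 0.39894
```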

As motivated in the introduction, the mathemat...


Hypotheses Testing

Theorem 5.19 is the basis for hypotheses testing in linear regression analysis. First, consider the problem of whether a particular component of the vector $X_j$ of explanatory variables in model (5.31) has an effect on $Y_j$ or not. If not, the corresponding component of $\beta$ is zero. Each component of $\beta$ corresponds to a component $\beta_{i,0}$, $i \geq 0$, of $\beta_0$. Thus, the null hypothesis involved is

$$H_0: \beta_{i,0} = 0. \tag{5.49}$$

Let $\hat{\beta}_i$ be component $i$ of $\hat{\beta}$, and let the vector $e_i$ be column $i$ of the unit matrix $I_k$. Then it follows from Theorem 5.19(a) that, under the null hypothesis (5.49),

$$t_i = \frac{\hat{\beta}_i}{S\sqrt{e_i^{\mathrm{T}}(X^{\mathrm{T}}X)^{-1}e_i}} \sim t_{n-k}. \tag{5.50}$$

The statistic $t_i$ in (5.50) is called the t-statistic or t-value of the coefficient $\beta_{i,0}$. If $\beta_{i,0}$ can take negative or positive values, the appropriate alternative hypothesis is

$H_1$...
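As a numerical illustration of (5.50) (my own sketch; the simulated design, sample size, and coefficient values are arbitrary choices), the t-values of the OLS coefficients can be computed directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated linear model Y = X beta + U; the design and parameters
# are arbitrary choices for the illustration.
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 0.5, 0.0])          # third coefficient is truly zero
Y = X @ beta + rng.normal(size=n)

# OLS estimate and the t-values of (5.50):
# t_i = beta_hat_i / (S * sqrt(e_i^T (X^T X)^{-1} e_i)),
# where e_i^T (X^T X)^{-1} e_i is the i-th diagonal element.
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ Y
resid = Y - X @ beta_hat
S = np.sqrt(resid @ resid / (n - k))      # estimate of the error std. dev.
t_values = beta_hat / (S * np.sqrt(np.diag(XtX_inv)))
print(t_values)  # the last t-value should be small under H0: beta_{i,0} = 0
```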


The Inverse and Transpose of a Matrix

I will now address the question of whether, for a given $m \times n$ matrix $A$, there exists an $n \times m$ matrix $B$ such that, with $y = Ax$, $By = x$. If so, the action of $A$ is undone by $B$; that is, $B$ moves $y$ back to the original position $x$.

If $m < n$, there is no way to undo the mapping $y = Ax$. In other words, there does not exist an $n \times m$ matrix $B$ such that $By = x$. To see this, consider the $1 \times 2$ matrix $A = (2, 1)$. Then, with $x$ as in (I.12), $Ax = 2x_1 + x_2 = y$, but if we know $y$ and $A$ we only know that $x$ is located on the line $x_2 = y - 2x_1$; however, there is no way to determine where on this line.

If $m = n$ in (I.14), thus making the matrix $A$ involved a square matrix, we can undo the mapping $A$ if the columns of the matrix $A$ are linearly independent. Take for example the matrix $A$ in (I.11) and the vector $y$ in (I...
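The following sketch (my own illustration; the matrices are arbitrary examples, not the ones in (I.11) and (I.12)) contrasts the two cases:

```python
import numpy as np

# Square case: if the columns of A are linearly independent, B = A^{-1}
# undoes the mapping y = Ax.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, -1.0])
y = A @ x
B = np.linalg.inv(A)
print(np.allclose(B @ y, x))  # True: B moves y back to x

# m < n case: A = (2, 1) maps R^2 to R; distinct x give the same
# y = 2*x1 + x2, so no matrix B can recover x from y alone.
A_flat = np.array([[2.0, 1.0]])
print(A_flat @ np.array([1.0, 0.0]), A_flat @ np.array([0.0, 2.0]))  # both [2.]
```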
