Introduction to the Mathematical and Statistical Foundations of Econometrics

Inner Product, Orthogonal Bases, and Orthogonal Matrices

It follows from (I.10) that the cosine of the angle γ between the vectors x in (I.2) and y in (I.5) is

cos(γ) = (Σ_{j=1}^n x_j y_j) / (||x|| · ||y||) = x^T y / (||x|| · ||y||).    (I.41)

Figure I.5. Orthogonalization.

Definition I.13: The quantity x^T y is called the inner product of the vectors x and y.

If x^T y = 0, then cos(γ) = 0; hence, γ = π/2 or γ = 3π/2. This corresponds to angles of 90° and 270°, respectively; hence, x and y are perpendicular. Such vectors are said to be orthogonal.

Definition I.14: Conformable vectors x and y are orthogonal if their inner product x^T y is zero. Moreover, they are orthonormal if, in addition, their lengths are 1: ||x|| = ||y|| = 1.
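As a quick numerical illustration of these definitions, the sketch below computes the inner product, the cosine (I.41), and an orthogonality check for two concrete vectors; the vectors themselves are arbitrary examples, not the x and y of (I.2) and (I.5).

```python
import numpy as np

# Illustrative sketch: inner product, angle, and orthogonality check
# for two arbitrary example vectors (not the x and y of the text).
x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, 0.0, -1.0])

inner = x @ y                                                  # x'y, the inner product
cos_gamma = inner / (np.linalg.norm(x) * np.linalg.norm(y))    # formula (I.41)

print(inner)                    # 0.0 -> the inner product vanishes
print(cos_gamma)                # 0.0 -> angle of 90 or 270 degrees
print(np.isclose(inner, 0.0))   # True: x and y are orthogonal (Definition I.14)
```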

In Figure I...


The Student’s t Distribution

Let X ~ N(0, 1) and Yn ~ χ²_n, where X and Yn are independent. Then the distribution of the random variable

Tn = X / √(Yn/n)

is called the (Student's) t distribution with n degrees of freedom and is denoted by tn.

The conditional density hn(x | y) of Tn given Yn = y is the density of the N(0, n/y) distribution; hence, the unconditional density of Tn is

hn(x) = ∫_0^∞ [exp(−(x²/n)y/2) / (√(n/y) · √(2π))] · [y^(n/2−1) exp(−y/2) / (Γ(n/2) · 2^(n/2))] dy
      = Γ((n + 1)/2) / (√(nπ) · Γ(n/2) · (1 + x²/n)^((n+1)/2)).

The expectation of Tn does not exist if n = 1, as we will see in the next subsection, and is zero for n ≥ 2 by symmetry. Moreover, the variance of Tn is infinite for n = 2, whereas for n ≥ 3,

var(Tn) = E[Tn²] = n / (n − 2).    (4.38)

See Appendix 4.A.
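To make the construction of Tn and the variance formula (4.38) concrete, here is a small Monte Carlo sketch; the sample size, seed, and the choice n = 5 are arbitrary illustration parameters.

```python
import numpy as np

# Monte Carlo check of the construction T_n = X / sqrt(Y_n / n) and of the
# variance formula var(T_n) = n / (n - 2) for n >= 3 (illustrative sketch;
# seed, sample size, and n = 5 are arbitrary choices).
rng = np.random.default_rng(0)
n = 5                                   # degrees of freedom (assumed >= 3)
m = 1_000_000                           # number of Monte Carlo draws
X = rng.standard_normal(m)              # X ~ N(0, 1)
Y = rng.chisquare(df=n, size=m)         # Y_n ~ chi-squared with n d.f., independent of X
T = X / np.sqrt(Y / n)                  # T_n has the t_n distribution by construction

print(np.var(T))        # close to n / (n - 2) = 5/3 ≈ 1.667
print(n / (n - 2))
```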

The moment-generating function of the tn distribution does not exist, but its characteristic fun...


A.2. A Hilbert Space of Random Variables

Let U0 be the vector space of zero-mean random variables with finite second moments defined on a common probability space {Ω, F, P}, endowed with the inner product (X, Y) = E[X · Y], norm ||X|| = √(E[X²]), and metric ||X − Y||.
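As an informal numerical illustration of this inner product and norm (the particular joint distribution below is just an example), one can estimate (X, Y) and ||X|| by sample averages:

```python
import numpy as np

# Monte Carlo illustration of the inner product (X, Y) = E[X*Y] and norm
# ||X|| = sqrt(E[X^2]) on zero-mean random variables; the joint distribution
# below is an arbitrary choice for illustration.
rng = np.random.default_rng(1)
m = 1_000_000
Z = rng.standard_normal((m, 2))
X = Z[:, 0]                      # zero mean, finite second moment
Y = 0.5 * Z[:, 0] + Z[:, 1]      # correlated with X, still zero mean

inner = np.mean(X * Y)           # estimates (X, Y) = E[X*Y] (0.5 here)
norm_X = np.sqrt(np.mean(X**2))  # estimates ||X|| (1 here)
norm_Y = np.sqrt(np.mean(Y**2))  # estimates ||Y|| (sqrt(1.25) here)

# Cauchy-Schwarz in this space: |(X, Y)| <= ||X|| * ||Y||
print(inner, norm_X * norm_Y)
```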

Theorem 7.A.2: The space U0 defined above is a Hilbert space.

Proof: To demonstrate that U0 is a Hilbert space, we need to show that every Cauchy sequence Xn, n ≥ 1, has a limit in U0. Because, by Chebyshev's inequality,

P[|Xn − Xm| > ε] ≤ E[(Xn − Xm)²]/ε² = ||Xn − Xm||²/ε² → 0 as n, m → ∞

for every ε > 0, it follows that |Xn − Xm| →_p 0 as n, m → ∞. In Appendix 6.B of Chapter 6 we have seen that convergence in probability implies convergence a.s. along a subsequence. Therefore, there exists a subsequence nk such that Xnk − Xnm → 0 a.s. as k, m → ∞...


Continuity of Concave and Convex Functions

A real function φ on a subset of a Euclidean space is convex if, for each pair of points a, b and every λ ∈ [0, 1], φ(λa + (1 − λ)b) ≤ λφ(a) + (1 − λ)φ(b). For example, φ(x) = x² is a convex function on the real line, and so is φ(x) = exp(x). Similarly, φ is concave if, for each pair of points a, b and every λ ∈ [0, 1], φ(λa + (1 − λ)b) ≥ λφ(a) + (1 − λ)φ(b).
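As a quick, informal spot-check of the convexity inequality for the two examples just mentioned (a finite random check, not a proof), the sketch below evaluates both sides at random points:

```python
import numpy as np

# Numerical spot-check of the convexity inequality
#   phi(lam*a + (1 - lam)*b) <= lam*phi(a) + (1 - lam)*phi(b)
# for phi(x) = x**2 and phi(x) = exp(x) (illustrative sketch only).
rng = np.random.default_rng(2)
a = rng.uniform(-5, 5, size=100_000)
b = rng.uniform(-5, 5, size=100_000)
lam = rng.uniform(0, 1, size=100_000)

for phi in (np.square, np.exp):
    lhs = phi(lam * a + (1 - lam) * b)
    rhs = lam * phi(a) + (1 - lam) * phi(b)
    print(phi.__name__, np.all(lhs <= rhs + 1e-12))   # True for both functions
```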

I will prove the continuity of convex and concave functions by contradiction. Suppose that φ is convex but not continuous at a point a. Then

φ(a+) = lim_{b↓a} φ(b) ≠ φ(a)    (II.6)

or

φ(a−) = lim_{b↑a} φ(b) ≠ φ(a),    (II.7)

or both. In the case of (II.6) we have

φ(a+) = lim_{b↓a} φ(a + 0.5(b − a)) = lim_{b↓a} φ(0.5a + 0.5b)
      ≤ 0.5φ(a) + 0.5 lim_{b↓a} φ(b) = 0.5φ(a) + 0.5φ(a+);

hence, φ(a+) ≤ φ(a), and therefore by (II.6), φ(a+) < φ(a)...


B. Extension of an Outer Measure to a Probability Measure

To use the outer measure as a probability measure for more general sets than those in F0, we have to extend the algebra F0 to a σ-algebra F of events for which the outer measure is a probability measure. In this appendix it will be shown how F can be constructed via the following lemmas.

Lemma 1.B.1: For any sequence Bn of disjoint sets in Ω, P*(∪_{n=1}^∞ Bn) ≤ Σ_{n=1}^∞ P*(Bn).

Proof: Given an arbitrary ε > 0, it follows from (1.21) that there exists a countable sequence of sets An,j in F0 such that Bn ⊂ ∪_{j=1}^∞ An,j and P*(Bn) > Σ_{j=1}^∞ P(An,j) − ε·2^{−n}; hence,

Σ_{n=1}^∞ P*(Bn) > Σ_{n=1}^∞ Σ_{j=1}^∞ P(An,j) − Σ_{n=1}^∞ ε·2^{−n} = Σ_{n=1}^∞ Σ_{j=1}^∞ P(An,j) − ε.    (1.28)
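The ε·2^{−n} budgeting in (1.28) works because the slack terms form a geometric series summing to ε; a one-line numerical check of that step, with an arbitrary ε:

```python
# Illustration (sketch) of the epsilon-budget step in (1.28):
# sum_{n>=1} eps * 2**(-n) equals eps (geometric series).
eps = 0.01
print(sum(eps * 2**(-n) for n in range(1, 60)), eps)   # both approximately 0.01
```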

Moreover, ∪_{n=1}^∞ Bn ⊂ ∪_{n=1}^∞ ∪_{j=1}^∞ An,j, where the latter is a countable union of sets in F0; hence, it follows from (1.21) that

P*(∪_{n=1}^∞ Bn) ≤ ...


Testing Parameter Hypotheses

Suppose you consider starting a business to sell a new product in the United States such as a European car that is not yet being imported there. To determine whether there is a market for this car in the United States, you have randomly selected n persons from the population of potential buyers of this car. Each person j in the sample is asked how much he or she would be willing to pay for this car. Let the answer be Yj. Moreover, suppose that the cost of importing this car is a fixed amount Z per car. Denote Xj = ln(Yj/Z), and assume that Xj is N(μ, σ²) distributed. If μ > 0, then your planned car import business will be profitable; otherwise, you should forget about this idea.

To decide whether μ > 0 or μ ≤ 0, you need a decision rule based on the random sample X = (X1, X2, …, Xn)^T...
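A minimal sketch of the kind of decision rule this setup leads to, assuming a one-sided t-test of H0: μ ≤ 0 against H1: μ > 0 at an arbitrarily chosen 5% level; the willingness-to-pay data are simulated, and the specific rule is illustrative rather than the one derived in the text:

```python
import numpy as np
from scipy import stats

# Illustrative one-sided t-test of H0: mu <= 0 vs. H1: mu > 0 based on
# X_j = ln(Y_j / Z). Data are simulated; the 5% level, sample size, and
# import cost Z are assumptions made only for this sketch.
rng = np.random.default_rng(3)
n, Z = 50, 20_000.0                                           # made-up sample size and cost per car
Y = rng.lognormal(mean=np.log(Z) + 0.1, sigma=0.5, size=n)    # simulated answers Y_j
X = np.log(Y / Z)                                             # X_j = ln(Y_j / Z), modeled as N(mu, sigma^2)

t_stat = np.sqrt(n) * X.mean() / X.std(ddof=1)   # Student's t statistic
crit = stats.t.ppf(0.95, df=n - 1)               # 5% one-sided critical value

print(t_stat, crit)
print("enter the market" if t_stat > crit else "do not enter")
```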
