Generalized Eigenvalues and Eigenvectors
The concepts of generalized eigenvalues and eigenvectors play a key role in cointegration analysis. Cointegration analysis is an advanced econometric time series topic and is therefore unlikely to be covered in the introductory Ph.D.-level econometrics course for which this review of linear algebra is intended.
Nevertheless, to conclude this review I will briefly discuss what generalized eigenvalues and eigenvectors are and how they relate to the standard case.
Given two n × n matrices A and B, the generalized eigenvalue problem is to find the values of λ for which

det(A − λB) = 0.   (I.63)

Given a solution λ, which is called the generalized eigenvalue of A relative to B, the corresponding generalized eigenvector (relative to B) is a vector x in ℝ^n such that Ax = λBx.
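As a quick numerical illustration (not part of the original text), SciPy's `scipy.linalg.eig` solves the generalized problem Ax = λBx directly; the matrices A and B below are illustrative choices:

```python
import numpy as np
from scipy.linalg import eig

# Illustrative matrices (chosen for this sketch, not from the text)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[4.0, 0.0],
              [0.0, 1.0]])

# lam[j] solves det(A - lam[j]*B) = 0; column j of V is the
# corresponding generalized eigenvector x with A x = lam[j] B x
lam, V = eig(A, B)

for j in range(len(lam)):
    x = V[:, j]
    assert np.allclose(A @ x, lam[j] * B @ x)
```

For these particular A and B, expanding the determinant gives det(A − λB) = 4λ^2 − 14λ + 5, so the two returned values are the roots of that quadratic.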
However, if B is singular, then the generalized eigenvalue problem may not have n solutions as in the standard case and may even have no solution at all. To demonstrate this, consider the 2 × 2 case:
A = [a1,1  a1,2; a2,1  a2,2],   B = [b1,1  b1,2; b2,1  b2,2].
Then,
det(A − λB) = (a1,1 − λb1,1)(a2,2 − λb2,2) − (a1,2 − λb1,2)(a2,1 − λb2,1)
= a1,1a2,2 − a1,2a2,1 + (a2,1b1,2 − a2,2b1,1 − a1,1b2,2 + b2,1a1,2)λ + (b1,1b2,2 − b2,1b1,2)λ^2.
If B is singular, then b1,1b2,2 − b2,1b1,2 = 0, and thus the quadratic term vanishes. But the situation can be even worse! It is also possible that the coefficient of λ vanishes while the constant term a1,1a2,2 − a1,2a2,1 remains nonzero. In that case the generalized eigenvalues do not exist at all. This is, for example, the case if

A = [1  0; 0  −1],   B = [1  1; 1  1].

Then
det(A − λB) = det[1 − λ  −λ; −λ  −1 − λ] = (1 − λ)(−1 − λ) − λ^2 = −1,

and thus the generalized eigenvalue problem involved has no solution.
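A minimal numerical check of this example, using A = [1 0; 0 −1] and B = [1 1; 1 1] as in the determinant above:

```python
import numpy as np

# A and B from the example above: B is singular, and det(A - lam*B)
# turns out to be constant in lam
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])
B = np.array([[1.0, 1.0],
              [1.0, 1.0]])

# det(A - lam*B) equals -1 for every lam, so no generalized
# eigenvalue exists
for lam in [-2.0, -0.5, 0.0, 1.0, 10.0]:
    assert np.isclose(np.linalg.det(A - lam * B), -1.0)
```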
Therefore, in general we need to require that the matrix B be nonsingular. In that case the solutions of (I.63) are the same as the solutions of the standard eigenvalue problems det(AB^{-1} − λI) = 0 and det(B^{-1}A − λI) = 0.
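This equivalence is easy to verify numerically; a sketch with illustrative matrices (B nonsingular):

```python
import numpy as np

# Illustrative matrices with B nonsingular (not from the text)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# Standard eigenvalues of B^{-1} A ...
lam = np.linalg.eigvals(np.linalg.inv(B) @ A)

# ... are exactly the roots of det(A - lam*B) = 0
for l in lam:
    assert np.isclose(np.linalg.det(A - l * B), 0.0, atol=1e-10)
```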
The generalized eigenvalue problems we will encounter in advanced econometrics always involve a pair of symmetric matrices A and B with B positive definite. Then the solutions of (I.63) are the same as the solutions of the symmetric, standard eigenvalue problem
det(B^{-1/2}AB^{-1/2} − λI) = 0.   (I.64)
The generalized eigenvectors relative to B corresponding to the solutions of (I.63) can be derived from the eigenvectors corresponding to the solutions of (I.64):

B^{-1/2}AB^{-1/2}x = λx = λB^{1/2}B^{-1/2}x  ⇒  A(B^{-1/2}x) = λB(B^{-1/2}x).   (I.65)
Thus, if x is an eigenvector corresponding to a solution λ of (I.64), then y = B^{-1/2}x is the generalized eigenvector relative to B corresponding to the generalized eigenvalue λ.
Finally, note that generalized eigenvectors are in general not orthogonal, even if the two matrices involved are symmetric. However, in the latter case the generalized eigenvectors are “orthogonal with respect to the matrix B” in the sense that, for different generalized eigenvectors y1 and y2, y1^T B y2 = 0. This follows straightforwardly from the link y = B^{-1/2}x between generalized eigenvectors y and standard eigenvectors x.
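The whole reduction can be sketched numerically; here A and B are illustrative symmetric matrices with B positive definite, and B^{-1/2} is built from the spectral decomposition of B:

```python
import numpy as np

# Illustrative symmetric A and positive definite B (not from the text)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[4.0, 1.0],
              [1.0, 2.0]])

# B^{-1/2} via the spectral decomposition B = Q diag(w) Q'
w, Q = np.linalg.eigh(B)
B_half_inv = Q @ np.diag(w ** -0.5) @ Q.T

# Symmetric standard problem: det(B^{-1/2} A B^{-1/2} - lam I) = 0
C = B_half_inv @ A @ B_half_inv
lam, X = np.linalg.eigh(C)

# Map standard eigenvectors x to generalized eigenvectors y = B^{-1/2} x
Y = B_half_inv @ X
for j in range(2):
    assert np.allclose(A @ Y[:, j], lam[j] * B @ Y[:, j])

# "Orthogonality with respect to B": y1' B y2 = 0
assert np.isclose(Y[:, 0] @ B @ Y[:, 1], 0.0)
```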
Exercises

1. Consider the matrix

(a) Conduct the Gaussian elimination by finding a sequence E_j of elementary matrices such that (E_k E_{k−1} ⋯ E_2 E_1)A = U = upper triangular.
(b) Then show that, by undoing the elementary operations E_j involved, one gets the LU decomposition A = LU, with L a lower-triangular matrix with all diagonal elements equal to 1.
(c) Finally, find the LDU factorization.
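The procedure of parts (a)–(c) can be sketched in code; the matrix below is an illustrative stand-in, since the exercise's own matrix is not reproduced here:

```python
import numpy as np

# Illustrative 3x3 matrix with nonzero pivots (no row exchanges needed)
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

n = A.shape[0]
U = A.copy()
elementary = []                        # E_1, E_2, ... in order applied
for k in range(n - 1):
    for i in range(k + 1, n):
        E = np.eye(n)
        E[i, k] = -U[i, k] / U[k, k]   # row i <- row i - m * row k
        U = E @ U
        elementary.append(E)

# Undoing the operations: A = (E_1^{-1} E_2^{-1} ...) U = L U
L = np.eye(n)
for E in elementary:
    L = L @ np.linalg.inv(E)

assert np.allclose(L @ U, A)
assert np.allclose(np.diag(L), 1.0)    # L has unit diagonal
assert np.allclose(np.triu(U), U)      # U is upper triangular

# Part (c): the LDU factorization pulls the pivots out of U
D = np.diag(np.diag(U))
U_unit = np.linalg.inv(D) @ U
assert np.allclose(L @ D @ U_unit, A)
```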
2. Find the 3 × 3 permutation matrix that swaps rows 1 and 3 of a 3 × 3 matrix.
3. Let
A = [1  v1  0  0; 0  v2  0  0; 0  v3  1  0; 0  v4  0  1],

where v2 ≠ 0.
(a) Factorize A into LU.
(b) Find A^{-1}, which has the same form as A.
4. Compute the inverse of the matrix
A = [1  2  0; 2  6  4; 0  4  11]
by any method.
5. Consider the matrix
A = [1  2  0  2  1; 1  2  1  1  0; 1  2  3  7  2].
(a) Find the echelon matrix U in the factorization PA = LU.
(b) What is the rank of A?
(c) Find a basis for the null space of A.
(d) Find a basis for the column space of A.
6. Find a basis for the following subspaces of ℝ^4:
(a) The vectors (x1, x2, x3, x4)^T for which x1 = 2x4.
(b) The vectors (x1, x2, x3, x4)^T for which x1 + x2 + x3 = 0 and x3 + x4 = 0.
(c) The subspace spanned by (1, 1, 1, 1)^T, (1, 2, 3, 4)^T, and (2, 3, 4, 5)^T.
7. Let
A = [1  2  0  3; 0  0  0  0; 2  4  0  1]   and   b = [b1; b2; b3].
(a) Under what conditions on b does Ax = b have a solution?
(b) Find a basis for the nullspace of A.
(c) Find the general solution of Ax = b when a solution exists.
(d) Find a basis for the column space of A.
(e) What is the rank of AT?
8. Apply the Gram-Schmidt process to the vectors a, b, and c, and write the result in the form A = QU, where Q is an orthogonal matrix and U is upper triangular.
9. With a, b, and c as in problem 8, find the projection of c on the space spanned by a and b.
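Since the vectors of problem 8 are not reproduced here, the sketch below runs classical Gram-Schmidt on illustrative stand-ins for a, b, and c, and then computes the projection asked for in problem 9:

```python
import numpy as np

# Illustrative stand-ins for the vectors a, b, c of problem 8
a = np.array([1.0, 0.0, 1.0])
b = np.array([1.0, 1.0, 0.0])
c = np.array([0.0, 1.0, 1.0])

A = np.column_stack([a, b, c])
m, n = A.shape
Q = np.zeros((m, n))
U = np.zeros((n, n))                    # upper triangular, A = Q U
for j in range(n):
    v = A[:, j].copy()
    for i in range(j):
        U[i, j] = Q[:, i] @ A[:, j]     # component along earlier q_i
        v -= U[i, j] * Q[:, i]
    U[j, j] = np.linalg.norm(v)
    Q[:, j] = v / U[j, j]

assert np.allclose(Q @ U, A)            # A = QU
assert np.allclose(Q.T @ Q, np.eye(n))  # Q has orthonormal columns

# Problem 9: project c on span{a, b} = span of Q's first two columns
proj = Q[:, :2] @ (Q[:, :2].T @ c)
```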
10. Find the determinant of the matrix A in problem 1.
11. Consider the matrix
For which values of a does this matrix have
(a) two different real-valued eigenvalues?
(b) two complex-valued eigenvalues?
(c) two equal real-valued eigenvalues?
(d) at least one zero eigenvalue?
12. For the case a = −4, find the eigenvectors of the matrix A in problem 11 and standardize them to unit length.
13. Let A be a matrix with eigenvalues 0 and 1 and corresponding eigenvectors (1, 2)^T and (2, −1)^T.
(a) How can you tell in advance that A is symmetric?
(b) What is the determinant of A?
(c) What is A?
14. The trace of a square matrix is the sum of its diagonal elements. Let A be a positive definite k × k matrix. Prove that the maximum eigenvalue of A can be found as the limit of the ratio trace(A^n)/trace(A^{n−1}) as n → ∞.
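A numerical check of this claim on an illustrative positive definite matrix (the key fact is that trace(A^n) = Σ_i λ_i^n is eventually dominated by the largest eigenvalue):

```python
import numpy as np

# Illustrative positive definite matrix (not from the text)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

lam_max = np.linalg.eigvalsh(A).max()

# trace(A^n) = sum_i lam_i^n, so the ratio tends to the largest lam_i
An_prev = A.copy()                    # A^1
for n in range(2, 40):
    An = An_prev @ A                  # A^n
    ratio = np.trace(An) / np.trace(An_prev)
    An_prev = An

assert np.isclose(ratio, lam_max)
```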