Gaussian Elimination of a Square Matrix and the Gauss-Jordan Iteration for Inverting a Matrix
I.6.1. Gaussian Elimination of a Square Matrix
The results in the previous section are the tools we need to derive the following result:
Theorem I.8: Let A be a square matrix.
(a) There exists a permutation matrix P, possibly equal to the unit matrix I, a lower-triangular matrix L with diagonal elements all equal to 1, a diagonal matrix D, and an upper-triangular matrix U with diagonal elements all equal to 1 such that PA = LDU.
(b) If A is nonsingular and P = I, this decomposition is unique; that is, if A = LDU = L*D*U*, then L* = L, D* = D, and U* = U.
The proof of part (b) is as follows: LDU = L*D*U* implies

L^{-1}L*D* = DUU*^{-1}.     (I.21)

It is easy to verify that the inverse of a lower-triangular matrix is lower triangular and that the product of lower-triangular matrices is lower triangular. Thus the left-hand side of (I.21) is lower triangular. Similarly, the right-hand side of (I.21) is upper triangular. Consequently, the off-diagonal elements on both sides are zero: both matrices in (I.21) are diagonal. Because D* is diagonal and nonsingular, it follows from (I.21) that L^{-1}L* = DUU*^{-1}D*^{-1} is diagonal. Moreover, because the diagonal elements of L^{-1} and L* are all equal to 1, the same applies to L^{-1}L*; that is, L^{-1}L* = I, hence L = L*. Similarly, we have U = U*. Then D = L^{-1}AU^{-1} and D* = L*^{-1}AU*^{-1} = L^{-1}AU^{-1}; hence, D* = D.
Rather than giving a formal proof of part (a) of Theorem I.8, I will demonstrate the result involved by two examples, one for the case that A is nonsingular and the other for the case that A is singular.
Example 1: A is nonsingular.
Let

A = (  2  4  2 )
    (  1  2  3 ).     (I.22)
    ( -1  1 -1 )

We are going to multiply A by elementary matrices and elementary permutation matrices such that the final result will be an upper-triangular matrix. This is called Gaussian elimination.

First, add -1/2 times row 1 to row 2 in (I.22). This is equivalent to multiplying A by the elementary matrix E_{2,1}(-1/2). (Compare (I.20) with c = -1/2.) Then

E_{2,1}(-1/2)A = (   1   0  0 ) (  2  4  2 )   (  2  4  2 )
                 ( -1/2  1  0 ) (  1  2  3 ) = (  0  0  2 ).     (I.23)
                 (   0   0  1 ) ( -1  1 -1 )   ( -1  1 -1 )

Next, add 1/2 times row 1 to row 3, which is equivalent to multiplying (I.23) by the elementary matrix E_{3,1}(1/2):

E_{3,1}(1/2)E_{2,1}(-1/2)A = ( 2  4  2 )
                             ( 0  0  2 ).     (I.24)
                             ( 0  3  0 )

Now the pivot in position (2,2) is zero, so we exchange rows 2 and 3, which is equivalent to multiplying (I.24) by the elementary permutation matrix P_{2,3}:

P_{2,3}E_{3,1}(1/2)E_{2,1}(-1/2)A = ( 2  4  2 )   ( 2  0  0 ) ( 1  2  1 )
                                    ( 0  3  0 ) = ( 0  3  0 ) ( 0  1  0 ) = DU.     (I.25)
                                    ( 0  0  2 )   ( 0  0  2 ) ( 0  0  1 )

Moreover, because P_{2,3} is an elementary permutation matrix we have that P_{2,3}^{-1} = P_{2,3}; hence, it follows from Theorem I.7 and (I.25) that

A = E_{2,1}(-1/2)^{-1}E_{3,1}(1/2)^{-1}P_{2,3}DU = E_{2,1}(1/2)E_{3,1}(-1/2)P_{2,3}DU.     (I.26)

Furthermore,

P_{2,3}E_{2,1}(1/2)E_{3,1}(-1/2)P_{2,3} = (   1   0  0 )
                                          ( -1/2  1  0 ) = L,     (I.27)
                                          (  1/2  0  1 )

for instance. Combining (I.26) and (I.27), we find now that P_{2,3}A = LDU.
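These elimination steps are mechanical enough to sketch in code. The following minimal Python sketch (using exact `Fraction` arithmetic; the function name `ldu` is my own, not from the text) records row exchanges in P and multipliers in L, and reproduces the factorization P_{2,3}A = LDU for the matrix in (I.22). For singular input it assumes the zero-pivot rows have been fully eliminated, which holds in the examples of this section:

```python
from fractions import Fraction

def ldu(A):
    """Gaussian elimination of a square matrix: returns (P, L, D, U) with
    P A = L D U, where P is a permutation matrix, L is unit lower-triangular,
    D is diagonal, and U is unit upper-triangular.  Rows are exchanged only
    when a pivot is zero, as in the text."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        if M[k][k] == 0:                      # zero pivot: look below for a row to exchange
            for r in range(k + 1, n):
                if M[r][k] != 0:
                    M[k], M[r] = M[r], M[k]
                    P[k], P[r] = P[r], P[k]
                    for c in range(k):        # swap the multipliers recorded so far
                        L[k][c], L[r][c] = L[r][c], L[k][c]
                    break
        if M[k][k] == 0:
            continue                          # singular: a zero stays on D's diagonal
        for r in range(k + 1, n):             # eliminate below the pivot
            m = M[r][k] / M[k][k]
            L[r][k] = m
            for c in range(n):
                M[r][c] -= m * M[k][c]
    D = [[M[i][i] if i == j else Fraction(0) for j in range(n)] for i in range(n)]
    U = [[M[i][j] / M[i][i] if M[i][i] != 0 else Fraction(int(i == j))
          for j in range(n)] for i in range(n)]
    return P, L, D, U

P, L, D, U = ldu([[2, 4, 2], [1, 2, 3], [-1, 1, -1]])   # the matrix in (I.22)
```

For the matrix of (I.22) this returns P = P_{2,3}, D = diag(2, 3, 2), and the unit-triangular factors L and U of (I.25) and (I.27).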
Example 2: A is singular.
Theorem I.8 also holds for singular matrices. The only difference with the nonsingular case is that, if A is singular, then the diagonal matrix D will have zeros on the diagonal. To demonstrate this, let
A = (  2  4  2 )
    (  1  2  1 ).     (I.28)
    ( -1  1 -1 )
Because the first and last columns of the matrix (I.28) are equal, the columns are linearly dependent; hence, (I.28) is singular. Now (I.23) becomes

E_{2,1}(-1/2)A = (  2  4  2 )
                 (  0  0  0 ),
                 ( -1  1 -1 )

(I.24) becomes

E_{3,1}(1/2)E_{2,1}(-1/2)A = ( 2  4  2 )
                             ( 0  0  0 ),
                             ( 0  3  0 )

and (I.25) becomes

P_{2,3}E_{3,1}(1/2)E_{2,1}(-1/2)A = ( 2  4  2 )   ( 2  0  0 ) ( 1  2  1 )
                                    ( 0  3  0 ) = ( 0  3  0 ) ( 0  1  0 ) = DU.     (I.29)
                                    ( 0  0  0 )   ( 0  0  0 ) ( 0  0  1 )
The formal proof of part (a) of Theorem I.8 is similar to the argument in these two examples and is therefore omitted.
Note that the result (I.29) demonstrates that
Theorem I.9: The dimension of the subspace spanned by the columns of a square matrix A is equal to the number of nonzero diagonal elements of the matrix D in Theorem I.8.
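Theorem I.9 suggests a simple computational check. The Python sketch below (the name `rank_via_pivots` is mine) repeats the elimination and counts the nonzero pivots; as in the text, only exchanges that repair a zero diagonal pivot are attempted, which suffices for the matrices of this section:

```python
from fractions import Fraction

def rank_via_pivots(A):
    """Count nonzero pivots after Gaussian elimination with row exchanges.

    By Theorem I.9, this equals the dimension of the subspace spanned by
    the columns of A (for matrices that need at most the row exchanges
    performed here)."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    rank = 0
    for k in range(n):
        if M[k][k] == 0:              # try to repair a zero pivot by a row exchange
            for r in range(k + 1, n):
                if M[r][k] != 0:
                    M[k], M[r] = M[r], M[k]
                    break
        if M[k][k] == 0:
            continue                  # this diagonal entry of D stays zero
        rank += 1
        for r in range(k + 1, n):     # eliminate below the pivot
            m = M[r][k] / M[k][k]
            for c in range(n):
                M[r][c] -= m * M[k][c]
    return rank
```

For the nonsingular matrix of (I.22) this returns 3, and for the singular matrix of (I.28) it returns 2, matching the two and three nonzero diagonal elements of D in (I.25) and (I.29).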
Example 3: A is symmetric and nonsingular
Next, consider the case that A is symmetric, that is, A^T = A. For example, let

A = ( 2  4  2 )
    ( 4  0  1 ).
    ( 2  1 -1 )

Then

E_{3,2}(-3/8)E_{3,1}(-1)E_{2,1}(-2) A E_{1,2}(-2)E_{1,3}(-1)E_{2,3}(-3/8) = ( 2   0    0   )
                                                                            ( 0  -8    0   ) = D;
                                                                            ( 0   0  -15/8 )

hence,

A = (E_{3,2}(-3/8)E_{3,1}(-1)E_{2,1}(-2))^{-1} D (E_{1,2}(-2)E_{1,3}(-1)E_{2,3}(-3/8))^{-1} = LDL^T.
Thus, in the symmetric case we can eliminate each pair of nonzero elements opposite the diagonal jointly by multiplying A from the left by an appropriate elementary matrix and multiplying A from the right by the transpose of the same elementary matrix.
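The two-sided elimination just described can be sketched in Python as well. The function below (`ldl`, my name) applies each elementary matrix E_{r,k}(-m) from the left and its transpose from the right, assuming no row exchanges are needed; on the matrix of Example 3 it returns D = diag(2, -8, -15/8):

```python
from fractions import Fraction

def ldl(A):
    """Symmetric elimination A -> (L, D) with A = L D L^T.

    Assumes A is symmetric and that every pivot which must divide is
    nonzero, i.e., no row exchanges are needed (a zero pivot with only
    zeros below it is allowed, as in the singular Example 4)."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        if M[k][k] == 0:
            continue                  # singular case: zero stays on D's diagonal
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            L[r][k] = m
            for c in range(n):        # E_{r,k}(-m) from the left
                M[r][c] -= m * M[k][c]
            for c in range(n):        # its transpose from the right
                M[c][r] -= m * M[c][k]
    return L, M                       # M is now the diagonal matrix D
```

On the singular symmetric matrix of Example 4 below, the same function returns D = diag(2, -8, 0).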
Example 4: A is symmetric and singular
Although I have demonstrated this result for a nonsingular symmetric matrix, it holds for the singular case as well. For example, let
A = ( 2  4  2 )
    ( 4  0  4 ).
    ( 2  4  2 )

Then

E_{3,1}(-1)E_{2,1}(-2) A E_{1,2}(-2)E_{1,3}(-1) = ( 2   0  0 )
                                                  ( 0  -8  0 ) = D.
                                                  ( 0   0  0 )
Example 5: A is symmetric and has a zero in a pivot position
If there is a zero in a pivot position, then we need a row exchange. In that case the result A = LDL^T will no longer be valid. For example, let

A = ( 0  4  2 )
    ( 4  0  4 ).
    ( 2  4  2 )

Then, exchanging rows 1 and 2 first and eliminating,

E_{3,2}(-1)E_{3,1}(-1/2)P_{1,2}A = ( 4  0   4 )   ( 4  0   0 ) ( 1  0   1  )
                                   ( 0  4   2 ) = ( 0  4   0 ) ( 0  1  1/2 ) = DU,
                                   ( 0  0  -2 )   ( 0  0  -2 ) ( 0  0   1  )

but

L = (E_{3,2}(-1)E_{3,1}(-1/2))^{-1} = E_{3,1}(1/2)E_{3,2}(1) = (  1   0  0 )
                                                               (  0   1  0 ) ≠ U^T.
                                                               ( 1/2  1  1 )
Thus, examples 3, 4, and 5 demonstrate that
Theorem I.10: If A is symmetric and the Gaussian elimination can be conducted without the need for row exchanges, then there exists a lower-triangular matrix L with diagonal elements all equal to 1 and a diagonal matrix D such that A = LDL^T.
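As a numerical illustration of why the row-exchange caveat in Theorem I.10 matters, the short Python sketch below (helper names are mine) factors P_{1,2}A for the matrix of Example 5 and confirms that the resulting L is not the transpose of U:

```python
from fractions import Fraction

def unit_ldu(A):
    """LDU factorization assuming every pivot is nonzero (no exchanges)."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            L[r][k] = m
            for c in range(n):
                M[r][c] -= m * M[k][c]
    D = [[M[i][i] if i == j else Fraction(0) for j in range(n)] for i in range(n)]
    U = [[M[i][j] / M[i][i] for j in range(n)] for i in range(n)]
    return L, D, U

# Example 5: the (1,1) pivot of A is zero, so rows 1 and 2 are exchanged first.
A = [[0, 4, 2], [4, 0, 4], [2, 4, 2]]
PA = [A[1], A[0], A[2]]                      # P_{1,2} A
L, D, U = unit_ldu(PA)
transpose_U = [list(col) for col in zip(*U)]
print(L == transpose_U)                      # prints False: L is not U^T
```

So P_{1,2}A = LDU holds, but the symmetry between L and U is destroyed by the exchange, exactly as in Example 5.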