Gaussian Elimination of a Square Matrix and the Gauss-Jordan Iteration for Inverting a Matrix

I.6.1. Gaussian Elimination of a Square Matrix

The results in the previous section are the tools we need to derive the following result:

Theorem I.8: Let A be a square matrix.

(a) There exists a permutation matrix P, possibly equal to the unit matrix I, a lower-triangular matrix L with diagonal elements all equal to 1, a diagonal matrix D, and an upper-triangular matrix U with diagonal elements all equal to 1 such that PA = LDU.

(b) If A is nonsingular and P = I, this decomposition is unique; that is, if A = LDU = L*D*U*, then L* = L, D* = D, and U* = U.

The proof of part (b) is as follows: LDU = L*D*U* implies

L^{-1}L*D* = DU(U*)^{-1}. (I.21)

It is easy to verify that the inverse of a lower-triangular matrix is lower triangular and that the product of lower-triangular matrices is lower triangular. Thus the left-hand side of (I.21) is lower triangular. Similarly, the right-hand side of (I.21) is upper triangular. Consequently, the off-diagonal elements in both sides are zero: Both matrices in (I.21) are diagonal. Because D* is diagonal and nonsingular, it follows from (I.21) that L^{-1}L* = DU(U*)^{-1}(D*)^{-1} is diagonal. Moreover, because the diagonal elements of L^{-1} and L* are all equal to 1, the same applies to L^{-1}L*; that is, L^{-1}L* = I; hence, L = L*. Similarly, we have U = U*. Then D = L^{-1}AU^{-1} and D* = (L*)^{-1}A(U*)^{-1} = L^{-1}AU^{-1}; hence, D = D*.
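The two closure properties invoked in this proof can be checked numerically. The following Python sketch (exact rational arithmetic; the helper names are my own, not from the text) verifies that the product of lower-triangular matrices and the inverse of a unit lower-triangular matrix are again lower triangular:

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inverse_unit_lower(L):
    """Invert a lower-triangular matrix with unit diagonal by forward substitution."""
    n = len(L)
    inv = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            # L has unit diagonal, so entry (i, j) of the inverse is determined
            # by the already-computed entries in rows j..i-1.
            inv[i][j] = -sum(Fraction(L[i][k]) * inv[k][j] for k in range(j, i))
    return inv

def is_lower_triangular(A):
    n = len(A)
    return all(A[i][j] == 0 for i in range(n) for j in range(i + 1, n))

L1 = [[1, 0, 0], [Fraction(1, 2), 1, 0], [Fraction(-1, 2), 0, 1]]
L2 = [[1, 0, 0], [2, 1, 0], [3, 4, 1]]
```

Both `is_lower_triangular(matmul(L1, L2))` and `is_lower_triangular(inverse_unit_lower(L1))` then return True, in line with the argument above.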

Rather than giving a formal proof of part (a) of Theorem I.8, I will demonstrate the result involved by two examples, one for the case that A is nonsingular and the other for the case that A is singular.

Example 1: A is nonsingular.

Let

A = [  2  4  2 ]
    [  1  2  3 ] .   (I.22)
    [ -1  1 -1 ]

We are going to multiply A by elementary matrices and elementary permutation matrices such that the final result will be an upper-triangular matrix. This is called Gaussian elimination.

First, add -1/2 times row 1 to row 2 in (I.22). This is equivalent to multiplying A by the elementary matrix E2,1(-1/2). (Compare (I.20) with c = -1/2.) Then

E2,1(-1/2)A = [  1    0  0 ] [  2  4  2 ]   [  2  4  2 ]
              [ -0.5  1  0 ] [  1  2  3 ] = [  0  0  2 ] .   (I.23)
              [  0    0  1 ] [ -1  1 -1 ]   [ -1  1 -1 ]

Next, add 1/2 times row 1 to row 3, which is equivalent to multiplying (I.23) by the elementary matrix E3,1(1/2):

E3,1(1/2)E2,1(-1/2)A = [ 2  4  2 ]
                       [ 0  0  2 ] .   (I.24)
                       [ 0  3  0 ]

Because the pivot in position (2,2) is now zero, we swap rows 2 and 3, which is equivalent to multiplying (I.24) by the elementary permutation matrix P2,3:

P2,3E3,1(1/2)E2,1(-1/2)A = [ 2  4  2 ]   [ 2  0  0 ] [ 1  2  1 ]
                           [ 0  3  0 ] = [ 0  3  0 ] [ 0  1  0 ] = DU,   (I.25)
                           [ 0  0  2 ]   [ 0  0  2 ] [ 0  0  1 ]

for instance. Moreover, because P2,3 is an elementary permutation matrix we have that P2,3^{-1} = P2,3; hence, it follows from Theorem I.7 and (I.25) that

P2,3A = (E2,1(1/2)E3,1(-1/2))^{-1}DU   (I.26)

and

(E2,1(1/2)E3,1(-1/2))^{-1} = [  1    0  0 ]^{-1}   [  1    0  0 ]
                             [  0.5  1  0 ]      = [ -0.5  1  0 ] = L,   (I.27)
                             [ -0.5  0  1 ]        [  0.5  0  1 ]

for instance. Combining (I.26) and (I.27), we find now that P2,3A = LDU.
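The elimination steps in this example can be automated. Here is a minimal Python sketch using exact arithmetic; the function `ldu` and its pivoting rule (swapping rows only when a pivot is exactly zero, as in the example) are my own illustration, not an algorithm from the text:

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def ldu(A):
    """Return (P, L, D, U) with PA = LDU, for a nonsingular square A.

    Rows are exchanged only when a pivot is exactly zero, mirroring the example.
    """
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    P = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    L = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    for k in range(n):
        if M[k][k] == 0:                      # zero pivot: exchange with a row below
            s = next(i for i in range(k + 1, n) if M[i][k] != 0)
            M[k], M[s] = M[s], M[k]
            P[k], P[s] = P[s], P[k]
            for j in range(k):                # keep earlier multipliers attached to their rows
                L[k][j], L[s][j] = L[s][j], L[k][j]
        for i in range(k + 1, n):
            L[i][k] = M[i][k] / M[k][k]       # multiplier that clears M[i][k]
            M[i] = [M[i][j] - L[i][k] * M[k][j] for j in range(n)]
    D = [[M[i][i] if i == j else Fraction(0) for j in range(n)] for i in range(n)]
    U = [[M[i][j] / M[i][i] for j in range(n)] for i in range(n)]
    return P, L, D, U

P, L, D, U = ldu([[2, 4, 2], [1, 2, 3], [-1, 1, -1]])
```

For the matrix (I.22) this reproduces the factorization found above: P is P2,3, D = diag(2, 3, 2), and `matmul(P, A)` equals `matmul(matmul(L, D), U)`.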

Example 2: A is singular.

Theorem I.8 also holds for singular matrices. The only difference with the nonsingular case is that, if A is singular, then the diagonal matrix D will have zeros on the diagonal. To demonstrate this, let

A = [  2  4  2 ]
    [  1  2  1 ] .   (I.28)
    [ -1  1 -1 ]

Because the first and last columns of the matrix (I.28) are equal, the columns are linearly dependent; hence, (I.28) is singular. Now (I.23) becomes

E2,1(-1/2)A = [  1    0  0 ] [  2  4  2 ]   [  2  4  2 ]
              [ -0.5  1  0 ] [  1  2  1 ] = [  0  0  0 ] ,
              [  0    0  1 ] [ -1  1 -1 ]   [ -1  1 -1 ]

(I.24) becomes

E3,1(1/2)E2,1(-1/2)A = [ 2  4  2 ]
                       [ 0  0  0 ] ,
                       [ 0  3  0 ]

and (I.25) becomes

P2,3E3,1(1/2)E2,1(-1/2)A = [ 2  4  2 ]   [ 2  0  0 ] [ 1  2  1 ]
                           [ 0  3  0 ] = [ 0  3  0 ] [ 0  1  0 ] = DU.   (I.29)
                           [ 0  0  0 ]   [ 0  0  0 ] [ 0  0  1 ]

The formal proof of part (a) of Theorem I.8 is similar to the argument in these two examples and is therefore omitted.

Note that the result (I.29) demonstrates that

Theorem I.9: The dimension of the subspace spanned by the columns of a square matrix A is equal to the number of nonzero diagonal elements of the matrix D in Theorem I.8.
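Theorem I.9 is easy to illustrate in code: run the elimination on the singular matrix (I.28) and count the nonzero pivots. In the Python sketch below, `diagonal_pivots` is my own helper; it swaps rows only on a zero pivot and leaves the pivot at zero when the whole remaining column vanishes:

```python
from fractions import Fraction

def diagonal_pivots(A):
    """Gaussian elimination with row exchanges; returns the diagonal of D."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    for k in range(n):
        if M[k][k] == 0:
            s = next((i for i in range(k + 1, n) if M[i][k] != 0), None)
            if s is None:          # column k is zero from row k down: pivot stays 0
                continue
            M[k], M[s] = M[s], M[k]
        for i in range(k + 1, n):
            c = M[i][k] / M[k][k]
            M[i] = [M[i][j] - c * M[k][j] for j in range(n)]
    return [M[i][i] for i in range(n)]

pivots = diagonal_pivots([[2, 4, 2], [1, 2, 1], [-1, 1, -1]])   # the matrix (I.28)
rank = sum(1 for p in pivots if p != 0)
```

This yields pivots equal to the diagonal of D in (I.29), namely 2, 3, 0, and hence rank 2: the columns of (I.28) span a two-dimensional subspace.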

Example 3: A is symmetric and nonsingular

Next, consider the case that A is symmetric, that is, A^T = A. For example, let

A = [ 2  4  2 ]
    [ 4  0  1 ] .
    [ 2  1 -1 ]

Then

E3,2(-3/8)E3,1(-1)E2,1(-2) A E1,2(-2)E1,3(-1)E2,3(-3/8) = [ 2   0    0   ]
                                                          [ 0  -8    0   ] = D;
                                                          [ 0   0  -15/8 ]

hence,

A = (E3,2(-3/8)E3,1(-1)E2,1(-2))^{-1} D (E1,2(-2)E1,3(-1)E2,3(-3/8))^{-1} = LDL^T.

Thus, in the symmetric case we can eliminate each pair of nonzero elements opposite the diagonal jointly by multiplying A from the left by an appropriate elementary matrix and multiplying A from the right by the transpose of the same elementary matrix.
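This joint row-and-column elimination can be sketched in code. Because the same multiplier clears the pair of symmetric entries, it suffices to record the row multipliers in L and read off the pivots; the function `ldlt` below is my own illustration (Python, exact arithmetic) and assumes all pivots are nonzero:

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def ldlt(A):
    """A = L D L^T for symmetric A, assuming no zero pivot is encountered."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = M[i][k] / M[k][k]   # same multiplier clears M[i][k] and M[k][i]
            M[i] = [M[i][j] - L[i][k] * M[k][j] for j in range(n)]
    D = [[M[i][i] if i == j else Fraction(0) for j in range(n)] for i in range(n)]
    return L, D

A = [[2, 4, 2], [4, 0, 1], [2, 1, -1]]     # the symmetric matrix of Example 3
L, D = ldlt(A)
Lt = [list(row) for row in zip(*L)]        # L^T
```

Here D comes out as diag(2, -8, -15/8), and `matmul(matmul(L, D), Lt)` returns A, confirming A = LDL^T for this example.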

Example 4: A is symmetric and singular

Although I have demonstrated this result for a nonsingular symmetric matrix, it holds for the singular case as well. For example, let

A = [ 2  4  2 ]
    [ 4  0  4 ] .
    [ 2  4  2 ]

Then

E3,1(-1)E2,1(-2) A E1,2(-2)E1,3(-1) = [ 2   0  0 ]
                                      [ 0  -8  0 ] = D.
                                      [ 0   0  0 ]

Example 5: A is symmetric and has a zero in a pivot position

If there is a zero in a pivot position, then we need a row exchange. In that case the result A = LDL^T will no longer be valid. For example, let

A = [ 0  4  2 ]
    [ 4  0  4 ] .
    [ 2  4  2 ]

Then

E3,2(-1)E3,1(-1/2)P1,2 A = [ 4  0   4 ]   [ 4  0   0 ] [ 1  0   1  ]
                           [ 0  4   2 ] = [ 0  4   0 ] [ 0  1  1/2 ] = DU,
                           [ 0  0  -2 ]   [ 0  0  -2 ] [ 0  0   1  ]

but

L = (E3,2(-1)E3,1(-1/2))^{-1} = E3,1(1/2)E3,2(1) = [ 1    0  0 ]   [ 1   0   0 ]
                                                   [ 0    1  0 ] ≠ [ 0   1   0 ] = U^T.
                                                   [ 1/2  1  1 ]   [ 1  1/2  1 ]
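The failure of A = LDL^T under a row exchange is easy to reproduce numerically. The Python sketch below (my own helper code, exact arithmetic) carries out the steps of this example and confirms that P1,2 A = LDU holds while L differs from U^T:

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 4, 2], [4, 0, 4], [2, 4, 2]]
P12 = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]            # row exchange curing the zero pivot
M = [[Fraction(x) for x in row] for row in matmul(P12, A)]

# Eliminate below the diagonal, recording the multipliers in L.
L = [[Fraction(i == j) for j in range(3)] for i in range(3)]
for k in range(3):
    for i in range(k + 1, 3):
        L[i][k] = M[i][k] / M[k][k]
        M[i] = [M[i][j] - L[i][k] * M[k][j] for j in range(3)]

D = [[M[i][i] if i == j else Fraction(0) for j in range(3)] for i in range(3)]
U = [[M[i][j] / M[i][i] for j in range(3)] for i in range(3)]
Ut = [list(row) for row in zip(*U)]                # U^T
```

One finds D = diag(4, 4, -2), L with third row (1/2, 1, 1), and U^T with third row (1, 1/2, 1): `matmul(P12, A)` equals `matmul(matmul(L, D), U)`, yet `L != Ut`.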

Thus, examples 3, 4, and 5 demonstrate that

Theorem I.10: If A is symmetric and the Gaussian elimination can be conducted without the need for row exchanges, then there exists a lower-triangular matrix L with diagonal elements all equal to 1 and a diagonal matrix D such that A = LDL^T.
