11.3 DETERMINANTS AND INVERSES
Throughout this section, all matrices are square and n × n.

Before we give a formal definition of the determinant of a square matrix, let us give some examples. The determinant of a 1 × 1 matrix, or a scalar, is the scalar itself. Consider a 2 × 2 matrix
A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}.

Its determinant, denoted by |A| or det A, is defined by

(11.3.1)  |A| = a_{11}a_{22} - a_{21}a_{12}.
The determinant of a 3 × 3 matrix

\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}

is given by

(11.3.2)  |A| = a_{11}a_{22}a_{33} - a_{11}a_{32}a_{23} - a_{21}a_{12}a_{33} + a_{21}a_{32}a_{13} + a_{31}a_{12}a_{23} - a_{31}a_{22}a_{13}.
Now we present a formal definition, given inductively on the assumption that the determinant of an (n − 1) × (n − 1) matrix has already been defined.
DEFINITION 11.3.1  Let A = \{a_{ij}\} be an n × n matrix, and let A_{ij} be the (n − 1) × (n − 1) matrix obtained by deleting the ith row and the jth column from A. Then we define the determinant of A, denoted by |A|, as

(11.3.3)  |A| = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} |A_{ij}|.

The j above can be arbitrarily chosen as any integer 1 through n without changing the value of |A|. The term (-1)^{i+j} |A_{ij}| is called the cofactor of the element a_{ij}.
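As an illustration, the inductive definition above can be implemented directly. The sketch below, in Python, expands along the first column (j fixed at the first column; any other choice gives the same value, as noted above) and checks the result against the explicit 2 × 2 and 3 × 3 formulas. The function name is ours, not the text's.

```python
def det_cofactor(A):
    """Determinant by cofactor expansion (11.3.3) along the first column."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for i in range(n):
        # A_i1: delete row i and the first column from A
        minor = [row[1:] for k, row in enumerate(A) if k != i]
        total += (-1) ** i * A[i][0] * det_cofactor(minor)
    return total

# 2 x 2 case: a11*a22 - a21*a12
print(det_cofactor([[1, 2], [3, 4]]))                    # -2
# 3 x 3 case agrees with the six-term formula
print(det_cofactor([[2, 1, 0], [1, 3, 2], [0, 1, 1]]))   # 1
```

The recursion mirrors the inductive structure of Definition 11.3.1: each call reduces an n × n determinant to n determinants of size (n − 1) × (n − 1).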
Alternatively, the determinant may be defined as follows. First, we write A as a collection of its columns:

(11.3.4)  A = (a_1, a_2, \ldots, a_n),

where a_1, a_2, \ldots, a_n are n × 1 column vectors. Consider a sequence of n numbers defined by the rule that the first number is an element of a_1 (the first column of A), the second number is an element of a_2, and so on, chosen in such a way that no two of the elements lie on the same row. One can define n! distinct such sequences; denote the ith sequence, i = 1, 2, \ldots, n!, by [a_1(i), a_2(i), \ldots, a_n(i)]. Let r_1(i) be the row number of a_1(i), and so on, and consider the sequence [r_1(i), r_2(i), \ldots, r_n(i)]. Let N(i) be the smallest number of transpositions by which [r_1(i), r_2(i), \ldots, r_n(i)] can be obtained from [1, 2, \ldots, n]. For example, in the case of a 3 × 3 matrix, N = 0 for the sequence (a_{11}, a_{22}, a_{33}), N = 1 for (a_{11}, a_{32}, a_{23}), and N = 2 for (a_{21}, a_{32}, a_{13}). Then we have

(11.3.5)  |A| = \sum_{i=1}^{n!} (-1)^{N(i)} a_1(i) a_2(i) \cdots a_n(i).
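The permutation definition can likewise be sketched in Python: each admissible choice of one element per column with distinct rows corresponds to a permutation of the row indices, and the sign (-1)^{N(i)} equals the permutation's parity, which we count here via inversions (the inversion count has the same parity as the minimal number of transpositions). Function and variable names are illustrative.

```python
from itertools import permutations

def det_permutation(A):
    """Determinant via (11.3.5): a sum over all n! row-index permutations."""
    n = len(A)
    total = 0
    for rows in permutations(range(n)):
        # N(i) has the same parity as the number of inversions in `rows`
        inversions = sum(1 for p in range(n) for q in range(p + 1, n)
                         if rows[p] > rows[q])
        sign = -1 if inversions % 2 else 1
        product = 1
        for col in range(n):
            product *= A[rows[col]][col]   # one element from each column
        total += sign * product
    return total

print(det_permutation([[2, 1, 0], [1, 3, 2], [0, 1, 1]]))  # 1
```

For n = 3 this enumerates exactly the six signed terms of (11.3.2), confirming that the two definitions agree.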
Let us state several useful theorems concerning the determinant.

THEOREM 11.3.1  |A| = |A'|.

This theorem can be proved directly from (11.3.5). Because of the theorem, we may state all the results concerning the determinant in terms of the column vectors only, as we have done in (11.3.3) and (11.3.5), since the same results would hold in terms of the row vectors.
THEOREM 11.3.2 If any column consists only of zeroes, the determinant is zero.
Theorem 11.3.2 follows immediately from (11.3.3). The determinant of a matrix in which any row is a zero vector is also zero because of Theorem 11.3.1.
THEOREM 11.3.3  If two adjacent columns are interchanged, the determinant changes sign.
The proof of this theorem is apparent from (11.3.5), since the effect of interchanging adjacent columns is to increase or decrease each N(i) by one. (As a corollary, we can easily prove the theorem without the word "adjacent.")
THEOREM 11.3.4 If any two columns are identical, the determinant is zero.
This theorem follows immediately from Theorem 11.3.3.
THEOREM 11.3.5  |AB| = |A| |B| if A and B are square matrices of the same size.

The proof of Theorem 11.3.5 is rather involved, but can be directly derived from Definition 11.3.1.
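Although the proof is involved, the identity is easy to spot-check numerically. A minimal sketch in pure Python (helper names are ours):

```python
def det(A):
    """Determinant by cofactor expansion along the first column."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** i * A[i][0] * det([r[1:] for k, r in enumerate(A) if k != i])
               for i in range(len(A)))

def matmul(A, B):
    """Ordinary matrix product of two nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]   # |A| = -2
B = [[2, 0], [1, 3]]   # |B| = 6
print(det(matmul(A, B)), det(A) * det(B))  # -12 -12
```

With integer entries the check is exact, since no floating-point arithmetic is involved.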
We now define the inverse of a square matrix, but only for a matrix with a nonzero determinant.
DEFINITION 11.3.2  The inverse of a matrix A, denoted by A^{-1}, is the matrix defined by

(11.3.6)  A^{-1} = \frac{1}{|A|} \{(-1)^{i+j} |A_{ji}|\},

provided that |A| ≠ 0. Here (-1)^{i+j} |A_{ji}| is the cofactor of a_{ji} as given in Definition 11.3.1, and \{(-1)^{i+j} |A_{ji}|\} is the matrix whose i,jth element is (-1)^{i+j} |A_{ji}|.
The use of the word “inverse” is justified by the following theorem.
THEOREM 11.3.6  A^{-1}A = AA^{-1} = I for any matrix A such that |A| ≠ 0.

This theorem can be easily proved from Definitions 11.3.1 and 11.3.2 and Theorem 11.3.4. It implies that if AB = I, then B = A^{-1} and B^{-1} = A.
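Definition 11.3.2 and Theorem 11.3.6 can be checked together with a short sketch: build A^{-1} from the cofactors of a_{ji} divided by |A|, then confirm that the product AA^{-1} is the identity (helper names are ours).

```python
def det(A):
    """Determinant by cofactor expansion along the first column."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** i * A[i][0] * det([r[1:] for k, r in enumerate(A) if k != i])
               for i in range(len(A)))

def inverse(A):
    """(i, j) entry of A^{-1} is the cofactor of a_ji divided by |A| -- (11.3.6)."""
    d = det(A)
    assert d != 0, "inverse is defined only when |A| != 0"
    n = len(A)
    def minor(p, q):   # delete row p and column q from A
        return [r[:q] + r[q + 1:] for k, r in enumerate(A) if k != p]
    return [[(-1) ** (i + j) * det(minor(j, i)) / d for j in range(n)]
            for i in range(n)]

A = [[2, 1], [1, 1]]            # |A| = 1
Ainv = inverse(A)               # [[1.0, -1.0], [-1.0, 2.0]]
product = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
print(product)                  # [[1.0, 0.0], [0.0, 1.0]]
```

Note the transposition: entry (i, j) of the inverse uses the minor A_{ji}, not A_{ij}; this is what makes the product come out to I.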
THEOREM 11.3.7  If A and B are square matrices of the same size such that |A| ≠ 0 and |B| ≠ 0, then (AB)^{-1} = B^{-1}A^{-1}.

The theorem follows immediately from the identity ABB^{-1}A^{-1} = I.
THEOREM 11.3.8  Let A, B, C, and D be matrices such that

\begin{pmatrix} A & B \\ C & D \end{pmatrix}

is square and |D| ≠ 0. (Note that A and D must be square, but B and C need not be.) Then

(11.3.7)  \left| \begin{matrix} A & B \\ C & D \end{matrix} \right| = |A - BD^{-1}C| \, |D|.

Proof. We have

(11.3.8)  \begin{pmatrix} I & -BD^{-1} \\ 0 & I \end{pmatrix} \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} A - BD^{-1}C & 0 \\ C & D \end{pmatrix},

where 0 denotes a matrix of appropriate size which consists entirely of zeroes. We can ascertain from (11.3.5) that the determinant of the first matrix on the left-hand side of (11.3.8) is unity and that the determinant of the right-hand side of (11.3.8) is equal to |A - BD^{-1}C| |D|. Therefore, taking the determinant of both sides of (11.3.8) and using Theorem 11.3.5 yields (11.3.7). □
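The partitioned-determinant identity is easy to verify numerically. The sketch below assumes NumPy is available and uses arbitrary small blocks of our own choosing.

```python
import numpy as np

# hypothetical 2 x 2 blocks; D is invertible (|D| = 5)
A = np.array([[4.0, 1.0], [2.0, 3.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
C = np.array([[0.0, 2.0], [1.0, 0.0]])
D = np.array([[3.0, 1.0], [1.0, 2.0]])

M = np.block([[A, B], [C, D]])                 # the partitioned matrix
lhs = np.linalg.det(M)
rhs = np.linalg.det(A - B @ np.linalg.inv(D) @ C) * np.linalg.det(D)
gap = abs(lhs - rhs)
print(gap < 1e-9)  # True
```

Any blocks with |D| ≠ 0 would do; only the sizes must be conformable so that the partitioned matrix is square.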
THEOREM 11.3.9

\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} E^{-1} & -E^{-1}BD^{-1} \\ -D^{-1}CE^{-1} & F^{-1} \end{pmatrix},

where E = A - BD^{-1}C, F = D - CA^{-1}B, E^{-1} = A^{-1} + A^{-1}BF^{-1}CA^{-1}, and F^{-1} = D^{-1} + D^{-1}CE^{-1}BD^{-1}, provided that the inverse on the left-hand side exists.

Proof. To prove this theorem, simply premultiply both sides by

\begin{pmatrix} A & B \\ C & D \end{pmatrix}

and verify that the product is the identity. □
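The partitioned-inverse formula can be verified the same way. A NumPy sketch with hypothetical blocks (E and F as defined in the theorem):

```python
import numpy as np

# hypothetical 2 x 2 blocks; A and D are both invertible here
A = np.array([[4.0, 1.0], [2.0, 3.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
C = np.array([[0.0, 2.0], [1.0, 0.0]])
D = np.array([[3.0, 1.0], [1.0, 2.0]])

Ainv, Dinv = np.linalg.inv(A), np.linalg.inv(D)
E = A - B @ Dinv @ C
F = D - C @ Ainv @ B
Einv, Finv = np.linalg.inv(E), np.linalg.inv(F)

M = np.block([[A, B], [C, D]])
Minv = np.block([[Einv, -Einv @ B @ Dinv],
                 [-Dinv @ C @ Einv, Finv]])

err = np.abs(M @ Minv - np.eye(4)).max()       # should be ~0
print(err < 1e-9)  # True

# the alternative expressions for E^{-1} and F^{-1} also agree
err2 = np.abs(Einv - (Ainv + Ainv @ B @ Finv @ C @ Ainv)).max()
print(err2 < 1e-9)  # True
```

The second check confirms the stated identity E^{-1} = A^{-1} + A^{-1}BF^{-1}CA^{-1}, which requires A, D, E, and F all to be invertible, as they are for these blocks.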