# Inverse of a Matrix in Terms of Cofactors

Theorem I.31 now enables us to write the inverse of a matrix $A$ in terms of cofactors and the determinant as follows.

Definition I.20: The matrix

$$A_{\text{adjoint}} = \begin{pmatrix} \text{cof}_{1,1}(A) & \cdots & \text{cof}_{n,1}(A) \\ \vdots & \ddots & \vdots \\ \text{cof}_{1,n}(A) & \cdots & \text{cof}_{n,n}(A) \end{pmatrix}$$

is called the adjoint matrix of A.

Note that the adjoint matrix is the transpose of the matrix of cofactors, with typical $(i, j)$ element $\text{cof}_{j,i}(A)$. Next, observe from Theorem I.31 that $\det(A) = \sum_{k=1}^{n} a_{i,k}\,\text{cof}_{i,k}(A)$ is just diagonal element $i$ of $A \cdot A_{\text{adjoint}}$. Moreover, suppose that row $j$ of $A$ is replaced by row $i$, and call this matrix $B$. This replacement has no effect on $\text{cof}_{j,k}(A)$, but $\sum_{k=1}^{n} a_{i,k}\,\text{cof}_{j,k}(A) = \sum_{k=1}^{n} b_{j,k}\,\text{cof}_{j,k}(B)$ is now the determinant of $B$. Because the rows of $B$ are linearly dependent, $\det(B) = 0$. Thus, we have

$$\sum_{k=1}^{n} a_{i,k}\,\text{cof}_{j,k}(A) = \begin{cases} \det(A) & \text{if } i = j, \\ 0 & \text{if } i \neq j; \end{cases}$$
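This identity can be checked numerically. Below is a minimal sketch (not from the text) using NumPy; the helper `cof` and the particular matrix `A` are illustrative choices, and indices are zero-based in the code versus one-based in the formulas.

```python
import numpy as np

def cof(A, i, j):
    """Cofactor cof_{i,j}(A): (-1)^(i+j) times the minor obtained by
    deleting row i and column j (zero-based indices here)."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

# An arbitrary nonsingular test matrix (illustrative choice).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
n = A.shape[0]

# sum_k a_{i,k} cof_{j,k}(A) should equal det(A) when i == j, else 0.
for i in range(n):
    for j in range(n):
        s = sum(A[i, k] * cof(A, j, k) for k in range(n))
        target = np.linalg.det(A) if i == j else 0.0
        assert abs(s - target) < 1e-10
```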

hence,

Theorem I.32: If $\det(A) \neq 0$, then $A^{-1} = \frac{1}{\det(A)}\, A_{\text{adjoint}}$.
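Theorem I.32 can likewise be illustrated numerically. The sketch below (our construction, not the text's) builds the adjoint as the transpose of the cofactor matrix, per Definition I.20, and compares the resulting inverse with NumPy's own.

```python
import numpy as np

def cof(A, i, j):
    """Cofactor: (-1)^(i+j) times the determinant of the (i, j) minor."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

def adjoint(A):
    """Transpose of the matrix of cofactors (Definition I.20)."""
    n = A.shape[0]
    return np.array([[cof(A, i, j) for j in range(n)] for i in range(n)]).T

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])  # det(A) = 10, so A is invertible
A_inv = adjoint(A) / np.linalg.det(A)

assert np.allclose(A_inv, np.linalg.inv(A))
assert np.allclose(A @ A_inv, np.eye(2))
```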

Note that the cofactors $\text{cof}_{i,k}(A)$ do not depend on $a_{i,j}$. It follows therefore from Theorem I.31 that

$$\frac{\partial \det(A)}{\partial a_{i,j}} = \text{cof}_{i,j}(A). \tag{I.58}$$

Using the well-known fact that $d\ln(x)/dx = 1/x$, we now find from Theorem I.32 and (I.58) that

$$\frac{\partial \ln[\det(A)]}{\partial A} = \left(\frac{\partial \ln[\det(A)]}{\partial a_{i,j}}\right) = \frac{1}{\det(A)}\, A_{\text{adjoint}}^{\mathsf{T}} = (A^{-1})^{\mathsf{T}} = (A^{\mathsf{T}})^{-1}. \tag{I.59}$$

Note that (I.59) generalizes the formula $d\ln(x)/dx = 1/x$ to matrices. This result will be useful in deriving the maximum likelihood estimator of the variance matrix of the multivariate normal distribution.
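A quick finite-difference check may help make the matrix derivative in (I.59) concrete. The script below (an illustrative sketch, not part of the text) perturbs each entry $a_{i,j}$ in turn and confirms that the resulting gradient of $\ln[\det(A)]$ matches $(A^{-1})^{\mathsf{T}}$.

```python
import numpy as np

# An arbitrary matrix with positive determinant, so ln(det) is defined.
A = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 4.0]])
eps = 1e-6
n = A.shape[0]
grad = np.zeros_like(A)

# Central differences: approximate d ln(det(A)) / d a_{i,j} entrywise.
for i in range(n):
    for j in range(n):
        Ap, Am = A.copy(), A.copy()
        Ap[i, j] += eps
        Am[i, j] -= eps
        grad[i, j] = (np.log(np.linalg.det(Ap))
                      - np.log(np.linalg.det(Am))) / (2 * eps)

# (I.59): the gradient matrix equals (A^{-1})^T.
assert np.allclose(grad, np.linalg.inv(A).T, atol=1e-5)
```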