The Multivariate Normal Distribution and Its Application to Statistical Inference

5.1. Expectation and Variance of Random Vectors

Multivariate distributions employ the concepts of the expectation vector and variance matrix. The expected “value” or, more precisely, the expectation vector (sometimes also called the “mean vector”) of a random vector X = (X_1, …, X_n)^T is defined as the vector of expected values:

\[
E(X) \overset{\text{def}}{=} \bigl(E(X_1), \ldots, E(X_n)\bigr)^T.
\]
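As a minimal numerical sketch (Python with NumPy; the dimension, distribution, and sample size below are illustrative assumptions, not taken from the text), the expectation vector is approximated component-wise by the sample mean of independent draws:

    import numpy as np

    # Minimal sketch: approximate the expectation vector E(X) of a
    # 3-dimensional random vector by the sample mean of independent draws.
    # The mean, covariance, and sample size are illustrative choices.
    rng = np.random.default_rng(0)
    samples = rng.multivariate_normal(
        mean=[1.0, -2.0, 0.5],   # plays the role of the true E(X)
        cov=np.eye(3),
        size=100_000,
    )                            # array of shape (100000, 3)

    # The component-wise sample mean approximates (E(X1), E(X2), E(X3))^T.
    e_hat = samples.mean(axis=0)
    print(e_hat)                 # close to [1.0, -2.0, 0.5]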

Adopting the convention that the expectation of a random matrix is the matrix of the expectations of its elements, we can define the variance matrix of X as[14]

\[
\operatorname{Var}(X) = E\bigl[(X - E(X))(X - E(X))^T\bigr]
= \begin{pmatrix}
\operatorname{cov}(X_1, X_1) & \operatorname{cov}(X_1, X_2) & \cdots & \operatorname{cov}(X_1, X_n) \\
\operatorname{cov}(X_2, X_1) & \operatorname{cov}(X_2, X_2) & \cdots & \operatorname{cov}(X_2, X_n) \\
\vdots & \vdots & \ddots & \vdots \\
\operatorname{cov}(X_n, X_1) & \operatorname{cov}(X_n, X_2) & \cdots & \operatorname{cov}(X_n, X_n)
\end{pmatrix} \tag{5.1}
\]

Recall that the diagonal elements of the matrix (5.1) are variances: cov(X_j, X_j) = var(X_j). Obviously, a variance matrix is symmetric and positive (semi)definite. Moreover, note that (5.1) can be written as

Var(X) = E[XXT] – (E[X])(E[X])T. (5.2)
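The following sketch (Python with NumPy; the distribution and all parameter values are illustrative assumptions) checks these statements numerically, with expectations replaced by sample averages: the estimated variance matrix is symmetric, its eigenvalues are nonnegative, and the identity (5.2) holds.

    import numpy as np

    # Illustrative sketch: estimate Var(X) from draws of an arbitrary
    # 3-dimensional distribution and check the properties stated above.
    rng = np.random.default_rng(1)
    X = rng.multivariate_normal(
        mean=[1.0, -1.0, 0.0],
        cov=[[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 1.0]],
        size=200_000,
    )

    # Direct sample version of E[(X - E(X))(X - E(X))^T]:
    mean = X.mean(axis=0)
    centered = X - mean
    var_direct = (centered.T @ centered) / len(X)

    # Symmetry and positive semidefiniteness (all eigenvalues >= 0):
    print(np.allclose(var_direct, var_direct.T))      # True
    print(np.linalg.eigvalsh(var_direct).min() >= 0)  # True

    # Identity (5.2): Var(X) = E[XX^T] - (E[X])(E[X])^T, at the sample level.
    second_moment = (X.T @ X) / len(X)
    var_from_identity = second_moment - np.outer(mean, mean)
    print(np.allclose(var_direct, var_from_identity))  # True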

Similarly, the covariance matrix of a pair of random vectors X and Y is the matrix of covariances of their components:[15]

Cov(X, Y) = E [(X – E(X))(Y – E(Y))T]. (5.3)

Note that Cov(Y, X) = Cov(X, Y)T. Thus, for each pair X, Y there are two covariance matrices, one being the transpose of the other.
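Again as an illustrative sketch (the construction of X and Y below is an arbitrary example, not from the text), one can estimate Cov(X, Y) from (5.3) with sample averages and confirm the transpose relation Cov(Y, X) = Cov(X, Y)^T:

    import numpy as np

    # Sketch of (5.3) with sample averages: the cross-covariance matrix
    # of a 2-dimensional X and a 3-dimensional Y that depends on X.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(100_000, 2))
    Y = X @ rng.normal(size=(2, 3)) + rng.normal(size=(100_000, 3))

    def cov_xy(a, b):
        """Sample version of E[(X - E(X))(Y - E(Y))^T]."""
        a_c = a - a.mean(axis=0)
        b_c = b - b.mean(axis=0)
        return (a_c.T @ b_c) / len(a)

    C_xy = cov_xy(X, Y)   # shape (2, 3)
    C_yx = cov_xy(Y, X)   # shape (3, 2)
    print(np.allclose(C_yx, C_xy.T))  # True: Cov(Y, X) = Cov(X, Y)^T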
