Appendix I – Review of Linear Algebra
I.1. Vectors in a Euclidean Space
A vector is a set of coordinates that locates a point in a Euclidean space. For example, in the two-dimensional Euclidean space $\mathbb{R}^2$ the vector

$$a = \begin{pmatrix} 6 \\ 4 \end{pmatrix} \tag{I.1}$$

is the point whose location in the plane is determined by moving $a_1 = 6$ units away from the origin along the horizontal axis (axis 1) and then moving $a_2 = 4$ units away parallel to the vertical axis (axis 2), as displayed in Figure I.1. The distances $a_1$ and $a_2$ are called the components of the vector $a$ involved.
An alternative interpretation of the vector $a$ is a force pulling from the origin (the intersection of the two axes). This force is characterized by its direction (the angle of the line in Figure I.1) and its strength (the length of the line piece between point $a$ and the origin). As to the latter, it follows from the Pythagorean theorem that this length is the square root of the sum of the squared distances of point $a$ from the vertical and horizontal axes, $\sqrt{a_1^2 + a_2^2} = \sqrt{6^2 + 4^2} = 2\sqrt{13}$, and is denoted by $\|a\|$. More generally, the length of a vector

$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \tag{I.2}$$

in $\mathbb{R}^n$ is defined by

$$\|x\| = \sqrt{\sum_{j=1}^{n} x_j^2}. \tag{I.3}$$
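The length formula above can be checked numerically. The following short Python sketch (not part of the original text) computes $\|a\|$ for the vector $a = (6, 4)$ of Figure I.1:

```python
import math

# Numerical check of the length formula for the vector a = (6, 4)
# from Figure I.1.
a = (6.0, 4.0)

# Square root of the sum of squared components (Pythagorean theorem).
length = math.sqrt(sum(comp ** 2 for comp in a))

print(length)   # sqrt(52) = 2*sqrt(13), approximately 7.2111
```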
Two basic operations apply to vectors in $\mathbb{R}^n$. The first basic operation is scalar multiplication:

$$c \cdot x = \begin{pmatrix} c \cdot x_1 \\ \vdots \\ c \cdot x_n \end{pmatrix}, \tag{I.4}$$

where $c \in \mathbb{R}$ is a scalar. Thus, vectors in $\mathbb{R}^n$ are multiplied by a scalar by multiplying each of the components by this scalar. The effect of scalar multiplication is that the point $x$ is moved by a factor $c$ along the line through the origin and the original point $x$. For example, if we multiply the vector $a$ in Figure I.1 by $c = 1.5$, the effect is the following:

$$c \cdot a = 1.5 \cdot \begin{pmatrix} 6 \\ 4 \end{pmatrix} = \begin{pmatrix} 9 \\ 6 \end{pmatrix}.$$
The second operation is addition. Let $x$ be the vector (I.2), and let

$$y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}. \tag{I.5}$$

Then

$$x + y = \begin{pmatrix} x_1 + y_1 \\ \vdots \\ x_n + y_n \end{pmatrix}. \tag{I.6}$$

Thus, vectors are added by adding up the corresponding components. Of course, this operation is only defined for conformable vectors, that is, vectors with the same number of components.
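The two basic operations are easy to implement componentwise. A minimal Python sketch (an illustration, not part of the original text):

```python
# Componentwise scalar multiplication and addition for vectors in R^n,
# mirroring the two basic operations defined above.

def scalar_mult(c, x):
    return tuple(c * xi for xi in x)

def add(x, y):
    # Addition is only defined for conformable vectors.
    if len(x) != len(y):
        raise ValueError("vectors must be conformable")
    return tuple(xi + yi for xi, yi in zip(x, y))

a = (6, 4)
print(scalar_mult(1.5, a))   # (9.0, 6.0)
print(add(a, (3, 7)))        # (9, 11)
```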
As an example of the addition operation, let $a$ be the vector (I.1), and let

$$b = \begin{pmatrix} 3 \\ 7 \end{pmatrix}. \tag{I.7}$$

Then

$$a + b = \begin{pmatrix} 6 \\ 4 \end{pmatrix} + \begin{pmatrix} 3 \\ 7 \end{pmatrix} = \begin{pmatrix} 9 \\ 11 \end{pmatrix} = c, \tag{I.8}$$

for instance. This result is displayed in Figure I.3. We see from Figure I.3 that the origin together with the points $a$, $b$, and $c = a + b$ form a parallelogram (which is easy to prove). In terms of forces, the combined forces represented by the vectors $a$ and $b$ result in the force represented by the vector $c = a + b$.
The distance between the vectors $a$ and $b$ in Figure I.3 is $\|a - b\|$. To see this, observe that the length of the horizontal line piece between the vertical line through $b$ and point $a$ is $a_1 - b_1$, and similarly the vertical line piece between $b$ and the horizontal line through $a$ has length $b_2 - a_2$. These two line pieces, together with the line piece connecting the points $a$ and $b$, form a triangle for which the Pythagorean theorem applies: the squared distance between $a$ and $b$ is equal to $(a_1 - b_1)^2 + (a_2 - b_2)^2 = \|a - b\|^2$. More generally, the distance between the vector $x$ in (I.2) and the vector $y$ in (I.5) is

$$\|x - y\| = \sqrt{\sum_{j=1}^{n} (x_j - y_j)^2}. \tag{I.9}$$

Moreover, it follows from (I.9) and the law of cosines[21] that the angle $\varphi$ between the vector $x$ in (I.2) and the vector $y$ in (I.5) satisfies

$$\cos(\varphi) = \frac{\sum_{j=1}^{n} x_j y_j}{\|x\| \cdot \|y\|}. \tag{I.10}$$
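The distance formula (I.9) and the angle formula (I.10) can be tried out on the vectors $a$ and $b$ of Figure I.3. A Python sketch (an illustration, not part of the original text):

```python
import math

# Distance (I.9) and angle (I.10) between a = (6, 4) and b = (3, 7).
def norm(x):
    return math.sqrt(sum(xi ** 2 for xi in x))

def distance(x, y):
    return norm(tuple(xi - yi for xi, yi in zip(x, y)))

def cos_angle(x, y):
    # cos(phi) = sum_j x_j * y_j / (||x|| * ||y||)
    inner = sum(xi * yi for xi, yi in zip(x, y))
    return inner / (norm(x) * norm(y))

a, b = (6, 4), (3, 7)
print(distance(a, b))                            # sqrt(3^2 + 3^2) = sqrt(18)
print(math.degrees(math.acos(cos_angle(a, b))))  # roughly 33 degrees
```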
The two basic operations, addition and scalar multiplication, make a Euclidean space $\mathbb{R}^n$ a special case of a vector space:
Definition I.1: Let $V$ be a set endowed with two operations: the operation "addition," denoted by "+," which maps each pair $(x, y)$ in $V \times V$ into $V$, and the operation "scalar multiplication," denoted by a dot ($\cdot$), which maps each pair $(c, x)$ in $\mathbb{R} \times V$ into $V$. The set $V$ is called a vector space if the addition and multiplication operations involved satisfy the following rules for all $x$, $y$, and $z$ in $V$ and all scalars $c$, $c_1$, and $c_2$ in $\mathbb{R}$:
(a) $x + y = y + x$;
(b) $x + (y + z) = (x + y) + z$;
(c) There is a unique zero vector $0$ in $V$ such that $x + 0 = x$;
(d) For each $x$ there exists a unique vector $-x$ in $V$ such that $x + (-x) = 0$;
(e) $1 \cdot x = x$;
(f) $(c_1 \cdot c_2) \cdot x = c_1 \cdot (c_2 \cdot x)$;
(g) $c \cdot (x + y) = c \cdot x + c \cdot y$;
(h) $(c_1 + c_2) \cdot x = c_1 \cdot x + c_2 \cdot x$.
It is trivial to verify that, with addition "+" defined by (I.6) and scalar multiplication $c \cdot x$ defined by (I.4), the Euclidean space $\mathbb{R}^n$ is a vector space. However, the notion of a vector space is much more general. For example, let $V$ be the space of all continuous functions on $\mathbb{R}$ with pointwise addition and scalar multiplication defined the same way as for real numbers. Then it is easy to verify that this space is a vector space.
Another (but weird) example of a vector space is the space $V$ of positive real numbers endowed with the "addition" operation $x + y = x \cdot y$ and the "scalar multiplication" $c \cdot x = x^c$. In this case the null vector $0$ is the number $1$, and $-x = 1/x$.
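Some of the rules of Definition I.1 can be verified mechanically for this weird example. A Python sketch (an illustration, not part of the original text):

```python
import math

# The "weird" vector space of positive reals: "addition" is ordinary
# multiplication and "scalar multiplication" is exponentiation.
def v_add(x, y):
    return x * y

def v_scale(c, x):
    return x ** c

x, y, c = 2.0, 5.0, 3.0

# Rule (g) of Definition I.1: c . (x + y) = c . x + c . y,
# which here reads (x*y)^c = x^c * y^c.
assert math.isclose(v_scale(c, v_add(x, y)), v_add(v_scale(c, x), v_scale(c, y)))

# The null vector is the number 1, and -x is 1/x:
assert v_add(x, 1.0) == x
assert math.isclose(v_add(x, 1.0 / x), 1.0)
print("axioms check out")
```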
Definition I.2: A subspace $V_0$ of a vector space $V$ is a nonempty subset of $V$ that satisfies the following two requirements:
(a) For any pair $x, y$ in $V_0$, $x + y$ is in $V_0$;
(b) For any $x$ in $V_0$ and any scalar $c$, $c \cdot x$ is in $V_0$.
It is not hard to verify that a subspace of a vector space is a vector space itself because the rules (a) through (h) in Definition I.1 are inherited from the "host" vector space $V$. In particular, any subspace contains the null vector $0$, as follows from part (b) of Definition I.2 with $c = 0$. For example, the line through the origin and point $a$ in Figure I.1, extended indefinitely in both directions, is a subspace of $\mathbb{R}^2$. This subspace is said to be spanned by the vector $a$. More generally,
Definition I.3: Let $x_1, x_2, \ldots, x_n$ be vectors in a vector space $V$. The space $V_0$ spanned by $x_1, x_2, \ldots, x_n$ is the space of all linear combinations of $x_1, x_2, \ldots, x_n$; that is, each $y$ in $V_0$ can be written as $y = \sum_{j=1}^{n} c_j x_j$ for some coefficients $c_j$ in $\mathbb{R}$.
Clearly, this space $V_0$ is a subspace of $V$.
For example, the two vectors $a$ and $b$ in Figure I.3 span the whole Euclidean space $\mathbb{R}^2$ because any vector $x$ in $\mathbb{R}^2$ can be written as $x = c_1 a + c_2 b$, where

$$c_1 = \frac{7}{30} x_1 - \frac{1}{10} x_2, \qquad c_2 = -\frac{2}{15} x_1 + \frac{1}{5} x_2.$$
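These coefficients can be checked numerically for an arbitrary point. A Python sketch (the test point is an arbitrary choice, not from the original text):

```python
# Checking that c1*a + c2*b reproduces an arbitrary point x = (x1, x2),
# with a = (6, 4) and b = (3, 7) as in Figure I.3.
a, b = (6.0, 4.0), (3.0, 7.0)
x1, x2 = 5.0, -2.0                      # an arbitrary test point

c1 = (7 / 30) * x1 - (1 / 10) * x2
c2 = -(2 / 15) * x1 + (1 / 5) * x2

reconstructed = (c1 * a[0] + c2 * b[0], c1 * a[1] + c2 * b[1])
print(reconstructed)   # approximately (5.0, -2.0), i.e., x itself
```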
The same applies to the vectors $a$, $b$, and $c$ in Figure I.3: they also span the whole Euclidean space $\mathbb{R}^2$. However, in this case any pair of $a$, $b$, and $c$ does the same, and thus one of these three vectors is redundant because each of the vectors $a$, $b$, and $c$ can already be written as a linear combination of the other two. Such vectors are called linearly dependent:

Definition I.4: A set of vectors $x_1, x_2, \ldots, x_n$ in a vector space $V$ is linearly dependent if one or more of these vectors can be written as a linear combination of the other vectors, and the set is called linearly independent if none of them can be written as a linear combination of the other vectors. In particular, $x_1, x_2, \ldots, x_n$ are linearly independent if and only if $\sum_{j=1}^{n} c_j x_j = 0$ implies that $c_1 = c_2 = \cdots = c_n = 0$.
For example, the vectors $a$ and $b$ in Figure I.3 are linearly independent because, if not, then there would exist a scalar $c$ such that $a = c \cdot b$; hence, $6 = 3c$ and $4 = 7c$, which is impossible. A set of such linearly independent vectors is called a basis for the vector space they span:
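For two vectors in $\mathbb{R}^2$, independence can also be tested with a determinant, a standard criterion not spelled out in the text above. A Python sketch:

```python
# Independence test for two vectors in R^2: a and b are linearly dependent
# exactly when the determinant a1*b2 - a2*b1 vanishes (one is then a
# scalar multiple of the other).
a, b = (6, 4), (3, 7)
det = a[0] * b[1] - a[1] * b[0]
print(det)   # 6*7 - 4*3 = 30, nonzero, so a and b are linearly independent
```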
Definition I.5: A basis for a vector space is a set of vectors having the following two properties:
(a) They are linearly independent;
(b) They span the vector space involved.
We have seen that each of the subsets $\{a, b\}$, $\{a, c\}$, and $\{b, c\}$ of the set $\{a, b, c\}$ of vectors in Figure I.3 is linearly independent and spans the vector space $\mathbb{R}^2$. Thus, there are in general many bases for the same vector space, but what they have in common is their number. This number is called the dimension of $V$.
Definition I.6: The number of vectors that form a basis of a vector space is called the dimension of this vector space.
To show that this definition is unambiguous, let $\{x_1, x_2, \ldots, x_n\}$ and $\{y_1, y_2, \ldots, y_m\}$ be two different bases for the same vector space, and let $m = n + 1$. Each of the $y_i$'s can be written as a linear combination of $x_1, x_2, \ldots, x_n$: $y_i = \sum_{j=1}^{n} c_{i,j} x_j$. If $\{y_1, y_2, \ldots, y_{n+1}\}$ is linearly independent, then $\sum_{i=1}^{n+1} z_i y_i = \sum_{j=1}^{n} \left( \sum_{i=1}^{n+1} z_i c_{i,j} \right) x_j = 0$ if and only if $z_1 = \cdots = z_{n+1} = 0$. But because $\{x_1, x_2, \ldots, x_n\}$ is linearly independent we must also have that $\sum_{i=1}^{n+1} z_i c_{i,j} = 0$ for $j = 1, \ldots, n$. The latter is a system of $n$ linear equations in $n + 1$ unknown variables $z_i$ and therefore has a nontrivial solution in the sense that there exists a solution $z_1, \ldots, z_{n+1}$ such that at least one of the $z$'s is nonzero. Consequently, $\{y_1, y_2, \ldots, y_{n+1}\}$ cannot be linearly independent.
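The argument can be made concrete in $\mathbb{R}^2$: any three vectors there must be linearly dependent, and for the vectors of Figure I.3 a nontrivial vanishing combination is explicit. A Python sketch (an illustration, not part of the original text):

```python
# The vectors a, b, and c = a + b from Figure I.3 are linearly dependent:
# the nontrivial combination 1*a + 1*b + (-1)*c is the zero vector.
a, b, c = (6, 4), (3, 7), (9, 11)
combo = tuple(1 * ai + 1 * bi - 1 * ci for ai, bi, ci in zip(a, b, c))
print(combo)   # (0, 0)
```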
Note that in principle the dimension of a vector space can be infinite. For example, consider the space $\mathbb{R}^\infty$ of all countably infinite sequences $x = (x_1, x_2, x_3, \ldots)$ of real numbers endowed with the addition operation

$$x + y = (x_1, x_2, x_3, \ldots) + (y_1, y_2, y_3, \ldots) = (x_1 + y_1, x_2 + y_2, x_3 + y_3, \ldots)$$

and the scalar multiplication operation

$$c \cdot x = (c \cdot x_1, c \cdot x_2, c \cdot x_3, \ldots).$$

Let $y_i$ be a countably infinite sequence of zeros except for the $i$th element in this sequence, which is equal to $1$. Thus, $y_1 = (1, 0, 0, \ldots)$, $y_2 = (0, 1, 0, \ldots)$, and so on. Then $\{y_1, y_2, y_3, \ldots\}$ is a basis for $\mathbb{R}^\infty$ with dimension $\infty$. Also in this case there are many different bases; for example, another basis for $\mathbb{R}^\infty$ is $y_1 = (1, 0, 0, 0, \ldots)$, $y_2 = (1, 1, 0, 0, \ldots)$, $y_3 = (1, 1, 1, 0, \ldots)$, and so on.
In Figure I.3 the location of point $c$ can be determined by moving nine units away from the origin along the horizontal axis 1 and then moving eleven units away from axis 1 parallel to the vertical axis 2. However, given the vectors $a$ and $b$, an alternative way of determining the location of point $c$ is to move $\|a\|$ units away from the origin along the line through the origin and point $a$ (the subspace spanned by $a$) and then move $\|b\|$ units away parallel to the line through the origin and point $b$ (the subspace spanned by $b$). Moreover, if we take $\|a\|$ as the new distance unit along the subspace spanned by $a$, and $\|b\|$ as the new distance unit along the subspace spanned by $b$, then point $c$ can be located by moving one (new) unit away from the origin along the new axis 1 formed by the subspace spanned by $a$ and then moving one (new) unit away from this new axis 1 parallel to the subspace spanned by $b$ (which is now the new axis 2). We may interpret this as moving the point $\binom{1}{1}$ to a new location: point $c$. This is precisely what a matrix does: moving points to a new location by changing the coordinate system. In particular, the matrix

$$A = \begin{pmatrix} 6 & 3 \\ 4 & 7 \end{pmatrix} \tag{I.11}$$

moves any point

$$x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \tag{I.12}$$

to a new location by changing the original perpendicular coordinate system to a new coordinate system in which the new axis 1 is the subspace spanned by the first column, $a$, of the matrix $A$ with new unit distance the length of $a$, and
This matrix $C$ is called the product of the matrices $B$ and $A$ and is denoted by $BA$. Thus, with $A$ given by (I.14) and $B$ given by (I.16),

$$BA = \begin{pmatrix} b_{1,1} & \cdots & b_{1,m} \\ \vdots & & \vdots \\ b_{k,1} & \cdots & b_{k,m} \end{pmatrix} \begin{pmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{pmatrix} = \begin{pmatrix} \sum_{s=1}^{m} b_{1,s} a_{s,1} & \cdots & \sum_{s=1}^{m} b_{1,s} a_{s,n} \\ \vdots & & \vdots \\ \sum_{s=1}^{m} b_{k,s} a_{s,1} & \cdots & \sum_{s=1}^{m} b_{k,s} a_{s,n} \end{pmatrix},$$

which is a $k \times n$ matrix. Note that the matrix $BA$ only exists if the number of columns of $B$ is equal to the number of rows of $A$. Such matrices are described as being conformable. Moreover, note that even if $A$ and $B$ are also conformable, so that $AB$ is defined,[23] the commutative law does not hold; that is, in general $AB \neq BA$. However, the associative law $(AB)C = A(BC)$ does hold, as is easy to verify.
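The product formula, the failure of commutativity, and the associative law can all be checked directly. A Python sketch (the particular matrices $B$ and $C$ are arbitrary choices for illustration):

```python
# Matrix multiplication exactly as in the displayed formula: element (i, j)
# of BA is the sum over s of b[i][s] * a[s][j].
def matmul(B, A):
    k, m, n = len(B), len(A), len(A[0])
    assert len(B[0]) == m, "B's columns must equal A's rows (conformable)"
    return [[sum(B[i][s] * A[s][j] for s in range(m)) for j in range(n)]
            for i in range(k)]

A = [[6, 3], [4, 7]]
B = [[0, 1], [2, 0]]
C = [[1, -1], [0, 2]]

print(matmul(A, B))   # [[6, 6], [14, 4]]
print(matmul(B, A))   # [[4, 7], [12, 6]]  -- so AB != BA here

# The associative law does hold:
print(matmul(matmul(A, B), C) == matmul(A, matmul(B, C)))   # True
```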
Let $A$ be the $m \times n$ matrix (I.14), and now let $B$ be another $m \times n$ matrix:

$$B = \begin{pmatrix} b_{1,1} & \cdots & b_{1,n} \\ \vdots & & \vdots \\ b_{m,1} & \cdots & b_{m,n} \end{pmatrix}.$$

As argued before, $A$ maps a point $x \in \mathbb{R}^n$ to a point $y = Ax \in \mathbb{R}^m$, and $B$ maps $x$ to a point $z = Bx \in \mathbb{R}^m$. It is easy to verify that $y + z = Ax + Bx = (A + B)x = Cx$, for example, where $C = A + B$ is the $m \times n$ matrix formed by adding up the corresponding elements of $A$ and $B$:

$$A + B = \begin{pmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{pmatrix} + \begin{pmatrix} b_{1,1} & \cdots & b_{1,n} \\ \vdots & & \vdots \\ b_{m,1} & \cdots & b_{m,n} \end{pmatrix} = \begin{pmatrix} a_{1,1} + b_{1,1} & \cdots & a_{1,n} + b_{1,n} \\ \vdots & & \vdots \\ a_{m,1} + b_{m,1} & \cdots & a_{m,n} + b_{m,n} \end{pmatrix}.$$

Thus, conformable matrices are added up by adding up the corresponding elements.
Moreover, for any scalar $c$ we have $A(c \cdot x) = c \cdot (Ax) = (c \cdot A)x$, where $c \cdot A$ is the matrix formed by multiplying each element of $A$ by the scalar $c$:

$$c \cdot A = c \cdot \begin{pmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{pmatrix} = \begin{pmatrix} c \cdot a_{1,1} & \cdots & c \cdot a_{1,n} \\ \vdots & & \vdots \\ c \cdot a_{m,1} & \cdots & c \cdot a_{m,n} \end{pmatrix}.$$
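The identities $A(c \cdot x) = c \cdot (Ax) = (c \cdot A)x$ and $(A + B)x = Ax + Bx$ can be verified elementwise. A Python sketch (the particular matrices and vector are arbitrary illustrative choices):

```python
# Verifying A(c*x) = c*(Ax) = (c*A)x and (A + B)x = Ax + Bx
# for small 2 x 2 examples.
def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [1.0, 0.0]]
x = [5.0, -1.0]
c = 2.5

cA = [[c * e for e in row] for row in A]                             # c . A
ApB = [[ae + be for ae, be in zip(ra, rb)] for ra, rb in zip(A, B)]  # A + B

assert matvec(A, [c * xi for xi in x]) == [c * yi for yi in matvec(A, x)]
assert matvec(cA, x) == [c * yi for yi in matvec(A, x)]
assert matvec(ApB, x) == [ya + yb for ya, yb in zip(matvec(A, x), matvec(B, x))]
print("linearity verified")
```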
Now with addition and scalar multiplication defined in this way, it is easy to verify that all the conditions in Definition I.1 hold for matrices as well (i.e., the set of all $m \times n$ matrices is a vector space). In particular, the "zero" element involved is the $m \times n$ matrix with all elements equal to zero:

$$O = \begin{pmatrix} 0 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 0 \end{pmatrix}.$$

Zero matrices are usually denoted by $O$ only, without subscripts indicating the size.