Chapter 4 Chapter Content
1. Real Vector Spaces
2. Subspaces
3. Linear Independence
4. Basis
5. Dimension
6. Row Space, Column Space, and Nullspace
7. Rank and Nullity
8. Matrix Transformations from Rn to Rm
Definition (Vector Space): Let V be an arbitrary nonempty set of objects on which two operations are defined: addition, and multiplication by scalars.
If the following axioms are satisfied by all objects u, v, w in V and all scalars k and m, then we call V a vector space and we call the objects in V vectors.
1. If u and v are objects in V, then u + v is in V.
2. u + v = v + u
3. u + (v + w) = (u + v) + w
4. There is an object 0 in V, called a zero vector for V, such that 0 + u = u + 0 = u for all u in V.
5. For each u in V, there is an object -u in V, called a negative of u, such that u + (-u) = (-u) + u = 0.
6. If k is any scalar and u is any object in V, then ku is in V.
7. k(u + v) = ku + kv
8. (k + m)u = ku + mu
9. k(mu) = (km)u
10. 1u = u
To Show that a Set with Two Operations is a Vector Space
1. Identify the set V of objects that will become vectors.
2. Identify the addition and scalar multiplication operations on V.
3. Verify Axioms 1 (closure under addition) and 6 (closure under scalar multiplication);
that is, adding two vectors in V produces a vector in V, and multiplying a vector in V by a scalar also produces a vector in V.
4. Confirm that Axioms 2,3,4,5,7,8,9 and 10 hold.
Remarks
• Depending on the application, scalars may be real numbers or complex numbers.
• Vector spaces in which the scalars are complex numbers are called complex vector spaces, and those in which the scalars must be real are called real vector spaces.
• The definition of a vector space specifies neither the nature of the vectors nor the operations.
• Any kind of object can be a vector, and the operations of addition and scalar multiplication may not have any relationship or similarity to the standard vector operations on Rn.
• The only requirement is that the ten vector space axioms be satisfied.
The Zero Vector Space
Let V consist of a single object, which we denote by 0, and define
0 + 0 = 0 and k 0 = 0 for all scalars k.
It’s easy to check that all the vector space axioms are satisfied.
We call this the zero vector space.
Example (Rn Is a Vector Space)
The set V = Rn with the standard operations of addition and scalar multiplication is a vector space.
(Axioms 1 and 6 follow from the definitions of the standard operations on Rn; the remaining axioms follow from the properties of vector arithmetic in Rn.)
The three most important special cases of Rn are R (the real numbers), R2 (the vectors in the plane), and R3 (the vectors in 3-space).
Example (2×2 Matrices)
Show that the set V of all 2×2 matrices with real entries is a vector space if vector addition is defined to be matrix addition and vector scalar multiplication is defined to be matrix scalar multiplication.
Solution: Let u = [u11 u12; u21 u22] and v = [v11 v12; v21 v22] be 2×2 matrices.
(1) We must show that u + v is a 2×2 matrix:
u + v = [u11+v11 u12+v12; u21+v21 u22+v22], which is again a 2×2 matrix.
(2) u + v = [u11+v11 u12+v12; u21+v21 u22+v22] = [v11+u11 v12+u12; v21+u21 v22+u22] = v + u.
(3) Similarly, we can show that u + (v + w) = (u + v) + w.
(4) Define 0 = [0 0; 0 0]; then 0 + u = u + 0 = u.
(5) Define the negative of u to be -u = [-u11 -u12; -u21 -u22]; then u + (-u) = (-u) + u = 0.
(6) If k is any scalar and u is a 2×2 matrix, then ku = [ku11 ku12; ku21 ku22] is a 2×2 matrix.
(7)-(9) follow by a similar approach.
(10) 1u = [1·u11 1·u12; 1·u21 1·u22] = [u11 u12; u21 u22] = u.
Thus, the set V of all 2×2 matrices with real entries is a vector space.
Example: Consider the set of all triples of real numbers (x, y, z) with the operations
(x, y, z) + (x′, y′, z′) = (x + x′, y + y′, z + z′)  and  k(x, y, z) = (kx, y, z)
Determine whether it is a vector space under these operations.
Solution: We must check all ten axioms:
(1) If (x, y, z) and (x′, y′, z′) are triples of real numbers, so is (x, y, z) + (x′, y′, z′) = (x + x′, y + y′, z + z′).
(2) (x, y, z) + (x′, y′, z′) = (x + x′, y + y′, z + z′) = (x′, y′, z′) + (x, y, z).
(3) (x, y, z) + [(x′, y′, z′) + (x″, y″, z″)] = [(x, y, z) + (x′, y′, z′)] + (x″, y″, z″).
(4) The object 0 = (0, 0, 0) satisfies (0, 0, 0) + (x, y, z) = (x, y, z) + (0, 0, 0) = (x, y, z).
(5) For each triple (x, y, z), the triple (-x, -y, -z) acts as the negative: (x, y, z) + (-x, -y, -z) = (-x, -y, -z) + (x, y, z) = (0, 0, 0).
(6) If k is real and (x, y, z) is a triple of real numbers, then k(x, y, z) = (kx, y, z) is again a triple of real numbers.
(7) k[(x, y, z) + (x′, y′, z′)] = (k(x + x′), y + y′, z + z′) = k(x, y, z) + k(x′, y′, z′).
(8) (k + m)(x, y, z) = ((k + m)x, y, z), but k(x, y, z) + m(x, y, z) = (kx, y, z) + (mx, y, z) = ((k + m)x, 2y, 2z), so in general the two sides differ.
Axiom (8) fails.
Thus, the set of all triples of real numbers (x, y, z) is NOT a vector space under the given operations.
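Axiom 8's failure is easy to confirm numerically. The following Python sketch (an addition, not part of the original slides; the helper names add and scale are our own) evaluates both sides of Axiom 8 for one concrete choice of k, m, and u:

```python
# Check Axiom 8 for the nonstandard scalar multiplication k(x, y, z) = (kx, y, z).

def add(u, v):
    # Standard addition of triples.
    return (u[0] + v[0], u[1] + v[1], u[2] + v[2])

def scale(k, u):
    # Nonstandard scalar multiplication: only the first coordinate is scaled.
    return (k * u[0], u[1], u[2])

u = (1.0, 2.0, 3.0)
k, m = 2.0, 5.0

lhs = scale(k + m, u)                 # (k + m)u = (7.0, 2.0, 3.0)
rhs = add(scale(k, u), scale(m, u))   # ku + mu  = (7.0, 4.0, 6.0)
print(lhs, rhs, lhs == rhs)           # the sides disagree, so Axiom 8 fails
```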
Example.
Let V = R2 and define addition and scalar multiplication operations as follows: If u = (u1, u2) and v = (v1, v2), then define
u + v = (u1 + v1, u2 + v2)
and if k is any real number, then define
k u = (k u1, 0)
There are values of u for which Axiom 10 fails to hold. For example, if u = (u1, u2) is such that u2 ≠ 0, then
1u = 1(u1, u2) = (1·u1, 0) = (u1, 0) ≠ u
Thus, V is not a vector space with the stated operations.
Theorem 4.1.1
Let V be a vector space, u a vector in V, and k a scalar; then:
(a) 0u = 0    (b) k0 = 0
(c) (-1)u = -u    (d) If ku = 0, then k = 0 or u = 0.
4.2 Subspaces
Definition: A subset W of a vector space V is called a subspace of V if W is itself a vector space under the addition and scalar multiplication defined on V.
Theorem 4.2.1
If W is a set of one or more vectors from a vector space V, then W is a subspace of V if and only if the following conditions hold:
a) If u and v are vectors in W, then u + v is in W.
b) If k is any scalar and u is any vector in W, then ku is in W.
Remark: Theorem 4.2.1 states that W is a subspace of V if and only if W is closed under addition (condition (a)) and closed under scalar multiplication (condition (b)).
Example: The set of all vectors of the form (a, 0, 0) is a subspace of R3.
• The set is closed under vector addition because
(a, 0, 0) + (b, 0, 0) = (a + b, 0, 0)
• It is closed under scalar multiplication because
k(a, 0, 0) = (ka, 0, 0)
Therefore it is a subspace of R3.
Example (Not a Subspace)
Let W be the set of all points (x, y) in R2 such that x ≥ 0 and y ≥ 0. These are the points in the first quadrant.
The set W is not a subspace of R2 since it is not closed under scalar multiplication.
For example, v = (1, 1) lies in W, but its negative (-1)v = -v = (-1, -1) does not.
Subspaces of Mnn
The set of n×n diagonal matrices forms a subspace of Mnn, since it is closed under addition and scalar multiplication.
The set of n×n matrices with integer entries, however, is NOT a subspace of the vector space Mnn of n×n matrices.
This set is closed under addition, since the sum of two integers is again an integer.
But it is not closed under scalar multiplication, since the product ku, where k is real and the entries of u are integers, need not have integer entries.
Thus, the set is not a subspace.
Linear Combination
Definition: A vector w is a linear combination of the vectors v1, v2, …, vr if it can be expressed in the form
w = k1v1 + k2v2 + · · · + kr vr
where k1, k2, …, kr are scalars.
Example (Vectors in R3 are linear combinations of i, j, and k): Every vector v = (a, b, c) in R3 is expressible as a linear combination of the standard basis vectors
i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)
since v = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) = a i + b j + c k
Example: Consider the vectors u = (1, 2, -1) and v = (6, 4, 2) in R3. Show that w = (9, 2, 7) is a linear combination of u and v and that w′ = (4, -1, 8) is not a linear combination of u and v.
Solution. In order for w to be a linear combination of u and v, there must be scalars k1 and k2 such that w = k1u + k2v; that is,
(9, 2, 7) = k1(1, 2, -1) + k2(6, 4, 2) = (k1 + 6k2, 2k1 + 4k2, -k1 + 2k2)
Equating corresponding components gives
k1 + 6k2 = 9
2k1 + 4k2 = 2
-k1 + 2k2 = 7
Solving this system yields k1 = -3, k2 = 2, so
w = -3u + 2v
Similarly, for w′ to be a linear combination of u and v, there must be scalars k1 and k2 such that w′ = k1u + k2v;
(4, -1, 8) = k1(1, 2, -1) + k2(6, 4, 2)
or
(4, -1, 8) = (k1 + 6k2, 2k1 + 4k2, -k1 + 2k2)
Equating corresponding components gives
k1 + 6k2 = 4
2k1 + 4k2 = -1
-k1 + 2k2 = 8
This system of equations is inconsistent, so no such scalars k1 and k2 exist. Consequently, w′ is not a linear combination of u and v.
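For readers following along in code, here is a NumPy sketch (our addition, not from the slides) that tests whether each target vector lies in span{u, v} by checking whether the corresponding linear system has an exact solution:

```python
import numpy as np

# u and v as the columns of M; w is in span{u, v} iff M k = w is consistent.
M = np.column_stack([(1, 2, -1), (6, 4, 2)])

for target in [(9, 2, 7), (4, -1, 8)]:
    w = np.array(target, dtype=float)
    k, *_ = np.linalg.lstsq(M, w, rcond=None)   # best least-squares coefficients
    print(target, np.round(k, 6), "in span:", bool(np.allclose(M @ k, w)))
# (9, 2, 7)  [-3.  2.]  in span: True   -> w = -3u + 2v
# (4, -1, 8)            in span: False  -> w' is not a linear combination
```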
Linear Combination and Spanning
Theorem 4.2.3
If v1, v2, …, vr are vectors in a vector space V, then:
(a) The set W of all linear combinations of v1, v2, …, vr is a subspace of V.
(b) W is the smallest subspace of V that contains v1, v2, …, vr, in the sense that every other subspace of V containing v1, v2, …, vr must contain W.
Example
If v1 and v2 are non-collinear vectors in R3 with their initial points at the origin, then span{v1, v2}, which consists of all linear combinations k1v1 + k2v2, is the plane determined by v1 and v2.
Similarly, if v is a nonzero vector in R2 or R3, then span{v}, which is the set of all scalar multiples kv, is the line determined by v.
Example
Determine whether v1 = (1, 1, 2), v2 = (1, 0, 1), and v3 = (2, 1, 3) span the vector space R3.
Solution: We must determine whether an arbitrary vector b = (b1, b2, b3) in R3 can be expressed as a linear combination b = k1v1 + k2v2 + k3v3:
b = (b1, b2, b3) = k1(1, 1, 2) + k2(1, 0, 1) + k3(2, 1, 3) = (k1 + k2 + 2k3, k1 + k3, 2k1 + k2 + 3k3)
Or
k1 + k2 + 2k3 = b1
k1 + k3 = b2
2k1 + k2 + 3 k3 = b3
This system is consistent for all values of b1, b2, and b3 if and only if the coefficient matrix A has a nonzero determinant.
However, det(A) = 0, so v1, v2, and v3 do not span R3.
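As a quick numerical check of this determinant argument (a sketch added here, not part of the slides), note that the coefficient matrix is exactly the matrix with v1, v2, v3 as its columns:

```python
import numpy as np

A = np.column_stack([(1, 1, 2), (1, 0, 1), (2, 1, 3)])   # columns v1, v2, v3
print(np.linalg.det(A))   # 0.0 up to round-off, so v1, v2, v3 do not span R^3
```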
Solution Space
Solution Space of Homogeneous Systems: If Ax = b is a system of linear equations, then each vector x that satisfies this equation is called a solution vector of the system.
Theorem 4.2.2
If Ax = 0 is a homogeneous linear system of m equations in n unknowns, then the set of solution vectors is a subspace of Rn.
Remark: Theorem 4.2.2 shows that the solution vectors of a homogeneous linear system form a vector space, which we shall call the solution space of the system.
Theorem 4.2.5
If S = {v1, v2, …, vr} and S′ = {w1, w2, …, wr} are two sets of vectors in a vector space V, then
span{v1, v2, …, vr} = span{w1, w2, …, wr}
if and only if
each vector in S is a linear combination of those in S′ and each vector in S′ is a linear combination of those in S.
4.3 Linear Independence
Definition: If S = {v1, v2, …, vr} is a nonempty set of vectors, then the vector equation
k1v1 + k2v2 + … + krvr = 0
has at least one solution, namely
k1 = 0, k2 = 0, …, kr = 0.
If this is the only solution, then S is called a linearly independent set. If there are other solutions, then S is called a linearly dependent set.
Example: Given v1 = (2, -1, 0, 3), v2 = (1, 2, 5, -1), and v3 = (7, -1, 5, 8), the set of vectors S = {v1, v2, v3} is linearly dependent, since 3v1 + v2 - v3 = 0.
Example: Let i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1) in R3. Determine whether {i, j, k} is a linearly independent set.
Solution: Consider the equation k1i + k2j + k3k = 0 ⇒ k1(1, 0, 0) + k2(0, 1, 0) + k3(0, 0, 1) = (0, 0, 0) ⇒ (k1, k2, k3) = (0, 0, 0) ⇒ The set S = {i, j, k} is linearly independent.
Similarly the vectors
e1 = (1, 0, 0, …,0), e2 = (0, 1, 0, …, 0), …, en = (0, 0, 0, …, 1)
form a linearly independent set in Rn.
Remark: To check whether a set of vectors is linearly independent, set a linear combination of the vectors equal to the zero vector and determine whether the only solution is the one in which all coefficients are zero.
Example: Determine whether the vectors v1 = (1, -2, 3), v2 = (5, 6, -1), v3 = (3, 2, 1) form a linearly dependent set or a linearly independent set.
Solution: Consider the vector equation k1v1 + k2v2 + k3v3 = 0
⇒ k1(1, -2, 3) + k2(5, 6, -1) + k3(3, 2, 1) = (0, 0, 0)
⇒ k1 + 5k2 + 3k3 = 0
   -2k1 + 6k2 + 2k3 = 0
   3k1 - k2 + k3 = 0
⇒ the coefficient matrix A satisfies det(A) = 0
⇒ The system has nontrivial solutions
⇒ v1, v2, and v3 form a linearly dependent set
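Beyond confirming det(A) = 0, SymPy can recover an explicit dependence among the vectors (this snippet is our addition, not part of the slides):

```python
import sympy as sp

# Columns are v1, v2, v3; a nullspace vector (k1, k2, k3) gives a dependence
# k1*v1 + k2*v2 + k3*v3 = 0.
A = sp.Matrix([[1, 5, 3], [-2, 6, 2], [3, -1, 1]])
print(A.det())        # 0, so a nontrivial dependence exists
print(A.nullspace())  # [Matrix([[-1/2], [-1/2], [1]])] -> v3 = (v1 + v2)/2
```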
Theorems
Theorem 4.3.1
A set S with two or more vectors is:
(a) Linearly dependent if and only if at least one of the vectors in S is expressible as a linear combination of the other vectors in S.
(b) Linearly independent if and only if no vector in S is expressible as a linear combination of the other vectors in S.
Theorem 4.3.2
(a) A finite set of vectors that contains the zero vector is linearly dependent.
(b) A set with exactly one vector is linearly independent if and only if that vector is not the zero vector.
(c) A set with exactly two vectors is linearly independent if and only if neither vector is a scalar multiple of the other.
Theorem 4.3.3
Let S = {v1, v2, …, vr} be a set of vectors in Rn. If r > n, then S is linearly dependent.
Geometric Interpretation of Linear Independence
In R2 and R3, a set of two vectors is linearly independent if and only if the vectors do not lie on the same line when they are placed with their initial points at the origin.
In R3, a set of three vectors is linearly independent if and only if the vectors do not lie in the same plane when they are placed with their initial points at the origin.
Section 4.4 Coordinates and Basis
Definition: If V is any vector space and S = {v1, v2, …, vn} is a set of vectors in V, then S is
called a basis for V if the following two conditions hold:
(a) S is linearly independent.
(b) S spans V.
Theorem 4.4.1 (Uniqueness of Basis Representation)
If S = {v1, v2, …, vn} is a basis for a vector space V, then every vector v in V can
be expressed in the form v = c1v1 + c2v2 + … + cnvn in exactly one way.
Coordinates Relative to a Basis
If S = {v1, v2, …, vn} is a basis for a vector space V, and
v = c1v1 + c2v2 + ··· + cnvn
is the expression for a vector v in terms of the basis S,
then the scalars c1, c2, …, cn, are called the coordinates of v relative to the basis S.
The vector (c1, c2, …, cn) in Rn constructed from these coordinates is called the coordinate vector of v relative to S; it is denoted by
(v)S = (c1, c2, …, cn)
Remark: Coordinate vectors depend not only on the basis S but also on the order in which the basis vectors are written. A change in the order of the basis vectors results in a corresponding change of order for the entries in the coordinate vector.
Standard Basis for R3
Suppose that i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1), then S = {i, j, k} is a linearly independent set in R3. This set also spans R3 since any vector v = (a, b, c) in R3 can be written as
v = (a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) = ai + bj + ck
Thus, S is a basis for R3; it is called the standard basis for R3.
Looking at the coefficients of i, j, and k, it follows that the coordinates of v relative to the standard basis are a, b, and c, so
(v)S = (a, b, c)
Comparing this result to v = (a, b, c), we have
v = (v)S
Standard Basis for Rn
If e1 = (1, 0, 0, …, 0), e2 = (0, 1, 0, …, 0), …, en = (0, 0, 0, …, 1), then S = {e1, e2, …, en} is a linearly independent set in Rn. This set also spans Rn
since any vector v = (v1, v2, …, vn) in Rn can be written as v = v1e1 + v2e2 + … + vnen
Thus, S is a basis for Rn; it is called the standard basis for Rn.
The coordinates of v = (v1, v2, …, vn) relative to the standard basis are v1 ,v2, …, vn, thus
(v)S = (v1, v2, …, vn)
As in the previous example, v = (v)S, so a vector v and its coordinate vector relative to the standard basis for Rn are the same.
Example
Let v1 = (1, 2, 1), v2 = (2, 9, 0), and v3 = (3, 3, 4). Show that the set S = {v1, v2, v3} is a basis for R3.
Solution: To show that the set S spans R3, we must show that an arbitrary vector b = (b1, b2, b3) can be expressed as a linear combination
b = c1v1 + c2v2 + c3v3
of the vectors in S.
Let (b1, b2, b3) = c1(1, 2, 1) + c2(2, 9, 0) + c3(3, 3, 4). Equating components gives
c1 + 2c2 + 3c3 = b1
2c1 + 9c2 + 3c3 = b2
c1 + 4c3 = b3
Let A be the coefficient matrix
A =
[1  2  3]
[2  9  3]
[1  0  4]
Then det(A) = -1 ≠ 0, so S spans R3.
To show that the set S is linearly independent, we must show that the only solution of c1v1 + c2v2 + c3v3 = 0 is the trivial solution:
c1 + 2c2 + 3c3 = 0
2c1 + 9c2 + 3c3 = 0
c1 + 4c3 = 0
This system has the same coefficient matrix A, and det(A) = -1 ≠ 0, so S is linearly independent.
Therefore S is a basis for R3.
Example
Let v1 = (1, 2, 1), v2 = (2, 9, 0), and v3 = (3, 3, 4), and S = {v1, v2, v3} be the basis for R3 in the preceding example.
(a) Find the coordinate vector of v = (5, -1, 9) with respect to S.
(b) Find the vector v in R3 whose coordinate vector with respect to the basis S is (v)S = (-1, 3, 2).
Solution (a): We must find scalars c1, c2, c3 such that v = c1v1 + c2v2 + c3v3, or, in terms of components,
(5, -1, 9) = c1(1, 2, 1) + c2(2, 9, 0) + c3(3, 3, 4)
Equating components gives
c1 + 2c2 + 3c3 = 5
2c1 + 9c2 + 3c3 = -1
c1 + 4c3 = 9
Solving this system, we obtain c1 = 1, c2 = -1, c3 = 2. Therefore, (v)S = (1, -1, 2).
Solution (b): Using the definition of the coordinate vector (v)S, we obtain
v = (-1)v1 + 3v2 + 2v3 = (11, 31, 7).
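Both parts of this example reduce to one call to a linear solver. Here is a NumPy sketch (our addition, not from the slides) in which the hypothetical matrix P has the basis vectors as its columns:

```python
import numpy as np

# Basis vectors as columns: solving P c = v gives the coordinate vector (v)_S,
# while P @ c recovers v from its coordinates.
P = np.column_stack([(1, 2, 1), (2, 9, 0), (3, 3, 4)])

c = np.linalg.solve(P, np.array([5.0, -1.0, 9.0]))
print(np.round(c, 6))              # [ 1. -1.  2.] -> (v)_S = (1, -1, 2)

v = P @ np.array([-1.0, 3.0, 2.0])
print(v)                           # [11. 31.  7.] -> v = (11, 31, 7)
```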
Finite-Dimensional
Definition: A nonzero vector space V is called finite-dimensional if it contains a finite set of
vectors {v1, v2, …, vn} that forms a basis.
If no such set exists, V is called infinite-dimensional. In addition, we shall regard the zero vector space as finite-dimensional.
Example: The vector space Rn is finite-dimensional.
4.5 Dimension
Theorem 4.5.2
Let V be a finite-dimensional vector space and {v1, v2, …, vn} any basis.
(a) If a set has more than n vectors, then it is linearly dependent.
(b) If a set has fewer than n vectors, then it does not span V.
This theorem can be used to prove the following.
Theorem 4.5.1
All bases for a finite-dimensional vector space have the same number of
vectors.
Dimension
Definition: The dimension of a finite-dimensional vector space V, denoted by dim(V), is
defined to be the number of vectors in a basis for V. In addition, we define the zero vector space to have dimension zero.
Example: dim(Rn) = n [The standard basis has n vectors]
dim(Mmn) = mn [The standard basis has mn vectors]
Example
Determine a basis for and the dimension of the solution space of the homogeneous system
2x1 + 2x2 - x3 + x5 = 0
-x1 - x2 + 2x3 - 3x4 + x5 = 0
x1 + x2 - 2x3 - x5 = 0
x3 + x4 + x5 = 0
Solution: By Gauss-Jordan elimination, the augmented matrix of the system
[ 2   2  -1   0   1 | 0]
[-1  -1   2  -3   1 | 0]
[ 1   1  -2   0  -1 | 0]
[ 0   0   1   1   1 | 0]
reduces to
[1  1  0  0  1 | 0]
[0  0  1  0  1 | 0]
[0  0  0  1  0 | 0]
[0  0  0  0  0 | 0]
Thus x1 + x2 + x5 = 0, x3 + x5 = 0, x4 = 0. Solving for the leading variables yields the general solution of the given system:
x1 = -s - t, x2 = s, x3 = -t, x4 = 0, x5 = t
Solution (continued)
Therefore, the solution vectors can be written as
[x1]   [-s - t]       [-1]     [-1]
[x2]   [  s   ]       [ 1]     [ 0]
[x3] = [ -t   ]  =  s [ 0] + t [-1]
[x4]   [  0   ]       [ 0]     [ 0]
[x5]   [  t   ]       [ 0]     [ 1]
which shows that the vectors
v1 = (-1, 1, 0, 0, 0) and v2 = (-1, 0, -1, 0, 1)
span the solution space.
Since they are also linearly independent, {v1, v2} is a basis, and the solution space is two-dimensional.
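The same basis can be produced directly with SymPy's exact nullspace routine (a verification sketch added here, not in the original slides):

```python
import sympy as sp

A = sp.Matrix([
    [ 2,  2, -1,  0,  1],
    [-1, -1,  2, -3,  1],
    [ 1,  1, -2,  0, -1],
    [ 0,  0,  1,  1,  1],
])
basis = A.nullspace()   # basis for the solution space of Ax = 0
for v in basis:
    print(v.T)          # [-1, 1, 0, 0, 0] and [-1, 0, -1, 0, 1]
print(len(basis))       # 2 -> the solution space is two-dimensional
```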
Some Fundamental Theorems
Theorem 4.5.3 (Plus/Minus Theorem)
Let S be a nonempty set of vectors in a vector space V.
(a) If S is a linearly independent set, and if v is a vector in V that is outside of span(S), then the set S ∪ {v} that results by inserting v into S is still linearly independent.
(b) If v is a vector in S that is expressible as a linear combination of other vectors in S, and if S - {v} denotes the set obtained by removing v from S, then S and S - {v} span the same space; that is, span(S) = span(S - {v}).
Theorem 4.5.4
If V is an n-dimensional vector space, and if S is a set in V with exactly n vectors, then S is a basis for V if either S spans V or S is linearly independent.
Theorems
Theorem 4.5.5
Let S be a finite set of vectors in a finite-dimensional vector space V.
(a) If S spans V but is not a basis for V, then S can be reduced to a basis for V by removing appropriate vectors from S.
(b) If S is a linearly independent set that is not already a basis for V, then S can be enlarged to a basis for V by inserting appropriate vectors into S.
Theorem 4.5.6
If W is a subspace of a finite-dimensional vector space V, then
(a) W is finite-dimensional.
(b) dim(W) ≤ dim(V);
(c) W = V if and only if dim(W) = dim(V).
Section 4.7 Row Space, Column Space, and Nullspace
Definition. For an m×n matrix
A =
[a11  a12  ...  a1n]
[a21  a22  ...  a2n]
[ .    .    .    . ]
[am1  am2  ...  amn]
the vectors
r1 = (a11, a12, ..., a1n)
r2 = (a21, a22, ..., a2n)
...
rm = (am1, am2, ..., amn)
in Rn formed from the rows of A are called the row vectors of A,
Row Vectors and Column Vectors
and the vectors
     [a11]        [a12]              [a1n]
c1 = [a21],  c2 = [a22],  ...,  cn = [a2n]
     [ . ]        [ . ]              [ . ]
     [am1]        [am2]              [amn]
in Rm formed from the columns of A are called the column vectors of A.
Nullspace
Theorem
Elementary row operations do not change the nullspace of a matrix.
Example
Find a basis for the nullspace of
A =
[ 2   2  -1   0   1]
[-1  -1   2  -3   1]
[ 1   1  -2   0  -1]
[ 0   0   1   1   1]
Solution: The nullspace of A is the solution space of the homogeneous system Ax = 0:
2x1 + 2x2 - x3 + x5 = 0
-x1 - x2 + 2x3 - 3x4 + x5 = 0
x1 + x2 - 2x3 - x5 = 0
x3 + x4 + x5 = 0
Nullspace Cont.
Then by the previous example, we know that the vectors
v1 = (-1, 1, 0, 0, 0) and v2 = (-1, 0, -1, 0, 1)
form a basis for this space.
Theorems
Theorem
Elementary row operations do not change the row space of a matrix.
Note: Elementary row operations CAN change the column space of a matrix.
However, we have the following theorem.
Theorem
If A and B are row equivalent matrices, then
(a) A given set of column vectors of A is linearly independent if and only if the corresponding column vectors of B are linearly independent.
(b) A given set of column vectors of A forms a basis for the column space of A if and only if the corresponding column vectors of B form a basis for the column space of B.
Theorems Cont.
Theorem
If a matrix R is in row-echelon form, then the row vectors with the leading 1's (the nonzero row vectors) form a basis for the row space of R, and the column vectors containing the leading 1's of the row vectors form a basis for the column space of R.
Example
Find bases for the row and column spaces of
A =
[ 1  -3   4  -2   5   4]
[ 2  -6   9  -1   8   2]
[ 2  -6   9  -1   9   7]
[-1   3  -4   2  -5  -4]
Solution. Since elementary row operations do not change the row space of a matrix, we can find a basis for the row space of A by finding a basis for the row space of any row-echelon form of A. Reducing A gives
R =
[1  -3   0  -14   0  -37]
[0   0   1    3   0    4]
[0   0   0    0   1    5]
[0   0   0    0   0    0]
By the previous theorem, the nonzero row vectors of R form a basis for the row space of R and hence form a basis for the row space of A. These basis vectors are
r1 = (1, -3, 0, -14, 0, -37)
r2 = (0, 0, 1, 3, 0, 4)
r3 = (0, 0, 0, 0, 1, 5)
Note that A and R may have different column spaces, but by the theorem above, if we can find a set of column vectors of R that forms a basis for the column space of R, then the corresponding column vectors of A will form a basis for the column space of A.
The column vectors of R containing the leading 1's,
     [1]        [0]        [0]
c1 = [0],  c3 = [1],  c5 = [0]
     [0]        [0]        [1]
     [0]        [0]        [0]
form a basis for the column space of R; thus the corresponding column vectors of A,
     [ 1]        [ 4]        [ 5]
c1 = [ 2],  c3 = [ 9],  c5 = [ 8]
     [ 2]        [ 9]        [ 9]
     [-1]        [-4]        [-5]
form a basis for the column space of A.
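SymPy reproduces this whole computation in a few lines: rref() returns both R and the pivot-column indices, and columnspace() returns the corresponding columns of A (a cross-check added here, not part of the slides):

```python
import sympy as sp

A = sp.Matrix([
    [ 1, -3,  4, -2,  5,  4],
    [ 2, -6,  9, -1,  8,  2],
    [ 2, -6,  9, -1,  9,  7],
    [-1,  3, -4,  2, -5, -4],
])
R, pivots = A.rref()
print(R)                # the reduced row-echelon form R shown above
print(pivots)           # (0, 2, 4) -> leading 1's in columns 1, 3, 5
print(A.columnspace())  # columns 1, 3, 5 of A: a basis for the column space
```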
Section 4.8 Rank and Nullity
Theorem
If A is any matrix, then the row space and column space of A have the same dimension.
Definition
The common dimension of the row space and column space of a matrix A is called the rank of A and is denoted by rank(A);
the dimension of the nullspace of A is called the nullity of A and is denoted by nullity(A).
Example
Find the rank and nullity of the matrix
A =
[-1   2   0   4   5  -3]
[ 3  -7   2   0   1   4]
[ 2  -5   2   4   6   1]
[ 4  -9   2  -4  -4   7]
Solution. The reduced row-echelon form of A is
[1   0  -4  -28  -37   13]
[0   1  -2  -12  -16    5]
[0   0   0    0    0    0]
[0   0   0    0    0    0]
Example
Since there are two nonzero rows (or, equivalently, two leading 1’s), the row space and column space are both two-dimensional, so rank(A)=2.
To find the nullity of A, we must find the dimension of the solution space of the linear system Ax=0. This system can be solved by reducing the augmented matrix to reduced row-echelon form.
The corresponding system of equations is
x1 - 4x3 - 28x4 - 37x5 + 13x6 = 0
x2 - 2x3 - 12x4 - 16x5 + 5x6 = 0
Solving for the leading variables, we have
x1 = 4x3 + 28x4 + 37x5 - 13x6
x2 = 2x3 + 12x4 + 16x5 - 5x6
It follows that the general solution of the system is
Example (continued)
x1 = 4r + 28s + 37t - 13u
x2 = 2r + 12s + 16t - 5u
x3 = r
x4 = s
x5 = t
x6 = u
Equivalently,
[x1]     [ 4]     [28]     [37]     [-13]
[x2]     [ 2]     [12]     [16]     [ -5]
[x3] = r [ 1] + s [ 0] + t [ 0] + u [  0]
[x4]     [ 0]     [ 1]     [ 0]     [  0]
[x5]     [ 0]     [ 0]     [ 1]     [  0]
[x6]     [ 0]     [ 0]     [ 0]     [  1]
Because the four vectors on the right side of the equation form a basis for the solution space, nullity(A) = 4.
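A short SymPy check of this example (our addition) computes the rank and nullity directly; they sum to 6, the number of columns, as the dimension theorem below requires:

```python
import sympy as sp

A = sp.Matrix([
    [-1,  2,  0,  4,  5, -3],
    [ 3, -7,  2,  0,  1,  4],
    [ 2, -5,  2,  4,  6,  1],
    [ 4, -9,  2, -4, -4,  7],
])
rank = A.rank()
nullity = len(A.nullspace())
print(rank, nullity, rank + nullity)   # 2 4 6, i.e. rank + nullity = n
```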
Theorems
Theorem
If A is any matrix, then rank(A) = rank(AT).
Theorem (Dimension Theorem for Matrices)
If A is a matrix with n columns, then
rank(A) + nullity(A) = n
Theorem
If A is an m×n matrix, then
(a) rank(A) = the number of leading variables in the solution of Ax = 0.
(b) nullity(A) = the number of parameters in the general solution of Ax = 0.
Theorems
Theorem (Equivalent Statements)
If A is an n×n matrix, and if TA: Rn → Rn is multiplication by A, then the following
are equivalent.
(a) A is invertible.
(b) Ax=0 has only the trivial solution.
(c) The reduced row-echelon form of A is In.
(d) A is expressible as a product of elementary matrices.
(e) Ax = b is consistent for every n×1 matrix b.
(f) Ax = b has exactly one solution for every n×1 matrix b.
(g) det(A) ≠ 0.
(h) The range of TA is Rn.
(i) TA is one-to-one.
(j) The column vectors of A are linearly independent.
Theorem Cont.
(k) The row vectors of A are linearly independent.
(l) The column vectors of A span Rn.
(m) The row vectors of A span Rn.
(n) The column vectors of A form a basis for Rn.
(o) The row vectors of A form a basis for Rn.
(p) A has rank n.
(q) A has nullity 0.
A function is a rule f that associates with each element in a set A one and only one element in a set B.
If f associates the element b with the element a, then we write b = f(a) and say that b is the image of a under f or that f(a) is the value of f at a.
The set A is called the domain of f and the set B is called the codomain of f.
The subset of B consisting of all possible values for f as a varies over A is called the range of f.
4.9 Transformations from Rn to Rm
Functions from Rn to Rm
Here, we will be concerned exclusively with transformations from Rn to Rm. Suppose f1, f2, …, fm are real-valued functions of n real variables, say
w1 = f1(x1, x2, …, xn)
w2 = f2(x1, x2, …, xn)
…
wm = fm(x1, x2, …, xn)
These m equations assign a unique point (w1, w2, …, wm) in Rm to each point (x1, x2, …, xn) in Rn and thus define a transformation from Rn to Rm. If we denote this transformation by T: Rn → Rm, then
T(x1, x2, …, xn) = (w1, w2, …, wm)
Example: The equations
w1 = x1 + x2
w2 = 3x1x2
w3 = x1^2 - x2^2
define a transformation T: R2 → R3.
With this transformation, the image of the point (x1, x2) is
T(x1, x2) = (x1 + x2, 3x1x2, x1^2 - x2^2)
Thus, for example, T(1, -2) = (-1, -6, -3).
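Since T is given componentwise, it can be transcribed directly into Python (a small sketch added here, not in the original slides):

```python
# The transformation T: R^2 -> R^3 from the example above.
def T(x1, x2):
    return (x1 + x2, 3 * x1 * x2, x1**2 - x2**2)

print(T(1, -2))   # (-1, -6, -3), matching the slide
```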
Linear Transformations from Rn to Rm
A linear transformation (or a linear operator if m = n) T: Rn → Rm is defined by equations of the form
w1 = a11x1 + a12x2 + ... + a1nxn
w2 = a21x1 + a22x2 + ... + a2nxn
...
wm = am1x1 + am2x2 + ... + amnxn
or, in matrix form,
[w1]   [a11  a12  ...  a1n] [x1]
[w2] = [a21  a22  ...  a2n] [x2]
[ .]   [ .    .         . ] [ .]
[wm]   [am1  am2  ...  amn] [xn]
or w = Ax
The matrix A = [aij] is called the standard matrix for the linear transformation T, and T is called multiplication by A.
Example: The linear transformation T: R4 → R3 is defined by the equations
w1 = 2x1 - 3x2 + x3 - 5x4
w2 = 4x1 + x2 - 2x3 + x4
w3 = 5x1 - x2 + 4x3
Find the standard matrix for T, and calculate T(1, -3, 0, 2).
Solution: T can be expressed in matrix form as
[w1]   [2  -3   1  -5] [x1]
[w2] = [4   1  -2   1] [x2]
[w3]   [5  -1   4   0] [x3]
                       [x4]
so the standard matrix for T is
A =
[2  -3   1  -5]
[4   1  -2   1]
[5  -1   4   0]
Furthermore, if (x1, x2, x3, x4) = (1, -3, 0, 2), then
w1 = 2(1) - 3(-3) + (0) - 5(2) = 1
w2 = 4(1) + (-3) - 2(0) + (2) = 3
w3 = 5(1) - (-3) + 4(0) = 8
or
[w1]   [2  -3   1  -5] [ 1]   [1]
[w2] = [4   1  -2   1] [-3] = [3]
[w3]   [5  -1   4   0] [ 0]   [8]
                       [ 2]
Thus, T(1, -3, 0, 2) = (1, 3, 8).
Notation:
When it is important to emphasize that A is the standard matrix for T, we denote the linear transformation T: Rn → Rm by TA: Rn → Rm. Thus, TA(x) = Ax.
We can also denote the standard matrix for T by the symbol [T], so that T(x) = [T]x.
Remark:
We have established a correspondence between m×n matrices and linear transformations from Rn to Rm: to each matrix A there corresponds a linear transformation TA (multiplication by A), and to each linear transformation T: Rn → Rm, there corresponds an m×n matrix [T] (the standard matrix for T).
Properties of Matrix Transformations
The following theorem lists four basic properties of matrix transformations that follow from the properties of matrix multiplication.
Theorem 4.9.2
If TA: Rn → Rm and TB: Rn → Rm are matrix transformations, and if TA(x) = TB(x) for every vector x in Rn, then A = B.
Examples
Zero Transformation from Rn to Rm
If 0 is the m×n zero matrix and 0 is the zero vector in Rm, then for every vector x in Rn,
T0(x) = 0x = 0
so multiplication by zero maps every vector in Rn into the zero vector in Rm. We call T0 the zero transformation from Rn to Rm.
Identity Operator on Rn
If I is the n×n identity matrix, then for every vector x in Rn,
TI(x) = Ix = x
so multiplication by I maps every vector in Rn into itself. We call TI the identity operator on Rn.
A Procedure for Finding Standard Matrices
Reflection Operators
In general, operators on R2 and R3 that map each vector into its symmetric image about some line or plane are called reflection operators. Such operators are linear.
Projection Operators
In general, a projection operator (or more precisely an orthogonal projection operator) on R2 or R3 is any operator that maps each vector into its orthogonal projection on a line or plane through the origin. Projection operators are linear.
Rotation Operators
An operator that rotates each vector in R2 through a fixed angle θ is called a rotation operator on R2.
A Rotation of Vectors in R3
• A rotation of vectors in R3 is usually described in relation to a ray emanating from the origin, called the axis of rotation.
• As a vector revolves around the axis of rotation, it sweeps out some portion of a cone.
• The angle of rotation is described as "clockwise" or "counterclockwise" in relation to a viewpoint that is along the axis of rotation looking toward the origin.
• The counterclockwise direction for a rotation about its axis can be determined by a "right-hand rule".
Example: Use matrix multiplication to find the image of the vector (1, 1) when it is rotated through an angle of 30° (θ = π/6).
Solution: The image of the vector x = (x, y) = (1, 1) is
w = [cos(π/6)  -sin(π/6)] [x]   [√3/2  -1/2 ] [1]   [(√3 - 1)/2]
    [sin(π/6)   cos(π/6)] [y] = [1/2    √3/2] [1] = [(1 + √3)/2]
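A NumPy version of the same rotation (added for reference; not part of the slides):

```python
import numpy as np

theta = np.pi / 6   # 30 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(R @ np.array([1.0, 1.0]))                     # [0.3660254 1.3660254]
print((np.sqrt(3) - 1) / 2, (1 + np.sqrt(3)) / 2)   # the exact values above
```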
Dilation and Contraction Operators
If k is a nonnegative scalar, the operator T(x) = kx on R2 or R3 is called a contraction with factor k if 0 ≤ k ≤ 1 and a dilation with factor k if k ≥ 1.
Expansions and Compressions
In a dilation or contraction of R2 or R3, all coordinates are multiplied by a factor k. If only one of the coordinates is multiplied by k, then the resulting operator is called an expansion or compression with factor k.
Shears
A matrix operator of the form T(x, y)=(x+ky, y) is called the shear in the x-direction with factor k.
Similarly, a matrix operator of the form T(x, y)=(x, y+kx) is called the shear in the y-direction with factor k.
4.10 Properties of Matrix Transformations
Compositions of Linear Transformations
If TA: Rn → Rk and TB: Rk → Rm are linear transformations, then for each x in Rn one can first compute TA(x), which is a vector in Rk, and then one can compute TB(TA(x)), which is a vector in Rm.
Thus, the application of TA followed by TB produces a transformation from Rn to Rm. This transformation is called the composition of TB with TA and is denoted by TB ∘ TA. Thus
(TB ∘ TA)(x) = TB(TA(x))
The composition TB ∘ TA is linear since
(TB ∘ TA)(x) = TB(TA(x)) = B(Ax) = (BA)x
The standard matrix for TB ∘ TA is BA. That is,
TB ∘ TA = TBA
Remark:
TB ∘ TA = TBA captures an important idea: multiplying matrices is equivalent to composing the corresponding linear transformations in the right-to-left order of the factors.
Alternatively, if T1: Rn → Rk and T2: Rk → Rm are linear transformations, then because the standard matrix for the composition T2 ∘ T1 is the product of the standard matrices of T2 and T1, we have
[T2 ∘ T1] = [T2][T1]
Example: Find the standard matrix for the linear operator T: R2 → R2 that first reflects a vector about the y-axis, then reflects the resulting vector about the x-axis.
Solution: The linear transformation T can be expressed as the composition
T = T2 ∘ T1
where T1 is the reflection about the y-axis and T2 is the reflection about the x-axis. The standard matrices are
[T1] = [-1  0]     [T2] = [1   0]
       [ 0  1],           [0  -1]
Since [T] = [T2 ∘ T1] = [T2][T1],
[T] = [1   0] [-1  0]   [-1   0]
      [0  -1] [ 0  1] = [ 0  -1]
which is called the reflection about the origin.
Note: Composition is NOT commutative.
Example: Let T1: R2 → R2 be the reflection operator about the line y = x, and let T2: R2 → R2 be the orthogonal projection on the y-axis. Then
[T1 ∘ T2] = [T1][T2] = [0  1] [0  0]   [0  1]
                       [1  0] [0  1] = [0  0]
and
[T2 ∘ T1] = [T2][T1] = [0  0] [0  1]   [0  0]
                       [0  1] [1  0] = [1  0]
so T1 ∘ T2 ≠ T2 ∘ T1. Thus, T1 ∘ T2 and T2 ∘ T1 have different effects on a vector x.
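The order sensitivity is easy to see numerically; this NumPy sketch (our addition) multiplies the two standard matrices in both orders:

```python
import numpy as np

T1 = np.array([[0, 1], [1, 0]])   # reflection about the line y = x
T2 = np.array([[0, 0], [0, 1]])   # orthogonal projection on the y-axis

print(T1 @ T2)   # standard matrix of T1 ∘ T2 (T2 first): [[0 1], [0 0]]
print(T2 @ T1)   # standard matrix of T2 ∘ T1 (T1 first): [[0 0], [1 0]]
# The products differ, confirming that composition is not commutative.
```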
One-to-One Matrix Transformations
Linearity Properties
Section 5.1 Eigenvalues and Eigenvectors
In general, the image of a vector x under multiplication by a square matrix A differs from x in both magnitude and direction. However, in the special case where x is an eigenvector of A, multiplication by A leaves the direction unchanged (up to reversal).
Depending on the sign and magnitude of the eigenvalue λ corresponding to x, the operation Ax = λx compresses or stretches x by a factor of |λ|, with a reversal of direction in the case where λ is negative.
Computing Eigenvalues and Eigenvectors
Example. Find the eigenvalues of the matrix
A = [3   0]
    [8  -1]
Solution.
det(λI - A) = det [λ - 3    0  ]
                  [ -8    λ + 1] = (λ - 3)(λ + 1) = 0
This shows that the eigenvalues of A are λ = 3 and λ = -1.
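As a numerical cross-check (added here, not in the original slides), NumPy's eigenvalue routine recovers the same values:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [8.0, -1.0]])
eigenvalues, _ = np.linalg.eig(A)
print(eigenvalues)   # [ 3. -1.] (order may vary)
```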
Finding Eigenvectors and Bases for Eigenspaces
Since the eigenvectors corresponding to an eigenvalue λ of a matrix A