
Library of Congress Cataloging-in-Publication Data on File.

Editorial Director, Computer Science, Engineering, and Advanced Mathematics: Marcia J. Horton
Senior Editor: Holly Stark
Editorial Assistant: Jennifer Lonschein
Senior Managing Editor/Production Editor: Scott Disanno
Art Director: John Christiana
Cover Designer: Tamara Newman
Art Editor: Thomas Benfatti
Manufacturing Manager: Alexis Heydt-Long
Manufacturing Buyer: Lisa McDowell
Senior Marketing Manager: Tim Galligan

© 2008 Pearson

55. A probability vector is a vector whose components are nonnegative and have a sum of 1. Show that if p and q are probability vectors and a and b are nonnegative scalars with a + b = 1, then ap + bq is a probability vector.

56. If A is a matrix for which the sum A + A^T is defined, then A is a square matrix.

57. Let A and B be matrices of the same size.
(a) Prove that the jth column of A + B is aj + bj.
(b) Prove that for any scalar c, the jth column of cA is caj.

58. For any m × n matrix A, prove that 0A = O, the m × n zero matrix.

59. For any m × n matrix A, prove that 1A = A.

70. Express Au as a linear combination of the columns of A.

In Exercises 71–74, let

A = [−1 0]      and      u = [x]
    [ 0 1]                   [y].

71. Show that Au is the reflection of u about the y-axis.

72. Prove that A(Au) = u.

73. Modify the matrix A to obtain a matrix B so that Bu is the reflection of u about the x-axis.

74. Let C denote the rotation matrix that corresponds to θ = 180°.
(a) Find C.
(b) Use the matrix B in Exercise 73 to show that A(Cu) = C(Au) = Bu and B(Cu) = C(Bu) = Au.
(c) Interpret these equations in terms of reflections and rotations.

In Exercises 75–79, let

A = [1 0]      and      u = [x]
    [0 0]                   [y].

75. Show that Au is the projection of u on the x-axis.

76. Prove that A(Au) = Au.

77. Show that if v is any vector whose endpoint lies on the x-axis, then Av = v.

78. Modify the matrix A to obtain a matrix B so that Bu is the projection of u on the y-axis.

79. Let C denote the rotation matrix that corresponds to θ = 180°. (See Exercise 74(a).)
(a) Prove that A(Cu) = C(Au).
(b) Interpret the result in (a) geometrically.

64. A matrix having nonnegative entries such that the sum of the entries in each column is 1 is called a stochastic matrix.

65. Use a matrix–vector product to show that if θ = 0°, then Aθv = v for all v in R^2.

66. Use a matrix–vector product to show that if θ = 180°, then Aθv = −v for all v in R^2.

67. Use matrix–vector products to show that, for any angles θ and β and any vector v in R^2, Aθ(Aβv) = Aθ+βv.

68. Compute Aθ^T(Aθu) and Aθ(Aθ^Tu) for any vector u in R^2 and any angle θ.

69. Suppose that in a metropolitan area there are 400 thousand people living in the city and 300 thousand people living in the suburbs. Use the stochastic matrix in Example 3 to determine
(a) the number of people living in the city and suburbs after one year;
(b) the number of people living in the city and suburbs after two years.

80. Let u1 and u2 be vectors in R^n. Prove that the sum of two linear combinations of these vectors is also a linear combination of these vectors.

81. Let u1 and u2 be vectors in R^n. Let v and w be linear combinations of u1 and u2. Prove that any linear combination of v and w is also a linear combination of u1 and u2.

82. Let u1 and u2 be vectors in R^n. Prove that a scalar multiple of a linear combination of these vectors is also a linear combination of these vectors.

83. Prove (b) of Theorem 1.3.
84. Prove (c) of Theorem 1.3.
85. Prove (d) of Theorem 1.3.
86. Prove (e) of Theorem 1.3.
87. Prove (f) of Theorem 1.3.
88. Prove (g) of Theorem 1.3.
89. Prove (h) of Theorem 1.3.

In Exercises 90 and 91, use either a calculator with matrix capabilities or computer software such as MATLAB to solve each problem.


90. In reference to Exercise 69, determine the number of people living in the city and suburbs after 10 years.

91. For the matrices A and B and the vectors u and v, (a) compute Au; (b) compute B(u + v); (c) compute (A + B)v; (d) compute A(Bv).

SOLUTIONS TO THE PRACTICE PROBLEMS

1. (a) The vectors in S are nonparallel vectors in R^2.
(b) To express w as a linear combination of the vectors in S, we must find scalars x1 and x2 such that

[−1]      [2]      [ 3]   [2x1 + 3x2]
[10] = x1 [1] + x2 [−2] = [ x1 − 2x2].

That is, we must solve the following system:

2x1 + 3x2 = −1
 x1 − 2x2 = 10

Using elementary algebra, we see that x1 = 4 and x2 = −3. So

[−1]     [2]     [ 3]
[10] = 4 [1] − 3 [−2].

2. (a) Av = [11; 4]
(b) (Av)^T = [11 4]

1.3 SYSTEMS OF LINEAR EQUATIONS

A linear equation in the variables (unknowns) x1, x2, ..., xn is an equation that can be written in the form

a1x1 + a2x2 + ··· + anxn = b,

where a1, a2, ..., an, and b are real numbers. The scalars a1, a2, ..., an are called the coefficients, and b is called the constant term of the equation. For example, 3x1 − 7x2 + x3 = 19 is a linear equation in the variables x1, x2, and x3, with coefficients 3, −7, and 1, and constant term 19. The equation 8x2 − 12x5 = 4x1 − 9x3 + 6 is also a linear equation because it can be written as

−4x1 + 8x2 + 9x3 − 12x5 = 6.

On the other hand, the equations

2x1 − 7x2 + x3² = −3      and      4√x1 − 3x2 = 15

are not linear equations because they contain terms involving a square of a variable or a square root of a variable.

CHAPTER 1 Matrices, Vectors, and Systems of Linear Equations

A system of linear equations is a set of m linear equations in the same n variables, where m and n are positive integers. We can write such a system in the form

a11x1 + a12x2 + ··· + a1nxn = b1
a21x1 + a22x2 + ··· + a2nxn = b2
            ⋮
am1x1 + am2x2 + ··· + amnxn = bm,

where aij denotes the coefficient of xj in equation i. For example, on page 18 we obtained the following system of 2 linear equations in the variables x1, x2, and x3:

.80x1 + .60x2 + .40x3 = 5
.20x1 + .40x2 + .60x3 = 3      (1)

A solution of a system of linear equations in the variables x1, x2, ..., xn is a vector [s1; s2; ...; sn] in R^n such that every equation in the system is satisfied when each xi is replaced by si. For example, [2; 5; 1] is a solution of system (1) because

.80(2) + .60(5) + .40(1) = 5      and      .20(2) + .40(5) + .60(1) = 3.

The set of all solutions of a system of linear equations is called the solution set of that system.
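Such plug-in checks can be carried out mechanically. The following is a minimal Python sketch (the helper function and variable names are illustrative, not from the text) that verifies the solution of system (1):

```python
# Coefficients and right-hand sides of system (1).
coeffs = [
    (0.80, 0.60, 0.40, 5.0),  # .80x1 + .60x2 + .40x3 = 5
    (0.20, 0.40, 0.60, 3.0),  # .20x1 + .40x2 + .60x3 = 3
]
x = (2.0, 5.0, 1.0)  # the claimed solution [2; 5; 1]

def satisfies(equation, point, tol=1e-9):
    """Return True if a1*x1 + a2*x2 + a3*x3 = b holds at the point."""
    a1, a2, a3, b = equation
    lhs = a1 * point[0] + a2 * point[1] + a3 * point[2]
    return abs(lhs - b) < tol

print(all(satisfies(eq, x) for eq in coeffs))  # True: x solves every equation
```

A vector belongs to the solution set exactly when `satisfies` returns True for every equation of the system.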

Practice Problem 1 ►

Determine whether (a) u = [−2; 3; 2; 1] and (b) v are solutions of the following system of linear equations:

 x1       + 5x3 − x4 = 7
2x1 − x2 + 6x3       = 8.

SYSTEMS OF 2 LINEAR EQUATIONS IN 2 VARIABLES

A linear equation in two variables x and y has the form ax + by = c. When at least one of a and b is nonzero, this is the equation of a line in the xy-plane. Thus a system of 2 linear equations in the variables x and y consists of a pair of equations, each of which describes a line in the plane:

a1x + b1y = c1 is the equation of line L1,
a2x + b2y = c2 is the equation of line L2.

Geometrically, a solution of such a system corresponds to a point lying on both of the lines L1 and L2. There are three different situations that can arise. If the lines are different and parallel, then they have no point in common. In this case, the system of equations has no solution. (See Figure 1.16.) If the lines are different but not parallel, then the two lines have a unique point of intersection. In this case, the system of equations has exactly one solution. (See Figure 1.17.)

Figure 1.16  L1 and L2 are parallel. No solution.

Figure 1.17  L1 and L2 are different but not parallel. Exactly one solution.

Finally, if the two lines coincide, then every point on L1 and L2 satisfies both of the equations in the system, and so every point on L1 and L2 is a solution of the system. In this case, there are infinitely many solutions. (See Figure 1.18.) As we will soon see, no matter how many equations and variables a system has, there are exactly three possibilities for its solution set.

Every system of linear equations has no solution, exactly one solution, or infinitely many solutions.

Figure 1.18  L1 and L2 are the same. Infinitely many solutions.

A system of linear equations that has one or more solutions is called consistent; otherwise, the system is called inconsistent. Figures 1.17 and 1.18 show consistent systems, while Figure 1.16 shows an inconsistent system.
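The three cases can also be detected computationally by comparing the ranks of the coefficient and augmented matrices, a criterion that the book develops later. The sketch below (the function name and the sample systems are illustrative, and NumPy is assumed to be available) classifies a system of 2 equations in 2 variables:

```python
import numpy as np

def classify(a1, b1, c1, a2, b2, c2):
    """Classify the system a1*x + b1*y = c1, a2*x + b2*y = c2 by
    comparing the ranks of the coefficient and augmented matrices."""
    A = np.array([[a1, b1], [a2, b2]], dtype=float)
    Ab = np.array([[a1, b1, c1], [a2, b2, c2]], dtype=float)
    r, ra = np.linalg.matrix_rank(A), np.linalg.matrix_rank(Ab)
    if r < ra:
        return "no solution"            # distinct parallel lines
    if r == 2:
        return "exactly one solution"   # lines meet in a single point
    return "infinitely many solutions"  # the two lines coincide

print(classify(1, 1, 2, 1, 1, 3))   # x + y = 2 and x + y = 3 are parallel
print(classify(1, 1, 2, 1, -1, 0))  # x + y = 2 and x - y = 0 intersect once
print(classify(1, 1, 2, 2, 2, 4))   # the second equation doubles the first
```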

ELEMENTARY ROW OPERATIONS

To find the solution set of a system of linear equations or determine that the system is inconsistent, we replace it by one with the same solutions that is more easily solved. Two systems of linear equations that have exactly the same solutions are called equivalent. Now we present a procedure for creating a simpler, equivalent system. It is based on an important technique for solving a system of linear equations taught in high school algebra classes. To illustrate this procedure, we solve the following system of three linear equations in the variables x1, x2, and x3:

 x1 − 2x2 −  x3 = 3
3x1 − 6x2 − 5x3 = 3      (2)
2x1 −  x2 +  x3 = 0

We begin the simplification by eliminating x1 from every equation but the first. To do so, we add appropriate multiples of the first equation to the second and third equations so that the coefficient of x1 becomes 0 in these equations. Adding −3 times the first equation to the second makes the coefficient of x1 equal 0 in the result.

−3x1 + 6x2 + 3x3 = −9      (−3 times equation 1)
 3x1 − 6x2 − 5x3 =  3      (equation 2)
            −2x3 = −6

Likewise, adding −2 times the first equation to the third makes the coefficient of x1 equal 0 in the new third equation.

−2x1 + 4x2 + 2x3 = −6      (−2 times equation 1)
 2x1 −  x2 +  x3 =  0      (equation 3)
       3x2 + 3x3 = −6

We now replace equation 2 with −2x3 = −6, and equation 3 with 3x2 + 3x3 = −6, to transform system (2) into the following system:

x1 − 2x2 −  x3 = 3
          −2x3 = −6
     3x2 + 3x3 = −6

In this case, the calculation that makes the coefficient of x1 equal 0 in the new second equation also makes the coefficient of x2 equal 0. (This does not always happen, as you can see from the new third equation.) If we now interchange the second and third equations in this system, we obtain the following system:

x1 − 2x2 −  x3 = 3
     3x2 + 3x3 = −6      (3)
          −2x3 = −6

We can now solve the third equation for x3 by multiplying both sides by −1/2 (or equivalently, dividing both sides by −2). This produces

x1 − 2x2 −  x3 = 3
     3x2 + 3x3 = −6
            x3 = 3.

By adding appropriate multiples of the third equation to the first and second, we can eliminate x3 from every equation but the third. If we add the third equation to the first and add −3 times the third equation to the second, we obtain

x1 − 2x2 = 6
     3x2 = −15
      x3 = 3.

Now solve for x2 by multiplying the second equation by 1/3. The result is

x1 − 2x2 = 6
      x2 = −5
      x3 = 3.

Finally, adding 2 times the second equation to the first produces the very simple system

x1 = −4
x2 = −5      (4)
x3 = 3,

whose solution is obvious. You should check that replacing x1 by −4, x2 by −5, and x3 by 3 makes each equation in system (2) true, so that [−4; −5; 3] is a solution of system (2). Indeed, it is the only solution, as we will soon show.
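The suggested check of system (2) can be done numerically. A minimal sketch, assuming NumPy is available, using the matrix form of the system:

```python
import numpy as np

# Coefficient matrix and right-hand side of system (2).
A = np.array([[1, -2, -1],
              [3, -6, -5],
              [2, -1,  1]], dtype=float)
b = np.array([3, 3, 0], dtype=float)

x = np.array([-4, -5, 3], dtype=float)  # the solution found above

print(np.allclose(A @ x, b))  # True: x satisfies all three equations
```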


In each step just presented, the names of the variables played no essential role. All of the operations that we performed on the system of equations can also be performed on matrices. In fact, we can express the original system

 x1 − 2x2 −  x3 = 3
3x1 − 6x2 − 5x3 = 3      (2)
2x1 −  x2 +  x3 = 0

as the matrix equation Ax = b, where

A = [1 −2 −1]      x = [x1]      and      b = [3]
    [3 −6 −5]          [x2]                   [3]
    [2 −1  1],         [x3],                  [0].

Note that the columns of A contain the coefficients of x1, x2, and x3 from system (2). For this reason, A is called the coefficient matrix (or the matrix of coefficients) of system (2). All the information that is needed to find the solution set of this system is contained in the matrix

[1 −2 −1 3]
[3 −6 −5 3]
[2 −1  1 0],

which is called the augmented matrix of the system. This matrix is formed by augmenting the coefficient matrix A to include the vector b. We denote the augmented matrix by [A b]. If A is an m × n matrix, then a vector u in R^n is a solution of Ax = b if and only if Au = b. Thus [−4; −5; 3] is a solution of system (2) because

[1 −2 −1] [−4]   [3]
[3 −6 −5] [−5] = [3]
[2 −1  1] [ 3]   [0].

Example 1

For the system of linear equations

 x1       + 5x3 − x4 = 7
2x1 − x2 + 6x3       = −8,

the coefficient matrix and the augmented matrix are

[1  0 5 −1]      and      [1  0 5 −1  7]
[2 −1 6  0]               [2 −1 6  0 −8],

respectively. Note that the variable x2 is missing from the first equation and x4 is missing from the second equation in the system (that is, the coefficients of x2 in the first equation and x4 in the second equation are 0). As a result, the (1, 2)- and (2, 4)-entries of the coefficient and augmented matrices of the system are 0.

In solving system (2), we performed three types of operations: interchanging the position of two equations in a system, multiplying an equation in the system by a nonzero scalar, and adding a multiple of one equation in the system to another. The analogous operations that can be performed on the augmented matrix of the system are given in the following definition.

Definition. Any one of the following three operations performed on a matrix is called an elementary row operation:

1. Interchange any two rows of the matrix. (interchange operation)
2. Multiply every entry of some row of the matrix by the same nonzero scalar. (scaling operation)
3. Add a multiple of one row of the matrix to another row. (row addition operation)

To denote how an elementary row operation changes a matrix A into a matrix B, we use the following notation:

1. A --(ri ↔ rj)--> B indicates that row i and row j are interchanged.
2. A --(cri → ri)--> B indicates that the entries of row i are multiplied by the nonzero scalar c.
3. A --(cri + rj → rj)--> B indicates that c times row i is added to row j.
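The three operations are easy to implement directly. In the sketch below (the function names are mine, not the text's; NumPy is assumed), each operation returns a new array so the original matrix is left unchanged, mirroring the A-to-B notation above:

```python
import numpy as np

def interchange(M, i, j):
    """r_i <-> r_j: swap rows i and j."""
    M = M.copy()
    M[[i, j]] = M[[j, i]]
    return M

def scale(M, i, c):
    """c r_i -> r_i: multiply row i by the nonzero scalar c."""
    assert c != 0, "scaling by zero is not an elementary row operation"
    M = M.copy()
    M[i] = c * M[i]
    return M

def add_multiple(M, i, j, c):
    """c r_i + r_j -> r_j: add c times row i to row j."""
    M = M.copy()
    M[j] = M[j] + c * M[i]
    return M

# First elimination step applied to the augmented matrix of system (2):
Ab = np.array([[1, -2, -1, 3],
               [3, -6, -5, 3],
               [2, -1,  1, 0]], dtype=float)
step1 = add_multiple(Ab, 0, 1, -3)  # -3 r1 + r2 -> r2
print(step1[1])                     # [ 0.  0. -2. -6.]
```

The new second row reproduces the equation −2x3 = −6 obtained in the hand computation.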

Example 2

A matrix A can be transformed into a matrix B by the following sequence of elementary row operations: first interchange rows 1 and 2 (r1 ↔ r2), then add −2 times row 1 to row 2 (−2r1 + r2 → r2), and finally add −3 times row 1 to row 3 (−3r1 + r3 → r3).

We may perform several elementary row operations in succession, indicating the operations by stacking the individual labels above a single arrow. These operations are performed in top-to-bottom order. In the previous example, we could indicate how to transform the second matrix into the fourth matrix of the example by stacking the labels −2r1 + r2 → r2 and −3r1 + r3 → r3 above a single arrow.


Every elementary row operation can be reversed. That is, if we perform an elementary row operation on a matrix A to produce a new matrix B, then we can perform an elementary row operation of the same kind on B to obtain A. If, for example, we obtain B by interchanging two rows of A, then interchanging the same rows of B yields A. Also, if we obtain B by multiplying some row of A by the nonzero constant c, then multiplying the same row of B by 1/c yields A. Finally, if we obtain B by adding c times row i of A to row j, then adding −c times row i of B to row j results in A.

Suppose that we perform an elementary row operation on an augmented matrix [A b] to obtain a new matrix [A′ b′]. The reversibility of the elementary row operations assures us that the solutions of Ax = b are the same as those of A′x = b′. Thus performing an elementary row operation on the augmented matrix of a system of linear equations does not change the solution set. That is, each elementary row operation produces the augmented matrix of an equivalent system of linear equations. We assume this result throughout the rest of Chapter 1; it is proved in Section 2.3. Thus, because the system of linear equations (2) is equivalent to system (4), there is only one solution of system (2).

REDUCED ROW ECHELON FORM

We can use elementary row operations to simplify any system of linear equations until it is easy to see what the solution is. First, we represent the system by its augmented matrix, and then use elementary row operations to transform the augmented matrix into a matrix having a special form, which we call a reduced row echelon form. The system of linear equations whose augmented matrix has this form is equivalent to the original system and is easily solved. We now define this special form of matrix. In the following discussion, we call a row of a matrix a zero row if all its entries are 0 and a nonzero row otherwise. We call the leftmost nonzero entry of a nonzero row its leading entry.

Definitions. A matrix is said to be in row echelon form if it satisfies the following three conditions:

1. Each nonzero row lies above every zero row.
2. The leading entry of a nonzero row lies in a column to the right of the column containing the leading entry of any preceding row.
3. If a column contains the leading entry of some row, then all entries of that column below the leading entry are 0.⁵

If a matrix also satisfies the following two additional conditions, we say that it is in reduced row echelon form:⁶

4. If a column contains the leading entry of some row, then all the other entries of that column are 0.
5. The leading entry of each nonzero row is 1.

⁵ Condition 3 is a direct consequence of condition 2. We include it in this definition for emphasis, as is usually done when defining the row echelon form.
⁶ Inexpensive calculators are available that can compute the reduced row echelon form of a matrix. On such a calculator, or in computer software, the reduced row echelon form is usually obtained by using the command rref.
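As the footnote notes, software exposes this computation through an rref command. For instance, SymPy (assumed to be available here) computes the reduced row echelon form of the augmented matrix of system (2) directly:

```python
from sympy import Matrix

# Augmented matrix of system (2).
Ab = Matrix([[1, -2, -1, 3],
             [3, -6, -5, 3],
             [2, -1,  1, 0]])

R, pivot_cols = Ab.rref()  # reduced row echelon form and pivot column indices
print(R)           # Matrix([[1, 0, 0, -4], [0, 1, 0, -5], [0, 0, 1, 3]])
print(pivot_cols)  # (0, 1, 2)
```

The last column of R reads off the unique solution x1 = −4, x2 = −5, x3 = 3 found earlier by hand.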

A matrix having either of the forms that follow is in reduced row echelon form. In these diagrams, * denotes an arbitrary entry (that may or may not be 0).

[1 * 0 0 *]        [1 0 * 0 * *]
[0 0 1 0 *]        [0 1 * 0 * *]
[0 0 0 1 *]        [0 0 0 1 * *]
[0 0 0 0 0]        [0 0 0 0 0 0]

Notice that the leading entries (which must be 1's by condition 5) form a pattern suggestive of a flight of stairs. Moreover, these leading entries of 1 are the only nonzero entries in their columns. Also, each nonzero row precedes all of the zero rows.

Example 3

The following matrices are not in reduced row echelon form:

A = [1 0 0 6]        B = [1 7 −3 9]
    [0 0 1 5]            [0 0  4 6]
    [0 1 0 7]            [0 0  0 2]
                         [0 0  0 0]

Matrix A fails to be in reduced row echelon form because the leading entry of the third row does not lie to the right of the leading entry of the second row. Notice, however, that the matrix obtained by interchanging the second and third rows of A is in reduced row echelon form.

Matrix B is not in reduced row echelon form for two reasons. The leading entry of the third row is not 1, and the leading entries in the second and third rows are not the only nonzero entries in their columns. That is, the third column of B contains the first nonzero entry in row 2, but the (2, 3)-entry of B is not the only nonzero entry in column 3. Notice, however, that although B is not in reduced row echelon form, B is in row echelon form.

Matrix A fai ls to be in reduced row echelon form because the leading e ntry of the third row does not lie to the right of the leading entry of the second row. Notice, however, that the matrix obta ined by interchanging the second and third rows of A is in reduced row echelon form. Matrix 8 is not in reduced row echelon form for two reasons. The leading entry of the third row is not I. and the leading entries in the second and third rows are not the only nonzero entr ies in their columns . That is, the third column of 8 contains the first nonzero entry in row 2, but the (2, 3)-entry of 8 is not the onl y nonzero entry in column 3. Notice. however, that although 8 is not in reduced row echelon form, 8 is in row eche lon form. A system of linear equations ca n be eas ily solved if its augmented matrix is in reduced row echelon form. For example, the systemXt

X2X3

-4 = -5 3

= =

has a solution that is immediately evident. If a system of equations has in fin ite ly many solutions, then obtaining the solution is somewhat more complicated. Consider, for example, the system of linear equationsXt Jx2

x3

+ 2x4 + 6x4

=7xs = 2 0=0.= 9(5 )

The augmented matrix of this system is

[!

-3 0 2 0 0 I 6 0 0 0 0 I 0 0 0 0

which is in reduced row echelon form .

l


Since the equation 0 = 0 in system (5) provides no useful information, we can disregard it. System (5) is consistent, but it is not possible to find a unique value for each variable because the system has infinitely many solutions. Instead, we can solve for some of the variables, called basic variables, in terms of the others, called the free variables. The basic variables correspond to the leading entries of the augmented matrix. In system (5), for example, the basic variables are x1, x3, and x5 because the leading entries of the augmented matrix are in columns 1, 3, and 5, respectively. The free variables are x2 and x4. We can easily solve for the basic variables in terms of the free variables by moving the free variables and their coefficients from the left side of each equation to the right. The resulting equations

x1 = 7 + 3x2 − 2x4
x2 free
x3 = 9 − 6x4
x4 free
x5 = 2

provide a general solution of system (5). This means that for every choice of values of the free variables, these equations give the corresponding values of x1, x3, and x5 in one solution of the system, and furthermore, every solution of the system has this form for some values of the free variables. For example, choosing x2 = 0 and x4 = 0 gives the solution [7; 0; 9; 0; 2].
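The general solution above is parametrized by the free variables x2 and x4. A short sketch (assuming NumPy; the function name is mine) that generates solutions of system (5) from the free variables and checks them against the augmented matrix:

```python
import numpy as np

def solution(x2, x4):
    """General solution of system (5): x2 and x4 are the free variables."""
    x1 = 7 + 3 * x2 - 2 * x4
    x3 = 9 - 6 * x4
    x5 = 2
    return np.array([x1, x2, x3, x4, x5], dtype=float)

# Coefficient matrix and right-hand side of system (5), ignoring 0 = 0.
A = np.array([[1, -3, 0, 2, 0],
              [0,  0, 1, 6, 0],
              [0,  0, 0, 0, 1]], dtype=float)
b = np.array([7, 9, 2], dtype=float)

# Every choice of the free variables yields a solution of the system.
for x2, x4 in [(0, 0), (1, -2), (5, 3)]:
    assert np.allclose(A @ solution(x2, x4), b)
print(solution(0, 0))  # [7. 0. 9. 0. 2.]
```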

SOLUTIONS TO THE PRACTICE PROBLEMS

1. (a) Since 2(−2) − 3 + 6(2) = 5 ≠ 8, u is not a solution of the second equation in the given system of equations. Therefore u is not a solution of the system. Another method for solving this problem is to represent the given system as a matrix equation Ax = b and to check whether Au = b.

2. In the given matrix, the only nonzero entry in the third row lies in the last column. Hence the system of linear equations corresponding to this matrix is not consistent.

3. Since the given matrix contains no row whose only nonzero entry lies in the last column, the corresponding system is consistent. Writing out the equations and solving each one for its basic variable gives the general solution. Note that x1, which is not a basic variable, is a free variable. The general solution can also be written in vector form as the sum of a particular solution and scalar multiples of fixed vectors, one for each free variable.

1.4 GAUSSIAN ELIMINATION

In Section 1.3, we learned how to solve a system of linear equations for which the augmented matrix is in reduced row echelon form. In this section, we describe a procedure that can be used to transform any matrix into this form.

Suppose that R is the reduced row echelon form of a matrix A. Recall that the first nonzero entry in a nonzero row of R is called the leading entry of that row. The positions that contain the leading entries of the nonzero rows of R are called the pivot positions of A, and a column of A that contains some pivot position of A is called a pivot column of A. For example, later in this section we compute the reduced row echelon form R of a particular matrix A. The first three rows of R are its nonzero rows, and so A has three pivot positions. The first pivot position is row 1, column 1 because the leading entry in the first row of R lies in column 1. The second pivot position is row 2, column 3 because the leading entry in the second row of R lies in column 3. Finally, the third pivot position is row 3, column 4 because the leading entry in the third row of R lies in column 4. Hence the pivot columns of A are columns 1, 3, and 4. (See Figure 1.19.)

The pivot positions and pivot columns are easily determined from the reduced row echelon form of a matrix. However, we need a method to locate the pivot positions so we can compute the reduced row echelon form. The algorithm that we use to obtain the reduced row echelon form of a matrix is called Gaussian elimination. This method is named after Carl Friedrich Gauss (1777–1855), whom many consider to be the greatest mathematician of all time.

60. If a system of m linear equations in n variables is equivalent to a system of p linear equations in q variables, then m = p.

61. If a system of m linear equations in n variables is equivalent to a system of p linear equations in q variables, then n = q.

62. The equation Ax = b is consistent if and only if b is a linear combination of the columns of A.

63. If the equation Ax = b is inconsistent, then the rank of [A b] is greater than the rank of A.

64. If the reduced row echelon form of [A b] contains a zero row, then Ax = b must have infinitely many solutions.

65. If the reduced row echelon form of [A b] contains a zero row, then Ax = b must be consistent.

66. If some column of matrix A is a pivot column, then the corresponding column in the reduced row echelon form of A is a standard vector.

67. If A is a matrix with rank k, then the vectors e1, e2, ..., ek appear as columns of the reduced row echelon form of A.

68. The sum of the rank and nullity of a matrix equals the number of rows in the matrix.

69. Suppose that the pivot rows of a matrix A are rows 1, 2, ..., k, and that row k + 1 becomes zero when applying the Gaussian elimination algorithm. Then row k + 1 must equal some linear combination of rows 1, 2, ..., k.

70. The third pivot position in a matrix lies in row 3.

81. Let A be a 4 × 3 matrix. Is it possible that Ax = b is consistent for every b in R^4? Explain your answer.

82. Let A be an m × n matrix and b be a vector in R^m. What must be true about the rank of A if Ax = b has a unique solution? Justify your answer.

83. A system of linear equations is called underdetermined if it has fewer equations than variables. What can be said about the number of solutions of an underdetermined system?

84. A system of linear equations is called overdetermined if it has more equations than variables. Give examples of overdetermined systems with (a) no solutions, (b) exactly one solution, and (c) infinitely many solutions.

85. Prove that if A is an m × n matrix with rank m, then Ax = b is consistent for every b in R^m.

86. Prove that a matrix equation Ax = b is consistent if and only if the ranks of A and [A b] are equal.

87. Let u be a solution of Ax = 0, where A is an m × n matrix. Must cu be a solution of Ax = 0 for every scalar c? Justify your answer.

88. Let u and v be solutions of Ax = 0, where A is an m × n matrix. Must u + v be a solution of Ax = 0? Justify your answer.

89. Let u and v be solutions of Ax = b, where A is an m × n matrix and b is a vector in R^m. Prove that u − v is a solution of Ax = 0.

90. Let u be a solution of Ax = b and v be a solution of Ax = 0, where A is an m × n matrix and b is a vector in R^m. Prove that u + v is a solution of Ax = b.

91. Let A be an m × n matrix and b be a vector in R^m such that Ax = b is consistent. Prove that Ax = cb is consistent for every scalar c.

92. Let A be an m × n matrix and b1 and b2 be vectors in R^m such that both Ax = b1 and Ax = b2 are consistent. Prove that Ax = b1 + b2 is consistent.

93. Let u and v be solutions of Ax = b, where A is an m × n matrix and b is a vector in R^m.

(b) For the system also to be consistent, we must have 4 − s = 0. Thus the original system has infinitely many solutions if r = 3 and s = 4.

(c) Let A denote the coefficient matrix of the system. For the system to have a unique solution, there must be two basic variables, so the rank of A must be 2. Since deleting the last column of the preceding matrix gives a row echelon form of A, the rank of A is 2 precisely when r − 3 ≠ 0, that is, when r ≠ 3.

1.5 APPLICATIONS OF SYSTEMS OF LINEAR EQUATIONS*

Systems of linear equations arise in many applications of mathematics. In this section, we present two such applications.

THE LEONTIEF INPUT–OUTPUT MODEL

In a modern industrialized country, there are hundreds of different industries that supply goods and services needed for production. These industries are often mutually dependent. The agricultural industry, for instance, requires farm machinery to plant and harvest crops, whereas the makers of farm machinery need food produced by the agricultural industry. Because of this interdependency, events in one industry, such as a strike by factory workers, can significantly affect many other industries. To better understand these complex interactions, economic planners use mathematical models of the economy, the most important of which was developed by the Russian-born economist Wassily Leontief.

While a student in Berlin in the 1920s, Leontief developed a mathematical model, called the input–output model, for analyzing an economy. After arriving in the United States in 1931 to be a professor of economics at Harvard University, Leontief began to collect the data that would enable him to implement his ideas. Finally, after the end of World War II, he succeeded in extracting from government statistics the data necessary to create a model of the U.S. economy. This model proved to be highly accurate in predicting the behavior of the postwar U.S. economy and earned Leontief the 1973 Nobel Prize for Economics.

Leontief's model of the U.S. economy combined approximately 500 industries into 42 sectors that provide products and services, such as the electrical machinery sector. To illustrate Leontief's theory, which can be applied to the economy of any country or region, we next show how to construct a general input–output model. Suppose that an economy is divided into n sectors and that sector i produces some commodity or service Si (i = 1, 2, ..., n). Usually, we measure amounts of commodities and services in common monetary units and hold costs fixed so that we can compare diverse sectors. For example, the output of the steel industry could be measured in millions of dollars worth of steel produced. For each i and j, let cij denote the amount of Si needed to produce one unit of Sj. Then the n × n matrix C whose (i, j)-entry is cij is called the input–output matrix (or the consumption matrix) for the economy.

To illustrate these ideas with a very simple example, consider an economy that is divided into three sectors: agriculture, manufacturing, and services. (Of course, a model of any real economy, such as Leontief's original model, will involve many more sectors and much larger matrices.) Suppose that each dollar's worth of agricultural output requires inputs of $0.10 from the agricultural sector, $0.20 from the manufacturing sector, and $0.30 from the services sector; each dollar's worth of manufacturing output requires inputs of $0.20 from the agricultural sector, $0.40 from the manufacturing sector, and $0.10 from the services sector; and each dollar's worth of services output requires inputs of $0.10 from the agricultural sector, $0.20 from the manufacturing sector, and $0.10 from the services sector. From this information, we can form the following input–output matrix:

* This section can be omitted without loss of continuity.
C=

['I .I .I] .I.2 .3

Ag.

Man. .2 .4

Svcs.

.2

Agriculture Manufactunng Services

Note that the (i ,j)-entry of the matrix represents the amount of input from sector i needed to produce a dollar 's worth of output from sector j. Now let x 1, x 2. and x3 denote the total output of the agriculture, manufacturing, and services sectors, respectively. Sinc- x 1 dollar' s worth of agricultural products arc being produced, the e first column of the input-output matrix shows that an input of .lx1 is required from the agriculture sector. an input of .2r1 is required from the manufacturing sector, and an input of .3x1 is requ ired from the services sector. S imilar statements apply to the manufacturing and services sectors. Figure 1.20 shows the total amount of money flowing among the three sectors. Note that in Figure 1.20 the three arcs leaving the agriculture sector give the total amount of agricultural output that is used as inputs for all three sectors. The sum of the labels on the three arcs .. lx 1 + .2r2 + .lx3. represents the amount of agricultural output that is consumed during the production process. Similar statements apply to the other two sectors. So the vector

Figure 1.20 The flow of money among the sectors


For an economy with n x n input-output matrix C, the gross production necessary to satisfy exactly a demand d is a solution of (In - C)x = d.

Example 2

For the economy in Example 1, determine the gross production needed to meet a consumer demand for $90 million of agriculture, $80 million of manufacturing, and $60 million of services.

Solution  We must solve the matrix equation (I3 - C)x = d, where C is the input-output matrix and d = (90, 80, 60)^T is the demand vector. Since

I3 - C = [1 0 0]   [.1 .2 .1]   [ .9 -.2 -.1]
         [0 1 0] - [.2 .4 .2] = [-.2  .6 -.2]
         [0 0 1]   [.3 .1 .1]   [-.3 -.1  .9],

the augmented matrix of the system to be solved is

[ .9  -.2  -.1   90]
[-.2   .6  -.2   80]
[-.3  -.1   .9   60].

Thus the solution of (I3 - C)x = d is

x = (170, 240, 150)^T,

so the gross production needed to meet the demand is $170 million of agriculture, $240 million of manufacturing, and $150 million of services.
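The computation in Example 2 can be checked numerically. Below is a sketch using NumPy; the matrix C and demand d are the ones from Examples 1 and 2:

```python
import numpy as np

# Input-output (consumption) matrix C from Example 1: column j lists the
# inputs needed to produce one dollar's worth of output of sector j.
C = np.array([[0.1, 0.2, 0.1],   # inputs from agriculture
              [0.2, 0.4, 0.2],   # inputs from manufacturing
              [0.3, 0.1, 0.1]])  # inputs from services

d = np.array([90.0, 80.0, 60.0])  # demand vector, in millions of dollars

# The gross production x satisfies (I3 - C) x = d.
x = np.linalg.solve(np.eye(3) - C, d)
print(x)  # approximately [170. 240. 150.]
```

Solving the system directly rather than inverting I3 - C is the standard numerical choice; the result matches the $170, $240, and $150 million found above.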

Practice Problem 1


An island's economy is divided into three sectors: tourism, transportation, and services. Suppose that each dollar's worth of tourism output requires inputs of $0.30 from the tourism sector, $0.10 from the transportation sector, and $0.30 from the services sector; each dollar's worth of transportation output requires inputs of $0.20 from the tourism sector, $0.40 from the transportation sector, and $0.20 from the services sector; and each dollar's worth of services output requires inputs of $0.05 from the tourism sector, $0.05 from the transportation sector, and $0.15 from the services sector.
(a) Write the input-output matrix for this economy.
(b) If the gross production for this economy is $10 million of tourism, $15 million of transportation, and $20 million of services, how much input from the tourism sector is required by the services sector?
(c) If the gross production for this economy is $10 million of tourism, $15 million of transportation, and $20 million of services, what is the total value of the inputs consumed by each sector during the production process?


CHAPTER 1 Matrices, Vectors, and Systems of Linear Equations

(d) If the total outputs of the tourism, transportation, and services sectors are $70 million, $50 million, and $60 million, respectively, what is the net production of each sector? (e) What gross production is required to satisfy exactly a demand for $30 million of tourism, $50 million of transportation, and $40 million of services? economy is $30 million of agriculture, $40 million of manufacturing, $30 million of services, and $20 million of entertainment, what is the total value of the inputs from each sector consumed during the production process? 14. If the gross production for this economy is $20 million of agriculture, $30 million of manufacturing, $20 million of services, and $10 million of entertainment, what is the total value of the inputs from each sector consumed during the production process? 15. If the gross production for this economy is $30 million of agriculture, $40 million of system is inconsistent. Hence u is not in the span of S. On the other three columns. Thus the vector [-:] can be removed from S without changing its span. So

[~

Thus Ax = v is consistent, and so v is in the span of S. In fact, the reduced row echelon form of [A v] shows that

2. Let A be the matrix whose columns are the vectors in S. The reduced row echelon form of A is

[~

0 0

0

I 0

-2

~].

is a subset of S that is a generating set for R^3. Moreover, this set is the smallest generating set possible, because removing any vector from S' leaves a set of only 2 vectors. Since the matrix whose columns are the vectors in such a set is a 3 x 2 matrix, it cannot have rank 3, and so the set cannot be a generating set for R^3 by Theorem 1.6.


1.7 Linear Dependence and Linear Independence


1.7 LINEAR DEPENDENCE AND LINEAR INDEPENDENCE

In Section 1.6, we saw that it is possible to reduce the size of a generating set if some vector in the generating set is a linear combination of the others. In fact, by Theorem 1.7, this vector can be removed without affecting the span. In this section, we consider the problem of recognizing when a generating set cannot be made smaller. Consider, for example, the set S = {u1, u2, u3, u4}, where

In this case, the reader should check that u4 is not a linear combination of the vectors u1, u2, and u3. However, this does not mean that we cannot find a smaller set having the same span as S, because it is possible that one of u1, u2, and u3 might be a linear combination of the other vectors in S. In fact, this is precisely the situation, because u3 = 5u1 - 3u2 + 0u4. Thus checking if one of the vectors in a generating set is a linear combination of the others could require us to solve many systems of linear equations. Fortunately, a better method is available. In the preceding example, in order that we do not have to guess which of u1, u2, u3, and u4 can be expressed as a linear combination of the others, let us formulate the problem differently. Note that because u3 = 5u1 - 3u2 + 0u4, we must have

-5u1 + 3u2 + u3 - 0u4 = 0.

Thus, instead of trying to write some ui as a linear combination of the others, we can try to write 0 as a linear combination of u1, u2, u3, and u4. Of course, this is always possible if we take each coefficient in the linear combination to be 0. But if there is a linear combination of u1, u2, u3, and u4 that equals 0 in which not all of the coefficients are 0, then we can express one of the ui's as a linear combination of the others. In this case, the equation -5u1 + 3u2 + u3 - 0u4 = 0 enables us to express any one of u1, u2, and u3 (but not u4) as a linear combination of the others. For example, since -5u1 + 3u2 + u3 - 0u4 = 0, we have

-5u1 = -3u2 - u3 + 0u4
  u1 = (3/5)u2 + (1/5)u3 + 0u4.

We see that at least one of the vectors depends on (is a linear combination of) the others. This idea motivates the following definitions.

Definitions  A set of k vectors {u1, u2, ..., uk} in R^n is called linearly dependent if there exist scalars c1, c2, ..., ck, not all 0, such that

c1 u1 + c2 u2 + ... + ck uk = 0.

In this case, we also say that the vectors u1, u2, ..., uk are linearly dependent.


A set of k vectors {u1, u2, ..., uk} is called linearly independent if the only scalars c1, c2, ..., ck such that

c1 u1 + c2 u2 + ... + ck uk = 0

are c1 = c2 = ... = ck = 0. In this case, we also say that the vectors u1, u2, ..., uk are linearly independent. Note that a set is linearly independent if and only if it is not linearly dependent.

Example 1

Show that the sets S1 and S2 are linearly dependent.

Solution  The equation is true with c1 = 2, c2 = -1, and c3 = 1. Since not all the coefficients in the preceding linear combination are 0, S1 is linearly dependent. Because at least one of the coefficients in this linear combination is nonzero, S2 is also linearly dependent.

As Example 1 suggests, any finite subset S = {0, u1, u2, ..., uk} of R^n that contains the zero vector is linearly dependent, because

1·0 + 0u1 + 0u2 + ... + 0uk = 0

is a linear combination of the vectors in S in which at least one coefficient is nonzero.

CAUTION

While the equation

0u1 + 0u2 + ... + 0uk = 0

is true, it tells us nothing about the linear independence or dependence of the set S1 in Example 1. A similar statement is true for any set of vectors {u1, u2, ..., uk}:

0u1 + 0u2 + ... + 0uk = 0.

For a set of vectors to be linearly dependent, the equation

c1 u1 + c2 u2 + ... + ck uk = 0

must be satisfied with at least one nonzero coefficient.


Since the equation c1 u1 + c2 u2 + ... + ck uk = 0 can be written as the matrix-vector product

[u1 u2 ... uk] (c1, c2, ..., ck)^T = 0,

we have the following useful observation:

The set {u1, u2, ..., uk} is linearly dependent if and only if there exists a nonzero solution of Ax = 0, where A = [u1 u2 ... uk].

Example 2

Determine whether the set S = {u1, u2, u3, u4} is linearly dependent or linearly independent.

Solution  We must determine whether Ax = 0 has a nonzero solution, where A = [u1 u2 u3 u4] is the matrix whose columns are the vectors in S. The reduced row echelon form of the augmented matrix [A 0] is

[1 0  2 0 0]
[0 1 -1 0 0]
[0 0  0 1 0].

Hence the general solution of this system is

x1 = -2x3
x2 =   x3
x3   free
x4 = 0.

Because the solution of Ax = 0 contains a free variable, this system of linear equations has infinitely many solutions, and we can obtain a nonzero solution by choosing any nonzero value of the free variable. Taking x3 = 1, for instance, we see that

(-2, 1, 1, 0)^T


is a nonzero solution of Ax = 0. Thus S is a linearly dependent subset of R^3, since

-2u1 + u2 + u3 + 0u4 = 0

is a representation of 0 as a linear combination of the vectors in S.
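The test used in Example 2 is easy to mechanize: form the matrix A whose columns are the given vectors and ask whether its null space is nontrivial. A sketch using SymPy follows; since it needs concrete numbers, the four vectors below are illustrative assumptions chosen so that u3 = 2u1 - u2, not the vectors of the example:

```python
from sympy import Matrix

# Columns form an assumed set S = {u1, u2, u3, u4} in R^3 (illustrative
# entries; by construction u3 = 2*u1 - u2, so S is linearly dependent).
A = Matrix([[1, 1, 1, 0],
            [0, 1, -1, 0],
            [1, 0, 2, 1]])

# S is linearly dependent iff Ax = 0 has a nonzero solution, i.e. iff
# the null space of A is nontrivial.
basis = A.nullspace()
print(len(basis) > 0)  # True: S is linearly dependent
print(basis[0].T)      # a nonzero solution, here (-2, 1, 1, 0)
```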

Example 3

Determine whether the set

is linearly dependent or linearly independent.

Solution  As in Example 2, we must check whether Ax = 0 has a nonzero solution, where A is the matrix whose columns are the vectors in S. There is a way to do this without actually solving Ax = 0 (as we did in Example 2). Note that the system Ax = 0 has nonzero solutions if and only if its general solution contains a free variable. Since the reduced row echelon form of A is

[1 0 0]
[0 1 0]
[0 0 1],

the rank of A is 3, and the nullity of A is 3 - 3 = 0. Thus the general solution of Ax = 0 has no free variables. So Ax = 0 has no nonzero solutions, and hence S is linearly independent.
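Example 3's rank-and-nullity shortcut can likewise be carried out by machine. In the sketch below the 3 x 3 matrix is an assumed stand-in for the matrix of Example 3 (its entries are illustrative):

```python
from sympy import Matrix

# Assumed 3 x 3 matrix whose columns play the role of the set S
# (illustrative entries, chosen so the matrix has full rank).
A = Matrix([[1, 2, 0],
            [0, 1, 1],
            [1, 0, 2]])

rank = A.rank()
nullity = A.cols - rank  # nullity = (number of columns) - rank

# The columns are linearly independent iff the nullity is 0.
print(rank, nullity)  # 3 0, so the columns are linearly independent
```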

In Example 3, we showed that a particular set S is linearly independent without actually solving a system of linear equations. Our next theorem shows that a similar technique can be used for any set whatsoever. Note the relationship between this theorem and Theorem 1.6.

THEOREM 1.8

The following statements about an m x n matrix A are equivalent:

(a) The columns of A are linearly independent.
(b) The equation Ax = b has at most one solution for each b in R^m.
(c) The nullity of A is zero.


(d) The rank of A is n, the number of columns of A.
(e) The columns of the reduced row echelon form of A are distinct standard vectors in R^m.
(f) The only solution of Ax = 0 is 0.
(g) There is a pivot position in each column of A.

PROOF  We have already noted that (a) and (f) are equivalent, and clearly (f) and (g) are equivalent. To complete the proof, we show that (b) implies (c), (c) implies (d), (d) implies (e), (e) implies (f), and (f) implies (b).

(b) implies (c)  Since 0 is a solution of Ax = 0, (b) implies that Ax = 0 has no nonzero solutions. Thus the general solution of Ax = 0 has no free variables. Since the number of free variables is the nullity of A, we see that the nullity of A is zero.

(c) implies (d)  Because rank A + nullity A = n, (d) follows immediately from (c).

(d) implies (e)  If the rank of A is n, then every column of A is a pivot column, and therefore the reduced row echelon form of A consists entirely of standard vectors. These are necessarily distinct because each column contains the first nonzero entry in some row.

(e) implies (f)  Let R be the reduced row echelon form of A. If the columns of R are distinct standard vectors in R^m, then R = [e1 e2 ... en]. Clearly, the only solution of Rx = 0 is 0, and since Ax = 0 is equivalent to Rx = 0, it follows that the only solution of Ax = 0 is 0.

(f) implies (b)  Let b be any vector in R^m. To show that Ax = b has at most one solution, we assume that u and v are both solutions of Ax = b and prove that u = v. Since u and v are solutions of Ax = b, we have

A(u - v) = Au - Av = b - b = 0.

So u - v is a solution of Ax = 0. Thus (f) implies that u - v = 0; that is, u = v. It follows that Ax = b has at most one solution.

Practice Problem 1 ►

Is some vector in the set

a linear combination of the others?

The equation Ax = b is called homogeneous if b = 0. As Examples 2 and 3 illustrate, in checking if a subset is linearly independent, we are led to a homogeneous equation. Note that, unlike an arbitrary equation, a homogeneous equation must be consistent, because 0 is a solution of Ax = 0. As a result, the important question concerning a homogeneous equation is not if it has solutions, but whether 0 is the only solution. If not, then the system has infinitely many solutions. For example, the general solution of a homogeneous system of linear equations with more variables than equations must have free variables. Hence a homogeneous system of linear equations with more variables than equations has infinitely many solutions. According


to Theorem 1.8, the number of solutions of Ax = 0 determines the linear dependence or independence of the columns of A. In order to investigate some other properties of the homogeneous equation Ax = 0, let us consider this equation for the matrix

A = [1 -4 2 -1  2]
    [2 -8 3  2 -1].

Since the reduced row echelon form of [A 0] is

[1 -4 0  7 -8  0]
[0  0 1 -4  5  0],

the general solution of Ax = 0 is

x1 = 4x2 - 7x4 + 8x5
x2   free
x3 = 4x4 - 5x5
x4   free
x5   free.

Expressing the solutions of Ax = 0 in vector form yields

x = x2 (4, 1, 0, 0, 0)^T + x4 (-7, 0, 4, 1, 0)^T + x5 (8, 0, -5, 0, 1)^T.

Thus the solution set of Ax = 0 is the span of

S = { (4, 1, 0, 0, 0)^T, (-7, 0, 4, 1, 0)^T, (8, 0, -5, 0, 1)^T }.

In a similar manner, for a matrix A, we can express any solution of Ax = 0 as a linear combination of vectors in which the coefficients are the free variables in the general solution. We call such a representation a vector form of the general solution of Ax = 0. The solution set of this equation equals the span of the set of vectors that appear in a vector form of its general solution. For the preceding set S, we see from equation (11) that the only linear combination of vectors in S equal to 0 is the one in which all of the coefficients are zero. So S is linearly independent. More generally, the following result is true:


When a vector form of the general solution of Ax = 0 is obtained by the method described in Section 1.3, the vectors that appear in the vector form are linearly independent.

Practice Problem 2 ►

Determine a vector form for the general solution of

 x1 - 3x2 +  x3 +  x4 -   x5 = 0
2x1 - 6x2 +  x3 - 3x4 -  9x5 = 0
-2x1 + 6x2 + 3x3 + 2x4 + 11x5 = 0.
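The vectors that appear in a vector form of the general solution are exactly a basis of the null space, so they can be generated mechanically. A sketch using SymPy, applied to the reduced row echelon form displayed in the discussion above:

```python
from sympy import Matrix

# Reduced row echelon form (coefficient part) from the discussion of
# the homogeneous equation Ax = 0 above.
R = Matrix([[1, -4, 0,  7, -8],
            [0,  0, 1, -4,  5]])

# Each null space basis vector corresponds to one free variable
# (x2, x4, x5) set to 1; these are the vectors of the vector form.
for v in R.nullspace():
    print(v.T)
```

The three printed vectors are (4, 1, 0, 0, 0), (-7, 0, 4, 1, 0), and (8, 0, -5, 0, 1), matching the vector form obtained by hand.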


LINEARLY DEPENDENT AND LINEARLY INDEPENDENT SETS

The following result provides a useful characterization of linearly dependent sets. In Section 2.3, we develop a simple method for implementing Theorem 1.9 to write one of the vectors in a linearly dependent set as a linear combination of the preceding vectors.

THEOREM 1.9

Vectors u1, u2, ..., uk in R^n are linearly dependent if and only if u1 = 0 or there exists an i ≥ 2 such that ui is a linear combination of the preceding vectors u1, u2, ..., u_{i-1}.

PROOF  Suppose first that the vectors u1, u2, ..., uk in R^n are linearly dependent. If u1 = 0, then we are finished; so suppose u1 ≠ 0. There exist scalars c1, c2, ..., ck, not all zero, such that

c1 u1 + c2 u2 + ... + ck uk = 0.

Let i denote the largest index such that ci ≠ 0. Note that i ≥ 2, for otherwise the preceding equation would reduce to c1 u1 = 0, which is false because c1 ≠ 0 and u1 ≠ 0. Hence the preceding equation becomes

c1 u1 + c2 u2 + ... + ci ui = 0,

where ci ≠ 0. Solving this equation for ui, we obtain

ui = -(c1/ci) u1 - (c2/ci) u2 - ... - (c_{i-1}/ci) u_{i-1}.

Thus ui is a linear combination of u1, u2, ..., u_{i-1}. We leave the proof of the converse as an exercise.

The following properties relate to linearly dependent and linearly independent sets.

Properties of Linearly Dependent and Independent Sets
1. A set consisting of a single nonzero vector is linearly independent, but {0} is linearly dependent.
2. A set of two vectors {u1, u2} is linearly dependent if and only if u1 = 0 or u2 is in the span of {u1}; that is, if and only if u1 = 0 or u2 is a multiple of u1. Hence a set of two vectors is linearly dependent if and only if one of the vectors is a multiple of the other.
3. Let S = {u1, u2, ..., uk} be a linearly independent subset of R^n, and v be in R^n. Then v does not belong to the span of S if and only if {u1, u2, ..., uk, v} is linearly independent.
4. Every subset of R^n containing more than n vectors must be linearly dependent.
5. If S is a subset of R^n and no vector can be removed from S without changing its span, then S is linearly independent.


CAUTION

The result mentioned in item 2 is valid only for sets containing two vectors. For example, in R^3, the set {e1, e2, e1 + e2} is linearly dependent, but no vector in the set is a multiple of another.

Properties 1, 2, and 5 follow from Theorem 1.9. For a justification of property 3, observe that by Theorem 1.9, u1 ≠ 0, and for i ≥ 2, no ui is in the span of {u1, u2, ..., u_{i-1}}. If v does not belong to the span of S, the vectors u1, u2, ..., uk, v are also linearly independent by Theorem 1.9. Conversely, if the vectors u1, u2, ..., uk, v are linearly independent, then v is not a linear combination of u1, u2, ..., uk by Theorem 1.9. So v does not belong to the span of S. (See Figure 1.27 for the case that k = 2.)

Figure 1.27 Linearly independent and linearly dependent sets of 3 vectors: on the left, v lies in Span{u1, u2}, so {u1, u2, v} is linearly dependent; on the right, v does not lie in Span{u1, u2}, so {u1, u2, v} is linearly independent.

To justify property 4, consider a set {u1, u2, ..., uk} of k vectors from R^n, where k > n. The n x k matrix [u1 u2 ... uk] cannot have rank k because it has only n rows. Thus the set {u1, u2, ..., uk} is linearly dependent by Theorem 1.8. However, the next example shows that subsets of R^n containing n or fewer vectors may be either linearly dependent or linearly independent.
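Property 4 can be confirmed numerically for any particular set: an n x k matrix with k > n can never have rank k. A sketch with four assumed vectors in R^3 (the entries are illustrative):

```python
import numpy as np

# Four assumed vectors in R^3 (illustrative entries).
vectors = ([1, 0, 2], [0, 1, 1], [3, 1, 0], [2, 2, 2])
A = np.column_stack([np.array(v, dtype=float) for v in vectors])  # 3 x 4

rank = np.linalg.matrix_rank(A)

# rank <= 3 < 4 columns, so Ax = 0 has a nonzero solution and the
# four vectors are linearly dependent, as property 4 guarantees.
print(rank, rank < A.shape[1])  # 3 True
```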

Example 4

Determine by inspection whether the following sets are linearly dependent or linearly independent:

Solution  Since S1 contains the zero vector, it is linearly dependent. To determine if S2, a set of two vectors, is linearly dependent or linearly independent, we need only check if either of the vectors in S2 is a multiple of the other. Because

(2/5) (-10, 30, 15)^T = (-4, 12, 6)^T,

we see that S2 is linearly dependent.

To see if S3 is linearly independent, consider the subset S = {u1, u2}, where u1 and u2 are the first two vectors of S3. Because S is a set of two vectors, neither of which is a multiple of the other, S is linearly independent. Vectors in the span of S are linear combinations of the vectors in S, and therefore must have 0 as their third component. Since v, the third vector of S3, has a nonzero third component, it does not belong to the span of S. So by property 3 in the preceding list, S3 = {u1, u2, v} is linearly independent. Finally, the set S4 is linearly dependent by property 4 because it is a set of 4 vectors from R^3.

Practice Problem 3 ►

Determine by inspection whether the following sets are linearly dependent or linearly independent:

In this chapter, we introduced matrices and vectors and learned some of their fundamental properties. Since we can write a system of linear equations as an equation involving a matrix and vectors, we can use these arrays to solve any system of linear equations. It is surprising that the number of solutions of the equation Ax = b is related both to the simple concept of the rank of a matrix and also to the complex concepts of generating sets and linearly independent sets. Yet this is exactly the case, as Theorems 1.6 and 1.8 show. To conclude this chapter, we present the following table, which summarizes the relationships among the ideas that were established in Sections 1.6 and 1.7. We assume that A is an m x n matrix with reduced row echelon form R. Properties listed in the same row of the table are equivalent.

The rank of A   The number of solutions of Ax = b                      The columns of A                                 The reduced row echelon form R of A

rank A = m      Ax = b has at least one solution for every b in R^m.   The columns of A are a generating set for R^m.   Every row of R contains a pivot position.

rank A = n      Ax = b has at most one solution for every b in R^m.    The columns of A are linearly independent.       Every column of R contains a pivot position.


EXERCISES

In Exercises 1-12, determine by inspection whether the given sets are linearly dependent.

I

J. { [

u:nm 'I[_:n-m " 1 [~Wl l [l -il[ } -~] ! [~l [-n } Exe~i:s 23~30. d~tennine l4 ' { [-

. {[- 2 .[o]. [-4!] } ] 21 0 4 3 0

In Exercises 23-30, determine whether the given set is linearly independent.

5. { [Jl[~l[-~] } 6.![_!].[;] .[-~]}7

23. {[:ll[-~J.[~] }24.

n~;J.[-!Jl

&.

mJ.[-;J.[~Jl

{[-:]. [-~].[:]}

' l[~!lII.

10 0 I

{[~l [~l [~] }

WH:H-:]l

25 1[

i].[-i],[j]j-2 3

-I

In Exercises 13-22, a set S is given. Determine by inspection a subset of S containing the fewest vectors that has the same span as S.

"1 Hl U [-iHil ll-~l ~:] }

[ [~l [ ] -: } ,w ;n:nm "mnn [iHlll " 1 t:Hm Hl l[-~l [ [_~J. [-~J l -n13. { [ 14. {

"I[l [1][-;]! I[J.-l ,: ,27.

1 [ 0 [1 [OJ ! ] ] ]~

28

"I[ll f!lfil Pll=30

1nllll tm [il

IDH-!].[-!][=ll l

I&.

In Exercises 31-38, a linearly dependent set S is given. Write some vector in S as a linear combination of the others.

19.

n~l

[-;J. mJ

20 . {[

_iJ. [-~l [ [ ~~l =~]}


In Exercises 51-62, write the vector form of the general solution of the given system of linear equations. x1 + 5x3 = 0  51. x1 - 4x2 + 2x3 = 0  52. x2 - 3x1 = 0

=

53.

Xt

+ 3xz

+ 2r4 = 0X3 6x~

=0

54.

XI

= = + 4x, = 0 x2 - 2r4 = 0

55. -x1 + x2 - 7x3 2x, + 3.r2 - XJ

x1

Xt 56.2rt -

-2x1  57. In Exercises 39-50, determine, if possible, a value of r for which the given set is linearly dependent.

-x, x1x1

+ 4X2 + X3 + 5X4 = 0 + 2x3 - 5x + xs - X6 =-

+ 4x3 - 2r4 =0 + 7x4 = 0 + llx = 0 2X2 - XJ - 4X4 = 0 4xz + 3xJ + 7x = 0 + 3x4 xs x4 + xs

X3

+

+ 2r6 = 0+ 4x6 =0

0

x3 -

58

.

x. - 5xs - 2x3 - x2 + 3x3 + 2.r4 - 2.r, + xz + XJ - x. + 8xs 3x1 - xz - 3x3 - x. - 15x5 -x 1XI +

=0=0=0

=0

59.

Xt 2xt

X2 + X4 = 0 + 2xz + 4x = 0

- 4x

=0

60.

x, - 2rz x, - 2x2 2x 1 - 4.r2

+ XJ + x. + 1xs = 0 + 2x3 + 10xs = 0 + 4x4 + 8xs = 0 x 1 + 2.. z - X3 r + 2.rs - X6

6

1. 2x,

2r3 - .r4 - .r, - 2xz + XJ + x, x,

+ 4xz -

=0 5x6 0 + 2.rs + 4x6 = 0 + 4x5 + 3x6 = 0-

=

43.

H7J. m. [~J l

x1-XI

x2Xl

X3 -

. 2rt - 2rz 62

2x, - x 5 7.r - xs

+ 4.r6 = + 5x6 =-

0

+

+ X) + 5X4 + XSX)

45.

47. { [

n m. [~]} -~l jJ. L!l [-~J }-~l Gl [n

+ 3X4 + XS

0 3X6 = 0 X6 = 0

~

~

In Exercises 63-82, determine whether the statements are true or false.

63. If S is linearly independent, then no vector in S is a linear combination of the others.
64. If the only solution of Ax = 0 is 0, then the rows of A are linearly independent.

l . lUHllPll48 . {[

65. If the nullity of A is 0, then the columns of A are linearly dependent.

66. If the columns of the reduced row echelon form of A are distinct standard vectors, then the only solution of Ax = 0 is 0.

67. If A is an m x n matrix with rank A = n, then the columns of A are linearly independent.


68. A homogeneous equation is always consistent.
69. A homogeneous equation always has infinitely many solutions.

90. Complete the proof of Theorem 1.9 by showing that if u1 = 0 or ui is in the span of {u1, u2, ..., u_{i-1}} for some i ≥ 2, then {u1, u2, ..., uk} is linearly dependent. Hint: Separately consider the case in which u1 = 0 and the case in which ui is in the span of {u1, u2, ..., u_{i-1}}.
91. Prove that any nonempty subset of a linearly independent subset of R^n is linearly independent.

70. If a vector form of the general solution of Ax = 0 is obtained by the method described in Section 1.3, then the vectors that appear in the vector form are linearly independent.
71. For any vector v, {v} is linearly dependent.

72. A set of vectors in R^n is linearly dependent if and only if one of the vectors is a multiple of one of the others.
73. If a subset of R^n is linearly dependent, then it must contain at least n vectors.
74. If the columns of a 3 x 4 matrix are distinct, then they are linearly dependent.
75. For the system of linear equations Ax = b to be homogeneous, b must equal 0.
76. If a subset of R^n contains more than n vectors, then it is linearly dependent.
92. Prove that if S1 is a linearly dependent subset of R^n that is contained in a finite set S2, then S2 is linearly dependent.
93. Let S = {u1, u2, ..., uk} be a nonempty set of vectors from R^n. Prove that if S is linearly independent, then every vector in Span S can be written as c1 u1 + c2 u2 + ... + ck uk for unique scalars c1, c2, ..., ck.
94. State and prove the converse of Exercise 93.
95. Let S = {u1, u2, ..., uk} be a nonempty subset of R^n, and A be an m x n matrix. Prove that if S is linearly dependent and S' = {Au1, Au2, ..., Auk} contains k distinct vectors, then S' is linearly dependent.

77. If every column of an m x n matrix A contains a pivot position, then the matrix equation Ax = b is consistent for every b in R^m.
78. If every row of an m x n matrix A contains a pivot position, then the matrix equation Ax = b is consistent for every b in R^m.
79. If
linearly independent, and so no vector in S is a linear combination of the others.
2. The augmented matrix of the given system is


[j[~

3

6 6

- I I

I

- I

-32

-9II

The first two vectors in S2 are linearly dependent. Therefore S2 is linearly dependent by Theorem 1.9. By property 2 on page 81, S1 is linearly independent. By property 4 on page 81,

0 0

3 0 0 - 2 OJI 0 0 I l 2

0 . 0

the general solution of the given system is

S4 is linearly dependent.

11~2..l \

=h2+free

l1s

.14

1s

=

=

free.

-l a,1 b,1 ~2.

9. ACx13. BC

15. c8r19. C 2

17. A3

In Exercises 21-24, use the matrices A, B, C, and z from Exercises 5-20.

For any matrices A and B for which the product AB is defined, the (i, j)-entry of AB equals the sum of the products of corresponding entries from the ith row of A and the jth column of B.
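The boxed fact is also a recipe for Exercises 25-32: a single entry of AB is the dot product of one row of A with one column of B, with no need to form the whole product. A sketch (the matrices here are illustrative assumptions, not the exercise's A, B, and C):

```python
import numpy as np

def entry(A, B, i, j):
    """Return the (i, j)-entry of AB (0-indexed): row i of A dotted
    with column j of B."""
    return A[i, :] @ B[:, j]

A = np.array([[1, 2, 0],
              [3, -1, 4]])
B = np.array([[2, 1],
              [0, 5],
              [1, -2]])

print(entry(A, B, 1, 0))  # 3*2 + (-1)*0 + 4*1 = 10
assert entry(A, B, 1, 0) == (A @ B)[1, 0]  # agrees with the full product
```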

21. Verify that I2 C = C.
22. Verify that (AB)C = A(BC).
23. Verify that (AB)^T = B^T A^T.
24. Verify that z(AC) = (zA)C.
43. If A, B, and C are matrices for which the product A(BC) is defined, then A(BC) = (AB)C.
44. If A and B are m x n matrices and C is an n x p matrix, then (A + B)C = AC + BC.

In Exercises 25-32, use the following matrices to compute each requested entry or column of the matrix product, without computing the entire matrix:

45. If A and B are n x n matrices, then the diagonal entries of the product matrix AB are a11 b11, a22 b22, ..., ann bnn.
46. If the product AB is defined and either A or B is a zero matrix, then AB is a zero matrix.
47. If the product AB is defined and AB is a zero matrix, then either A or B is a zero matrix.
48. If A_alpha and A_beta are both 2 x 2 rotation matrices, then A_alpha A_beta is a 2 x 2 rotation matrix.
49. The product of two diagonal matrices is a diagonal matrix.
50. In a symmetric n x n matrix, the (i, j)- and (j, i)-entries are equal for all i = 1, 2, ..., n and j = 1, 2, ..., n.

A = [

~ -~ -3 - 2

!]. [-! ~] ,8

=

C

= [;

I

0

3

-2

3

-I] -2

25. the (3, 2)-entry of AB    26. the (2, 1)-entry of BC
27. the (2, 3)-entry of CA    28. the (1, 1)-entry of CB
29. column 2 of AB            30. column 3 of BC
31. column 1 of CA            32. column 2 of CB


2.1 Matrix Multiplication

51. Let

(b) Suppose that u1 = 100 pupils buy a hot lunch and u2 = 200 pupils bring a bag lunch on the first day

      From City  From Suburbs
To City
To Suburbs

of school. Compute A(u1, u2)^T, A^2(u1, u2)^T, and A^3(u1, u2)^T.

A = [.85  .03]
    [.15  .97]

Explain the significance of each result. (c) To do this problem, you will need a calculator with matrix capabilities or access to computer software such as MATLAB. Using the notation of (b), compute A^100 (u1, u2)^T.

be the stochastic matrix used in Example 6 to predict population movement between the city and its suburbs. Suppose that 70% of city residents live in single-unit houses (as opposed to multiple-unit or apartment housing) and that 95% of suburb residents live in single-unit houses. (a) Find a 2 x 2 matrix B such that if v1 people live in the city and v2 people live in the suburbs, then

= [IV']. Explain the significance U!'2[~~']. U'l

of

thi' rcult. Now compute A result with

and compare thipen to the population in 50

years? (d) Find by trial and error a value of q for which the lizard population reaches a nonzero stable distribution. What is this stable distribution?

(c) If b = 4, what will happen to the population in 50 years? (d) Find by trial and error a value of b for which the vole population reaches a nonzero stable distribution. What is this stable distribution?
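The population questions in these exercises all amount to iterating a matrix: if A is the transition (stochastic) matrix and u the current distribution, then A^k u is the distribution after k steps. A sketch using the stochastic matrix of Exercise 51, with an assumed initial vector (the 100 and 200 are illustrative):

```python
import numpy as np

# Stochastic matrix from Exercise 51 (each column sums to 1).
A = np.array([[0.85, 0.03],
              [0.15, 0.97]])

u = np.array([100.0, 200.0])  # assumed initial distribution (illustrative)

# A^k u is the predicted distribution after k time steps.
for k in (1, 2, 100):
    print(k, np.linalg.matrix_power(A, k) @ u)

# The iterates approach the stable distribution: the eigenvector of A for
# eigenvalue 1, scaled to the same total; here that limit is (50, 250).
```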

2.2 Applications of Matrix Multiplication

(e) For the value of b found in (d), what happens to an initial population of 80 females of age less than 1, 200 of age 1, and 397 of age 2? (f) For what value of b does (A - I)x


Exercises 17 and 18 use the technique developed in the traffic flow application. 17. A certain medical foundation receives money from two sources: donations and interest earned on endowments. Of the donations received, 30% is used to defray the costs of raising funds; only 10% P9, P10, and P11 at the rates of a, b, and c gallons per minute, respectively. Find a matrix M such that the vector of outputs is given by

= 0 have a nonzero solution? (g) Let p =

be an arbitrary population vector, and

p}~

Po 0.8p9

let b have the value found in (f). Over time, p approaches a stable population vector q. Express q in terms of p2 and p3.

I'

I 0.4P,p.

10.2~ PIO

16. The maximum membership term for each member of the Service Club is 3 years. Each first-year and second-year member recruits one new person who begins the membership term in the following year. Of those in their first year of membership, 50% of the members resign, and of those in their second year of membership, 70% resign. (a) Write a 3 x 3 matrix A so that if xi is currently the number of Service Club members in their ith year of membership and yi is the number in their ith year of membership a year from now, then y = Ax. (b) Suppose that there are 60 Service Club members in their first year, 20 members in their second year, and 40 members in their third year of membership. Find the distribution of members for next year and for 2 years from now.

Figure 2.6

Exercises 19-23 are concerned with (0, 1)-matrices. 19. With the interpretation of a (0, 1)-matrix found in Practice Problem 3, suppose that we have five cities with the


CHAPTER 2 Matrices and Linear Transformations

associated matrix A given in the block form (see Section 2.1). Exercises 22 and 23 refer to the scheduling example.

where B is a 3 x 3 matrix, O1 is the 3 x 2 zero matrix, O2 is the 2 x 3 zero matrix, and C is a 2 x 2 matrix.
(a) What does the matrix A tell us about flight connections between the cities?
(b) Use block multiplication to obtain A^2, A^3, and A^k for any positive integer k.
(c) Interpret your results in terms of flights between the cities.

22. Suppose that student preference for a set of courses is given in the following table (Student vs. Course Number):

20. Recall the (0, 1)-matrix

A ~ [l

0

0 0

0

l

0 0I

0 0 0

(a) Give all pairs of courses that are desired by the most

in which the entries describe countries that maintain diplomatic relations with one another. (a) Which pairs of countries maintain diplomatic relations? (b) How many countries link country 1 with country 3? (c) Give an interpretation of the (1, 4)-entry of A^3. 21. Suppose that there is a group of four people and an associated 4 x 4 matrix A defined by

students. (b) Give all pairs of courses that are desired by the fewest students.

(c) Construct a matrix whose diagonal entries determine the number of students who prefer each course.

23. Let A be the matrix in E'ample 2.(a) Justify the following interpretation: For i # j. the (i .j )-entry of AA r ts the number of classes that are desired by both students i and j. (b) Show that the (I. 2)-cntry of AA r i> I and the (9. I)entry of AA T IS 3. (c) Interpret the answen. to (b) in the context of (a) and the data in the scheduhng example. (d) Give an interpretation of the diagonal entries of AA r.urcises 24 arrd 25 art' corrum~d wrtlr tilt' amhropology application.

a_ij = 1 if i ≠ j and person i likes person j, and a_ij = 0 otherwise.

We say that persons i and j are friends if they like each other; that is, if a_ij = a_ji = 1. Suppose that A is given by

A = [ ·  ·  ·  ·
      ·  ·  ·  ·
      ·  ·  ·  ·
      ·  ·  ·  · ].

(a) List all pairs of friends.
(b) Give an interpretation of the entries of A^2.
(c) Let B be the 4 x 4 matrix defined by

b_ij = 1 if persons i and j are friends, and b_ij = 0 otherwise.

Determine the matrix B. Is B a symmetric matrix?
(d) A clique is a set of three or more people, each of whom is friendly with all the other members of the set. Show that person i belongs to a clique if and only if the (i, i)-entry of B^3 is positive.
(e) Use computer software or a calculator that performs matrix arithmetic to count the cliques that exist among the four friends.

2.2 Applications of Matrix Multiplication   121

24. Recall the binomial formula for scalars a and b and any positive integer k:

(a + b)^k = a^k + k a^(k-1) b + ... + [k!/(i!(k - i)!)] a^(k-i) b^i + ... + b^k,

where i! (i factorial) is given by i! = 1 · 2 ··· (i - 1) · i.
(a) Suppose that A and B are m x m matrices that commute; that is, AB = BA. Prove that the binomial formula holds for A and B when k = 2 and k = 3. Specifically, prove that

(A + B)^2 = A^2 + 2AB + B^2    and    (A + B)^3 = A^3 + 3A^2 B + 3AB^2 + B^3.

(b) Use mathematical induction to extend the results in part (a) to any positive integer k.
(c) For the matrices A and B on page 117, show that B^3 = O and that A and B commute. Then use (b) to prove …

Exercises 24 and 25 are concerned with the anthropology application.

25. (a) Show that A is a (0, 1)-matrix.
(b) Give an interpretation of what it means for the term a_32 a_21 to equal one.
(c) Show that the (3, 1)-entry of A^2 represents the number of ways that person 3 can send a message to person 1 in two stages.
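The clique test in part (d) is easy to explore numerically. The friendship matrix below is a hypothetical example (the book's matrix is not legible here), used only to illustrate checking the diagonal of B^3 with numpy:

```python
import numpy as np

# Hypothetical symmetric friendship matrix B for four people:
# persons 1, 2, 3 are mutual friends; person 4 is friends with person 1 only.
B = np.array([
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
])

B3 = np.linalg.matrix_power(B, 3)
# The (i, i)-entry of B^3 counts closed walks of length 3 starting at i,
# which is twice the number of triangles of mutual friends through person i.
in_clique = [i + 1 for i in range(4) if B3[i, i] > 0]
print(in_clique)
```

For this hypothetical B, persons 1, 2, and 3 form the only clique, and person 4's diagonal entry of B^3 is zero.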

33. [  1  2 -1  ·
      -1 -2  3  ·
       7 -3  ·  ·
       ·  · -3 -8 ]

… by 2p3, where p3 is the third column of A^-1.

55. Prove directly that statement (a) in the Invertible Matrix Theorem implies statements (e) and (h).

34. [  0 -1 -1  ·
      -1 -1  1  ·
       5  2 -2  ·
       ·  · -3  · ]

In Exercises 56-63, a system of linear equations is given. (a) Write each system as a matrix equation Ax = b. (b) Show that A is invertible, and find A^-1. (c) Use A^-1 to solve each system.

In Exercises 35-54, determine whether the statements are true or false.

35. A matrix is invertible if and only if its reduced row echelon form is an identity matrix.

36. For any two matrices A and B, if AB = I_n for some positive integer n, then A is invertible.

37. For any two n x n matrices A and B, if AB = I_n, then BA = I_n.

38. For any two n x n matrices A and B, if AB = I_n, then A is invertible and A^-1 = B.

39. If an n x n matrix has rank n, then it is invertible.

40. If an n x n matrix is invertible, then it has rank n.

41. A square matrix is invertible if and only if its reduced row echelon form has no zero row.

42. If A is an n x n matrix such that the only solution of Ax = 0 is x = 0, then A is invertible.

43. An n x n matrix is invertible if and only if its columns are linearly independent.

44. An n x n matrix is invertible if and only if its rows are linearly independent.

45. If a square matrix has a column consisting of all zeros, then it is not invertible.

46. If a square matrix has a row consisting of all zeros, then it is not invertible.

47. Any invertible matrix can be written as a product of elementary matrices.

48. If A and B are invertible n x n matrices, then A + B is invertible.

49. If A is an n x n matrix such that Ax = b is consistent for every b in R^n, then Ax = b has a unique solution for every b in R^n.

50. If A is an invertible n x n matrix and the reduced row echelon form of [A B] is [I_n C], then C = B^-1 A.

51. If the reduced row echelon form of [A I_n] is [R B], then B = A^-1.

56. x1 + 2x2 = 9
    2x1 + 3x2 = 3

57. -x1 - 3x2 = -6
    2x1 + 5x2 = 4

58. x1 + x2 + x3 = 4
    2x1 + x2 + 4x3 = 7
    3x1 + 2x2 + 6x3 = -1

59. -x1 + x3 = -4
    x1 + 2x2 - 2x3 = 3
    2x1 - x2 + x3 = -6

60. x1 + x2 + x3 = -5
    2x1 + x2 + x3 = -3
    3x1 + 2x2 + x3 = ·

61. 2x1 + 3x2 - 4x3 = ·
    x2 + 2x3 = 5
    -x2 + x3 = 3

62. -x1 - x2 + x3 + x4 = 3
    2x1 - x2 + x3 = -2
    x1 + x2 + x4 = 4

63. x1 - 2x2 + x3 + x4 = 4
    -x1 + x2 + x3 = -2
    -3x1 + x2 + 2x3 + x4 = 1
    x3 + x4 = -1

64. Let A = [ ·  ·
              ·  · ].
(a) Verify that A^2 - 3A + I_2 = O.
(b) Let B = 3I_2 - A. Use B to prove that A is invertible and B = A^-1.

65. Let A = [ ·  ·  ·
              ·  ·  ·
              ·  ·  · ].
(a) Verify that A^3 - 5A^2 + 9A - 4I_3 = O.
(b) Let B = (1/4)(A^2 - 5A + 9I_3). Use B to prove that A is invertible and B = A^-1.
(c) Explain how B in (b) can be obtained from the equation in (a).

66. Let A be an n x n matrix such that A^2 = I_n. Prove that A is invertible and A^-1 = A.
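The mechanism behind Exercises 64-66 can be checked numerically. The matrix below is a hypothetical 2 x 2 example satisfying A^2 - 3A + I_2 = O (any 2 x 2 matrix with trace 3 and determinant 1 works; the book's entries are not legible here):

```python
import numpy as np

# Hypothetical A with trace 3 and determinant 1, so A^2 - 3A + I = O.
A = np.array([[2, 1],
              [1, 1]])
I = np.eye(2, dtype=int)

assert np.array_equal(A @ A - 3 * A + I, np.zeros((2, 2), dtype=int))

# Rearranging A(3I - A) = I shows that B = 3I - A is the inverse of A.
B = 3 * I - A
print(A @ B)
```

The same factoring trick gives Exercise 65's B = (1/4)(A^2 - 5A + 9I_3): multiply A^3 - 5A^2 + 9A - 4I_3 = O through by A^-1 conceptually, i.e., factor A out of the first three terms.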

67. Let A be an n x n matrix such that A^k = I_n for some positive integer k.
(a) Prove that A is invertible.
(b) Express A^-1 as a power of A.

68. Prove that if A is an m x n matrix and P is an invertible m x m matrix, then rank PA = rank A. Hint: Apply Exercise 62 of Section 2.3 to PA and to P^-1(PA).

69. Let B be an n x p matrix. Prove the following:
(a) For any m x n matrix R in reduced row echelon form, rank RB <= rank R. Hint: Use the definition of the reduced row echelon form.
(b) If A is any m x n matrix, then rank AB <= rank A. Hint: Use Exercises 68 and 69(a).

70. Prove that if A is an m x n matrix and Q is an invertible n x n matrix, then rank AQ = rank A. Hint: Apply the result of Exercise 69(b) to AQ and (AQ)Q^-1.

71. Prove that for any matrix A, rank A^T = rank A. Hint: Use Exercise 70, Exercise 87 of Section 2.3, and Theorems 2.2 and 2.3.

72. Use the Invertible Matrix Theorem to prove that for any subset S of n vectors in R^n, the set S is linearly independent if and only if S is a generating set for R^n.

73. Let R and S be matrices with the same number of rows, and suppose that the matrix [R S] is in reduced row echelon form. Prove that R is in reduced row echelon form.

74. Consider the system of linear equations Ax = b, where

A = [ ·  ·  ·
      ·  ·  ·
      ·  ·  · ]    and    b = [ ·
                                ·
                                · ].

(a) Solve this matrix equation by using Gaussian elimination.
(b) On a TI-85 calculator, the value of A^-1 b is given as [ · ]. But this is not a solution of Ax = b. Why?

75. Repeat Exercise 74 with A = [ · ] and b = [ · ].

76. Repeat Exercise 74 with A = [ · ] and b = [ · ].

77. In Exercise 19(c) of Section 1.5, how much is required in additional inputs from each sector to increase the net production of oil by $3 million?

78. In Exercise 20(c) of Section 1.5, how much is required in additional inputs from each sector to increase net production in the nongovernment sector by $1 million?

79. In Exercise 21(b) of Section 1.5, how much is required in additional inputs from each sector to increase the net production of services by $40 million?

80. In Exercise 22(b) of Section 1.5, how much is required in additional inputs from each sector to increase the net production of manufacturing by $24 million?

81. Suppose that the input-output matrix C for an economy is such that I - C is invertible and every entry of (I - C)^-1 is positive. If the net production of one particular sector of the economy must be increased, how does this affect the gross production of the economy?

82. Use matrix transposes to modify the algorithm for computing A^-1 B to devise an algorithm for computing AB^-1, and justify your method.

83. Let A be an m x n matrix with reduced row echelon form R.
(a) Prove that if rank A = m, then there is a unique m x m matrix P such that PA = R. Furthermore, P is invertible. Hint: For each j, let u_j denote the jth pivot column of A. Prove that the m x m matrix U = [u_1 u_2 ... u_m] is invertible. Now let PA = R, and show that P = U^-1.
(b) Prove that if rank A < m, then there is more than one invertible m x m matrix P such that PA = R. Hint: There is an elementary m x m matrix E, distinct from I_m, such that ER = R.

Let A and B be n x n matrices. We say that A is similar to B if B = P^-1 AP for some invertible matrix P. Exercises 84-88 are concerned with this relation.

84. Let A, B, and C be n x n matrices. Prove the following statements:
(a) A is similar to A.
(b) If A is similar to B, then B is similar to A.
(c) If A is similar to B and B is similar to C, then A is similar to C.

85. Let A be an n x n matrix.
(a) Prove that if A is similar to I_n, then A = I_n.
(b) Prove that if A is similar to O, the n x n zero matrix, then A = O.
(c) Suppose that B = cI_n for some scalar c. (The matrix B is called a scalar matrix.) What can you say about A if A is similar to B?

86. Suppose that A and B are n x n matrices such that A is similar to B. Prove that if A is invertible, then B is invertible, and A^-1 is similar to B^-1.

87. Suppose that A and B are n x n matrices such that A is similar to B. Prove that A^T is similar to B^T.

88. Suppose that A and B are n x n matrices such that A is similar to B. Prove that rank A = rank B. Hint: Use Exercises 68 and 70.

2.4 The Inverse of a Matrix

In Exercises 89-92, use either a calculator with matrix capabilities or computer software such as MATLAB to solve each problem. Exercises 89-91 refer to the matrix

A = [ ·  ·  ·  ·
      ·  ·  ·  ·
      ·  ·  ·  ·
      ·  ·  ·  · ].

89. Show that A is invertible by computing its reduced row echelon form and using Theorem 2.5.

90. Show that A is invertible by solving the system Ax = 0 and using the Invertible Matrix Theorem.

91. Show that A is invertible by computing its rank and using the Invertible Matrix Theorem.

92. Show that the matrix

P = [ ·  ·  ·  ·
      ·  ·  ·  ·
      ·  ·  ·  ·
      ·  ·  ·  · ]

is invertible. Illustrate Exercise 68 by creating several random 4 x 4 matrices A and showing that rank PA = rank A.
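Exercise 92's experiment might be carried out as below. This is a numpy sketch; the particular P is an arbitrary invertible matrix chosen for illustration (unit lower triangular, so its determinant is 1), not the matrix printed in the exercise.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary invertible 4x4 matrix P: unit lower triangular, so det P = 1.
P = np.tril(rng.integers(-3, 4, (4, 4)), k=-1) + np.eye(4)

# rank PA = rank A for every A, since P is invertible (Exercise 68).
for _ in range(5):
    A = rng.integers(-5, 6, (4, 4))          # a random 4x4 integer matrix
    assert np.linalg.matrix_rank(P @ A) == np.linalg.matrix_rank(A)
print("rank PA == rank A in every trial")
```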

SOLUTIONS TO THE PRACTICE PROBLEMS

1. The reduced row echelon form of A is [ · ]. Since this matrix is not I_3, Theorem 2.5 implies that A is not invertible. The reduced row echelon form of B is I_3, so, as Theorem 2.5 implies, B is invertible. To compute B^-1, we find the reduced row echelon form of [B I_3], which has the form [I_3 B^-1]; this yields

B^-1 = [ · ].

2. (a) The matrix form of the given system of linear equations is [ · ] x = [ · ].
(b) Because the reduced row echelon form of the 3 x 3 matrix in (a) is I_3, the matrix is invertible.
(c) The solution of Ax = b is x = A^-1 b = [ · ]. Thus the unique solution of the given system of linear equations is x1 = 5, x2 = -1, and x3 = -2.

3. (a) First observe that x1 = A^-1 b. If x2 is the solution of Ax = b + c, then

x2 = A^-1(b + c) = A^-1 b + A^-1 c = x1 + A^-1 c.

(b) In the context of (a), let A = I_3 - C, where C is the input-output matrix in Example 2 of Section 1.5, and b = d, the demand vector used in that example. The increase in the demands is given by the vector

c = [ 5
      4
      2 ],

and therefore the increase in the gross production is given by

(I_3 - C)^-1 c = [ 1.3   0.475  0.25
                   0.6   1.950  0.50
                   0.5   0.375  1.25 ] [ 5
                                         4
                                         2 ] = [  8.9
                                                 11.8
                                                  6.5 ].

2.5* PARTITIONED MATRICES AND BLOCK MULTIPLICATION

Suppose that we wish to compute A^3, where

A = [  1  0  0  0
       0  1  0  0
       6  8  5  0
      -7  9  0  5 ].

This is a laborious process because A is a 4 x 4 matrix. However, there is another approach to matrix multiplication that simplifies this calculation. We start by writing A as an array of 2 x 2 submatrices:

A = [  1  0 | 0  0
       0  1 | 0  0
      ------+------
       6  8 | 5  0
      -7  9 | 0  5 ].

We can then write A more compactly as

A = [ I_2    O
       B    5I_2 ],    where    I_2 = [ 1  0
                                        0  1 ]    and    B = [  6  8
                                                               -7  9 ].

Next, we compute A^2 by the row-column rule, treating I_2, O, B, and 5I_2 as if they were scalar entries of A:

A^2 = [I_2 O; B 5I_2][I_2 O; B 5I_2] = [I_2 I_2 + O B,  I_2 O + O(5I_2); B I_2 + (5I_2)B,  B O + (5I_2)(5I_2)] = [I_2 O; 6B 25I_2].

Finally,

A^3 = A^2 A = [I_2 O; 6B 25I_2][I_2 O; B 5I_2] = [I_2 O; 6B + 25B 125I_2] = [I_2 O; 31B 125I_2].

This method for computing A^3 requires only that we multiply the 2 x 2 matrix B by the scalar 31 and I_2 by 5^3 = 125. We can break up any matrix by drawing horizontal and vertical lines within the matrix, which divides the matrix into an array of submatrices called blocks. The resulting array is called a partition of the matrix, and the process of forming these blocks is called partitioning.

* This section can be omitted without loss of continuity.
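The block computation of A^3 can be verified numerically. This numpy sketch (not part of the text) assembles A from its blocks, computes A^3 directly, and compares it with the block formula:

```python
import numpy as np

I2 = np.eye(2, dtype=int)
B = np.array([[6, 8],
              [-7, 9]])
O = np.zeros((2, 2), dtype=int)

# A = [[I2, O], [B, 5*I2]] assembled from its 2x2 blocks.
A = np.block([[I2, O],
              [B, 5 * I2]])

# Direct computation of A^3 ...
direct = np.linalg.matrix_power(A, 3)
# ... versus the block formula A^3 = [[I2, O], [31*B, 125*I2]].
blockwise = np.block([[I2, O],
                      [31 * B, 125 * I2]])

assert np.array_equal(direct, blockwise)
print(direct)
```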

While there are many ways to partition any given matrix, there is often a natural partition that simplifies matrix multiplication. For example, the matrix

A = [ 2  0  1 -1
      0  2  2  3
      1  3  0  0 ]

can be written as

A = [ 2  0 | 1 -1
      0  2 | 2  3
      -----+------
      1  3 | 0  0 ].

The horizontal and vertical lines partition A into an array of four blocks. The first row of the partition consists of the 2 x 2 matrices

2I_2 = [ 2  0
         0  2 ]    and    [ 1 -1
                            2  3 ],

and the second row consists of the 1 x 2 matrices [1 3] and O = [0 0]. We can also partition A as

A = [ 2  0 | 1 -1
      0  2 | 2  3
      1  3 | 0  0 ].

In this case, there is only one row and two columns. The blocks of this row are the 3 x 2 matrices

[ 2  0
  0  2
  1  3 ]    and    [ 1 -1
                     2  3
                     0  0 ].

The first partition of A contains the submatrices 2I_2 and O, which are easily multiplied by other matrices, so this partition is usually more desirable. As shown in the preceding example, partitioning matrices appropriately can simplify matrix multiplication. Two partitioned matrices can be multiplied by treating the blocks as if they were scalars, provided that the products of the individual blocks are defined.

Example 1

Let A and B be the partitioned matrices

A = [ ·  · | ·  ·
      ·  · | ·  ·
      1  0 | 3 -1 ]    and    B = [ 1  0 | 3
                                    ·  · | ·
                                    -----+---
                                    2 -1 | ·
                                    0  3 | · ].

We can use the given partition to find the entries in the upper left block of AB by computing [ · ]. Similarly, we can find the upper right block of AB by computing [ · ]. We obtain the lower left block of AB by computing

[1 0][1 0; · ·] + [3 -1][2 -1; 0 3] = [1 0] + [6 -6] = [7 -6].

Finally, we obtain the lower right block of AB by computing

[1 0][3; ·] + [3 -1][·; ·] = [3] + [5] = [8].

Putting these blocks together, we have

AB = [ ·  ·  ·
       ·  ·  ·
       7 -6  8 ].

In general, we have the following rule:

Block Multiplication

Suppose two matrices A and B are partitioned into blocks so that the number of blocks in each row of A is the same as the number of blocks in each column of B. Then the matrices can be multiplied according to the usual rules for matrix multiplication, treating the blocks as if they were scalars, provided that the individual products are defined.
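The rule can be sketched in code. The helper below (a numpy illustration, with hypothetical names) multiplies two matrices given as 2 x 2 arrays of compatible blocks, treating the blocks as scalars, and the result is checked against ordinary multiplication:

```python
import numpy as np

def multiply_2x2_blocks(Ab, Bb):
    """Multiply two matrices given as 2x2 arrays of compatible blocks,
    treating the blocks as if they were scalar entries."""
    return np.block([
        [Ab[0][0] @ Bb[0][0] + Ab[0][1] @ Bb[1][0],
         Ab[0][0] @ Bb[0][1] + Ab[0][1] @ Bb[1][1]],
        [Ab[1][0] @ Bb[0][0] + Ab[1][1] @ Bb[1][0],
         Ab[1][0] @ Bb[0][1] + Ab[1][1] @ Bb[1][1]],
    ])

rng = np.random.default_rng(1)
A = rng.integers(-5, 6, (5, 7))
B = rng.integers(-5, 6, (7, 4))
# Partition A after row 2 and column 3; B's row split must match A's
# column split, so B is partitioned after row 3 and column 2.
Ab = [[A[:2, :3], A[:2, 3:]], [A[2:, :3], A[2:, 3:]]]
Bb = [[B[:3, :2], B[:3, 2:]], [B[3:, :2], B[3:, 2:]]]

assert np.array_equal(multiply_2x2_blocks(Ab, Bb), A @ B)
```

The key compatibility condition appears in the comment: the column partition of A must agree with the row partition of B so that every block product is defined.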

TWO ADDITIONAL METHODS FOR COMPUTING A MATRIX PRODUCT

Given two matrices A and B such that the product AB is defined, we have seen how to compute the product by using blocks obtained from partitions of A and B. In this subsection, we look at two specific ways of partitioning A and B that lead to two new methods of computing their product.

By rows  Given an m x n matrix A and an n x p matrix B, we partition A into an m x 1 array of row vectors a1', a2', ..., am' and regard B as a single block in a 1 x 1 array. In this case,

AB = [ a1'        [ a1'B
       a2'          a2'B
       ...   B =    ...
       am' ]        am'B ].        (7)

Thus the rows of AB are the products of the rows of A with B. More specifically, the ith row of AB is the matrix product of the ith row of A with B.
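In numpy terms, the by-rows rule says that each row of AB is a vector-matrix product. This illustrative sketch (not part of the text) stacks the row products and compares them with the full product:

```python
import numpy as np

A = np.array([[1, 2, -1],
              [-1, 1, 3]])
B = np.array([[-2, 1, 0],
              [1, -3, 4],
              [1, -1, -1]])

# Row i of AB equals (row i of A) times B.
by_rows = np.vstack([A[i, :] @ B for i in range(A.shape[0])])
assert np.array_equal(by_rows, A @ B)
print(by_rows)
```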

Example 2

Let

A = [  1  2 -1
      -1  1  3 ]    and    B = [ -2  1  0
                                  1 -3  4
                                  1 -1 -1 ].

Since

a1'B = [1 2 -1] [ -2  1  0
                   1 -3  4
                   1 -1 -1 ] = [-1 -4 9]

and

a2'B = [-1 1 3] [ -2  1  0
                   1 -3  4
                   1 -1 -1 ] = [6 -7 1],

we have

AB = [ -1 -4  9
        6 -7  1 ].

It is interesting to compare the method of computing a product by rows with the definition of a matrix product, which can be thought of as the method of computing a product by columns.

By outer products  Another method to compute a matrix product is to partition A into columns and B into rows. Suppose that a1, a2, ..., an are the columns of an m x n matrix A, and b1', b2', ..., bn' are the rows of an n x p matrix B. Then block multiplication gives

AB = [a1 a2 ... an] [ b1'
                      b2'
                      ...
                      bn' ] = a1 b1' + a2 b2' + ... + an bn'.        (8)

Thus AB is the sum of matrix products of each column of A with the corresponding row of B. The terms ai bi' in equation (8) are matrix products of two vectors, namely, column i of A and row i of B. Such products have an especially simple form. In order to present this result in a more standard notation, we consider the matrix product of v and w^T, where

v = [ v1             w = [ w1
      v2       and         w2
      ...                  ...
      vm ]                 wn ].

It follows from equation (7) that

v w^T = [ v1 w^T
          v2 w^T
          ...
          vm w^T ].

Thus the rows of the m x n matrix v w^T are all multiples of w^T. It follows (see Exercise 52) that the rank of the matrix v w^T is 1 if both v and w are nonzero vectors. Products of the form v w^T, where v is in R^m (regarded as an m x 1 matrix) and w is in R^n (regarded as an n x 1 matrix), are called outer products. In this terminology,

equation (8) states that the product of an m x n matrix A and an n x p matrix B is the sum of n matrices of rank at most 1, namely, the outer products of the columns of A with the corresponding rows of B. In the special case that A is a 1 x n matrix, so that A = [a1 a2 ... an] is a row vector, the product in equation (8) is the linear combination

a1 b1' + a2 b2' + ... + an bn'

of the rows of B with the corresponding entries of A as the coefficients. For example,

[2 3] [ -1  4
         5  0 ] = 2[-1 4] + 3[5 0] = [13 8].
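Both facts, the rank-1 nature of an outer product and the row-combination special case just shown, are easy to check numerically (numpy sketch, not part of the text; the vectors v and w are arbitrary):

```python
import numpy as np

v = np.array([2, -1, 3])      # a nonzero vector in R^3
w = np.array([1, 4, -2])      # a nonzero vector in R^3
M = np.outer(v, w)            # the outer product v w^T
# Every row of M is a multiple of w^T, so M has rank 1.
assert np.linalg.matrix_rank(M) == 1

# The 1 x n special case: a row vector times B is a linear
# combination of the rows of B.
B = np.array([[-1, 4],
              [5, 0]])
row = np.array([2, 3]) @ B
assert np.array_equal(row, 2 * B[0] + 3 * B[1])
print(row)
```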

Example 3

Use outer products to express the product AB in Example 2 as a sum of matrices of rank 1.

Solution  First form the outer products in equation (8) to obtain three matrices of rank 1:

a1 b1' = [  1 ] [-2 1 0] = [ -2  1  0
           -1 ]               2 -1  0 ],

a2 b2' = [ 2 ] [1 -3 4] = [ 2 -6  8
           1 ]              1 -3  4 ],

and

a3 b3' = [ -1 ] [1 -1 -1] = [ -1  1  1
            3 ]                3 -3 -3 ].

Then

a1 b1' + a2 b2' + a3 b3' = [ -1 -4  9
                              6 -7  1 ] = AB,

as guaranteed by equation (8).
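Equation (8) can be checked numerically for any conformable pair of matrices. This numpy sketch (not part of the text) uses random integer matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.integers(-4, 5, (2, 3))
B = rng.integers(-4, 5, (3, 4))

# AB as the sum of outer products of the columns of A
# with the corresponding rows of B -- equation (8).
total = sum(np.outer(A[:, i], B[i, :]) for i in range(A.shape[1]))
assert np.array_equal(total, A @ B)

# Each term is a matrix of rank at most 1.
assert all(np.linalg.matrix_rank(np.outer(A[:, i], B[i, :])) <= 1
           for i in range(A.shape[1]))
```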

We summarize the two new methods for computing the matrix product AB.

Two Methods of Computing a Matrix Product AB

We assume here that A is an m x n matrix with rows a1', a2', ..., am' and columns a1, a2, ..., an, and B is an n x p matrix with rows b1', b2', ..., bn'.

1. By rows  The ith row of AB is obtained by multiplying the ith row of A by B; that is, ai'B.
2. By outer products  The matrix AB is a sum of matrix products of each column of A with the corresponding row of B. Symbolically,

AB = a1 b1' + a2 b2' + ... + an bn'.

EXERCISES

In Exercises 1-12, compute the product of each partitioned matrix using block multiplication.

1. [-1 3 1] [ · ]
2. [ · ] [ · ]
3. [ · ] [ · ]
4. [ · ] [ · ]
5. [ · ] [ · ]
6. [ · ] [ · ]
7. [ · ] [ · ]

In Exercises 13-20, compute the indicated row of the given product without computing the entire matrix. Let

A = [ ·  ·  ·          B = [ ·  ·          C = [ 2  ·  ·
      ·  ·  ·                ·  ·                4  ·  · ].
      ·  ·  · ],             ·  · ],

13. row 1 of AB      14. row 1 of CA
15. row 2 of CA      16. row 2 of BC
17. row 3 of BC      18. row 2 of B^T A
19. row 2 of A^2     20. row 3 of A^2

In Exercises 21-28, use the matrices A, B, and C from Exercises 13-20.

21. Use outer products to represent AB as the sum of 3 matrices of rank 1.
22. Use outer products to represent BC as the sum of 2 matrices of rank 1.
23. Use outer products to represent CB as the sum of 3 matrices of rank 1.
24. Use outer products to represent CA as the sum of 3 matrices of rank 1.
25. Use outer products to represent B^T A as the sum of 3 matrices of rank 1.
26. Use outer products to represent AC^T as the sum of 3 matrices of rank 1.