
OHSx XM511 Linear Algebra: Offline Homework Solutions

Lecture 14 (§2.1)

In Problems 1-19, a set S of objects is given together with a definition for vector addition and scalar multiplication. Determine whether S is a vector space. If not, identify at least one property that fails to hold.

1. S = { [a, b; c, d] ∈ M2×2 | c = 1 } under standard matrix addition and scalar multiplication.

Solution: No. This set contains no zero vector.

2. The set S of all 2 × 2 real matrices A = [aij] with a11 = −a22, under standard matrix addition and scalar multiplication.

Solution: Yes, S is a vector space.

3. S = { [a, b] ∈ R² | a + b = 2 } under standard matrix addition and scalar multiplication.

Solution: No, S is not a vector space. The zero vector [0, 0] is not in S since 0 + 0 = 0 ≠ 2.

4. S = { [a, b] ∈ R² | a = b } under standard matrix addition and scalar multiplication.

Solution: Yes, S is a vector space.

5. All 2-tuples representing points in the first and third quadrants of the plane, including the origin, under standard addition and scalar multiplication for 2-tuples.

Solution: No. S is not closed under vector addition. For instance, the vectors (2, 1) and (−1, −2) are in the set, but their sum (2, 1) ⊕ (−1, −2) = (1, −1) is not.

6. All 2-tuples representing points that are on the straight line y = −2x, under standard addition and scalar multiplication for 2-tuples.

Solution: Yes, S is a vector space.

7. All 2-tuples representing points in the plane that are on the straight line y = −2x + 1, under standard addition and scalar multiplication for 2-tuples.

Solution: No. S contains no zero vector, as the point (0, 0) does not lie on the line.

8. The set consisting of the single element 0 with vector addition and scalar multiplication defined as 0 ⊕ 0 = 0 and α ⊙ 0 = 0 for any real number α.

Solution: Yes. All of the vector space properties are easily verified.


9. The set S of all solutions of the homogeneous set of linear equations Ax = 0, under standard matrix addition and scalar multiplication.

Solution: Yes, S is a vector space.

10. The set of all solutions of the set of linear equations Ax = b, b ≠ 0, under standard matrix addition and scalar multiplication.

Solution: No, S is not a vector space. It contains no zero vector since A0 = 0 ≠ b.

11. { p(t) ∈ P3 | p(0) = 0 } under standard addition and scalar multiplication of polynomials.

Solution: Yes, S is a vector space.

12. The set of all real two-dimensional row matrices [a, b] with standard matrix addition, but scalar multiplication defined as α ⊙ [a, b] = [0, αb].

Solution: Property (S3) is not satisfied since

1 ⊙ [1, 0] = [0, 1 · 0] = [0, 0] ≠ [1, 0].

(All other properties are satisfied.) So the set is not a vector space.
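The failing property can also be checked mechanically. Below is a small illustrative sketch (not part of the original solutions) that encodes this problem's nonstandard scalar multiplication and confirms that 1 ⊙ v does not return v:

```python
# Nonstandard scalar multiplication from Problem 12:
# alpha ⊙ [a, b] = [0, alpha * b]
def smul(alpha, v):
    a, b = v
    return [0, alpha * b]

v = [1, 0]
result = smul(1, v)   # Property (S3) requires 1 ⊙ v == v
print(result)         # [0, 0]
print(result == v)    # False: (S3) fails, so S is not a vector space
```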

13. S = { [x, x²] | x ∈ R } with standard matrix addition and scalar multiplication.

Solution: No. For instance, [1, 1] ∈ S, but

[1, 1] ⊕ [1, 1] = [2, 2],

which is not an element of S.

14. S = { [x, x²] | x ∈ R } with vector addition defined as

[x, x²] ⊕ [y, y²] = [x + y, (x + y)²],

and scalar multiplication defined as

α ⊙ [x, x²] = [αx, α²x²].

Solution: Yes, S is a vector space.

15. S = { [x, p(x)] | x ∈ R }, where p(x) is any polynomial, vector addition is defined as

[x, p(x)] ⊕ [y, p(y)] = [x + y, p(x + y)],

and scalar multiplication is defined as

α ⊙ [x, p(x)] = [αx, p(αx)].

Solution: Yes, S is a vector space.


16. The set of all real two-dimensional row matrices [a, b] with standard matrix addition, but scalar multiplication defined as α ⊙ [a, b] = [2αa, 2αb].

Solution: No. For instance, Property (S3) is not satisfied since

1 ⊙ [a, b] = [2a, 2b] ≠ [a, b]

as long as a and b are not both zero. Likewise, Property (S2) does not hold. For any scalars α and β,

α ⊙ (β ⊙ [a, b]) = α ⊙ [2βa, 2βb] = [4αβa, 4αβb],

while

(αβ) ⊙ [a, b] = [2αβa, 2αβb].

So unless αβ = 0 or a = b = 0,

α ⊙ (β ⊙ [a, b]) ≠ (αβ) ⊙ [a, b].

17. The set of all real three-dimensional row matrices [a, b, c] with standard scalar multiplication, but vector addition defined as

[a, b, c] ⊕ [x, y, z] = [a + x, b + y + 1, c + z].

Solution: Property (S4) is not satisfied. For instance,

2 ⊙ [1, 0, 0] ⊕ 1 ⊙ [1, 0, 0] = [2, 0, 0] ⊕ [1, 0, 0] = [3, 1, 0] ≠ [3, 0, 0] = 3 ⊙ [1, 0, 0].

(Property (S5) is not satisfied either.) So the set is not a vector space.

Note, however, that there is a vector that satisfies (A4), namely [0, −1, 0], but it is not [0, 0, 0].

18. The set of all real two-dimensional row matrices [a, b] with standard matrix addition, but scalar multiplication defined as α ⊙ [a, b] = [α²a, α²b].

Solution: No. Property (S4) is not satisfied. For instance,

1 ⊙ [1, 0] ⊕ 1 ⊙ [1, 0] = [1, 0] ⊕ [1, 0] = [2, 0],

while (1 + 1) ⊙ [1, 0] = [4, 0].

19. [G-III(4)] Prove that the set S = { [x, p(x)] | x ∈ R }, where p(x) is a polynomial, is a vector space with the standard addition and scalar multiplication of R² if and only if p(x) = ax for some scalar a.

Solution: First, suppose that p(x) = ax for some scalar a. We now verify the ten vector space properties. In what follows, [x, ax], [y, ay], and [z, az] are elements of S, and α and β are scalars.

Property (A1): [x, ax] ⊕ [y, ay] = [x + y, ax + ay] = [x + y, a(x + y)] ∈ S.


Property (A2): [x, ax] ⊕ [y, ay] = [x + y, ax + ay] = [y + x, ay + ax] = [y, ay] ⊕ [x, ax].

Property (A3):

[x, ax] ⊕ ([y, ay] ⊕ [z, az]) = [x, ax] ⊕ [y + z, ay + az]
= [x + y + z, ax + ay + az]
= [x + y, ax + ay] ⊕ [z, az]
= ([x, ax] ⊕ [y, ay]) ⊕ [z, az].

Property (A4): Clearly [0, 0] ∈ S, and

[0, 0] ⊕ [x, ax] = [0 + x, 0 + ax] = [x, ax].

Property (A5): If [x, ax] ∈ S, then [−x, −ax] ∈ S as well, and

[x, ax] ⊕ [−x, −ax] = [x − x, ax − ax] = [0, 0].

Property (S1): α ⊙ [x, ax] = [αx, αax] = [αx, a(αx)] ∈ S.

Property (S2):

α ⊙ (β ⊙ [x, ax]) = α ⊙ [βx, βax] = [αβx, αβax] = (αβ) ⊙ [x, ax].

Property (S3): 1 ⊙ [x, ax] = [1 · x, 1 · ax] = [x, ax].

Property (S4):

(α + β) ⊙ [x, ax] = [(α + β)x, (α + β)ax]
= [αx + βx, αax + βax]
= [αx, αax] ⊕ [βx, βax]
= α ⊙ [x, ax] ⊕ β ⊙ [x, ax].

Property (S5):

α ⊙ ([x, ax] ⊕ [y, ay]) = α ⊙ [x + y, ax + ay]
= [αx + αy, αax + αay]
= [αx, αax] ⊕ [αy, αay]
= α ⊙ [x, ax] ⊕ α ⊙ [y, ay].

Thus, S is a vector space.

Conversely, suppose that S is a vector space. Since S is closed under scalar multiplication, for any scalar α and any [x, p(x)] ∈ S,

α ⊙ [x, p(x)] = [αx, αp(x)] ∈ S,

so we must have αp(x) = p(αx). Plugging in x = 1, this gives us p(α) = p(1) · α. Hence, p(x) = ax, where a = p(1).


OHSx XM511 Linear Algebra: Offline Homework Solutions

Lecture 15 (§2.1)

1. For any scalar α and any vector u, prove that −(α ⊙ u) = (−α) ⊙ u = α ⊙ (−u).

Solution:

−(α ⊙ u) = (−1) ⊙ (α ⊙ u)    Theorem 5
= (−α) ⊙ u,    Property (S2)

which establishes the first step in the equality. Then,

(−α) ⊙ u = (α · (−1)) ⊙ u
= α ⊙ ((−1) ⊙ u)    Property (S2)
= α ⊙ (−u),    Theorem 5

which establishes the second step.

2. If u − v is shorthand for u ⊕ (−v), then prove that v ⊕ (u − v) = u.

Solution:

v ⊕ (u − v) = v ⊕ (u ⊕ (−v))
= (u ⊕ (−v)) ⊕ v    Property (A2)
= u ⊕ ((−v) ⊕ v)    Property (A3)
= u ⊕ 0    Property (A5)
= u.    Property (A4)

3. If 2u is shorthand for 2 ⊙ u, then prove that u ⊕ u = 2u.

Solution:

u ⊕ u = (1 ⊙ u) ⊕ (1 ⊙ u)    Property (S3)
= (1 + 1) ⊙ u    Property (S4)
= 2u.

4. Suppose that u⊕ u = 2v. Prove that u = v.

Solution: By Problem 3, u ⊕ u = 2u, so 2u = 2v. Adding −(2v) to both sides gives

2u⊕ [−(2v)] = 2v ⊕ [−(2v)]

By definition, the right side of this equation is zero. On the left side, we use the fact that −(2v) = 2(−v) to obtain

2u⊕ 2(−v) = 0

2(u⊕ (−v)) = 0.

By Theorem 7, u ⊕ (−v) = 0.


And finally, adding v to both sides gives us

(u⊕ (−v))⊕ v = 0⊕ v

u⊕ ((−v)⊕ v) = v

u⊕ 0 = v

u = v,

which is what we wanted to prove.

5. Prove that if u ≠ 0 and α ⊙ u = β ⊙ u, then α = β.

Solution: Suppose that u ≠ 0 and α ⊙ u = β ⊙ u. Adding −(β ⊙ u) to both sides, we obtain

α ⊙ u ⊕ [−(β ⊙ u)] = β ⊙ u ⊕ [−(β ⊙ u)]
α ⊙ u ⊕ (−β) ⊙ u = 0
(α − β) ⊙ u = 0.

By Theorem 7, this implies that α− β = 0, and hence α = β.

6. Suppose that V satisfies axioms (A1)–(A4) and (S1)–(S5). In addition, assume we know that for any u ∈ V, 0 ⊙ u = 0. Prove that V is a vector space.

Solution: To prove that V is a vector space, we need only show that it satisfies axiom (A5), i.e., that every element of V has an additive inverse. Consider any u ∈ V. Now, observe that

u ⊕ (−1) ⊙ u = 1 ⊙ u ⊕ (−1) ⊙ u
= (1 + (−1)) ⊙ u
= 0 ⊙ u
= 0.

Thus, (−1) ⊙ u is an additive inverse of u, which implies that V satisfies axiom (A5).

7. Suppose that V satisfies axioms (A1)–(A3) and (S1)–(S5). Furthermore, suppose that for each u ∈ V, there exists a unique vector z_u ∈ V such that u ⊕ z_u = u. Prove that V is a vector space.

Solution: We begin by showing that V satisfies axiom (A4) (i.e., has a zero vector). Take a fixed vector u ∈ V. Now, consider any other vector v ∈ V. First, we observe that

(u ⊕ v) ⊕ z_u = (u ⊕ z_u) ⊕ v = u ⊕ v.

Hence, by uniqueness, z_u = z_{u⊕v}. Similarly,

(u ⊕ v) ⊕ z_v = u ⊕ (v ⊕ z_v) = u ⊕ v.

Thus, z_v = z_{u⊕v} as well, which implies that z_u = z_v. Therefore, for any v ∈ V, v ⊕ z_u = v. This implies that z_u is a zero vector, which we will now denote 0.


Now, for any v ∈ V, observe that

v ⊕ (0 ⊙ v) = (1 ⊙ v) ⊕ (0 ⊙ v)
= (1 + 0) ⊙ v
= 1 ⊙ v
= v.

Thus, 0 ⊙ v = z_v = 0. At this point, we know that V satisfies axioms (A1)–(A4) and (S1)–(S5). So by Problem 6, this proves that V is a vector space.


OHSx XM511 Linear Algebra: Offline Homework Solutions

Lecture 16 (§2.2)

In Problems 1–12, determine whether each set is a vector space. If so, prove it. If not, find a property that fails to hold.

1. S = { [a, b] ∈ R² | b = −5a }.

Solution: Yes, S is a vector space. For any [a, −5a], [c, −5c] ∈ S and any scalars α and β,

α ⊙ [a, −5a] ⊕ β ⊙ [c, −5c] = [αa, −5αa] ⊕ [βc, −5βc] = [αa + βc, −5αa − 5βc] = [αa + βc, −5(αa + βc)].

And clearly, this is an element of S as well.

2. S = { [a, b] ∈ R² | b = a + 3 }.

Solution: No. S does not contain the zero vector, [0, 0].

3. S = { [a, b] ∈ R² | a = b = 0 }.

Solution: Yes. S consists only of the zero vector of R², and as we have seen before, this is a vector space.

4. S = { [a, b, c] ∈ R³ | a = b }.

Solution: Yes, S is a vector space. For any [a1, a1, c1], [a2, a2, c2] ∈ S and any scalars α and β,

α ⊙ [a1, a1, c1] ⊕ β ⊙ [a2, a2, c2] = [αa1, αa1, αc1] ⊕ [βa2, βa2, βc2] = [αa1 + βa2, αa1 + βa2, αc1 + βc2],

which is also an element of S.

5. S = { [a, b, c] ∈ R³ | c = a − b }.

Solution: Yes, S is a vector space. For any [a1, b1, c1], [a2, b2, c2] ∈ S and any scalars α and β,

α ⊙ [a1, b1, c1] ⊕ β ⊙ [a2, b2, c2] = [αa1 + βa2, αb1 + βb2, αc1 + βc2].

We know that c1 = a1 − b1 and c2 = a2 − b2, and therefore

αc1 + βc2 = α(a1 − b1) + β(a2 − b2) = (αa1 + βa2) − (αb1 + βb2).

Thus, α ⊙ [a1, b1, c1] ⊕ β ⊙ [a2, b2, c2] ∈ S.


6. S = { [a, b, c] ∈ R³ | c = ab }.

Solution: No. [1, 1, 1] ∈ S, but 2 ⊙ [1, 1, 1] = [2, 2, 2] ∉ S.

7. S = { [a, 2a, 0; 0, a, 2a] ∈ M2×3 | a ∈ R }.

Solution: Yes. For any [a, 2a, 0; 0, a, 2a], [b, 2b, 0; 0, b, 2b] ∈ S and any scalars α and β,

α ⊙ [a, 2a, 0; 0, a, 2a] ⊕ β ⊙ [b, 2b, 0; 0, b, 2b] = [αa + βb, 2αa + 2βb, 0; 0, αa + βb, 2αa + 2βb]
= [αa + βb, 2(αa + βb), 0; 0, αa + βb, 2(αa + βb)],

which is also in S.

8. S = { [a, a², a³; a², a, a²; a³, a², a] ∈ M3×3 | a ∈ R }.

Solution: No. For example, [1, 1, 1; 1, 1, 1; 1, 1, 1] ∈ S (take a = 1), but

2 ⊙ [1, 1, 1; 1, 1, 1; 1, 1, 1] = [2, 2, 2; 2, 2, 2; 2, 2, 2] ∉ S.

9. S = { A ∈ M2×2 | A is invertible }.

Solution: No. This set contains no zero vector, as [0, 0; 0, 0] is not invertible.

10. S = { A ∈ M2×2 | A is singular }.

Solution: No. For instance, [1, 0; 0, 0], [0, 0; 0, 1] ∈ S, but

[1, 0; 0, 0] ⊕ [0, 0; 0, 1] = [1, 0; 0, 1],

which is invertible and therefore not in S.

11. S = { at² + bt + c ∈ P2 | b = 0 }.

Solution: Yes. Let a1t² + c1, a2t² + c2 ∈ S, and let α and β be any scalars. Then

α ⊙ (a1t² + c1) ⊕ β ⊙ (a2t² + c2) = (αa1t² + αc1) ⊕ (βa2t² + βc2) = (αa1 + βa2)t² + (αc1 + βc2).


This is also an element of S, so it follows that S is a vector space.

12. S = { p(t) ∈ P2 | p(3) − 2p(1) = 4 }.

Solution: No. S does not contain the zero polynomial, and therefore has no zero vector.

13. (a) Prove that P2 is a subspace of P3.

(b) Prove that for n < m, Pn is a subspace of Pm.

Solution:

(a) P2 is a nonempty subset of P3, so it will suffice to show that it is closed under addition and scalar multiplication. Let p(t) = a2t² + a1t + a0 and q(t) = b2t² + b1t + b0 be any two polynomials in P2, and let α and β be any scalars. Then

αp(t) + βq(t) = α(a2t² + a1t + a0) + β(b2t² + b1t + b0) = (αa2 + βb2)t² + (αa1 + βb1)t + (αa0 + βb0),

which is also an element of P2. Thus, P2 is a subspace of P3.

(b) Pn is a nonempty subset of Pm, so it will suffice to show that it is closed under addition and scalar multiplication. Let p(t) = antⁿ + · · · + a1t + a0 and q(t) = bntⁿ + · · · + b1t + b0 be any two polynomials in Pn, and let α and β be any scalars. Then

αp(t) + βq(t) = α(antⁿ + · · · + a1t + a0) + β(bntⁿ + · · · + b1t + b0) = (αan + βbn)tⁿ + · · · + (αa1 + βb1)t + (αa0 + βb0),

which is also an element of Pn. Thus, Pn is a subspace of Pm.


OHSx XM511 Linear Algebra: Offline Homework Solutions

Lecture 17 (§2.2)

1. Determine whether u is a linear combination of v1 = [1, 0, 1] and v2 = [1, 1, 1].

(a) u = [3, 2, 3]  (b) u = [3, 3, 2]  (c) u = [0, 0, 0]  (d) u = [0, 1, 1].

Solution:

(a) Yes. (b) No. (c) Yes. (d) No.
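These answers can be verified computationally. The sketch below (an illustration, not part of the original solutions; the helper name is my own) row-reduces the augmented matrix [v1 v2 | u] with exact rational arithmetic and reports whether the system c1·v1 + c2·v2 = u is consistent:

```python
from fractions import Fraction

def is_linear_combination(u, vectors):
    """Decide whether u is a linear combination of the given vectors by
    row-reducing the augmented matrix whose columns are the vectors."""
    rows, cols = len(u), len(vectors)
    M = [[Fraction(v[i]) for v in vectors] + [Fraction(u[i])] for i in range(rows)]
    pivot_row = 0
    for col in range(cols):
        pivot = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row]
        for r in range(rows):
            if r != pivot_row and M[r][col] != 0:
                f = M[r][col] / M[pivot_row][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[pivot_row])]
        pivot_row += 1
    # Inconsistent iff some row reads [0 ... 0 | nonzero].
    return not any(all(x == 0 for x in row[:-1]) and row[-1] != 0 for row in M)

v1, v2 = [1, 0, 1], [1, 1, 1]
for u in ([3, 2, 3], [3, 3, 2], [0, 0, 0], [0, 1, 1]):
    print(u, is_linear_combination(u, [v1, v2]))
# (a) True, (b) False, (c) True, (d) False
```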

2. Determine whether the following matrices are linear combinations of

A1 = [1, 0; 0, 0],  A2 = [0, 1; 0, 0],  A3 = [1, 1; 1, 0].

(a) [0, 1; 1, 1]  (b) [1, 2; 3, 0]  (c) [1, 1; 0, 0]  (d) [0, 0; 0, 0]  (e) [2, 0; −2, 0]  (f) [0, 0; 1, −1].

Solution:

(a) No. (b) Yes. (c) Yes. (d) Yes. (e) Yes. (f) No.

3. Determine whether the following polynomials are linear combinations of {t³ + t², t³ + t, t² + t}.

(a) t³ + t² + t  (b) 2t³ − t  (c) 5t  (d) 2t² + 1.

Solution:

(a) Yes. (b) Yes. (c) Yes. (d) No.

4. Show that if u is a linear combination of the vectors v1, v2, . . . , vn and if each vi (i = 1, 2, . . . , n) is a linear combination of the vectors w1, w2, . . . , wm, then u can also be expressed as a linear combination of w1, w2, . . . , wm.

Solution: Suppose that u = ∑_{i=1}^{n} ci vi and vi = ∑_{j=1}^{m} aij wj. Then

u = ∑_{i=1}^{n} ci vi = ∑_{i=1}^{n} ci ( ∑_{j=1}^{m} aij wj ) = ∑_{i=1}^{n} ∑_{j=1}^{m} ci aij wj.

Now, we can exchange the order of summation, which gives us

u = ∑_{j=1}^{m} ∑_{i=1}^{n} ci aij wj.


But only the scalars depend on i, so we can rewrite this as

u = ∑_{j=1}^{m} ( ∑_{i=1}^{n} ci aij ) wj.

And this gives us u as a linear combination of w1,w2, . . . ,wm.

5. Let A be an n × n matrix, and let both x and y be n × 1 column matrices. Prove that if y = Ax, then y is a linear combination of the columns of A.

Solution: Let A = [a11, · · · , a1n; · · · ; an1, · · · , ann] and x = [x1; · · · ; xn]. Then

y = Ax = [a11x1 + · · · + a1nxn; · · · ; an1x1 + · · · + annxn] = x1[a11; · · · ; an1] + · · · + xn[a1n; · · · ; ann].

Thus, y is a linear combination of the columns of A.

6. Show that the set of solutions of the matrix equation Ax = 0, where A is a p × n matrix, is a subspace of Rⁿ.

Solution: Suppose that x1 and x2 are any two solutions to Ax = 0. Let α and β be any scalars. Then

A(αx1 + βx2) = αAx1 + βAx2 = α0 + β0 = 0.

Thus, αx1 + βx2 is a solution of Ax = 0 as well, so the set of solutions is a subspace.

7. Show that the set of solutions of the matrix equation Ax = b, where A is a p × n matrix, is not a subspace of Rⁿ when b ≠ 0.

Solution: Suppose that Ax = b. Then

A(2x) = 2Ax = 2b ≠ b.

Therefore, the set of solutions to Ax = b is not closed under scalar multiplication, so it is not a vector space.


8. [G-I] Prove that span{u,v} = span{u + v,u− v}.

Solution: Let x be an arbitrary (but fixed) element of span{u + v, u − v}. Then there exist scalars a and b such that

x = a(u + v) + b(u − v).

From this, we easily see that

x = (a + b)u + (a − b)v,

and hence x ∈ span{u, v}. Since x was arbitrary, this implies that span{u + v, u − v} ⊂ span{u, v}.

Now, we must prove the reverse inclusion, that span{u, v} ⊂ span{u + v, u − v}. Let y be an arbitrary element of span{u, v}. Then there exist scalars a and b such that

y = au + bv.

Let c = ½(a + b) and d = ½(a − b). Then we can easily see that a = c + d and b = c − d, so

y = (c + d)u + (c − d)v = c(u + v) + d(u − v).

Thus, y ∈ span{u + v, u − v}, which implies that span{u, v} ⊂ span{u + v, u − v}.

9. [G-I] Prove that span{u,v,0} = span{u,v}.

Solution: Since {u, v} ⊂ {u, v, 0}, it immediately follows that span{u, v} ⊂ span{u, v, 0}. Hence, we need only prove the reverse inclusion to finish the problem.

Let x be an arbitrary element of span{u, v, 0}. Then there exist scalars a, b, and c such that

x = au + bv + c0.

Since c0 = 0, this implies that

x = au + bv + 0

= au + bv.

Hence, x ∈ span{u,v}, which implies that span{u,v,0} ⊂ span{u,v}.

10. [G-I] Prove that span{u,v,w} = span{u + v,v + w,u + w}.

Solution: Let x be an arbitrary (but fixed) element of span{u + v, v + w, u + w}. Then there exist scalars a, b, and c such that

x = a(u + v) + b(v + w) + c(u + w).

From this, it is clear that

x = (a + c)u + (a + b)v + (b + c)w,

and hence x ∈ span{u, v, w}. Since x was arbitrary, this implies that span{u + v, v + w, u + w} ⊂ span{u, v, w}.

Now we need to prove the reverse inclusion, that span{u, v, w} ⊂ span{u + v, v + w, u + w}. Let y be an arbitrary (but fixed) element of span{u, v, w}. Then there exist scalars a, b, and c such that

y = au + bv + cw.


Let d = ½(a + b − c), e = ½(−a + b + c), and f = ½(a − b + c). Then

a = d+ f

b = d+ e

c = e+ f.

From this, we find that

y = (d+ f)u + (d+ e)v + (e+ f)w

= d(u + v) + e(v + w) + f(u + w).

Thus, y ∈ span{u + v, v + w, u + w}, which proves that span{u, v, w} ⊂ span{u + v, v + w, u + w}, as desired.


OHSx XM511 Linear Algebra: Offline Homework Solutions

Lecture 18 (§2.3)

In Problems 1–7, determine whether each set is linearly independent.

1. { [1; 0; 1], [1; 1; 1], [1; −1; 1] }.

Solution: This set is linearly dependent. For instance,

2[1; 0; 1] − [1; 1; 1] − [1; −1; 1] = [0; 0; 0].

2. { [1; 2; 3], [3; 2; 1], [2; 1; 3] }.

Solution: Let c1, c2, and c3 be scalars such that

c1[1; 2; 3] + c2[3; 2; 1] + c3[2; 1; 3] = [0; 0; 0].

Then c1, c2, and c3 must satisfy the system of equations

c1 + 3c2 + 2c3 = 0
2c1 + 2c2 + c3 = 0
3c1 + c2 + 3c3 = 0.

Using Gaussian elimination, we find that c1 = c2 = c3 = 0 is the only solution. Thus, this set is linearly independent.

3. { [1; 2; 3], [3; 2; 1], [2; 1; 3], [−1; 2; −3] }.

Solution: Let c1, c2, c3, and c4 be scalars such that

c1[1; 2; 3] + c2[3; 2; 1] + c3[2; 1; 3] + c4[−1; 2; −3] = [0; 0; 0].

Then c1, c2, c3, and c4 must satisfy the system of equations

c1 + 3c2 + 2c3 − c4 = 0
2c1 + 2c2 + c3 + 2c4 = 0
3c1 + c2 + 3c3 − 3c4 = 0.

The system is homogeneous with more unknowns than equations, so there are nontrivial solutions. Thus, this set is linearly dependent.


4. { [1, 2, 3], [−3, −6, −9] }.

Solution: This set is linearly dependent. For instance,

3[1, 2, 3] + [−3, −6, −9] = [0, 0, 0].

5. { [1, 0; 0, 0], [0, 1; 0, 0], [0, 0; 1, 0], [0, 0; 0, 1] }.

Solution: Let c1, c2, c3, and c4 be scalars such that

c1[1, 0; 0, 0] + c2[0, 1; 0, 0] + c3[0, 0; 1, 0] + c4[0, 0; 0, 1] = [0, 0; 0, 0].

Then

[c1, c2; c3, c4] = [0, 0; 0, 0],

which clearly implies that c1 = c2 = c3 = c4 = 0. Thus, this set is linearly independent.

6. { [1, 0; 1, 1], [1, 1; 1, 0], [1, 1; 0, 1], [0, 1; 1, 1] }.

Solution: Let c1, c2, c3, and c4 be scalars such that

c1[1, 0; 1, 1] + c2[1, 1; 1, 0] + c3[1, 1; 0, 1] + c4[0, 1; 1, 1] = [0, 0; 0, 0].

For this to hold, these scalars must satisfy the system of equations

c1 + c2 + c3 = 0
c2 + c3 + c4 = 0
c1 + c2 + c4 = 0
c1 + c3 + c4 = 0.

Using Gaussian elimination, we find that c1 = c2 = c3 = c4 = 0 is the only solution. Thus, this set is linearly independent.

7. { t³ + t², t³ − t², t³ − t, t³ + 1 }.

Solution: Let c1, c2, c3, and c4 be scalars such that

c1(t³ + t²) + c2(t³ − t²) + c3(t³ − t) + c4(t³ + 1) = 0.

Rearranging, we obtain

(c1 + c2 + c3 + c4)t³ + (c1 − c2)t² − c3t + c4 = 0.

For this to hold, each coefficient of the polynomial on the left must be zero. And that implies that c1 = c2 = c3 = c4 = 0, so the set is linearly independent.


8. Consider a linearly dependent set of three vectors {v1, v2, v3} ⊂ R³. Prove that if no two vectors lie on the same straight line, then v3 must be a linear combination of v1 and v2.

Solution: Since the set {v1, v2, v3} is linearly dependent, we can find scalars c1, c2, c3, not all zero, such that

c1v1 + c2v2 + c3v3 = 0.

If c3 = 0, then c1v1 + c2v2 = 0 with c1 and c2 not both zero, which would force v1 and v2 to lie on the same straight line. This cannot be the case, so c3 ≠ 0. Thus, we can solve the original equation for v3 to find that

v3 = −(c1/c3)v1 − (c2/c3)v2.

Therefore, v3 is a linear combination of v1 and v2.

9. Prove that if {v1, v2, v3} is linearly independent, then the set {u1, u2, u3}, where u1 = v1 − v2, u2 = v1 + v3, and u3 = v2 − v3, is also linearly independent.

Solution: Suppose that for some scalars c1, c2, and c3 we have

c1u1 + c2u2 + c3u3 = 0,

or equivalently

c1(v1 − v2) + c2(v1 + v3) + c3(v2 − v3) = 0.

Rearranging terms yields

(c1 + c2)v1 + (−c1 + c3)v2 + (c2 − c3)v3 = 0.

Since {v1, v2, v3} is linearly independent, all coefficients in the above equation must equal 0, so

c1 + c2 = 0

−c1 + c3 = 0

c2 − c3 = 0.

And solving this system, we find that c1 = c2 = c3 = 0. Thus, {u1, u2, u3} is linearly independent.

10. Prove that if {v1, v2, v3} is linearly independent, then the set {α1v1, α2v2, α3v3}, for any choice of the nonzero scalars α1, α2, and α3, is also linearly independent.

Solution: Suppose there exist scalars c1, c2, and c3 such that

c1(α1v1) + c2(α2v2) + c3(α3v3) = 0.

Then

(c1α1)v1 + (c2α2)v2 + (c3α3)v3 = 0.

Since {v1, v2, v3} is linearly independent, this implies that c1α1 = c2α2 = c3α3 = 0. And since each of the αi is nonzero, this implies that c1 = c2 = c3 = 0. Hence, {α1v1, α2v2, α3v3} is linearly independent.


11. Let A be an n × n matrix, and let {x1, x2, . . . , xk} and {y1, y2, . . . , yk} be two sets of n-dimensional column vectors having the property that Axi = yi.

(a) Prove that if {y1, y2, . . . , yk} is linearly independent, then {x1, x2, . . . , xk} is linearly independent.

(b) If {x1, x2, . . . , xk} is linearly independent, then is {y1, y2, . . . , yk} necessarily linearly independent? If so, prove it. If not, find a counterexample.

Solution:

(a) Suppose that for some scalars c1, c2, . . . , ck

c1x1 + c2x2 + · · ·+ ckxk = 0.

Multiplying both sides by A, we find

A(c1x1 + c2x2 + · · ·+ ckxk) = 0

c1Ax1 + c2Ax2 + · · ·+ ckAxk = 0

c1y1 + c2y2 + · · ·+ ckyk = 0.

Since the set {y1, y2, . . . , yk} is linearly independent, the last equation implies that c1 = c2 = · · · = ck = 0. And this, in turn, implies that {x1, x2, . . . , xk} is also linearly independent.

(b) No. For instance, let A = [1, 0; 0, 0], x1 = [1; 0], and x2 = [0; 1]. Then

y1 = [1; 0], y2 = [0; 0].

And since the set {y1, y2} contains the zero vector, it cannot be linearly independent.

And since the set {y1,y2} contains the zero vector, it cannot be linearly independent.

12. Prove that if a set of vectors S in a vector space V is linearly independent, then any subset of S is also linearly independent. (This is Theorem 5 from the textbook.)

Solution: Let S = {v1, . . . , vn} be linearly independent, and suppose that S′ ⊂ S. Without loss of generality, we may assume that S′ = {v1, . . . , vm} for some m ≤ n. Suppose that for some scalars c1, c2, . . . , cm we have

c1v1 + · · ·+ cmvm = 0.

The last equation can be rewritten as

c1v1 + · · ·+ cmvm + 0vm+1 + · · ·+ 0vn = 0.

Since S is linearly independent, this implies that c1 = · · · = cm = 0. Hence, S′ is linearly independent.

13. Prove that if a set of vectors S in a vector space V is linearly dependent, then any larger set containing S is also linearly dependent. (This is Theorem 6 from the textbook.)

Solution: Let S = {v1, . . . , vn} be linearly dependent, and suppose that T ⊃ S. Without loss of generality, we may write T as T = {v1, . . . , vk} for some k ≥ n, where vn+1, . . . , vk are additional vectors not contained in S.

Since S is linearly dependent, there exist scalars c1, . . . , cn, not all zero, such that

c1v1 + · · ·+ cnvn = 0.

Therefore,

c1v1 + · · · + cnvn + 0vn+1 + · · · + 0vk = 0,

which implies that T is linearly dependent as well.


14. [G-II] Prove by induction on n that {1, t, t2, . . . , tn} is linearly independent.

Solution: In the base case n = 0, we have the set {1}, which is clearly linearly independent.

Now, suppose that the set {1, t, t², . . . , tⁿ} is linearly independent, and consider the set {1, t, t², . . . , tⁿ⁺¹}. Let c0, c1, c2, . . . , cn+1 be scalars such that

c0 + c1t + c2t² + · · · + cn+1tⁿ⁺¹ = 0

for all t. Plugging in t = 0, this immediately gives us c0 = 0. On the other hand, we can differentiate this equation to obtain

c1 + 2c2t + · · · + ncntⁿ⁻¹ + (n + 1)cn+1tⁿ = 0.

By the inductive hypothesis, {1, t, t², . . . , tⁿ} is linearly independent, so all coefficients in the above equation must be zero. And clearly, this implies that c1 = c2 = · · · = cn+1 = 0. Combined with the fact that c0 = 0, this proves that {1, t, t², . . . , tⁿ⁺¹} is linearly independent.

15. [G-II] Prove by induction on n that for any distinct real numbers a1, a2, . . . , an, the set {e^{a1x}, e^{a2x}, . . . , e^{anx}} is linearly independent.

Solution: In the base case n = 1, we have the set {e^{a1x}}, which is clearly linearly independent.

Now, suppose that the statement holds for n. Let a1, a2, . . . , an+1 be distinct real numbers, and consider the set {e^{a1x}, e^{a2x}, . . . , e^{an+1 x}}. Suppose that c1, c2, . . . , cn+1 are scalars such that

c1 e^{a1x} + c2 e^{a2x} + · · · + cn+1 e^{an+1 x} = 0    (1)

for all x. Differentiating (1), we find that

a1c1 e^{a1x} + a2c2 e^{a2x} + · · · + an+1cn+1 e^{an+1 x} = 0.    (2)

Now, we multiply (1) by an+1 and subtract (2) from this, which gives us

c1(an+1 − a1)e^{a1x} + c2(an+1 − a2)e^{a2x} + · · · + cn(an+1 − an)e^{anx} = 0.

By the inductive hypothesis, {e^{a1x}, e^{a2x}, . . . , e^{anx}} is linearly independent, so each of the coefficients in the above equation must equal zero. And since the aj are distinct, this implies that c1 = c2 = · · · = cn = 0. Finally, applying this to (1) leaves us with cn+1 e^{an+1 x} = 0. Since e^{an+1 x} ≠ 0, this implies that cn+1 = 0 as well, and hence {e^{a1x}, e^{a2x}, . . . , e^{an+1 x}} is linearly independent.


OHSx XM511 Linear Algebra: Offline Homework Solutions

Lecture 19 (§2.4)

1. Are the following sets bases for R3?

(a) { [1, 0, 0], [0, 1, 0], [0, 0, 1] }.

(b) { [1, 1, 0], [0, 1, 1], [1, 0, 1] }.

(c) { [1, 0, 0], [1, 1, 0], [1, 1, 1] }.

(d) { [1, 1, 0], [0, 1, 1], [1, 2, 1] }.

(e) { [1, 1, 0], [0, 1, 1], [1, 3, 1] }.

(f) { [1, 1, 0], [0, 1, 1], [1, 4, 1] }.

(g) { [1, 2, 3], [4, 5, 6], [0, 0, 0] }.

(h) { [1, 2, 3], [4, 5, 6], [7, 8, 9] }.

Solution:

(a) Yes. (b) Yes. (c) Yes. (d) No. (e) Yes. (f) Yes. (g) No. (h) No.
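Since three vectors form a basis of R³ exactly when the 3 × 3 matrix having them as rows has nonzero determinant, the answers above can be spot-checked with a short sketch (illustrative only; `det3` is a helper of my own):

```python
def det3(u, v, w):
    """Determinant of the 3x3 matrix with rows u, v, w (cofactor expansion)."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

# Spot-check three of the candidate sets from Problem 1.
print(det3([1, 1, 0], [0, 1, 1], [1, 2, 1]))  # 0  -> (d) is not a basis
print(det3([1, 1, 0], [0, 1, 1], [1, 3, 1]))  # -1 -> (e) is a basis
print(det3([1, 2, 3], [4, 5, 6], [7, 8, 9]))  # 0  -> (h) is not a basis
```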

2. Are the following sets bases for M2×2?

(a) { [1, 0; 0, 0], [0, 1; 0, 0], [0, 0; 1, 0], [0, 0; 0, 1] }.

(b) { [1, 1; 0, 0], [−1, 1; 0, 0], [0, 0; 1, 1], [0, 0; 1, −1] }.

(c) { [1, 0; 0, 0], [1, 1; 0, 0], [1, 1; 1, 0], [1, 1; 1, 1] }.

(d) { [1, 1; 1, 0], [1, 1; 0, 1], [1, 0; 1, 1], [0, 1; 1, 1] }.

Solution:

(a) Yes. (b) Yes. (c) Yes. (d) Yes.

3. Are the following sets bases for P3?

(a) { t³ + t² + t, t² + t + 1, t + 1 }.

(b) { t³, t², t, 1 }.

(c) { t³ + t² + t, t² + t + 1, t + 1, 1 }.

(d) { t³ + t², t² + t, t + 1, 1 }.

(e) { t³ + t² + t, t³ + t², t² + t, t, t + 1, 1 }.

(f) { t³ + t², t³ − t², t + 1, t − 1 }.

(g) { t³ + t² + 1, t³ + t², t + 1, t − 1 }.

(h) { t³ + t² + t, t³ + t², t² + t, t³ + t }.

Solution:

(a) No. (b) Yes. (c) Yes. (d) Yes. (e) No. (f) Yes. (g) No. (h) No.


4. Prove that if {v1, v2, v3} is a basis for a vector space V, then {u1, u2, u3} is a basis of V as well, where u1 = v1 + v2 + v3, u2 = v2 − v3, and u3 = v3.

Solution: We must show that {u1, u2, u3} is linearly independent and that span{u1, u2, u3} = V. First, we prove linear independence. Let c1, c2, and c3 be scalars such that

c1u1 + c2u2 + c3u3 = 0.

Then

0 = c1(v1 + v2 + v3) + c2(v2 − v3) + c3v3

= c1v1 + (c1 + c2)v2 + (c1 − c2 + c3)v3.

Since {v1,v2,v3} is linearly independent, this implies that

c1 = 0

c1 + c2 = 0

c1 − c2 + c3 = 0.

And solving this system, we find that c1 = c2 = c3 = 0. Thus, {u1, u2, u3} is linearly independent.

Now we prove that span{u1, u2, u3} = V. Since u1, u2, u3 ∈ span{v1, v2, v3} = V, it is clear that span{u1, u2, u3} ⊂ V. To show the reverse inclusion, let x be an arbitrary element of V. Since {v1, v2, v3} is a basis of V, there exist scalars c1, c2, and c3 such that

x = c1v1 + c2v2 + c3v3.

Let a1 = c1, a2 = c2 − c1, and a3 = c2 + c3 − 2c1. Then c1 = a1, c2 = a1 + a2, and c3 = a1 − a2 + a3, so

x = a1v1 + (a1 + a2)v2 + (a1 − a2 + a3)v3

= a1(v1 + v2 + v3) + a2(v2 − v3) + a3v3

= a1u1 + a2u2 + a3u3.

Hence, x ∈ span{u1, u2, u3}, and this implies that V ⊂ span{u1, u2, u3}. Therefore, span{u1, u2, u3} = V, so {u1, u2, u3} is a basis of V.

5. Prove that if {v1, v2, . . . , vn} is a basis for a vector space V, then {k1v1, k2v2, . . . , knvn} is a basis of V as well, where k1, k2, . . . , kn are any nonzero scalars.

Solution: We must prove that {k1v1, k2v2, . . . , knvn} is linearly independent and that span{k1v1, k2v2, . . . , knvn} = V. First, we prove that {k1v1, k2v2, . . . , knvn} is linearly independent. Let c1, c2, . . . , cn be scalars such that

c1(k1v1) + c2(k2v2) + · · ·+ cn(knvn) = 0.

Then

(c1k1)v1 + (c2k2)v2 + · · · + (cnkn)vn = 0,

and since {v1, v2, . . . , vn} is linearly independent, this implies that c1k1 = c2k2 = · · · = cnkn = 0. Each of the ki is nonzero, so we must have c1 = c2 = · · · = cn = 0. Hence, {k1v1, k2v2, . . . , knvn} is linearly independent.

Now we prove that span{k1v1, k2v2, . . . , knvn} = V. Since k1v1, k2v2, . . . , knvn are all elements of span{v1, v2, . . . , vn} = V, it is clear that span{k1v1, k2v2, . . . , knvn} ⊂ V. To prove the reverse inclusion, let x be an arbitrary element of V. Then there exist scalars c1, c2, . . . , cn such that

x = c1v1 + c2v2 + · · ·+ cnvn.

For each i = 1, 2, . . . , n, let ai = ci/ki. Then ci = aiki, and

x = a1(k1v1) + a2(k2v2) + · · · + an(knvn).

Hence, x ∈ span{k1v1, k2v2, . . . , knvn}, which implies that V ⊂ span{k1v1, k2v2, . . . , knvn}. Consequently, span{k1v1, k2v2, . . . , knvn} = V, so {k1v1, k2v2, . . . , knvn} is a basis of V.


OHSx XM511 Linear Algebra: Offline Homework Solutions

Lecture 20 (§2.4)

1. Find an n-tuple representation for the coordinates of [1, 1, 0] with respect to each of the following bases:

(a) A = { [1, 0, 0], [0, 1, 0], [0, 0, 1] }.

(b) B = { [1, 1, 0], [0, 1, 1], [1, 0, 1] }.

(c) C = { [1, 0, 0], [1, 1, 0], [1, 1, 1] }.

Solution:

(a) Since [1 1 0

]= 1

[1 0 0

]+ 1

[0 1 0

]+ 0

[0 0 1

],

we have [1 1 0

]=

110

A

.

(b) Since [1 1 0

]= 1

[1 1 0

]+ 0

[0 1 1

]+ 0

[1 0 1

],

we have [1 1 0

]=

100

B

.

(c) Since [1 1 0

]= 0

[1 0 0

]+ 1

[1 1 0

]+ 0

[1 1 1

],

we have [1 1 0

]=

010

C

.
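These coordinate computations can be checked numerically: writing the basis vectors as the columns of a matrix, the coordinate tuple is the unique solution of a small linear system. A sketch for parts (b) and (c):

```python
import numpy as np

x = np.array([1.0, 1.0, 0.0])

# Basis vectors of B = {[1 1 0], [0 1 1], [1 0 1]} as the columns of a matrix.
B = np.column_stack([[1, 1, 0], [0, 1, 1], [1, 0, 1]]).astype(float)
coords_B = np.linalg.solve(B, x)    # coordinates of x with respect to B

# Basis vectors of C = {[1 0 0], [1 1 0], [1 1 1]} as columns.
C = np.column_stack([[1, 0, 0], [1, 1, 0], [1, 1, 1]]).astype(float)
coords_C = np.linalg.solve(C, x)    # coordinates of x with respect to C
```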

2. Let S be a spanning set for a vector space V, and let v ∈ S. Prove that if v is a linear combination of other vectors in S, then the set that remains by deleting v from S is also a spanning set for V.

Solution: Let T = S \ {v}. We need to prove that span(T) = V. First, since T ⊂ S and span(S) = V, it is clear that span(T) ⊂ V.

On the other hand, by hypothesis v is a linear combination of the vectors in T, so v ∈ span(T). Every other vector in S is contained in T, and therefore in span(T) as well. So S ⊂ span(T), which implies that span(S) ⊂ span(T). And since V = span(S), this gives us V ⊂ span(T). Thus, span(T) = V.

3. Suppose that S = {v1, . . . , vk} spans the vector space V. Prove that S contains a basis of V as a subset.

Solution: Starting from the left, go through the vectors of S one by one, and remove any vector that is a linear combination of the vectors that precede it. Suppose that at some step of this process, we have the set T (which is S possibly with some vectors removed), and we remove the vector vj to obtain the set T′. Since vj is a linear combination of other vectors of T, Problem 2 implies that span(T′) = span(T). Thus, this procedure does not affect the span.


So, letting A be the set we obtain after completing this process, we find that span(A) = span(S) = V. Furthermore, no vector in A is a linear combination of the vectors that precede it, so A is linearly independent. Thus, A is a basis for V.
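The deletion procedure described above can be sketched in code, using rank as the test for whether a vector is a linear combination of the vectors kept so far (the sample set S below is illustrative):

```python
from sympy import Matrix

def extract_basis(vectors):
    # Keep each vector only if it is NOT a linear combination of the vectors
    # kept before it, i.e. if adding it strictly increases the rank.
    kept = []
    for v in vectors:
        if Matrix(kept + [v]).rank() == len(kept) + 1:
            kept.append(v)
    return kept

# Illustrative spanning set for a 2-dimensional subspace of R^3.
S = [[2, 1, 2], [-2, -1, -2], [1, 0, 0], [3, 1, 2]]
A = extract_basis(S)
```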

4. Show that the set S = {t3 + t2 + t, t2 + t + 1, t + 1} is linearly independent, and extend it into a basis for P3.

Solution: First, we show that S is linearly independent. Suppose that c1, c2, and c3 are scalars such that

c1(t3 + t2 + t) + c2(t2 + t + 1) + c3(t + 1) = 0.

Then

c1t3 + (c1 + c2)t2 + (c1 + c2 + c3)t + (c2 + c3) = 0,

so c1, c2, and c3 must solve the system

c1 = 0

c1 + c2 = 0

c1 + c2 + c3 = 0

c2 + c3 = 0.

We easily find that c1 = c2 = c3 = 0 is the only solution, and thus S is linearly independent.

Since dim(P3) = 4, we can extend S to a basis simply by finding a fourth vector that, when added to S, gives us a linearly independent set. There are many choices; adding any one of t3, t, or 1 works.

5. Suppose that S spans the vector space V. Prove that S cannot contain fewer elements than the dimension of V.

Solution: Since S spans V, it must contain a basis B of V as a subset. B is a basis, so it contains dim(V) vectors. And B ⊂ S, which implies that the number of elements in S is greater than or equal to dim(V).

6. Prove that any set of two vectors in R2 is a basis if one vector is not a scalar multiple of the other.

Solution: Suppose that {u, v} ⊂ R2, and neither v nor u is a scalar multiple of the other. Let c1 and c2 be scalars such that

c1u + c2v = 0.

Then c1u = −c2v, and if either c1 or c2 were nonzero, we could divide by it to obtain one vector as a scalar multiple of the other. This is not the case, so we must have c1 = c2 = 0. Thus, {u, v} is linearly independent. And since dim(R2) = 2, which is the number of elements of {u, v}, this implies that {u, v} is a basis of R2.

7. Let W be a subspace of a vector space V, and let S be a basis for W. Prove that S can be extended to a basis for V.

Solution: Since S is a basis for W, it is linearly independent. And clearly S ⊂ V. So by the theorem proven in lecture, S can be extended to a basis for V.


8. Let W be a subspace of a vector space V . Prove that dim(W ) ≤ dim(V ).

Solution: Let S be a basis for W. By Problem 7, S can be extended to a basis B of V. S has dim(W) elements while B has dim(V) elements. And since S ⊂ B, the number of elements of S clearly cannot exceed the number of elements of B. Therefore, dim(W) ≤ dim(V).

9. Let W be a subspace of a vector space V . Prove that if dim(W ) = dim(V ), then W = V .

Solution: Let S be a basis of W, and let B be a basis of V that contains S, which exists by Problem 7. We know that S contains dim(W) elements and B contains dim(V) elements. And since dim(W) = dim(V), this implies that S and B each have the same number of elements. But S ⊂ B, so we must have S = B. Thus, V = span(S) = W.

10. Prove that in an n-dimensional vector space V , no set of n− 1 vectors can span V .

Solution: Suppose that S spans V. Then S contains a basis B of V, and B contains dim(V) = n vectors. Since B ⊂ S, this implies that S must contain at least n vectors. In particular, no set of n − 1 vectors can span V.



Lecture 22 (§2.5)

In Problems 1–7, find a basis for span(S).

1. S = {(2, 1, 2)^T, (−2, −1, −2)^T, (4, 2, 4)^T, (−4, −2, −4)^T}.

Solution: Each vector in S is a multiple of (2, 1, 2)^T. Thus, {(2, 1, 2)^T} is a basis for span(S).

2. S = {[1 0 −1 1], [3 1 0 1], [1 1 2 −1], [3 2 3 −1], [2 1 0 0]}.

Solution: Let A be the matrix

A = [1 0 −1  1]
    [3 1  0  1]
    [1 1  2 −1]
    [3 2  3 −1]
    [2 1  0  0],

which has the vectors of S as rows. Row reducing A, we obtain

[1 0 −1  1]
[0 1  3 −2]
[0 0  1  0]
[0 0  0  0]
[0 0  0  0].

The nonzero rows of this matrix form a basis for the row space of A, which is equal to span(S). So {[1 0 −1 1], [0 1 3 −2], [0 0 1 0]} is a basis of span(S).
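This computation can be verified with SymPy; its rref routine fully reduces the matrix, so the nonzero rows differ from those above, but they span the same row space, and their number, which equals dim span(S), is the same:

```python
from sympy import Matrix

# Rows are the vectors of S from Problem 2.
A = Matrix([[1, 0, -1,  1],
            [3, 1,  0,  1],
            [1, 1,  2, -1],
            [3, 2,  3, -1],
            [2, 1,  0,  0]])

R, pivots = A.rref()     # reduced row echelon form and pivot columns
dim_span = len(pivots)   # number of nonzero rows of R = dim span(S)
```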

3. S = {[1 2 4 0], [2 4 8 0], [1 −1 0 1], [4 2 8 2], [4 −1 4 3]}.

Solution: Let A be the matrix

A = [1  2 4 0]
    [2  4 8 0]
    [1 −1 0 1]
    [4  2 8 2]
    [4 −1 4 3],

which has the vectors of S as rows. Row reducing A, we obtain

[1 2 4  0]
[0 3 4 −1]
[0 0 0  0]
[0 0 0  0]
[0 0 0  0].

The nonzero rows of this matrix form a basis for the row space of A, which is equal to span(S). So {[1 2 4 0], [0 3 4 −1]} is a basis of span(S).


4. S = {t2 + t + 1, 2t2 − 2t + 1, t2 − 3t}.

Solution: First, we put the elements of S into coordinates with respect to the basis B = {t2, t, 1} of P2 as follows:

t2 + t + 1 = (1, 1, 1)_B,   2t2 − 2t + 1 = (2, −2, 1)_B,   t2 − 3t = (1, −3, 0)_B.

Now, let

A = [1  1 1]
    [2 −2 1]
    [1 −3 0],

which has these vectors as rows. Row reducing A gives us

[1 1 1]
[0 4 1]
[0 0 0],

so {[1 1 1], [0 4 1]} is a basis for the row space of A. Translating back into polynomials, we find that {t2 + t + 1, 4t + 1} is a basis for span(S).
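A quick check that {t2 + t + 1, 4t + 1} spans the same subspace as S: working in coordinates with respect to B, stacking the candidate rows onto the coefficient matrix of S should leave the rank unchanged, and the candidate rows should be independent.

```python
from sympy import Matrix

# Coefficient rows of S with respect to B = {t^2, t, 1}.
S = Matrix([[1,  1, 1],    # t^2 + t + 1
            [2, -2, 1],    # 2t^2 - 2t + 1
            [1, -3, 0]])   # t^2 - 3t

cand = Matrix([[1, 1, 1],  # t^2 + t + 1
               [0, 4, 1]]) # 4t + 1
```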

5. S = {t3 + t2 − t, t3 + 2t2 + 1, 2t3 + 3t2 − t + 1, 3t3 + 5t2 − t + 2}.

Solution: First, we write the elements of S in coordinates with respect to B = {t3, t2, t, 1}, the basis of P3. These representations are

t3 + t2 − t = (1, 1, −1, 0)_B,   t3 + 2t2 + 1 = (1, 2, 0, 1)_B,
2t3 + 3t2 − t + 1 = (2, 3, −1, 1)_B,   3t3 + 5t2 − t + 2 = (3, 5, −1, 2)_B.

Now, let A be the matrix

A = [1 1 −1 0]
    [1 2  0 1]
    [2 3 −1 1]
    [3 5 −1 2],

which has these vectors as rows. Row reducing A, we obtain

[1 1 −1 0]
[0 1  1 1]
[0 0  0 0]
[0 0  0 0].

So {[1 1 −1 0], [0 1 1 1]} is a basis for the row space of A. Translating back into polynomials, this gives us {t3 + t2 − t, t2 + t + 1} as a basis for span(S).


6. S = {2t3 + t2 + 1, t2 + t, 2t3 − t + 1, t + 1, 2t3 + 2}.

Solution: First, we write the elements of S in coordinates with respect to B = {t3, t2, t, 1}, the basis of P3. These representations are

2t3 + t2 + 1 = (2, 1, 0, 1)_B,   t2 + t = (0, 1, 1, 0)_B,   2t3 − t + 1 = (2, 0, −1, 1)_B,
t + 1 = (0, 0, 1, 1)_B,   2t3 + 2 = (2, 0, 0, 2)_B.

Now, let

A = [2 1  0 1]
    [0 1  1 0]
    [2 0 −1 1]
    [0 0  1 1]
    [2 0  0 2],

which has these vectors as rows. Row reducing A, we obtain

[2 1 0 1]
[0 1 1 0]
[0 0 1 1]
[0 0 0 0]
[0 0 0 0].

So {[2 1 0 1], [0 1 1 0], [0 0 1 1]} is a basis for the row space of A. Translating back into polynomials, this gives us {2t3 + t2 + 1, t2 + t, t + 1} as a basis for span(S).

7. S = {[1 3; 1 2], [1 2; 1 1], [0 1; 0 1], [2 7; 2 5]} (writing each 2 × 2 matrix row by row, with rows separated by semicolons).

Solution: First, we write the elements of S in coordinates with respect to the basis

B = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}

of M2×2. These representations are

[1 3; 1 2] = (1, 3, 1, 2)_B,   [1 2; 1 1] = (1, 2, 1, 1)_B,
[0 1; 0 1] = (0, 1, 0, 1)_B,   [2 7; 2 5] = (2, 7, 2, 5)_B.

Now, let

A = [1 3 1 2]
    [1 2 1 1]
    [0 1 0 1]
    [2 7 2 5],

which has these vectors as rows. Row reducing A, we obtain

[1 3 1 2]
[0 1 0 1]
[0 0 0 0]
[0 0 0 0].

So {[1 3 1 2], [0 1 0 1]} is a basis for the row space of A. And this implies that {[1 3; 1 2], [0 1; 0 1]} is a basis of span(S).

In Problems 8–11, use row rank to determine whether the given sets are linearly independent.

8. {(1, 2, 3)^T, (3, 2, 1)^T, (2, 1, 3)^T}.

Solution: First, we create the matrix

A = [1 2 3]
    [3 2 1]
    [2 1 3],

which has these vectors as rows. Row reducing A gives us

[1 2 3]
[0 1 2]
[0 0 1],

and hence the row rank of A is 3. Since this equals the original number of vectors, the given set is linearly independent.
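The same row-rank test, mechanized with SymPy (a check, not part of the solution):

```python
from sympy import Matrix

# Rows are the three given vectors.
A = Matrix([[1, 2, 3],
            [3, 2, 1],
            [2, 1, 3]])
rank = A.rank()
# rank equals the number of vectors (3), so the set is linearly independent
```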

9. {[1 0; 1 1], [1 1; 1 0], [2 2; 0 2], [1 0; 2 0]}.

Solution: First, we express these vectors in coordinates with respect to the basis B = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]} and write them as the rows of the matrix

A = [1 0 1 1]
    [1 1 1 0]
    [2 2 0 2]
    [1 0 2 0].

Row reducing A gives us

[1 0 1  1]
[0 1 0 −1]
[0 0 1 −1]
[0 0 0  0],

so the row rank of A is 3. This is less than the original number of vectors, so the given set is linearly dependent.

10. {t3 + t2, t3 − t2, t3 − 3t2}.

Solution: First, we express these vectors in coordinates with respect to the basis B = {t3, t2, t, 1} and write them as the rows of the matrix

A = [1  1 0 0]
    [1 −1 0 0]
    [1 −3 0 0].

Row reducing A gives us

[1 1 0 0]
[0 1 0 0]
[0 0 0 0],

so the row rank of A is 2. This is less than the original number of vectors, so the given set is linearly dependent.

11. {t3 + t2, t3 − t2, t3 − t, t3 + 1}.

Solution: First, we express these vectors in coordinates with respect to the basis B = {t3, t2, t, 1} and write them as the rows of the matrix

A = [1  1  0 0]
    [1 −1  0 0]
    [1  0 −1 0]
    [1  0  0 1].

Row reducing A gives us

[1 1 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1],

so the row rank of A is 4. This is equal to the original number of vectors, so the given set is linearly independent.



Lecture 23 (§2.6)

1. Find the ranks of the given matrices:

(a) [1 2 0]    (b) [ 2  8 −6]    (c) [4 1]
    [3 1 −5]       [−1 −4  3]        [2 3]
                                     [2 2]

(d) [4  8]     (e) [ 1  4 −2]    (f) [1 2 4 2]
    [6 12]         [ 2  8 −4]        [1 1 3 2]
    [9 18]         [−1 −4  2]        [1 4 6 2]

Solution:

(a) The two rows of this matrix are not scalar multiples, so they are linearly independent. Thus, the rank of the matrix is 2.

(b) The two rows of this matrix are scalar multiples, so they are linearly dependent. Thus, the rank of the matrix is 1.

(c) This matrix has two columns, and these are clearly not scalar multiples of one another, so they are linearly independent. Thus, the rank of the matrix is 2.

(d) This matrix has two columns, and they are scalar multiples. Thus, the rank of the matrix is 1.

(e) Row reducing this matrix gives us

[1 4 −2]
[0 0  0]
[0 0  0],

so its rank is 1.

(f) Row reducing the matrix, we obtain

[1 2 4 2]
[0 1 1 0]
[0 0 0 0].

So the rank of the matrix is 2.
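All six ranks can be verified mechanically, for instance with SymPy:

```python
from sympy import Matrix

# The six matrices from parts (a)-(f).
mats = {
    "a": Matrix([[1, 2, 0], [3, 1, -5]]),
    "b": Matrix([[2, 8, -6], [-1, -4, 3]]),
    "c": Matrix([[4, 1], [2, 3], [2, 2]]),
    "d": Matrix([[4, 8], [6, 12], [9, 18]]),
    "e": Matrix([[1, 4, -2], [2, 8, -4], [-1, -4, 2]]),
    "f": Matrix([[1, 2, 4, 2], [1, 1, 3, 2], [1, 4, 6, 2]]),
}
ranks = {name: m.rank() for name, m in mats.items()}
```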

2. Give the largest possible rank for a matrix of each of the following sizes:

(a) 2 × 5   (b) 4 × 6   (c) 5 × 1   (d) 3 × 3

Solution:

(a) 2 (b) 4 (c) 1 (d) 3

3. Show that the rows of a 5 × 3 matrix are linearly dependent.

Solution: Let A be a 5 × 3 matrix. Since A has 3 columns, we know that colrank(A) ≤ 3. And rowrank(A) = colrank(A), so rowrank(A) ≤ 3 < 5. The rows of a matrix are linearly independent if and only if its row rank equals the number of rows; therefore, the rows of A are linearly dependent.


4. Use rank to determine whether (1, 1, 1)^T can be written as a linear combination of the following sets of vectors.

(a) {(1, 0, 1)^T, (1, 1, 0)^T, (0, 1, 1)^T},   (b) {(3, −1, 2)^T, (2, 1, 0)^T, (−1, −3, 2)^T},
(c) {(1, 0, 1)^T, (1, 1, 1)^T, (1, −1, 1)^T}.

Solution:

(a) Let

A = [1 1 0]
    [0 1 1]
    [1 0 1],   b = (1, 1, 1)^T,

where A is the matrix having the vectors of the given set as columns. Then (1, 1, 1)^T can be written as a linear combination of the given vectors if and only if the system Ax = b has a solution. And r(A) = r([A | b]) = 3, so this system is consistent. Hence, (1, 1, 1)^T can be written as a linear combination of this set of vectors.

(b) As in part (a), (1, 1, 1)^T is a linear combination of these vectors if and only if Ax = b has a solution, where

A = [ 3 2 −1]
    [−1 1 −3]
    [ 2 0  2],   b = (1, 1, 1)^T.

However, r(A) = 2, while r([A | b]) = 3, so the system is not consistent.

(c) Letting

A = [1 1  1]
    [0 1 −1]
    [1 1  1],   b = (1, 1, 1)^T,

we find that r(A) = r([A | b]) = 2. So (1, 1, 1)^T can be written as a linear combination of these vectors. (Although this is fairly obvious in the first place. . . )
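The rank comparisons above can be reproduced with SymPy; row_join appends b as an extra column, forming the augmented matrix [A | b]:

```python
from sympy import Matrix

b = Matrix([1, 1, 1])

def consistent(A, b):
    # Ax = b is solvable iff rank(A) == rank([A | b]).
    return A.rank() == A.row_join(b).rank()

# Coefficient matrices from parts (a)-(c); the given vectors are the columns.
A_a = Matrix([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
A_b = Matrix([[3, 2, -1], [-1, 1, -3], [2, 0, 2]])
A_c = Matrix([[1, 1, 1], [0, 1, -1], [1, 1, 1]])
```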

5. In each of the following systems of equations, discuss the consistency and number of solutions. Check your answers by solving the systems wherever possible.

(a) x − 2y = 0
    x + y = 1
    2x − y = 1

(b) x + y + z = 1
    x − y + z = 2
    3x + y + 3z = 1

(c) x + 3y + 2z − w = 2
    2x − y + z + w = 3

(d) 2x + 3y = 0
    x − 4y = 0

(e) x − 2y + 3z + 3w = 0
    y − 2z + 2w = 0
    x + y − 3z + 9w = 0

Solution:

(a) Since r(A) = r([A | b]) = 2, the system is consistent. And the system has two variables, so there is exactly one solution.

(b) r(A) = 2, whereas r([A | b]) = 3, so the system is inconsistent.

(c) Since r(A) = r([A | b]) = 2, the system is consistent. And the number of variables is four, which is greater than the rank, so the system has infinitely many solutions.

(d) The system is homogeneous, so it is guaranteed to be consistent. Furthermore, r(A) = 2, which is the number of variables, so there is exactly one solution.

(e) The system is homogeneous, so it is guaranteed to be consistent. And r(A) ≤ 3, so it must be less than the number of variables. Hence, there are infinitely many solutions.



Lecture 24 (§2.6)

1. Let A be a nonsingular m × m matrix, and let D be a nonsingular n × n matrix. Then let

M = [A B]
    [C D],

for some m × n matrix B and n × m matrix C.

(a) Give an example of B and C such that M is invertible. Prove that your example works.

(b) Give an example of B and C such that M is not invertible. Prove that your example works.

Solution:

(a) Let B = 0 and C = 0. Then letting

N = [A−1   0 ]
    [ 0   D−1],

we easily see that MN = I. Hence, M is invertible.

(b) Let B be any matrix whose first row equals the first row of D, and let C be any matrix whose first row equals the first row of A. Then the first row and (m + 1)th row of M are equal, so r(M) < m + n. Hence, M is not invertible.
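Both parts can be illustrated numerically; the particular nonsingular A and D below are arbitrary choices for the sketch:

```python
import numpy as np

m, n = 2, 3
A = 2.0 * np.eye(m)    # a nonsingular m x m matrix
D = 3.0 * np.eye(n)    # a nonsingular n x n matrix

# (a) With B = C = 0, the block matrix M is invertible.
M_good = np.block([[A, np.zeros((m, n))],
                   [np.zeros((n, m)), D]])

# (b) Copy the first row of D into B and the first row of A into C,
# making row 1 of M equal to row m + 1, so M is singular.
B = np.zeros((m, n)); B[0, :] = D[0, :]
C = np.zeros((n, m)); C[0, :] = A[0, :]
M_bad = np.block([[A, B], [C, D]])
```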

2. Prove that an n × n matrix A is invertible if and only if 0 is the only solution to the system Ax = 0.

Solution: First, suppose that A is invertible. If Ax0 = 0, then x0 = A−1 0 = 0, so 0 is the only solution to Ax = 0.

Conversely, suppose that 0 is the only solution to Ax = 0. Let A1, A2, . . . , An denote the columns of A, and suppose that c1, c2, . . . , cn are scalars such that

c1A1 + c2A2 + · · ·+ cnAn = 0.

Let c = (c1, c2, . . . , cn)^T. Then

Ac = c1A1 + c2A2 + · · · + cnAn = 0.

So by hypothesis, c = 0. Hence, c1 = c2 = · · · = cn = 0. This implies that the columns of A are linearly independent, and thus r(A) = n. Consequently, A is invertible.
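A small numerical illustration of the equivalence (the matrices are illustrative):

```python
from sympy import Matrix

A = Matrix([[1, 2], [3, 4]])   # invertible (det = -2): null space is trivial
B = Matrix([[1, 2], [2, 4]])   # singular: some nonzero vector solves Bx = 0

nullity_A = len(A.nullspace())  # dimension of the solution space of Ax = 0
nullity_B = len(B.nullspace())
```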

3. Let A be an n × n matrix. Prove that the system Ax = b has a unique solution for all n × 1 column vectors b if and only if 0 is the only solution to the system Ax = 0.

Solution:

By Problem 2, 0 is the only solution to the system Ax = 0 if and only if A is invertible. And by Problem 11 of Lecture 10, A is invertible if and only if the system Ax = b has a unique solution for all n × 1 column vectors b. Combined, these two facts imply the desired result.
