Objectives - The Faculty of Mathematics and Computer Science / Basic Topics.docx

Basic Topics

Objectives
Get you comfortable and familiar with the tools of linear algebra and some of its applications.

12-14 exercise sets will be given; the best 10 will be counted (you must submit at least 10).
80% of the grade is based on homework!
20% is based on the exam.

Topics Planned to be Covered
- Vector spaces over R and C
- Basic definitions connected with vector spaces
- Matrices, block matrices
- Gaussian elimination
- Conservation of dimension
- Eigenvalues and eigenvectors
- Determinants
- Jordan forms
- Difference and differential equations
- Normed linear spaces
- Inner product spaces (orthogonality)
- Symmetric, Hermitian, normal matrices
- Singular Value Decomposition (SVD)
- Vector-valued functions of many variables
- Fixed point theorems
- Implicit function theorem
- Extremal problems with constraints

General Idea of Linear Algebra

a11 x1 + a12 x2 + … + a1q xq = b1
a21 x1 + a22 x2 + … + a2q xq = b2
⋮
ap1 x1 + ap2 x2 + … + apq xq = bp

Given aij for i = 1,…,p and j = 1,…,q, and b1,…,bp.

Find x1,…,xq.

A choice of x1,…,xq that satisfies these equations is called a solution.
1) When can you guarantee at least one solution?


2) When can you guarantee at most one solution?
3) How can you find solutions?
4) Are there approximate solutions?

Short notation: Ax = b

A = [a11 … a1q; ⋮ ⋱ ⋮; ap1 … apq], x = [x1; ⋮; xq]

A vector space V over R is a collection of objects called vectors on which two operations are defined:

1) Vector addition – u ∈ V, v ∈ V ⇒ u + v ∈ V
2) Scalar multiplication – α ∈ R, u ∈ V ⇒ αu ∈ V

subject to the following constraints:
1) u + v = v + u
2) (u + v) + w = u + (v + w)
3) There is a zero vector 0 s.t. u + 0 = u and 0 + u = u
4) For every u ∈ V there is a w s.t. u + w = 0
5) 1 ∈ R, 1u = u
6) α, β ∈ R ⇒ α(βu) = (αβ)u
7) α, β ∈ R ⇒ (α + β)u = αu + βu
8) α(u + v) = αu + αv

Vector Space Example
R^k

[u1; u2; ⋮; uk], ui ∈ R


[u1; ⋮; uk] + [v1; ⋮; vk] = [u1 + v1; ⋮; uk + vk]

R^{2×3}

A = [a11 a12 a13; a21 a22 a23]

B = [b11 b12 b13; b21 b22 b23]
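The two operations on R^k can be sketched in Python with vectors stored as plain lists; the helper names vec_add and vec_scale are ours, not from the notes:

```python
def vec_add(u, v):
    # (u + v)_i = u_i + v_i, entry by entry
    return [ui + vi for ui, vi in zip(u, v)]

def vec_scale(alpha, u):
    # (alpha * u)_i = alpha * u_i
    return [alpha * ui for ui in u]

u = [1, 2, 3]
v = [4, 5, 6]

print(vec_add(u, v))       # [5, 7, 9]
print(vec_scale(2, u))     # [2, 4, 6]
# one of the axioms: u + v == v + u
assert vec_add(u, v) == vec_add(v, u)
```

The same idea covers R^{2×3}: a matrix is a list of rows, and both operations act entry by entry.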

Claim: if V is a vector space over R, then there is exactly one zero element.
Proof: Suppose that 0 and ~0 are both zero elements. Then for every u:

0 + u = u + 0 = u
~0 + u = u + ~0 = u

Let's set u = ~0 in the first line: 0 + ~0 = ~0.
Let's set u = 0 in the second line: ~0 + 0 = 0.
Since 0 + ~0 = ~0 + 0, therefore 0 = ~0.

Claim: If u ∈ V, there is exactly one additive inverse.
Proof: Suppose w and ~w are both additive inverses of u:

u + w = 0
u + ~w = 0

Let's mix things up:
w = w + 0 = w + (u + ~w) = (w + u) + ~w = 0 + ~w = ~w

We denote the additive inverse of u by −u, so u + (−u) = 0.

Since this is clumsy to write, we abbreviate it as u − u = 0.

Note: We can replace R by C!

"Let V be a vector space over F" means we can replace F by either R or C in all that follows this statement.


Definitions
Let V be a vector space over F.
A subset U of V is called a subspace of V if:

1) u, w ∈ U ⇒ u + w ∈ U
2) α ∈ F, u ∈ U ⇒ αu ∈ U

If (1) and (2) hold, then U is itself a vector space over F.

Example

V = R² = {[a; b] : a, b ∈ R}
U = {[a; 0] : a ∈ R}

[1; 0] + [7; 0] = [8; 0]
α[a; 0] = [αa; α·0] = [αa; 0]

As we can see, this is a subspace… However…

Let U = {[a; 1] : a ∈ R}
[a; 1] + [b; 1] = [a + b; 2]
So this is not a subspace!

Span
span{v1,…,vk} (v1,…,vk ∈ V)

= { Σ_{i=1}^{k} αi vi : α1,…,αk ∈ F }

For example, 2v1 + 7v2 + 3v3 and 2v1 + 0v2 + 0v3 are elements of span{v1, v2, v3}.

Check: span{v1,…,vk} is a subspace of V. (Clear.)
span{v1, v2} ⊆ span{v1, v2, v3} ⊆ span{v1, v2, v3, v4}

For example:


span{[1; 1]} ⊆ span{[1; 1], [2; 2]} ⊆ span{[1; 1], [2; 2], [3; 3]}
But they are all equal(!!):

α[1; 1] + β[2; 2] + γ[3; 3] = (α + 2β + 3γ)[1; 1]

Linear Dependency
v1,…,vk are said to be linearly dependent if there is a choice α1,…,αk ∈ F, not all of which are zero, s.t. α1 v1 + … + αk vk = 0.

If, say, α1 ≠ 0, then v1 = −(α2/α1) v2 − … − (αk/α1) vk.

v1,…,vk are said to be linearly independent if the following statement is true:
α1 v1 + … + αk vk = 0 ⇒ α1 = α2 = … = αk = 0

If v1,…,vk are linearly independent, then
u = α1 v1 + … + αk vk = β1 v1 + … + βk vk ⇒ αi = βi for all i.

A set of vectors u1,…,ul is said to be a basis for a vector space U over F if u1,…,ul ∈ U and:

1) span{u1,…,ul} = U (there are enough vectors to span the entire space)

2) u1,…,ul are linearly independent (there aren't too many vectors)

Matrix Multiplication
If A is p×q and B is q×r, then C = AB is a p×r matrix with

c_ij = Σ_{s=1}^{q} a_is b_sj = [a_i1, …, a_iq] · [b_1j; ⋮; b_qj]

If A is p×q, B is q×r, and D is r×s, then (AB)D = A(BD).

In general – AB≠ BA!


[0 1; 0 1][1 1; 0 0] = [0 0; 0 0] ≠ [1 1; 0 0][0 1; 0 1] = [0 2; 0 0]

For scalars, αβ = 0 ⇒ α = 0 or β = 0 – but as the example shows, this fails for matrices.
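A minimal sketch of the multiplication rule c_ij = Σ a_is b_sj, reproducing the non-commuting pair above (the function name mat_mul is ours):

```python
def mat_mul(A, B):
    # C = AB with c_ij = sum_{s=1}^{q} a_is * b_sj  (A is p x q, B is q x r)
    p, q, r = len(A), len(B), len(B[0])
    return [[sum(A[i][s] * B[s][j] for s in range(q)) for j in range(r)]
            for i in range(p)]

A = [[0, 1],
     [0, 1]]
B = [[1, 1],
     [0, 0]]

print(mat_mul(A, B))   # [[0, 0], [0, 0]]
print(mat_mul(B, A))   # [[0, 2], [0, 0]]

# associativity (AB)D = A(BD) still holds, even though AB != BA
D = [[1, 2], [3, 4]]
assert mat_mul(mat_mul(A, B), D) == mat_mul(A, mat_mul(B, D))
```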

A ∈ F^{p×q} is said to be left invertible if there is B ∈ F^{q×p} s.t. BA = I_q, where I_q = [1 0 0; 0 ⋱ 0; 0 0 1] is the q×q identity.

A ∈ F^{p×q} is right invertible if there is C ∈ F^{q×p} s.t. AC = I_p.

If BA = I_q and AC = I_p, then B = C:
B = B I_p = B(AC) = (BA)C = I_q C = C

Later we shall show that this also forces p = q.

Triangular matrices
A ∈ F^{p×p} is said to be upper triangular if a_ij = 0 for i > j:

[a11 … a1p; 0 ⋱ ⋮; 0 0 app]

A ∈ F^{p×p} is said to be lower triangular if a_ij = 0 for i < j:

[a11 0 0; ⋮ ⋱ 0; ap1 … app]

If A is both upper and lower triangular, it is diagonal:

[a11 0 0; 0 ⋱ 0; 0 0 app]

Theorem: Let A ∈ F^{p×p} be a triangular matrix. Then A is invertible if and only if all the diagonal entries a_ii are nonzero.
Moreover, in this case, A is upper triangular ⇔ its inverse is upper triangular, and A is lower triangular ⇔ its inverse is lower triangular.

A = [a b; 0 d]

Investigate: we want to find B = [α β; γ δ] with AB = I_2.

[a b; 0 d][α β; γ δ] = [aα + bγ, aβ + bδ; dγ, dδ] = [1 0; 0 1]

We want to choose α, β, γ, δ so that:
(1) aα + bγ = 1
(2) dγ = 0
(3) aβ + bδ = 0
(4) dδ = 1

(4) ⇒ d ≠ 0, δ ≠ 0, δ = 1/d
(2) + (d ≠ 0) ⇒ γ = 0
(1) + (γ = 0) ⇒ aα = 1 ⇒ a ≠ 0, α = 1/a
(3) ⇒ β = −(1/a)·b·(1/d)

So if AB = I_2, it is necessary that a ≠ 0 and d ≠ 0, and we have a formula for B.

One can also check that BA = I_2.
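The formula just derived can be checked numerically; inv_upper_2x2 is our own helper implementing α = 1/a, β = −b/(ad), γ = 0, δ = 1/d, using exact rational arithmetic:

```python
from fractions import Fraction

def mat_mul(A, B):
    # c_ij = sum_s a_is * b_sj
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def inv_upper_2x2(a, b, d):
    # inverse of [a b; 0 d]; valid only when a != 0 and d != 0
    a, b, d = Fraction(a), Fraction(b), Fraction(d)
    return [[1 / a, -b / (a * d)],
            [0, 1 / d]]

A = [[2, 3], [0, 5]]
B = inv_upper_2x2(2, 3, 5)
I2 = [[1, 0], [0, 1]]
# both products give the identity, confirming AB = BA = I_2
assert mat_mul(A, B) == I2 and mat_mul(B, A) == I2
```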

Theorem: Let A ∈ F^{p×p} be triangular. Then A is invertible if and only if the diagonal entries of A are all non-zero. Moreover, if this condition is met, then A is upper triangular ⇔ the inverse of A is upper triangular (lower ⇔ lower).

"A invertible" means there exists a matrix B ∈ F^{p×p} such that AB = BA = I_p.

Let A be a (k+1)×(k+1) upper triangular matrix, partitioned as

A = [A11 A12; 0 A22]

where A11 is k×k and A22 is 1×1.


Plan: Suppose we know that if A is k×k upper triangular, then A is invertible if and only if its diagonal entries are nonzero.
Objective: Extend this to (k+1)×(k+1) upper triangular matrices.

Suppose first that A is invertible, with inverse B partitioned the same way: B11 is k×k, B12 is k×1, B21 is 1×k, B22 is 1×1.

AB = [A11 A12; 0 A22][B11 B12; B21 B22] = [A11 B11 + A12 B21, A11 B12 + A12 B22; A22 B21, A22 B22]

Setting AB = I_{k+1} gives:
(1) A11 B11 + A12 B21 = I_k
(2) A22 B21 = 0_{1×k}
(3) A11 B12 + A12 B22 = 0
(4) A22 B22 = 1

(4) ⇒ A22 ≠ 0, B22 ≠ 0, and B22 = 1/A22
(2) + (A22 ≠ 0) ⇒ B21 = 0
(1) + (B21 = 0) ⇒ A11 B11 = I_k

Similarly, BA = I_{k+1}:

[B11 B12; 0 B22][A11 A12; 0 A22] = [I_k 0; 0 1]

B11 A11 = I_k ⇒ A11 is a k×k upper triangular matrix that is invertible!

If the theorem is true for k×k matrices, then for an invertible upper triangular A ∈ F^{(k+1)×(k+1)}, the diagonal entries of A11 are nonzero and A22 ≠ 0!

So far we have shown: if a (k+1)×(k+1) upper triangular A is invertible, its diagonal entries are nonzero.

Now take A, (k+1)×(k+1) upper triangular, with nonzero entries on its diagonal:

A = [A11 A12; 0 A22], A11 k×k, A22 1×1

A11 is a k×k upper triangular matrix with non-zero entries on its diagonal. The theorem is true for k×k matrices, so there exists a k×k matrix C s.t. A11 C = C A11 = I_k.


Let's define

B = [C, −C A12 A22^{-1}; 0, A22^{-1}] = [A11^{-1}, −A11^{-1} A12 A22^{-1}; 0, A22^{-1}]

One can show AB = BA = I_{k+1}:

[A11 A12; 0 A22][A11^{-1}, −A11^{-1} A12 A22^{-1}; 0, A22^{-1}]
= [A11 A11^{-1}, A11(−A11^{-1} A12 A22^{-1}) + A12 A22^{-1}; 0, A22 A22^{-1}]
= [I_k, −A12 A22^{-1} + A12 A22^{-1}; 0, 1] = I_{k+1}
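The block recursion above suggests a short routine for inverting an upper triangular matrix with nonzero diagonal. This is a sketch of ours (exact arithmetic via Fraction), not code from the notes:

```python
from fractions import Fraction

def inv_upper(U):
    # Invert an upper triangular matrix by the block recursion above:
    # [[A11, A12], [0, A22]]^-1 = [[A11^-1, -A11^-1 A12 A22^-1],
    #                              [0,       A22^-1]],
    # where A22 is the bottom-right 1x1 entry. Assumes every diagonal
    # entry is nonzero -- exactly the theorem's condition.
    n = len(U)
    if n == 1:
        return [[Fraction(1, 1) / U[0][0]]]
    C = inv_upper([row[:n - 1] for row in U[:n - 1]])    # A11^-1
    a22_inv = Fraction(1, 1) / U[n - 1][n - 1]           # A22^-1
    col = [U[i][n - 1] for i in range(n - 1)]            # A12 (a column)
    top = [-a22_inv * sum(C[i][j] * col[j] for j in range(n - 1))
           for i in range(n - 1)]                        # -A11^-1 A12 A22^-1
    B = [C[i] + [top[i]] for i in range(n - 1)]
    B.append([Fraction(0)] * (n - 1) + [a22_inv])
    return B

# example from the next lesson: U = [2 4 6; 0 3 6; 0 0 4]
B = inv_upper([[2, 4, 6], [0, 3, 6], [0, 0, 4]])
```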

A^T = transpose of A.

Example: A = [1 2 3; 4 5 6] ⇒ A^T = [1 4; 2 5; 3 6]

The ij entry of A^T is the ji entry of A.
A^H = Hermitian transpose: A^T with every entry replaced by its complex conjugate.

A = [1+i 2i 3; 4 5 6−7i] ⇒ A^H = [1−i 4; −2i 5; 3 6+7i]
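A^T and A^H on the running example, sketched with Python complex numbers (the helper names are ours):

```python
def transpose(A):
    # (A^T)_ij = A_ji
    return [list(col) for col in zip(*A)]

def hermitian(A):
    # A^H = transpose with every entry conjugated;
    # ints pass through .conjugate() unchanged
    return [[z.conjugate() for z in col] for col in zip(*A)]

A = [[1 + 1j, 2j, 3],
     [4, 5, 6 - 7j]]

assert transpose(A) == [[1 + 1j, 4], [2j, 5], [3, 6 - 7j]]
assert hermitian(A) == [[1 - 1j, 4], [-2j, 5], [3, 6 + 7j]]
```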

A ∈ F^{p×q}:
N_A = {x ∈ F^q : Ax = 0} – the nullspace of A, a subspace of F^q.
R_A = {Ax : x ∈ F^q} – the range of A, a subspace of F^p.

Why is N_A a subspace?
Ax = 0, Ay = 0 ⇒ A(x + y) = Ax + Ay = 0 + 0 = 0
A(αx) = α(Ax) = α·0 = 0

(One needs to check all the vector space properties, but they hold…)

Matrix Multiplication
A is p×q, B is q×r.

Write A in terms of its columns, A = [a1 a2 … aq], or in terms of its rows, A = [a^1; a^2; ⋮; a^p]. Similarly for B.

For example, [1 2 3; 4 5 6] = [u1 u2 u3] with u1 = [1; 4].

AB = A[b1 … br] = [Ab1 Ab2 … Abr], and likewise AB = [a^1; ⋮; a^p]B = [a^1 B; ⋮; a^p B]

AB = Σ_{i=1}^{q} ai b^i (column i of A times row i of B),

because A = [a1 0 … 0] + [0 a2 0 … 0] + … + [0 … 0 aq], so
AB = [a1 0 … 0]B + [0 a2 0 … 0]B + … + [0 … 0 aq]B.

There are p×q matrices that are right invertible but not left invertible.
Similarly, there are p×q matrices that are left invertible and not right invertible.

This does not happen if p = q.

For example, [1 0 0; 0 1 0][1 0; 0 1; a b] = I_2 for any a, b.

Gaussian Elimination
A·x = b, where A (p×q) and b (p×1) are given.

Looking for solutions x… For instance:

3x1 + 2x2 = 6
2x1 + 3x2 = 4

Gaussian elimination is a method of passing to a new system of equations that is easier to work with.
This new system must have the same solutions as the original one.


U ∈ F^{p×q} is called upper echelon if the first nonzero entry in row i sits to the left of the first nonzero entry in row i+1.

Example:

U = [1 3 4 2 1; 0 6 0 0 2; 0 0 0 4 1; 0 0 0 0 0]

The first nonzero entry in each row is called a pivot. In the example we have 3 pivots (the entries 1, 6, and 4).

Consider the equation Ax = b with

A = [0 2 3 1; 1 5 3 4; 2 6 3 2], b = [1; 2; 1]

Two operations:
1) Interchange rows
2) Subtract a multiple of one row from another row

Each operation corresponds to multiplying on the left by an invertible matrix C.
We can therefore revert the process later:

Ax = b → CAx = Cb → C^{-1}(CAx) = C^{-1}Cb → Ax = b

Steps:

1) Construct the augmented matrix Ã = [A | b] = [0 2 3 1 | 1; 1 5 3 4 | 2; 2 6 3 2 | 1]

2) Interchange rows 1 and 2: Ã1 = [1 5 3 4 | 2; 0 2 3 1 | 1; 2 6 3 2 | 1]

3) Subtract 2 times row 1 from row 3: Ã2 = [1 5 3 4 | 2; 0 2 3 1 | 1; 0 −4 −3 −6 | −3]

4) Add 2 times row 2 to row 3: Ã3 = [1 5 3 4 | 2; 0 2 3 1 | 1; 0 0 3 −4 | −1]
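The elimination steps can be replayed in code on the augmented matrix (a sketch using exact Fraction arithmetic):

```python
from fractions import Fraction

# the augmented matrix [A | b] from the example
Ab = [[0, 2, 3, 1, 1],
      [1, 5, 3, 4, 2],
      [2, 6, 3, 2, 1]]
Ab = [[Fraction(x) for x in row] for row in Ab]

# interchange rows 1 and 2
Ab[0], Ab[1] = Ab[1], Ab[0]
# subtract 2 * row 1 from row 3
Ab[2] = [x - 2 * y for x, y in zip(Ab[2], Ab[0])]
# add 2 * row 2 to row 3
Ab[2] = [x + 2 * y for x, y in zip(Ab[2], Ab[1])]

# result is [U | b'] with U upper echelon:
# [1 5 3 4 | 2; 0 2 3 1 | 1; 0 0 3 -4 | -1]
```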


5) Solve the system Ux = [2; 1; −1], where U = [1 5 3 4; 0 2 3 1; 0 0 3 −4] and x = [x1; x2; x3; x4].

6) Work from the bottom up: 3x3 − 4x4 = −1

Solve for the pivot variable: x3 = (−1 + 4x4)/3

Note:

Ã1 = P1 Ã, P1 = [0 1 0; 1 0 0; 0 0 1], P1[a; b; c] = [b; a; c]

Ã2 = E1 Ã1, E1 = [1 0 0; 0 1 0; −2 0 1], E1[a; b; c] = [a; b; −2a + c]

Ã3 = E2 Ã2, E2 = [1 0 0; 0 1 0; 0 2 1], E2[a; b; c] = [a; b; 2b + c]

------- End of lesson 2


Gauss Elimination

A = [0 2 3 1; 1 5 3 4; 2 6 3 2], b = [1; 2; 1]. Try to solve Ax = b.

Ã = [A | b]

Two operations:
1) Permute (interchange) rows
2) Subtract a multiple of one row from another

(1) & (2) are implemented by multiplying Ax = b on the left by the appropriate invertible matrix:

Ax = b
P1 Ax = P1 b, where P1 is an appropriate permutation matrix
E1 P1 Ax = E1 P1 b, where E1 is a lower triangular matrix
⋮

Eventually you arrive at Ux = b', where U is an upper echelon matrix.

Ã → [1 5 3 4 | 2; 0 2 3 1 | 1; 0 0 3 −4 | −1] = [U | b']

Now we need to solve Ux = b':

x1 + 5x2 + 3x3 + 4x4 = 2
2x2 + 3x3 + x4 = 1
3x3 − 4x4 = −1

Work from the bottom up; solve for the pivot variables in terms of the others.

x3 = (−1 + 4x4)/3

2x2 = 1 − 3x3 − x4 = 1 − (−1 + 4x4) − x4 = 2 − 5x4, so x2 = 1 − (5/2)x4


x1 = 2 − 5x2 − 3x3 − 4x4 = 2 − 5(1 − (5/2)x4) − (−1 + 4x4) − 4x4 = −2 + (25/2)x4 − 8x4 = −2 + (9/2)x4

x = [x1; x2; x3; x4] = [−2; 1; −1/3; 0] + x4 [9/2; −5/2; 4/3; 1]

and indeed A[9/2; −5/2; 4/3; 1] = [0; 0; 0].
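One can verify this general solution by plugging both pieces back into Ax = b (a sketch; apply is our own helper for the matrix-vector product):

```python
from fractions import Fraction

A = [[0, 2, 3, 1],
     [1, 5, 3, 4],
     [2, 6, 3, 2]]
b = [1, 2, 1]

def apply(A, x):
    # matrix-vector product with exact arithmetic
    return [sum(Fraction(a) * xi for a, xi in zip(row, x)) for row in A]

x_part = [Fraction(-2), Fraction(1), Fraction(-1, 3), Fraction(0)]
x_hom  = [Fraction(9, 2), Fraction(-5, 2), Fraction(4, 3), Fraction(1)]

# A x_part = b and A x_hom = 0, so x_part + t * x_hom solves Ax = b for every t
assert apply(A, x_part) == b
assert apply(A, x_hom) == [0, 0, 0]
```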

Another example:

A = [0 0 3 4 7; 0 1 0 0 0; 0 2 3 6 8; 0 0 6 8 14], b = [b1; b2; b3; b4]

Try to solve Ax = b.

Ã = [A | b] → [0 1 0 0 0 | b2; 0 0 3 4 7 | b1; 0 0 0 2 1 | b3 − 2b2 − b1; 0 0 0 0 0 | b4 − 2b1]

Ax = b ⇔
x2 = b2
3x3 + 4x4 + 7x5 = b1
2x4 + x5 = b3 − 2b2 − b1
0 = b4 − 2b1

Solve for the pivot variables x2, x3, x4 and find that:

x = [0; b2; (3b1 + 4b2 − 2b3)/3; (b3 − 2b2 − b1)/2; 0] + x1 [1; 0; 0; 0; 0] + x5 [0; 0; −5/3; −1/2; 1]

This is a solution of Ax = b for any choice of x1 and x5, provided b4 = 2b1.
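A quick check of this solution formula for one concrete right-hand side satisfying the condition b4 = 2b1 (here b = (1, 2, 3, 2), our own choice):

```python
from fractions import Fraction as F

A = [[0, 0, 3, 4, 7],
     [0, 1, 0, 0, 0],
     [0, 2, 3, 6, 8],
     [0, 0, 6, 8, 14]]

def apply(A, x):
    return [sum(F(a) * xi for a, xi in zip(row, x)) for row in A]

b1, b2, b3, b4 = F(1), F(2), F(3), F(2)   # note b4 = 2*b1
# the particular solution from the formula (with x1 = x5 = 0)
x = [F(0), b2, (3*b1 + 4*b2 - 2*b3) / 3, (b3 - 2*b2 - b1) / 2, F(0)]
assert apply(A, x) == [b1, b2, b3, b4]
```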

Let's track Gaussian elimination from a mathematical point of view: … E2 P2 E1 P1 A = U.

If A ∈ F^{p×q}, A ≠ 0_{p×q}, then there exists an invertible matrix G (p×p) such that GA = U is upper echelon.

Theorem: Let U ∈ F^{p×q} be upper echelon with k pivots, k ≥ 1. Then:

(1) k ≤ min{p, q}
(2) k = q ⇔ U is left invertible ⇔ N_U = {0}
(3) k = p ⇔ U is right invertible ⇔ R_U = F^p

Proof of (2): Suppose k = q.

If p = q, then U is upper triangular with its q pivots on the diagonal ⇒ no zero diagonal entries ⇒ U is invertible.

Otherwise p > q:

U = [U11; U21], where U11 is q×q upper triangular with nonzero diagonal entries and U21 = 0_{(p−q)×q}

Look for a left inverse of the form V = [V11, V12], with V11 (q×q) and V12 (q×(p−q)); take V11 = U11^{-1}, so V11 U11 = I_q. Then:

VU = [V11 V12][U11; U21] = V11 U11 + V12 U21 = I_q + 0 = I_q

(2a) We have shown that k = q ⇒ U is left invertible.
(2b) U left invertible ⇒ N_U = {0}:

Let x ∈ N_U; by definition this means Ux = 0. By assumption U is left invertible ⇒ there is a V ∈ F^{q×p} s.t. VU = I_q, so

0 = V·0 = V(Ux) = (VU)x = I_q x = x

(2c) N_U = {0} ⇒ U has q pivots:

N_U = {0} means Ux = 0 ⇔ x = 0. Writing U = [u1, u2, …, uq] in terms of its columns,

[u1, u2, …, uq][x1; x2; ⋮; xq] = x1 u1 + … + xq uq = 0 ⇔ x1 = x2 = … = xq = 0

so the columns of U are linearly independent. That forces k = q.

(3a) k = p ⇒ U is right invertible.

If q = p: U is upper triangular with nonzero diagonal entries ⇒ U is invertible.

If q > p: by swapping columns (multiplying on the right by a suitable permutation matrix P) you can bring the p pivot columns to the front:

UP = [U11 U12], where U11 is p×p upper triangular with nonzero diagonal elements.

Look for V = [V11; V21] so that [U11 U12][V11; V21] = U11 V11 + U12 V21 = I_p. Take V21 = 0 and V11 = U11^{-1} (U11 is invertible). Then (UP)V = I_p, so PV is a right inverse of U.

(It doesn't matter what V21 we choose: V11 can always be adjusted to compensate.)

(3b) x is the input, Ax is the output.The range is the set of outputs.


R_U = {Ux | x ∈ F^q}. Claim: given any b ∈ F^p, we can find an x ∈ F^q s.t. Ux = b.
Let W be a right inverse of U and let x = Wb. Then Ux = U(Wb) = (UW)b = I_p b = b.

(3c) R_U = F^p ⇒ p pivots:
If k < p, then the bottom p − k rows of U are zero rows, so every vector Ux = [⋮; 0; …; 0] ends with p − k zeros. Therefore we don't cover all of F^p.

Theorem: If A ∈ F^{p×q}, B ∈ F^{q×p} and BA = I_q, then p ≥ q.
Proof: Clearly A ≠ 0_{p×q}. Applying Gaussian elimination, we can find an invertible G (p×p) such that GA = U is upper echelon.

I_q = BA = B(G^{-1}U) = (BG^{-1})U ⇒ U is left invertible ⇒ U has q pivots ⇒ p ≥ q.

Theorem: Let V be a vector space over F, let {u1,…,uk} be a basis for V, and let {v1,…,vl} also be a basis for V. Then k = l.

For each i = 1,…,k: ui = Σ_{s=1}^{l} b_si vs. Define B as the l×k matrix of these coefficients.

For each j = 1,…,l: vj = Σ_{t=1}^{k} a_tj ut. Define A as the k×l matrix of these coefficients.

ui = Σ_{s=1}^{l} b_si Σ_{t=1}^{k} a_ts ut = Σ_{t=1}^{k} (Σ_{s=1}^{l} a_ts b_si) ut

Since the ut form a basis, ⇒ Σ_{s=1}^{l} a_ts b_si = { 0 if t ≠ i; 1 if t = i }

⇒ AB = I_k ⇒ l ≥ k (by the previous theorem).

But I can do this again symmetrically, with the u's replaced by the v's, and get k ≥ l. So k = l.


-------End of lesson 3


Theorem: Let V be a vector space over F. Let {u1,…,uk} be a basis for V and {v1,…,vl} also a basis for V. Then l = k.

That number is called the dimension of V.

For the zero vector space {0}, define the dimension to be 0.

A transformation (mapping, operator, …) from a vector space U over F into a vector space V over F is a rule (algorithm, formula) that assigns a vector Tu in V to each u ∈ U.

Example 1: Let U = R² = {[a; b] : a, b ∈ R}, V = R³.

T1[a; b] = [a²; 2b²; a² + b²]

Example 2: Let U = R² = {[a; b] : a, b ∈ R}, V = R³.

T2[a; b] = [1 3; 2 0; −4 6][a; b] (by matrix multiplication)

Definition: A transformation T from U into V is said to be linear if:
1) T(u1 + u2) = Tu1 + Tu2
2) T(αu) = αTu

Is T1 linear? No!

T1([1; 1] + [1; 1]) = T1[2; 2] = [4; 8; 8] ≠ 2·T1[1; 1] = 2·[1; 2; 2] = [2; 4; 4]
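The same additivity test in code, for both T1 and T2 (respects_addition is our own helper checking T(u + v) = Tu + Tv on one pair of vectors):

```python
def T1(v):
    a, b = v
    return [a**2, 2 * b**2, a**2 + b**2]

def T2(v):
    # multiplication by the matrix [1 3; 2 0; -4 6]
    a, b = v
    return [a + 3*b, 2*a, -4*a + 6*b]

def respects_addition(T, u, v):
    T_sum = T([u[0] + v[0], u[1] + v[1]])
    return T_sum == [x + y for x, y in zip(T(u), T(v))]

assert respects_addition(T1, [1, 1], [1, 1]) is False   # T1 is not linear
assert respects_addition(T2, [1, 1], [1, 1]) is True    # consistent with linearity
```

A single failing pair is enough to disprove linearity of T1; the passing check for T2 is of course only evidence, the proof being that T2 is matrix multiplication.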

Every linear transformation between finite dimensional spaces can be expressed in terms of matrix multiplication. If T is a linear transformation from a vector space U over F into a vector space V over F, then define:
N_T = {u ∈ U | Tu = 0_V} – null space of T.

R_T = {Tu | u ∈ U} – range space of T.


N_T is a subspace of U; R_T is a subspace of V.

For example: u1 ∈ N_T, u2 ∈ N_T ⇒ Tu1 = 0_V, Tu2 = 0_V ⇒ T(u1 + u2) = Tu1 + Tu2 = 0_V + 0_V = 0_V ⇒ u1 + u2 ∈ N_T

Conservation of dimension
Theorem: Let T be a linear transformation from a finite dimensional vector space U over F into a vector space V over F. Then dim U = dim N_T + dim R_T.

Proof:
Suppose N_T ≠ {0} and R_T ≠ {0}.
Let u1,…,uk be a basis for N_T.
Claim: R_T is also a finite dimensional vector space. Let v1,…,vl be a basis for R_T.

Since vi ∈ R_T, we can find yi ∈ U s.t. T yi = vi.

Claim: {u1,…,uk; y1,…,yl} are linearly independent.

To check: suppose there are coefficients a1,…,ak, b1,…,bl ∈ F s.t.
a1 u1 + … + ak uk + b1 y1 + … + bl yl = 0_U
Let's apply the transformation to both sides:
T(a1 u1 + … + ak uk + b1 y1 + … + bl yl) = T(0_U)
a1 Tu1 + … + ak Tuk + b1 Ty1 + … + bl Tyl = T 0_U = 0_V

Why is T 0_U = 0_V?

T 0_U = T(0_U + 0_U) = T 0_U + T 0_U ⇒ T 0_U = 0_V

Since u1,…,uk ∈ N_T, their images are zero! So we are left with b1 v1 + … + bl vl = 0_V.

(Linearly independent vectors are in particular never zero; otherwise you could multiply the zero one by a scalar other than zero and get a nontrivial combination equal to zero.)

{v1,…,vl} is a basis for R_T ⇒ v1,…,vl are linearly independent ⇒ b1 = b2 = … = bl = 0 ⇒ a1 u1 + … + ak uk = 0_U.

{u1,…,uk} is a basis (for N_T) ⇒ u1,…,uk are linearly independent ⇒ a1 = … = ak = 0.


Claim next: span{u1,…,uk, y1,…,yl} = U.

Let u ∈ U and consider Tu ∈ R_T.

⇒ Tu = Σ_{j=1}^{l} dj vj = Σ_{j=1}^{l} dj T yj = T(Σ_{j=1}^{l} dj yj)

⇒ T(u − Σ_{j=1}^{l} dj yj) = 0_V

⇒ u − Σ_{j=1}^{l} dj yj ∈ N_T ⇒ u − Σ_{j=1}^{l} dj yj = Σ_{i=1}^{k} ci ui ⇒ u ∈ span{u1,…,uk, y1,…,yl}

⇒ {u1,…,uk, y1,…,yl} is a basis for U ⇒ dim U = k + l = dim N_T + dim R_T

If R_T = {0}, T is forced to be the zero transformation, and the formula holds trivially.

Last time we showed: if U is upper echelon, p×q, with k pivots, then

(1) k ≤ min{p, q}
(2) k = q ⇔ U left invertible ⇔ N_U = {0}
(3) k = p ⇔ U right invertible ⇔ R_U = F^p

Objective: Develop analogous set of facts for A∈F p×q not necessarily upper echelon.

(1) If A ∈ F^{p×q}, A ≠ 0_{p×q}, then there exists an invertible matrix G s.t. GA = U is upper echelon. (If B1, B2, B3 are invertible p×p matrices, then B1 B2 B3 is also invertible.)

(2) Let A ∈ F^{p×q}, B ∈ F^{q×q}, C ∈ F^{p×p} s.t. B and C are invertible. Then:
a. R_AB = R_A and dim N_AB = dim N_A
b. dim R_CA = dim R_A and N_CA = N_A

Proof that R_AB = R_A:
Suppose x ∈ R_AB ⇒ there is a u s.t. x = (AB)u. But this also means x = A(Bu) ⇒ x ∈ R_A, i.e. R_AB ⊆ R_A.

Suppose x ∈ R_A ⇒ x = Av = A(BB^{-1})v = (AB)(B^{-1}v) ⇒ x ∈ R_AB, i.e. R_A ⊆ R_AB.

Hence R_A = R_AB.


Can we also show N_AB = N_A? No. Or at least not always…

Let A = [0 1; 0 0]. For x ∈ N_A: A[x1; x2] = [x2; 0] = [0; 0] ⇒ N_A = {[α; 0] | α ∈ F}

Let B = [0 1; 1 0] (B² = [1 0; 0 1], so B is invertible). Then AB = [1 0; 0 0] and

N_AB = {[0; β] | β ∈ F}

What we can show is that the dimensions of these spaces are equal.

Let {u1,…,uk} be a basis for N_A. Then {B^{-1}u1,…,B^{-1}uk} ⊆ N_AB:

Easy to see: AB(B^{-1}ui) = Aui = 0.

Claim: they are linearly independent.
Proof: α1 B^{-1}u1 + … + αk B^{-1}uk = 0
⇒ B^{-1}(α1 u1 + … + αk uk) = 0
⇒ α1 u1 + … + αk uk = B·0 = 0
But u1,…,uk is a basis (therefore independent), so α1 = α2 = … = αk = 0.
dim N_A = k ⇒ dim N_AB ≥ k. So we've shown dim N_AB ≥ dim N_A.

Now take a basis {v1,…,vl} of N_AB ⇒ ABvi = 0 ⇒ Bvi ∈ N_A.

Check that {Bv1,…,Bvl} are linearly independent (same as before…) ⇒ dim N_A ≥ l = dim N_AB. So we have two inequalities, resulting in dim N_A = dim N_AB.

Definition: If A ∈ F^{p×q}, rank A = dim R_A.

R_A = span of the columns of A: if A = [a1 a2 a3 a4], then

A[x1; x2; x3; x4] = x1 a1 + x2 a2 + x3 a3 + x4 a4


Say U = [▭ 0 0 0; 0 0 ▭ 0; 0 0 0 0] = [u1 u2 u3 u4] (▭ marks a pivot). Then R_U = span{u1, u3}, and

rank U = number of pivots in U.

Theorem: Let A ∈ F^{p×q}. Then:
(1) rank A ≤ min(p, q)
(2) rank A = q ⇔ A is left invertible ⇔ N_A = {0}
(3) rank A = p ⇔ A is right invertible ⇔ R_A = F^p

Proof: If rank A = 0, (1) is clearly true.
If rank A ≠ 0 ⇒ there exists an invertible G (p×p) such that GA = U is upper echelon. This implies rank A = rank GA = rank U = number of pivots in U.

Suppose rank A = q ⇒ rank GA = q ⇒ rank U = q ⇒ U is left invertible ⇒ there is a V s.t. VU = I_q.

This is the same as saying V(GA) = I_q ⇒ (VG)A = I_q ⇒ A is left invertible.

(3) is proved similarly.

------End of lesson 4


Recap: T a linear transformation from a finite dimensional vector space U over F into a vector space V over F, then

dim U = dim N_T + dim R_T

If A ∈ F^{p×q}, N_A = {x ∈ F^q | Ax = 0} (a subspace of F^q), and
q = dim N_A + dim R_A

Theorem: If A ∈ F^{p×q}, then:
(1) rank A ≤ min{p, q}
(2) rank A = q ⇔ A is left invertible ⇔ N_A = {0}
(3) rank A = p ⇔ A is right invertible ⇔ R_A = F^p = "everything"

Exploited fact: if U is upper echelon with k pivots, then:
(1) k ≤ min{p, q}
(2) k = q ⇔ U is left invertible ⇔ N_U = {0}
(3) k = p ⇔ U is right invertible ⇔ R_U = F^p

Gaussian elimination corresponds to finding an invertible G ∈ F^{p×p} such that GA = U, upper echelon.

B1, B2, B3 invertible ⇒ B1 B2 B3 is invertible.

rank A = dim R_A

If U is upper echelon with k pivots ⇔ rank U = k.

Implications
1) Systems of equations:

Ax = b, A = [a11 … a1q; ⋮ ⋱ ⋮; ap1 … apq], b = [b1; ⋮; bp]

Looking for vectors x = [x1; ⋮; xq] s.t. Ax = b (if any).

(a) A left invertible guarantees at most one solution.
To check: suppose Ax = b and Ay = b ⇒ Ax − Ay = b − b = 0 ⇒ A(x − y) = 0.
If B is a left inverse of A:
x − y = I(x − y) = (BA)(x − y) = B(A(x − y)) = B·0 = 0


(b) A right invertible guarantees at least one solution.
Let C be a right inverse of A and choose x = Cb. Then Ax = A(Cb) = (AC)b = I_p b = b.

2) If A ∈ F^{p×q} and A is both right invertible and left invertible, then p = q.
Earlier we showed that if B, C ∈ F^{q×p} satisfy BA = I_q and AC = I_p, then B = C.
Let rank A = k. A left invertible ⇒ k = q; A right invertible ⇒ k = p. So p = q.

3) A ∈ F^{p×q}, G ∈ F^{p×p} invertible, GA = U upper echelon. Let k = number of pivots in U ⇒ rank A = rank U = k.
Claim: The pivot columns of U are linearly independent and form a basis for R_U, and the corresponding columns of A are linearly independent and form a basis for R_A.

U = [1 u12 u13 u14; 0 0 2 u24; 0 0 0 0] = [u1 u2 u3 u4]

u1, u3 are linearly independent, and span{u1, u3} = R_U.

If GA = U and A = [a1 a2 a3 a4], claim: a1, a3 are linearly independent and span{a1, a3} = R_A. Suppose we can find coefficients such that αa1 + βa3 = 0.

A = G^{-1}U, so [a1 a2 a3 a4] = [G^{-1}u1 G^{-1}u2 G^{-1}u3 G^{-1}u4]
αa1 + βa3 = αG^{-1}u1 + βG^{-1}u3 = G^{-1}(αu1 + βu3) = 0
⇒ αu1 + βu3 = G·0 = 0 ⇒ α = β = 0, since u1 and u3 are linearly independent.

4) Related application
Given x1,…,xn ∈ F^p, find a basis for span{x1,…,xn}.
Define A = [x1 x2 … xn] (p×n) and bring it to upper echelon form U via Gaussian elimination.
The number of pivots in U = dim span{x1,…,xn}, and the corresponding columns of A will be a basis.

5) Calculating inverses
Let A be 3×3 and known to be invertible. How to calculate its inverse? Find x1, x2, x3 with

A x1 = [1; 0; 0], A x2 = [0; 1; 0], A x3 = [0; 0; 1]

A[x1 x2 x3] = [Ax1 Ax2 Ax3] = [1 0 0; 0 1 0; 0 0 1]

Gauss-Jordan
Do all these calculations in one shot.

Ã = [A | I_3], GÃ = [GA | G], GA = U upper echelon.

Suppose U = [2 4 6; 0 3 6; 0 0 4].

D = [1/2 0 0; 0 1/3 0; 0 0 1/4], DU = [1 2 3; 0 1 2; 0 0 1]

F1 = [1 0 −3; 0 1 −2; 0 0 1], F1[a; b; c] = [a − 3c; b − 2c; c]

In particular F1[3; 2; 1] = [3 − 3; 2 − 2; 1] = [0; 0; 1], so F1 clears the last column:

F1·(DU) = [1 0 −3; 0 1 −2; 0 0 1][1 2 3; 0 1 2; 0 0 1] = [1 2 0; 0 1 0; 0 0 1]

(One more step of the same kind clears the remaining 2 above the second pivot.)

To summarize, starting from Ã = [A | I_3]:
(1) Manipulate by permutations and by subtracting multiples of rows from lower rows: [GA | G], with GA upper triangular.
(2) Multiply through by a diagonal matrix: [DGA | DG], with DGA upper triangular with ones on the diagonal.
(3) Subtract multiples of rows from higher rows: [FDGA | FDG] = [I_p | FDG].
The inverse of A is FDG.
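The whole [A | I] → [I | A^{-1}] recipe can be sketched as one routine (permute, eliminate downward and upward, scale pivots to 1; assumes A is invertible; this is our sketch, not the notes' own code):

```python
from fractions import Fraction

def inverse(A):
    # Gauss-Jordan: form [A | I], then row-reduce the left block to I;
    # the right block is then A^-1. Assumes A is invertible.
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for i in range(n):
        # permute if the pivot position is zero
        if M[i][i] == 0:
            k = next(k for k in range(i + 1, n) if M[k][i] != 0)
            M[i], M[k] = M[k], M[i]
        # D: scale the pivot row so the pivot becomes 1
        M[i] = [x / M[i][i] for x in M[i]]
        # G and F: clear the rest of the pivot column, below and above
        for k in range(n):
            if k != i and M[k][i] != 0:
                M[k] = [x - M[k][i] * y for x, y in zip(M[k], M[i])]
    return [row[n:] for row in M]
```

On the example above, inverse([[2, 4, 6], [0, 3, 6], [0, 0, 4]]) returns the exact inverse with entries 1/2, −2/3, 1/4, etc.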

A∈F p×q, AT


A = [1 2 3; 4 5 6], A^T = [1 4; 2 5; 3 6]

The i,j entry of A is the j,i entry of A^T.

Claim: rank A = rank A^T.

GA = U upper echelon ⇒ rank A = rank U.

(GA)^T = A^T G^T = U^T, and G^T is invertible ⇒ rank A^T = rank U^T.

U = [▭ ∙ ∙ ∙; 0 0 ▭ ∙; 0 0 0 0], dim R_U = 2

U^T = [▭ 0 0; ∙ 0 0; ∙ ▭ 0; ∙ ∙ 0], rank U^T = number of pivots in U = rank U.

If T is a linear transformation from a vector space U over F into itself (x ∈ U ⇒ Tx ∈ U):
A subspace X of U is said to be invariant under T if for every x ∈ X, Tx ∈ X.

The simplest non-zero invariant subspaces (if they exist) are the one dimensional ones. Suppose X is one dimensional and invariant under T. Then X = {αx | α ∈ F} for any fixed nonzero x ∈ X, and Tx ∈ X means

Tx = λx for some λ ∈ F

In this case, λ is said to be an eigenvalue of T and x is said to be an eigenvector.
Important – x ≠ 0!!!

In other words, a vector x is called an eigenvector of T if:
(1) x ≠ 0
(2) Tx = λx for some λ ∈ C

That λ is called an eigenvalue of T.

If Tx = λx, then T²x = λ²x. So there's a "flexibility" of stretching.


Note there isn't "the" eigenvector: any nonzero multiple of an eigenvector is again an eigenvector.

Theorem: Let V be a vector space over C, T a linear transformation from V into V, and let U be a non-zero finite dimensional subspace of V that is invariant under T. Then there exists a vector u ∈ U and a λ ∈ C s.t. Tu = λu.

Proof:
Take any w ∈ U, w ≠ 0. Suppose dim U = n.
Consider w, Tw, …, T^n w. That's a list of n+1 vectors in an n dimensional space. Therefore, they are linearly dependent: we can find c0,…,cn, not all of which are zero, such that c0 w + c1 Tw + … + cn T^n w = 0.
Let k be the largest index with ck ≠ 0, so the expression reduces to

c0 w + c1 Tw + … + ck T^k w = 0

Consider the corresponding polynomial, which can be factored over C:

c0 + c1 x + … + ck x^k = ck (x − μ1)(x − μ2)…(x − μk)

and therefore ck (T − μk I)…(T − μ2 I)(T − μ1 I) w = 0.

Possibilities – either:
(1) (T − μ1 I)w = 0
(2) (T − μ1 I)w ≠ 0 but (T − μ2 I){(T − μ1 I)w} = 0
(3) (T − μ2 I)(T − μ1 I)w ≠ 0 but (T − μ3 I){(T − μ2 I)(T − μ1 I)w} = 0
⋮
(k) (T − μ_{k−1} I)…(T − μ1 I)w ≠ 0 but (T − μk I){(T − μ_{k−1} I)…(T − μ1 I)w} = 0

In every case some nonzero vector is sent to 0 by T − μj I, i.e. it is an eigenvector with eigenvalue μj.
----- End of lesson 5


Previously – on linear algebra
Theorem: Let V be a vector space over C, T a linear transformation from V into V, and let U be a non-zero finite dimensional subspace of V that is invariant under T (i.e. u ∈ U ⇒ Tu ∈ U). Then there exists a non-zero vector w ∈ U and a λ ∈ C s.t. Tw = λw.

(We could have worked just with U.)

Note: Tw = λw ⇔ (T − λI)w = 0 ⇔ N_{T−λI} ≠ {0}
Could have rephrased the conclusion as: if there is a non-zero finite dimensional subspace U of V that is invariant under T, then there is a one dimensional subspace X of U that is invariant under T.

Implication
Let A ∈ C^{n×n}. Then there exists a point λ ∈ C s.t. N_{A−λI_n} ≠ {0} – because the transformation T from C^n into C^n defined by the formula Tu = Au is linear, and C^n itself is a finite dimensional invariant subspace.

Proof: Take any vector w ∈ C^n, w ≠ 0.

Consider the set of vectors {w, Aw, …, A^n w}. These are n+1 vectors in C^n – an n dimensional space. Since they are dependent, there are coefficients c0,…,cn, not all zero, such that

c0 w + c1 Aw + … + cn A^n w = 0

Same as to say (c0 I_n + c1 A + … + cn A^n) w = 0.

Let k = max{j | cj ≠ 0}. Claim: k ≥ 1 (trivial).

(c0 I_n + c1 A + … + ck A^k) w = 0, ck ≠ 0

Look at the ordinary polynomial and factor it:

c0 + c1 x + c2 x² + … + ck x^k = ck (x − μ1)(x − μ2)…(x − μk), ck ≠ 0

⇒ c0 I_n + c1 A + … + ck A^k = ck (A − μ1 I_n)(A − μ2 I_n)…(A − μk I_n)

The factors commute, so we may reverse the order:

ck (A − μk I_n)…(A − μ1 I_n) w = 0

Suppose k = 3. Either:

(1) (A − μ1 I_n)w = 0
(2) (A − μ1 I_n)w ≠ 0 and (A − μ2 I_n){(A − μ1 I_n)w} = 0
(3) (A − μ2 I_n)(A − μ1 I_n)w ≠ 0 and (A − μ3 I_n)(A − μ2 I_n)(A − μ1 I_n)w = 0

Either way, we have found what we are looking for: a λ s.t. (A − λI_n)x = 0 with x ≠ 0.


Question: Suppose A ∈ R^{n×n}. Can one guarantee that A has at least one real eigenvalue? No!

A = [0 1; −1 0]

Looking for [0 1; −1 0][a; b] = λ[a; b] ⇔ b = λa, −a = λb ⇔ b = −λ²b ⇔ (1 + λ²)b = 0

If b = 0 ⇒ a = 0 ⇒ the entire vector is zero! Not acceptable…
So λ² + 1 = 0, i.e. λ = ±i.
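Over C the computation does succeed: λ = i with eigenvector u = (1, i), which can be checked directly:

```python
# A = [0 1; -1 0] has no real eigenvalue; over C, lambda = i works
# with eigenvector u = (1, i):
A = [[0, 1], [-1, 0]]
u = [1, 1j]
lam = 1j

Au = [sum(a * x for a, x in zip(row, u)) for row in A]
assert Au == [lam * x for x in u]   # A u = i * u, so i is an eigenvalue
```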

The result for A ∈ C^{n×n} says that there is always a one dimensional subspace of C^n that is invariant under A.
For A ∈ R^{n×n}, there always exists at least one subspace of R^n of dimension at most two that is invariant under A.

Implication:
Let A ∈ C^{n×n}. Then there exists a point λ ∈ C such that N_{A−λI_n} ≠ {0}.
Suppose N_{A−λI_n} ≠ {0} for k distinct points λ1,…,λk in C.
That is to say, there are non-zero vectors u1,…,uk with A uj = λj uj, 1 ≤ j ≤ k.
Claim: u1,…,uk are linearly independent.

Let's check for k = 3. Suppose c1 u1 + c2 u2 + c3 u3 = 0. Then

(A − λ1 I_n)(c1 u1 + c2 u2 + c3 u3) = (A − λ1 I_n)·0 = 0

Let's break it up. For any α:

(A − α I_n)uj = A uj − α uj = λj uj − α uj = (λj − α)uj

Continuing our calculation:
c1(λ1 − λ1)u1 + c2(λ2 − λ1)u2 + c3(λ3 − λ1)u3 = c2(λ2 − λ1)u2 + c3(λ3 − λ1)u3 = 0
We've reduced the problem! (We know the λ's are distinct, so the factors λj − λ1 are not zero.)
We can do it again:

(A − λ2 I_n)(c2(λ2 − λ1)u2 + c3(λ3 − λ1)u3) = c2(λ2 − λ1)(λ2 − λ2)u2 + c3(λ3 − λ1)(λ3 − λ2)u3 = c3(λ3 − λ1)(λ3 − λ2)u3 = 0

Now c3 must be zero, since the other scalars and the vector in the expression are non-zero. We can repeat the process to knock out c2 and then c1. This generalizes to any k, not just 3.


Au j=λ ju j j=1 ,…,k λi≠ λ j if i≠ j

A[u1 … uk] = [Au1 … Auk] = [λ1u1 … λkuk] = [u1 … uk] diag(λ1, …, λk)

A ∈ R^{n×n}: there is at least one two-dimensional subspace of R^n that is invariant under A. Translate the complex statement: there are u ∈ C^n, λ ∈ C s.t. Au = λu. Write u = x + iy, λ = α + iβ. Then A(x+iy) = (α+iβ)(x+iy) = (αx−βy) + i(βx+αy), and since A ∈ R^{n×n}: Ax = αx−βy, Ay = βx+αy.

W = span{x, y} with real coefficients. w ∈ W ⇒ Aw ∈ W: w = ax+by, Aw = aAx + bAy = a(αx−βy) + b(βx+αy) = (aα+bβ)x + (−aβ+bα)y

Example: If A ∈ C^{n×n} with k distinct eigenvalues, then k ≤ n. Because the eigenvectors u1, …, uk are linearly independent, and they sit inside an n-dimensional space C^n.

The matrix of the Eigenvectors:

[u1…uk ] is an n×k matrix with rank k .

If k = n, then A[u1 … un] = [u1 … un] diag(λ1, …, λn)

AU = UD. U is invertible (rank n). So we can rewrite this as: A = UDU^{-1}

If you need to raise some matrix A to the power of 100: suppose A = UDU^{-1}. Then A² = UDU^{-1}UDU^{-1} = UD²U^{-1},

so A^100 = UD^100 U^{-1}, and

D^100 = diag(α^100, β^100, …, γ^100)
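A minimal numerical sketch of this trick (the matrix and the power 10 are my own choices, not from the notes):

```python
import numpy as np

# Diagonalizable example with distinct eigenvalues: A^k = U D^k U^{-1}.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

d, U = np.linalg.eig(A)          # columns of U are eigenvectors, d the eigenvalues
via_diag = U @ np.diag(d**10) @ np.linalg.inv(U)
direct = np.linalg.matrix_power(A, 10)

print(np.allclose(via_diag, direct))  # True
```

The point of the factorization is that powering D is elementwise, so one matrix diagonalization replaces 99 matrix multiplications.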


Sums of Subspaces

Let U, V be subspaces of a vector space W over F.
U+V = {u+v | u∈U, v∈V}
Claim: U+V is a vector space.

(u1+v1) + (u2+v2) = (u1+u2) + (v1+v2), where u1+u2 ∈ U and v1+v2 ∈ V

Lemma: Let U, V be subspaces of W, a finite dimensional vector space. Then dim(U+V) = dim U + dim V − dim(U∩V). Another claim: U∩V is a vector space.

Proof: Suppose {w1,…,wk} is a basis for U∩V. U∩V ⊆ U; if really U∩V ≠ U, extend to a basis {w1,…,wk, u1,…,us} for U. Also U∩V ⊆ V; suppose U∩V ≠ V, and extend to a basis {w1,…,wk, v1,…,vt} for V.

Claim: {w1 ,…,w k ,u1,…,us , v1 ,…,v t } basis for U+VNeed to show:

(a) u+v is a linear combination of these:
u = c1w1+…+ckwk + d1u1+…+dsus
v = a1w1+…+akwk + b1v1+…+btvt
u+v = (a1+c1)w1 + … + (ak+ck)wk + d1u1+…+dsus + b1v1+…+btvt

(b) Show {w1…wk , u1 ,…,us , v1 ,…,v t } are linearly independent.

Claim: (b) is correct.

Suppose a1w1+…+akwk + b1u1+…+bsus + c1v1+…+ctvt = 0. Denote v = c1v1+…+ctvt; then

a1w1+…+akwk + b1u1+…+bsus = −v

The left-hand side is in U, and −v ∈ V.

We don't know who −v is, but it is definitely in V∩U! It has to be expressible this way:

(−c1)v1+…+(−ct)vt = α1w1+…+αkwk ⇒ each ci = 0 (since {w1,…,wk, v1,…,vt} is linearly independent). What remains, a1w1+…+akwk + b1u1+…+bsus = 0, is a combination of a basis of U, so all the ai and bi vanish as well.

dim (U+V )=k+s+t=( k+s )+(k+t )−k=dimU+dimV −dimU ∩V
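The dimension formula can be checked numerically by representing subspaces with basis matrices: dim(U+V) is the rank of the stacked bases. The random subspaces below are my own illustration (assuming the random vectors are in generic position, which holds for this seed).

```python
import numpy as np

rng = np.random.default_rng(0)

# Subspaces of R^6 given by basis matrices (columns).
BU = rng.standard_normal((6, 3))                           # dim U = 3
BV = np.hstack([BU[:, :1], rng.standard_normal((6, 2))])   # shares one direction with U

dim_U = np.linalg.matrix_rank(BU)
dim_V = np.linalg.matrix_rank(BV)
dim_sum = np.linalg.matrix_rank(np.hstack([BU, BV]))       # dim(U+V)
dim_cap = dim_U + dim_V - dim_sum                          # lemma gives dim(U cap V)

print(dim_U, dim_V, dim_sum, dim_cap)  # 3 3 5 1 (generically)
```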

----- End of lesson 6


U ,V subspaces of a vector space W over F.

U+V ={u+v|u∈U ,v∈V }

U ∩V= {x|x∈U , x∈V }

Both of these are vector spaces. We established the dimension formula dim(U+V) = dim U + dim V − dim(U∩V).
The sum U+V is said to be direct (called a direct sum) if dim(U+V) = dim U + dim V.
(Consequence: the sum U+V is direct ⇔ U∩V = {0}.)
In a direct sum, the decomposition is unique: if x = u1+v1 with u1∈U, v1∈V, and also x = u2+v2 with u2∈U, v2∈V, then u1=u2 and v1=v2.

Why?

u1+v1 = u2+v2 ⇒ u1−u2 = v2−v1, where u1−u2 ∈ U and v2−v1 ∈ V.

So u1−u2 ∈ U∩V. From our assumption, u1−u2 = 0. Similarly, v2−v1 = 0. This means u1 = u2 and v1 = v2.

U1, …, Uk subspaces of a vector space W: then U1+…+Uk = {u1+…+uk | uj ∈ Uj, j = 1…k}. U1+…+Uk is said to be direct if dim(U1+…+Uk) = dim U1 + … + dim Uk.

It’s very tempting to jump to the conclusion (for example) that U 1+U 2+U 3 is a direct sum ⇔U1∩U 2={0 } ,U 1∩U 3={0 } ,U 2∩U 3={0 }However, this is not enough.

Consider U1 = span{(1,0,0)^T}, U2 = span{(0,1,0)^T}, U3 = span{(1,1,0)^T}.
U1∩U2 = {0}, U1∩U3 = {0}, U2∩U3 = {0}, and dim U1 + dim U2 + dim U3 = 3, but dim(U1+U2+U3) = 2.
So forget the false conclusion, and just remember the definition: dim(U1+…+Uk) = dim U1 + … + dim Uk.
Generally we say U1+…+Uk is direct ⇔ every collection of non-zero vectors, at most one from each subspace, is linearly independent.
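The counterexample is easy to verify numerically (a small sketch, not in the notes):

```python
import numpy as np

# U1 = span{e1}, U2 = span{e2}, U3 = span{e1+e2} in R^3: pairwise intersections
# are trivial, yet the three dimensions add to 3 while the sum has dimension 2.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

basis_sum = np.column_stack([e1, e2, e1 + e2])
print(np.linalg.matrix_rank(basis_sum))  # 2
```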


Suppose k=4, with subspaces U, V, X, Y. If the sum U+V+X+Y is direct, then for all non-zero u∈U, v∈V, x∈X, y∈Y, the set {u, v, x, y} is linearly independent.

Every A ∈ C^{n×n} has at least one eigenvalue, i.e. there is a point λ ∈ C and a vector x ≠ 0 such that Ax = λx (same as saying (A−λI)x = 0).

That is equivalent to saying N (A− λI ) ≠ {0 }

Suppose A has k distinct eigenvalues λ1, …, λk ∈ C. We showed that if Auj = λjuj, j = 1…k, then u1, …, uk are linearly independent.

Same as to say N ( A− λ1 I )+…+N ( A− λk I ) is a direct sum.

Same as to say dim N ( A− λ1 I )+…+N ( A− λk I )=dimN (A−λ 1 I )+…+dim N (A−λ k I )

Criterion: A matrix A ∈ C^{n×n} is diagonalizable ⇔ one can find n linearly independent eigenvectors.

Same as to say dim N ( A− λ1 I )+…+dim N ( A− λk I )=n.

A5× 5, λ1 ,…, λ5 distinct Eigenvalues. Au j=λ ju j , j=1 ,…,5u j≠0

A[u1 … u5] = [Au1 … Au5] = [λ1u1 … λ5u5] = [u1 … u5] diag(λ1, …, λ5)

AU = UD. Additional fact: u1, …, u5 are linearly independent. Therefore U is invertible (dim R_U = 5; U is a 5×5 matrix). So A = UDU^{-1}.

dim N(A−λjI5) ≥ 1; call this dimension γj.

γ1+γ2+…+γ5 ≤ 5 ⇒ γ1 = γ2 = … = γ5 = 1.

Now let A be a 5×5 matrix with 2 distinct eigenvalues λ1 ≠ λ2. The crucial issue is the dimension of the null spaces: if dim N(A−λ1I) = 3 and dim N(A−λ2I) = 2, then Au1 = λ1u1, Au2 = λ1u2, Au3 = λ1u3, Au4 = λ2u4, Au5 = λ2u5, where {u1, u2, u3} is a basis for N(A−λ1I)

and {u4, u5} a basis for N(A−λ2I).


A[u1 u2 u3 u4 u5] = [u1 u2 u3 u4 u5] diag(λ1, λ1, λ1, λ2, λ2)

(1) Find the distinct eigenvalues λ1, …, λk of A.

(2) Find basis for each space N ( A− λi I ) ,i=1…k

(3) Stack the resulting vectors: AU = UD (the columns of U are taken from the bases of N(A−λiI), i = 1…k, and are always linearly independent)

Question: Do you always have enough columns? No. U could be non-invertible, and then A is not diagonalizable.

Example:

A = [2 1 0; 0 2 1; 0 0 2]

(A − λI3) = [2−λ 1 0; 0 2−λ 1; 0 0 2−λ]

(A − λI3) is invertible (nullspace is {0}) if λ ≠ 2, not invertible (nullspace is not {0}) if λ = 2.

A − 2I3 = [0 1 0; 0 0 1; 0 0 0]

Let’s look for vectors in the null space:

[0 1 0; 0 0 1; 0 0 0][α; β; γ] = [β; γ; 0] = 0

So β = γ = 0 ⇒ the nullspace is span{(1,0,0)^T}

Only one Eigenvector!

U = [1 0 0; 0 0 0; 0 0 0]
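A numerical confirmation that this matrix is not diagonalizable (a sketch, not from the notes): the geometric multiplicity of the only eigenvalue is 1, far short of the 3 independent eigenvectors that diagonalization requires.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])

# Geometric multiplicity of lambda = 2: dim N(A - 2I) = 3 - rank(A - 2I).
geom = 3 - np.linalg.matrix_rank(A - 2 * np.eye(3))
print(geom)  # 1 -> only one eigenvector, so A is not diagonalizable
```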


Suppose A = [2 1 0 0 0; 0 2 1 0 0; 0 0 2 0 0; 0 0 0 3 1; 0 0 0 0 3],

A − λI5 = [2−λ 1 0 0 0; 0 2−λ 1 0 0; 0 0 2−λ 0 0; 0 0 0 3−λ 1; 0 0 0 0 3−λ]

A − 2I = [0 1 0 0 0; 0 0 1 0 0; 0 0 0 0 0; 0 0 0 1 1; 0 0 0 0 1], N(A−2I) = span{(1,0,0,0,0)^T}

Remember: Bp×q :q=dim N B+dim RB

A − 3I = [−1 1 0 0 0; 0 −1 1 0 0; 0 0 −1 0 0; 0 0 0 0 1; 0 0 0 0 0], N(A−3I) = span{(0,0,0,1,0)^T}

(A−2I)² = [0 0 1 0 0; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 1 2; 0 0 0 0 1]

(using the block rule [A11 0; 0 A22]^k = [A11^k 0; 0 A22^k])

N((A−2I)²) = span{e1, e2} – added a generalized eigenvector

(A−2I)³ = [0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 1 3; 0 0 0 0 1]

N((A−2I)³) = span{e1, e2, e3} – added a generalized eigenvector


(A−2I)^k, k ≥ 4, will still be something of the form [0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 1 k; 0 0 0 0 1], so we won't get any new vectors in the nullspace from higher powers.

N A−3 I=span {e4 }N (A−3 I )2=span {e4 , e5 }

Summary: dim N (A−2 I )3+dimN ( A−3 I )2=dimN ( A−2 I )5+dim N (A−3 I )5=5

1) B ∈ C^{n×n}: N_B ⊆ N_{B²} ⊆ … But there is a saturation! If N_{B^k} = N_{B^{k+1}}, then N_{B^{k+1}} = N_{B^{k+2}} and so on.

Note: N_{B^j} = N_{B^k} ∀ j ≥ k

If x ∈ N_{B^j} ⇒ B^{j+1}x = B(B^j x) = B·0 = 0. Translation: x ∈ N_{B^{j+1}}, i.e. N_{B^j} ⊆ N_{B^{j+1}}.

Suppose N_{B^k} = N_{B^{k+1}}. Take x ∈ N_{B^{k+2}} ⇒ B^{k+2}x = 0 ⇒ B^{k+1}(Bx) = 0 ⇒ Bx ∈ N_{B^{k+1}}.

But we assumed N_{B^{k+1}} = N_{B^k} ⇒ Bx ∈ N_{B^k} ⇒ B^{k+1}x = 0 ⇒ x ∈ N_{B^{k+1}}.

2) B^k x = 0 ⇒ B^n x = 0, always. Suppose n = 5, B is 5×5: B³x = 0 ⇒ B⁵x = 0 (easy). Interesting: B⁶x = 0 ⇒ B⁵x = 0. Because if it is not true, then N_{B⁵} ⊊ N_{B⁶}, i.e. saturation happens only after step 5 ⇒ dim N_{B⁶} ≥ 6. But a 5×5 matrix can't have a nullspace of such dimension!
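The saturation of the nullspace chain can be watched directly. The test matrix below is a hypothetical example of mine: one 3×3 nilpotent Jordan cell, one 1×1 zero cell, and one invertible 1×1 cell.

```python
import numpy as np

def null_dim(M):
    # dim N(M) = number of columns - rank(M), by conservation of dimensions
    return M.shape[1] - np.linalg.matrix_rank(M)

B = np.zeros((5, 5))
B[0, 1] = B[1, 2] = 1.0   # 3x3 Jordan cell at eigenvalue 0
B[4, 4] = 1.0             # invertible 1x1 block (e4 gives the extra 1x1 zero cell)

dims = [null_dim(np.linalg.matrix_power(B, k)) for k in range(1, 6)]
print(dims)  # [2, 3, 4, 4, 4] -> strictly increasing until it saturates
```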

Let A∈C n×nwith k distinct eigenvalues.

i.e. N A−λ I n≠0⇔λ=λ1 or λ=λ2 or …λ=λk

γ j=dim N (A−λ j In ) called the geometric multiplicity

α j=dim N(A−λ j In)n called the algebraic multiplicity

It's clear that γj ≤ αj ⇒ γ1+…+γk ≤ α1+…+αk = n (the equality with n is not obvious and will be proved later on)

3) N_{B^n} ∩ R_{B^n} = {0}. Let x ∈ N_{B^n} ∩ R_{B^n}:

B^n x = 0 and x = B^n y ⇒ B^n B^n y = 0 ⇒ y ∈ N_{B^{2n}} = N_{B^n} (by 2) ⇒

B^n y = 0 ⇒ x = 0

4) Cn=NB n+RBn and this sum is direct

3 implies that the sum is direct. That means:


dim(N_{B^n} + R_{B^n}) = dim N_{B^n} + dim R_{B^n} = n (conservation of dimensions)

Remark: Is it true that dim(N_C + R_C) = dim N_C + dim R_C for any square matrix C? Answer: NO!

Consider: C = [0 1; 0 0], N_C = span{(1,0)^T}, R_C = span{(1,0)^T}

dim N_C + dim R_C = 2 ≠ dim(N_C + R_C) = 1
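The counterexample in numpy (a sketch of mine): the null space and the range of C are the same line, so their sum is one-dimensional even though the dimensions add to 2.

```python
import numpy as np

C = np.array([[0.0, 1.0],
              [0.0, 0.0]])

rank_C = np.linalg.matrix_rank(C)   # dim R_C = 1
null_C = 2 - rank_C                 # dim N_C = 1

e1 = np.array([1.0, 0.0])
print(null_C + rank_C)              # 2, yet N_C + R_C = span{e1} has dimension 1
print(np.allclose(C @ e1, 0))       # e1 is in the null space
```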

---- end of lesson 7


A∈C n×n

λ Eigenvalue means N A−λ In≠ {0 }

A has at least one eigenvalue in C. If A has k distinct eigenvalues, then:

γj = dim N(A−λjIn) – geometric multiplicity

αj = dim N((A−λjIn)^n) – algebraic multiplicity

γj ≤ αj always; γ1+…+γk ≤ α1+…+αk = n (OBJECTIVE)

EX. A is 5×5 with 2 distinct eigenvalues λ1, λ2. If γ1+γ2 = n (equivalent to the statement N(A−λ1I5) + N(A−λ2I5) = C^5),

then A[u1 … u5] = [u1 … u5]D (5 linearly independent eigenvectors; D is diagonal and U is invertible). If γ1 = 3 and γ2 = 2, then: A[u1 … u5] = [λ1u1 λ1u2 λ1u3 λ2u4 λ2u5] =

[u1 u2 u3 u4 u5] diag(λ1, λ1, λ1, λ2, λ2)

x ∈ C^5:

x = Σ_{j=1}^5 cj uj

Ax = A(c1u1 + … + c5u5) = c1λ1u1 + c2λ1u2 + c3λ1u3 + c4λ2u4 + c5λ2u5

We will show that for A ∈ C^{n×n} with k distinct eigenvalues λ1, …, λk one can find an invertible n×n U and an upper triangular n×n J of special form such that AU = UJ.

J has γj Jordan cells in the block belonging to λj.

Theorem: A∈C n×n , λ1 ,…, λk distinct Eigenvalues, then:

C^n = N((A−λ1In)^n) ∔ … ∔ N((A−λkIn)^n) and the sum is direct.

If e.g. k = 3:

{u1, …, ur} basis for N((A−λ1In)^n)

{v1, …, vs} basis for N((A−λ2In)^n)

{w1, …, wt} basis for N((A−λ3In)^n)

Then {u1, …, ur, v1, …, vs, w1, …, wt} is a basis for C^n.


1) B∈Cn× n ,RBn∩ NBn={0 }

2) Cn=NB n+RBn and this sum is direct

3) If α ≠ β, then N((A−αIn)^n) ⊆ R((A−βIn)^n)

Binomial theorem:

a, b ∈ C ⇒ (a+b)^n = Σ_{j=0}^n (n choose j) a^j b^{n−j}

(a+b)² = (2 choose 0)a⁰b² + (2 choose 1)ab + (2 choose 2)a²b⁰ = b² + 2ab + a²

Can we do something similar for matrices?

(A+B)² =? (2 choose 0)A⁰B² + (2 choose 1)AB + (2 choose 2)A²B⁰ = B² + 2AB + A²

But expanding directly: (A+B)² = (A+B)A + (A+B)B = A² + BA + AB + B²

Correct only if AB = BA
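A quick check of the commutativity requirement (my own example matrices; B is chosen as a polynomial in A so that AB = BA):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = 2.0 * np.eye(3) + 0.5 * A      # a polynomial in A, so AB = BA

lhs = np.linalg.matrix_power(A + B, 2)
rhs = B @ B + 2 * (A @ B) + A @ A  # binomial formula, valid since AB = BA
print(np.allclose(lhs, rhs))  # True
```

With a generic non-commuting B the same comparison fails, since BA + AB ≠ 2AB.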

We accept the binomial formula as correct for general n (for commuting matrices), though we could verify it ourselves. Now A − αIn = (A − βIn) + (β−α)In, and A − βIn commutes with (β−α)In, so:

(A−αIn)^n = ((A−βIn) + (β−α)In)^n = Σ_{j=0}^n (n choose j) (A−βIn)^j (β−α)^{n−j}

x ∈ N((A−αIn)^n) ⇒ (A−αIn)^n x = 0

0 = (β−α)^n x + Σ_{j=1}^n (n choose j) (β−α)^{n−j} (A−βIn)^j x

x = −(β−α)^{−n} Σ_{j=1}^n (n choose j) (β−α)^{n−j} (A−βIn)^j x
  = (A−βIn) { −(β−α)^{−n} Σ_{j=1}^n (n choose j) (β−α)^{n−j} (A−βIn)^{j−1} x }

So x = (A−βIn) P(A) x for some polynomial P in A. We can substitute this expression for x into itself repeatedly:

x = (A−βIn)² P(A)² x = … = (A−βIn)^n P(A)^n x ⇒ x ∈ R((A−βIn)^n)


4) If W = U + V and the sum is direct, let X be a subspace of W; is then X = X∩U + X∩V? This is not always true!

W = R², U = span{(1,0)^T}, V = span{(0,1)^T}, X = span{(1,1)^T}

X∩U = {0}, X∩V = {0}

X ≠ X∩U + X∩V. Not always true!

It is true if U is a subset of X or V is a subset of X.

Let U ⊆ X and x ∈ X. Since X ⊆ W, x ∈ W, so x = u+v for some u∈U, v∈V. Since U ⊆ X, u ∈ X, hence v = x−u ∈ X. So X = U∩X + V∩X.

5) Cn=N ( A− λ1 In )n+…+N ( A− λk In)n sum is direct.

For simplicity fix k = 3. To simplify the writing: Nj = N((A−λjIn)^n), Rj = R((A−λjIn)^n).

Wish to show C^n = N1 + N2 + N3 (sum is direct).

(2) ⇒

1- C^n = N1 ∔ R1, C^n = N2 ∔ R2, C^n = N3 ∔ R3

(∔ means the sum is direct)

(3) ⇒ N2 ⊆ R1, and C^n = N2 ∔ R2, so:

2- [R1 = R1∩C^n = R1∩(N2 ∔ R2) = N2 ∔ (R1∩R2)]
3- [R1∩R2 = (R1∩R2)∩(N3 ∔ R3) = N3 ∔ (R1∩R2∩R3)]


Cn=N 3∔R3 N 3⊆R2 , N3⊆R1⇒N3⊆R1∩R2

Cn=N1∔ (N 2∔R1∩R2 )=N1∔ (N2∔ {N3∔R1∩R2∩R3 }) Cn=N 1∔N 2∔N3∔R1∩R2∩R3

What about R1∩R2∩R3?

Claim: R1∩R2∩R3 is invariant under A, i.e. x ∈ R1∩R2∩R3 ⇒ Ax ∈ R1∩R2∩R3.

Rj = R((A−λjIn)^n): x ∈ Rj ⇒ x = (A−λjIn)^n y

Ax = A(A−λjIn)^n y = (A−λjIn)^n (Ay) (these matrices are interchangeable), so Ax ∈ Rj.

R1∩R2∩R3 is a vector space, a subspace of C^n, invariant under A. ⇒ Either R1∩R2∩R3 = {0}, or there exist u ∈ R1∩R2∩R3, u ≠ 0, and λ ∈ C such that Au = λu, with λ = λ1 or λ2 or λ3. If λ = λ1, then u ∈ N(A−λ1I) ⊆ N1, but also u ∈ R1, and N1∩R1 = {0} ⇒ u = 0, a contradiction.

So the second possibility cannot happen.

So we conclude that R1∩R2∩R3 = {0}. Therefore: C^n = N1 ∔ N2 ∔ N3

Cn=N ( A− λ1 In )n∔N ( A− λ2 In)n∔N (A−λ 3 In )n

Let

{u1 ,…,ur } be any basis for N 1

{v1,…,v s } be any basis for N 2

{w1 ,…,w t } be any basis for N3

Then {u1, …, ur, v1, …, vs, w1, …, wt} is a basis for C^n

A [u1…ur v1…vsw1…w t ] =abbreviate A [U V W ]

A N j⊆N j

If x∈N j then ( A−λ j I n )n x=0


Is Ax∈N j?Yes. We can interchange:

( A−λ j I n )n Ax=A ( A−λ j I n )n x=A0=0

Au j∈ span {u1 ,…,ur } j=1 ,…, rAv j∈ span {v1 ,…, vs } j=1 ,…, sAw j∈ span {w1 ,…,wt } j=1 ,…, t

x∈ span {u1,…,ur }⇒ x=Ua⇒ Ax=AUa

A[U V W] = [U V W][G1 0 0; 0 G2 0; 0 0 G3]

A = T [G1 0 0; 0 G2 0; 0 0 G3] T^{-1}, where T = [U V W]

Next we will choose basis for each of the spaces of N j which is useful!

----- End of lesson 8


Theorem: If A∈Cn×n with k distinct Eigenvalues λ1,…, λk , then

C^n = N((A−λ1I)^n) ∔ … ∔ N((A−λkI)^n) and this sum is direct.

Implication (write for k = 3): If {u1, …, ur} is a basis for N((A−λ1I)^n),

and {v1, …, vs} is a basis for N((A−λ2I)^n),

and {w1, …, wt} is a basis for N((A−λ3I)^n),

then {u1, …, ur, v1, …, vs, w1, …, wt} is a basis for C^n.

γj = dim N(A−λjI) – geometric multiplicity.
αj = dim N((A−λjI)^n) – algebraic multiplicity.

r = α1, s = α2, t = α3

Recall also: N((A−λjI)^n) is invariant under A,

i.e. x ∈ N((A−λjI)^n) ⇒ Ax ∈ N((A−λjI)^n)

Au1 ∈ N((A−λ1I)^n) ⇒ Au1 ∈ span{u1, …, ur}. So

Au1 = [u1 … ur](x11, x21, …, xr1)^T, the same as x11u1 + x21u2 + … + xr1ur

Au2 = [u1 … ur](x12, x22, …, xr2)^T

[Au1 Au2 … Aur] = [u1 … ur][x11 x12 … x1r; x21 x22 … x2r; … ; xr1 xr2 … xrr]

So A[u1 … ur] = [u1 … ur] X, with X of size r×r.


A[v1 … vs] = [v1 … vs] Y, with Y of size s×s

A[w1 … wt] = [w1 … wt] Z, with Z of size t×t

A[U V W] = [UX VY WZ] = [U V W][X 0 0; 0 Y 0; 0 0 Z]

This much holds for any choice of basis for each of the spaces N ( A− λ j I )n.

Next objective is to choose the basis in N ( A− λ j I )n to give nice results.

If k=3, we would like X ,Y ,Z to be matrices which are easy to work with.

Lemma: Let B ∈ C^{n×n}; then:

(1) dim N_{B^{k+1}} − dim N_{B^k} ≤ dim N_{B^k} − dim N_{B^{k−1}}, k = 2, 3, …

(2) dim N_{B²} − dim N_B ≤ dim N_B

Proof (1): We know that always dim N_B ≤ dim N_{B²} ≤ dim N_{B³} ≤ … and somewhere it saturates.

If the left side is zero, it's not an interesting statement, because the right side is always ≥ 0. So assume dim N_{B^{k+1}} > dim N_{B^k}.

But then dim NB k>dim NBk−1

Let {a1 ,…,ar } be a basis for NB k−1

Let {a1 ,…,ar ,b1 ,…,bs } be a basis for NB k

Let {a1 ,…,ar ,b1 ,…,bs , c1 ,…,c t } be any basis for NB k+1

Claim: (r+s+t) − (r+s) ≤ (r+s) − r, i.e. we wish to show that t ≤ s.

1) B^k c1, …, B^k ct are linearly independent:

Suppose γ1 B^k c1 + … + γt B^k ct = 0, i.e. B^k(γ1c1 + … + γtct) = 0.

This means that γ1c1 + … + γtct ∈ N_{B^k}.

However, we know that N_{B^k} = N_{B^{k−1}} ∔ span{b1,…,bs} and N_{B^{k+1}} = N_{B^k} ∔ span{c1,…,ct}.


And we also know that γ1c1 + … + γtct ∈ N_{B^{k+1}}.

But we know that N_{B^k} ∩ span{c1,…,ct} = {0} ⇒ γ1c1 + … + γtct = 0; the ci's are independent!

⇒ γ1 = … = γt = 0

2) Bk−1b1 ,…, Bk−1bS are linearly independent. The proof goes similarly to the proof of 1.

3) B^k ci ∈ span{B^{k−1}b1, …, B^{k−1}bs}: ci ∈ N_{B^{k+1}}, so

B^{k+1}ci = 0 ⇒ B^k(Bci) = 0 ⇒ Bci ∈ N_{B^k}

Due to this observation, we can write Bci = u + v, with u ∈ span{a1,…,ar} = N_{B^{k−1}} and v ∈ span{b1,…,bs}:

Bci = u + Σ_{j=1}^s βj bj

B^{k−1}(Bci) = B^{k−1}u + Σ_{j=1}^s βj B^{k−1}bj = Σ_{j=1}^s βj B^{k−1}bj (since B^{k−1}u = 0)

4) span{B^k c1, …, B^k ct} ⊆ span{B^{k−1}b1, …, B^{k−1}bs}. Since all the B^k ci are linearly independent, s ≥ t!

Lemma: Suppose span{u1,…,uk} ⊆ span{v1,…,vl}, where u1,…,uk are linearly independent and v1,…,vl are linearly independent. Then one can find l−k of the vectors v1,…,vl, call them w1,…,w_{l−k}, such that span{u1,…,uk, w1,…,w_{l−k}} = span{v1,…,vl}.

Let k = 3, l = 5 …

TODO: Fill in!!!!


A∈C n×n with λ1,…, λk distinct Eigenvalues

Then C^n = N((A−λ1In)^n) ∔ … ∔ N((A−λkIn)^n)

In other words: take any basis of N((A−λ1In)^n), with α1 vectors;

take any basis of N((A−λ2In)^n), with α2 vectors; ⋮ take any basis of N((A−λkIn)^n), with αk vectors.

Take all those α1+…+αk vectors together. They form a basis for C^n.

Next step is to choose the basis for each space in a good way. Let B = A − λjIn, with N_B ≠ {0} and, say, N_B ⊊ N_{B²} ⊊ N_{B³} = N_{B⁴}.

{a1 ,…,ar } basis for NB

{a1,…,ar ,b1 ,…,bs } basis for NB 2

{a1,…,ar ,b1 ,…,bs , c1 ,…,c t } basis for NB 3

B²c1, …, B²ct (independent); Bb1, …, Bbs (independent); a1, …, ar (independent)

All in the nullspace of B: t+s+r vectors in N_B! But we only need r vectors (the dimension of the nullspace).

And also

span {B2c1 ,…,B2 c t }⊆ span {Bb1,…,B bs } span {Bb1 ,…,Bbs }⊆span {a1 ,…,ar }

So we will keep:all B2 c1 ,…,B2 c t

s−t of Bb1 ,… ,Bbs

r−s of a1 ,…,ar

Leaves us with t+ (s−t )+(r−s )=r vectors.

Can find s−t of Bb1 ,… ,Bbs such that span {B2c1 ,…,B2c t , those s−t }=span {Bb1 ,…,Bbs }Add r−s of the {a1 ,…,ar } vectors, so the whole collection will equal to span {a1 ,…,ar }=NB


Let's take an example with numbers: t = 2, s = 3, r = 5.

{a1, …, a5} basis for N_B

{a1, …, a5, b1, b2, b3} basis for N_{B²}

{a1, …, a5, b1, b2, b3, c1, c2} basis for N_{B³}

Candidates in N_B: B²c1, B²c2, Bb1, Bb2, Bb3, a1, a2, a3, a4, a5

Suppose Bb2 is linearly independent of B2 c1 ,B2c2

And a2 , a4 are linearly independent of B2 c1 ,B2c2 ,B b2

So keep the array:

B²c1  B²c2  Bb2  a2  a4
Bc1   Bc2   b2
c1    c2

Claim: Those 10 vectors are linearly independent. To this point, we only know that B²c1, B²c2, Bb2, a2, a4 are linearly independent.

Suppose γ1B²c1 + γ2Bc1 + γ3c1 + γ4B²c2 + γ5Bc2 + γ6c2 + γ7Bb2 + γ8b2 + γ9a2 + γ10a4 = 0.

Let's apply B² to both sides. What remains of the terms is:

γ3B²c1 + γ6B²c2 = 0 ⇒ γ3 = γ6 = 0, since they are linearly independent!

Let's apply B!

γ2B²c1 + γ5B²c2 + γ8Bb2 = 0 ⇒ γ2 = γ5 = γ8 = 0, because they are linearly independent!

What's left:

γ1B²c1 + γ4B²c2 + γ7Bb2 + γ9a2 + γ10a4 = 0 ⇒ γ1 = γ4 = γ7 = γ9 = γ10 = 0, because they are linearly independent!

So all coefficients must be zero; therefore the 10 vectors are linearly independent.

Since dim N_{B³} = 10, the vectors in this array form a basis for N_{B³}.

Claim: consider A[B²c1 Bc1 c1 B²c2 Bc2 c2 Bb2 b2 a2 a4] (call the matrix in brackets Uj).


AUj = Uj diag(C_{λj}^{(3)}, C_{λj}^{(3)}, C_{λj}^{(2)}, C_{λj}^{(1)}, C_{λj}^{(1)}) = Uj ·

[λj 1 0 0 0 0 0 0 0 0;
 0 λj 1 0 0 0 0 0 0 0;
 0 0 λj 0 0 0 0 0 0 0;
 0 0 0 λj 1 0 0 0 0 0;
 0 0 0 0 λj 1 0 0 0 0;
 0 0 0 0 0 λj 0 0 0 0;
 0 0 0 0 0 0 λj 1 0 0;
 0 0 0 0 0 0 0 λj 0 0;
 0 0 0 0 0 0 0 0 λj 0;
 0 0 0 0 0 0 0 0 0 λj]

(C_{λj}^{(k)} denotes a k×k Jordan cell with λj on the diagonal.)

If A has 3 eigenvalues λ1, λ2, λ3:

A[U1 U2 U3] = [U1 U2 U3][D_{λ1} 0 0; 0 D_{λ2} 0; 0 0 D_{λ3}]

D_{λj} is an αj×αj matrix with λj on the diagonal and γj Jordan cells in its “decomposition”.

dim N ( A− λ j I )=¿ number of Jordan cells.

Analyzing N((A−λjIn)^k), k = 1, 2, …

For simplicity, denote A−λjIn = B: dim N_B = 5 = r, dim N_{B²} = 8 = r+s, dim N_{B³} = 10 = r+s+t.

Just by these numbers you can find out how U should look:

X X X X X
X X X
X X

5 columns; 8−5 = 3 vectors in the second row; 10−8 = 2 in the third.

The Jordan form:
2 Jordan cells of size 3×3
1 Jordan cell of size 2×2
2 Jordan cells of size 1×1
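This bookkeeping (cell counts from the dimensions of N(B^k)) can be automated. The helper below is a sketch of mine: the number of cells of size ≥ k equals dim N(B^k) − dim N(B^{k−1}).

```python
import numpy as np

def jordan_cell_sizes(B):
    """Infer the Jordan cell sizes of a nilpotent B from dim N(B^k)."""
    n = B.shape[0]
    d = [0] + [n - np.linalg.matrix_rank(np.linalg.matrix_power(B, k))
               for k in range(1, n + 1)]
    incr = [d[k] - d[k - 1] for k in range(1, n + 1)]   # cells of size >= k
    sizes = []
    for k in range(n, 0, -1):
        exact = incr[k - 1] - (incr[k] if k < n else 0)  # cells of size exactly k
        sizes += [k] * exact
    return sorted(sizes, reverse=True)

def jordan_nilpotent(cell_sizes):
    """Build a nilpotent block-diagonal matrix with the given Jordan cells."""
    n = sum(cell_sizes)
    B = np.zeros((n, n))
    pos = 0
    for s in cell_sizes:
        for i in range(s - 1):
            B[pos + i, pos + i + 1] = 1.0
        pos += s
    return B

# The cell structure from the text: sizes 3, 3, 2, 1, 1 (so dims 5, 8, 10).
B = jordan_nilpotent([3, 3, 2, 1, 1])
print(jordan_cell_sizes(B))  # [3, 3, 2, 1, 1]
```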


A = [1 2 0 0 1; 0 1 2 0 0; 0 0 1 0 0; 0 0 0 2 0; 0 0 0 2 2]

The objective is to find U (5×5) such that AU = UJ ← Jordan form

(1) Find the eigenvalues – A has 2 distinct eigenvalues – λ1 = 1, λ2 = 2, α1 = 3, α2 = 2
(2) First set B = A − λ1I5 = A − I5; calculate N_B, N_{B²}, …

B = [0 2 0 0 1; 0 0 2 0 0; 0 0 0 0 0; 0 0 0 1 0; 0 0 0 2 1]

N_B = span{e1} (ej = the jth column of I5)

N_{B²} = span{e1, e2}
N_{B³} = span{e1, e2, e3}

In block form B = [B11 B12; 0 B22], so

B² = [B11 B12; 0 B22][B11 B12; 0 B22] = [B11² B11B12 + B12B22; 0 B22²] = [0 0 4 2 1; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 1 0; 0 0 0 4 1]

B³ = [0 0 0 4 1; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 1 0; 0 0 0 6 1]


B⁴ = [0 0 0 * *; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 1 0; 0 0 0 8 1]: the upper-left 3×3 block stays zero, so the nullspace grows no further. So we only keep the chain generated by c1 = e3.

In this case dim N(A−λ1I5) = 1 ⇒ 1 Jordan cell for this eigenvalue.

Calculate the chain e3, Be3, B²e3:

Be3 = 2e2, B²e3 = B(Be3) = 2Be2 = 4e1

A detour to old history:

B[B²c1 Bc1 c1] = [B³c1 B²c1 Bc1] = [0 B²c1 Bc1] = [B²c1 Bc1 c1][0 1 0; 0 0 1; 0 0 0]

So, with u1 = B²c1, u2 = Bc1, u3 = c1:

A[u1 u2 u3] = [u1 u2 u3][λ1 1 0; 0 λ1 1; 0 0 λ1]

Now let B = A − λ2I5 = A − 2I5 = [−1 2 0 0 1; 0 −1 2 0 0; 0 0 −1 0 0; 0 0 0 0 0; 0 0 0 2 0]

B(1, 0, 0, 0, 1)^T = 0

N_B = span{e1 + e5}


B² = [1 −4 4 2 −1; 0 1 −4 0 0; 0 0 1 0 0; 0 0 0 0 0; 0 0 0 0 0]

B²x = 0 forces x3 = 0, then x2 = 4x3 = 0, and x1 = −2x4 + x5, so

x = (−2x4 + x5, 0, 0, x4, x5)^T = x4(−2, 0, 0, 1, 0)^T + x5(1, 0, 0, 0, 1)^T

N_B = span{(1, 0, 0, 0, 1)^T}

N_{B²} = span{(1, 0, 0, 0, 1)^T, (−2, 0, 0, 1, 0)^T}

B [Bb1b1 ]= [B2b1B b1 ]=[0 Bb1 ]=[ Bb1b1 ] [0 10 0]

So A [u4u5 ]=[u4u5 ] [2 10 2]

A [u1 u2 u3 u4 u5 ]=[u1 u2 u3 u4 u5 ] [1 1 0 0 00 1 1 0 00 0 1 0 00 0 0 2 10 0 0 0 2

]---- end of lesson 9


Determinants

To A ∈ C^{n×n} we attach a special number called its determinant. We will show that there exists exactly one function f from C^{n×n} to C such that:

(1) f ( I n )=1(2) If P a simple permutation, then f ( PA )=− f ( A )(3) f ( A ) is linear in each row of A separately.

Let A ∈ C^{3×2},

A = [a1; a2; a3]; if, say, a2 = αu + βv, then

f([a1; αu+βv; a3]) = f([a1; αu; a3]) + f([a1; βv; a3]) = α f([a1; u; a3]) + β f([a1; v; a3])

Properties 1, 2, 3 imply automatically:

(4) If 2 rows of A coincide, then f(A) = 0

A ∈ C^{n×n}, ai = aj, i ≠ j, A = [a1; … ; an]

Let P be the simple permutation that interchanges row i with row j. Then PA = A, but −f(A) = f(PA) = f(A) ⇒ 2f(A) = 0 ⇒ f(A) = 0

Example:

A = [1 2 3; 4 5 6; 1 2 3], P = [0 0 1; 0 1 0; 1 0 0]

PA = [1 2 3; 4 5 6; 1 2 3] = A

A = [a11 a12; a21 a22], [a11 a12] = a11[1 0] + a12[0 1]

f(A) = a11 f([1 0; a21 a22]) + a12 f([0 1; a21 a22])


[a21 a22] = a21[1 0] + a22[0 1]

So

f(A) = a11a21 f([1 0; 1 0]) + a11a22 f([1 0; 0 1]) + a12a21 f([0 1; 1 0]) + a12a22 f([0 1; 0 1]) =

0 + a11a22 − a12a21 + 0 = a11a22 − a12a21

A ∈ C^{3×3}, A = [a1; a2; a3] = [a11 a12 a13; a21 a22 a23; a31 a32 a33], e1, e2, e3 the columns of I3.

a1 = a11e1^T + a12e2^T + a13e3^T

f(A) = f([a1; a2; a3]) = a11 f([e1^T; a2; a3]) + a12 f([e2^T; a2; a3]) + a13 f([e3^T; a2; a3])

Let's call the three terms 1, 2, 3 accordingly.

In term 1: a2 = a21e1^T + a22e2^T + a23e3^T, so

1 = a11a21 f([e1^T; e1^T; a3]) + a11a22 f([e1^T; e2^T; a3]) + a11a23 f([e1^T; e3^T; a3])

The first value is zero! We only have to deal with the last two terms. Next, a3 = a31e1^T + a32e2^T + a33e3^T.

So term 1 will be: a11a22a33 f([e1^T; e2^T; e3^T]) + a11a23a32 f([e1^T; e3^T; e2^T])

So 1 = a11a22a33 − a11a23a32
2 = two more terms
3 = two more terms


TODO: Draw diagonals

(5) If A = [a1; … ; an], B = EA, E lower triangular, 1's on the diagonal and exactly one non-zero entry below the diagonal, then det(EA) = det A.

A ∈ C^{3×3}, E = [1 0 0; 0 1 0; 0 α 1]

EA = [a1; a2; αa2 + a3]

det EA = α det[a1; a2; a2] + det[a1; a2; a3] = det A

Same as to say: adding a multiple of one row to another row does not change the determinant.

(6) If A has a row which is all zeroes, then det(A) = 0
(7) If two rows are linearly dependent, then det A = 0
(8) If A ∈ C^{n×n} is triangular, then det A = a11 a22 … ann

A = [a11 a12 a13; 0 a22 a23; 0 0 a33]

By the linearity in row 3: det A = a33 det[a11 a12 a13; 0 a22 a23; 0 0 1] = (property 5) a33 det[a11 a12 0; 0 a22 0; 0 0 1] =

a33a22 det[a11 a12 0; 0 1 0; 0 0 1] = a33a22 det[a11 0 0; 0 1 0; 0 0 1] = a33a22a11 det[1 0 0; 0 1 0; 0 0 1]

(9) det A ≠ 0 ⇔ A is invertible

For A nonzero: E3 P3 E2 P2 E1 P1 A = U, upper echelon. P1 – permutation of the first row with one of the others, or just I.


P2 - Permutation of the second row with either 3rd, 4th … or I

So you can generalize it to:EPA=U

E-lower triangular, P-permutation.

P2E1 = P2(I3 + [0 0 0; α 0 0; β 0 0]), where P2 interchanges the 2nd with the 3rd row:

P2 = [1 0 0; 0 0 1; 0 1 0]

P2E1 ≠ E1P2 (not always!)

P2(I3 + [0 0 0; α 0 0; β 0 0]) = P2 + P2[0 0 0; α 0 0; β 0 0] = P2 + [0 0 0; β 0 0; α 0 0]

This is the same as (I3 + [0 0 0; β 0 0; α 0 0])P2 = E1′P2, since the correction matrix is non-zero only in the first column, which multiplying by P2 on the right leaves in place.

P2E1P1 ≠ E1P2P1. However, it is equal to E1′P2P1, where E1′ has the same form as E1.

If A ∈ C^{n×n}, A ≠ 0, then there exists an n×n permutation matrix P and an n×n lower triangular E with 1's on the diagonal such that: EPA = U, an n×n upper echelon matrix. In fact U is upper triangular!

det EPA=detU=u11u22…unn

By iterating property 5, det EPA=det PA=±det ASo we know that |det A|=|u11u22…unn|

Claim: EPA=USince E and P are invertible, A is invertible ⇔ U is invertible ⇔ all values on the diagonal are non zero ⇔ |det A|≠0⇔det A≠0

(10)A ,B∈Cn× n, then det AB=det A ∙det B

Case (1) – NB≠ {0 }⇒N AB≠ {0 }


So B not invertible means AB not invertible. So by 9, det B=0 and det AB=0. Which also means that det A ∙det B=0 ∙ something=0=det AB

Case (2) – B is invertible.

φ(A) = det(AB) / det B

What are the properties of this function?

φ(I) = 1. If P is a simple permutation:

φ(PA) = det(P(AB)) / det B = −det(AB) / det B = −φ(A)

Claim: φ is linear in each row of A. In the 3×3 case:

A = [a1; a2; a3], φ(A) = det[a1B; a2B; a3B] / det B

Say that a1 = αu + βv. So a1B = α(uB) + β(vB):

φ([αu+βv; a2; a3]) = det[αuB + βvB; a2B; a3B] / det B = (α det[uB; a2B; a3B] + β det[vB; a2B; a3B]) / det B

So φ satisfies properties (1)–(3); since there is exactly one such function, φ(A) = det A, i.e. det(AB) = det A · det B.

(11) If A is invertible, then det A^{-1} = 1/det A. Easily obtained by calculating det(A·A^{-1}), which must be 1 and, according to property (10), is also the product of the two determinants.
(12) det A^T = det A

EPA=U upper echelon

So according to previous properties we know that det P ∙det A=u11u22…unn


But we also know that AT PT ET=U T

So:

det AT ∙ det PT ∙ det ET=det UT

We know that det E=det ET=1Also when we flip U , we still have a triangular matrix, just instead of a upper triangular we have a lower triangular. So its determinant stays the same. So, so far this is true:

det P ∙det A=det PT ∙ det AT

Let P be some permutation. Claim: P P^T = I.

Let's multiply both sides of the previous identity by det P:

(det P)² · det A = det P · det P^T · det A^T = det(P P^T) · det A^T

(±1)² · det A = 1 · det A^T ⇒ det A = det A^T

A ∈ C^{n×n}, λ1, …, λk distinct eigenvalues. Then AU = UJ where J is in Jordan form.

For A with λ1, λ2 and algebraic multiplicities α1, α2 (αj = dim N((A−λjIn)^n)):

det(λIn − A) = det(λIn − UJU^{-1}) = det(U(λIn − J)U^{-1}) = det U · det(λIn − J) · det U^{-1} = det(λIn − J) = (λ−λ1)^{α1} (λ−λ2)^{α2}

---- end of lesson 10


Main Properties of Determinants - AgainLet A∈C n×n

det In = 1
det PA = −det A if P is a simple permutation
det A is linear in each row of A

A is invertible ⇔ det A ≠ 0. Let B ∈ C^{n×n}:

det AB=det A ∙det B=det BA

If A triangular then det A=a11a22…ann

Determinants in the aid of Eigenvalues

Recall λ is an eigenvalue of A if N(A−λIn) ≠ {0}.

For a matrix B: it's left invertible ⇔ N_B = {0}. If B ∈ C^{n×n}, it's invertible ⇔ N_B = {0}.
det(λI−A) ≠ 0 ⇔ λI−A invertible ⇔ N(λI−A) = {0}. So the opposite is:
det(λI−A) = 0 ⇔ N(λI−A) ≠ {0}

A∈C n×n has k distinct eigenvalues, then there exists an invertible matrix U and an upper triangular matrix J of special form (cough cough, the Jordan form cough cough) Such that:AU=UJ equivalently A=UJU−1

If k = 3, J is block diagonal:

J = [J_{λ1} 0 0; 0 J_{λ2} 0; 0 0 J_{λ3}]

where J_{λj} is an αj×αj upper triangular block with λj on the diagonal and 0's and 1's on the superdiagonal.

det(λI−A) = det(λIn − UJU^{-1}) = det(U(λIn − J)U^{-1}) = det U · det(λIn − J) · det U^{-1} =

det(λIn − J)

So if k = 3 this would leave us with (λ−λ1)^{α1} (λ−λ2)^{α2} (λ−λ3)^{α3}


If we set λ = 0 we get (−λ1)^{α1} · (−λ2)^{α2} · (−λ3)^{α3}.

So det(−A) = (−λ1)^{α1} (−λ2)^{α2} (−λ3)^{α3} ⇒ (−1)^n det A = (−1)^n (λ1)^{α1} (λ2)^{α2} (λ3)^{α3} (using α1+α2+α3 = n)

So det A = (λ1)^{α1} (λ2)^{α2} (λ3)^{α3}

A ∈ C^{n×n}: trace A = Σ_{i=1}^n aii

A, B ∈ C^{n×n}: trace AB = trace BA

A = UJU^{-1}: trace A = trace U(JU^{-1}) = trace (JU^{-1})U = trace J = α1λ1 + … + αkλk
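Both identities are easy to see numerically on a triangular example (my own matrix, with eigenvalue 1 of multiplicity 3 and eigenvalue 2 of multiplicity 2):

```python
import numpy as np

rng = np.random.default_rng(2)
# Strictly upper-triangular noise + prescribed diagonal: eigenvalues 1,1,1,2,2.
A = np.triu(rng.standard_normal((5, 5)), 1) + np.diag([1.0, 1.0, 1.0, 2.0, 2.0])

det_A = np.linalg.det(A)
tr_A = np.trace(A)
print(det_A, tr_A)  # det = 1^3 * 2^2 = 4, trace = 3*1 + 2*2 = 7
```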

A = [A11 0; 0 A22], A11 ∈ C^{p×p}, A22 ∈ C^{q×q}

Claim: det A = det A11 · det A22

A11 = UJU^{-1}, J upper triangular

A22 = V ~J V^{-1}, ~J upper triangular

det A11 = det J, det A22 = det ~J

A = [UJU^{-1} 0; 0 V~JV^{-1}] = [U 0; 0 V][J 0; 0 ~J][U^{-1} 0; 0 V^{-1}]

So det A = det[J 0; 0 ~J] = det J · det ~J

[J 0; 0 ~J] is still upper triangular, so its determinant is the product x11 … xpp · y11 … yqq of the diagonal entries.

A more sophisticated claim:

A = [A11 (p×p), A12 (p×q); 0 (q×p), A22 (q×q)]

Claim: det A = det A11 · det A22

A = [Ip 0; 0 A22][A11 A12; 0 Iq]


So we now know that det A = det [ Ip 0 ; 0 A22 ] · det [ A11 A12 ; 0 Iq ]

For the second factor, consider

[ a11 a12 a13 | a14 a15 ]
[ a21 a22 a23 | a24 a25 ]
[ a31 a32 a33 | a34 a35 ]
[ 0   0   0   | 1   0   ]
[ 0   0   0   | 0   1   ]

I can subtract multiples of the last two rows from the first three rows and the determinant still stays the same, so I can zero out the whole upper-right block. We get:

[ a11 a12 a13 | 0 0 ]
[ a21 a22 a23 | 0 0 ]
[ a31 a32 a33 | 0 0 ]
[ 0   0   0   | 1 0 ]
[ 0   0   0   | 0 1 ]

But now it's of the form in the previous claim.

So det A = det [ Ip 0 ; 0 A22 ] · det [ A11 A12 ; 0 Iq ] = det A22 · det [ A11 0 ; 0 Iq ] = det A22 · det A11

An even more complicated claim – Schur Complements:

A =
[ A11  A12 ]
[ A21  A22 ]

(all diagonal blocks square). If A22 is invertible, then:

A =
[ Ip  A12 A22^{−1} ] [ A11 − A12 A22^{−1} A21   0   ] [ Ip            0  ]
[ 0   Iq           ] [ 0                         A22 ] [ A22^{−1} A21  Iq ]

From this it follows that det A = det(A11 − A12 A22^{−1} A21) · det A22.

Suppose instead A11 is invertible:

A =
[ Ip            0  ] [ A11  0                        ] [ Ip  A11^{−1} A12 ]
[ A21 A11^{−1}  Iq ] [ 0    A22 − A21 A11^{−1} A12   ] [ 0   Iq           ]

det A = det A11 · det(A22 − A21 A11^{−1} A12)
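The Schur complement identity can also be checked numerically. A sketch with NumPy (random blocks, with A22 shifted so that it is invertible — an assumption of the claim):

```python
import numpy as np

rng = np.random.default_rng(0)
A11 = rng.standard_normal((3, 3))
A12 = rng.standard_normal((3, 2))
A21 = rng.standard_normal((2, 3))
A22 = rng.standard_normal((2, 2)) + 5 * np.eye(2)  # keep A22 invertible

A = np.block([[A11, A12], [A21, A22]])
schur = A11 - A12 @ np.linalg.inv(A22) @ A21       # Schur complement of A22

# det A = det(A11 - A12 A22^{-1} A21) * det A22
assert np.isclose(np.linalg.det(A), np.linalg.det(schur) * np.linalg.det(A22))
```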


A =
[ 1 2 0 | 0 1 ]
[ 0 1 2 | 0 0 ]
[ 0 0 1 | 0 0 ]
[ 0 0 0 | 2 0 ]
[ 0 0 0 | 2 2 ]

AU = UJ, λ1 = 1, λ2 = 2, α1 = 3, α2 = 2

det(λI−A) = det
[ λ−1  −2   0   |  0    −1  ]
[ 0    λ−1  −2  |  0    0   ]
[ 0    0    λ−1 |  0    0   ]
[ 0    0    0   |  λ−2  0   ]
[ 0    0    0   |  −2   λ−2 ]
=
det [ λ−1 −2 0 ; 0 λ−1 −2 ; 0 0 λ−1 ] · det [ λ−2 0 ; −2 λ−2 ] = (λ−1)^3 · (λ−2)^2

Expansions of Minors

If A ∈ C^{n×n}, let A{i,j} = determinant of the (n−1)×(n−1) matrix which is obtained by erasing row i and column j.

A =
[ 1 2 3 ]
[ 4 5 6 ]
[ 7 8 9 ]

A{2,3} = det [ 1 2 ; 7 8 ] = 8 − 14 = −6

det A = Σ_{i=1}^{n} (−1)^{i+j} a_ij A{i,j} for each j, j = 1,…,n (expansion along the j'th column)

det A = Σ_{j=1}^{n} (−1)^{i+j} a_ij A{i,j} for each i, i = 1,…,n (expansion along the i'th row)

So if you have a row with a lot of zeroes for instance:

[ 0 0 0 4 ]
[ ∙ ∙ ∙ ∙ ]
[ ∙ ∙ ∙ ∙ ]
[ ∙ ∙ ∙ ∙ ]

Then


det A = Σ_{j=1}^{n} (−1)^{i+j} a_ij A{i,j}

So if we choose i = 1, every term vanishes except the one from the 4th column: det A = (−1)^{1+4} · 4 · A{1,4}.
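The expansion along a row translates directly into a (very inefficient, but instructive) recursive determinant. A minimal pure-Python sketch, using 0-based indices:

```python
def minor(A, i, j):
    """The matrix obtained by erasing row i and column j (0-based)."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

# A row with many zeroes makes the expansion cheap: only one term survives.
A = [[0, 0, 0, 4],
     [1, 2, 0, 0],
     [0, 1, 2, 0],
     [0, 0, 1, 2]]
assert det(A) == -4   # the single surviving term: (-1)^{1+4} * 4 * A{1,4}
```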

Example:

Suppose that A =
[ a11 a12 a13 ]
[ a21 a22 a23 ]
[ a31 a32 a33 ]
=
[ a1 ]
[ a2 ]
[ a3 ]

a1 = a11 [1 0 0] + a12 [0 1 0] + a13 [0 0 1]. So we can use linearity of the determinant in the first row:

det A = a11 det [ 1 0 0 ; a21 a22 a23 ; a31 a32 a33 ] + a12 det [ 0 1 0 ; a21 a22 a23 ; a31 a32 a33 ] + a13 det [ 0 0 1 ; a21 a22 a23 ; a31 a32 a33 ]

1) a11 det [ 1 0 0 ; a21 a22 a23 ; a31 a32 a33 ]: subtract the first row, multiplied by a21 and a31, from the 2nd and 3rd rows:

a11 det [ 1 0 0 ; 0 a22 a23 ; 0 a32 a33 ] = a11 det [ a22 a23 ; a32 a33 ] = a11 A{1,1}

2) a12 det [ 0 1 0 ; a21 a22 a23 ; a31 a32 a33 ]: subtract the first row, multiplied by a22 and a32, from the 2nd and 3rd rows:

a12 det [ 0 1 0 ; a21 0 a23 ; a31 0 a33 ] = −a12 det [ 0 0 1 ; a21 a23 0 ; a31 a33 0 ] = −a12 det [ a21 a23 ; a31 a33 ] = −a12 A{1,2}

3) Similarly, the third term gives a13 A{1,3}.

Theorem: Let A ∈ C^{n×n} and let C be the n×n matrix with ij entries c_ij = (−1)^{i+j} A{j,i}. Then

AC= (det A ) I

For A 3×3:

det [ x y z ; a21 a22 a23 ; a31 a32 a33 ] = x A{1,1} − y A{1,2} + z A{1,3}

Suppose x=a11 , y=a12 , z=a13⇒ det A=a11 A {11}−a12 A {12}+a13 A {13}

Suppose x=a21 , y=a22 , z=a23⇒0=a21 A {11}−a22A {12 }+a23 A{13}

Suppose x=a31 , y=a32 , z=a33⇒0=a31 A {11}−a32 A {12}+a33 A {13 }


[ a11 a12 a13 ]   [  A{1,1} ]   [ det A ]
[ a21 a22 a23 ] · [ −A{1,2} ] = [ 0     ]
[ a31 a32 a33 ]   [  A{1,3} ]   [ 0     ]

(to be continued!)

We can do the same for the second row!

det [ a11 a12 a13 ; x y z ; a31 a32 a33 ] = −x A{2,1} + y A{2,2} − z A{2,3}

[x y z] = a1 ⇒ 0 = −a11 A{2,1} + a12 A{2,2} − a13 A{2,3}
[x y z] = a2 ⇒ det A = −a21 A{2,1} + a22 A{2,2} − a23 A{2,3}
[x y z] = a3 ⇒ 0 = −a31 A{2,1} + a32 A{2,2} − a33 A{2,3}

So we can fill in the matrix a bit more:

[ a11 a12 a13 ]   [  A{1,1}  −A{2,1} ]   [ det A  0     ]
[ a21 a22 a23 ] · [ −A{1,2}   A{2,2} ] = [ 0      det A ]
[ a31 a32 a33 ]   [  A{1,3}  −A{2,3} ]   [ 0      0     ]

And if we do the same thing again for the third row we get:

[ a11 a12 a13 ]   [  A{1,1}  −A{2,1}   A{3,1} ]   [ det A  0      0     ]
[ a21 a22 a23 ] · [ −A{1,2}   A{2,2}  −A{3,2} ] = [ 0      det A  0     ]
[ a31 a32 a33 ]   [  A{1,3}  −A{2,3}   A{3,3} ]   [ 0      0      det A ]

F(x) =
[ f11(x)  f12(x) ]
[ f21(x)  f22(x) ]

f ( x )=det F ( x )

f′(x) = lim_{ϵ→0} ( f(x+ϵ) − f(x) ) / ϵ

Shorthand notations: f⃗ 1=[ f 11 f 12 ] , f⃗ 2=[ f 21 f 22 ]

( det [ f11(x+ϵ) f12(x+ϵ) ; f21(x+ϵ) f22(x+ϵ) ] − det [ f11(x) f12(x) ; f21(x) f22(x) ] ) / ϵ

The numerator can be split by linearity in the first row (writing f⃗1(x+ϵ) = (f⃗1(x+ϵ) − f⃗1(x)) + f⃗1(x)):

det [ f⃗1(x+ϵ) − f⃗1(x) ; f⃗2(x+ϵ) ] + det [ f⃗1(x) ; f⃗2(x+ϵ) ] − det [ f⃗1(x) ; f⃗2(x) ]


But we know that, by linearity in the second row:

det [ f⃗1(x) ; f⃗2(x+ϵ) ] − det [ f⃗1(x) ; f⃗2(x) ] = det [ f⃗1(x) ; f⃗2(x+ϵ) − f⃗2(x) ]

So the entire numerator is:

det [ f⃗1(x+ϵ) − f⃗1(x) ; f⃗2(x+ϵ) ] + det [ f⃗1(x) ; f⃗2(x+ϵ) − f⃗2(x) ]

Dividing by ϵ (pulling 1/ϵ into one row of each determinant):

det [ (f⃗1(x+ϵ) − f⃗1(x))/ϵ ; f⃗2(x+ϵ) ] + det [ f⃗1(x) ; (f⃗2(x+ϵ) − f⃗2(x))/ϵ ]

So as ϵ goes to zero it equals det [ f⃗1′(x) ; f⃗2(x) ] + det [ f⃗1(x) ; f⃗2′(x) ]
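The row-by-row differentiation formula can be verified against a finite-difference approximation. A sketch with a hypothetical matrix-valued function F (not from the lecture):

```python
import numpy as np

def F(x):
    # Hypothetical 2x2 matrix-valued function, just for the check.
    return np.array([[np.sin(x), x ** 2],
                     [np.exp(x), np.cos(x)]])

def f(x):
    return np.linalg.det(F(x))

x, eps = 0.7, 1e-6
numeric = (f(x + eps) - f(x - eps)) / (2 * eps)   # central difference

Fx = F(x)
dFx = np.array([[np.cos(x), 2 * x],               # entrywise derivative F'(x)
                [np.exp(x), -np.sin(x)]])
# det[row1' ; row2] + det[row1 ; row2']
row_wise = np.linalg.det(np.array([dFx[0], Fx[1]])) \
         + np.linalg.det(np.array([Fx[0], dFx[1]]))
assert abs(numeric - row_wise) < 1e-5
```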

---- end of lesson 11

A∈C n×n

A {i , j } determinant of the (n−1 )× (n−1 ) matrix that is obtained from A by erasing the i’th row and j’th column

Let C∈Cn× n

c ij=(−1 )i+ j A { j ,i}

AC = (det A) In

If det A ≠ 0 then A is invertible and A^{−1} = C / det A

Cramer’s rule

Let A be an invertible n×n matrix, and consider Ax = b.


⇒ x = A^{−1} b = C b / det A

⇒ x_i = (1/det A) Σ_{j=1}^{n} c_ij b_j = (1/det A) Σ_{j=1}^{n} (−1)^{i+j} A{j,i} b_j = det [ a1 … a_{i−1} b a_{i+1} … a_n ] / det A = det Ã_i / det A

where Ã_i is A with the column a_i replaced by b.
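Cramer's rule translates directly into code. A NumPy sketch (the 2×2 system is an arbitrary example):

```python
import numpy as np

def cramer_solve(A, b):
    """x_i = det(A with column i replaced by b) / det A."""
    n = A.shape[0]
    dA = np.linalg.det(A)
    x = np.empty(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b          # replace column a_i by b
        x[i] = np.linalg.det(Ai) / dA
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = cramer_solve(A, b)
assert np.allclose(A @ x, b)
```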

For F(t) 3×3 with f(t) = det F(t):

f′(t) = det [ f⃗1′(t) ; f⃗2(t) ; f⃗3(t) ] + det [ f⃗1(t) ; f⃗2′(t) ; f⃗3(t) ] + det [ f⃗1(t) ; f⃗2(t) ; f⃗3′(t) ]
= Σ_j f′_{1j}(t) (−1)^{1+j} A{1,j} + Σ_j f′_{2j}(t) (−1)^{2+j} A{2,j} + Σ_j f′_{3j}(t) (−1)^{3+j} A{3,j}
= Σ_j f′_{1j}(t) c_{j1}(t) + Σ_j f′_{2j}(t) c_{j2}(t) + Σ_j f′_{3j}(t) c_{j3}(t)
= f(t) Σ_{i=1}^{3} Σ_{j=1}^{3} f′_{ij}(t) c_{ji}(t) / f(t) = f(t) Σ_{i=1}^{3} Σ_{j=1}^{3} f′_{ij}(t) g_{ji}(t)

where G(t) = F(t)^{−1} = C(t)/f(t). So:

f′(t)/f(t) = trace { F′(t) F(t)^{−1} }

A reminder: trace A = Σ a_ii

trace AB = Σ_{i=1}^{n} ( Σ_{j=1}^{n} a_ij b_ji ) = trace BA

A =
[ 0 0 ]
[ 0 1 ]

To calculate eigenvalues, find the roots of the polynomial:

det(λI2 − A) = det [ λ 0 ; 0 λ−1 ] = λ(λ−1), so λ = 0, 1

AV = V [ 0 0 ; 0 1 ]

A [v1 v2] = [v1 v2] [ 0 0 ; 0 1 ] = [ 0·v1  1·v2 ]

A =
[ 0   1   0   0   0  ]
[ 0   0   1   0   0  ]
[ 0   0   0   1   0  ]
[ 0   0   0   0   1  ]
[ −a0 −a1 −a2 −a3 −a4 ]


In general the companion matrix is

A =
[ 0    1    …   0        ]
[ ⋮         ⋱   ⋮        ]
[ 0    0    …   1        ]
[ −a0  −a1  …   −a_{n−1} ]

To calculate eigenvalues, we need to find the roots of det(λIn − A).

n = 2: A = [ 0 1 ; −a0 −a1 ]

λI2 − A = [ λ −1 ; a0 λ+a1 ]

det(λI2 − A) = λ(λ+a1) + a0 = a0 + a1 λ + λ^2

Similarly for n = 3: det(λI3 − A) = a0 + a1 λ + a2 λ^2 + λ^3

n = 5:

λI5 − A =
[ λ   −1   0   0   0    ]
[ 0   λ   −1   0   0    ]
[ 0   0   λ   −1   0    ]
[ 0   0   0   λ   −1    ]
[ a0  a1  a2  a3  λ+a4  ]

Expanding along the rightmost column:

det(λI5 − A) = (−1)·(−1)^{4+5}·A{4,5} + (λ+a4)·(−1)^{5+5}·A{5,5} = det [ λ −1 0 0 ; 0 λ −1 0 ; 0 0 λ −1 ; a0 a1 a2 a3 ] + (λ+a4) λ^4

and the first determinant is (by the same argument for n = 4) a0 + a1 λ + a2 λ^2 + a3 λ^3, so altogether

det(λI5 − A) = a0 + a1 λ + a2 λ^2 + a3 λ^3 + a4 λ^4 + λ^5

If A is an n×n companion matrix then:

1) det(λIn − A) = a0 + a1 λ + … + a_{n−1} λ^{n−1} + λ^n

A [ 1 ; λ ; ⋮ ; λ^{n−1} ] = [ λ ; λ^2 ; ⋮ ; λ^{n−1} ; −(a0 + a1 λ + … + a_{n−1} λ^{n−1}) ]


Denote v(λ) = [ 1 ; λ ; ⋮ ; λ^{n−1} ] and p(λ) = det(λIn − A). Then:

A v(λ) = λ v(λ) − p(λ) [ 0 ; ⋮ ; 0 ; 1 ]

Suppose p(λ1) = 0. Then A v(λ1) = λ1 v(λ1), so v(λ1) = [ 1 ; λ1 ; ⋮ ; λ1^{n−1} ] is an eigenvector.
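A quick numerical check that v(λ1) is an eigenvector of a companion matrix; the coefficients below are hypothetical, chosen so that p(λ) = λ^5 − 6λ^2 + 11λ − 6 has λ = 1 as a root:

```python
import numpy as np

# Companion matrix of p(λ) = a0 + a1 λ + ... + a4 λ^4 + λ^5
a = np.array([-6.0, 11.0, -6.0, 0.0, 0.0])   # hypothetical coefficients
A = np.zeros((5, 5))
A[:4, 1:] = np.eye(4)                        # superdiagonal of ones
A[4, :] = -a                                 # bottom row: -a0, ..., -a4

# p(1) = -6 + 11 - 6 + 0 + 0 + 1 = 0, so λ = 1 is a root
lam = 1.0
v = lam ** np.arange(5)                      # v(λ) = (1, λ, λ^2, λ^3, λ^4)
assert np.allclose(A @ v, lam * v)
```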

Denote v′(λ) = [ 0 ; 1 ; 2λ ; ⋮ ; (n−1) λ^{n−2} ]. Differentiating the identity above:

A v′(λ) = v(λ) + λ v′(λ) − p′(λ) [ 0 ; ⋮ ; 0 ; 1 ]

p(λ) = (λ−λ1)^2 q(λ)

p′(λ) = 2(λ−λ1) q(λ) + (λ−λ1)^2 q′(λ)

So p′(λ1) = 0

1. A v(λ) = λ v(λ) − p(λ) e_n

2. A v′(λ) = λ v′(λ) + v(λ) − p′(λ) e_n

3. A v″(λ) = 2 v′(λ) + λ v″(λ) − p″(λ) e_n

4. A v‴(λ) = 3 v″(λ) + λ v‴(λ) − p‴(λ) e_n

Suppose p (λ1 )=p ' (λ1 )=p' ' (λ1 )=p' ' ' (λ1 )=0


Denote:
v1 = v(λ1)
v2 = v′(λ1)
v3 = v″(λ1)/2!
v4 = v‴(λ1)/3!

1 ⇒ A v1 = λ1 v1
2 ⇒ A v2 = λ1 v2 + v1
3 ⇒ A v3 = λ1 v3 + v2 (since A v″(λ1) = 2 v′(λ1) + λ1 v″(λ1); divide by 2!)
4 ⇒ A v4 = λ1 v4 + v3

A [v1 v2 v3 v4] = [v1 v2 v3 v4] ·
[ λ1  1   0   0  ]
[ 0   λ1  1   0  ]
[ 0   0   λ1  1  ]
[ 0   0   0   λ1 ]

2) A is invertible ⇔ a0 ≠ 0
3) The geometric multiplicity of each eigenvalue of A is equal to 1

λI5 − A =
[ λ   −1   0   0   0    ]
[ 0   λ   −1   0   0    ]
[ 0   0   λ   −1   0    ]
[ 0   0   0   λ   −1    ]
[ a0  a1  a2  a3  λ+a4  ]

Claim: for every λ ∈ C, dim R_{λI5−A} ≥ 4.

Or generally: dim R_{λIn−A} ≥ n−1, since the last n−1 columns are independent (they contain a triangular pattern of −1's). Maybe the first column is dependent, and maybe not.

If λ is an eigenvalue, then γ = dim N_{A−λI} ≥ 1; together with dim R_{λI−A} ≥ n−1 this forces dim N_{A−λI} = 1.

Example: A is a 5×5 companion matrix with det(λI5−A) = (λ−λ1)^3 (λ−λ2)^2, λ1 ≠ λ2.

Find an invertible matrix U , J in Jordan form such that:AU=UJ

Since the geometrical multiplicity is one, then:


J =
[ λ1  1   0   |  0   0  ]
[ 0   λ1  1   |  0   0  ]
[ 0   0   λ1  |  0   0  ]
[ 0   0   0   |  λ2  1  ]
[ 0   0   0   |  0   λ2 ]

For sure!

U = [ v(λ1)  v′(λ1)  v″(λ1)/2!  v(λ2)  v′(λ2) ] =
[ 1      0       0       1      0      ]
[ λ1     1       0       λ2     1      ]
[ λ1^2   2λ1     1       λ2^2   2λ2    ]
[ λ1^3   3λ1^2   3λ1     λ2^3   3λ2^2  ]
[ λ1^4   4λ1^3   6λ1^2   λ2^4   4λ2^3  ]

x2 = a x1 + b x0
x3 = a x2 + b x1
x4 = a x3 + b x2
⋮
xn = a x_{n−1} + b x_{n−2}

xn = [ b a ] [ x_{n−2} ; x_{n−1} ]
x_{n−1} = [ 0 1 ] [ x_{n−2} ; x_{n−1} ]

[ x_{n−1} ; xn ] = [ 0 1 ; b a ] [ x_{n−2} ; x_{n−1} ]

Given x0, x1 we want a recipe for xn.

Denote A = [ 0 1 ; b a ] and u_j = [ x_j ; x_{j+1} ]:

u0 = [ x0 ; x1 ], u1 = [ x1 ; x2 ] = A u0, u2 = A u1 = A^2 u0, u3 = A u2 = A^3 u0


un=Anu0

So we take A, and we want to write it in a form such that A=UJU−1

un = U J^n U^{−1} u0

det(λI2 − A) = ? This is a companion matrix!

So let’s change the notation so that A = [ 0 1 ; −a0 −a1 ] (i.e. a0 = −b, a1 = −a).

So det(λI2 − A) = a0 + a1 λ + λ^2 = −b − aλ + λ^2

det(λI2 − A) =
(λ−λ1)^2 ⇒ J = [ λ1 0 ; 0 λ1 ] or [ λ1 1 ; 0 λ1 ]
(λ−λ1)(λ−λ2) ⇒ J = [ λ1 0 ; 0 λ2 ]

But the case [ λ1 0 ; 0 λ1 ] cannot happen, since the geometric multiplicity is 1!

In the first case:

un = [ 1 0 ; λ1 1 ] [ λ1 1 ; 0 λ1 ]^n [ 1 0 ; λ1 1 ]^{−1} u0

Or in the second case:

un = [ 1 1 ; λ1 λ2 ] [ λ1 0 ; 0 λ2 ]^n [ 1 1 ; λ1 λ2 ]^{−1} u0
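The whole recipe u_n = A^n u_0 can be exercised on the Fibonacci recurrence (a = b = 1), a hypothetical test case:

```python
import numpy as np

a, b = 1, 1                       # x_n = a x_{n-1} + b x_{n-2}  (Fibonacci)
A = np.array([[0, 1], [b, a]])
x0, x1 = 0, 1

def x(n):
    """n-th term via u_n = A^n u_0 with u_0 = (x0, x1)."""
    u = np.linalg.matrix_power(A, n) @ np.array([x0, x1])
    return int(u[0])

assert [x(n) for n in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]
```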

--- end of lesson


A =
[ 0   1   0   0   0  ]
[ 0   0   1   0   0  ]
[ 0   0   0   1   0  ]
[ 0   0   0   0   1  ]
[ −a0 −a1 −a2 −a3 −a4 ]
is a 5×5 companion matrix.

(1) det(λI5 − A) = a0 + a1 λ + … + a4 λ^4 + λ^5

(2) If λ_j is an eigenvalue of A, then γ_j = geometric multiplicity = 1 ⇒ only one Jordan cell connected with λ_j

Example: det(λI5 − A) = (λ−λ1)^3 (λ−λ2)^2, λ1 ≠ λ2

Suppose we know that:
x3 = a x0 + b x1 + c x2
x4 = a x1 + b x2 + c x3
⋮
x_{n+3} = a x_n + b x_{n+1} + c x_{n+2}

x3 = [ a b c ] [ x0 ; x1 ; x2 ]

[ x1 ; x2 ; x3 ] = [ 0 1 0 ; 0 0 1 ; a b c ] [ x0 ; x1 ; x2 ]

v_j = [ x_j ; x_{j+1} ; x_{j+2} ]

v1 = A v0, v2 = A v1 = A^2 v0, …, vn = A^n v0

[ xn ; x_{n+1} ; x_{n+2} ] = A^n [ x0 ; x1 ; x2 ]

A is a companion matrix!


det(λI3 − A) = a0 + a1 λ + a2 λ^2 + λ^3 = −a − bλ − cλ^2 + λ^3

3 cases:

det(λI3 − A) =
(λ−λ1)(λ−λ2)(λ−λ3) — 3 distinct eigenvalues
(λ−λ1)^2 (λ−λ2) — 2 distinct eigenvalues
(λ−λ1)^3 — 1 distinct eigenvalue

(1) J^n = [ λ1^n 0 0 ; 0 λ2^n 0 ; 0 0 λ3^n ], A = U J U^{−1}

[ xn ; x_{n+1} ; x_{n+2} ] = vn = U J^n U^{−1} v0

xn = [1 0 0] vn = [1 0 0] U J^n U^{−1} v0 = [α β γ] [ λ1^n 0 0 ; 0 λ2^n 0 ; 0 0 λ3^n ] [ d ; e ; f ] = αd λ1^n + βe λ2^n + γf λ3^n

(writing [α β γ] = [1 0 0]U and [d ; e ; f] = U^{−1} v0). So

xn = α̃ λ1^n + β̃ λ2^n + γ̃ λ3^n

We may assume a ≠ 0; otherwise we are talking about second order and not third order!

So assume a≠0

x0 = α̃ + β̃ + γ̃
x1 = α̃ λ1 + β̃ λ2 + γ̃ λ3
x2 = α̃ λ1^2 + β̃ λ2^2 + γ̃ λ3^2

[ x0 ; x1 ; x2 ] = [ 1 1 1 ; λ1 λ2 λ3 ; λ1^2 λ2^2 λ3^2 ] [ α̃ ; β̃ ; γ̃ ]

(2) J = [ λ1 1 0 ; 0 λ1 0 ; 0 0 λ2 ]

(3) J = [ λ1 1 0 ; 0 λ1 1 ; 0 0 λ1 ]

Detour: Analyze Jn for case (3)


J = λ1 I3 + N, N = [ 0 1 0 ; 0 0 1 ; 0 0 0 ]

J^k = (λ1 I3 + N)^k = Σ_{j=0}^{k} (k choose j) (λ1 I3)^{k−j} N^j = (k choose 0) λ1^k I + (k choose 1) λ1^{k−1} N + (k choose 2) λ1^{k−2} N^2 =

[ λ1^k   (k choose 1) λ1^{k−1}   (k choose 2) λ1^{k−2} ]
[ 0      λ1^k                    (k choose 1) λ1^{k−1} ]
[ 0      0                       λ1^k                  ]

(the series stops because N^3 = 0).

And for case 2:

xn = [1 0 0] U [ λ1^n  n λ1^{n−1}  0 ; 0  λ1^n  0 ; 0  0  λ2^n ] U^{−1} v0 = [α β γ] [ d λ1^n + e n λ1^{n−1} ; e λ1^n ; f λ2^n ] = αd λ1^n + αe n λ1^{n−1} + βe λ1^n + γf λ2^n

(again [α β γ] = [1 0 0]U and [d ; e ; f] = U^{−1} v0)

xn = α̃ λ1^n + β̃ n λ1^n + γ̃ λ2^n

i.e. xn = g λ1^n + h n λ1^n + l λ2^n

x0 = g + l
x1 = g λ1 + h λ1 + l λ2
x2 = g λ1^2 + h 2 λ1^2 + l λ2^2

[ x0 ; x1 ; x2 ] = [ 1 0 1 ; λ1 λ1 λ2 ; λ1^2 2λ1^2 λ2^2 ] [ g ; h ; l ]

Go back to case 3:

xn = [1 0 0] U [ λ1^n  n λ1^{n−1}  (n(n−1)/2) λ1^{n−2} ; 0  λ1^n  n λ1^{n−1} ; 0  0  λ1^n ] U^{−1} v0

xn = g λ1^n + h n λ1^n + l n^2 λ1^n

x0 = g
x1 = g λ1 + h λ1 + l λ1
x2 = g λ1^2 + h 2 λ1^2 + l 4 λ1^2


[ x0 ; x1 ; x2 ] = [ 1 0 0 ; λ1 λ1 λ1 ; λ1^2 2λ1^2 4λ1^2 ] [ g ; h ; l ]

General Algorithm

x5 = a x0 + b x1 + c x2 + d x3 + e x4, a ≠ 0
x_{n+5} = a x_n + b x_{n+1} + c x_{n+2} + d x_{n+3} + e x_{n+4}

(1) Replace x_j by λ^j: λ^{n+5} = a λ^n + b λ^{n+1} + … + e λ^{n+4} ⇒ p(λ) = λ^5 − a − bλ − cλ^2 − dλ^3 − eλ^4 = 0

(2) Calculate the roots of the polynomial. This will give you the eigenvalues of A.

(3) Suppose p(λ) = (λ−λ1)^3 (λ−λ2)^2. Then

xn = α λ1^n + β n λ1^n + γ n^2 λ1^n + δ λ2^n + ϵ n λ2^n

(4) Obtain α ,…,ϵ from the given x0 , x1 , x2 , x3 , x4
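A sketch of the algorithm on a hypothetical second-order example with a double root, x_{n+2} = 4x_{n+1} − 4x_n, where p(λ) = λ^2 − 4λ + 4 = (λ−2)^2 and so x_n = (α + βn)·2^n:

```python
import numpy as np

x0, x1 = 1, 4
lam = 2.0

# Solve for α, β from the initial conditions:
# n = 0: α;  n = 1: α λ + β λ
M = np.array([[1.0, 0.0],
              [lam, lam]])
alpha, beta = np.linalg.solve(M, np.array([x0, x1], dtype=float))

def x_closed(n):
    return (alpha + beta * n) * lam ** n

# Compare against direct iteration of the recurrence.
xs = [x0, x1]
for n in range(10):
    xs.append(4 * xs[-1] - 4 * xs[-2])
assert all(np.isclose(x_closed(n), xs[n]) for n in range(12))
```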

Discrete Dynamical Systems

v1 = A v0
v2 = A v1
⋮
vn = A^n v0

A = UJU^{−1}

vn = U J^n U^{−1} v0

In such a problem it is often easier to calculate U^{−1} v0 in one step, as opposed to calculating U^{−1} and then U^{−1} v0: set w0 = U^{−1} v0 and solve U w0 = v0.

Example: solve w = B^{−1} u0 where

B = [ 6 2 2 ; 0 3 1 ; 0 0 1 ], u0 = [ 6 ; 0 ; 0 ]


B w = u0

[ 6 2 2 ; 0 3 1 ; 0 0 1 ] [ w1 ; w2 ; w3 ] = [ 6 ; 0 ; 0 ] ⇒ w3 = 0, w2 = 0, w1 = 1, i.e. w = [ 1 ; 0 ; 0 ]

J = [ λ1 1 0 ; 0 λ1 1 ; 0 0 λ1 ]

vn = U [ λ1^n  n λ1^{n−1}  (n(n−1)/2) λ1^{n−2} ; 0  λ1^n  n λ1^{n−1} ; 0  0  λ1^n ] U^{−1} v0

Evaluate:

vn / (λ1^n n^2) = U [ 1/n^2   1/(n λ1)   ((n−1)/(2n)) · (1/λ1^2) ; 0   1/n^2   1/(n λ1) ; 0   0   1/n^2 ] U^{−1} v0

Differential Equations

x″(t) = a x′(t) + b x(t)

Let x1(t) = x(t), x2(t) = x′(t). Then:

[ x1 ; x2 ]′ = [ x1′(t) ; x2′(t) ] = [ 0 1 ; b a ] [ x1 ; x2 ]

i.e. x′(t) = A x(t) with A = [ 0 1 ; b a ].


e^A = I + A + A^2/2! + A^3/3! + …

e^{tA} = I + tA + t^2 A^2 / 2! + …

e^A e^B = e^{A+B} if AB = BA!

e^{A+B} = Σ_{k=0}^{∞} (A+B)^k / k! = ( Σ_j A^j / j! ) ( Σ_l B^l / l! ) = …

lim_{ϵ→0} ( e^{(t+ϵ)A} − e^{tA} ) / ϵ = lim_{ϵ→0} ( e^{tA} e^{ϵA} − e^{tA} ) / ϵ = (by the explanation below) lim_{ϵ→0} e^{tA} ( e^{ϵA} − I ) / ϵ = e^{tA} A = A e^{tA}

Explanation:

e^{ϵA} = I + ϵA + ϵ^2 A^2 / 2! + …

( e^{ϵA} − I ) / ϵ = A + ϵ A^2 / 2! + … → A as ϵ → 0

x(t) = e^{tA} u, u constant
x′(t) = A e^{tA} u = A x(t)
x(t) = e^{tA} x(0)

If A = UJU^{−1}, claim:

e^{tA} = Σ_{k=0}^{∞} (tA)^k / k! = Σ_{k=0}^{∞} (t UJU^{−1})^k / k! = Σ_{k=0}^{∞} U (tJ)^k U^{−1} / k! = U ( Σ (tJ)^k / k! ) U^{−1} = U e^{tJ} U^{−1}

x⃗(t) = [ x1(t) ; x2(t) ] = U e^{tJ} U^{−1} x⃗(0)

x(t) = [1 0] x⃗(t) = [1 0] U e^{tJ} U^{−1} x⃗(0)

Case 1: det(λI2−A) = (λ−λ1)(λ−λ2), λ1 ≠ λ2, J = [ λ1 0 ; 0 λ2 ]

e^{tJ} = [ e^{λ1 t} 0 ; 0 e^{λ2 t} ] ⇒ x(t) = α e^{λ1 t} + β e^{λ2 t}

Case 2: det (λ I2−A )= (λ−λ1 )2


J = [ λ1 0 ; 0 λ1 ] or [ λ1 1 ; 0 λ1 ]. But the first form cannot occur (geometric multiplicity 1)! So we only have the second form.

e^{tJ} = e^{t(λ1 I + N)} = e^{λ1 t} I · e^{tN}

But N^2 is zero! So e^{tN} = I + tN. Therefore e^{tJ} = e^{λ1 t} (I + tN).
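The closed form e^{tJ} = e^{λ1 t}(I + tN) for a 2×2 Jordan cell can be checked against the power-series definition of the exponential. A NumPy sketch with hypothetical values of λ1 and t:

```python
import numpy as np

lam, t = 0.5, 2.0
N = np.array([[0.0, 1.0], [0.0, 0.0]])
J = lam * np.eye(2) + N

# Closed form: e^{tJ} = e^{λ1 t} (I + tN), since N^2 = 0.
closed = np.exp(lam * t) * (np.eye(2) + t * N)

# Truncated power series Σ (tJ)^k / k! for comparison.
series = np.zeros((2, 2))
term = np.eye(2)
for k in range(1, 30):
    series += term
    term = term @ (t * J) / k
assert np.allclose(closed, series)
```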

--- end of lesson


e^A = Σ_{k=0}^{∞} A^k / k!, e^A e^B = e^{A+B} if AB = BA, and if f(t) = e^{tA} v then f′(t) = A e^{tA} v = A f(t).

Find the solution of:

a0 x(t) + a1 x′(t) + a2 x″(t) + a3 x‴(t) + x^{(4)}(t) = 0, 0 ≤ t ≤ d

x(0) = b1, x′(0) = b2, x″(0) = b3, x‴(0) = b4

Strategy: Set x1(t) = x(t), x2(t) = x′(t), x3(t) = x″(t), x4(t) = x‴(t).

x⃗(t) = [ x1 ; x2 ; x3 ; x4 ] ⇒ x⃗′(t) = [ x1′(t) ; x2′(t) ; x3′(t) ; x4′(t) ] = [ x2 ; x3 ; x4 ; −a0 x1 − a1 x2 − a2 x3 − a3 x4 ]

(since x1 = x, x1′ = x′ = x2, x2′ = x″ = x3, and so on.)

So in matrix notation:

x⃗′(t) =
[ 0   1   0   0  ]
[ 0   0   1   0  ]
[ 0   0   0   1  ]
[ −a0 −a1 −a2 −a3 ]
[ x1 ; x2 ; x3 ; x4 ]

x⃗′(t) = A x⃗(t) ⇒ x⃗(t) = e^{tA} x⃗(0)


x (t )= [1 0 0 0 ]e tA x (0 )

A=UJU−1 where J is of Jordan form.

(1) Calculate the eigenvalues of A. These are the roots of the polynomial det(λI−A) = a0 + a1 λ + a2 λ^2 + a3 λ^3 + λ^4.

If all are different, then J is diagonal!

A = UJU^{−1}, e^{tA} = U e^{tJ} U^{−1}

x⃗(t) = U e^{tJ} U^{−1} x⃗(0) is then a combination of e^{λ1 t} u1, e^{λ2 t} u2, e^{λ3 t} u3, e^{λ4 t} u4 (the columns of U), so x(t) = a e^{λ1 t} + b e^{λ2 t} + c e^{λ3 t} + d e^{λ4 t}

1. Find the “eigenvalues” by substituting e^{λt} into the given differential equation and then dividing through by e^{λt}:

a0 e^{λt} + a1 λ e^{λt} + a2 λ^2 e^{λt} + a3 λ^3 e^{λt} + λ^4 e^{λt} = 0

After we cross out e^{λt} we get the same polynomial as we got from the determinant!

2. Find the roots. Case (1) – 4 distinct roots λ1, λ2, λ3, λ4:
x(t) = α e^{λ1 t} + β e^{λ2 t} + γ e^{λ3 t} + δ e^{λ4 t}

3. Calculate α, β, γ, δ from the given initial conditions.

2. Case (2) – a0 + a1 λ + … + λ^4 = (λ−λ1)^4:

J = [ λ1 1 0 0 ; 0 λ1 1 0 ; 0 0 λ1 1 ; 0 0 0 λ1 ]

e^{tJ} = e^{t(λ1 I + N)}, where N = [ 0 1 0 0 ; 0 0 1 0 ; 0 0 0 1 ; 0 0 0 0 ]

So N^2 = [ 0 0 1 0 ; 0 0 0 1 ; 0 0 0 0 ; 0 0 0 0 ], N^3 = [ 0 0 0 1 ; 0 0 0 0 ; 0 0 0 0 ; 0 0 0 0 ], N^4 = 0


So:

e^{tJ} = e^{t(λ1 I + N)} = e^{λ1 t} e^{tN} = e^{λ1 t} ( I + tN + t^2 N^2/2! + t^3 N^3/3! ) = e^{λ1 t}
[ 1  t  t^2/2  t^3/3! ]
[ 0  1  t      t^2/2  ]
[ 0  0  1      t      ]
[ 0  0  0      1      ]

x(t) = [1 0 0 0] U e^{tJ} U^{−1} x⃗(0) = α e^{λ1 t} + β t e^{λ1 t} + γ t^2 e^{λ1 t} + δ t^3 e^{λ1 t}

2. Case (3) – a0 + … + λ^4 = (λ−λ1)^3 (λ−λ2), λ1 ≠ λ2:
x(t) = α e^{λ1 t} + β t e^{λ1 t} + γ t^2 e^{λ1 t} + δ e^{λ2 t}

So far everything was homogeneous. But what happens if not?

a0 x(t) + a1 x′(t) + x″(t) = f(t), t ≥ 0

x1 = x, x2 = x1′

[ x1 ; x2 ]′ = [ x2 ; f(t) − a0 x1 − a1 x2 ] = [ 0 1 ; −a0 −a1 ] [ x1 ; x2 ] + [ 0 ; f(t) ]

i.e. x⃗′(t) = A x⃗(t) + B u(t)

(e^{−tA} x)′ = −A e^{−tA} x + e^{−tA} x′ = e^{−tA} (x′ − Ax) = e^{−tA} B u(t)

d/ds ( e^{−sA} x(s) ) = e^{−sA} B u(s)

We can integrate both sides!

∫_0^t d/ds ( e^{−sA} x(s) ) ds = ∫_0^t e^{−sA} B u(s) ds

e^{−sA} x(s) |_0^t = ∫_0^t e^{−sA} B u(s) ds

e^{−tA} x(t) − x(0) = ∫_0^t e^{−sA} B u(s) ds ⇒ x(t) = e^{tA} x(0) + e^{tA} ( ∫_0^t e^{−sA} B u(s) ds )

Normed Linear Spaces

Before we actually go into normed linear spaces, we need to prove some inequalities.


Inequalities

(1) a > 0, b > 0, t > 1, s > 1, 1/s + 1/t = 1 ⇒ ab ≤ a^s/s + b^t/t

Proof sketch: s + t = ts ⇒ ts − s − t + 1 = 1 ⇒ (t−1)(s−1) = 1.

The following are equivalent for a pair of numbers t, s with t > 1, s > 1:
1/s + 1/t = 1
(t−1)(s−1) = 1
(t−1)s = t
(s−1)t = s

TODO: Draw graph of y = x^{s−1}

2. (Hölder’s Inequality) If s > 1, t > 1 and 1/s + 1/t = 1:

Σ_{j=1}^{k} |a_j b_j| ≤ ( Σ_{j=1}^{k} |a_j|^s )^{1/s} · ( Σ_{j=1}^{k} |b_j|^t )^{1/t}

Inequality is obvious if right hand side is 0.

It suffices to focus on a case where the right hand side is strictly positive.

α_j = a_j / ( Σ_{i=1}^{k} |a_i|^s )^{1/s}, β_j = b_j / ( Σ_{i=1}^{k} |b_i|^t )^{1/t}

By inequality (1):

Σ_{j=1}^{k} |α_j β_j| ≤ Σ_{j=1}^{k} |α_j|^s / s + Σ_{j=1}^{k} |β_j|^t / t = 1/s + 1/t = 1

since

Σ_{j=1}^{k} |α_j|^s = Σ_{j=1}^{k} |a_j|^s / Σ_{i=1}^{k} |a_i|^s = 1

and similarly Σ |β_j|^t = 1.

So

Σ_{j=1}^{k} ( |a_j| / ( Σ_{i=1}^{k} |a_i|^s )^{1/s} ) · ( |b_j| / ( Σ_{l=1}^{k} |b_l|^t )^{1/t} ) ≤ 1 ⇒

Σ_{j=1}^{k} |a_j b_j| ≤ ( Σ_{j=1}^{k} |a_j|^s )^{1/s} · ( Σ_{j=1}^{k} |b_j|^t )^{1/t}

So it’s proven.

Let’s note something extra: trivially, Σ_j |a_j b_j| ≤ ( Σ_i |a_i| ) · ( Σ_i |b_i| ).

3. Schwarz inequality:

Σ_{j=1}^{k} |a_j b_j| ≤ √( Σ_{j=1}^{k} |a_j|^2 ) · √( Σ_{j=1}^{k} |b_j|^2 )

4. (Minkowski inequality) If s ≥ 1, then

( Σ_{j=1}^{k} |a_j + b_j|^s )^{1/s} ≤ ( Σ_{j=1}^{k} |a_j|^s )^{1/s} + ( Σ_{j=1}^{k} |b_j|^s )^{1/s}

If s = 1, it’s easy. Suppose s > 1 and consider:

Σ_{j=1}^{k} |a_j + b_j|^s = Σ_{j=1}^{k} |a_j + b_j| · |a_j + b_j|^{s−1} ≤ Σ_{j=1}^{k} ( |a_j| + |b_j| ) · |a_j + b_j|^{s−1} = Σ_{j=1}^{k} |a_j| |a_j + b_j|^{s−1} (1) + Σ_{j=1}^{k} |b_j| |a_j + b_j|^{s−1} (2)

By Hölder’s inequality, with 1/s + 1/t = 1 (so that (s−1)t = s):

(1) ≤ ( Σ |a_j|^s )^{1/s} · ( Σ_{j=1}^{k} |a_j + b_j|^{(s−1)t} )^{1/t} = ( Σ |a_j|^s )^{1/s} · ( Σ_{j=1}^{k} |a_j + b_j|^s )^{1/t}

(2) ≤ ( Σ |b_j|^s )^{1/s} · ( Σ_{j=1}^{k} |a_j + b_j|^s )^{1/t}

Σ_{j=1}^{k} |a_j + b_j|^s ≤ { ( Σ |a_j|^s )^{1/s} + ( Σ |b_j|^s )^{1/s} } ( Σ_{j=1}^{k} |a_j + b_j|^s )^{1/t}

Dividing both sides by ( Σ |a_j + b_j|^s )^{1/t} and using 1 − 1/t = 1/s gives Minkowski’s inequality.
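Both inequalities are easy to probe numerically on random data. A pure-Python sketch (s = 3, t = 3/2 so that 1/s + 1/t = 1):

```python
import random

random.seed(1)
a = [random.uniform(-1, 1) for _ in range(10)]
b = [random.uniform(-1, 1) for _ in range(10)]
s, t = 3.0, 1.5          # 1/3 + 2/3 = 1

def p_norm(v, p):
    return sum(abs(x) ** p for x in v) ** (1 / p)

# Hölder:  Σ|a_j b_j| <= ||a||_s ||b||_t
assert sum(abs(x * y) for x, y in zip(a, b)) <= p_norm(a, s) * p_norm(b, t)

# Minkowski (triangle inequality for the s-norm):
assert p_norm([x + y for x, y in zip(a, b)], s) <= p_norm(a, s) + p_norm(b, s)
```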

--- end of lesson


Normed Linear Spaces (NLS)

Hölder’s inequality: if s > 1, t > 1 and 1/s + 1/t = 1:

Σ_{j=1}^{k} |a_j b_j| ≤ ( Σ_{j=1}^{k} |a_j|^s )^{1/s} · ( Σ_{j=1}^{k} |b_j|^t )^{1/t}

Cauchy–Schwarz inequality:

Σ_{j=1}^{k} |a_j b_j| ≤ √( Σ_{j=1}^{k} |a_j|^2 ) · √( Σ_{j=1}^{k} |b_j|^2 )

(Minkowski inequality) If s ≥ 1, then

( Σ_{j=1}^{k} |a_j + b_j|^s )^{1/s} ≤ ( Σ_{j=1}^{k} |a_j|^s )^{1/s} + ( Σ_{j=1}^{k} |b_j|^s )^{1/s}

A vector space V is said to be a normed-linear space over F if for each vector v∈V there is φ ( v ) with the following properties:

1) φ(v) ≥ 0
2) φ(v) = 0 ⇔ v = 0
3) φ(αv) = |α| · φ(v)
4) φ(u+v) ≤ φ(u) + φ(v) (triangle inequality)

Example 1:

V = C^n, fix s ≥ 1. Define φ(x) = ( Σ_{i=1}^{n} |x_i|^s )^{1/s}.

Disclaimer: I don’t have the mental forces to copy the proof why φ has all properties.

Example 2: V = C^n. Define φ(x) = max { |x1|, …, |xn| }.

Disclaimer 2: See disclaimer 1.

A fundamental fact: on a finite dimensional normed linear space all norms are equivalent, i.e. if φ(x) is a norm and ψ(x) is a norm, then one can find constants α1 > 0, α2 > 0 such that:

α1 φ(x) ≤ ψ(x) ≤ α2 φ(x)

It might seem not symmetric. But in fact, the statement implies:


φ(x) ≤ ψ(x)/α1, φ(x) ≥ ψ(x)/α2

So: ψ(x)/α2 ≤ φ(x) ≤ ψ(x)/α1

The three most important norms are ‖x‖1, ‖x‖2, ‖x‖∞, and

‖x‖∞ ≤ ‖x‖2 ≤ ‖x‖1 ≤ γ ‖x‖∞ (for some constant γ)

For x = [ x1 ; ⋮ ; xn ]:

|x_i| = √(|x_i|^2) ≤ √( Σ_{j=1}^{n} |x_j|^2 ) ⇒ |x_i| ≤ ‖x‖2 ⇒ max |x_i| ≤ ‖x‖2

(‖x‖2)^2 = |x1|^2 + … + |xn|^2 ≤ |x1|(|x1| + … + |xn|) + … + |xn|(|x1| + … + |xn|) = (|x1| + … + |xn|)^2

Since everything is non-negative we may take the square root, and therefore ‖x‖2 ≤ ‖x‖1.

‖x‖1 = Σ_{i=1}^{n} |x_i| ≤ Σ_{i=1}^{n} ‖x‖∞ = n ‖x‖∞

(so in our case γ = n)

‖x‖s = ( Σ_{i=1}^{n} |x_i|^s )^{1/s}, 1 ≤ s < ∞

‖x‖∞ = max |x_i|
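The chain ‖x‖∞ ≤ ‖x‖2 ≤ ‖x‖1 ≤ n‖x‖∞ can be checked on a random vector. A pure-Python sketch:

```python
import math
import random

random.seed(2)
x = [random.uniform(-5, 5) for _ in range(7)]
n = len(x)

norm_1 = sum(abs(v) for v in x)
norm_2 = math.sqrt(sum(v * v for v in x))
norm_inf = max(abs(v) for v in x)

# ||x||_inf <= ||x||_2 <= ||x||_1 <= n ||x||_inf
assert norm_inf <= norm_2 <= norm_1 <= n * norm_inf
```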

If V is a vector space of functions defined on a finite interval a ≤ t ≤ b:

‖f‖s = ( ∫_a^b |f(t)|^s dt )^{1/s}

‖f‖∞ = max |f(t)|, a ≤ t ≤ b

Matrix Norms (Transformation Norms)

Let A ∈ C^{p×q}, ‖A‖ = max { ‖Ax‖2 / ‖x‖2 : x ∈ C^q, x ≠ 0 }

A word of caution about “max”: consider the interval 0 ≤ x < 1.


What is the maximum value of x in this interval? There’s no maximum value.

Lemma: Let A ∈ C^{p×q} and let

α1 = max { ‖Ax‖2 / ‖x‖2 : x ∈ C^q, x ≠ 0 }
α2 = max { ‖Ax‖2 : x ∈ C^q, ‖x‖2 = 1 }
α3 = max { ‖Ax‖2 : x ∈ C^q, ‖x‖2 ≤ 1 }

Then α1 = α2 = α3.

Take x ∈ C^q, x ≠ 0 and consider ‖Ax‖2 / ‖x‖2:

‖Ax‖2 / ‖x‖2 = ‖A (x/‖x‖2)‖2 ≤ α2 ≤ α3

since ‖ x/‖x‖2 ‖2 = 1. So we get α1 ≤ α2 ≤ α3.

Conversely, take x ∈ C^q, ‖x‖2 ≤ 1, x ≠ 0:

‖Ax‖2 = ( ‖Ax‖2 / ‖x‖2 ) · ‖x‖2 ≤ ‖Ax‖2 / ‖x‖2 ≤ α1

And x = 0 is a special case where the inequality holds as well. So α3 ≤ α1, and we get equality throughout.

C^{p×q} is a normed linear space with respect to the norm ‖A‖ = max { ‖Ax‖2 / ‖x‖2 : x ∈ C^q, x ≠ 0 }:

It’s clear φ(A) ≥ 0.
φ(A) = 0 ⇒ Ax = 0 for every vector x ⇒ A = 0.
φ(αA) = |α| φ(A).

Lemma: ‖Ax‖2 ≤ α1 ‖x‖2

Proof: for x ≠ 0, ‖Ax‖2 / ‖x‖2 ≤ α1 ⇒ ‖Ax‖2 ≤ α1 ‖x‖2. If x = 0 then it’s also true. So true for all x.


For x ≠ 0, ‖(A+B)x‖2 / ‖x‖2 ≤ ‖Ax‖2/‖x‖2 + ‖Bx‖2/‖x‖2 ≤ φ(A) + φ(B), so φ(A+B) ≤ φ(A) + φ(B).

C^{p×q} is a normed linear space with respect to the norm:

‖A‖ = max { ‖Ax‖2 / ‖x‖2 : x ≠ 0 }

Can also show (in a very similar manner) for:

‖A‖_{s,s} = max { ‖Ax‖s / ‖x‖s : x ≠ 0 }, ‖A‖_{s,t} = max { ‖Ax‖t / ‖x‖s : x ≠ 0 }

Lemma: Let A ∈ C^{p×q}, B ∈ C^{q×r}. Then ‖AB‖ ≤ ‖A‖ · ‖B‖.

Proof: ‖ABx‖2 ≤ ‖A‖ ‖Bx‖2 ≤ ‖A‖ ‖B‖ ‖x‖2. If x ≠ 0, ‖ABx‖2/‖x‖2 ≤ ‖A‖‖B‖ ⇒ ‖AB‖ ≤ ‖A‖‖B‖.

Theorem: Let A ∈ C^{n×n} be invertible. If B ∈ C^{n×n} and B is “close enough” to A, then B is invertible.

Proof: Write B = A − (A−B) = A ( I − A^{−1}(A−B) ) = A ( I − X ), X = A^{−1}(A−B).

If ‖X‖ < 1, then I − X is invertible:

Enough to show N_{I−X} = {0}. Let u ∈ N_{I−X} ⇒ (I−X)u = 0 ⇒ u = Xu ⇒ ‖u‖2 = ‖Xu‖2 ≤ ‖X‖ ‖u‖2 ⇒ (1 − ‖X‖) ‖u‖2 ≤ 0 ⇒ ‖u‖2 ≤ 0. So ‖u‖ = 0 ⇒ u = 0.

--- end of lesson


NLS – normed linear spaces

A ∈ C^{p×q}, B ∈ C^{q×r}:

‖AB‖_{2,2} ≤ ‖A‖_{2,2} · ‖B‖_{2,2}, ‖Ax‖2 ≤ ‖A‖_{2,2} · ‖x‖2

‖A‖_{s,t} = max { ‖Ax‖t / ‖x‖s : x ≠ 0 }

‖A‖_{2,2} = max { ‖Ax‖2 : ‖x‖2 = 1 } = max { ‖Ax‖2 : ‖x‖2 ≤ 1 }

Theorem: If A ∈ C^{p×p} is invertible and B ∈ C^{p×p} is close enough to A, then B is also invertible.

First idea: if X ∈ C^{p×p}, ‖X‖ < 1, then I_p − X is invertible.

Let u ∈ N_{Ip−X} ⇒ (I_p − X)u = 0 ⇒ u = Xu

⇒ ‖u‖ = ‖Xu‖ ≤ ‖X‖·‖u‖ ⇒ (1−‖X‖)‖u‖ ≤ 0 ⇒ ‖u‖ ≤ 0 ⇒ u = 0

For α ∈ C:

S_n = 1 + α + α^2 + … + α^n
α S_n = α + α^2 + … + α^n + α^{n+1}
(1−α) S_n = 1 − α^{n+1}

S_n = 1 + α + … + α^n = (1 − α^{n+1}) / (1−α) → 1/(1−α)

and if |α| < 1: Σ_{j=0}^{∞} α^j = 1/(1−α)

The same works for matrices: as n ↑ ∞, X^{n+1} → 0 (i.e. ‖X^{n+1} − 0‖ → 0), since ‖X^{n+1}‖ ≤ ‖X‖^{n+1}.

If ‖X‖ < 1 ⇒ (I − X) S_n = I_p − X^{n+1} → I_p as n ↑ ∞, where S_n = Σ_{j=0}^{n} X^j. So

(I − X)^{−1} = Σ_{j=0}^{∞} X^j, i.e. ‖ Σ_{j=0}^{n} X^j − (I_p − X)^{−1} ‖ → 0

Theorem: If A ∈ C^{p×q} is left invertible and B ∈ C^{p×q} is close enough to A, then B is left invertible.
Proof: A is left invertible ⇒ there is C ∈ C^{q×p} such that CA = I_q.


B = A − (A−B) ⇒ CB = CA − C(A−B) = I_q − X, where X = C(A−B) ∈ C^{q×q}

If ‖X‖ < 1 ⇒ I_q − X is invertible.

Doing that, we find (I_q − X)^{−1} C B = I_q

⇒ (I_q − X)^{−1} C is a left inverse of B, i.e. B is left invertible.

X = C(A−B) ⇒ ‖X‖ ≤ ‖C‖ · ‖A−B‖, so if ‖A−B‖ < 1/‖C‖ then ‖X‖ < 1.

We can then prove symmetrically that if it is right invertible then any close enough B is also right invertible.

So for A ∈ C^{p×q} with rank A = p or rank A = q: any B close enough to A has the same rank as A.

Question: if rank A = k, 1 ≤ k < min{p, q}, can we prove that if B is close enough to A then rank B = k?

Observation: for A ∈ C^{p×q}, A = [a_ij], we have |a_ij| ≤ ‖A‖_{2,2} ≤ √( Σ_{i,j} |a_ij|^2 ), hence |a_ij − b_ij| ≤ ‖A−B‖_{2,2}.

Proof: by Cauchy–Schwarz on each row,

‖Ax‖2^2 = Σ_{i=1}^{p} | Σ_{j=1}^{q} a_ij x_j |^2 ≤ Σ_{i=1}^{p} { ( Σ_{j=1}^{q} |a_ij|^2 )^{1/2} · ( Σ_{j=1}^{q} |x_j|^2 )^{1/2} }^2 = ‖x‖2^2 Σ_{i=1}^{p} Σ_{j=1}^{q} |a_ij|^2

But ‖A‖_{2,2} = max { ‖Ax‖2 : ‖x‖2 = 1 }, so ‖A‖_{2,2} ≤ √( Σ_{i,j} |a_ij|^2 ).

For the lower bound: ‖A e_l‖2 ≤ ‖A‖_{2,2} · ‖e_l‖2 = ‖A‖_{2,2} · 1, and ‖A e_l‖2 = √( Σ_{i=1}^{p} |a_il|^2 ) ≥ |a_il|, so |a_il| ≤ ‖A‖_{2,2}.

Useful fact: Let φ(x) be a norm on a NLS X over F. Then |φ(x) − φ(y)| ≤ φ(x−y). Or in other notation: | ‖x‖ − ‖y‖ | ≤ ‖x−y‖.


φ(x) = φ(x−y+y) ≤ φ(x−y) + φ(y) ⇒ φ(x) − φ(y) ≤ φ(x−y)

But the same argument applies to y−x, so we can also claim φ(y) − φ(x) ≤ φ(y−x).

At least one of the two differences is non-negative, and that one is the absolute value.

A ∈ C^{p×q}, ‖A‖_{2,2} = …

There are other ways of making C^{p×q} a normed linear space. For instance, one can also define ‖A‖_s = { Σ_{i,j} |a_ij|^s }^{1/s}.

This is a valid definition of a norm! But then we lose the fact that ‖AB‖_s ≤ ‖A‖_s · ‖B‖_s!

Inner Product Spaces

A vector space U over F is said to be an inner product space if there exists a number, which we’ll designate by ⟨u,v⟩_U, for every pair of vectors u, v ∈ U, with the following properties:

1) ⟨u,u⟩_U ≥ 0
2) ⟨u,u⟩_U = 0 ⇒ u = 0
3) ⟨u,v⟩ is linear in the first entry: ⟨u+w, v⟩_U = ⟨u,v⟩ + ⟨w,v⟩ and ⟨αu, v⟩ = α⟨u,v⟩
4) ⟨v,u⟩ = conj(⟨u,v⟩_U) (the complex conjugate)

Observations: ⟨0,u⟩ = 0 for any u ∈ U.
Proof: ⟨0,u⟩ = ⟨0+0, u⟩ = ⟨0,u⟩ + ⟨0,u⟩ ⇒ 0 = ⟨0,u⟩

⟨u,v⟩ is “conjugate linear” in the second entry:

⟨u, v+w⟩ = conj(⟨v+w, u⟩) = conj(⟨v,u⟩) + conj(⟨w,u⟩) = ⟨u,v⟩ + ⟨u,w⟩
⟨u, αv⟩ = conj(⟨αv, u⟩) = conj(α ⟨v,u⟩) = ᾱ ⟨u,v⟩

Examples:

1) C^n: ⟨u,v⟩ = Σ_{j=1}^{n} u_j conj(v_j)

2) C^n with weights ρ1, …, ρn all > 0: ⟨u,v⟩ = Σ_{j=1}^{n} ρ_j u_j conj(v_j)

3) C([a,b]) = continuous functions on [a,b]:

⟨f,g⟩ = ∫_a^b f(t) conj(g(t)) dt

Cauchy–Schwarz in inner product spaces

Let U be an inner product space over C. Then:

|⟨u,v⟩| ≤ ⟨u,u⟩^{1/2} · ⟨v,v⟩^{1/2}, with equality if and only if dim span{u,v} ≤ 1.

Proof: Let λ ∈ C and consider:

0 ≤ ⟨u+λv, u+λv⟩ = ⟨u,u⟩ + λ̄⟨u,v⟩ + λ⟨v,u⟩ + |λ|^2 ⟨v,v⟩

The inequality is clearly valid if v = 0, so we can restrict attention to the case ⟨v,v⟩ > 0 and ⟨u,v⟩ ≠ 0. Write ⟨u,v⟩ = r e^{iθ}, r ≥ 0, 0 ≤ θ < 2π, and choose λ = x e^{iθ}, x ∈ R:

⟨u + x e^{iθ} v, u + x e^{iθ} v⟩ = ⟨u,u⟩ + 2xr + x^2 ⟨v,v⟩ =: f(x)

f′(x) = 2x⟨v,v⟩ + 2r, and f′(x) = 0 ⇒ x = −r/⟨v,v⟩ = x0

0 ≤ f(x0) = ⟨u + x0 e^{iθ} v, u + x0 e^{iθ} v⟩ = ⟨u,u⟩ − r^2/⟨v,v⟩ ≤ ⟨u,u⟩

⇒ r^2 ≤ ⟨u,u⟩⟨v,v⟩, and r = |⟨u,v⟩|.

And there is equality iff ⟨u + x0 e^{iθ} v, u + x0 e^{iθ} v⟩ = 0. But this happens iff u + x0 e^{iθ} v = 0 ⇒ u and v are linearly dependent!

Claim: If U is an inner product space, then it is also a normed linear space with respect to φ(u) = ⟨u,u⟩^{1/2}.


Check:
1) φ(u) ≥ 0
2) φ(u) = 0 ⇒ u = 0
3) φ(αu) = |α| φ(u), α ∈ C
4) φ(u+v) ≤ φ(u) + φ(v) ??

φ(u+v)^2 = ⟨u+v, u+v⟩ = ⟨u,u⟩ + ⟨u,v⟩ + ⟨v,u⟩ + ⟨v,v⟩ ≤ ⟨u,u⟩ + 2⟨u,u⟩^{1/2}⟨v,v⟩^{1/2} + ⟨v,v⟩ = φ(u)^2 + 2φ(u)φ(v) + φ(v)^2 = { φ(u) + φ(v) }^2

φ(u+v) ≤ φ(u) + φ(v)

φ(u+v)^2 = ⟨u+v, u+v⟩ = ⟨u,u⟩ + ⟨u,v⟩ + ⟨v,u⟩ + ⟨v,v⟩
φ(u−v)^2 = ⟨u−v, u−v⟩ = ⟨u,u⟩ − ⟨u,v⟩ − ⟨v,u⟩ + ⟨v,v⟩
φ(u+v)^2 + φ(u−v)^2 = 2φ(u)^2 + 2φ(v)^2

So:

‖u+v‖^2 + ‖u−v‖^2 = 2‖u‖^2 + 2‖v‖^2

Parallelogram law!

If the norm in a NLS satisfies the parallelogram law, then we can define an inner product on that space by setting:

⟨u,v⟩ = (1/4) Σ_{j=1}^{4} i^j ‖u + i^j v‖^2

‖x‖s satisfies the parallelogram law ⇔ s = 2
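The last claim can be seen numerically: the parallelogram gap vanishes for s = 2 and not for s = 1. A pure-Python sketch with u = e1, v = e2:

```python
def p_norm(v, p):
    return sum(abs(x) ** p for x in v) ** (1 / p)

u, v = [1.0, 0.0], [0.0, 1.0]

def parallelogram_gap(p):
    """||u+v||^2 + ||u-v||^2 - 2||u||^2 - 2||v||^2 in the p-norm."""
    lhs = p_norm([a + b for a, b in zip(u, v)], p) ** 2 \
        + p_norm([a - b for a, b in zip(u, v)], p) ** 2
    rhs = 2 * p_norm(u, p) ** 2 + 2 * p_norm(v, p) ** 2
    return lhs - rhs

assert abs(parallelogram_gap(2)) < 1e-12   # holds for s = 2
assert abs(parallelogram_gap(1)) > 1e-6    # fails for s = 1
```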

--- end of lesson


An inner product space over F is a vector space over F plus a rule that associates a number ⟨u,v⟩ ∈ F to each pair of vectors u, v ∈ U:

1) ⟨u,u⟩ ≥ 0
2) If ⟨u,u⟩ = 0 ⇒ u = 0
3) Linear in the first entry
4) ⟨u,v⟩ = conj(⟨v,u⟩)

We checked that ⟨u,u⟩^{1/2} defines a norm on U, i.e. U is a normed linear space with respect to ⟨u,u⟩^{1/2}; we showed that:

1) ⟨u,u⟩^{1/2} ≥ 0
2) ⟨u,u⟩^{1/2} = 0 ⇒ u = 0
3) ⟨αu, αu⟩^{1/2} = |α| ⟨u,u⟩^{1/2}
4) ⟨u+v, u+v⟩^{1/2} ≤ ⟨u,u⟩^{1/2} + ⟨v,v⟩^{1/2}

Question: Is every NLS an inner product space? No!

The norm that we introduced above, ‖u‖ = ⟨u,u⟩^{1/2}, satisfies the parallelogram law:

‖u+v‖^2 + ‖u−v‖^2 = 2‖u‖^2 + 2‖v‖^2

FACT: If U is a normed linear space whose norm satisfies this extra condition, then one can define an inner product on the space.

Orthogonality

Let U be an inner product space over F. A vector u ∈ U is said to be orthogonal to a vector v ∈ U if ⟨u,v⟩ = 0.

TODO: Draw cosines (law of cosines picture)

cos θ = (‖a‖² + ‖b‖² − ‖b−a‖²) / (2‖a‖∙‖b‖) = (‖a‖² + ‖b‖² − ∑_{j=1}^{3}(b_j² − 2b_j a_j + a_j²)) / (2‖a‖∙‖b‖)

Since:


‖a‖² = ∑_{i=1}^{3} a_i²

‖b−a‖² = ⟨b−a, b−a⟩ = ∑_{j=1}^{3} (b_j − a_j)²

So:

cos θ = ⟨a,b⟩ / (‖a‖∙‖b‖)

A family {u1 ,…,uk } of non-zero vectors is said to be an orthogonal family if ⟨ui ,u j ⟩=0 for i≠ j.

A family {u_1,…,u_k} of nonzero vectors is said to be an orthonormal family if ⟨u_i,u_j⟩ = 0 for i≠j and ⟨u_i,u_i⟩ = 1 for i = 1,…,k.

If V is a subspace of U, then the orthogonal complement of V (in U), written V⊥, is

V⊥ = {u∈U | ⟨u,v⟩ = 0 ∀v∈V}

Claim: V⊥ is a vector space.

To check: Let u ,w∈V ⊥

⟨u+w, v⟩ = ⟨u,v⟩ + ⟨w,v⟩ = 0 + 0 = 0
⟨αu, v⟩ = α⟨u,v⟩ = α∙0 = 0

Claim: V ∩ V⊥ = {0}
Let x ∈ V ∩ V⊥. But x∈V⊥ means it's orthogonal to every vector in V, in particular to x itself ⇒ ⟨x,x⟩ = 0 ⇒ x = 0.

Observation: If {u1 ,…,uk } is an orthogonal family, then it’s a linearly independent set of vectors.

Proof: Suppose you could find coefficients c1 ,…,ck such that:

∑_{j=1}^{k} c_j u_j = 0 ⇒ ⟨∑_{j=1}^{k} c_j u_j, u_i⟩ = ⟨0, u_i⟩ = 0

It's true for all i. So suppose i = 1:

∑_{j=1}^{k} c_j ⟨u_j,u_1⟩ = c_1⟨u_1,u_1⟩ (it's an orthogonal family!), and ⟨u_1,u_1⟩ ≠ 0 since u_1 ≠ 0, so c_1 = 0.

We can do the same for u_2 and so on, thus proving c_i = 0 for all i, which means they are linearly independent.

Let U be an inner product space over F and let {u_1,…,u_k} be a set of k vectors in U. Then G = [g_ij], with g_ij = ⟨u_j,u_i⟩, is called the Gram matrix.


Lemma: {u1 ,…,uk } will be linearly independent ⇔ G is invertible.

Suppose first G is invertible. Claim: if ∑_{j=1}^{k} c_j u_j = 0 then c_1 = … = c_k = 0.

Consider c = [c_1, …, c_k]^T. Then

(Gc)_i = ∑_{j=1}^{k} g_ij c_j = ∑_{j=1}^{k} c_j ⟨u_j,u_i⟩ = ⟨∑_{j=1}^{k} c_j u_j, u_i⟩ = ⟨0, u_i⟩ = 0

But this is true for all i!

So Gc = 0, and G is invertible ⇒ c = 0.
Thus if G is invertible, then ∑ c_j u_j = 0 ⇒ c_1 = … = c_k = 0, i.e. u_1,…,u_k are linearly independent.

Suppose now u_1,…,u_k are linearly independent.
Claim: G is invertible.

It's enough to show that N_G = {0}.

Let c ∈ N_G ⇒ Gc = 0 ⇒ ∑_{j=1}^{k} g_ij c_j = 0 for i = 1,…,k ⇒ ∑_j ⟨u_j,u_i⟩c_j = 0 ⇒

⟨∑_{j=1}^{k} c_j u_j, u_i⟩ = 0 for i = 1,…,k. Multiplying the i-th equation by the conjugate of c_i and summing over i:

⟨∑_{j=1}^{k} c_j u_j, ∑_{i=1}^{k} c_i u_i⟩ = 0

But both entries are the same vector! Denote it w, so ⟨w,w⟩ = 0 ⇒ w = 0 ⇒ ∑_{j=1}^{k} c_j u_j = 0

{u1 ,…,uk } linearly independent ⇒c1=…=ck=0

Thus Gc=0⇒ c=0⇒NG= {0 }. G is invertible.
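The lemma is easy to see in action. A sketch for the standard inner product on C⁴, where the Gram matrix is G = V^H V with the u_j as columns of V (so (G)_ij = u_i^H u_j = ⟨u_j,u_i⟩):

```python
import numpy as np

def gram(vectors):
    """Gram matrix for the standard inner product; columns of `vectors` are the u_j."""
    return vectors.conj().T @ vectors  # (G)_ij = u_i^H u_j = <u_j, u_i>

# An independent family, and a dependent one (third column = first + second).
U_indep = np.array([[1.0, 0, 1], [0, 1, 1], [0, 0, 1], [1, 1, 0]])
U_dep = np.column_stack([U_indep[:, 0], U_indep[:, 1],
                         U_indep[:, 0] + U_indep[:, 1]])

rank_indep = np.linalg.matrix_rank(gram(U_indep))
rank_dep = np.linalg.matrix_rank(gram(U_dep))
assert rank_indep == 3   # G invertible <=> family linearly independent
assert rank_dep < 3      # G singular   <=> family linearly dependent
```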

Adjoints
Let T be a linear transformation from a finite dimensional inner product space U into a finite dimensional inner product space V. Then there exists exactly one linear transformation S from V to U such that for u∈U, v∈V: ⟨Tu,v⟩_V = ⟨u,Sv⟩_U. S is called the adjoint of T, and the usual notation is T*.

Proof: At the end of the lesson if we have time.


It's easy to show that there is at most one linear transformation S from V to U such that ⟨Tu,v⟩_V = ⟨u,Sv⟩_U for all u∈U, v∈V.
Suppose we could find S_1, S_2 that met this condition:
⟨Tu,v⟩_V = ⟨u,S_1v⟩_U = ⟨u,S_2v⟩_U
⟨u, S_1v − S_2v⟩ = 0 for all u∈U, v∈V.

This is true for all u! We can choose u=S1 v−S2 v .

But then ⟨S_1v − S_2v, S_1v − S_2v⟩ = 0, so the vector S_1v − S_2v = 0 ⇒ S_1v = S_2v ⇒ S_1 = S_2.

Let T be a linear transformation from a finite dimensional inner product space U into U. Then T is said to be:
Normal if T*T = TT*
Selfadjoint if T* = T
Unitary if T*T = TT* = I

T = T* ⇒ T is normal
T*T = TT* = I ⇒ T is normal

Theorem: Let T be a linear transformation from an n dimensional inner product space U into itself and suppose T is normal. Then:

(1) There exists an orthonormal basis {u_1,…,u_n} of eigenvectors of T.
(2) If T = T* ⇒ all the eigenvalues are real.
(3) If T is unitary ⇒ all eigenvalues have magnitude 1.

1) T_λ = λI − T, (T_λ)* = λ̄I − T*

⟨T_λu, v⟩_U = ⟨u, (T_λ)*v⟩. But the left side equals:

⟨(λI−T)u, v⟩ = λ⟨u,v⟩ − ⟨Tu,v⟩ = ⟨u, λ̄v⟩ − ⟨u, T*v⟩ = ⟨u, (λ̄I − T*)v⟩. But there is only one adjoint! So (T_λ)* = λ̄I − T*.

2) T normal ⇒ T_λ is normal.

T_λ(T_λ)* = (λI−T)(λ̄I−T*) = λI(λ̄I−T*) − T(λ̄I−T*) = (λ̄I−T*)λI − (λ̄I−T*)T = (λ̄I−T*)(λI−T) = (T_λ)*T_λ

3) Tu = λu ⇒ T*u = λ̄u. Let u ∈ N_{T_λ}:

⇒ T_λu = 0 ⇒ (T_λ)*T_λu = 0 ⇒ T_λ(T_λ)*u = 0 ⇒ ⟨T_λ(T_λ)*u, u⟩ = 0 ⇒

⟨(T_λ)*u, (T_λ)*u⟩ = 0 ⇒ (T_λ)*u = 0 ⇒ (λ̄I − T*)u = 0


4) Establish an orthonormal basis of eigenvectors of T.

U is invariant under T ⇒ T has an eigenvector u_1 ≠ 0, i.e. Tu_1 = λ_1u_1, u_1 ≠ 0.

T(u_1/‖u_1‖) = λ_1(u_1/‖u_1‖), so we can assume u_1 has norm 1.

Suppose we could find {u_1,…,u_k} such that Tu_j = λ_ju_j, j = 1,…,k, and ⟨u_i,u_j⟩ = 0 if i≠j and 1 otherwise.

L_k = span{u_1,…,u_k}
L_k⊥ = the vectors in U that are orthogonal to L_k.

v ∈ L_k⊥ ⇔ ⟨v,u_j⟩ = 0, j = 1,…,k

L_k⊥ is invariant under T, i.e. v ∈ L_k⊥ ⇒ Tv ∈ L_k⊥:

⟨Tv, u_j⟩ = ⟨v, T*u_j⟩ = ⟨v, λ̄_ju_j⟩ = λ_j⟨v,u_j⟩ = 0, j = 1,…,k

Because Tu_j = λ_ju_j and T*u_j = λ̄_ju_j.

So we can find an eigenvector of T in L_k⊥; call it u_{k+1}, normalized, and set L_{k+1} = span{u_1,…,u_{k+1}}.

So we can keep enlarging L_k until it is big enough…

5) T unitary ⇒ |λ_j| = 1. If T is unitary:

Tu_j = λ_ju_j ⇒ T*Tu_j = λ_jT*u_j = λ_jλ̄_ju_j ⇒ u_j = |λ_j|²u_j ⇒ |λ_j| = 1

--- end of lesson


Recall: If T is a linear transformation from a finite dimensional inner product space U over F into a finite dimensional inner product space V, then there exists exactly one linear transformation S from V to U such that ⟨Tu,v⟩_V = ⟨u,Sv⟩_U for every u∈U, v∈V.
That S is called the adjoint of T and is denoted by T*. So in the future we'll write ⟨Tu,v⟩_V = ⟨u,T*v⟩_U.

Existence: Suppose {u_1,…,u_k} is a basis for U and {v_1,…,v_l} is a basis for V.
We can reduce existence to showing that there exists S such that

⟨Tu_i, v_j⟩_V = ⟨u_i, Sv_j⟩_U,  i = 1,…,k, j = 1,…,l    (1)

Tu_i ∈ V ⇒ Tu_i = ∑_{t=1}^{l} a_ti v_t for i = 1,…,k ⇒

⟨Tu_i, v_j⟩ = ⟨∑_{t=1}^{l} a_ti v_t, v_j⟩ = ∑_{t=1}^{l} a_ti ⟨v_t,v_j⟩_V = ∑_{t=1}^{l} a_ti (G_V)_jt = (G_V A)_ji

Sv_j = ∑_{r=1}^{k} b_rj u_r

We want to choose B = [b_ij] so that equality (1) holds.

⟨u_i, Sv_j⟩_U = ⟨u_i, ∑_{r=1}^{k} b_rj u_r⟩_U = ∑_{r=1}^{k} b̄_rj ⟨u_i,u_r⟩_U = ∑_{r=1}^{k} (B^H)_jr (G_U)_ri = (B^H G_U)_ji

So B^H G_U = G_V A.

T: U → U
T is said to be normal if T*T = TT*
T is said to be self-adjoint if T = T*
T is said to be unitary if T*T = TT* = I

Let A ∈ C^{p×q}, U = C^q with ⟨,⟩_U, and V = C^p with ⟨,⟩_V. Then there exists exactly one matrix A* ∈ C^{q×p} such that ⟨Ax,y⟩_V = ⟨x, A*y⟩_U.

Example: ⟨u_1,u_2⟩_U = ⟨Δ_1u_1, Δ_1u_2⟩_st = (Δ_1u_2)^H Δ_1u_1 = u_2^H Δ_1^H Δ_1 u_1, with Δ_1 ∈ C^{q×q} invertible, where

⟨a,b⟩_st = ∑_{j=1}^{k} a_j b̄_j = b^H a

In C^q, with B ∈ C^{q×q} invertible:

⟨x,y⟩_B = ⟨Bx,By⟩_st = (By)^H Bx = y^H B^H Bx

1) ⟨x,x⟩_B ≥ 0? ⟨x,x⟩_B = x^H B^H Bx = (Bx)^H Bx = ⟨Bx,Bx⟩_st = ∑|(Bx)_j|² ≥ 0


2) ⟨x,x⟩_B = 0 ⇒ x = 0 …
3) …
4) …

Similarly, let Δ_2 ∈ C^{p×p} be invertible.

⟨v_1,v_2⟩_V = ⟨Δ_2v_1, Δ_2v_2⟩_st = (Δ_2v_2)^H Δ_2v_1 = v_2^H (Δ_2^H Δ_2) v_1

⟨Ax,y⟩_V = y^H Δ_2^H Δ_2 Ax
⟨x,A*y⟩_U = (A*y)^H Δ_1^H Δ_1 x = y^H (A*)^H Δ_1^H Δ_1 x

Since this holds for all x ∈ C^q and y ∈ C^p:

Δ_2^H Δ_2 A = (A*)^H Δ_1^H Δ_1 ⇒ (A*)^H = Δ_2^H Δ_2 A (Δ_1^H Δ_1)^{-1}

A* = (Δ_1^H Δ_1)^{-1} A^H Δ_2^H Δ_2

If Δ_1 = I_q, Δ_2 = I_p, i.e. if we use the standard inner products, then A* = A^H.
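The adjoint formula can be verified numerically. A sketch, with D1, D2 playing the roles of Δ_1, Δ_2 (chosen here as arbitrary invertible triangular matrices for illustration):

```python
import numpy as np

# With <x, y>_U = y^H (D1^H D1) x on C^q and <x, y>_V = y^H (D2^H D2) x on C^p,
# the adjoint of A in C^{p x q} is A* = (D1^H D1)^{-1} A^H (D2^H D2).
rng = np.random.default_rng(2)
p, q = 3, 2
A = rng.normal(size=(p, q)) + 1j * rng.normal(size=(p, q))
D1 = np.array([[2.0, 1.0], [0.0, 1.0]])                       # invertible
D2 = np.array([[1.0, 1.0, 0.0], [0.0, 2.0, 1.0], [0.0, 0.0, 1.0]])

G1 = D1.conj().T @ D1   # weight of <,>_U
G2 = D2.conj().T @ D2   # weight of <,>_V
A_star = np.linalg.inv(G1) @ A.conj().T @ G2

# Verify <Ax, y>_V == <x, A* y>_U for random x, y.
x = rng.normal(size=q) + 1j * rng.normal(size=q)
y = rng.normal(size=p) + 1j * rng.normal(size=p)
lhs = y.conj() @ G2 @ (A @ x)
rhs = (A_star @ y).conj() @ G1 @ x
assert np.isclose(lhs, rhs)
```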

Theorem: Let A ∈ C^{p×p}, and give C^p an inner product ⟨,⟩_U. Then:
(1) A*A = AA* ⇒ A is diagonalizable and there exists an orthonormal basis of eigenvectors, i.e. AU = UD with D diagonal and the columns of U orthonormal.
(2) A = A* ⇒ D ∈ R^{p×p}
(3) A*A = AA* = I_p ⇒ |d_ii| = 1

(1) If A*A = AA*, then Au = λu ⇒ A*u = λ̄u.

Au = λu ⇒ (A−λI)u = 0 ⇒ (A*−λ̄I)(A−λI)u = 0. Since AA* = A*A:

⇒ (A−λI)(A*−λ̄I)u = 0 ⇒

⟨(A−λI)(A*−λ̄I)u, u⟩ = ⟨(A*−λ̄I)u, (A*−λ̄I)u⟩ = 0 ⇒ (A*−λ̄I)u = 0 ⇒ u is an eigenvector of A* with eigenvalue λ̄.

Let λ be an eigenvalue of A. It's enough to show N_{(A−λI)²} = N_{A−λI}.

x ∈ N_{(A−λI)²} ⇒ (A−λI)²x = 0 ⇒ (A−λI)((A−λI)x) = 0. Denote y = (A−λI)x. Then Ay = λy.
By (1), A*y = λ̄y ⇒ (A*−λ̄I)(A−λI)x = 0 ⇒ ⟨(A*−λ̄I)(A−λI)x, x⟩ = 0 ⇒
⟨(A−λI)x, (A*−λ̄I)*x⟩ = 0 ⇒ ⟨(A−λI)x, (A−λI)x⟩ = 0 ⇒ x ∈ N_{A−λI}

Since (A*−λ̄I)* = (A*)* − (λ̄I)* = A − λI.
Have shown: N_{(A−λI)²} ⊆ N_{A−λI}


But the reverse inclusion is always true: N_{A−λI} ⊆ N_{(A−λI)²}.

So we must have equality: N_{A−λI} = N_{(A−λI)²}.

(3) Au_i = λ_iu_i, Au_j = λ_ju_j, λ_i ≠ λ_j ⇒ ⟨u_i,u_j⟩ = 0:

λ_i⟨u_i,u_j⟩ = ⟨λ_iu_i, u_j⟩ = ⟨Au_i, u_j⟩ = ⟨u_i, A*u_j⟩ = ⟨u_i, λ̄_ju_j⟩ = λ_j⟨u_i,u_j⟩ ⇒ (λ_i−λ_j)⟨u_i,u_j⟩ = 0. So if λ_i ≠ λ_j then it must be that ⟨u_i,u_j⟩ = 0, and thus they are orthogonal.

(4) A*A = AA* ⇒ we can choose an orthonormal basis of eigenvectors of A.

For example, det(λI₃−A) = (λ−λ_1)²(λ−λ_2) ⇒ dim N_{A−λ_1I₃} = 2, dim N_{A−λ_2I₃} = 1.

⟨u_1,u_3⟩ = 0, ⟨u_2,u_3⟩ = 0, but ⟨u_1,u_2⟩ = ? We will later use the Gram–Schmidt method to construct an orthonormal basis within the eigenspace of a repeated eigenvalue.

Given such a basis: A[u_1 … u_n] = [λ_1u_1 … λ_nu_n] = [u_1 … u_n] diag(λ_1,…,λ_n), i.e. AU = UD.

(5) If A = A* ⇒ λ_i ∈ R:

λ_i⟨u_i,u_i⟩ = ⟨Au_i,u_i⟩ = ⟨u_i,A*u_i⟩ = ⟨u_i,Au_i⟩ = ⟨u_i,λ_iu_i⟩ = λ̄_i⟨u_i,u_i⟩
(λ_i−λ̄_i)⟨u_i,u_i⟩ = 0 ⇒ λ_i = λ̄_i

(6) A*A = AA* = I ⇒ |λ_i| = 1:

⟨u_i,u_i⟩ = ⟨A*Au_i,u_i⟩ = ⟨Au_i,(A*)*u_i⟩ = ⟨Au_i,Au_i⟩ = ⟨λ_iu_i,λ_iu_i⟩ = |λ_i|²⟨u_i,u_i⟩ ⇒
(1−|λ_i|²)⟨u_i,u_i⟩ = 0 ⇒ |λ_i|² = 1

If ⟨x,y⟩_U = ⟨x,y⟩_st = y^H x: A* = (…)A^H(…). If Δ_1 = I_q, Δ_2 = I_p then A* = A^H.

If A ∈ C^{p×p} and A = A^H, then:
1) A is diagonalizable
2) Its eigenvalues are real


3) There exists an orthonormal basis (w.r.t. the standard inner product) of eigenvectors u_1,…,u_n of A: AU = UD, where D is diagonal with real entries.

For n = 3: U = [u_1 u_2 u_3], and

(U^H U)_ij = u_i^H u_j = ⟨u_j,u_i⟩_st = 1 if i = j, 0 if i ≠ j
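These three facts for A = A^H are easy to confirm numerically; a small sketch with the standard inner product:

```python
import numpy as np

# For a Hermitian A: real eigenvalues and an orthonormal eigenvector basis,
# i.e. AU = UD with U^H U = I.
A = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])   # Hermitian: A == A^H
assert np.allclose(A, A.conj().T)

eigvals, U = np.linalg.eigh(A)                 # eigh is for Hermitian matrices
D = np.diag(eigvals)

assert np.allclose(A @ U, U @ D)               # AU = UD
assert np.allclose(U.conj().T @ U, np.eye(2))  # columns of U are orthonormal
assert np.allclose(eigvals.imag, 0)            # eigenvalues are real
```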

Projections And Direct Sums
A linear transformation P from a vector space U into U is called a projection if P² = P.
If U is an inner product space, P is called an orthogonal projection if P² = P and P = P*, or equivalently

P² = P and ⟨Px,y⟩_U = ⟨x,Py⟩_U for all x,y∈U

Example: U = C²,

P = [1 a; 0 0]

Then P² = P, but ‖P‖ can be large (it grows with |a|). Projections with ‖P‖ = 1 will be identified with the orthogonal projections later…

Let U be a finite dimensional vector space over F and let P be a projection. Then U = N_P ∔ R_P, where

N_P = {u | Pu = 0}, R_P = {Pu | u∈U}

Let u∈U. Then u = Pu (∈ R_P) + (I−P)u.

Claim: (I−P)u ∈ N_P.

P(I−P)u = (P−P²)u = 0, since P = P².

It remains to show that N_P ∩ R_P = {0}. Let x ∈ N_P ∩ R_P.
x ∈ R_P ⇒ x = Py; x ∈ N_P ⇒ Px = 0. But then:


0=Px=P2 y=Py=x

Conversely, suppose U = V ∔ W is a direct sum decomposition of the vector space U in terms of two subspaces V and W. Then each u∈U has a unique decomposition u = v + w with v∈V and w∈W.

Claim: There's only one such decomposition.
Proof: If v_1 + w_1 = v_2 + w_2 ⇒ (v_1−v_2) = (w_2−w_1). But the left hand side is in V and the right hand side is in W, so both lie in the intersection, which means both sides are zero, and therefore v_1 = v_2 and w_1 = w_2.

Start with u. There is exactly one v∈V such that u − v ∈ W. Call this v = P_V u.

Claim: P_V² = P_V.
Proof: P_V u = v ∈ V, and v = v + 0 is the decomposition of v, so P_V v = v.

Example: C²

V = span{[1;0]}, W = span{[1;2]}

[a;b] = α[1;0] + β[1;2] ⇒ b = 2β, a = α + β

[a;b] = (a − b/2)[1;0] + (b/2)[1;2]

P_V[a;b] = (a − b/2)[1;0]

Now take V = span{[1;0]}, W = span{[1;3]}:

P_V[a;b] = (a − b/3)[1;0]
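The dependence on W is easy to see numerically. A sketch: build the projection onto span{v} along span{w} as B diag(1,0) B^{-1} with B = [v w], and compare the two choices of W from the example:

```python
import numpy as np

def oblique_projector(v, w):
    """Projection onto span{v} along span{w} in R^2: B diag(1,0) B^{-1}, B = [v w]."""
    B = np.column_stack([v, w])
    return B @ np.diag([1.0, 0.0]) @ np.linalg.inv(B)

v = np.array([1.0, 0.0])
u = np.array([5.0, 4.0])                       # the vector (a, b) = (5, 4)

P1 = oblique_projector(v, np.array([1.0, 2.0]))   # W = span{(1, 2)}
P2 = oblique_projector(v, np.array([1.0, 3.0]))   # W = span{(1, 3)}

assert np.allclose(P1 @ u, [5 - 4 / 2, 0])     # (a - b/2)(1, 0)
assert np.allclose(P2 @ u, [5 - 4 / 3, 0])     # (a - b/3)(1, 0)
assert np.allclose(P1 @ P1, P1)                # P^2 = P
```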

--- end of lesson


U = V ∔ W, V ∩ W = {0}. For each u∈U there is exactly one decomposition u = v + w with v∈V, w∈W.

Let v_1,…,v_k be a basis for V and w_1,…,w_l a basis for W; then {v_1,…,v_k,w_1,…,w_l} is a basis for U ⇒

u = ∑_{i=1}^{k} a_i v_i + ∑_{j=1}^{l} b_j w_j,  a_i, b_j ∈ F

P_V u = ∑_{i=1}^{k} a_i v_i

This recipe depends upon W as well as V .

(1) Start with a direct sum decomposition; we can define P_V and observe that

P_V²u = P_V(P_V(v+w)) = P_V v = v = P_V u

(2) Or start with P, a linear transformation from U into U such that P² = P; then

U = R_P ∔ N_P

If U is an inner product space and P is a linear transformation from U into U such that:
1) P² = P
2) P* = P

then U = R_P ∔ N_P, and moreover N_P is orthogonal to R_P.

Let v ∈ R_P, w ∈ N_P. Then v ∈ R_P ⇒ v = Py, so

⟨v,w⟩_U = ⟨Py,w⟩_U = ⟨y,P*w⟩_U (always true) = ⟨y,Pw⟩_U (since P = P*) = ⟨y,0⟩_U = 0

So N_P is orthogonal to R_P.

If V is a subspace of U, then V⊥, the orthogonal complement of V in U, is {u∈U | ⟨u,v⟩_U = 0 ∀v∈V}.

Claim: R_P⊥ ⊆ N_P

Proof: Suppose that w ∈ R_P⊥ ⇒ ⟨Py,w⟩ = 0 for all y∈U.

This is the same as ⟨y,P*w⟩ = ⟨y,Pw⟩ = 0 for all y∈U. In particular, take y = Pw:

⟨Pw,Pw⟩ = 0 ⇒ Pw = 0, i.e. w ∈ N_P.


Orthogonal Decomposition
U = V ⊕ W means that u = v + w, v∈V, w∈W, and moreover ⟨v,w⟩_U = 0 ∀v∈V, w∈W.

We really have 3 symbols!
V + W = {v+w | v∈V, w∈W}
V ∔ W when V ∩ W = {0}
V ⊕ W when V ⊥ W (W = V⊥) (only defined in an inner product space)

Let U be an inner product space over F and let V be a subspace of U with basis {v_1,…,v_k}. Then U = V ⊕ V⊥.
Let u = v + w, v∈V, w∈V⊥, ⟨v,w⟩ = 0.
Question: Given u, find v.

u = ∑_{j=1}^{k} c_j v_j + (u − ∑_{j=1}^{k} c_j v_j)

Try to choose the c_j such that ⟨u − ∑_{j=1}^{k} c_j v_j, v_i⟩_U = 0, i = 1,…,k, i.e.

⟨u,v_i⟩_U − ∑_{j=1}^{k} c_j ⟨v_j,v_i⟩ (= g_ij) = 0

We would like to achieve the equality:

∑_{j=1}^{k} g_ij c_j = ⟨u,v_i⟩_U,  i = 1,…,k

Let's define the vectors b = [b_1, …, b_k]^T with b_i = ⟨u,v_i⟩_U, and c = [c_1, …, c_k]^T.
So we got Gc = b. Since v_1,…,v_k are linearly independent, G is invertible. So c = G^{-1}b.

P_V u = ∑_{j=1}^{k} c_j v_j where c = G^{-1}b

Suppose U = C^n with the standard inner product: ⟨u,w⟩ = ∑ u_j w̄_j = w^H u.


b = [⟨u,v_1⟩, …, ⟨u,v_k⟩]^T = [v_1^H u, …, v_k^H u]^T. With V = [v_1 … v_k], the rows of V^H are v_1^H,…,v_k^H, so b = V^H u.

g_ij = ⟨v_j,v_i⟩ = v_i^H v_j = e_i^H V^H V e_j

So G = V^H V.

P_V u = ∑_{j=1}^{k} c_j v_j = Vc

Because c = G^{-1}b = (V^H V)^{-1}V^H u, we get P_V u = V(V^H V)^{-1}V^H u, with V = [v_1 … v_k].
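The formula P_V = V(V^H V)^{-1}V^H is directly computable; a small sketch for a plane in R³ with the standard inner product:

```python
import numpy as np

# Orthogonal projection onto the column space of V.
V = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])   # basis of a plane in R^3
P = V @ np.linalg.inv(V.conj().T @ V) @ V.conj().T

u = np.array([1.0, 2.0, 3.0])
Pu = P @ u

assert np.allclose(P @ P, P)                  # P^2 = P (a projection)
assert np.allclose(P, P.conj().T)             # P = P* (an orthogonal projection)
assert np.allclose(V.conj().T @ (u - Pu), 0)  # residual u - Pu is orthogonal to V
```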

TODO: Draw area of triangle…

h∙‖a‖ = area.

V = span{a}, P_V b = a(a^T a)^{-1}a^T b

b = a(a^T a)^{-1}a^T b + (b − a(a^T a)^{-1}a^T b)

So area = ‖b − a(a^T a)^{-1}a^T b‖∙‖a‖, and

area² = ⟨b − P_V b, b − P_V b⟩∙‖a‖² = (⟨b,b⟩ − ⟨P_V b, b⟩)∙‖a‖² = (b^T b − b^T a(a^T a)^{-1}a^T b)(a^T a) = (b^T b)(a^T a) − (b^T a)(a^T b)

Since ⟨b − P_V b, P_V b⟩ = ⟨P_V b − P_V²b, b⟩ = ⟨0,b⟩ = 0.

(area)² = det [a^T a a^T b; b^T a b^T b] = det(C^T C), where C = [a b].

If a,b ∈ R², then C is square, so det(C^T C) = det(C^T)∙det(C) = (det C)².

Correction
When we said triangle area earlier, we meant the area of a parallelogram!
For a,b ∈ R², the area is |det[a b]|.
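A quick numeric check (sketch) of the identity area² = (b^T b)(a^T a) − (b^T a)(a^T b) = (det[a b])², cross-checked against height-times-base via the projection onto span{a}:

```python
import numpy as np

a = np.array([3.0, 1.0])
b = np.array([1.0, 2.0])

# Gram-determinant form of the squared parallelogram area.
area_sq_gram = (b @ b) * (a @ a) - (b @ a) * (a @ b)
det_C = np.linalg.det(np.column_stack([a, b]))
assert np.isclose(area_sq_gram, det_C ** 2)

# Height-times-base using the projection P_V b with V = span{a}.
Pa_b = a * (a @ b) / (a @ a)
area = np.linalg.norm(b - Pa_b) * np.linalg.norm(a)
assert np.isclose(area ** 2, area_sq_gram)
```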


Back to the lesson
U an inner product space, V a subspace of U with basis v_1,…,v_k. Let P_V u denote the orthogonal projection of u onto V. We have a recipe for it. In addition to what we've already done:

Fact: min{‖u−v‖ | v∈V} is achieved by choosing v = P_V u.

‖u−v‖² = ‖u − P_V u + (P_V u − v)‖², where P_V u − v ∈ V; denote it w.

Claim: it equals ‖u − P_V u‖² + ‖P_V u − v‖².
Proof: ‖u − P_V u + w‖² = ⟨u − P_V u + w, u − P_V u + w⟩ =

⟨u−P_V u, u−P_V u⟩ + ⟨u−P_V u, w⟩ + ⟨w, u−P_V u⟩ + ⟨w,w⟩

Harry claims the second and third pieces are zero. Why? If w∈V then

⟨u−P_V u, w⟩ = ⟨u−P_V u, P_V w⟩ = ⟨P_V*(u−P_V u), w⟩ = ⟨P_V u − P_V²u, w⟩ = ⟨0,w⟩ = 0

So we are left with ‖u−P_V u‖² + ‖P_V u−v‖². Since we want the expression as small as possible, we can only play with the second term (the first is determined!), so we choose v such that ‖P_V u − v‖ = 0, i.e. v = P_V u.

So min_{v∈V} ‖u−v‖² is in fact

‖u−P_V u‖² = ⟨u−P_V u, u−P_V u⟩ = ⟨u−P_V u, u⟩ − ⟨u−P_V u, P_V u⟩

But ⟨u−P_V u, P_V u⟩ = 0! So it equals

⟨u−P_V u, u⟩ = ⟨u,u⟩ − ⟨P_V u, u⟩ = ⟨u,u⟩ − ⟨P_V²u, u⟩

But P_V* = P_V! So it equals

⟨u,u⟩ − ⟨P_V u, P_V*u⟩ = ‖u‖² − ‖P_V u‖²

P_V u = ∑_{j=1}^{k} c_j v_j, c = G^{-1}b, b = [⟨u,v_1⟩, …, ⟨u,v_k⟩]^T

Another special case: when v_1,…,v_k are orthonormal with respect to the given inner product,

g_ij = ⟨v_j,v_i⟩ = 1 if i = j and 0 if i ≠ j


In this case G = I ⇒ c = b ⇒

P_V u = ∑_{j=1}^{k} ⟨u,v_j⟩_U v_j

Suppose U is an inner product space, and suppose {u1 ,…,ul } is an orthonormal basis for U .

Given u∈U ⇒ u = ∑_{j=1}^{l} c_j u_j

⟨u,u_i⟩ = ∑_{j=1}^{l} c_j ⟨u_j,u_i⟩ = c_i

So ⟨u,u⟩ = ∑_{j=1}^{l} |c_j|²
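Both facts are immediate to check in C^n with the standard inner product; a sketch with an orthonormal basis given by the columns of Q (chosen here for illustration):

```python
import numpy as np

# Coefficients in an orthonormal basis are c_j = <u, u_j>, and
# <u, u> = sum_j |c_j|^2 (Parseval).
Q = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # orthonormal columns
u = np.array([2.0 + 1j, -1.0])

c = Q.conj().T @ u                 # c_j = u_j^H u = <u, u_j>
assert np.allclose(Q @ c, u)       # u = sum_j c_j u_j
assert np.isclose(np.sum(np.abs(c) ** 2), np.vdot(u, u).real)  # Parseval
```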

Gram-Schmidt
Given a linearly independent set of vectors u_1,…,u_k in U.
Claim: There exists an orthonormal set v_1,…,v_k such that span{v_1,…,v_j} = span{u_1,…,u_j} for j = 1,…,k.
Proof:

Define v_1 = u_1/‖u_1‖ ⇒ ⟨v_1,v_1⟩ = ⟨u_1,u_1⟩/‖u_1‖² = 1

Introduce the notation V_j = span{v_1,…,v_j}.

Define w_2 = u_2 − P_{V_1}u_2. We do this because w_2 is orthogonal to v_1. Also claim w_2 ≠ 0 (otherwise u_2 ∈ V_1, contradicting linear independence).

And define v_2 = w_2/‖w_2‖.

Keep on doing that: define w_3 = u_3 − P_{V_2}u_3, which is orthogonal to V_2, and then v_3 = w_3/‖w_3‖.

Now w_4 = u_4 − P_{V_3}u_4 = u_4 − {⟨u_4,v_1⟩v_1 + ⟨u_4,v_2⟩v_2 + ⟨u_4,v_3⟩v_3}, and

v_4 = w_4/‖w_4‖
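The procedure above translates almost line by line into code; a minimal sketch for the standard inner product on C^n / R^n:

```python
import numpy as np

def gram_schmidt(us):
    """Orthonormalize a linearly independent list of vectors step by step."""
    vs = []
    for u in us:
        # w_j = u_j - P_{V_{j-1}} u_j, with P_V u = sum_j <u, v_j> v_j
        w = u - sum(np.vdot(v, u) * v for v in vs)
        norm = np.linalg.norm(w)
        assert norm > 1e-12, "input vectors must be linearly independent"
        vs.append(w / norm)        # v_j = w_j / ||w_j||
    return vs

us = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
vs = gram_schmidt(us)

V = np.column_stack(vs)
assert np.allclose(V.conj().T @ V, np.eye(3))   # orthonormal family
```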
