Numerical Linear Algebra: Introduction
Ng Tin Yau (PhD)
Department of Mechanical Engineering, The Hong Kong Polytechnic University
Jan 2015
By Ng Tin Yau (PhD) 1/65
-
Table of Contents
1 Matrices & Determinants
2 Linear Systems: Applications to Differential Equations; Direct Methods; Iterative Methods
3 Eigenproblems: Diagonalization Problem; A Transformation Method: Jacobi
4 Exercises
By Ng Tin Yau (PhD) 2/65
-
Matrix
Denote K as either the set of real numbers R or complex numbers C. An m × n matrix A is an array of numbers that belong to K, that is,

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}, \quad a_{ij} \in K, \ 1 \le i \le m, \ 1 \le j \le n

In the case where m = n, we call it a square matrix. A matrix with all entries equal to zero is called the zero matrix:

O = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}

Two matrices A and B of the same dimension are said to be equal if and only if a_{ij} = b_{ij} for all i, j.
Matrices & Determinants By Ng Tin Yau (PhD) 3/65
-
Some important types of matrix
The transpose of an m × n matrix A, denoted by A^T, is defined to be the n × m matrix that results from interchanging the rows and columns of A. A matrix A is said to be symmetric if A = A^T, that is, a_{ij} = a_{ji}. A square matrix A is said to be invertible if there exists another square matrix B such that AB = BA = I where

I = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}

is the identity matrix. If A is invertible, then we write B = A^{-1} and call it the inverse of A. A square matrix Q is said to be an orthogonal matrix if Q^{-1} = Q^T. Finally, a matrix of dimension n × 1 is called a column matrix, or sometimes an n-dimensional vector as in geometry.
Matrices & Determinants By Ng Tin Yau (PhD) 4/65
-
Matrix Addition
Let A and B be two matrices of the same dimension. Then the addition of A and B is defined as

(A + B)_{ij} = a_{ij} + b_{ij}   (1)

Suppose that k ∈ K, then the scalar multiplication of k and A is defined as

(kA)_{ij} = k a_{ij}   (2)

In the case where k = -1, we write (-1)A = -A.

Example
Compute 2A - B with

A = \begin{bmatrix} 1 & 0 \\ 2 & 3 \end{bmatrix} and B = \begin{bmatrix} 0 & 4 \\ 1 & 1 \end{bmatrix}

Solution:

2A - B = 2\begin{bmatrix} 1 & 0 \\ 2 & 3 \end{bmatrix} - \begin{bmatrix} 0 & 4 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 2 & -4 \\ 3 & 5 \end{bmatrix}

Matrices & Determinants By Ng Tin Yau (PhD) 5/65
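The entry-wise operations of equations (1) and (2) are easy to check with a short sketch in plain Python (the function names are ours, not from the slides):

```python
def scalar_mul(k, A):
    # (kA)_ij = k * a_ij, equation (2)
    return [[k * a for a in row] for row in A]

def mat_add(A, B):
    # (A + B)_ij = a_ij + b_ij, equation (1); A and B have the same dimension
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 0], [2, 3]]
B = [[0, 4], [1, 1]]
# 2A - B computed as 2A + (-1)B, using (-1)A = -A
print(mat_add(scalar_mul(2, A), scalar_mul(-1, B)))  # [[2, -4], [3, 5]]
```

The printed result matches the worked example above.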
-
Matrix Multiplication
Let A and B be matrices of dimensions n × p and p × m, respectively. Then the matrix multiplication of A and B is defined as

(AB)_{ij} = \sum_{k=1}^{p} a_{ik} b_{kj}   (3)

Example
Given

A = \begin{bmatrix} 1 & 0 \\ 2 & 3 \end{bmatrix} and B = \begin{bmatrix} 4 \\ 2 \end{bmatrix}

Compute AB.

Solution:

AB = \begin{bmatrix} 4 \\ 14 \end{bmatrix}

Note that BA is undefined. Therefore, in general, AB ≠ BA.

Matrices & Determinants By Ng Tin Yau (PhD) 6/65
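Equation (3) translates directly into a triple loop; a minimal Python sketch (function name ours):

```python
def mat_mul(A, B):
    # (AB)_ij = sum over k of A_ik * B_kj; A is n x p, B is p x m
    n, p, m = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(m)]
            for i in range(n)]

A = [[1, 0], [2, 3]]
B = [[4], [2]]
print(mat_mul(A, B))  # [[4], [14]], as in the slide; mat_mul(B, A) is undefined
```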
-
Determinant: Cofactor Expansion Approach
Given an n × n real matrix A,

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}

For n ≥ 2, we define A_{ij} to be the (n-1) × (n-1) matrix obtained from A by deleting row i and column j.

If n = 1, so that A = [a_{11}], we define det(A) = a_{11}. For n ≥ 2, we define the determinant of A recursively as

det(A) = \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \det(A_{1j})   (4)

This formula is called cofactor expansion along the first row of A. The scalar (-1)^{i+j} det(A_{ij}) is called the cofactor of the entry of A in row i, column j.
Matrices & Determinants By Ng Tin Yau (PhD) 7/65
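Equation (4) maps directly onto a short recursive sketch; with 0-based indices, (-1)^{1+j} becomes (-1)^j (function name ours):

```python
def det(A):
    # Determinant by cofactor expansion along the first row, equation (4)
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # A_1j: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))  # -2
```

Recursion is exponentially expensive for large n, which is one reason elimination-based methods are preferred in practice.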
-
A 3 × 3 Example

For a 2 × 2 matrix, we have

det\left(\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\right) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}   (5)

Given a 3 × 3 matrix A,

A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}

Then

det A = \sum_{j=1}^{3} (-1)^{1+j} a_{1j} \det(A_{1j})
      = a_{11} \det(A_{11}) - a_{12} \det(A_{12}) + a_{13} \det(A_{13})
      = a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}
Matrices & Determinants By Ng Tin Yau (PhD) 8/65
-
Properties of Determinants
1. Suppose that A is an n × n matrix and k is any scalar, then

det(kA) = k^n det(A)   (6)

2. Suppose that A and B are n × n matrices, then

det(AB) = det(A) det(B)   (7)

3. If A is invertible, then

det(A^{-1}) = 1 / det(A)   (8)

4. If A is a square matrix, then

det(A) = det(A^T)   (9)
Matrices & Determinants By Ng Tin Yau (PhD) 9/65
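Properties 1 and 2 are easy to sanity-check numerically for a 2 × 2 case (the matrices below are arbitrary choices of ours):

```python
def det2(M):
    # ad - bc for a 2 x 2 matrix, equation (5)
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def mul2(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 5]]
B = [[2, 0], [1, 4]]
k, n = 3, 2

kA = [[k * a for a in row] for row in A]
print(det2(kA) == k ** n * det2(A))       # True: det(kA) = k^n det(A)
print(det2(mul2(A, B)) == det2(A) * det2(B))  # True: det(AB) = det(A) det(B)
```

Integer entries keep the comparisons exact; with floats one would compare up to rounding.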
-
Inverse of a 2 2 Matrix
Given a matrix

A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}

Then

det(A) = ad - bc

and

A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}   (10)
Matrices & Determinants By Ng Tin Yau (PhD) 10/65
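A minimal sketch of formula (10), with an arbitrary invertible example of ours to confirm that A A^{-1} gives the identity:

```python
def inv2(A):
    # Equation (10); assumes det(A) = ad - bc is nonzero
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[4.0, 7.0], [2.0, 6.0]]  # det = 10, so A is invertible
Ainv = inv2(A)
# A * Ainv should be the identity, up to floating-point rounding
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)
```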
-
Properties of Matrix Operations
1. If the sizes of the matrices are such that the stated operations can be performed, then

(A^T)^T = A   (11)

and

(AB)^T = B^T A^T   (12)

2. Suppose that A is invertible, then

(A^{-1})^{-1} = A   (13)

3. If A and B are invertible matrices, then AB is invertible and

(AB)^{-1} = B^{-1} A^{-1}   (14)

4. If A is an invertible matrix, then A^T is also invertible and

(A^{-1})^T = (A^T)^{-1}   (15)
Matrices & Determinants By Ng Tin Yau (PhD) 11/65
-
System of Linear Equations
Many practical applications of engineering and science lead to a system of linear algebraic equations. A set of simultaneous linear algebraic equations can be expressed in matrix form:

\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{Bmatrix} = \begin{Bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{Bmatrix}   (16)

In symbolic notation, we write Ax = b where A is called the coefficient matrix of the system. Also, the augmented matrix of A is the matrix

[A | b] = \begin{bmatrix} a_{11} & \cdots & a_{1n} & b_1 \\ a_{21} & \cdots & a_{2n} & b_2 \\ \vdots & & \vdots & \vdots \\ a_{n1} & \cdots & a_{nn} & b_n \end{bmatrix}   (17)

Linear Systems By Ng Tin Yau (PhD) 12/65
-
Conditions for Invertibility of a Square Matrix
Given a linear system Ax = b of n equations.
Theorem
Let A be a square matrix with real entries. Then the following statements are equivalent:

1 A is invertible.
2 det(A) ≠ 0.
3 Given any vector b ∈ R^n, there is exactly one vector x such that Ax = b.
4 Ax = 0 has only the trivial solution, namely, x = 0.
5 The rank (the number of linearly independent column vectors of A) of A is equal to n.
Linear Systems By Ng Tin Yau (PhD) 13/65
-
Common Methods
Several methods can be used for solving systems of linear equations.These methods can be divided into two types: direct and iterative.
Common direct methods include elimination methods (Gauss, LU-decomposition) and Cholesky's method (for symmetric, positive definite matrices). Iterative methods include the Jacobi method, the Gauss-Seidel method and relaxation methods, to name a few.
Linear Systems By Ng Tin Yau (PhD) 14/65
-
BVP1 - ODE
We consider the discretization of the boundary value problem for the ordinary differential equation

-u''(x) = f(x, u(x)),  x ∈ (0, 1)   (18)

with boundary conditions u(0) = u(1) = 0. Here, f : (0, 1) × R → R is a given continuous function, and we are looking for a solution u ∈ C^2(0, 1). Boundary value problems of this type occur, for example, in elastic bar problems and heat conduction problems.
Linear Systems By Ng Tin Yau (PhD) 15/65
-
BVP1 - discretization
For the approximate solution we choose an equidistant subdivision of the interval [0, 1] by setting

x_j = jh,  j = 0, 1, ..., n+1   (19)

where the step size is given by h = 1/(n+1) with n ∈ N. At the internal grid points x_j with j = 1, ..., n, we replace u''(x_j) by the difference quotient

u''(x_j) ≈ \frac{1}{h^2}[u(x_{j+1}) - 2u(x_j) + u(x_{j-1})]   (20)

By using the notation u(x_j) ≈ u_j, we arrive at the system of equations

-\frac{1}{h^2}[u_{j-1} - 2u_j + u_{j+1}] = f(x_j, u_j)   (21)

for j = 1, ..., n.
Linear Systems By Ng Tin Yau (PhD) 16/65
-
BVP1 - 3 3 case
In the case where n = 3, and noticing that u_0 = u_4 = 0, we arrive at

\frac{1}{h^2} \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix} \begin{Bmatrix} u_1 \\ u_2 \\ u_3 \end{Bmatrix} = \begin{Bmatrix} f(x_1, u_1) \\ f(x_2, u_2) \\ f(x_3, u_3) \end{Bmatrix}   (22)

In symbolic form, we have Au = f(x, u) where

A = \frac{1}{h^2} \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix} and f = \begin{Bmatrix} f(x_1, u_1) \\ f(x_2, u_2) \\ f(x_3, u_3) \end{Bmatrix}   (23)

In this type of problem, the matrix A is tridiagonal. Moreover, if f depends linearly on the second variable u, then the tridiagonal system of equations is also linear.
Linear Systems By Ng Tin Yau (PhD) 17/65
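As a sketch, the matrix in (23) can be assembled for any n, not just n = 3 (the function name is ours):

```python
def bvp1_matrix(n):
    # Tridiagonal matrix of equation (23): (1/h^2) * tridiag(-1, 2, -1)
    h = 1.0 / (n + 1)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        A[j][j] = 2.0 / h ** 2
        if j > 0:
            A[j][j - 1] = -1.0 / h ** 2
        if j < n - 1:
            A[j][j + 1] = -1.0 / h ** 2
    return A

# n = 3 gives h = 1/4, so 1/h^2 = 16 and A = 16 * [[2,-1,0],[-1,2,-1],[0,-1,2]]
print(bvp1_matrix(3))
```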
-
BVP2 - PDE
Let x = (x_1, x_2) and Ω = (0, 1) × (0, 1), and consider the following elliptic partial differential equation:

-\left( \frac{\partial^2 u}{\partial x_1^2} + \frac{\partial^2 u}{\partial x_2^2} \right) = f(x, u(x)),  x ∈ Ω   (24)

with Dirichlet boundary condition u(x) = 0 for all x ∈ ∂Ω. Here, f : Ω × R → R is a given continuous function, and we are looking for a solution u ∈ C^2(Ω). Boundary value problems of this type arise, for example, in torsional bar problems and steady-state heat conduction problems.
Linear Systems By Ng Tin Yau (PhD) 18/65
-
BVP2 - discretization
For the approximate solution we choose an equidistant subdivision of the region [0, 1] × [0, 1] by setting

x_{i,j} = (ih, jh),  i, j = 0, 1, ..., n+1   (25)

where the step size is given by h = 1/(n+1) with n ∈ N. Analogously to the previous example, at the internal grid points x_{i,j} with i, j = 1, ..., n, we replace the Laplacian by the difference quotient

\frac{\partial^2 u}{\partial x_1^2} + \frac{\partial^2 u}{\partial x_2^2} ≈ \frac{1}{h^2}[u(x_{i,j+1}) + u(x_{i+1,j}) - 4u(x_{i,j}) + u(x_{i,j-1}) + u(x_{i-1,j})]   (26)

By using the notation u(x_{i,j}) ≈ u_{i,j}, we arrive at the system of equations

\frac{1}{h^2}[4u_{i,j} - u_{i,j+1} - u_{i+1,j} - u_{i,j-1} - u_{i-1,j}] = f(x_{i,j}, u_{i,j})   (27)

for i, j = 1, ..., n.
Linear Systems By Ng Tin Yau (PhD) 19/65
-
Elementary Row Operations
Intuitively, the following operations for matrices
1 Interchange of two rows
2 Addition of a constant multiple of one row to another row
3 Multiplication of a row by a nonzero constant c
should not change the solution of any system of equations. We now call a linear system S1 row-equivalent to a linear system S2 if S1 can be obtained from S2 by finitely many row operations. In fact, we have the following theorem:
Theorem
Row-equivalent linear systems have the same set of solutions
Linear Systems By Ng Tin Yau (PhD) 20/65
-
An Example
Consider the following linear system

\begin{bmatrix} 2 & -1 & 1 \\ 4 & 3 & -1 \\ 3 & 2 & 2 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} = \begin{Bmatrix} 4 \\ 6 \\ 15 \end{Bmatrix}   (28)

Put it in the form of an augmented matrix and perform row operations on this matrix:

\left[\begin{array}{ccc|c} 2 & -1 & 1 & 4 \\ 4 & 3 & -1 & 6 \\ 3 & 2 & 2 & 15 \end{array}\right]
\xrightarrow{-2E_1+E_2,\ -\frac{3}{2}E_1+E_3}
\left[\begin{array}{ccc|c} 2 & -1 & 1 & 4 \\ 0 & 5 & -3 & -2 \\ 0 & \frac{7}{2} & \frac{1}{2} & 9 \end{array}\right]
\xrightarrow{-\frac{7}{10}E_2+E_3}
\left[\begin{array}{ccc|c} 2 & -1 & 1 & 4 \\ 0 & 5 & -3 & -2 \\ 0 & 0 & \frac{13}{5} & \frac{52}{5} \end{array}\right]

At this point, we have finished the so-called forward elimination. The next procedure is called back substitution. That is, we have x_3 = 4, then solve for x_2 = 2, and finally x_1 = 1.
Linear Systems By Ng Tin Yau (PhD) 21/65
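The forward elimination and back substitution above can be sketched in a few lines of Python; we also include the row swap for partial pivoting described on a later slide (function name ours):

```python
def gauss_solve(A, b):
    # Forward elimination with partial pivoting, then back substitution
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]  # augmented matrix [A | b]
    for k in range(n):
        # partial pivoting: pick the largest |entry| on or below the diagonal
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

x = gauss_solve([[2, -1, 1], [4, 3, -1], [3, 2, 2]], [4, 6, 15])
print(x)  # close to [1.0, 2.0, 4.0], the solution found on this slide
```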
-
Idea of the Gauss Elimination
This standard method for solving linear systems is a systematic process of elimination that reduces the system to triangular form, because the system can then be easily solved by back substitution. In the case of a 3 × 3 system, a triangular system takes the form

\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ a_{31} & a_{32} & a_{33} & b_3 \end{array}\right]
\rightarrow
\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ 0 & a_{22}^{(1)} & a_{23}^{(1)} & b_2^{(1)} \\ 0 & a_{32}^{(1)} & a_{33}^{(1)} & b_3^{(1)} \end{array}\right]
\rightarrow
\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ 0 & a_{22}^{(1)} & a_{23}^{(1)} & b_2^{(1)} \\ 0 & 0 & a_{33}^{(2)} & b_3^{(2)} \end{array}\right]

Now one can solve for x_3 = b_3^{(2)}/a_{33}^{(2)} provided a_{33}^{(2)} ≠ 0.

In the first step of the elimination process, the first equation E_1^{(0)} of the system is called the pivot equation and a_{11} is called the pivot. We use this pivot equation to eliminate x_1 from E_2^{(0)} to E_3^{(0)}. In the second step we take the new second equation E_2^{(1)} as the pivot equation and use it to eliminate x_2 from equation E_3^{(1)}.
Linear Systems By Ng Tin Yau (PhD) 22/65
-
Gauss Elimination Method - General Procedures
In general, for an n × n system, after n-1 steps the system is transformed to a triangular system that can be solved by back substitution:

\begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ 0 & a_{22} & a_{23} & \cdots & a_{2n} \\ \vdots & 0 & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & a_{nn} \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ \vdots \\ \vdots \\ x_n \end{Bmatrix} = \begin{Bmatrix} b_1 \\ b_2 \\ \vdots \\ \vdots \\ b_n \end{Bmatrix}   (29)

Now if we set a_{j,n+1} = b_j where 1 ≤ j ≤ n, then

x_n = \frac{a_{n,n+1}}{a_{nn}}   (30)

and for i = n-1, ..., 1 we have

x_i = \frac{1}{a_{ii}} \left( a_{i,n+1} - \sum_{j=i+1}^{n} a_{ij} x_j \right)   (31)

Linear Systems By Ng Tin Yau (PhD) 23/65
-
How about if akk = 0?
In general, the pivot a_{kk} (in step k) must be different from zero and should be large in absolute value, so as to avoid roundoff magnification by the multiplications in the elimination. For this we choose as our pivot equation one that has the absolutely largest a_{jk} in column k on or below the main diagonal (actually, the uppermost if there are several such equations). This popular method is called partial pivoting. Consider the system as follows:

\begin{bmatrix} 0 & 8 & 2 \\ 3 & 5 & 2 \\ 6 & 2 & 8 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} = \begin{Bmatrix} 7 \\ 8 \\ 26 \end{Bmatrix}

In this case, we have a_{11} = 0, therefore pivoting is necessary. The greatest coefficient in column 1 is |a_{31}| = 6, so we interchange E_1 and E_3 to give the system as in problem 1.

Guidelines for Pivoting

In step k, if a_{kk} = 0, we must pivot. If |a_{kk}| is small, we should pivot.

Linear Systems By Ng Tin Yau (PhD) 24/65
-
Difficulty with Small Pivots
The solution of the system

\begin{bmatrix} 0.0004 & 1.402 \\ 0.4003 & -1.502 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix} = \begin{Bmatrix} 1.406 \\ 2.501 \end{Bmatrix}

is x_1 = 10 and x_2 = 1. Solve this system by Gauss elimination using 4-digit floating-point arithmetic.

Picking the first of the given equations as the pivot equation, we have to multiply this equation by 0.4003/0.0004 = 1000.75 ≈ 1001 and subtract the result from the second equation, obtaining -1405x_2 = -1404 (round-off error here!). Hence, x_2 = (-1404)/(-1405) = 0.9993 (round-off error again!), which gives x_1 = 12.4535. This failure occurs because |a_{11}| is small compared with |a_{12}|, so that a small round-off error in x_2 leads to a large error in x_1. That is, using the first equation we obtain

0.0004 e_1 + 1.402 e_2 = 0   (32)

where e_i = x_i - x̄_i. Notice that e_2 = 0.0007, which gives e_1 = -2.4535.

Linear Systems By Ng Tin Yau (PhD) 25/65
-
Swaping the rows
The solution of the system

\begin{bmatrix} 0.4003 & -1.502 \\ 0.0004 & 1.402 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix} = \begin{Bmatrix} 2.501 \\ 1.406 \end{Bmatrix}

In this case we use the factor 0.0004/0.4003 ≈ 0.0010 to make a_{21} = 0, which yields the system

\begin{bmatrix} 0.4003 & -1.502 \\ 0.0000 & 1.404 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix} = \begin{Bmatrix} 2.501 \\ 1.403 \end{Bmatrix}

which gives x_2 = 0.9993 and

0.4003 x_1 - 1.502(0.9993) = 2.501  ⟹  x_1 = 9.9974

Now using the first equation we have

0.4003 e_1 - 1.502 e_2 = 0   (33)

In this case e_2 = 0.0007, which is the same as without pivoting; however, e_1 = 0.0026, which is definitely better than before.
Linear Systems By Ng Tin Yau (PhD) 26/65
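The two slides above can be replayed in Python by simulating "4 significant digits" with an explicit rounding step. The rounding scheme below is an assumption of ours, so the intermediate digits differ slightly from the slides, but the qualitative conclusion (the swapped ordering is far more accurate) is the same:

```python
def fl(x):
    # round x to 4 significant decimal digits (our model of the arithmetic)
    return float('%.4g' % x)

def solve2(a11, a12, b1, a21, a22, b2):
    # eliminate x1 from the second equation, then back-substitute,
    # rounding after every arithmetic operation
    m = fl(a21 / a11)
    a22p = fl(a22 - fl(m * a12))
    b2p = fl(b2 - fl(m * b1))
    x2 = fl(b2p / a22p)
    x1 = fl(fl(b1 - fl(a12 * x2)) / a11)
    return x1, x2

naive = solve2(0.0004, 1.402, 1.406, 0.4003, -1.502, 2.501)    # tiny pivot
pivoted = solve2(0.4003, -1.502, 2.501, 0.0004, 1.402, 1.406)  # rows swapped
print(naive, pivoted)  # the pivoted x1 is far closer to the true x1 = 10
```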
-
Iteration Methods
The linear systems Ax = b that occur in many applications can have a very large order. For such systems, the Gaussian elimination method is often too expensive in either computation time or computer memory requirements, or possibly both. In an iterative method, a sequence of progressively accurate iterates is produced to approximate the solution. Thus, in general, we do not expect to get the exact solution in a finite number of iteration steps, even if the round-off error effect is not taken into account. In the study of iteration methods, the most important issue is the convergence property.
Linear Systems By Ng Tin Yau (PhD) 27/65
-
Vectors in Rn
Denote x = (x_1, x_2, ..., x_n)^T ∈ R^n where x_i ∈ R. Then R^n becomes a vector space if for all elements x, y of R^n and scalars α ∈ R we have

1 x + y = (x_1 + y_1, x_2 + y_2, ..., x_n + y_n)^T
2 αx = (αx_1, αx_2, ..., αx_n)^T

To introduce the notion of length to R^n, we have the following so-called normed linear space axioms:

Normed Linear Space Axioms

Let x, y be elements of R^n. A vector norm ∥·∥ on R^n is a nonnegative real-valued function such that it satisfies the following axioms:

1 ∥x∥ = 0 if and only if x = 0
2 ∥αx∥ = |α| ∥x∥ for all α ∈ R
3 ∥x + y∥ ≤ ∥x∥ + ∥y∥ for all x, y ∈ R^n (Triangle Inequality)
Linear Systems By Ng Tin Yau (PhD) 28/65
-
Some Vector Norms
Let x = (x_1, x_2, ..., x_n)^T ∈ R^n where x_i ∈ R. Common norms employed in numerical analysis include:

1 p-norm:

∥x∥_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p} for p ∈ N   (34)

2 Infinity norm:

∥x∥_∞ = \max_{1 ≤ i ≤ n} |x_i|   (35)

In particular, if p = 2 we call it the Euclidean norm.

Example

Let x = (-1, 1, -2)^T ∈ R^3. Calculate ∥x∥_2 and ∥x∥_∞.

Ans:

∥x∥_2 = \sqrt{(-1)^2 + 1^2 + (-2)^2} = \sqrt{6}

and

∥x∥_∞ = \max\{|-1|, |1|, |-2|\} = 2
Linear Systems By Ng Tin Yau (PhD) 29/65
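Both norms are one-liners in Python; a minimal sketch reproducing the example above (function names ours):

```python
def p_norm(x, p):
    # ||x||_p = (sum |x_i|^p)^(1/p), equation (34)
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

def inf_norm(x):
    # ||x||_inf = max |x_i|, equation (35)
    return max(abs(xi) for xi in x)

x = [-1.0, 1.0, -2.0]
print(p_norm(x, 2), inf_norm(x))  # sqrt(6) ~ 2.4495, and 2.0
```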
-
Which norm should we use?
Theorem
For each x ∈ R^n, we have

∥x∥_∞ ≤ ∥x∥_2 ≤ \sqrt{n} ∥x∥_∞   (36)

Proof: Let x_j be a coordinate of x such that

∥x∥_∞ = \max_{1 ≤ i ≤ n} |x_i| = |x_j|   (37)

Then

∥x∥_∞^2 = |x_j|^2 = x_j^2 ≤ \sum_{i=1}^{n} x_i^2 = ∥x∥_2^2   (38)

so ∥x∥_∞ ≤ ∥x∥_2. On the other hand,

∥x∥_2^2 = \sum_{i=1}^{n} x_i^2 ≤ \sum_{i=1}^{n} x_j^2 = n x_j^2 = n ∥x∥_∞^2   (39)

Hence, ∥x∥_2 ≤ \sqrt{n} ∥x∥_∞.
Linear Systems By Ng Tin Yau (PhD) 30/65
-
Convergent Sequences
A sequence {x^{(k)}}_{k=0}^∞ of vectors in R^n is said to converge to x with respect to the norm ∥·∥ if, given any ε > 0, there exists an integer N(ε) such that

∥x^{(k)} - x∥ < ε for all k ≥ N(ε)   (40)

The following theorem provides a useful stopping criterion for many numerical algorithms.

Theorem

The sequence of vectors {x^{(k)}} converges to x in R^n with respect to ∥·∥_∞ if and only if lim_{k→∞} x_i^{(k)} = x_i for each i = 1, 2, ..., n.

Proof: (⟹) Notice that ∥x^{(k)} - x∥_∞ = \max_{1 ≤ i ≤ n} |x_i^{(k)} - x_i|; in other words, there exists j ∈ {1, ..., n} such that ∥x^{(k)} - x∥_∞ = |x_j^{(k)} - x_j| < ε, and we have the desired result.
(⟸) Given ε > 0, then for each i ∈ {1, ..., n} there exists N_i such that |x_i^{(k)} - x_i| < ε whenever k ≥ N_i. Let N = \max\{N_1, ..., N_n\}; then whenever k ≥ N, ∥x^{(k)} - x∥_∞ = \max_{1 ≤ i ≤ n} |x_i^{(k)} - x_i| < ε.
Linear Systems By Ng Tin Yau (PhD) 31/65
-
Example 1
Let x^{(k)} ∈ R^4 be defined by

x^{(k)} = \left( 1,\ 2 + \frac{1}{k},\ \frac{3}{k^2},\ e^{-k} \sin k \right)^T

Since lim_{k→∞} 1 = 1, lim_{k→∞} (2 + 1/k) = 2, lim_{k→∞} 3/k^2 = 0 and lim_{k→∞} e^{-k} sin k = 0, x^{(k)} converges to x = (1, 2, 0, 0)^T with respect to ∥·∥_∞. In other words, given ε > 0, there exists an integer N(ε/2) with the property that

∥x^{(k)} - x∥_∞ < ε/2

whenever k ≥ N(ε/2). Since the Euclidean norm and the infinity norm are equivalent, this implies that ∥x^{(k)} - x∥_2 also becomes small.

For the Jacobi iteration example, the first iterate gives

∥x^{(1)} - x^{(0)}∥_∞ / ∥x^{(1)}∥_∞ > 0.0002

Hence, we have to continue.

Linear Systems By Ng Tin Yau (PhD) 35/65
-
Solution: Second Iteration
Let us try one more iteration. Using x^{(1)} calculated in the first iteration, we have

x_1^{(2)} = \frac{1}{10}(x_2^{(1)} - 2x_3^{(1)} - x_4^{(1)} + 6) = 0.8598
x_2^{(2)} = \frac{1}{11}(x_1^{(1)} + x_3^{(1)} - 3x_4^{(1)} + 25) = 1.7159
x_3^{(2)} = \frac{1}{10}(-2x_1^{(1)} + x_2^{(1)} + x_4^{(1)} - 11) = -0.8052
x_4^{(2)} = \frac{1}{8}(-3x_2^{(1)} + x_3^{(1)} + 15) = 0.8852

Now

\frac{∥x^{(2)} - x^{(1)}∥_∞}{∥x^{(2)}∥_∞} = 0.5768 > 0.0002

Therefore, we have to continue.
Linear Systems By Ng Tin Yau (PhD) 36/65
-
The Approximation Solution
Continuing the computational work, we have the following results:

k    x_1^{(k)}   x_2^{(k)}   x_3^{(k)}   x_4^{(k)}   e_k
1    0.6000      2.2727      -1.1000     1.8750      1.0000
2    0.8598      1.7159      -0.8052     0.8852      0.5768
3    0.8441      2.0363      -1.0118     1.1309      0.1573
4    0.8929      1.9491      -0.9521     0.9849      0.0582
5    0.8868      1.9987      -0.9852     1.0251      0.0249
6    0.8944      1.9842      -0.9750     1.0023      0.0147
7    0.8932      1.9920      -0.9802     1.0090      0.0039
8    0.8943      1.9896      -0.9785     1.0055      0.0018
9    0.8941      1.9909      -0.9794     1.0066      0.0006
10   0.8943      1.9905      -0.9791     1.0060      0.0003
11   0.8943      1.9907      -0.9792     1.0062      0.0001

Since e_11 = 0.0001 < 0.0002, the approximate solution is x^{(11)}, and Matlab gives (0.8943, 1.9906, -0.9792, 1.0061)^T.

Linear Systems By Ng Tin Yau (PhD) 37/65
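The whole computation can be sketched in a few lines of Python. The coefficient matrix below is reconstructed by us from the iteration formulas on the second-iteration slide, so treat it as an assumption; with it, the first iterate from x^{(0)} = 0 matches the table exactly:

```python
def jacobi(A, b, x0, tol, kmax):
    # Jacobi iteration with the relative infinity-norm stopping criterion
    n = len(A)
    x = x0[:]
    for _ in range(kmax):
        xn = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
              for i in range(n)]
        if max(abs(a - c) for a, c in zip(xn, x)) / max(abs(a) for a in xn) < tol:
            return xn
        x = xn
    return x

# system reconstructed from the slide's iteration formulas (our assumption)
A = [[10, -1, 2, 1], [-1, 11, -1, 3], [2, -1, 10, -1], [0, 3, -1, 8]]
b = [6, 25, -11, 15]
x = jacobi(A, b, [0.0] * 4, 2e-4, 50)
print(x)  # approaches (0.8943, 1.9907, -0.9792, 1.0062), as in the table
```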
-
A Sufficient Condition for Convergence
For an n × n linear system Ax = b, a sufficient condition for convergence of the Jacobi method is that in each row of the matrix of coefficients a_{ij}, the absolute value of the diagonal element is greater than the sum of the absolute values of the off-diagonal elements. That is,

|a_{ii}| > \sum_{j=1, j \ne i}^{n} |a_{ij}|   (43)

When this condition is satisfied, the matrix A is classified as diagonally dominant and the iteration process converges toward the solution. The iteration, however, might converge even when the above condition is not satisfied.
Linear Systems By Ng Tin Yau (PhD) 38/65
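Condition (43) is a one-line check in Python (the function name is ours); the 4 × 4 matrix used in the Jacobi example above, as reconstructed from its iteration formulas, satisfies it:

```python
def is_diagonally_dominant(A):
    # strict row diagonal dominance, condition (43)
    return all(abs(A[i][i]) > sum(abs(a) for j, a in enumerate(A[i]) if j != i)
               for i in range(len(A)))

print(is_diagonally_dominant([[10, -1, 2, 1], [-1, 11, -1, 3],
                              [2, -1, 10, -1], [0, 3, -1, 8]]))  # True
```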
-
What is an eigenproblem?
We shall denote K^n to be the collection of all n-tuples such that each component belongs to K. Unless otherwise stated, we shall denote A to be any n × n matrix over K.

Given a square matrix A, suppose that there exist λ ∈ K and a nonzero vector v ∈ K^n such that

Av = λv   (44)

Then we say λ is an eigenvalue of A and v is an accompanying eigenvector. Finding such a pair (λ, v), also called an eigenpair, that satisfies equation (44) is called an eigenproblem. Eigenproblems play a significant role in engineering applications and are frequently encountered in numerical methods.
Eigenproblems By Ng Tin Yau (PhD) 39/65
-
How to determine the eigenvalues of a square matrix?
Rewrite equation (44) as

(λI - A)v = 0 or (A - λI)v = 0   (45)

where I is the identity matrix. If λ is an eigenvalue of A and v ≠ 0, then we must have

det(λI - A) = 0   (46)

Define the characteristic polynomial of A by

p(λ) := det(λI - A)   (47)

Thus, the zeros of p(λ) are the eigenvalues of the matrix A. Hence, we have the following theorem:

Theorem

Let A be a complex square matrix. Then λ ∈ C is an eigenvalue of A if and only if det(λI - A) = 0.
Eigenproblems By Ng Tin Yau (PhD) 40/65
-
Facts about eigenvalues
Theorem
Let A be a square matrix over K. Then:

1 Every A has an eigenvalue (over C).
2 The eigenvalue associated with an eigenvector is uniquely determined.
3 If v^{(1)}, v^{(2)} ∈ K^n are eigenvectors with the same eigenvalue λ, then for any scalars c_1, c_2, the vector c_1 v^{(1)} + c_2 v^{(2)}, if it is nonzero, is also an eigenvector with eigenvalue λ.
Eigenproblems By Ng Tin Yau (PhD) 41/65
-
A warm up example
Example
Given

A = \frac{1}{10} \begin{bmatrix} 4 & 3 \\ 6 & 7 \end{bmatrix}

Verify that

\left( 1, \begin{Bmatrix} 1 \\ 2 \end{Bmatrix} \right) and \left( \frac{1}{10}, \begin{Bmatrix} -1 \\ 1 \end{Bmatrix} \right)

are the eigenpairs of matrix A.

Solution:

\begin{bmatrix} 4/10 & 3/10 \\ 6/10 & 7/10 \end{bmatrix} \begin{Bmatrix} 1 \\ 2 \end{Bmatrix} = 1 \begin{Bmatrix} 1 \\ 2 \end{Bmatrix}

and

\begin{bmatrix} 4/10 & 3/10 \\ 6/10 & 7/10 \end{bmatrix} \begin{Bmatrix} -1 \\ 1 \end{Bmatrix} = \frac{1}{10} \begin{Bmatrix} -1 \\ 1 \end{Bmatrix}
Eigenproblems By Ng Tin Yau (PhD) 42/65
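The verification above amounts to checking Av = λv componentwise, which a short Python sketch makes explicit (function name ours):

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[0.4, 0.3], [0.6, 0.7]]
for lam, v in [(1.0, [1.0, 2.0]), (0.1, [-1.0, 1.0])]:
    Av = mat_vec(A, v)
    lv = [lam * x for x in v]
    # Av and lam*v agree up to floating-point rounding, so (lam, v) is an eigenpair
    print(Av, lv)
```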
-
Bookkeeping of cars of a car rental company
Example
A car rental company has offices in Edmonton and Calgary. Relying on its records, the company knows that on a monthly basis 40% of rentals from the Edmonton office are returned there and 60% are one-way rentals that are dropped off in the Calgary office. Similarly, 70% of rentals from the Calgary office are returned there, whereas 30% are dropped off in Edmonton.

(1) Obtain a matrix equation that describes the number of cars at the depots in Edmonton and Calgary at the beginning of each month.

Solution: Let x_k and y_k denote the number of cars at the depots in Edmonton and Calgary, respectively, at the beginning of month k (k = 0, 1, 2, ...). We can express this information in terms of difference equations:

x_{k+1} = 0.4x_k + 0.3y_k
y_{k+1} = 0.6x_k + 0.7y_k

Eigenproblems By Ng Tin Yau (PhD) 43/65
-
Cont'd

(2) Estimate the number of cars at the depots in Edmonton and Calgary in the long run.

Analysis: Denote z_k = (x_k, y_k)^T and A as the coefficient matrix of the system of difference equations for the model. Then z_{k+1} = Az_k, and notice that z_k = A^k z_0. Recall that the eigenvalues of A are λ_1 = 1 and λ_2 = 0.1, and their corresponding eigenvectors are v_1 = (1, 2)^T and v_2 = (1, -1)^T, respectively. By writing z_0 = \sum_{i=1}^{2} c_i v_i and using the fact that A v_i = λ_i v_i, we have

z_k = A^k z_0 = \sum_{i=1}^{2} c_i A^k v_i = \sum_{i=1}^{2} c_i λ_i^k v_i = \begin{Bmatrix} c_1 + c_2 (0.1)^k \\ 2c_1 - c_2 (0.1)^k \end{Bmatrix}

Now if k → ∞, we have lim_{k→∞} z_k = (c_1, 2c_1)^T. Thus, in the long run, the number of cars at the Edmonton depot tends to a value that is half the number of cars at the Calgary office.
Eigenproblems By Ng Tin Yau (PhD) 44/65
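The long-run claim is easy to watch numerically: iterate z_{k+1} = A z_k and the Edmonton/Calgary split approaches the 1 : 2 ratio. The initial fleet of 300 cars, all in Edmonton, is an arbitrary choice of ours:

```python
A = [[0.4, 0.3], [0.6, 0.7]]
z = [300.0, 0.0]  # start with every car in Edmonton (our assumption)
for _ in range(30):
    z = [A[0][0] * z[0] + A[0][1] * z[1],
         A[1][0] * z[0] + A[1][1] * z[1]]
print(z)  # close to [100, 200]: Edmonton ends up with half as many cars as Calgary
```

The total fleet is preserved because each column of A sums to 1.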
-
Example 3
Example
Compute the eigenvalues and their corresponding eigenvectors of the matrix

A = \begin{bmatrix} 3 & 4 \\ -2 & -6 \end{bmatrix}

First, we need to compute

det(λI - A) = det\left( \begin{bmatrix} λ - 3 & -4 \\ 2 & λ + 6 \end{bmatrix} \right) = 0

This leads to the characteristic equation

(λ - 3)(λ + 6) + 8 = λ^2 + 3λ - 10 = (λ - 2)(λ + 5) = 0

Thus, the eigenvalues are λ_1 = 2 and λ_2 = -5.
Eigenproblems By Ng Tin Yau (PhD) 45/65
-
Example 3 cont
Next we need to use the equation (λI - A)v = 0 to determine the corresponding eigenvectors.

For λ = 2, the equation becomes

(λ_1 I - A)v^{(1)} = \begin{bmatrix} -1 & -4 \\ 2 & 8 \end{bmatrix} \begin{Bmatrix} v_1^{(1)} \\ v_2^{(1)} \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}

Set v_2^{(1)} = 1; then v_1^{(1)} = -4, hence v^{(1)} = (-4, 1)^T.

For λ = -5, the equation becomes

(λ_2 I - A)v^{(2)} = \begin{bmatrix} -8 & -4 \\ 2 & 1 \end{bmatrix} \begin{Bmatrix} v_1^{(2)} \\ v_2^{(2)} \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}

Set v_1^{(2)} = 1; then v_2^{(2)} = -2, hence v^{(2)} = (1, -2)^T.
Eigenproblems By Ng Tin Yau (PhD) 46/65
-
Example 4
Example
Compute the eigenvalues of the matrix

A = \begin{bmatrix} -2 & -6 \\ 3 & 4 \end{bmatrix}

The characteristic polynomial is

det(λI - A) = det\left( \begin{bmatrix} λ + 2 & 6 \\ -3 & λ - 4 \end{bmatrix} \right) = λ^2 - 2λ + 10 = 0

Notice that if we restrict λ ∈ R, then we have no eigenvalue that satisfies the characteristic polynomial! However, if we allow λ ∈ C, then we have

λ_1 = 1 + 3i and λ_2 = 1 - 3i

where i = \sqrt{-1}.

Eigenproblems By Ng Tin Yau (PhD) 47/65
-
Example 5
Example
Determine the eigenvalues of the following matrix:

A = \begin{bmatrix} -7 & 13 & -16 \\ 13 & -10 & 13 \\ -16 & 13 & -7 \end{bmatrix}

First, we need to compute

det(A - λI) = det\left( \begin{bmatrix} -(λ+7) & 13 & -16 \\ 13 & -(λ+10) & 13 \\ -16 & 13 & -(λ+7) \end{bmatrix} \right) = 0

This leads to the characteristic equation

p(λ) = λ^3 + 24λ^2 - 405λ + 972 = 0

Solving this equation gives

λ_1 = -36,  λ_2 = 9,  λ_3 = 3

Eigenproblems By Ng Tin Yau (PhD) 48/65
-
A Mechanical Vibration Problem
Consider the vibration of a 3-spring, 2-mass problem.

Using Newton's second law, we obtain

\begin{bmatrix} m_1 & 0 \\ 0 & m_2 \end{bmatrix} \begin{Bmatrix} \ddot{x}_1 \\ \ddot{x}_2 \end{Bmatrix} + \begin{bmatrix} k_1 + k_2 & -k_2 \\ -k_2 & k_2 + k_3 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}

or in matrix form

M\ddot{x} + Kx = 0
Eigenproblems By Ng Tin Yau (PhD) 49/65
-
Mechanical Vibration cont
Setting A = M^{-1}K, that is,

A = \begin{bmatrix} (k_1 + k_2)/m_1 & -k_2/m_1 \\ -k_2/m_2 & (k_2 + k_3)/m_2 \end{bmatrix}

then we have \ddot{x} = -Ax. By assuming that the solution is purely oscillatory, we have x = v e^{iωt}. Substituting this solution into the equations of motion yields

-ω^2 v e^{iωt} = -A v e^{iωt}

Since e^{iωt} ≠ 0, this gives

Av = λv

where λ = ω^2. Physically, ω_i represents the i-th natural frequency of the system, and the eigenvectors v^{(1)} and v^{(2)} are the mode shapes of the two masses.
Eigenproblems By Ng Tin Yau (PhD) 50/65
-
Diagonalization Problem
A square matrix A is called a diagonal matrix if A_{ij} = 0 when i ≠ j. It is easy to see that working with a diagonal matrix is much more convenient than working with a nondiagonal matrix. A matrix A is said to be similar to a matrix B if there exists a nonsingular matrix P such that P^{-1}AP = B. In particular, if A is similar to a diagonal matrix D, then it is said to be diagonalizable.
Now the two key questions are:
1 How do we know that a given square matrix is diagonalizable?
2 Suppose that the square matrix is diagonalizable, then how toobtain a matrix P?
In the sequel, we shall see that the diagonalization problem is directlyrelated to an eigenproblem.
Eigenproblems By Ng Tin Yau (PhD) 51/65
-
Idea of constructing P
Suppose that A is diagonalizable; then P^{-1}AP = D. Denote D_{ii} = λ_i, the diagonal entry at row i. Since P is invertible, its columns form a linearly independent subset of C^n. Denote column i of P as v^{(i)}; thus,

P = [v^{(1)} v^{(2)} ... v^{(n)}]   (48)

From the relation AP = PD, we must have

Av^{(i)} = λ_i v^{(i)}   (49)

or equivalently (λ_i I - A)v^{(i)} = 0. Since v^{(i)} cannot be zero, we must have det(λ_i I - A) = 0. This suggests that we can solve for all the λ_i, then obtain the corresponding v^{(i)}, and if all the v^{(i)} are linearly independent, then we have succeeded! Hence, all we need are some theorems to guarantee that the eigenvectors form a linearly independent set. At the very least we must have n eigenvectors at the outset.
Eigenproblems By Ng Tin Yau (PhD) 52/65
-
A matrix with n distinct eigenvalues
Theorem
Let λ_1, λ_2, ..., λ_n be distinct eigenvalues of an n × n matrix A. If v^{(1)}, v^{(2)}, ..., v^{(n)} are eigenvectors of A such that λ_i corresponds to v^{(i)}, then the set {v^{(1)}, v^{(2)}, ..., v^{(n)}} is linearly independent.

Corollary

Let A be an n × n matrix. If A has n distinct eigenvalues, then A is diagonalizable.

Example

The matrices

\begin{bmatrix} 3 & 4 \\ -2 & -6 \end{bmatrix} and \begin{bmatrix} -7 & 13 & -16 \\ 13 & -10 & 13 \\ -16 & 13 & -7 \end{bmatrix}

are diagonalizable.
Eigenproblems By Ng Tin Yau (PhD) 53/65
-
Brief Survey of Numerical Methods
Many methods can be used to determine the eigenvalues and eigenvectors of a square matrix: for example, the power method, deflation methods, the QR method and the Jacobi method, to name a few. The power method can be used to find the dominant eigenvalue and an associated eigenvector of an arbitrary matrix. The inverse power method finds the eigenvalue closest to a given value and an associated eigenvector; it is often used to refine an approximate eigenvalue and to compute an eigenvector once an eigenvalue has been found by some other technique. Methods based on similarity transformations, such as Householder's method, are used to convert a symmetric matrix into a similar matrix that is tridiagonal (or upper Hessenberg if the matrix is not symmetric). Techniques such as the QR method can then be applied to the upper Hessenberg matrix to obtain approximations to all the eigenvalues. The associated eigenvectors can be found by using an iterative method, such as the inverse power method, or by modifying the QR method to include the approximation of eigenvectors.
Eigenproblems By Ng Tin Yau (PhD) 54/65
-
Symmetric Matrices
A matrix A is said to be symmetric if A = AT. An n n matrix Qis said to be an orthogonal matrix if Q1 = QT.
Theorem
If A is a real symmetric square matrix and D is a diagonal matrixwhose diagonal entries are the eigenvalues of A, then there exists anorthogonal matrix Q such that D = QTAQ.
The following corollary to the above theorem demonstrates some of the interesting properties of symmetric matrices.
Corollary
If A is a real symmetric n n matrix, then there exist n eigenvectorsof A that form an orthonormal set and the eigenvalues of A are realnumbers.
Eigenproblems By Ng Tin Yau (PhD) 55/65
-
The Jacobi Method
Suppose that we have a real symmetric square matrix A. The Jacobi method is an iterative method that produces all the eigenvalues and eigenvectors of a real symmetric matrix simultaneously. The method is based on the theorem of linear algebra stated previously; that is, we need to determine an orthogonal matrix Q such that D = Q^T A Q. However, from a practical viewpoint, it may not be possible to obtain a truly diagonal matrix D. Instead we seek a sequence of matrices {D_k}_{k ∈ N}, hoping that

\lim_{k→∞} D_k = D   (50)

where

D_k = Q_k^T D_{k-1} Q_k,  k ∈ N   (51)

with D_0 := A. The eigenvalues are then approximated by the diagonal entries D_{ii}^{(k)} of the matrix D_k. The corresponding eigenvectors {v^{(i)}}_{i=1}^{n} are given by the columns of the matrix V^{(k)}, where

V^{(k)} = Q_1 Q_2 \cdots Q_k = [v^{(1)} v^{(2)} ... v^{(n)}]   (52)

Eigenproblems By Ng Tin Yau (PhD) 56/65
-
How to convert a 2 × 2 matrix to a diagonal one?

Using the idea of rotating a vector in the plane, we have the rotation matrix given by

R = \begin{bmatrix} \cos θ & -\sin θ \\ \sin θ & \cos θ \end{bmatrix}   (53)

Then we can obtain an orthogonal transformation A' = R^T A R. Carrying out the matrix multiplication, we have

A'_{11} = A_{11} \cos^2 θ + 2A_{12} \sin θ \cos θ + A_{22} \sin^2 θ   (54)
A'_{12} = A'_{21} = (A_{22} - A_{11}) \sin θ \cos θ + A_{12}(\cos^2 θ - \sin^2 θ)   (55)
A'_{22} = A_{11} \sin^2 θ - 2A_{12} \sin θ \cos θ + A_{22} \cos^2 θ   (56)

To obtain a diagonal matrix, we need to kill the off-diagonal terms. In other words, we require A'_{12} = A'_{21} = 0, and using the identities \cos 2θ = \cos^2 θ - \sin^2 θ and \sin 2θ = 2 \sin θ \cos θ yields

\tan 2θ = \frac{2A_{12}}{A_{11} - A_{22}}   (57)
Eigenproblems By Ng Tin Yau (PhD) 57/65
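One rotation step can be sketched directly from equations (53) and (57); using atan2 handles the case A_11 = A_22 as well (function name ours):

```python
import math

def rotate(A):
    # One Jacobi rotation for a symmetric 2 x 2 matrix: pick theta from (57),
    # build R as in (53), and return R^T A R, which should be diagonal
    a11, a12, a22 = A[0][0], A[0][1], A[1][1]
    theta = 0.5 * math.atan2(2.0 * a12, a11 - a22)
    c, s = math.cos(theta), math.sin(theta)
    R = [[c, -s], [s, c]]
    return [[sum(R[k][i] * A[k][l] * R[l][j] for k in range(2) for l in range(2))
             for j in range(2)] for i in range(2)]

B = rotate([[2.0, 1.0], [1.0, 2.0]])
print(B)  # off-diagonal entries vanish up to rounding; diagonal holds eigenvalues 3 and 1
```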
-
How to determine matrix Qk?
Extending this idea to the n × n case and using our notation in the previous analysis, we have

Q_k = \begin{bmatrix} I & 0 & 0 & 0 & 0 \\ 0 & \cos θ_k & 0 & -\sin θ_k & 0 \\ 0 & 0 & I & 0 & 0 \\ 0 & \sin θ_k & 0 & \cos θ_k & 0 \\ 0 & 0 & 0 & 0 & I \end{bmatrix}_{n × n}   (58)

for all k ∈ N. Here the sine and cosine entries appear in the positions (i, i), (i, j), (j, i) and (j, j). In this case, we require D_{ij}^{(k+1)} = D_{ji}^{(k+1)} = 0, which gives

\tan 2θ_{k+1} = \frac{2D_{ij}^{(k)}}{D_{ii}^{(k)} - D_{jj}^{(k)}}   (59)

Thus, each step of the Jacobi method reduces a pair of off-diagonal elements to zero.
Eigenproblems By Ng Tin Yau (PhD) 58/65
-
Example 6
Example
Find the eigenvalues and eigenvectors of the matrix

A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 2 \\ 1 & 2 & 3 \end{bmatrix}

Ans: The largest off-diagonal term is A_{23} = 2. In this case, we have i = 2 and j = 3. Thus

\tan 2θ_1 = \frac{2A_{23}}{A_{22} - A_{33}}  ⟹  θ_1 = \frac{1}{2} \tan^{-1}\left( \frac{4}{2 - 3} \right) = -37.981878°

and

Q_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos θ_1 & -\sin θ_1 \\ 0 & \sin θ_1 & \cos θ_1 \end{bmatrix} = \begin{bmatrix} 1.0 & 0 & 0 \\ 0 & 0.7882054 & 0.6154122 \\ 0 & -0.6154122 & 0.7882054 \end{bmatrix}
Eigenproblems By Ng Tin Yau (PhD) 59/65
-
Example 6 - First Iteration
With D_0 = A and using the calculated Q_1, we have

D_1 = Q_1^T D_0 Q_1 = \begin{bmatrix} 1.0 & 0.1727932 & 1.4036176 \\ 0.1727932 & 0.4384472 & 0.0 \\ 1.4036176 & 0.0 & 4.5615525 \end{bmatrix}

Now we try to reduce the largest off-diagonal term of D_1, namely D_{13}^{(1)} = 1.4036176, to zero. In this case, we have i = 1 and j = 3.

\tan 2θ_2 = \frac{2D_{13}^{(1)}}{D_{11}^{(1)} - D_{33}^{(1)}}  ⟹  θ_2 = \frac{1}{2} \tan^{-1}\left( \frac{2.8072352}{1.0 - 4.5615525} \right) = -19.122686°

and

Q_2 = \begin{bmatrix} \cos θ_2 & 0 & -\sin θ_2 \\ 0 & 1 & 0 \\ \sin θ_2 & 0 & \cos θ_2 \end{bmatrix} = \begin{bmatrix} 0.9448193 & 0 & 0.3275920 \\ 0 & 1.0 & 0 \\ -0.3275920 & 0 & 0.9448193 \end{bmatrix}
Eigenproblems By Ng Tin Yau (PhD) 60/65
-
Example 6 - Second Iteration
Using the calculated Q_2, we have

D_2 = Q_2^T D_1 Q_2 = \begin{bmatrix} 0.5133313 & 0.1632584 & 0.0 \\ 0.1632584 & 0.4384472 & 0.0566057 \\ 0.0 & 0.0566057 & 5.0482211 \end{bmatrix}

Next we reduce the largest off-diagonal term of D_2, namely D_{12}^{(2)} = 0.1632584, to zero. In this case, we have i = 1 and j = 2. Now \tan 2θ_3 = \frac{2D_{12}^{(2)}}{D_{11}^{(2)} - D_{22}^{(2)}}, which gives

θ_3 = \frac{1}{2} \tan^{-1}\left( \frac{0.3265167}{0.5133313 - 0.4384472} \right) = 38.541515°

and

Q_3 = \begin{bmatrix} \cos θ_3 & -\sin θ_3 & 0 \\ \sin θ_3 & \cos θ_3 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0.7821569 & -0.6230815 & 0.0 \\ 0.6230815 & 0.7821569 & 0.0 \\ 0.0 & 0.0 & 1.0 \end{bmatrix}
Eigenproblems By Ng Tin Yau (PhD) 61/65
-
Example 6 - Third Iteration
Using the calculated Q_3, we have

D_3 = Q_3^T D_2 Q_3 = \begin{bmatrix} 0.6433861 & 0.0 & 0.0352699 \\ 0.0 & 0.3083924 & 0.0442745 \\ 0.0352699 & 0.0442745 & 5.0482211 \end{bmatrix}

If we stop the process at this point, the three approximate eigenvalues are

λ_1 = 0.6433861,  λ_2 = 0.3083924,  λ_3 = 5.0482211

In fact the eigenvalues obtained by Matlab are

λ_1 = 0.6431,  λ_2 = 0.3080,  λ_3 = 5.0489
Eigenproblems By Ng Tin Yau (PhD) 62/65
-
Example 6 - Eigenvectors
To obtain the corresponding eigenvectors we compute

V^{(3)} = Q_1 Q_2 Q_3 = \begin{bmatrix} 0.7389969 & -0.5886994 & 0.3275920 \\ 0.3334301 & 0.7421160 & 0.5814533 \\ -0.5854125 & -0.3204631 & 0.7447116 \end{bmatrix}

Then the eigenvectors are given by the columns of the matrix V^{(3)}. That is,

v^{(1)} = \begin{Bmatrix} 0.7389969 \\ 0.3334301 \\ -0.5854125 \end{Bmatrix},  v^{(2)} = \begin{Bmatrix} -0.5886994 \\ 0.7421160 \\ -0.3204631 \end{Bmatrix},  v^{(3)} = \begin{Bmatrix} 0.3275920 \\ 0.5814533 \\ 0.7447116 \end{Bmatrix}

Using Matlab, we have the corresponding eigenvectors

v^{(1)} = \begin{Bmatrix} 0.7370 \\ 0.3280 \\ -0.5910 \end{Bmatrix},  v^{(2)} = \begin{Bmatrix} -0.5910 \\ 0.7370 \\ -0.3280 \end{Bmatrix},  v^{(3)} = \begin{Bmatrix} 0.3280 \\ 0.5910 \\ 0.7370 \end{Bmatrix}
Eigenproblems By Ng Tin Yau (PhD) 63/65
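The whole procedure of Example 6 can be sketched compactly in Python: repeatedly zero the largest off-diagonal entry with a rotation Q_k and accumulate V = Q_1 Q_2 ... (the function name and the stopping threshold are our choices):

```python
import math

def jacobi_eig(A, max_rotations=20):
    # Classical Jacobi method for a real symmetric matrix, equations (51)-(52)
    n = len(A)
    D = [row[:] for row in A]
    V = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(max_rotations):
        # locate the largest off-diagonal entry D_ij (i < j)
        i, j = max(((r, c) for r in range(n) for c in range(r + 1, n)),
                   key=lambda rc: abs(D[rc[0]][rc[1]]))
        if abs(D[i][j]) < 1e-12:
            break
        theta = 0.5 * math.atan2(2.0 * D[i][j], D[i][i] - D[j][j])  # equation (59)
        c, s = math.cos(theta), math.sin(theta)
        Q = [[1.0 if r == q else 0.0 for q in range(n)] for r in range(n)]
        Q[i][i], Q[j][j], Q[i][j], Q[j][i] = c, c, -s, s
        # D <- Q^T D Q and V <- V Q
        QTD = [[sum(Q[k][r] * D[k][q] for k in range(n)) for q in range(n)] for r in range(n)]
        D = [[sum(QTD[r][k] * Q[k][q] for k in range(n)) for q in range(n)] for r in range(n)]
        V = [[sum(V[r][k] * Q[k][q] for k in range(n)) for q in range(n)] for r in range(n)]
    return [D[k][k] for k in range(n)], V

A = [[1.0, 1.0, 1.0], [1.0, 2.0, 2.0], [1.0, 2.0, 3.0]]
eigvals, V = jacobi_eig(A)
print(sorted(eigvals))  # close to [0.3080, 0.6431, 5.0489]
```

Running more rotations than the three shown on the slides drives the off-diagonal terms essentially to zero, which is why the values here match the Matlab results rather than D_3.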
-
Set

(1) Solve the following system using the Gauss elimination method with partial pivoting.

x_1 - x_2 + 2x_3 - x_4 = -8
2x_1 - 2x_2 + 3x_3 - 3x_4 = -20
x_1 + x_2 + x_3 = -2
x_1 - x_2 + 4x_3 + 3x_4 = 4
(2) Suppose that z = αx - βy. Compute ∥z∥_5 and ∥z∥_∞ if α = 3 and β = 2, and x = (5, -3, 8)^T and y = (0, 2, -9)^T.

(3) Perform five iterations on the following linear system using the Jacobi method. (Use x^{(0)} = 0 as the initial approximation.)

4x_1 + x_2 - x_3 = 5
x_1 + 3x_2 + x_3 = -4
2x_1 + 2x_2 + 5x_3 = 1
Exercises By Ng Tin Yau (PhD) 64/65
-
Set
(4) Given a matrix

C = \begin{bmatrix} 3 & 1 & 2 \\ 1 & 0 & 5 \\ 1 & 1 & 4 \end{bmatrix}

Determine the eigenvalues and eigenvectors of C by the conventional method.

(5) Given matrices

A = \begin{bmatrix} 6 & 7 & 2 \\ 4 & 5 & 2 \\ 1 & 1 & 1 \end{bmatrix} and B = \begin{bmatrix} 3 & 1 & 0 \\ 1 & 4 & 2 \\ 0 & 2 & 3 \end{bmatrix}

Determine the eigenvalues and eigenvectors of matrix A by the conventional method and of matrix B by the Jacobi method with 3 iterations ONLY.
Exercises By Ng Tin Yau (PhD) 65/65