Ch 7.1: Introduction to Systems of First Order Linear Equations
A system of simultaneous first order ordinary differential equations has the general form
where each xk is a function of t. If each Fk is a linear function of x1, x2, …, xn, then the system of equations is said to be linear, otherwise it is nonlinear. Systems of higher order differential equations can similarly be defined.
),,,(
),,,(),,,(
21
2122
2111
nnn
n
n
xxxtFx
xxxtFxxxxtFx
K
M
K
K
=′
=′=′
Example 1The motion of a spring-mass system from Section 3.8 was described by the equation
This second order equation can be converted into a system of first order equations by letting x1 = u and x2 = u'. Thus
or
0)(192)(16)( =+′+′′ tututu
019216 122
21
=++′=′
xxxxx
122
21
19216 xxxxx
−−=′=′
Nth Order ODEs and Linear 1st Order SystemsThe method illustrated in previous example can be used to transform an arbitrary nth order equation
into a system of n first order equations, first by defining
Then
( ))1()( ,,,,, −′′′= nn yyyytFy K
)1(321 ,,,, −=′′=′== n
n yxyxyxyx K
),,,( 21
1
32
21
nn
nn
xxxtFxxx
xxxx
K
M
=′=′
=′=′
−
Solutions of First Order SystemsA system of simultaneous first order ordinary differential equations has the general form
It has a solution on I: α < t < β if there exists n functions
that are differentiable on I and satisfy the system of equations at all points t in I. Initial conditions may also be prescribed to give an IVP:
).,,,(
),,,(
21
2111
nnn
n
xxxtFx
xxxtFx
K
M
K
=′
=′
)(,),(),( 2211 txtxtx nn φφφ === K
00
0202
0101 )(,,)(,)( nn xtxxtxxtx === K
Example 2The equation
can be written as system of first order equations by letting x1 = y and x2 = y'. Thus
A solution to this system is
which is a parametric descriptionfor the unit circle.
π20,0 <<=+′′ tyy
12
21
xxxx−=′
=′
π20),cos(),sin( 21 <<== ttxtx
Theorem 7.1.1Suppose F1,…, Fn and ∂F1/∂x1,…, ∂F1/∂xn,…, ∂Fn/∂ x1,…, ∂Fn/∂xn, are continuous in the region R of t x1 x2…xn-space defined by α < t < β, α1 < x1 < β1, …, αn < xn < βn, and let the point
be contained in R. Then in some interval (t0 - h, t0 + h) there exists a unique solution
that satisfies the IVP.
( )002
010 ,,,, nxxxt K
)(,),(),( 2211 txtxtx nn φφφ === K
),,,(
),,,(),,,(
21
2122
2111
nnn
n
n
xxxtFx
xxxtFxxxxtFx
K
M
K
K
=′
=′=′
Linear SystemsIf each Fk is a linear function of x1, x2, …, xn, then the system of equations has the general form
If each of the gk(t) is zero on I, then the system is homogeneous, otherwise it is nonhomogeneous.
)()()()(
)()()()()()()()(
2211
222221212
112121111
tgxtpxtpxtpx
tgxtpxtpxtpxtgxtpxtpxtpx
nnnnnnn
nn
nn
++++=′
++++=′++++=′
K
M
K
K
Theorem 7.1.2Suppose p11, p12,…, pnn, g1,…, gn are continuous on an interval I: α < t < β with t0 in I, and let
prescribe the initial conditions. Then there exists a unique solution
that satisfies the IVP, and exists throughout I.
002
01 ,,, nxxx K
)(,),(),( 2211 txtxtx nn φφφ === K
)()()()(
)()()()()()()()(
2211
222221212
112121111
tgxtpxtpxtpx
tgxtpxtpxtpxtgxtpxtpxtpx
nnnnnnn
nn
nn
++++=′
++++=′++++=′
K
M
K
K
Ch 7.2: Review of MatricesFor theoretical and computation reasons, we review results of matrix theory in this section and the next. A matrix A is an m x n rectangular array of elements, arranged in m rows and n columns, denoted
Some examples of 2 x 2 matrices are given below:
( )⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
==
mnmm
n
n
ji
aaa
aaaaaa
a
L
MOMM
L
L
21
22221
11211
A
⎟⎟⎠
⎞⎜⎜⎝
⎛−+−
=⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=
iii
CB7654231
,4231
,4321
A
TransposeThe transpose of A = (aij) is AT = (aji).
For example,
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=⇒
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
mnnn
m
m
T
mnmm
n
n
aaa
aaaaaa
aaa
aaaaaa
L
MOMM
L
L
L
MOMM
L
L
21
22212
12111
21
22221
11211
AA
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=⇒⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=⇒⎟⎟
⎠
⎞⎜⎜⎝
⎛=
635241
654321
,4231
4321 TT BBAA
ConjugateThe conjugate of A = (aij) is A = (aij).
For example,
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=⇒
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
mnmm
n
n
mnmm
n
n
aaa
aaaaaa
aaa
aaaaaa
L
MOMM
L
L
L
MOMM
L
L
21
22221
11211
21
22221
11211
AA
⎟⎟⎠
⎞⎜⎜⎝
⎛+
−=⇒⎟⎟
⎠
⎞⎜⎜⎝
⎛−
+=
443321
443321
ii
ii
AA
AdjointThe adjoint of A is AT , and is denoted by A*
For example,
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=⇒
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
mnnn
m
m
mnmm
n
n
aaa
aaaaaa
aaa
aaaaaa
L
MOMM
L
L
L
MOMM
L
L
21
22212
12111
*
21
22221
11211
AA
⎟⎟⎠
⎞⎜⎜⎝
⎛−
+=⇒⎟⎟
⎠
⎞⎜⎜⎝
⎛−
+=
432431
443321 *
ii
ii
AA
Square MatricesA square matrix A has the same number of rows and columns. That is, A is n x n. In this case, A is said to have order n.
For example,
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
nnnn
n
n
aaa
aaaaaa
L
MOMM
L
L
21
22221
11211
A
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=
987654321
,4321
BA
VectorsA column vector x is an n x 1 matrix. For example,
A row vector x is a 1 x n matrix. For example,
Note here that y = xT, and that in general, if x is a column vector x, then xT is a row vector.
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
321
x
( )321=y
The Zero MatrixThe zero matrix is defined to be 0 = (0), whose dimensions depend on the context. For example,
K,000000
,000000
,0000
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛= 000
Matrix EqualityTwo matrices A = (aij) and B = (bij) are equal if aij = bij for all i and j. For example,
BABA =⇒⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=
4321
,4321
Matrix – Scalar MultiplicationThe product of a matrix A = (aij) and a constant k is defined to be kA = (kaij). For example,
⎟⎟⎠
⎞⎜⎜⎝
⎛−−−−−−
=−⇒⎟⎟⎠
⎞⎜⎜⎝
⎛=
30252015105
5654321
AA
Matrix Addition and SubtractionThe sum of two m x n matrices A = (aij) and B = (bij) is defined to be A + B = (aij + bij). For example,
The difference of two m x n matrices A = (aij) and B = (bij) is defined to be A - B = (aij - bij). For example,
⎟⎟⎠
⎞⎜⎜⎝
⎛=+⇒⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=
121086
8765
,4321
BABA
⎟⎟⎠
⎞⎜⎜⎝
⎛−−−−
=−⇒⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=
4444
8765
,4321
BABA
Matrix MultiplicationThe product of an m x n matrix A = (aij) and an n x rmatrix B = (bij) is defined to be the matrix C = (cij), where
Examples (note AB does not necessarily equal BA):
∑=
=n
kkjikij bac
1
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛−+++−+++
=⇒⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=⎟⎟
⎠
⎞⎜⎜⎝
⎛=
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛++++
=⇒
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛++++
=⇒⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=
41715
61000512340023
102103
,654321
20141410
16412212291
2511115
169838341
4231
,4321
CDDC
BA
ABBA
Vector MultiplicationThe dot product of two n x 1 vectors x & y is defined as
The inner product of two n x 1 vectors x & y is defined as
Example:
∑=
=n
kji
T yx1
yx
( ) ∑=
==n
kji
T yx,1
yxyx
( ) iiii,
iiiiii
iT
T
2118)55)(3()32)(2()1)(1(
912)55)(3()32)(2()1)(1(55321
,321
+=−+++−==⇒
+−=++−+−=⇒⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
+−−
=⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
yxyx
yxyx
Vector LengthThe length of an n x 1 vector x is defined as
Note here that we have used the fact that if x = a + bi, then
Example:
( )2/1
1
22/1
1
2/1 || ⎥⎦
⎤⎢⎣
⎡=⎥
⎦
⎤⎢⎣
⎡== ∑∑
==
n
kk
n
kkk xxx,xxx
( )
( ) 3016941
)43)(43()2)(2()1)(1(43
21
2/1
=+++=
−+++==⇒⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
+= ii,
ixxxx
( )( ) 222 xbabiabiaxx =+=−+=⋅
OrthogonalityTwo n x 1 vectors x & y are orthogonal if (x,y) = 0. Example:
( ) 0)1)(3()4)(2()11)(1(14
11
321
=−+−+=⇒⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛= yxyx ,
Identity MatrixThe multiplicative identity matrix I is an n x n matrix given by
For any square matrix A, it follows that AI = IA = A. The dimensions of I depend on the context. For example,
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
100
010001
L
MOMM
L
L
I
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛=
987654321
987654321
100010001
,4321
1001
4321
IBAI
Inverse MatrixA square matrix A is nonsingular, or invertible, if there exists a matrix B such that that AB = BA = I. Otherwise Ais singular. The matrix B, if it exists, is unique and is denoted by A-1
and is called the inverse of A. It turns out that A-1 exists iff detA ≠ 0, and A-1 can be found using row reduction (also called Gaussian elimination) on the augmented matrix (A|I), see example on next slide. The three elementary row operations:
Interchange two rows.Multiply a row by a nonzero scalar.Add a multiple of one row to another row.
Example: Finding the Identity Matrix (1 of 2)
Use row reduction to find the inverse of the matrix A below, if it exists.
Solution: If possible, use elementary row operations to reduce (A|I),
such that the left side is the identity matrix, for then the right side will be A-1. (See next slide.)
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=
834301210
A
( ) ,100834010301001210
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=IA
Example: Finding the Identity Matrix (2 of 2)
Thus
( )
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−−
−−→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−−→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−−→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=
2/122/31001420102/372/9001
143200142010010301
143200001210010301
140430001210010301
100834001210010301
100834010301001210
IA
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−−
−−=−
2/122/31422/372/9
1A
Matrix FunctionsThe elements of a matrix can be functions of a real variable. In this case, we write
Such a matrix is continuous at a point, or on an interval(a, b), if each element is continuous there. Similarly with differentiation and integration:
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
)()()(
)()()()()()(
)(,
)(
)()(
)(
21
22221
11211
2
1
tatata
tatatatatata
t
tx
txtx
t
mnmm
n
n
m L
MOMM
L
L
MAx
⎟⎠⎞⎜
⎝⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛= ∫∫
b
a ij
b
a
ij dttadttdt
dadtd )()(, AA
Example & Differentiation RulesExample:
Many of the rules from calculus apply in this setting. For example: ( )
( )
( )⎟⎠⎞
⎜⎝⎛+⎟
⎠⎞
⎜⎝⎛=
+=+
=
dtd
dtd
dtd
dtd
dtd
dtd
dtd
dtd
BABAAB
BABA
CACCA matrixconstant a is where,
⎟⎟⎠
⎞⎜⎜⎝
⎛
−=⇒
⎟⎟⎠
⎞⎜⎜⎝
⎛−
=⇒⎟⎟⎠
⎞⎜⎜⎝
⎛=
∫ πππ
410
)(
,0sin
cos64cos
sin3)(
3
0
2
dtt
ttt
dtd
ttt
t
A
AA
Ch 7.3: Systems of Linear Equations, Linear Independence, Eigenvalues
A system of n linear equations in n variables,
can be expressed as a matrix equation Ax = b:
If b = 0, then system is homogeneous; otherwise it is nonhomogeneous.
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
nnnnnn
n
n
b
bb
x
xx
aaa
aaaaaa
MM
L
MOMM
L
L
2
1
2
1
,2,1,
,22,21,2
,12,11,1
,,22,11,
2,222,211,2
1,122,111,1
nnnnnn
nn
nn
bxaxaxa
bxaxaxabxaxaxa
=+++
=+++
=+++
L
M
L
L
Nonsingular CaseIf the coefficient matrix A is nonsingular, then it is invertible and we can solve Ax = b as follows:
This solution is therefore unique. Also, if b = 0, it follows that the unique solution to Ax = 0 is x = A-10 = 0. Thus if A is nonsingular, then the only solution to Ax = 0 is the trivial solution x = 0.
bAxbAIxbAAxAbAx 1111 −−−− =⇒=⇒=⇒=
Example 1: Nonsingular Case (1 of 3)
From a previous example, we know that the matrix A below is nonsingular with inverse as given.
Using the definition of matrix multiplication, it follows that the only solution of Ax = 0 is x = 0:
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−−
−−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−= −
2/122/31422/372/9
,834301210
1AA
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−−
−−== −
000
000
2/122/31422/372/9
10Ax
Example 1: Nonsingular Case (2 of 3)
Now let’s solve the nonhomogeneous linear system Ax = bbelow using A-1:
This system of equations can be written as Ax = b, where
Then
08342301220
321
321
321
=+−−=++
=++
xxxxxxxxx
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−−
=⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−−
−−== −
71223
022
2/122/31422/372/9
1bAx
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=
022
,,834301210
3
2
1
bxAxxx
Example 1: Nonsingular Case (3 of 3)
Alternatively, we could solve the nonhomogeneous linear system Ax = b below using row reduction.
To do so, form the augmented matrix (A|b) and reduce, using elementary row operations.
( )
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−−
=→==+−=+
→⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛ −→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛ −→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−
−→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−
−→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−=
71223
72223
710022102301
1420022102301
843022102301
083422102301
083423012210
3
32
31
x
bA
xxxxx
08342301220
321
321
321
=+−−=++
=++
xxxxxxxxx
Singular CaseIf the coefficient matrix A is singular, then A-1 does not exist, and either a solution to Ax = b does not exist, or there is more than one solution (not unique). Further, the homogeneous system Ax = 0 has more than one solution. That is, in addition to the trivial solution x = 0, there are infinitely many nontrivial solutions.The nonhomogeneous case Ax = b has no solution unless (b, y) = 0, for all vectors y satisfying A*y = 0, where A* is the adjoint of A. In this case, Ax = b has solutions (infinitely many), each of the form x = x(0) + ξ, where x(0) is a particular solution of Ax = b, and ξ is any solution of Ax = 0.
Example 2: Singular Case (1 of 3)
Solve the nonhomogeneous linear system Ax = b below using row reduction.
To do so, form the augmented matrix (A|b) and reduce, using elementary row operations.
( )
soln no1015312
100015301121
400015301121
353015301121
6106015301121
154506511121
3
32
321
→==+=−−
→⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛ −−→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−
−−→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−
−−→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−
−−→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−−
−−=
xxxxxx
bA
154506511121
321
321
321
−=+−=++−=−−
xxxxxxxxx
Example 2: Singular Case (2 of 3)
Solve the nonhomogeneous linear system Ax = b below using row reduction.
Reduce the augmented matrix (A|b) as follows:
( )
072
27
21000
530121
25
21530
530121
51060530121
545651121
123
123
12
1
13
12
1
13
12
1
3
2
1
=−−→
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
−−
+−−
→
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
−
+−−
→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−+
−−→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−
−−=
bbb
bbb
bbb
bb
bbb
bbbbb
bbb
bA
3321
2321
1321
545651121
bxxxbxxxbxxx
=+−=++−=−−
Example 2: Singular Case (3 of 3)
From the previous slide, we require
Suppose
Then the reduced augmented matrix (A|b) becomes:
ξxxxx +=⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−+⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛==→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−−
+⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−−
=→
==+=−−
→
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
−−
+−−
)0(3
3
3
3
3
32
321
123
12
1
357
001
13/53/7
001
3/53/71
00053112
27
21000
530121
cxx
xx
xxxxxx
bbb
bbb
072 123 =−− bbb
5,1,1 321 =−== bbb
Linear Dependence and IndependenceA set of vectors x(1), x(2),…, x(n) is linearly dependent if there exists scalars c1, c2,…, cn, not all zero, such that
If the only solution of
is c1= c2 = …= cn = 0, then x(1), x(2),…, x(n) is linearly independent.
0xxx =+++ )()2(2
)1(1
nnccc L
0xxx =+++ )()2(2
)1(1
nnccc L
Example 3: Linear Independence (1 of 2)
Determine whether the following vectors are linear dependent or linearly independent.
We need to solve
or
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−⇔
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛+⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−+⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
000
834301210
000
832
301
410
3
2
1
21
ccc
ccc
0xxx =++ )3(3
)2(2
)1(1 ccc
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
832
,301
,410
)3()2()1( xxx
Example 3: Linear Independence (2 of 2)
We thus reduce the augmented matrix (A|b), as before.
Thus the only solution is c1= c2 = …= cn = 0, and therefore the original vectors are linearly independent.
( )
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=→
==+=+
→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=
000
00203
010002100301
083403010210
3
32
31
c
bA
ccccc
Example 4: Linear Dependence (1 of 2)
Determine whether the following vectors are linear dependent or linearly independent.
We need to solve
or
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−
−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−=
561
,452
,511
)3()2()1( xxx
0xxx =++ )3(3
)2(2
)1(1 ccc
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−
−−⇔
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−+⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−
−+⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−
000
545651121
000
561
452
511
3
2
1
321
ccc
ccc
Example 4: Linear Dependence (2 of 2)
We thus reduce the augmented matrix (A|b), as before.
Thus the original vectors are linearly dependent, with
( )
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−−
=→==+=−−
→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛ −−→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−
−−=
357
3/53/7
00053012
000005300121
054506510121
3
3
3
3
32
321
kc
cc
cccccc
cc
bA
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−−⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−
−+⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−
000
561
3452
5511
7
Linear Independence and InvertibilityConsider the previous two examples:
The first matrix was known to be nonsingular, and its column vectors were linearly independent. The second matrix was known to be singular, and its column vectors were linearly dependent.
This is true in general: the columns (or rows) of A are linearly independent iff A is nonsingular iff A-1 exists.Also, A is nonsingular iff detA ≠ 0, hence columns (or rows) of A are linearly independent iff detA ≠ 0.Further, if A = BC, then det(C) = det(A)det(B). Thus if the columns (or rows) of A and B are linearly independent, then the columns (or rows) of C are also.
Linear Dependence & Vector FunctionsNow consider vector functions x(1)(t), x(2)(t),…, x(n)(t), where
As before, x(1)(t), x(2)(t),…, x(n)(t) is linearly dependent on I if there exists scalars c1, c2,…, cn, not all zero, such that
Otherwise x(1)(t), x(2)(t),…, x(n)(t) is linearly independent on ISee text for more discussion on this.
( ) ( )βα ,,,,2,1,
)(
)()(
)(
)(
)(2
)(1
=∈=
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
= Itnk
tx
txtx
t
km
k
k
k KM
x
Ittctctc nn ∈=+++ allfor ,)()()( )()2(
2)1(
1 0xxx L
Eigenvalues and EigenvectorsThe eqn. Ax = y can be viewed as a linear transformation that maps (or transforms) x into a new vector y. Nonzero vectors x that transform into multiples of themselves are important in many applications. Thus we solve Ax = λx or equivalently, (A-λI)x = 0. This equation has a nonzero solution if we choose λ such that det(A-λI) = 0. Such values of λ are called eigenvalues of A, and the nonzero solutions x are called eigenvectors.
Example 5: Eigenvalues (1 of 3)
Find the eigenvalues and eigenvectors of the matrix A.
Solution: Choose λ such that det(A-λI) = 0, as follows.
⎟⎟⎠
⎞⎜⎜⎝
⎛−
=6332
A
( )
( )( ) ( )( )( )( )
7,373214
336263
32det
0001
6332
detdet
2
−==⇒+−=−+=
−−−−=
⎟⎟⎠
⎞⎜⎜⎝
⎛−−
−=
⎟⎟⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−⎟⎟⎠
⎞⎜⎜⎝
⎛−
=−
λλλλλλ
λλλ
λ
λλIA
Example 5: First Eigenvector (2 of 3)
To find the eigenvectors of the matrix A, we need to solve (A-λI)x = 0 for λ = 3 and λ = -7. Eigenvector for λ = 3: Solve
by row reducing the augmented matrix:
( ) ⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−
−⇔⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−−
−⇔=−
00
9331
00
363332
2
1
2
1
xx
xx
0xIA λ
⎟⎟⎠
⎞⎜⎜⎝
⎛=→⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=→
==−
→⎟⎟⎠
⎞⎜⎜⎝
⎛ −→⎟⎟
⎠
⎞⎜⎜⎝
⎛−−
→⎟⎟⎠
⎞⎜⎜⎝
⎛−
−
13
choosearbitrary,133
00031
000031
093031
093031
)1(
2
2)1(
2
21
xx ccxx
xxx
Example 5: Second Eigenvector (3 of 3)
Eigenvector for λ = -7: Solve
by row reducing the augmented matrix:
( ) ⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛⇔⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛+−
+⇔=−
00
1339
00
763372
2
1
2
1
xx
xx
0xIA λ
⎟⎟⎠
⎞⎜⎜⎝
⎛−=→⎟⎟
⎠
⎞⎜⎜⎝
⎛−=⎟⎟
⎠
⎞⎜⎜⎝
⎛−=→
==+
→⎟⎟⎠
⎞⎜⎜⎝
⎛→⎟⎟
⎠
⎞⎜⎜⎝
⎛→⎟⎟
⎠
⎞⎜⎜⎝
⎛
31
choosearbitrary,1
3/13/1
0003/11
00003/11
01303/11
013039
)2(
2
2)2(
2
21
xx ccxx
xxx
Normalized EigenvectorsFrom the previous example, we see that eigenvectors are determined up to a nonzero multiplicative constant. If this constant is specified in some particular way, then the eigenvector is said to be normalized. For example, eigenvectors are sometimes normalized by choosing the constant so that ||x|| = (x, x)½ = 1.
Algebraic and Geometric MultiplicityIn finding the eigenvalues λ of an n x n matrix A, we solvedet(A-λI) = 0. Since this involves finding the determinant of an n x nmatrix, the problem reduces to finding roots of an nth degree polynomial. Denote these roots, or eigenvalues, by λ1, λ2, …, λn. If an eigenvalue is repeated m times, then its algebraic multiplicity is m. Each eigenvalue has at least one eigenvector, and a eigenvalue of algebraic multiplicity m may have q linearly independent eigevectors, 1 ≤ q ≤ m, and q is called the geometric multiplicity of the eigenvalue.
Eigenvectors and Linear IndependenceIf an eigenvalue λ has algebraic multiplicity 1, then it is said to be simple, and the geometric multiplicity is 1 also. If each eigenvalue of an n x n matrix A is simple, then Ahas n distinct eigenvalues. It can be shown that the neigenvectors corresponding to these eigenvalues are linearly independent. If an eigenvalue has one or more repeated eigenvalues, then there may be fewer than n linearly independent eigenvectors since for each repeated eigenvalue, we may have q < m. This may lead to complications in solving systems of differential equations.
Example 6: Eigenvalues (1 of 5)
Find the eigenvalues and eigenvectors of the matrix A.
Solution: Choose λ such that det(A-λI) = 0, as follows.
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
011101110
A
( )
1,1,2)1)(2(
23
111111
detdet
221
2
3
−=−==⇒+−=
++−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−
−=−
λλλλλ
λλ
λλ
λλIA
Example 6: First Eigenvector (2 of 5)
Eigenvector for λ = 2: Solve (A-λI)x = 0, as follows.
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=→
==−=−
→⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−−
→⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−−
→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−
−→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−
−→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−−
−
111
choosearbitrary,111
00011011
000001100101
000001100211
033003300211
011201210211
021101210112
)1(
3
3
3)1(
3
32
31
xx ccxxx
xxxxx
Example 6: 2nd and 3rd Eigenvectors (3 of 5)
Eigenvector for λ = -1: Solve (A-λI)x = 0, as follows.
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−+⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛ −−=→
===++
→⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛→
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
110
, 101
choose
arbitrary,where,101
011
00000111
000000000111
011101110111
)3()2(
3232
3
2
32)2(
3
2
321
xx
x xxxxxx
xx
xx
xxx
Example 6: Eigenvectors of A (4 of 5)
Thus three eigenvectors of A are
where x(2), x(3) correspond to the double eigenvalue λ = - 1.It can be shown that x(1), x(2), x(3) are linearly independent. Hence A is a 3 x 3 symmetric matrix (A = AT ) with 3 real eigenvalues and 3 linearly independent eigenvectors.
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
110
, 101
,111
)3()2()1( xxx
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
011101110
A
Example 6: Eigenvectors of A (5 of 5)
Note that we could have we had chosen
Then the eigenvectors are orthogonal, since
Thus A is a 3 x 3 symmetric matrix with 3 real eigenvalues and 3 linearly independent orthogonal eigenvectors.
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
121
, 101
,111
)3()2()1( xxx
( ) ( ) ( ) 0,,0,,0, )3()2()3()1()2()1( === xxxxxx
Hermitian MatricesA self-adjoint, or Hermitian matrix, satisfies A = A*, where we recall that A* = AT . Thus for a Hermitian matrix, aij = aji. Note that if A has real entries and is symmetric (see last example), then A is Hermitian. An n x n Hermitian matrix A has the following properties:
All eigenvalues of A are real.There exists a full set of n linearly independent eigenvectors of A.If x(1) and x(2) are eigenvectors that correspond to different eigenvalues of A, then x(1) and x(2) are orthogonal. Corresponding to an eigenvalue of algebraic multiplicity m, it is possible to choose m mutually orthogonal eigenvectors, and hence A has a full set of n linearly independent orthogonal eigenvectors.
Ch 7.4: Basic Theory of Systems of First Order Linear EquationsThe general theory of a system of n first order linear equations
parallels that of a single nth order linear equation. This system can be written as x' = P(t)x + g(t), where
)()()()(
)()()()()()()()(
2211
222221212
112121111
tgxtpxtpxtpx
tgxtpxtpxtpxtgxtpxtpxtpx
nnnnnnn
nn
nn
++++=′
++++=′++++=′
K
M
K
K
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
)()()(
)()()()()()(
)(,
)(
)()(
)(,
)(
)()(
)(
21
22221
11211
2
1
2
1
tptptp
tptptptptptp
t
tg
tgtg
t
tx
txtx
t
nnnn
n
n
nn L
MOMM
L
L
MMPgx
Vector Solutions of an ODE SystemA vector x = φ(t) is a solution of x' = P(t)x + g(t) if the components of x,
satisfy the system of equations on I: α < t < β.
For comparison, recall that x' = P(t)x + g(t) represents our system of equations
Assuming P and g continuous on I, such a solution exists by Theorem 7.1.2.
),(,),(),( 2211 txtxtx nn φφφ === K
)()()()(
)()()()()()()()(
2211
222221212
112121111
tgxtpxtpxtpx
tgxtpxtpxtpxtgxtpxtpxtpx
nnnnnnn
nn
nn
++++=′
++++=′++++=′
K
M
K
K
Example 1Consider the homogeneous equation x' = P(t)x below, with the solutions x as indicated.
To see that x is a solution, substitute x into the equation and perform the indicated operations:
tt
t
eee
t 33
3
21
2)(;
1411
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=′ xxx
xx ′=⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛t
t
t
t
t
t
ee
ee
ee
3
3
3
3
3
3
23
63
21411
1411
Homogeneous Case; Vector Function NotationAs in Chapters 3 and 4, we first examine the general homogeneous equation x' = P(t)x.Also, the following notation for the vector functions x(1), x(2),…, x(k),… will be used:
KM
KMM
,
)(
)()(
)(,,
)(
)()(
)(,
)(
)()(
)( 2
1
)(
2
22
12
)2(
1
21
11
)1(
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
tx
txtx
t
tx
txtx
t
tx
txtx
t
nn
n
n
k
nn
xxx
Theorem 7.4.1If the vector functions x(1) and x(2) are solutions of the system x' = P(t)x, then the linear combination c1x(1) + c2x(2) is also a solution for any constants c1 and c2.
Note: By repeatedly applying the result of this theorem, it can be seen that every finite linear combination
of solutions x(1), x(2),…, x(k) is itself a solution to x' = P(t)x. )()()( )()2(
2)1(
1 tctctc kk xxxx +++= K
Example 2Consider the homogeneous equation x' = P(t)x below, with the two solutions x(1) and x(2) as indicated.
Then x = c1x(1) + c2x(2) is also a solution:
⎟⎟⎠
⎞⎜⎜⎝
⎛
−=⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=′
−
−
t
t
t
t
ee
tee
t2
)(,2
)(;1411 )2(
3
3)1( xxxx
x
x
′=⎟⎟⎠
⎞⎜⎜⎝
⎛−+⎟⎟⎠
⎞⎜⎜⎝
⎛=
⎟⎟⎠
⎞⎜⎜⎝
⎛−+⎟⎟⎠
⎞⎜⎜⎝
⎛=
⎟⎟⎠
⎞⎜⎜⎝
⎛
−⎟⎟⎠
⎞⎜⎜⎝
⎛+⎟⎟⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛
−
−
−
−
−
−
t
t
t
t
t
t
t
t
t
t
t
t
ee
cee
c
ecec
ecec
ecec
ecec
263
263
21411
21411
1411
23
3
1
2
23
1
31
2
23
1
31
Theorem 7.4.2If x(1), x(2),…, x(n) are linearly independent solutions of the system x' = P(t)x for each point in I: α < t < β, then each solution x = φ(t) can be expressed uniquely in the form
If solutions x(1),…, x(n) are linearly independent for each point in I: α < t < β, then they are fundamental solutions on I, and the general solution is given by
)()()( )()2(2
)1(1 tctctc n
nxxxx +++= K
)()()( )()2(2
)1(1 tctctc n
nxxxx +++= K
The Wronskian and Linear IndependenceThe proof of Thm 7.4.2 uses the fact that if x(1), x(2),…, x(n)
are linearly independent on I, then detX(t) ≠ 0 on I, where
The Wronskian of x(1),…, x(n) is defined as W[x(1),…, x(n)](t) = detX(t).
It follows that W[x(1),…, x(n)](t) ≠ 0 on I iff x(1),…, x(n) are linearly independent for each point in I.
,)()(
)()()(
1
111
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
txtx
txtxt
nnn
n
L
MOM
L
X
Theorem 7.4.3If x(1), x(2),…, x(n) are solutions of the system x' = P(t)x on I: α < t < β, then the Wronskian W[x(1),…, x(n)](t) is either identically zero on I or else is never zero on I.
This result enables us to determine whether a given set of solutions x(1), x(2),…, x(n) are fundamental solutions byevaluating W[x(1),…, x(n)](t) at any point t in α < t < β.
Theorem 7.4.4Let
Let x(1), x(2),…, x(n) be solutions of the system x' = P(t)x,α < t < β, that satisfy the initial conditions
respectively, where t0 is any point in α < t < β. Thenx(1), x(2),…, x(n) are fundamental solutions of x' = P(t)x.
⎟⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜⎜
⎝
⎛
=
⎟⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜⎜
⎝
⎛
=
⎟⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜⎜
⎝
⎛
=
10
00
,,
0
010
,
0
001
)()2()1( MK
MM
neee
,)(,,)( )(0
)()1(0
)1( nn tt exex == K
Ch 7.5: Homogeneous Linear Systems with Constant CoefficientsWe consider here a homogeneous system of n first order linear equations with constant, real coefficients:
This system can be written as x' = Ax, wherennnnnn
nn
nn
xaxaxax
xaxaxaxxaxaxax
+++=′
+++=′+++=′
K
M
K
K
2211
22221212
12121111
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
nnnn
n
n
m aaa
aaaaaa
tx
txtx
t
L
MOMM
L
L
M
21
22221
11211
2
1
,
)(
)()(
)( Ax
Equilibrium SolutionsNote that if n = 1, then the system reduces to
Recall that x = 0 is the only equilibrium solution if a ≠ 0. Further, x = 0 is an asymptotically stable solution if a < 0, since other solutions approach x = 0 in this case. Also, x = 0 is an unstable solution if a > 0, since other solutions depart from x = 0 in this case. For n > 1, equilibrium solutions are similarly found by solving Ax = 0. We assume detA ≠ 0, so that x = 0 is the only solution. Determining whether x = 0 is asymptotically stable or unstable is an important question here as well.
atetxaxx =⇒=′ )(
Phase PlaneWhen n = 2, then the system reduces to
This case can be visualized in the x1x2-plane, which is called the phase plane. In the phase plane, a direction field can be obtained by evaluating Ax at many points and plotting the resulting vectors, which will be tangent to solution vectors. A plot that shows representative solution trajectories is called a phase portrait. Examples of phase planes, directions fields and phase portraits will be given later in this section.
2221212
2121111
xaxaxxaxax
+=′+=′
Solving Homogeneous SystemTo construct a general solution to x' = Ax, assume a solution of the form x = ξert, where the exponent r and the constant vector ξ are to be determined. Substituting x = ξert into x' = Ax, we obtain
Thus to solve the homogeneous system of differential equations x' = Ax, we must find the eigenvalues and eigenvectors of A.Therefore x = ξert is a solution of x' = Ax provided that r is an eigenvalue and ξ is an eigenvector of the coefficient matrix A.
( ) 0ξIAAξξAξξ =−⇔=⇔= rreer rtrt
Example 1: Direction Field (1 of 9)
Consider the homogeneous equation x' = Ax below.
A direction field for this system is given below.Substituting x = ξert in for x, and rewriting system as (A-rI)ξ = 0, we obtain
xx ⎟⎟⎠
⎞⎜⎜⎝
⎛=′
1411
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−
−00
1411
1
1
ξξ
rr
Example 1: Eigenvalues (2 of 9)
Our solution has the form x = ξert, where r and ξ are found by solving
Recalling that this is an eigenvalue problem, we determine rby solving det(A-rI) = 0:
Thus r1 = 3 and r2 = -1.
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−
−00
1411
1
1
ξξ
rr
)1)(3(324)1(14
11 22 +−=−−=−−=−
−rrrrr
rr
Example 1: First Eigenvector (3 of 9)
Eigenvector for r1 = 3: Solve
by row reducing the augmented matrix:
( ) ⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−
−⇔⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−
−⇔=−
00
2412
00
314131
2
1
2
1
ξξ
ξξ
0ξIA r
⎟⎟⎠
⎞⎜⎜⎝
⎛=→⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=→
==−
→⎟⎟⎠
⎞⎜⎜⎝
⎛ −→⎟⎟
⎠
⎞⎜⎜⎝
⎛−
−→⎟⎟
⎠
⎞⎜⎜⎝
⎛−
−
21
choosearbitrary,12/12/1
0002/11
00002/11
02402/11
024012
)1(
2
2)1(
2
21
ξξ ccξξ
ξξξ
Example 1: Second Eigenvector (4 of 9)
Eigenvector for r2 = -1: Solve
by row reducing the augmented matrix:
( ) ⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛⇔⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛+
+⇔=−
00
2412
00
114111
2
1
2
1
ξξ
ξξ
0ξIA r
⎟⎟⎠
⎞⎜⎜⎝
⎛−
=→⎟⎟⎠
⎞⎜⎜⎝
⎛−=⎟⎟
⎠
⎞⎜⎜⎝
⎛−=→
==+
→⎟⎟⎠
⎞⎜⎜⎝
⎛→⎟⎟
⎠
⎞⎜⎜⎝
⎛→⎟⎟
⎠
⎞⎜⎜⎝
⎛
21
choosearbitrary,1
2/12/1
0002/11
00002/11
02402/11
024012
)2(
2
2)2(
2
21
ξξ ccξξ
ξξξ
Example 1: General Solution (5 of 9)
The corresponding solutions x = ξert of x' = Ax are
The Wronskian of these two solutions is
Thus x(1) and x(2) are fundamental solutions, and the general solution of x' = Ax is
tt etet −⎟⎟⎠
⎞⎜⎜⎝
⎛−
=⎟⎟⎠
⎞⎜⎜⎝
⎛=
21
)(,21
)( )2(3)1( xx
[ ] 0422
)(, 23
3)2()1( ≠−=
−= −
−
−t
tt
tt
eeeee
tW xx
tt ecec
tctct
−⎟⎟⎠
⎞⎜⎜⎝
⎛−
+⎟⎟⎠
⎞⎜⎜⎝
⎛=
+=
21
21
)()()(
23
1
)2(2
)1(1 xxx
Example 1: Phase Plane for x(1) (6 of 9)
To visualize solution, consider first x = c1x(1):
Now
Thus x(1) lies along the straight line x2 = 2x1, which is the line through origin in direction of first eigenvector ξ(1)
If solution is trajectory of particle, with position given by (x1, x2), then it is in Q1 when c1 > 0, and in Q3 when c1 < 0. In either case, particle moves away from origin as t increases.
ttt ecxecxecxx
t 312
311
31
2
1)1( 2,21
)( ==⇔⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=x
121
2
1
13312
311 2
22, xx
cx
cxeecxecx ttt =⇔==⇔==
Example 1: Phase Plane for x(2) (7 of 9)
Next, consider x = c2x(2):
Then x(2) lies along the straight line x2 = -2x1, which is the line through origin in direction of 2nd eigenvector ξ(2)
If solution is trajectory of particle, with position given by (x1, x2), then it is in Q4 when c2 > 0, and in Q2 when c2 < 0. In either case, particle moves towards origin as t increases.
ttt ecxecxecxx
t −−− −==⇔⎟⎟⎠
⎞⎜⎜⎝
⎛−
=⎟⎟⎠
⎞⎜⎜⎝
⎛= 22212
2
1)2( 2,21
)(x
Example 1: Phase Plane for General Solution (8 of 9)
The general solution is x = c1x(1) + c2x(2):
As t →∞, c1x(1) is dominant and c2x(2) becomes negligible. Thus, for c1 ≠ 0, all solutions asymptotically approach the line x2 = 2x1 as t →∞. Similarly, for c2 ≠ 0, all solutions asymptotically approach the line x2 = -2x1 as t → - ∞. The origin is a saddle point,and is unstable. See graph.
tt ecect −⎟⎟⎠
⎞⎜⎜⎝
⎛−
+⎟⎟⎠
⎞⎜⎜⎝
⎛=
21
21
)( 23
1x
Example 1: Time Plots for General Solution (9 of 9)
The general solution is x = c1x(1) + c2x(2):
As an alternative to phase plane plots, we can graph x1 or x2as a function of t. A few plots of x1 are given below. Note that when c1 = 0, x1(t) = c2e-t → 0 as t →∞. Otherwise, x1(t) = c1e3t + c2e-t grows unbounded as t →∞. Graphs of x2 are similarly obtained.
⎟⎟⎠
⎞⎜⎜⎝
⎛
−+
=⎟⎟⎠
⎞⎜⎜⎝
⎛⇔⎟⎟
⎠
⎞⎜⎜⎝
⎛−
+⎟⎟⎠
⎞⎜⎜⎝
⎛=
−
−−
tt
tttt
ecececec
txtx
ecect2
31
23
1
2
12
31 22)(
)(21
21
)(x
Example 2: Direction Field (1 of 9)
Consider the homogeneous equation x' = Ax below.
A direction field for this system is given below.Substituting x = ξert in for x, and rewriting system as (A-rI)ξ = 0, we obtain
xx ⎟⎟⎠
⎞⎜⎜⎝
⎛
−−
=′2223
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛
−−−−
00
2223
1
1
ξξ
rr
Example 2: Eigenvalues (2 of 9)
Our solution has the form x = ξert, where r and ξ are found by solving
Recalling that this is an eigenvalue problem, we determine rby solving det(A-rI) = 0:
Thus r1 = -1 and r2 = -4.
)4)(1(452)2)(3(22
23 2 ++=++=−−−−−=−−
−− rrrrrrr
r
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛
−−−−
00
2223
1
1
ξξ
rr
Example 2: First Eigenvector (3 of 9)
Eigenvector for r1 = -1: Solve
by row reducing the augmented matrix:
( ) ⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛
−−
⇔⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛
+−+−
⇔=−00
1222
00
122213
2
1
2
1
ξξ
ξξ
0ξIA r
⎟⎟⎠
⎞⎜⎜⎝
⎛=→⎟⎟
⎠
⎞⎜⎜⎝
⎛=→
⎟⎟⎠
⎞⎜⎜⎝
⎛ −→⎟⎟
⎠
⎞⎜⎜⎝
⎛
−−
→⎟⎟⎠
⎞⎜⎜⎝
⎛
−−
21
choose2/2
00002/21
01202/21
012022
)1(
2
2)1( ξξξξ
Example 2: Second Eigenvector (4 of 9)
Eigenvector for r2 = -4: Solve
by row reducing the augmented matrix:
( ) ⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛⇔⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛
+−+−
⇔=−00
2221
00
422243
2
1
2
1
ξξ
ξξ
0ξIA r
⎟⎟⎠
⎞⎜⎜⎝
⎛−=→
⎟⎟⎠
⎞⎜⎜⎝
⎛−=→⎟⎟
⎠
⎞⎜⎜⎝
⎛→⎟⎟
⎠
⎞⎜⎜⎝
⎛
12choose
2000021
022021
)2(
2
2)2(
ξ
ξξξ
Example 2: General Solution (5 of 9)
The corresponding solutions x = ξert of x' = Ax are
The Wronskian of these two solutions is
Thus x(1) and x(2) are fundamental solutions, and the general solution of x' = Ax is
tt etet 4)2()1(
12)(,
21
)( −−
⎟⎟⎠
⎞⎜⎜⎝
⎛−=⎟⎟
⎠
⎞⎜⎜⎝
⎛= xx
[ ] 032
2)(, 54
4)2()1( ≠=
−= −
−−
−−t
tt
tt
eeeeetW xx
tt ecec
tctct
421
)2(2
)1(1
12
21
)()()(
−−
⎟⎟⎠
⎞⎜⎜⎝
⎛−+⎟⎟
⎠
⎞⎜⎜⎝
⎛=
+= xxx
Example 2: Phase Plane for x(1) (6 of 9)
To visualize solution, consider first x = c1x(1):
Now
Thus x(1) lies along the straight line x2 = 2½x1, which is the line through origin in direction of first eigenvector ξ(1)
If solution is trajectory of particle, with position given by (x1, x2), then it is in Q1 when c1 > 0, and in Q3 when c1 < 0. In either case, particle moves towards origin as t increases.
ttt ecxecxecxx
t −−− ==⇔⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛= 12111
2
1)1( 2,21
)(x
121
2
1
11211 2
22, xx
cx
cxeecxecx ttt =⇔==⇔== −−−
Example 2: Phase Plane for x(2) (7 of 9)
Next, consider x = c2x(2):
Then x(2) lies along the straight line x2 = -2½x1, which is the line through origin in direction of 2nd eigenvector ξ(2)
If solution is trajectory of particle, with position given by (x1, x2), then it is in Q4 when c2 > 0, and in Q2 when c2 < 0. In either case, particle moves towards origin as t increases.
ttt ecxecxecxx
t 422
421
42
2
1)2( ,212)( −−− =−=⇔⎟⎟⎠
⎞⎜⎜⎝
⎛−=⎟⎟
⎠
⎞⎜⎜⎝
⎛=x
Example 2: Phase Plane for General Solution (8 of 9)
The general solution is x = c1x(1) + c2x(2):
As t →∞, c1x(1) is dominant and c2x(2) becomes negligible. Thus, for c1 ≠ 0, all solutions asymptotically approach origin along the line x2 = 2½x1 as t →∞. Similarly, all solutions are unbounded as t → - ∞. The origin is a node, and is asymptotically stable.
tt etet 4)2()1(
12)(,
21
)( −−
⎟⎟⎠
⎞⎜⎜⎝
⎛−=⎟⎟
⎠
⎞⎜⎜⎝
⎛= xx
Example 2: Time Plots for General Solution (9 of 9)
The general solution is x = c1x(1) + c2x(2):
As an alternative to phase plane plots, we can graph x1 or x2as a function of t. A few plots of x1 are given below. Graphs of x2 are similarly obtained.
⎟⎟⎠
⎞⎜⎜⎝
⎛
+−
=⎟⎟⎠
⎞⎜⎜⎝
⎛⇔⎟⎟
⎠
⎞⎜⎜⎝
⎛−+⎟⎟
⎠
⎞⎜⎜⎝
⎛=
−−
−−−−
tt
tttt
ecececec
txtx
ecect4
21
421
2
1421 2
2)()(
12
21
)(x
2 x 2 Case: Real Eigenvalues, Saddle Points and Nodes
The previous two examples demonstrate the two main cases for a 2 x 2 real system with real and different eigenvalues:
Both eigenvalues have opposite signs, in which case origin is a saddle point and is unstable.Both eigenvalues have the same sign, in which case origin is a node, and is asymptotically stable if the eigenvalues are negative and unstable if the eigenvalues are positive.
Eigenvalues, Eigenvectors and Fundamental Solutions
In general, for an n x n real linear system x' = Ax:All eigenvalues are real and different from each other.Some eigenvalues occur in complex conjugate pairs.Some eigenvalues are repeated.
If eigenvalues r1,…, rn are real & different, then there are ncorresponding linearly independent eigenvectors ξ(1),…, ξ(n). The associated solutions of x' = Ax are
Using Wronskian, it can be shown that these solutions are linearly independent, and hence form a fundamental set of solutions. Thus general solution is
trnntr netet )()()1()1( )(,,)( 1 ξxξx == K
trnn
tr necec )()1(1
1 ξξx ++= K
Hermitian Case: Eigenvalues, Eigenvectors & Fundamental Solutions
If A is an n x n Hermitian matrix (real and symmetric), then all eigenvalues r1,…, rn are real, although some may repeat. In any case, there are n corresponding linearly independent and orthogonal eigenvectors ξ(1),…, ξ(n). The associated solutions of x' = Ax are
and form a fundamental set of solutions.
trnntr netet )()()1()1( )(,,)( 1 ξxξx == K
Example 3: Hermitian Matrix (1 of 3)
Consider the homogeneous equation x' = Ax below.
The eigenvalues were found previously in Ch 7.3, and were: r1 = 2, r2 = -1 and r3 = -1.
Corresponding eigenvectors:
xx⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=′
011101110
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
110
, 101
,111
)3()2()1( ξξξ
Example 3: General Solution (2 of 3)
The fundamental solutions are
with general solution
ttt eee −−
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
110
, 101
,111
)3()2(2)1( xxx
ttt ececec −−
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−+
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−+
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
110
101
111
322
1x
Example 3: General Solution Behavior (3 of 3)
The general solution is x = c1x(1) + c2x(2) + c3x(3):
As t →∞, c1x(1) is dominant and c2x(2) , c3x(3) become negligible. Thus, for c1 ≠ 0, all solns x become unbounded as t →∞,while for c1 = 0, all solns x → 0 as t →∞.The initial points that cause c1 = 0 are those that lie in plane determined by ξ(2) and ξ(3). Thus solutions that start in this plane approach origin as t →∞.
ttt ececec −−
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−+
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−+
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
110
101
111
322
1x
Complex Eigenvalues and Fundamental SolnsIf some of the eigenvalues r1,…, rn occur in complex conjugate pairs, but otherwise are different, then there are still n corresponding linearly independent solutions
which form a fundamental set of solutions. Some may be complex-valued, but real-valued solutions may be derived from them. This situation will be examined in Ch 7.6.
If the coefficient matrix A is complex, then complex eigenvalues need not occur in conjugate pairs, but solutions will still have the above form (if the eigenvalues are distinct) and these solutions may be complex-valued.
,)(,,)( )()()1()1( 1 trnntr netet ξxξx == K
Repeated Eigenvalues and Fundamental SolnsIf some of the eigenvalues r1,…, rn are repeated, then there may not be n corresponding linearly independent solutions of the form
In order to obtain a fundamental set of solutions, it may be necessary to seek additional solutions of another form. This situation is analogous to that for an nth order linear equation with constant coefficients, in which case a repeated root gave rise solutions of the form
This case of repeated eigenvalues is examined in Section 7.8.
trnntr netet )()()1()1( )(,,)( 1 ξxξx == K
K,,, 2 rtrtrt ettee
Ch 7.6: Complex EigenvaluesWe consider again a homogeneous system of n first order linear equations with constant, real coefficients,
and thus the system can be written as x' = Ax, where
,2211
22221212
12121111
nnnnnn
nn
nn
xaxaxax
xaxaxaxxaxaxax
+++=′
+++=′+++=′
K
M
K
K
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=
nnnn
n
n
n aaa
aaaaaa
tx
txtx
t
L
MOMM
L
L
M
21
22221
11211
2
1
,
)(
)()(
)( Ax
Conjugate Eigenvalues and EigenvectorsWe know that x = ξert is a solution of x' = Ax, provided r is an eigenvalue and ξ is an eigenvector of A.The eigenvalues r1,…, rn are the roots of det(A-rI) = 0, and the corresponding eigenvectors satisfy (A-rI)ξ = 0. If A is real, then the coefficients in the polynomial equation det(A-rI) = 0 are real, and hence any complex eigenvalues must occur in conjugate pairs. Thus if r1 = λ + iμ is an eigenvalue, then so is r2 = λ - iμ.The corresponding eigenvectors ξ(1), ξ(2) are conjugates also.To see this, recall A and I have real entries, and hence( ) ( ) ( ) 0ξIA0ξIA0ξIA =−⇒=−⇒=− )2(
2)1(
1)1(
1 rrr
Conjugate SolutionsIt follows from the previous slide that the solutions
corresponding to these eigenvalues and eigenvectors are conjugates conjugates as well, since
)1()1()2()2( 22 xξξx === trtr ee
trtr ee 21 )2()2()1()1( , ξxξx ==
Real-Valued SolutionsThus for complex conjugate eigenvalues r1 and r2 , the corresponding solutions x(1) and x(2) are conjugates also.To obtain real-valued solutions, use real and imaginary parts of either x(1) or x(2). To see this, let ξ(1) = a + ib. Then
where
are real valued solutions of x' = Ax, and can be shown to be linearly independent.
( ) ( ) ( )( ) ( )
)()(cossinsincos
sincos)1()1(
titttiette
titeiett
tti
vubaba
baξx
+=++−=
++== +
μμμμ
μμλλ
λμλ
( ) ( ),cossin)(,sincos)( ttetttet tt μμμμ λλ bavbau +=−=
General SolutionTo summarize, suppose r1 = λ + iμ, r2 = λ - iμ, and that r3,…, rn are all real and distinct eigenvalues of A. Let the corresponding eigenvectors be
Then the general solution of x' = Ax is
where
trnn
tr necectctc )()3(321
3)()( ξξvux ++++= K
)()4()3()2()1( ,,,,, nii ξξξbaξbaξ K−=+=
( ) ( )ttetttet tt μμμμ λλ cossin)(,sincos)( bavbau +=−=
Example 1: Direction Field (1 of 7)
Consider the homogeneous equation x' = Ax below.
A direction field for this system is given below.Substituting x = ξert in for x, and rewriting system as (A-rI)ξ = 0, we obtain
xx ⎟⎟⎠
⎞⎜⎜⎝
⎛−−
−=′
2/1112/1
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−−−
−−00
2/1112/1
1
1
ξξ
rr
Example 1: Complex Eigenvalues (2 of 7)
We determine r by solving det(A-rI) = 0. Now
Thus
Therefore the eigenvalues are r1 = -1/2 + i and r2 = -1/2 - i.
( )4512/1
2/1112/1 22 ++=++=−−−
−−rrr
rr
iir ±−=±−
=−±−
=21
221
2)4/5(411 2
Example 1: First Eigenvector (3 of 7)
Eigenvector for r1 = -1/2 + i: Solve
by row reducing the augmented matrix:
Thus
( )
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−−
⇔⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−−
−⇔
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−−−
−−⇔=−
00
11
00
11
00
2/1112/1
2
1
2
1
1
1
ξξ
ξξ
ξξ
ii
ii
rr
r 0ξIA
⎟⎟⎠
⎞⎜⎜⎝
⎛=→⎟⎟
⎠
⎞⎜⎜⎝
⎛−=→⎟⎟
⎠
⎞⎜⎜⎝
⎛→⎟⎟
⎠
⎞⎜⎜⎝
⎛−− i
iiii 1
choose00001
0101 )1(
2
2)1( ξξξξ
⎟⎟⎠
⎞⎜⎜⎝
⎛+⎟⎟⎠
⎞⎜⎜⎝
⎛=
10
01)1( iξ
Example 1: Second Eigenvector (4 of 7)
Eigenvector for r1 = -1/2 - i: Solve
by row reducing the augmented matrix:
Thus
( )
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−
−⇔⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−
⇔
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−−−
−−⇔=−
00
11
00
11
00
2/1112/1
2
1
2
1
1
1
ξξ
ξξ
ξξ
ii
ii
rr
r 0ξIA
⎟⎟⎠
⎞⎜⎜⎝
⎛−
=→⎟⎟⎠
⎞⎜⎜⎝
⎛=→⎟⎟
⎠
⎞⎜⎜⎝
⎛ −→⎟⎟
⎠
⎞⎜⎜⎝
⎛−
−i
iiii 1
choose00001
0101 )2(
2
2)2( ξξξξ
⎟⎟⎠
⎞⎜⎜⎝
⎛−
+⎟⎟⎠
⎞⎜⎜⎝
⎛=
10
01)2( iξ
Example 1: General Solution (5 of 7)
The corresponding solutions x = ξert of x' = Ax are
The Wronskian of these two solutions is
Thus u(t) and v(t) are real-valued fundamental solutions of x' = Ax, with general solution x = c1u + c2v.
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎥
⎦
⎤⎢⎣
⎡⎟⎟⎠
⎞⎜⎜⎝
⎛+⎟⎟
⎠
⎞⎜⎜⎝
⎛=
⎟⎟⎠
⎞⎜⎜⎝
⎛−
=⎥⎦
⎤⎢⎣
⎡⎟⎟⎠
⎞⎜⎜⎝
⎛−⎟⎟
⎠
⎞⎜⎜⎝
⎛=
−−
−−
tt
ettet
tt
ettet
tt
tt
cossin
cos10
sin01
)(
sincos
sin10
cos01
)(
2/2/
2/2/
v
u
[ ] 0cossinsincos
)(, 2/2/
2/2/)2()1( ≠=
−= −
−−
−−t
tt
tt
etetetete
tW xx
Example 1: Phase Plane (6 of 7)
Given below is the phase plane plot for solutions x, with
Each solution trajectory approaches origin along a spiral path as t →∞, since coordinates are products of decaying exponential and sine or cosine factors. The graph of u passes through (1,0), since u(0) = (1,0). Similarly, the graph of v passes through (0,1). The origin is a spiral point, and is asymptotically stable.
⎟⎟⎠
⎞⎜⎜⎝
⎛+⎟⎟⎠
⎞⎜⎜⎝
⎛
−=⎟⎟
⎠
⎞⎜⎜⎝
⎛=
−
−
−
−
tete
ctete
cxx
t
t
t
t
cossin
sincos
2/
2/
22/
2/
12
1x
Example 1: Time Plots (7 of 7)
The general solution is x = c1u + c2v:
As an alternative to phase plane plots, we can graph x1 or x2as a function of t. A few plots of x1 are given below, each one a decaying oscillation as t →∞.
⎟⎟⎠
⎞⎜⎜⎝
⎛
+−+
=⎟⎟⎠
⎞⎜⎜⎝
⎛=
−−
−−
tectectectec
txtx
tt
tt
cossinsincos
)()(
2/2
2/1
2/2
2/1
2
1x
Spiral Points, Centers, Eigenvalues, and TrajectoriesIn previous example, general solution was
The origin was a spiral point, and was asymptotically stable. If real part of complex eigenvalues is positive, then trajectories spiral away, unbounded, from origin, and hence origin would be an unstable spiral point. If real part of complex eigenvalues is zero, then trajectories circle origin, neither approaching nor departing. Then origin is called a center and is stable, but not asymptotically stable. Trajectories periodic in time. The direction of trajectory motion depends on entries in A.
⎟⎟⎠
⎞⎜⎜⎝
⎛+⎟⎟⎠
⎞⎜⎜⎝
⎛
−=⎟⎟
⎠
⎞⎜⎜⎝
⎛=
−
−
−
−
tete
ctete
cxx
t
t
t
t
cossin
sincos
2/
2/
22/
2/
12
1x
Example 2: Second Order System with Parameter (1 of 2)
The system x' = Ax below contains a parameter α.
Substituting x = ξert in for x and rewriting system as (A-rI)ξ = 0, we obtain
Next, solve for r in terms of α :
xx ⎟⎟⎠
⎞⎜⎜⎝
⎛−
=′022α
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−−
−00
22
1
1
ξξα
rr
21644)(
22 2
2 −±=⇒+−=+−=
−−− αααα
αrrrrr
rr
Example 2: Eigenvalue Analysis (2 of 2)
The eigenvalues are given by the quadratic formula above.For α < -4, both eigenvalues are real and negative, and hence origin is asymptotically stable node. For α > 4, both eigenvalues are real and positive, and hence the origin is an unstable node.For -4 < α < 0, eigenvalues are complex with a negative real part, and hence origin is asymptotically stable spiral point.For 0 < α < 4, eigenvalues are complex with a positive real part, and the origin is an unstable spiral point.For α = 0, eigenvalues are purely imaginary, origin is a center. Trajectories closed curves about origin & periodic.For α = ± 4, eigenvalues real & equal, origin is a node (Ch 7.8)
2162 −±
=ααr
Second Order Solution Behavior and Eigenvalues: Three Main CasesFor second order systems, the three main cases are:
Eigenvalues are real and have opposite signs; x = 0 is a saddle point.Eigenvalues are real, distinct and have same sign; x = 0 is a node.Eigenvalues are complex with nonzero real part; x = 0 a spiral point.
Other possibilities exist and occur as transitions between two of the cases listed above:
A zero eigenvalue occurs during transition between saddle point and node. Real and equal eigenvalues occur during transition between nodes and spiral points. Purely imaginary eigenvalues occur during a transition between asymptotically stable and unstable spiral points.
aacbbr
242 −±−
=
Ch 7.7: Fundamental MatricesSuppose that x(1)(t),…, x(n)(t) form a fundamental set of solutions for x' = P(t)x on α < t < β. The matrix
whose columns are x(1)(t),…, x(n)(t), is a fundamental matrix for the system x' = P(t)x. This matrix is nonsingular since its columns are linearly independent, and hence detΨ ≠ 0. Note also that since x(1)(t),…, x(n)(t) are solutions of x' = P(t)x, Ψ satisfies the matrix differential equation Ψ' = P(t)Ψ.
,)()(
)()()(
)()1(
)(1
)1(1
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
=txtx
txtxt
nnn
n
L
MOM
L
Ψ
Example 1:Consider the homogeneous equation x' = Ax below.
In Chapter 7.5, we found the following fundamental solutions for this system:
Thus a fundamental matrix for this system is
xx ⎟⎟⎠
⎞⎜⎜⎝
⎛=′
1411
tt etet −⎟⎟⎠
⎞⎜⎜⎝
⎛−
=⎟⎟⎠
⎞⎜⎜⎝
⎛=
21
)(,21
)( )2(3)1( xx
⎟⎟⎠
⎞⎜⎜⎝
⎛
−=
−
−
tt
tt
eeee
t22
)( 3
3
Ψ
Fundamental Matrices and General SolutionThe general solution of x' = P(t)x
can be expressed x = Ψ(t)c, where c is a constant vector with components c1,…, cn:
)()1(1 )( n
nctc xxx ++= L
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
==
nn
nn
n
c
c
txtx
txtxt M
L
MOM
L 1
)()1(
)(1
)1(1
)()(
)()()( cΨx
Fundamental Matrix & Initial Value ProblemConsider an initial value problem
x' = P(t)x, x(t0) = x0
where α < t0 < β and x0 is a given initial vector.Now the solution has the form x = Ψ(t)c, hence we choose cso as to satisfy x(t0) = x0. Recalling Ψ(t0) is nonsingular, it follows that
Thus our solution x = Ψ(t)c can be expressed as
00
100 )()( xΨcxcΨ tt −=⇒=
00
1 )()( xΨΨx tt −=
Recall: Theorem 7.4.4Let
Let x(1),…, x(n) be solutions of x' = P(t)x on I: α < t < β that satisfy the initial conditions
Then x(1),…, x(n) are fundamental solutions of x' = P(t)x.
⎟⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜⎜
⎝
⎛
=
⎟⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜⎜
⎝
⎛
=
⎟⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜⎜
⎝
⎛
=
10
00
,,
0
010
,
0
001
)()2()1( MK
MM
neee
βα <<== 0)(
0)()1(
0)1( ,)(,,)( ttt nn exex K
Fundamental Matrix & Theorem 7.4.4Suppose x(1)(t),…, x(n)(t) form the fundamental solutions given by Thm 7.4.4. Denote the corresponding fundamental matrix by Φ(t). Then columns of Φ(t) are x(1)(t),…, x(n)(t), and hence
Thus Φ-1(t0) = I, and the hence general solution to the corresponding initial value problem is
It follows that for any fundamental matrix Ψ(t),
I=
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=Φ
100
010001
)( 0
L
MOMM
L
L
t
000
1 )()()( xΦxΦΦx ttt == −
)()()()()()( 0100
01 tttttt −− =⇒== ΨΨΦxΦxΨΨx
The Fundamental Matrix Φand Varying Initial ConditionsThus when using the fundamental matrix Φ(t), the general solution to an IVP is
This representation is useful if same system is to be solved for many different initial conditions, such as a physical system that can be started from many different initial states. Also, once Φ(t) has been determined, the solution to each set of initial conditions can be found by matrix multiplication, as indicated by the equation above.Thus Φ(t) represents a linear transformation of the initial conditions x0 into the solution x(t) at time t.
000
1 )()()( xΦxΦΦx ttt == −
Example 2: Find Φ(t) for 2 x 2 System (1 of 5)
Find Φ(t) such that Φ(0) = I for the system below.
Solution: First, we must obtain x(1)(t) and x(2)(t) such that
We know from previous results that the general solution is
Every solution can be expressed in terms of the general solution, and we use this fact to find x(1)(t) and x(2)(t).
xx ⎟⎟⎠
⎞⎜⎜⎝
⎛=′
1411
tt ecec −⎟⎟⎠
⎞⎜⎜⎝
⎛−
+⎟⎟⎠
⎞⎜⎜⎝
⎛=
21
21
23
1x
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=
10
)0(,01
)0( )2()1( xx
Example 2: Use General Solution (2 of 5)
Thus, to find x(1)(t), express it terms of the general solution
and then find the coefficients c1 and c2. To do so, use the initial conditions to obtain
or equivalently,
tt ecect −⎟⎟⎠
⎞⎜⎜⎝
⎛−
+⎟⎟⎠
⎞⎜⎜⎝
⎛=
21
21
)( 23
1)1(x
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛−
+⎟⎟⎠
⎞⎜⎜⎝
⎛=
01
21
21
)0( 21)1( ccx
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛− 0
12211
2
1
cc
Example 2: Solve for x(1)(t) (3 of 5)
To find x(1)(t), we therefore solve
by row reducing the augmented matrix:
Thus2/12/1
2/1102/101
2/110111
240111
022111
2
1
==
→
⎟⎟⎠
⎞⎜⎜⎝
⎛→⎟⎟
⎠
⎞⎜⎜⎝
⎛→⎟⎟
⎠
⎞⎜⎜⎝
⎛−−
→⎟⎟⎠
⎞⎜⎜⎝
⎛−
cc
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
−
+=⎟⎟⎠
⎞⎜⎜⎝
⎛−
+⎟⎟⎠
⎞⎜⎜⎝
⎛=
−
−−
tt
tttt
ee
eeeet3
33)1( 2
121
21
21
21
21)(x
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛− 0
12211
2
1
cc
Example 2: Solve for x(2)(t) (4 of 5)
To find x(2)(t), we similarly solve
by row reducing the augmented matrix:
Thus4/14/1
4/1104/101
4/110011
140011
122011
2
1
−==
→
⎟⎟⎠
⎞⎜⎜⎝
⎛−
→⎟⎟⎠
⎞⎜⎜⎝
⎛−
→⎟⎟⎠
⎞⎜⎜⎝
⎛−
→⎟⎟⎠
⎞⎜⎜⎝
⎛−
cc
⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜
⎝
⎛
+
−=⎟⎟
⎠
⎞⎜⎜⎝
⎛−
−⎟⎟⎠
⎞⎜⎜⎝
⎛=
−
−
−
tt
tt
tt
ee
eeeet
21
21
41
41
21
41
21
41)(
3
3
3)2(x
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛− 1
02211
2
1
cc
Example 2: Obtain Φ(t) (5 of 5)
The columns of Φ(t) are given by x(1)(t) and x(2)(t), and thus from the previous slide we have
Note Φ(t) is more complicated than Ψ(t) found in Ex 1. However, now that we have Φ(t), it is much easier to determine the solution to any set of initial conditions.
⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜
⎝
⎛
+−
−+=
−−
−−
tttt
tttt
eeee
eeeet
21
21
41
41
21
21
)(33
33
Φ
⎟⎟⎠
⎞⎜⎜⎝
⎛
−=
−
−
tt
tt
eeee
t22
)(3
3
Ψ
Matrix Exponential FunctionsConsider the following two cases:
The solution to x' = ax, x(0) = x0, is x = x0eat, where e0 = 1.The solution to x' = Ax, x(0) = x0, is x = Φ(t)x0, where Φ(0) = I.
Comparing the form and solution for both of these cases, we might expect Φ(t) to have an exponential character. Indeed, it can be shown that Φ(t) = eAt, where
is a well defined matrix function that has all the usual properties of an exponential function. See text for details. Thus the solution to x' = Ax, x(0) = x0, is x = eAtx0.
∑∑∞
=
∞
=
+==10 !! n
nn
n
nnt
nt
nte AIAA
Example 3: Matrix Exponential FunctionConsider the diagonal matrix A below.
Then
In general,
Thus
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛== ∑∑
∞
=
∞
=t
t
n
nn
n
nnt
ee
tn
nn
te2
00 00
!/200!/1
!AA
⎟⎟⎠
⎞⎜⎜⎝
⎛=
2001
A
K,2001
2001
2001
,2001
2001
2001
323
22
⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛=⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛= AA
⎟⎟⎠
⎞⎜⎜⎝
⎛= n
n
2001
A
Coupled Systems of EquationsRecall that our constant coefficient homogeneous system
written as x' = Ax with
is a system of coupled equations that must be solved simultaneously to find all the unknown variables.
,2211
12121111
nnnnnn
nn
xaxaxax
xaxaxax
+++=′
+++=′
K
M
K
,,)(
)()(
1
1111
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
nnn
n
n aa
aa
tx
txt
L
MOM
L
M Ax
Uncoupled Systems & Diagonal MatricesIn contrast, if each equation had only one variable, solved for independently of other equations, then task would be easier.In this case our system would have the form
or x' = Dx, where D is a diagonal matrix:
,00
0000
21
21112
21111
nnnn
n
n
xdxxx
xxdxxxxxdx
+++=′
+++=′+++=′
K
M
K
K
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
nnn d
dd
tx
txt
L
MOMM
L
L
M
00
0000
,)(
)()( 22
111
Dx
Uncoupling: Transform Matrix TIn order to explore transforming our given system x' = Ax of coupled equations into an uncoupled system x' = Dx, where Dis a diagonal matrix, we will use the eigenvectors of A.Suppose A is n x n with n linearly independent eigenvectors ξ(1),…, ξ(n), and corresponding eigenvalues λ1,…, λn.Define n x n matrices T and D using the eigenvalues & eigenvectors of A:
Note that T is nonsingular, and hence T-1 exists.
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
=
n
nnn
n
λ
λλ
ξξ
ξξ
L
MOMM
L
L
L
MOM
L
00
0000
, 2
1
)()1(
)(1
)1(1
DT
Uncoupling: T-1AT = DRecall here the definitions of A, T and D:
Then the columns of AT are Aξ(1),…, Aξ(n), and hence
It follows that T-1AT = D.
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
=⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
n
nnn
n
nnn
n
aa
aa
λ
λλ
ξξ
ξξ
L
MOMM
L
L
L
MOM
L
L
MOM
L
00
0000
,, 2
1
)()1(
)(1
)1(1
1
111
DTA
TDAT =⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
=)()1(
1
)(1
)1(11
nnnn
nn
ξλξλ
ξλξλ
L
MOM
L
Similarity TransformationsThus, if the eigenvalues and eigenvectors of A are known, then A can be transformed into a diagonal matrix D, with
T-1AT = DThis process is known as a similarity transformation, and Ais said to be similar to D. Alternatively, we could say that Ais diagonalizable.
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
=⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
n
nnn
n
nnn
n
aa
aa
λ
λλ
ξξ
ξξ
L
MOMM
L
L
L
MOM
L
L
MOM
L
00
0000
,, 2
1
)()1(
)(1
)1(1
1
111
DTA
Similarity Transformations: Hermitian CaseRecall: Our similarity transformation of A has the form
T-1AT = Dwhere D is diagonal and columns of T are eigenvectors of A. If A is Hermitian, then A has n linearly independent orthogonal eigenvectors ξ(1),…, ξ(n), normalized so that (ξ(i), ξ(i)) =1 for i = 1,…, n, and (ξ(i), ξ(k)) = 0 for i ≠ k. With this selection of eigenvectors, it can be shown that T-1 = T*. In this case we can write our similarity transform as
T*AT = D
Nondiagonalizable AFinally, if A is n x n with fewer than n linearly independent eigenvectors, then there is no matrix T such that T-1AT = D. In this case, A is not similar to a diagonal matrix and A is not diagonlizable.
⎟⎟⎟⎟⎟
⎠
⎞
⎜⎜⎜⎜⎜
⎝
⎛
=⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛
=⎟⎟⎟
⎠
⎞
⎜⎜⎜
⎝
⎛=
n
nnn
n
nnn
n
aa
aa
λ
λλ
ξξ
ξξ
L
MOMM
L
L
L
MOM
L
L
MOM
L
00
0000
,, 2
1
)()1(
)(1
)1(1
1
111
DTA
Example 4: Find Transformation Matrix T (1 of 2)
For the matrix A below, find the similarity transformation matrix T and show that A can be diagonalized.
We already know that the eigenvalues are λ1 = 3, λ2 = -1 with corresponding eigenvectors
Thus
⎟⎟⎠
⎞⎜⎜⎝
⎛=
1411
A
⎟⎟⎠
⎞⎜⎜⎝
⎛−
=⎟⎟⎠
⎞⎜⎜⎝
⎛=
21
)(,21
)( )2()1( tt ξξ
⎟⎟⎠
⎞⎜⎜⎝
⎛−
=⎟⎟⎠
⎞⎜⎜⎝
⎛−
=1003
,2211
DT
Example 4: Similarity Transformation (2 of 2)
To find T-1, augment the identity to T and row reduce:
Then
Thus A is similar to D, and hence A is diagonalizable.
⎟⎟⎠
⎞⎜⎜⎝
⎛−
=→⎟⎟⎠
⎞⎜⎜⎝
⎛−
→
⎟⎟⎠
⎞⎜⎜⎝
⎛−
→⎟⎟⎠
⎞⎜⎜⎝
⎛−−
→⎟⎟⎠
⎞⎜⎜⎝
⎛−
−
4/12/14/12/1
4/12/1104/12/101
4/12/1100111
12400111
10220111
1T
D
ATT
=⎟⎟⎠
⎞⎜⎜⎝
⎛−
=⎟⎟⎠
⎞⎜⎜⎝
⎛ −⎟⎟⎠
⎞⎜⎜⎝
⎛−
=
⎥⎦
⎤⎢⎣
⎡⎟⎟⎠
⎞⎜⎜⎝
⎛−⎟⎟
⎠
⎞⎜⎜⎝
⎛⎟⎟⎠
⎞⎜⎜⎝
⎛−
=−
1003
2613
4/12/14/12/1
2211
1411
4/12/14/12/11
Fundamental Matrices for Similar Systems (1 of 3)
Recall our original system of differential equations x' = Ax.If A is n x n with n linearly independent eigenvectors, then Ais diagonalizable. The eigenvectors form the columns of the nonsingular transform matrix T, and the eigenvalues are the corresponding nonzero entries in the diagonal matrix D.Suppose x satisfies x' = Ax, let y be the n x 1 vector such that x = Ty. That is, let y be defined by y = T-1x.Since x' = Ax and T is a constant matrix, we have Ty' = ATy, and hence y' = T-1ATy = Dy.Therefore y satisfies y' = Dy, the system similar to x' = Ax.Both of these systems have fundamental matrices, which we examine next.
Fundamental Matrix for Diagonal System (2 of 3)

A fundamental matrix for y' = Dy is given by Q(t) = e^{Dt}. Recalling the definition of e^{Dt}, we have

$$
\mathbf{Q}(t) = e^{\mathbf{D}t} = \sum_{n=0}^{\infty} \frac{\mathbf{D}^n t^n}{n!}
= \begin{pmatrix} \displaystyle\sum_{n=0}^{\infty} \frac{\lambda_1^n t^n}{n!} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \displaystyle\sum_{n=0}^{\infty} \frac{\lambda_n^n t^n}{n!} \end{pmatrix}
= \begin{pmatrix} e^{\lambda_1 t} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & e^{\lambda_n t} \end{pmatrix}.
$$
Fundamental Matrix for Original System (3 of 3)

To obtain a fundamental matrix Ψ(t) for x' = Ax, recall that the columns of Ψ(t) consist of fundamental solutions x satisfying x' = Ax. We also know x = Ty, and hence it follows that

$$
\boldsymbol{\Psi} = \mathbf{TQ}
= \begin{pmatrix} \xi_1^{(1)} & \cdots & \xi_1^{(n)} \\ \vdots & \ddots & \vdots \\ \xi_n^{(1)} & \cdots & \xi_n^{(n)} \end{pmatrix}
\begin{pmatrix} e^{\lambda_1 t} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & e^{\lambda_n t} \end{pmatrix}
= \begin{pmatrix} \xi_1^{(1)}e^{\lambda_1 t} & \cdots & \xi_1^{(n)}e^{\lambda_n t} \\ \vdots & \ddots & \vdots \\ \xi_n^{(1)}e^{\lambda_1 t} & \cdots & \xi_n^{(n)}e^{\lambda_n t} \end{pmatrix}.
$$

The columns of Ψ(t) give the expected fundamental solutions of x' = Ax.
Example 5: Fundamental Matrices for Similar Systems

We now use the analysis and results of the last few slides. Applying the transformation x = Ty to the system x' = Ax below, this system becomes y' = T⁻¹ATy = Dy:

$$
\mathbf{x}' = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\mathbf{x} \quad\Rightarrow\quad \mathbf{y}' = \begin{pmatrix} 3 & 0 \\ 0 & -1 \end{pmatrix}\mathbf{y}
$$

A fundamental matrix for y' = Dy is given by Q(t) = e^{Dt}:

$$
\mathbf{Q}(t) = \begin{pmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{pmatrix}
$$

Thus a fundamental matrix Ψ(t) for x' = Ax is

$$
\boldsymbol{\Psi}(t) = \mathbf{TQ} = \begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix}\begin{pmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{pmatrix} = \begin{pmatrix} e^{3t} & e^{-t} \\ 2e^{3t} & -2e^{-t} \end{pmatrix}.
$$
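A minimal numerical sketch, assuming NumPy: build Ψ(t) = T e^{Dt} from Example 5 and confirm via a central difference that it satisfies the matrix equation Ψ'(t) = AΨ(t), i.e. that each column solves x' = Ax.

```python
import numpy as np

# Example 5 revisited: Psi(t) = T @ diag(e^{3t}, e^{-t}).
A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
T = np.array([[1.0, 1.0],
              [2.0, -2.0]])

def Psi(t):
    return T @ np.diag([np.exp(3 * t), np.exp(-t)])

t, h = 0.7, 1e-6
dPsi = (Psi(t + h) - Psi(t - h)) / (2 * h)   # central-difference derivative
print(np.max(np.abs(dPsi - A @ Psi(t))))     # should be near 0
```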
Ch 7.8: Repeated Eigenvalues

We consider again a homogeneous system of n first order linear equations with constant real coefficients, x' = Ax. If the eigenvalues r1,…, rn of A are real and different, then there are n linearly independent eigenvectors ξ^(1),…, ξ^(n) and n linearly independent solutions of the form

$$
\mathbf{x}^{(1)}(t) = \boldsymbol{\xi}^{(1)}e^{r_1 t}, \;\ldots,\; \mathbf{x}^{(n)}(t) = \boldsymbol{\xi}^{(n)}e^{r_n t}.
$$

If some of the eigenvalues r1,…, rn are repeated, then there may not be n corresponding linearly independent solutions of the above form. In this case, we will seek additional solutions that are products of polynomials and exponential functions.
Example 1: Direction Field (1 of 12)

Consider the homogeneous system x' = Ax below. (A direction field for this system accompanies the original slide.)

$$
\mathbf{x}' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}\mathbf{x}
$$

Substituting x = ξe^{rt} in for x, and rewriting the system as (A - rI)ξ = 0, we obtain

$$
\begin{pmatrix} 1-r & -1 \\ 1 & 3-r \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
$$
Example 1: Eigenvalues (2 of 12)

Solutions have the form x = ξe^{rt}, where r and ξ satisfy

$$
\begin{pmatrix} 1-r & -1 \\ 1 & 3-r \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
$$

To determine r, solve det(A - rI) = 0:

$$
\begin{vmatrix} 1-r & -1 \\ 1 & 3-r \end{vmatrix} = (1-r)(3-r) + 1 = r^2 - 4r + 4 = (r-2)^2
$$

Thus r1 = 2 and r2 = 2.
Example 1: Eigenvectors (3 of 12)

To find the eigenvectors, we solve

$$
(\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0}
\;\Leftrightarrow\;
\begin{pmatrix} 1-2 & -1 \\ 1 & 3-2 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix}
= \begin{pmatrix} -1 & -1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \end{pmatrix}
$$

by row reducing the augmented matrix:

$$
\left(\begin{array}{rr|r} -1 & -1 & 0 \\ 1 & 1 & 0 \end{array}\right) \rightarrow
\left(\begin{array}{rr|r} 1 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right)
\;\Rightarrow\; \xi_1 + \xi_2 = 0
\;\Rightarrow\; \boldsymbol{\xi} = \xi_2\begin{pmatrix} -1 \\ 1 \end{pmatrix}
\;\Rightarrow\; \text{choose } \boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
$$

Thus there is only one linearly independent eigenvector for the repeated eigenvalue r = 2.
Example 1: First Solution; and Second Solution, First Attempt (4 of 12)

The corresponding solution x = ξe^{rt} of x' = Ax is

$$
\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t}.
$$

Since there is no second solution of the form x = ξe^{rt}, we need to try a different form. Based on methods for second order linear equations in Ch 3.5, we first try x = ξte^{2t}. Substituting x = ξte^{2t} into x' = Ax, we obtain

$$
\boldsymbol{\xi}e^{2t} + 2\boldsymbol{\xi}te^{2t} = \mathbf{A}\boldsymbol{\xi}te^{2t},
$$

or

$$
2\boldsymbol{\xi}te^{2t} + \boldsymbol{\xi}e^{2t} - \mathbf{A}\boldsymbol{\xi}te^{2t} = \mathbf{0}.
$$
Example 1: Second Solution, Second Attempt (5 of 12)

From the previous slide, we have

$$
2\boldsymbol{\xi}te^{2t} + \boldsymbol{\xi}e^{2t} - \mathbf{A}\boldsymbol{\xi}te^{2t} = \mathbf{0}.
$$

In order for this equation to be satisfied for all t, it is necessary for the coefficients of te^{2t} and e^{2t} to both be zero. From the e^{2t} term, we see that ξ = 0, and hence there is no nonzero solution of the form x = ξte^{2t}. Since both te^{2t} and e^{2t} appear in the above equation, we next consider a solution of the form

$$
\mathbf{x} = \boldsymbol{\xi}te^{2t} + \boldsymbol{\eta}e^{2t}.
$$
Example 1: Second Solution and its Defining Matrix Equations (6 of 12)

Substituting x = ξte^{2t} + ηe^{2t} into x' = Ax, we obtain

$$
\boldsymbol{\xi}e^{2t} + 2\boldsymbol{\xi}te^{2t} + 2\boldsymbol{\eta}e^{2t} = \mathbf{A}\left(\boldsymbol{\xi}te^{2t} + \boldsymbol{\eta}e^{2t}\right),
$$

or

$$
2\boldsymbol{\xi}te^{2t} + (\boldsymbol{\xi} + 2\boldsymbol{\eta})e^{2t} = \mathbf{A}\boldsymbol{\xi}te^{2t} + \mathbf{A}\boldsymbol{\eta}e^{2t}.
$$

Equating coefficients yields Aξ = 2ξ and Aη = ξ + 2η, or

$$
(\mathbf{A} - 2\mathbf{I})\boldsymbol{\xi} = \mathbf{0} \quad\text{and}\quad (\mathbf{A} - 2\mathbf{I})\boldsymbol{\eta} = \boldsymbol{\xi}.
$$

The first equation is satisfied if ξ is an eigenvector of A corresponding to the eigenvalue r = 2. Thus

$$
\boldsymbol{\xi} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
$$
Example 1: Solving for Second Solution (7 of 12)

Recall that

$$
\mathbf{A} = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}, \qquad \boldsymbol{\xi} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
$$

Thus to solve (A - 2I)η = ξ for η, we row reduce the corresponding augmented matrix:

$$
\left(\begin{array}{rr|r} -1 & -1 & 1 \\ 1 & 1 & -1 \end{array}\right) \rightarrow
\left(\begin{array}{rr|r} 1 & 1 & -1 \\ 0 & 0 & 0 \end{array}\right)
\;\Rightarrow\; \eta_1 + \eta_2 = -1
\;\Rightarrow\; \eta_2 = -1 - \eta_1
\;\Rightarrow\; \boldsymbol{\eta} = \begin{pmatrix} 0 \\ -1 \end{pmatrix} + k\begin{pmatrix} 1 \\ -1 \end{pmatrix}.
$$
Example 1: Second Solution (8 of 12)

Our second solution x = ξte^{2t} + ηe^{2t} is now

$$
\mathbf{x} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}te^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix}e^{2t} + k\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t}.
$$

Recalling that the first solution was

$$
\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t},
$$

we see that our second solution is simply

$$
\mathbf{x}^{(2)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}te^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix}e^{2t},
$$

since the third term of x is a multiple of x^(1).
Example 1: General Solution (9 of 12)

The two solutions of x' = Ax are

$$
\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t}, \qquad
\mathbf{x}^{(2)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}te^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix}e^{2t}.
$$

The Wronskian of these two solutions is

$$
W[\mathbf{x}^{(1)}, \mathbf{x}^{(2)}](t) = \begin{vmatrix} e^{2t} & te^{2t} \\ -e^{2t} & -te^{2t} - e^{2t} \end{vmatrix} = -e^{4t} \neq 0.
$$

Thus x^(1) and x^(2) are fundamental solutions, and the general solution of x' = Ax is

$$
\mathbf{x}(t) = c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t)
= c_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t}
+ c_2\left[\begin{pmatrix} 1 \\ -1 \end{pmatrix}te^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix}e^{2t}\right].
$$
Example 1: Phase Plane (10 of 12)

The general solution is

$$
\mathbf{x}(t) = c_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t}
+ c_2\left[\begin{pmatrix} 1 \\ -1 \end{pmatrix}te^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix}e^{2t}\right].
$$

Thus x is unbounded as t → ∞, and x → 0 as t → -∞. Further, it can be shown that as t → -∞, x → 0 asymptotic to the line x2 = -x1 determined by the first eigenvector. Similarly, as t → ∞, x is asymptotic to a line parallel to x2 = -x1.
Example 1: Phase Plane (11 of 12)
The origin is an improper node, and is unstable. See graph. The pattern of trajectories is typical for two repeated eigenvalues with only one eigenvector. If the eigenvalues are negative, then the trajectories are similar but are traversed in the inward direction. In this case the origin is an asymptotically stable improper node.
Example 1: Time Plots for General Solution (12 of 12)

Time plots for x1(t) accompany the original slide; here we note that the general solution x can be written componentwise as follows:

$$
\mathbf{x}(t) = c_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t}
+ c_2\left[\begin{pmatrix} 1 \\ -1 \end{pmatrix}te^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix}e^{2t}\right]
\;\Leftrightarrow\;
\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}
= \begin{pmatrix} c_1e^{2t} + c_2te^{2t} \\ -(c_1 + c_2)e^{2t} - c_2te^{2t} \end{pmatrix}.
$$
General Case for Double Eigenvalues

Suppose the system x' = Ax has a double eigenvalue r = ρ and a single corresponding eigenvector ξ. The first solution is x^(1) = ξe^{ρt}, where ξ satisfies (A - ρI)ξ = 0. As in Example 1, the second solution has the form

$$
\mathbf{x}^{(2)} = \boldsymbol{\xi}te^{\rho t} + \boldsymbol{\eta}e^{\rho t},
$$

where ξ is as above and η satisfies (A - ρI)η = ξ. Since ρ is an eigenvalue, det(A - ρI) = 0, and (A - ρI)η = b does not have a solution for all b. However, it can be shown that (A - ρI)η = ξ always has a solution. The vector η is called a generalized eigenvector.
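A generalized eigenvector can be computed numerically even though A - ρI is singular. The sketch below (assuming NumPy, and using Example 1's matrix) obtains one particular solution of (A - ρI)η = ξ by least squares; since the system is consistent, the residual is essentially zero, and any multiple of ξ may be added to η.

```python
import numpy as np

# Solve (A - rho*I) @ eta = xi for a generalized eigenvector eta.
A = np.array([[1.0, -1.0],
              [1.0, 3.0]])
rho = 2.0
xi = np.array([1.0, -1.0])

# lstsq handles the singular matrix and returns one particular solution.
eta, *_ = np.linalg.lstsq(A - rho * np.eye(2), xi, rcond=None)
print(eta)                                    # one valid eta
print((A - rho * np.eye(2)) @ eta - xi)       # residual, near 0
```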
Example 2: Fundamental Matrix Ψ (1 of 2)

Recall that a fundamental matrix Ψ(t) for x' = Ax has linearly independent solutions for its columns. In Example 1, our system x' = Ax was

$$
\mathbf{x}' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}\mathbf{x},
$$

and the two solutions we found were

$$
\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t}, \qquad
\mathbf{x}^{(2)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}te^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix}e^{2t}.
$$

Thus the corresponding fundamental matrix is

$$
\boldsymbol{\Psi}(t) = \begin{pmatrix} e^{2t} & te^{2t} \\ -e^{2t} & -te^{2t} - e^{2t} \end{pmatrix}
= e^{2t}\begin{pmatrix} 1 & t \\ -1 & -t-1 \end{pmatrix}.
$$
Example 2: Fundamental Matrix Φ (2 of 2)

The fundamental matrix Φ(t) that satisfies Φ(0) = I can be found using Φ(t) = Ψ(t)Ψ⁻¹(0), where

$$
\boldsymbol{\Psi}(0) = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}, \qquad
\boldsymbol{\Psi}^{-1}(0) = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix},
$$

where Ψ⁻¹(0) is found as follows:

$$
\left(\begin{array}{rr|rr} 1 & 0 & 1 & 0 \\ -1 & -1 & 0 & 1 \end{array}\right) \rightarrow
\left(\begin{array}{rr|rr} 1 & 0 & 1 & 0 \\ 0 & -1 & 1 & 1 \end{array}\right) \rightarrow
\left(\begin{array}{rr|rr} 1 & 0 & 1 & 0 \\ 0 & 1 & -1 & -1 \end{array}\right).
$$

Thus

$$
\boldsymbol{\Phi}(t) = \boldsymbol{\Psi}(t)\boldsymbol{\Psi}^{-1}(0)
= e^{2t}\begin{pmatrix} 1 & t \\ -1 & -t-1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}
= e^{2t}\begin{pmatrix} 1-t & -t \\ t & 1+t \end{pmatrix}.
$$
Jordan Forms

If A is n x n with n linearly independent eigenvectors, then A can be diagonalized using a similarity transformation T⁻¹AT = D. The transform matrix T consisted of eigenvectors of A, and the diagonal entries of D consisted of the eigenvalues of A. In the case of repeated eigenvalues and fewer than n linearly independent eigenvectors, A can be transformed into a nearly diagonal matrix J, called the Jordan form of A, with

T⁻¹AT = J.
Example 3: Transform Matrix (1 of 2)

In Example 1, our system x' = Ax was

$$
\mathbf{x}' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}\mathbf{x},
$$

with eigenvalues r1 = 2 and r2 = 2, eigenvector ξ, and generalized eigenvector η:

$$
\boldsymbol{\xi} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad
\boldsymbol{\eta} = \begin{pmatrix} 0 \\ -1 \end{pmatrix} + k\begin{pmatrix} 1 \\ -1 \end{pmatrix}.
$$

Choosing k = 0, the transform matrix T formed from ξ and η is

$$
\mathbf{T} = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}.
$$
Example 3: Jordan Form (2 of 2)

The Jordan form J of A is defined by T⁻¹AT = J. Now

$$
\mathbf{T} = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}, \qquad
\mathbf{T}^{-1} = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix},
$$

and hence

$$
\mathbf{J} = \mathbf{T}^{-1}\mathbf{A}\mathbf{T}
= \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}\begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}
= \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}.
$$

Note that the eigenvalues of A, r1 = 2 and r2 = 2, are on the main diagonal of J, and that there is a 1 directly above the second eigenvalue. This pattern is typical of Jordan forms.
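The Jordan form can also be computed symbolically. A sketch using SymPy's jordan_form, which returns P and J with A = PJP⁻¹; its P may differ from the hand-picked T by an admissible change of basis, but J itself should match:

```python
import sympy as sp

# Jordan form of A from Example 3, computed symbolically.
A = sp.Matrix([[1, -1],
               [1, 3]])
P, J = A.jordan_form()
sp.pprint(J)                              # expected: [[2, 1], [0, 2]]
sp.pprint(sp.simplify(P.inv() * A * P))   # equals J
```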
Ch 7.9: Nonhomogeneous Linear Systems

The general theory of a nonhomogeneous system of equations

$$
\begin{aligned}
x_1' &= p_{11}(t)x_1 + p_{12}(t)x_2 + \cdots + p_{1n}(t)x_n + g_1(t) \\
x_2' &= p_{21}(t)x_1 + p_{22}(t)x_2 + \cdots + p_{2n}(t)x_n + g_2(t) \\
&\;\;\vdots \\
x_n' &= p_{n1}(t)x_1 + p_{n2}(t)x_2 + \cdots + p_{nn}(t)x_n + g_n(t)
\end{aligned}
$$

parallels that of a single nth order linear equation. This system can be written as x' = P(t)x + g(t), where

$$
\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}, \quad
\mathbf{g}(t) = \begin{pmatrix} g_1(t) \\ g_2(t) \\ \vdots \\ g_n(t) \end{pmatrix}, \quad
\mathbf{P}(t) = \begin{pmatrix} p_{11}(t) & p_{12}(t) & \cdots & p_{1n}(t) \\ p_{21}(t) & p_{22}(t) & \cdots & p_{2n}(t) \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1}(t) & p_{n2}(t) & \cdots & p_{nn}(t) \end{pmatrix}.
$$
General Solution

The general solution of x' = P(t)x + g(t) on I: α < t < β has the form

$$
\mathbf{x} = c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t) + \cdots + c_n\mathbf{x}^{(n)}(t) + \mathbf{v}(t),
$$

where

$$
c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t) + \cdots + c_n\mathbf{x}^{(n)}(t)
$$

is the general solution of the homogeneous system x' = P(t)x, and v(t) is a particular solution of the nonhomogeneous system x' = P(t)x + g(t).
Diagonalization

Suppose x' = Ax + g(t), where A is an n x n diagonalizable constant matrix. Let T be the nonsingular transform matrix whose columns are the eigenvectors of A, and D the diagonal matrix whose diagonal entries are the corresponding eigenvalues of A. Suppose x satisfies x' = Ax + g(t), and let y be defined by x = Ty. Substituting x = Ty into x' = Ax + g(t), we obtain

Ty' = ATy + g(t), or y' = T⁻¹ATy + T⁻¹g(t), or y' = Dy + h(t), where h(t) = T⁻¹g(t).

Note that if we can solve the diagonal system y' = Dy + h(t) for y, then x = Ty is a solution to the original system.
Solving Diagonal System

Now y' = Dy + h(t) is a diagonal system of the form

$$
\begin{aligned}
y_1' &= r_1y_1 + h_1(t) \\
y_2' &= r_2y_2 + h_2(t) \\
&\;\;\vdots \\
y_n' &= r_ny_n + h_n(t)
\end{aligned}
\quad\Leftrightarrow\quad
\begin{pmatrix} y_1' \\ y_2' \\ \vdots \\ y_n' \end{pmatrix}
= \begin{pmatrix} r_1 & 0 & \cdots & 0 \\ 0 & r_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & r_n \end{pmatrix}
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}
+ \begin{pmatrix} h_1(t) \\ h_2(t) \\ \vdots \\ h_n(t) \end{pmatrix},
$$

where r1,…, rn are the eigenvalues of A. Thus y' = Dy + h(t) is an uncoupled system of n linear first order equations in the unknowns yk(t), which can be isolated

$$
y_k' = r_ky_k + h_k(t), \quad k = 1, \ldots, n,
$$

and solved separately, using methods of Section 2.1:

$$
y_k(t) = e^{r_kt}\int_{t_0}^{t} e^{-r_ks}h_k(s)\,ds + c_ke^{r_kt}, \quad k = 1, \ldots, n.
$$
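A numerical sketch of this scalar formula, assuming SciPy is available; it is applied to the first component of the diagonal system in Example 1 below (r = -3 and h(t) = √2 e^{-t} - 3t/√2, with c = 0 chosen arbitrarily), and a finite-difference check confirms that y' = ry + h(t):

```python
import numpy as np
from scipy.integrate import quad

# One uncoupled component: y' = r*y + h(t), solved by the
# integrating-factor formula above, with t0 = 0 and c = 0.
r, t0, c = -3.0, 0.0, 0.0
h = lambda s: np.sqrt(2) * np.exp(-s) - 3 * s / np.sqrt(2)

def y(t):
    integral, _ = quad(lambda s: np.exp(-r * s) * h(s), t0, t)
    return np.exp(r * t) * integral + c * np.exp(r * t)

# Spot-check the ODE with a central difference:
t, eps = 1.0, 1e-6
dy = (y(t + eps) - y(t - eps)) / (2 * eps)
print(dy - (r * y(t) + h(t)))     # should be near 0
```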
Solving Original System

The solution y to y' = Dy + h(t) has components

$$
y_k(t) = e^{r_kt}\int_{t_0}^{t} e^{-r_ks}h_k(s)\,ds + c_ke^{r_kt}, \quad k = 1, \ldots, n.
$$

For this solution vector y, the solution to the original system x' = Ax + g(t) is then x = Ty. Recall that T is the nonsingular transform matrix whose columns are the eigenvectors of A. Thus, when multiplied by T, the second term on the right side of yk produces the general solution of the homogeneous system, while the integral term of yk produces a particular solution of the nonhomogeneous system.
Example 1: General Solution of Homogeneous Case (1 of 5)

Consider the nonhomogeneous system x' = Ax + g below:

$$
\mathbf{x}' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix} = \mathbf{A}\mathbf{x} + \mathbf{g}(t).
$$

Note: A is a Hermitian matrix, since it is real and symmetric. The eigenvalues of A are r1 = -3 and r2 = -1, with corresponding eigenvectors

$$
\boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad \boldsymbol{\xi}^{(2)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
$$

The general solution of the homogeneous system is then

$$
\mathbf{x}(t) = c_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-3t} + c_2\begin{pmatrix} 1 \\ 1 \end{pmatrix}e^{-t}.
$$
Example 1: Transformation Matrix (2 of 5)

Consider next the transformation matrix T of eigenvectors. Using a Section 7.7 comment, and A Hermitian, we have T⁻¹ = T* = T^T, provided we normalize ξ^(1) and ξ^(2) so that (ξ^(1), ξ^(1)) = 1 and (ξ^(2), ξ^(2)) = 1. Thus normalize as follows:

$$
\boldsymbol{\xi}^{(1)} = \frac{1}{\sqrt{(1)(1) + (-1)(-1)}}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad
\boldsymbol{\xi}^{(2)} = \frac{1}{\sqrt{(1)(1) + (1)(1)}}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}.
$$

Then for this choice of eigenvectors,

$$
\mathbf{T} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}, \qquad
\mathbf{T}^{-1} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}.
$$
Example 1: Diagonal System and its Solution (3 of 5)

Under the transformation x = Ty, we obtain the diagonal system y' = Dy + T⁻¹g(t):

$$
\begin{pmatrix} y_1' \\ y_2' \end{pmatrix}
= \begin{pmatrix} -3 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
+ \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix}
= \begin{pmatrix} -3 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
+ \begin{pmatrix} \sqrt{2}\,e^{-t} - \dfrac{3}{\sqrt{2}}\,t \\[4pt] \sqrt{2}\,e^{-t} + \dfrac{3}{\sqrt{2}}\,t \end{pmatrix}.
$$

Then, using methods of Section 2.1,

$$
\begin{aligned}
y_1' + 3y_1 &= \sqrt{2}\,e^{-t} - \frac{3}{\sqrt{2}}\,t
&&\Rightarrow\;\; y_1 = \frac{\sqrt{2}}{2}e^{-t} - \frac{3}{\sqrt{2}}\left(\frac{t}{3} - \frac{1}{9}\right) + c_1e^{-3t}, \\
y_2' + y_2 &= \sqrt{2}\,e^{-t} + \frac{3}{\sqrt{2}}\,t
&&\Rightarrow\;\; y_2 = \sqrt{2}\,te^{-t} + \frac{3}{\sqrt{2}}(t - 1) + c_2e^{-t}.
\end{aligned}
$$
Example 1: Transform Back to Original System (4 of 5)

We next use the transformation x = Ty to obtain the solution to the original system x' = Ax + g(t):

$$
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
= \begin{pmatrix}
k_1e^{-3t} + k_2e^{-t} + \dfrac{1}{2}e^{-t} + te^{-t} + t - \dfrac{4}{3} \\[6pt]
-k_1e^{-3t} + k_2e^{-t} - \dfrac{1}{2}e^{-t} + te^{-t} + 2t - \dfrac{5}{3}
\end{pmatrix},
\qquad k_1 = \frac{c_1}{\sqrt{2}}, \;\; k_2 = \frac{c_2}{\sqrt{2}}.
$$
Example 1: Solution of Original System (5 of 5)

Simplifying further, the solution x can be written as

$$
\mathbf{x} = k_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-3t} + k_2\begin{pmatrix} 1 \\ 1 \end{pmatrix}e^{-t}
+ \frac{1}{2}\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-t} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}te^{-t}
+ \begin{pmatrix} 1 \\ 2 \end{pmatrix}t - \frac{1}{3}\begin{pmatrix} 4 \\ 5 \end{pmatrix}.
$$

Note that the first two terms on the right side form the general solution to the homogeneous system, while the remaining terms are a particular solution to the nonhomogeneous system.
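A symbolic verification, assuming SymPy is available, that this expression satisfies x' = Ax + g(t) for arbitrary constants k1, k2:

```python
import sympy as sp

# Check the solution of Example 1 against x' = A x + g(t).
t, k1, k2 = sp.symbols('t k1 k2')
A = sp.Matrix([[-2, 1], [1, -2]])
g = sp.Matrix([2 * sp.exp(-t), 3 * t])

x = (k1 * sp.Matrix([1, -1]) * sp.exp(-3 * t)
     + k2 * sp.Matrix([1, 1]) * sp.exp(-t)
     + sp.Rational(1, 2) * sp.Matrix([1, -1]) * sp.exp(-t)
     + sp.Matrix([1, 1]) * t * sp.exp(-t)
     + sp.Matrix([1, 2]) * t
     - sp.Rational(1, 3) * sp.Matrix([4, 5]))

print(sp.simplify(x.diff(t) - A * x - g))   # expected: Matrix([[0], [0]])
```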
Nondiagonal Case

If A cannot be diagonalized (repeated eigenvalues and a shortage of eigenvectors), then it can be transformed to its Jordan form J, which is nearly diagonal. In this case the differential equations are not totally uncoupled, because some rows of J have two nonzero entries: an eigenvalue in the diagonal position, and a 1 in the adjacent position to the right of the diagonal position. However, the equations for y1,…, yn can still be solved consecutively, starting with yn. Then the solution x to the original system can be found using x = Ty.
Undetermined Coefficients

A second way of solving x' = P(t)x + g(t) is the method of undetermined coefficients. Assume P is a constant matrix, and that the components of g are polynomial, exponential or sinusoidal functions, or sums or products of these. The procedure for choosing the form of the solution is usually directly analogous to that given in Section 3.6. The main difference arises when g(t) has the form ue^{λt}, where λ is a simple eigenvalue of P. In this case, g(t) matches the form of a solution of the homogeneous system x' = P(t)x, and as a result it is necessary to take the nonhomogeneous solution to be of the form ate^{λt} + be^{λt}. This form differs from the Section 3.6 analog, ate^{λt}.
Example 2: Undetermined Coefficients (1 of 5)

Consider again the nonhomogeneous system x' = Ax + g:

$$
\mathbf{x}' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix}
= \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 2 \\ 0 \end{pmatrix}e^{-t} + \begin{pmatrix} 0 \\ 3 \end{pmatrix}t.
$$

Assume a particular solution of the form

$$
\mathbf{v}(t) = \mathbf{a}te^{-t} + \mathbf{b}e^{-t} + \mathbf{c}t + \mathbf{d},
$$

where the vector coefficients a, b, c, d are to be determined. Since r = -1 is an eigenvalue of A, it is necessary to include both ate^{-t} and be^{-t}, as mentioned on the previous slide.
Example 2: Matrix Equations for Coefficients (2 of 5)

Substituting

$$
\mathbf{v}(t) = \mathbf{a}te^{-t} + \mathbf{b}e^{-t} + \mathbf{c}t + \mathbf{d}
$$

in for x in our nonhomogeneous system x' = Ax + g, we obtain

$$
\mathbf{a}e^{-t} - \mathbf{a}te^{-t} - \mathbf{b}e^{-t} + \mathbf{c}
= \mathbf{A}\mathbf{a}te^{-t} + \mathbf{A}\mathbf{b}e^{-t} + \mathbf{A}\mathbf{c}t + \mathbf{A}\mathbf{d}
+ \begin{pmatrix} 2 \\ 0 \end{pmatrix}e^{-t} + \begin{pmatrix} 0 \\ 3 \end{pmatrix}t.
$$

Equating coefficients, we conclude that

$$
\mathbf{A}\mathbf{a} = -\mathbf{a}, \quad
\mathbf{A}\mathbf{b} = \mathbf{a} - \mathbf{b} - \begin{pmatrix} 2 \\ 0 \end{pmatrix}, \quad
\mathbf{A}\mathbf{c} = -\begin{pmatrix} 0 \\ 3 \end{pmatrix}, \quad
\mathbf{A}\mathbf{d} = \mathbf{c}.
$$
Example 2: Solving Matrix Equation for a (3 of 5)

Our matrix equations for the coefficients are:

$$
\mathbf{A}\mathbf{a} = -\mathbf{a}, \quad
\mathbf{A}\mathbf{b} = \mathbf{a} - \mathbf{b} - \begin{pmatrix} 2 \\ 0 \end{pmatrix}, \quad
\mathbf{A}\mathbf{c} = -\begin{pmatrix} 0 \\ 3 \end{pmatrix}, \quad
\mathbf{A}\mathbf{d} = \mathbf{c}.
$$

From the first equation, we see that a is an eigenvector of A corresponding to the eigenvalue r = -1, and hence has the form

$$
\mathbf{a} = \begin{pmatrix} \alpha \\ \alpha \end{pmatrix}.
$$

We will see on the next slide that α = 1, and hence

$$
\mathbf{a} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
$$
Example 2: Solving Matrix Equation for b (4 of 5)

Our matrix equations for the coefficients are:

$$
\mathbf{A}\mathbf{a} = -\mathbf{a}, \quad
\mathbf{A}\mathbf{b} = \mathbf{a} - \mathbf{b} - \begin{pmatrix} 2 \\ 0 \end{pmatrix}, \quad
\mathbf{A}\mathbf{c} = -\begin{pmatrix} 0 \\ 3 \end{pmatrix}, \quad
\mathbf{A}\mathbf{d} = \mathbf{c}.
$$

Substituting a^T = (α, α) into the second equation,

$$
\mathbf{A}\mathbf{b} = \begin{pmatrix} \alpha \\ \alpha \end{pmatrix} - \mathbf{b} - \begin{pmatrix} 2 \\ 0 \end{pmatrix}
\;\Leftrightarrow\;
(\mathbf{A} + \mathbf{I})\mathbf{b} = \begin{pmatrix} \alpha - 2 \\ \alpha \end{pmatrix}
\;\Leftrightarrow\;
\begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = \begin{pmatrix} \alpha - 2 \\ \alpha \end{pmatrix}.
$$

Since A + I is singular, this system is consistent only if α - 2 = -α. Thus α = 1, and solving for b, we obtain

$$
b_1 - b_2 = 1 \;\Rightarrow\; \mathbf{b} = k\begin{pmatrix} 1 \\ 1 \end{pmatrix} - \begin{pmatrix} 0 \\ 1 \end{pmatrix}
\;\Rightarrow\; \text{choose } k = 0: \;\; \mathbf{b} = \begin{pmatrix} 0 \\ -1 \end{pmatrix}.
$$
Example 2: Particular Solution (5 of 5)

Our matrix equations for the coefficients are:

$$
\mathbf{A}\mathbf{a} = -\mathbf{a}, \quad
\mathbf{A}\mathbf{b} = \mathbf{a} - \mathbf{b} - \begin{pmatrix} 2 \\ 0 \end{pmatrix}, \quad
\mathbf{A}\mathbf{c} = -\begin{pmatrix} 0 \\ 3 \end{pmatrix}, \quad
\mathbf{A}\mathbf{d} = \mathbf{c}.
$$

Solving the third equation for c, and then the fourth equation for d, it is straightforward to obtain c^T = (1, 2), d^T = (-4/3, -5/3). Thus our particular solution of x' = Ax + g is

$$
\mathbf{v}(t) = \begin{pmatrix} 1 \\ 1 \end{pmatrix}te^{-t} - \begin{pmatrix} 0 \\ 1 \end{pmatrix}e^{-t}
+ \begin{pmatrix} 1 \\ 2 \end{pmatrix}t - \frac{1}{3}\begin{pmatrix} 4 \\ 5 \end{pmatrix}.
$$

Comparing this to the result obtained in Example 1, we see that both particular solutions would be the same if we had chosen k = 1/2 for b on the previous slide, instead of k = 0.
Variation of Parameters: Preliminaries

A more general way of solving x' = P(t)x + g(t) is the method of variation of parameters. Assume P(t) and g(t) are continuous on α < t < β, and let Ψ(t) be a fundamental matrix for the homogeneous system. Recall that the columns of Ψ are linearly independent solutions of x' = P(t)x, and hence Ψ(t) is invertible on the interval α < t < β, and also Ψ'(t) = P(t)Ψ(t). Next, recall that the solution of the homogeneous system can be expressed as x = Ψ(t)c. Analogous to Section 3.7, assume the particular solution of the nonhomogeneous system has the form x = Ψ(t)u(t), where u(t) is a vector to be found.
Variation of Parameters: Solution

We assume a particular solution of the form x = Ψ(t)u(t). Substituting this into x' = P(t)x + g(t), we obtain

Ψ'(t)u(t) + Ψ(t)u'(t) = P(t)Ψ(t)u(t) + g(t).

Since Ψ'(t) = P(t)Ψ(t), the above equation simplifies to

u'(t) = Ψ⁻¹(t)g(t).

Thus

$$
\mathbf{u}(t) = \int \boldsymbol{\Psi}^{-1}(t)\mathbf{g}(t)\,dt + \mathbf{c},
$$

where the vector c is an arbitrary constant of integration. The general solution to x' = P(t)x + g(t) is therefore

$$
\mathbf{x} = \boldsymbol{\Psi}(t)\mathbf{c} + \boldsymbol{\Psi}(t)\int_{t_1}^{t} \boldsymbol{\Psi}^{-1}(s)\mathbf{g}(s)\,ds, \qquad t_1 \in (\alpha, \beta) \text{ arbitrary}.
$$
Variation of Parameters: Initial Value Problem

For an initial value problem

x' = P(t)x + g(t), x(t0) = x^(0),

the general solution to x' = P(t)x + g(t) is

$$
\mathbf{x} = \boldsymbol{\Psi}(t)\boldsymbol{\Psi}^{-1}(t_0)\mathbf{x}^{(0)} + \boldsymbol{\Psi}(t)\int_{t_0}^{t} \boldsymbol{\Psi}^{-1}(s)\mathbf{g}(s)\,ds.
$$

Alternatively, recall that the fundamental matrix Φ(t) satisfies Φ(t0) = I, and hence the general solution is

$$
\mathbf{x} = \boldsymbol{\Phi}(t)\mathbf{x}^{(0)} + \boldsymbol{\Phi}(t)\int_{t_0}^{t} \boldsymbol{\Phi}^{-1}(s)\mathbf{g}(s)\,ds.
$$

In practice, it may be easier to row reduce matrices and solve the necessary equations than to compute Ψ⁻¹(t) and substitute into equations. See the next example.
Example 3: Variation of Parameters (1 of 3)

Consider again the nonhomogeneous system x' = Ax + g:

$$
\mathbf{x}' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 2 \\ 0 \end{pmatrix}e^{-t} + \begin{pmatrix} 0 \\ 3 \end{pmatrix}t.
$$

We have previously found the general solution to the homogeneous case, with corresponding fundamental matrix:

$$
\boldsymbol{\Psi}(t) = \begin{pmatrix} e^{-3t} & e^{-t} \\ -e^{-3t} & e^{-t} \end{pmatrix}.
$$

Using the variation of parameters method, our solution is given by x = Ψ(t)u(t), where u(t) satisfies Ψ(t)u'(t) = g(t), or

$$
\begin{pmatrix} e^{-3t} & e^{-t} \\ -e^{-3t} & e^{-t} \end{pmatrix}\begin{pmatrix} u_1' \\ u_2' \end{pmatrix} = \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix}.
$$
Example 3: Solving for u(t) (2 of 3)

Solving Ψ(t)u'(t) = g(t) by row reduction,

$$
\left(\begin{array}{rr|r} e^{-3t} & e^{-t} & 2e^{-t} \\ -e^{-3t} & e^{-t} & 3t \end{array}\right) \rightarrow
\left(\begin{array}{rr|r} e^{-3t} & e^{-t} & 2e^{-t} \\ 0 & 2e^{-t} & 2e^{-t} + 3t \end{array}\right) \rightarrow
\left(\begin{array}{rr|r} e^{-3t} & e^{-t} & 2e^{-t} \\ 0 & 1 & 1 + \tfrac{3}{2}te^{t} \end{array}\right) \rightarrow
\left(\begin{array}{rr|r} 1 & 0 & e^{2t} - \tfrac{3}{2}te^{3t} \\ 0 & 1 & 1 + \tfrac{3}{2}te^{t} \end{array}\right)
$$

$$
\Rightarrow\quad u_1' = e^{2t} - \frac{3}{2}te^{3t}, \qquad u_2' = 1 + \frac{3}{2}te^{t}.
$$

It follows that

$$
\mathbf{u}(t) = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}
= \begin{pmatrix} \dfrac{1}{2}e^{2t} - \dfrac{1}{2}te^{3t} + \dfrac{1}{6}e^{3t} + c_1 \\[6pt] t + \dfrac{3}{2}te^{t} - \dfrac{3}{2}e^{t} + c_2 \end{pmatrix}.
$$
Example 3: Solving for x(t) (3 of 3)

Now x(t) = Ψ(t)u(t), and hence we multiply

$$
\mathbf{x} = \begin{pmatrix} e^{-3t} & e^{-t} \\ -e^{-3t} & e^{-t} \end{pmatrix}
\begin{pmatrix} \tfrac{1}{2}e^{2t} - \tfrac{1}{2}te^{3t} + \tfrac{1}{6}e^{3t} + c_1 \\[4pt] t + \tfrac{3}{2}te^{t} - \tfrac{3}{2}e^{t} + c_2 \end{pmatrix}
$$

to obtain, after collecting terms and simplifying,

$$
\mathbf{x} = c_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-3t} + c_2\begin{pmatrix} 1 \\ 1 \end{pmatrix}e^{-t}
+ \frac{1}{2}\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-t} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}te^{-t}
+ \begin{pmatrix} 1 \\ 2 \end{pmatrix}t - \frac{1}{3}\begin{pmatrix} 4 \\ 5 \end{pmatrix}.
$$

Note that this is the same solution as in Example 1.
Laplace Transforms

The Laplace transform can be used to solve systems of equations. Here, the transform of a vector is the vector of component transforms, denoted by X(s):

$$
L\{\mathbf{x}(t)\} = L\left\{\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}\right\}
= \begin{pmatrix} L\{x_1(t)\} \\ L\{x_2(t)\} \end{pmatrix} = \mathbf{X}(s).
$$

Then by extending Theorem 6.2.1, we obtain

$$
L\{\mathbf{x}'(t)\} = s\mathbf{X}(s) - \mathbf{x}(0).
$$
Example 4: Laplace Transform (1 of 5)

Consider again the nonhomogeneous system x' = Ax + g:

$$
\mathbf{x}' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix}.
$$

Taking the Laplace transform of each term, we obtain

$$
s\mathbf{X}(s) - \mathbf{x}(0) = \mathbf{A}\mathbf{X}(s) + \mathbf{G}(s),
$$

where G(s) is the transform of g(t), and is given by

$$
\mathbf{G}(s) = \begin{pmatrix} 2/(s+1) \\ 3/s^2 \end{pmatrix}.
$$
Example 4: Transfer Matrix (2 of 5)

Our transformed equation is

$$
s\mathbf{X}(s) - \mathbf{x}(0) = \mathbf{A}\mathbf{X}(s) + \mathbf{G}(s).
$$

If we take x(0) = 0, then the above equation becomes

$$
s\mathbf{X}(s) = \mathbf{A}\mathbf{X}(s) + \mathbf{G}(s),
$$

or

$$
(s\mathbf{I} - \mathbf{A})\mathbf{X}(s) = \mathbf{G}(s).
$$

Solving for X(s), we obtain

$$
\mathbf{X}(s) = (s\mathbf{I} - \mathbf{A})^{-1}\mathbf{G}(s).
$$

The matrix (sI - A)⁻¹ is called the transfer matrix.
Example 4: Finding Transfer Matrix (3 of 5)

Then

$$
\mathbf{A} = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}
\;\Rightarrow\;
s\mathbf{I} - \mathbf{A} = \begin{pmatrix} s+2 & -1 \\ -1 & s+2 \end{pmatrix}.
$$

Solving for (sI - A)⁻¹, we obtain

$$
(s\mathbf{I} - \mathbf{A})^{-1} = \frac{1}{(s+1)(s+3)}\begin{pmatrix} s+2 & 1 \\ 1 & s+2 \end{pmatrix}.
$$
Example 4: Transfer Matrix (4 of 5)

Next, X(s) = (sI - A)⁻¹G(s), and hence

$$
\mathbf{X}(s) = \frac{1}{(s+1)(s+3)}\begin{pmatrix} s+2 & 1 \\ 1 & s+2 \end{pmatrix}\begin{pmatrix} 2/(s+1) \\ 3/s^2 \end{pmatrix},
$$

or

$$
\mathbf{X}(s) = \begin{pmatrix} \dfrac{2(s+2)}{(s+1)^2(s+3)} + \dfrac{3}{s^2(s+1)(s+3)} \\[10pt] \dfrac{2}{(s+1)^2(s+3)} + \dfrac{3(s+2)}{s^2(s+1)(s+3)} \end{pmatrix}.
$$
Example 4: Transfer Matrix (5 of 5)

Thus

$$
\mathbf{X}(s) = \begin{pmatrix} \dfrac{2(s+2)}{(s+1)^2(s+3)} + \dfrac{3}{s^2(s+1)(s+3)} \\[10pt] \dfrac{2}{(s+1)^2(s+3)} + \dfrac{3(s+2)}{s^2(s+1)(s+3)} \end{pmatrix}.
$$

To solve for x(t) = L⁻¹{X(s)}, use partial fraction expansions of both components of X(s), and then Table 6.2.1, to obtain:

$$
\mathbf{x}(t) = -\frac{2}{3}\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-3t}
+ \begin{pmatrix} 2 \\ 1 \end{pmatrix}e^{-t}
+ \begin{pmatrix} 1 \\ 1 \end{pmatrix}te^{-t}
+ \begin{pmatrix} 1 \\ 2 \end{pmatrix}t
- \frac{1}{3}\begin{pmatrix} 4 \\ 5 \end{pmatrix}.
$$

Since we assumed x(0) = 0, this solution differs slightly from the previous particular solutions.
Summary (1 of 2)

The method of undetermined coefficients requires no integration but is limited in scope and may involve several sets of algebraic equations. Diagonalization requires finding the inverse of the transformation matrix and solving uncoupled first order linear equations. When the coefficient matrix is Hermitian, the inverse of the transformation matrix can be found without calculation, which is very helpful for large systems. The Laplace transform method involves matrix inversion, matrix multiplication, and inverse transforms. This method is particularly useful for problems with discontinuous or impulsive forcing functions.
Summary (2 of 2)

Variation of parameters is the most general method, but it involves solving linear algebraic equations with variable coefficients, integration, and matrix multiplication, and hence may be the most computationally complicated method. For many small systems with constant coefficients, all of these methods work well, and there may be little reason to select one over another.