Formula Sheet


1 Iterative methods for linear systems

Consider the linear system $Ax = b$, with $\det A \neq 0$. We use the splitting $A = L + D + U$, where $D$ is diagonal, $L$ is strictly lower triangular and $U$ is strictly upper triangular. Here “strictly” means that the diagonal entries are zero.

1.1 Method of Jacobi

Assume that the diagonal elements of $A$ are different from zero. A step of the method of Jacobi is given by

$$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( -\sum_{j \neq i} a_{ij} x_j^{(k)} + b_i \right), \qquad i = 1, 2, \ldots, n.$$

If we write the method as $x^{(k+1)} = B_J x^{(k)} + c$, then $B_J = -D^{-1}(L + U)$ and $c = D^{-1} b$.
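
A minimal NumPy sketch of this iteration in matrix form (the function name, the stopping test on successive iterates and the tolerance are choices of this sketch, not part of the method):

import numpy as np

def jacobi(A, b, x0, tol=1e-10, maxit=1000):
    D = np.diag(A)                        # diagonal entries a_ii
    R = A - np.diag(D)                    # off-diagonal part L + U
    x = np.asarray(x0, dtype=float)
    for k in range(maxit):
        x_new = (b - R @ x) / D           # x^(k+1) = D^{-1}(b - (L+U) x^(k))
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxit

# Strictly diagonally dominant example, so the iteration converges:
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(jacobi(A, b, np.zeros(2))[0])       # approx [1/6, 1/3]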

1.2 Method of Gauss-Seidel

Assume that the diagonal elements of $A$ are different from zero. A step of the method of Gauss-Seidel is given by

$$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( -\sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)} + b_i \right), \qquad i = 1, 2, \ldots, n.$$

If we write the method as $x^{(k+1)} = B_{GS} x^{(k)} + c$, then $B_{GS} = -(L + D)^{-1} U$ and $c = (L + D)^{-1} b$.
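
The same example with a componentwise Gauss-Seidel sweep (again, names, stopping test and tolerance are choices of this sketch):

import numpy as np

def gauss_seidel(A, b, x0, tol=1e-10, maxit=1000):
    n = len(b)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(maxit):
        x_old = x.copy()
        for i in range(n):
            s1 = A[i, :i] @ x[:i]         # new values x_j^(k+1), j < i
            s2 = A[i, i+1:] @ x[i+1:]     # old values x_j^(k), j > i
            x[i] = (b[i] - s1 - s2) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, maxit

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(gauss_seidel(A, b, np.zeros(2))[0])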

1.3 Method of Successive Over-Relaxation (SOR)

Assume that the diagonal elements of $A$ are different from zero. A step of the method of SOR, with parameter $\omega$, is given by

$$r_i^{(k)} = \frac{1}{a_{ii}} \left( -\sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j \geq i} a_{ij} x_j^{(k)} + b_i \right), \qquad x_i^{(k+1)} = x_i^{(k)} + \omega r_i^{(k)}.$$

If we write the method as $x^{(k+1)} = B_\omega x^{(k)} + c_\omega$, then $B_\omega = (I + \omega D^{-1} L)^{-1} [(1 - \omega) I - \omega D^{-1} U]$ and $c_\omega = \omega (I + \omega D^{-1} L)^{-1} D^{-1} b$. If we denote $\bar{L} = D^{-1} L$ and $\bar{U} = D^{-1} U$, then the iteration matrix is $B_\omega = (I + \omega \bar{L})^{-1} [(1 - \omega) I - \omega \bar{U}]$ and $c_\omega = \omega (I + \omega \bar{L})^{-1} D^{-1} b$.
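
A componentwise sketch of one SOR sweep, following the residual form above; for $\omega = 1$ it reduces to Gauss-Seidel (stopping test and tolerance are choices of this sketch):

import numpy as np

def sor(A, b, x0, omega, tol=1e-10, maxit=1000):
    n = len(b)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(maxit):
        x_old = x.copy()
        for i in range(n):
            # r_i^(k) uses new values for j < i and old values for j >= i
            r = (b[i] - A[i, :i] @ x[:i] - A[i, i:] @ x_old[i:]) / A[i, i]
            x[i] = x_old[i] + omega * r
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, maxit

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(sor(A, b, np.zeros(2), omega=1.05)[0])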


2 Eigenvalues and eigenvectors

Let $A$ be an $n$-by-$n$ matrix.

2.1 Power Method

Let $z^{(0)}$ be an initial vector (it is usual to take $z^{(0)} = (1, \ldots, 1)^T$). The Power Method is then

$$y^{(k)} = \frac{z^{(k)}}{\|z^{(k)}\|_2}, \qquad z^{(k+1)} = A y^{(k)}, \qquad \sigma_k = \frac{(y^{(k)})^T A y^{(k)}}{(y^{(k)})^T y^{(k)}} = (y^{(k)})^T z^{(k+1)}.$$

Under suitable conditions, the sequence $\{\sigma_k\}_k$ converges to the eigenvalue with the largest modulus, and $\{y^{(k)}\}_k$ (or $\{z^{(k)}\}_k$) converges to the corresponding eigenvector.
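
A direct NumPy transcription (the stopping test on consecutive $\sigma_k$ is a choice of this sketch):

import numpy as np

def power_method(A, z0=None, tol=1e-12, maxit=500):
    z = np.ones(A.shape[0]) if z0 is None else np.asarray(z0, dtype=float)
    sigma = 0.0
    for k in range(maxit):
        y = z / np.linalg.norm(z, 2)      # y^(k)
        z = A @ y                         # z^(k+1)
        sigma_new = y @ z                 # sigma_k = (y^(k))^T z^(k+1)
        if abs(sigma_new - sigma) < tol:
            return sigma_new, y
        sigma = sigma_new
    return sigma, y

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(power_method(A)[0])                 # dominant eigenvalue (5 + sqrt 5)/2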

2.2 Method of Jacobi

Let $A$ be a real $n$-by-$n$ symmetric matrix. Assume we are in the $k$-th step of the Jacobi method and, to simplify the notation, we define $B = A_k$.

Assume that we have chosen two indices $(p, q)$ such that $b_{pq} \neq 0$ and we want to rotate in this plane such that the new matrix satisfies $\bar{b}_{pq} = \bar{b}_{qp} = 0$. The angle of rotation is given by

$$\cot(2\varphi) = \frac{b_{pp} - b_{qq}}{2 b_{pq}},$$

which means that the angle $\varphi$ can be chosen in the interval $(-\frac{\pi}{4}, \frac{\pi}{4})$. Then, it is not difficult to find explicit expressions for $c = \cos\varphi$ and $s = \sin\varphi$:

• If $b_{pp} = b_{qq}$, then $\sigma = \operatorname{sign}(b_{pq})$, $c = \frac{\sqrt{2}}{2}$ and $s = \sigma \frac{\sqrt{2}}{2}$.

• If $b_{pp} \neq b_{qq}$, then $\sigma = \operatorname{sign}[b_{pq}(b_{pp} - b_{qq})]$, $d = \left[ (b_{pp} - b_{qq})^2 + 4 b_{pq}^2 \right]^{1/2}$ and

$$c = \left[ \frac{1}{2} \left( 1 + \frac{|b_{pp} - b_{qq}|}{d} \right) \right]^{1/2}, \qquad s = \sigma \left[ \frac{1}{2} \left( 1 - \frac{|b_{pp} - b_{qq}|}{d} \right) \right]^{1/2}.$$

The transformed matrix $\bar{B} = R^T B R$ ($R$ denotes the previous rotation) is given by

$$\bar{b}_{pj} = \bar{b}_{jp} = b_{pj} c + b_{qj} s, \qquad (j = 1, \ldots, n,\; j \neq p,\; j \neq q),$$
$$\bar{b}_{iq} = \bar{b}_{qi} = -b_{ip} s + b_{iq} c, \qquad (i = 1, \ldots, n,\; i \neq p,\; i \neq q),$$
$$\bar{b}_{pp} = b_{pp} c^2 + 2 b_{pq} c s + b_{qq} s^2 = b_{pp} + b_{pq} t,$$
$$\bar{b}_{qq} = b_{pp} s^2 - 2 b_{pq} c s + b_{qq} c^2 = b_{qq} - b_{pq} t,$$

and $\bar{b}_{qp} = \bar{b}_{pq} = 0$, where $t = s/c$.
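
A NumPy sketch of one rotation, applied in place to a symmetric matrix $B$ with the formulas above:

import numpy as np

def jacobi_rotate(B, p, q):
    # Annihilate the (p, q) entry of the symmetric matrix B (in place).
    bpp, bqq, bpq = B[p, p], B[q, q], B[p, q]
    if bpq == 0.0:
        return B
    if bpp == bqq:
        sigma = np.sign(bpq)
        c = np.sqrt(2) / 2
        s = sigma * np.sqrt(2) / 2
    else:
        sigma = np.sign(bpq * (bpp - bqq))
        d = np.hypot(bpp - bqq, 2 * bpq)   # [(bpp-bqq)^2 + 4 bpq^2]^(1/2)
        c = np.sqrt(0.5 * (1 + abs(bpp - bqq) / d))
        s = sigma * np.sqrt(0.5 * (1 - abs(bpp - bqq) / d))
    t = s / c
    rowp, rowq = B[p, :].copy(), B[q, :].copy()
    new_p = rowp * c + rowq * s            # new row/column p of R^T B R
    new_q = -rowp * s + rowq * c           # new row/column q
    B[p, :] = new_p; B[:, p] = new_p
    B[q, :] = new_q; B[:, q] = new_q
    B[p, p] = bpp + bpq * t                # diagonal updates with t = s/c
    B[q, q] = bqq - bpq * t
    B[p, q] = B[q, p] = 0.0
    return B

B = np.array([[2.0, 1.0], [1.0, 3.0]])
print(np.diag(jacobi_rotate(B, 0, 1)))     # eigenvalues (5 -/+ sqrt 5)/2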


2.3 QR factorization

We will use Householder matrices: for any $v \in \mathbb{R}^n \setminus \{0\}$, the matrix

$$P_v = I - \frac{2}{v^T v} v v^T$$

is a Householder matrix.

1. Given a vector $x \in \mathbb{R}^n$, there is a Householder matrix $P_v$ such that $P_v x \in \langle e_1 \rangle$. The vector $v$ is obtained as follows:

   • $\mu = \|x\|_2$.
   • $v = x$.
   • If $\mu \neq 0$, $\beta = x_1 + \operatorname{sign}(x_1) \mu$, $v_j = \frac{1}{\beta} v_j$ $(j = 2, \ldots, n)$.
   • $v_1 = 1$.

   We will refer to this algorithm as $v = \mathrm{householder}(x)$.

2. Algorithm for $P_v A$ (only using the vector $v$):

   • $\beta = -\frac{2}{v^T v}$.
   • $w = \beta A^T v$.
   • $A = A + v w^T$.

   We will refer to this algorithm as $A = \mathrm{premult.h}(A, v)$.

3. Algorithm for $A P_v$ (only using the vector $v$):

   • $\beta = -\frac{2}{v^T v}$.
   • $w = \beta A v$.
   • $A = A + w v^T$.

   We will refer to this algorithm as $A = \mathrm{postmult.h}(A, v)$.

4. The QR algorithm can be written, in compact form, as follows. Let $A$ be an $m$-by-$n$ matrix ($m \geq n$):

   FOR j = 1 to n
       v(j:m) = householder(A(j:m, j))
       A(j:m, j:n) = premult.h(A(j:m, j:n), v(j:m))
       IF j < m
           A(j+1:m, j) = v(j+1:m)
       ENDIF
   ENDFOR
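
A NumPy transcription of householder, premult.h and the compact loop (taking sign(0) = +1 via copysign is a choice of this sketch; $R$ is returned in the upper triangle, with the essential part of each vector $v$ stored below the diagonal):

import numpy as np

def householder(x):
    # v with v_1 = 1 such that P_v x is a multiple of e_1.
    v = np.array(x, dtype=float)
    mu = np.linalg.norm(x, 2)
    if mu != 0.0:
        beta = x[0] + np.copysign(mu, x[0])
        v[1:] /= beta
    v[0] = 1.0
    return v

def premult_h(A, v):
    # A <- P_v A, using only the vector v.
    beta = -2.0 / (v @ v)
    w = beta * (A.T @ v)
    return A + np.outer(v, w)

def qr_compact(A):
    A = np.array(A, dtype=float)
    m, n = A.shape
    for j in range(n):
        v = householder(A[j:, j])
        A[j:, j:] = premult_h(A[j:, j:], v)
        if j < m - 1:
            A[j+1:, j] = v[1:]             # store v(j+1:m) below the diagonal
    return A

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(np.triu(qr_compact(A))[:2, :])       # R (up to signs, cf. np.linalg.qr)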


4 Approximation

4.1 Orthogonal polynomials

Let us consider the scalar product

$$\langle f, g \rangle = \int_a^b f(x) g(x) w(x)\, dx, \qquad w : [a, b] \to \mathbb{R}^+ \text{ and continuous},$$

defined on the real vector space of continuous functions $C^0_{\mathbb{R}}([a, b])$. The recurrence

$$\varphi_{i+1}(x) = (x - \delta_{i+1}) \varphi_i(x) - \gamma_{i+1}^2 \varphi_{i-1}(x), \qquad i = 0, 1, \ldots, n - 1,$$

where $\varphi_{-1}(x) \equiv 0$, $\varphi_0(x) \equiv 1$ and

$$\delta_{i+1} = \frac{\langle x \varphi_i, \varphi_i \rangle}{\langle \varphi_i, \varphi_i \rangle}, \qquad \gamma_{i+1}^2 = \begin{cases} 0 & \text{if } i = 0, \\[4pt] \dfrac{\langle \varphi_i, \varphi_i \rangle}{\langle \varphi_{i-1}, \varphi_{i-1} \rangle} & \text{if } i = 1, 2, \ldots, n - 1, \end{cases}$$

produces the family of orthogonal polynomials such that $\varphi_i$ is a monic polynomial of degree $i$.
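
A sketch of this recurrence with numpy.polynomial, using a plain trapezoidal rule on a fine grid for the scalar product (the quadrature and the grid size are assumptions of this sketch; any accurate rule for the weight $w$ would do):

import numpy as np
from numpy.polynomial import Polynomial

def monic_orthogonal(n, a=-1.0, b=1.0, w=lambda x: np.ones_like(x), npts=2001):
    x = np.linspace(a, b, npts)
    wx = w(x)
    def dot(f, g):                         # <f, g> by the trapezoidal rule
        y = f(x) * g(x) * wx
        return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2)
    X = Polynomial([0.0, 1.0])             # the monomial x
    phi = [Polynomial([1.0])]              # phi_0 = 1 (and phi_{-1} = 0)
    for i in range(n):
        delta = dot(X * phi[i], phi[i]) / dot(phi[i], phi[i])
        nxt = (X - delta) * phi[i]
        if i > 0:
            nxt = nxt - (dot(phi[i], phi[i]) / dot(phi[i-1], phi[i-1])) * phi[i-1]
        phi.append(nxt)
    return phi

# With w = 1 on [-1, 1] these are the monic Legendre polynomials;
# phi_2 should be (approximately) x^2 - 1/3.
print(monic_orthogonal(2)[2])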

4.1.1 Legendre polynomials

The Legendre polynomials $P_n$ are orthogonal polynomials for the scalar product

$$\langle f, g \rangle = \int_{-1}^{1} f(x) g(x)\, dx,$$

and can be defined as

$$P_0(x) = 1, \qquad P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} \left[ (x^2 - 1)^n \right], \qquad n = 1, 2, \ldots$$

They satisfy $\langle P_n, P_n \rangle = \frac{2}{2n + 1}$ and, for $n \geq 0$,

$$P_{n+1}(x) = \frac{2n + 1}{n + 1} x P_n(x) - \frac{n}{n + 1} P_{n-1}(x), \qquad P_{-1}(x) \equiv 0, \quad P_0(x) \equiv 1.$$
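
The recurrence is the usual way to evaluate $P_0, \ldots, P_n$ at once; a small sketch:

import numpy as np

def legendre(n, x):
    # Evaluate P_0, ..., P_n at x via the three-term recurrence.
    x = np.asarray(x, dtype=float)
    P = [np.ones_like(x)]                  # P_0 = 1
    if n >= 1:
        P.append(x.copy())                 # P_1 = x (recurrence with P_{-1} = 0)
    for k in range(1, n):
        P.append(((2*k + 1) * x * P[k] - k * P[k-1]) / (k + 1))
    return P

# Check: P_2(x) = (3x^2 - 1)/2, so P_2(0.5) = -0.125.
print(legendre(2, 0.5)[2])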

4.1.2 Discrete case. Gram polynomials

Let us consider the discrete scalar product

$$\langle f, g \rangle = \sum_{i=0}^{m} f(x_i) g(x_i), \qquad x_i = -1 + \frac{2i}{m}, \quad i = 0, 1, \ldots, m.$$

The Gram polynomials $\{P_{n,m}\}_{n=0}^{m}$ are orthogonal polynomials for this scalar product. They can be defined by the recurrence

$$P_{n+1,m}(x) = \alpha_{n,m}\, x\, P_{n,m}(x) - \gamma_{n,m} P_{n-1,m}(x),$$

where $P_{-1,m}(x) \equiv 0$, $P_{0,m}(x) \equiv (m + 1)^{-1/2}$ and

$$\alpha_{n,m} = \frac{m}{n + 1} \left[ \frac{4(n + 1)^2 - 1}{(m + 1)^2 - (n + 1)^2} \right]^{1/2}, \qquad \gamma_{n,m} = \frac{\alpha_{n,m}}{\alpha_{n-1,m}}.$$
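
A sketch of the recurrence, checked by verifying orthonormality on the nodes $x_i$:

import numpy as np

def gram(n, m, x):
    # Evaluate P_{0,m}, ..., P_{n,m} at x (requires n <= m).
    x = np.asarray(x, dtype=float)
    P = [np.full_like(x, (m + 1) ** -0.5)] # P_{0,m} = (m+1)^(-1/2)
    alpha_prev = None
    for k in range(n):
        alpha = (m / (k + 1)) * np.sqrt((4*(k+1)**2 - 1) / ((m+1)**2 - (k+1)**2))
        nxt = alpha * x * P[k]
        if k > 0:
            nxt = nxt - (alpha / alpha_prev) * P[k-1]   # gamma_{k,m} P_{k-1,m}
        P.append(nxt)
        alpha_prev = alpha
    return P

m = 10
xi = -1 + 2 * np.arange(m + 1) / m         # the nodes of the scalar product
G = np.array(gram(3, m, xi))               # rows: P_{0,m}, ..., P_{3,m}
print(np.round(G @ G.T, 10))               # should be the 4x4 identity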


4.2 Periodic functions

We focus on the real vector space of continuous functions defined on $[0, 2\pi]$, with the standard scalar product $\langle f, g \rangle = \int_0^{2\pi} f(\theta) g(\theta)\, d\theta$. Let us define $P_{2n+1}$ as the linear subspace generated by the functions $\frac{1}{2}$, $\cos \theta$, $\sin \theta$, $\cos 2\theta$, $\sin 2\theta$, \ldots, $\cos n\theta$, $\sin n\theta$. Then, the best approximation to a function $f \in C^0_{\mathbb{R}}([0, 2\pi])$ by $f^* \in P_{2n+1}$ is given by

$$f^*(\theta) = \frac{a_0}{2} + \sum_{j=1}^{n} \left( a_j \cos j\theta + b_j \sin j\theta \right),$$

where

$$a_0 = \frac{1}{\pi} \int_0^{2\pi} f(\theta)\, d\theta, \qquad a_j = \frac{1}{\pi} \int_0^{2\pi} f(\theta) \cos j\theta\, d\theta, \qquad b_j = \frac{1}{\pi} \int_0^{2\pi} f(\theta) \sin j\theta\, d\theta.$$

4.2.1 Discrete case

Consider the discrete case given by the scalar product

$$\langle f, g \rangle_m = \sum_{i=0}^{m} f(\theta_i) g(\theta_i), \qquad \theta_i = \frac{2\pi i}{m + 1}, \quad i = 0, 1, \ldots, m.$$

If $2n \leq m$, the best approximation to a periodic function $f$ by $f^* \in P_{2n+1}$ is given by

$$f^*(\theta) = \frac{a_0}{2} + \sum_{j=1}^{n} \left( a_j \cos j\theta + b_j \sin j\theta \right),$$

where

$$a_0 = \frac{2}{m + 1} \sum_{i=0}^{m} f(\theta_i), \qquad a_j = \frac{2}{m + 1} \sum_{i=0}^{m} f(\theta_i) \cos j\theta_i, \qquad b_j = \frac{2}{m + 1} \sum_{i=0}^{m} f(\theta_i) \sin j\theta_i.$$
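
A sketch of the discrete coefficients and the resulting $f^*$; for $f(\theta) = \cos 2\theta$ with $2n \leq m$ the approximation reproduces $f$, so $a_2 = 1$:

import numpy as np

def trig_fit(f, n, m):
    # Coefficients of the best approximation in P_{2n+1} for <.,.>_m.
    theta = 2 * np.pi * np.arange(m + 1) / (m + 1)
    fi = f(theta)
    a = np.array([2/(m+1) * np.sum(fi * np.cos(j*theta)) for j in range(n+1)])
    b = np.array([2/(m+1) * np.sum(fi * np.sin(j*theta)) for j in range(n+1)])
    def fstar(t):
        t = np.asarray(t, dtype=float)
        s = a[0] / 2 * np.ones_like(t)
        for j in range(1, n + 1):
            s += a[j] * np.cos(j*t) + b[j] * np.sin(j*t)
        return s
    return a, b, fstar

a, b, fstar = trig_fit(lambda th: np.cos(2*th), n=3, m=16)
print(np.round(a, 12))                     # [0, 0, 1, 0]
t = np.linspace(0, 2*np.pi, 5)
print(np.max(np.abs(fstar(t) - np.cos(2*t))))   # ~ 0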


5 Ordinary differential equations

Consider the initial value problem

$$\left. \begin{array}{l} x' = f(t, x) \\ x(t_0) = x_0 \end{array} \right\},$$

where $x \in \mathbb{R}^n$. The Euler method is defined by

$$t_{n+1} = t_n + h_n, \qquad x_{n+1} = x_n + h_n f(t_n, x_n).$$

The Implicit Euler method is given by

$$t_{n+1} = t_n + h_n, \qquad x_{n+1} = x_n + h_n f(t_{n+1}, x_{n+1}).$$
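
Both methods in a short sketch with constant step; the implicit equation $x_{n+1} = x_n + h f(t_{n+1}, x_{n+1})$ is solved here by fixed-point iteration (an assumption of this sketch; a Newton solve is the usual choice for stiff problems):

import numpy as np

def euler(f, t0, x0, h, nsteps):
    t, x = t0, np.asarray(x0, dtype=float)
    for _ in range(nsteps):
        x = x + h * f(t, x)                # explicit Euler step
        t = t + h
    return t, x

def implicit_euler(f, t0, x0, h, nsteps, inner=50):
    t, x = t0, np.asarray(x0, dtype=float)
    for _ in range(nsteps):
        t1 = t + h
        x1 = x + h * f(t, x)               # explicit predictor
        for _ in range(inner):             # fixed-point iteration on x1
            x1 = x + h * f(t1, x1)
        t, x = t1, x1
    return t, x

# x' = -x, x(0) = 1: both should approximate exp(-1) at t = 1.
f = lambda t, x: -x
print(euler(f, 0.0, 1.0, 0.01, 100)[1], implicit_euler(f, 0.0, 1.0, 0.01, 100)[1])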

5.1 Some explicit one-step methods

Explicit one-step methods can be written as

$$t_{n+1} = t_n + h_n, \qquad x_{n+1} = \varphi(h_n; t_n, x_n).$$

• Heun method: $\varphi(h; t, x) = x + \frac{h}{2} \left[ f(t, x) + f(t + h, x + h f(t, x)) \right]$.

• Modified Euler method: $\varphi(h; t, x) = x + h f\left( t + \frac{1}{2} h,\, x + \frac{1}{2} h f(t, x) \right)$.

• Runge-Kutta method of order 4: $\varphi(h; t, x) = x + \frac{h}{6} \left[ k_1 + 2 k_2 + 2 k_3 + k_4 \right]$, where

$$k_1 = f(t, x), \quad k_2 = f\left( t + \tfrac{1}{2} h,\, x + \tfrac{1}{2} h k_1 \right), \quad k_3 = f\left( t + \tfrac{1}{2} h,\, x + \tfrac{1}{2} h k_2 \right), \quad k_4 = f(t + h,\, x + h k_3).$$
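
The three maps $\varphi$ side by side, driven by a common stepping loop (the test problem $x' = -x$ is an illustration of this sketch):

import numpy as np

def heun_phi(f, h, t, x):
    return x + h/2 * (f(t, x) + f(t + h, x + h * f(t, x)))

def modified_euler_phi(f, h, t, x):
    return x + h * f(t + h/2, x + h/2 * f(t, x))

def rk4_phi(f, h, t, x):
    k1 = f(t, x)
    k2 = f(t + h/2, x + h/2 * k1)
    k3 = f(t + h/2, x + h/2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def integrate(phi, f, t0, x0, h, nsteps):
    t, x = t0, x0
    for _ in range(nsteps):
        x = phi(f, h, t, x)
        t = t + h
    return t, x

# x' = -x, x(0) = 1; exact value at t = 1 is exp(-1) = 0.367879...
f = lambda t, x: -x
for phi in (heun_phi, modified_euler_phi, rk4_phi):
    print(phi.__name__, integrate(phi, f, 0.0, 1.0, 0.1, 10)[1])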
