Computer Algebra using Maple Part IV: [Numerical] Linear Algebra




Winfried Auzinger, Dirk Praetorius (SS2012)

restart:

with(LinearAlgebra);

1 Vectors and Matrices

Long and short forms:

Vector([a,b,c]), <a,b,c>; # , means new row

Vector[row]([1,2,3]), <1|2|3>; # | means new column

M:=Matrix([[a,b,c],

[d,e,f]]);

<<a|b|c>,<d|e|f>>; # specification row-wise

<<a,d>|<b,e>|<c,f>>; # specification column-wise

Row(M,2), Column(M,3);

Specifying a `generating function' (i,j)->f(i,j) that defines the entries:

f:=(i,j)->i+j: A:=Matrix(2,4,f);

Playing LEGO (block form):

x:=<1,2,3>: Matrix([x,x]);

For special internal formats, see

? shape

? storage
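
For example (a quick illustration; shape=symmetric and shape=triangular[upper] are two of the predefined shapes):

S:=Matrix(3,3,shape=symmetric,symbol='s'); # enforces s[i,j]=s[j,i]
T:=Matrix(3,3,shape=triangular[upper],symbol='t'); # entries below the diagonal fixed to 0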


2 Package LinearAlgebra: Basic Operations

x:=Vector([alpha,beta,gamma]);

Dimension(x);

3

A:=Matrix([[1,2,3],[1,a,b]]);

Dimension(A);

RowDimension(A), ColumnDimension(A);

Rank(A); # generic rank!

2

Transpose(A);

In LinearAlgebra, many functions have long names. You can also define shortcuts:

alias(H=HermitianTranspose): H(A);

Matrix-Vector and (non-commutative) Matrix-Matrix multiplication is specified as follows:

MatrixVectorMultiply(A,x), A.x;

MatrixMatrixMultiply(A,H(A)), A.H(A);


You can perform symbolic / numeric / mixed calculations.

A:=Matrix(3,3,(i,j)->a[i]*b[j]); # rank-1-Matrix

Determinant(A);

0

MatrixInverse(A), A^(-1);

Error, (in LinearAlgebra:-MatrixInverse) singular matrix

A:=Matrix([[1,a],[2,b]]);

A^(-1); # generically regular (for b<>2*a)

A := evalf(RandomMatrix(3,3));

A^(-1);
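
A quick sanity check for the numerical inverse (a small add-on, not in the original worksheet): the residual should be at the level of rounding errors.

Norm(A.A^(-1)-IdentityMatrix(3),infinity); # should be close to 0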

For row or column vectors, the dot operator evaluates the inner product:

x:=Vector[row](4,symbol='xi');

y:=Vector[row](4,symbol='eta');

x.y;

This is equivalent to:

DotProduct(x,y);

If you are assuming real data, avoid conjugation:

DotProduct(x,y,conjugate=false);

Numeric data:

x,y:=Vector([1+I,2+I,3+I]),Vector([1,2,3]);

x.y;

x,y:=RandomVector(5),RandomVector(5):

x.y;

11752

Euclidean norm:

sqrt(x.x), Norm(x,2);

3 Some useful general functions from LinearAlgebra

For Vector:

n:=3:

x,y:=Vector(3,symbol='xi'),Vector(3,symbol='eta');

Transpose(x); # convert to row vector

Transpose(%);

x.Transpose(y);

OuterProductMatrix(x,y);

DotProduct(x,y); # inner product

CrossProduct(x,y); # cross (vector) product

Special Vectors:

seq(UnitVector(i,n),i=1..n);

ConstantVector(c,n);

For Matrix:

n:=3:

A:=Matrix(n,n,symbol='a');

Diagonal(A);

Determinant(A);

CharacteristicPolynomial(A,lambda);

Special Matrices:

ZeroMatrix(n), IdentityMatrix(n);

ConstantMatrix(c,n);

V:=VandermondeMatrix(x);


The names of most functions in LinearAlgebra are self-explanatory. If you know what the companion matrix of a polynomial is, use CompanionMatrix:

p:=1+2*t+3*t^2+t^3;

C:=CompanionMatrix(p,t);

CharacteristicPolynomial(C,t);

Solution of a linear system of equations:

b:=Vector([alpha,beta,gamma]);

LinearSolve(V,b);

4 Finding a rule by experiment

restart:

with(LinearAlgebra);

Given: a bidiagonal matrix

n:=4;

B:=DiagonalMatrix([seq(lambda[i],i=1..n)]):

for i from 1 to n-1 do B[i,i+1]:=1 end do:

B,B^2,map(expand,B^3);

map(expand,B^4);

From this observation, you may guess the general form of B^p. Not obvious, but not VERY difficult.

Special case: Jordan block

B:=DiagonalMatrix([seq(lambda,i=1..n)]):

for i from 1 to n-1 do B[i,i+1]:=1 end do:

B,B^5;
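
To verify the guessed closed form for the Jordan block (a sketch; Jguess is a name introduced here, and the formula is the binomial expansion of (lambda*I+N)^p):

p:=5:
Jguess:=Matrix(n,n,(i,j)->`if`(j>=i,binomial(p,j-i)*lambda^(p-(j-i)),0)):
map(expand,B^p-Jguess); # expected: the zero matrix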

5 Numerical Linear Algebra I: Basics

Many algorithms from linear algebra only make sense, in general, for numerical data.

n:=3;

A:=evalf(RandomMatrix(n,n));

b:=evalf(RandomVector(n));

Solve linear system of equations. Algorithm: stabilized Gaussian elimination based on a PLU decomposition of A:

LinearSolve(A,b);

Solve eigenproblem for A. Algorithm: QR iteration:

Eigenvalues(A); # Vector of eigenvalues

evalues,evectors:=Eigenvectors(A); # Vector of eigenvalues,
# Matrix of eigenvectors (columns)

Evidently, the eigensystem is real; the 0.*I terms are imaginary rounding noise.

Convert to real:

evalues:=Re(evalues);

evectors:=Re(evectors);


Check:

evalf(A-evectors.DiagonalMatrix(evalues).evectors^(-1));

Extract eigenvectors from matrix:

for i from 1 to n do

ev[i] := Column(evectors,i)

end do;
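
Each eigenpair can be checked via its residual (a small add-on, not part of the original worksheet):

for i from 1 to n do
Norm(A.ev[i]-evalues[i]*ev[i],2) # residual of the i-th eigenpair, close to 0
end do;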

6 Numerical Linear Algebra II: Hardware floats

restart;

with(LinearAlgebra):

For efficient solution of larger problems, resort to hardware float operations (64 bit IEEE double precision). The data type is hfloat = float[8].


Use HFloat or evalhf instead of evalf to generate an hfloat object.

half:=HFloat(0.5);

whattype(half),

type(half,hfloat),

type(half,float[8]);

WARNING: evalhf evaluates an expression in hardware floating point arithmetic, but the result is given back as a normal float (sfloat).

pi2:=evalhf(Pi^2);

type(pi2,hfloat), type(pi2,sfloat);

Use convert(...,hfloat) to convert an object to an hfloat:

convert(pi2,hfloat);

9.86960440108936

type(%,hfloat);

true

This looks a little bit complicated. For practice, change the value of the environment variable UseHardwareFloats:

UseHardwareFloats;

deduced

UseHardwareFloats:=true;

Now, all operations involving exclusively hfloat data are automatically performed in machine arithmetic, and the result is given back as an hfloat object.

A:=HilbertMatrix(4,4,datatype=hfloat); # generate matrix with hfloat entries

A^(-1);

type(%[1,1],hfloat);

true

eig:=Eigenvalues(A);

type(eig[1],hfloat);

false

The Hilbert matrix is symmetric, with real eigenvalues. The numerical result contains imaginary rounding noise. Therefore the data type is complex[8] instead of float[8]:

type(eig[1],complex[8]);

true

eig:=Re(eig);

type(eig[1],hfloat);

true

Performance comparison: We invert the 120x120 Hilbert matrix
- exactly,
- in software floating point arithmetic,
- in hardware floating point arithmetic.

n:=120;

H := HilbertMatrix(n,n):

start:=time():

H^(-1):

time()-start;

12.823

Hs := evalf(H):

start:=time():

Hs^(-1):

time()-start;

0.016

Hh:=evalhf(H): # this works: result matrix stored as hfloat object

type(Hh[1,1],hfloat);

true

start:=time():

Hh^(-1):

time()-start;

0.

7 Numerical Linear Algebra III: Examples

restart:

with(LinearAlgebra):

UseHardwareFloats:=true;

For hfloat data, the operations in LinearAlgebra automatically resort to an optimized built-in numerical linear algebra library.

7.1 Solution of a least squares problem

Function LeastSquares solves a least squares problem ||Ax-b||_2 --> min! We use it to compute a regression line for given data.

n:=7;

t := [seq(HFloat(i),i=1..n)];

y := [seq(HFloat(i^3),i=1..n)];

V := VandermondeMatrix(t,n,2,datatype=hfloat);

Y := convert(y,Vector): type(Y[1],hfloat);

true

LinearSolve(V,Y); # too many equations, no solution

Error, (in LinearAlgebra:-LinearSolve) inconsistent system

a := LeastSquares(V,Y);

line := tau->a[1]+tau*a[2];

p1 := plot(line(tau),tau=t[1]-1..t[n]+1,thickness=6,color=blue):

p2 := plots[pointplot]([seq([t[i],y[i]],i=1..n)],

symbolsize=20,symbol=circle,color=red):

plots[display](p1,p2,title="regression line to given data");

[Plot: regression line to given data]
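
The quality of the fit can be quantified by the residual norm (a small add-on, not in the original worksheet):

Norm(V.a-Y,2); # Euclidean norm of the least squares residual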

7.2 Matrix power based on eigendecomposition (a `spectral method')

For certain classes of matrices A, computing a matrix power A^p can be performed more efficiently using its spectral decomposition.

E.g., if A is symmetric, its eigenvalues are real and its full set of eigenvectors is orthogonal. I.e., A = Q.S.Q^T, with S real diagonal, Q orthogonal. Then, A^p = Q.S^p.Q^T.

The following procedure automatically uses the symmetric part of A.

spectralpower:=proc(A::Matrix,p)

uses LinearAlgebra: # automatically activate LinearAlgebra

option hfloat: # enforce hardware floating point arithmetic

local As,S,Q:

As := 0.5*(A+Transpose(A)): # symmetric part of A

S,Q:=Eigenvectors(As): # S real, Q orthogonal

S:=Re(DiagonalMatrix(S)):

Q:=Re(Q):

return Q.S^p.Transpose(Q):

end proc:

A:=RandomMatrix(100,100,datatype=hfloat,shape=symmetric):

A:=A/Norm(A,infinity):

print(type(A[1,1],hfloat)):

true

p:=10;

start:=time():

Ap1:=spectralpower(A,p):

time()-start;

0.109

print(type(Ap1[1,1],hfloat)):

true

Now compare with MatrixPower: We see that this even performs better. Evidently, LinearAlgebra also uses a spectral, or related, method for hfloat data in this case. However, the algorithms are not documented in the help system.

start:=time(): Ap2:=A^p: time()-start;

0.016

print(type(Ap2[1,1],hfloat)):

true

Norm(Ap1-Ap2,infinity); # compare results

7.3 Polynomial differentiation weights

Assume that you want to compute the derivatives of (many) polynomials of degree n at a given, fixed evaluation point tau.


We assume that the polynomials are specified by their values at n+1 nodes. This can be expressed by algebraic operations.

- For nodes t[1],...,t[n+1]: given y[1]:=p(t[1]), ..., y[n+1]:=p(t[n+1]).
- For an evaluation point tau: find a formula p'(tau) = sum_(i=1)^(n+1) alpha[i](tau)*p(t[i]).
- You can find it by hand or delegate this job to Maple.

We consider a particular case:

n:=4;

t:=[0,1,2,3,4];

p:=unapply(add(c[i]*tau^i,i=0..n),tau): p(tau);

desired_identity:=D(p)(tau)-add(alpha[i]*p(t[i]),i=1..n+1); # this should be 0

Now we compare coefficients in the c[i]:

for i from 0 to n do

eq[i]:=coeff(desired_identity,c[i])

end do;

These are 5 equations in 5 unknowns, depending on tau. We can solve them directly using solve, but for practice we convert them into a linear system:

A:=Matrix([seq([seq(coeff(eq[i],alpha[j]),j=1..n+1)],i=0..n)]);

b := -Vector([seq(subs(seq(alpha[j]=0,j=1..n+1),eq[i]),i=0..n)]);

Alpha := LinearSolve(A,b);

These are the weights we have been looking for. We check it for tau=10:

for i from 1 to n+1 do alpha[i]:=subs(tau=10,Alpha[i]) end do;


Now we compare:

add(alpha[i]*p(t[i]),i=1..n+1); # an inner product

D(p)(10); # OK

REMARKS:

- The result of this computation is a vector alpha. The inner product of alpha with the given p(t[i]) gives you the result value p'(tau), a simple numerical operation.
- For several tau's, you can generate a `differentiation matrix' row by row (see the sketch after these remarks). Then, evaluation of several derivative values is reduced to numerical matrix-vector multiplication (using hfloat objects in practice).
- For a given set of nodes t[i] and evaluation points tau[k], generating the coefficients of the differentiation matrix is easy to perform in Maple, as shown above.
- However, if you want to prove a general formula (for arbitrary n, t[i], tau), you have to do analysis by hand. Anyway, Maple may help you, at least for checking special cases.
- Such coefficients are, e.g., required for certain numerical approximation methods for solving differential equations. What happens is:
-- set up and solve the system for the desired coefficients, as indicated above,
-- save the computed coefficients (e.g., to a data file) and use them in your numerical code (whatever language you are using).
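
A minimal sketch of the second remark (the names taus and DM are chosen here for illustration and are not part of the original worksheet): evaluate the weight vector Alpha at several points and stack the rows.

taus:=[0.5,1.5,2.5]:
DM:=Matrix(nops(taus),n+1,(k,j)->subs(tau=taus[k],Alpha[j])); # one row of weights per evaluation point
DM.Vector([seq(p(t[i]),i=1..n+1)]); # all derivative values p'(taus[k]) in one matrix-vector product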

== end of part IV ==