# Chapter 7: Interpolation

### Transcript of Chapter 7 slides: Interpolation — Michael T. Heath, *Scientific Computing*

Chapter 7: Interpolation

❑ Topics:

❑ Examples

❑ Polynomial Interpolation – bases, error, Chebyshev, piecewise

❑ Orthogonal Polynomials

❑ Splines – error, end conditions

❑ Parametric interpolation

❑ Multivariate interpolation: f(x,y)

Interpolation › Polynomial Interpolation › Piecewise Polynomial Interpolation
(Motivation; Choosing Interpolant; Existence and Uniqueness)

Interpolation

Basic interpolation problem: for given data

    (t1, y1), (t2, y2), . . . , (tm, ym)   with   t1 < t2 < · · · < tm

determine function f : R → R such that

    f(ti) = yi,   i = 1, . . . , m

f is interpolating function, or interpolant, for given data

Additional data might be prescribed, such as slope of interpolant at given points

Additional constraints might be imposed, such as smoothness, monotonicity, or convexity of interpolant

f could be function of more than one variable, but we will consider only one-dimensional case

Michael T. Heath Scientific Computing 3 / 56

Note: We might look at multi-dimensional case, even though the text does not.


Purposes for Interpolation

Plotting smooth curve through discrete data points

Reading between lines of table

Differentiating or integrating tabular data

Quick and easy evaluation of mathematical function

Replacing complicated function by simple one



❑ Basis functions for function approximation in numerical solution of ordinary and partial differential equations (ODEs and PDEs).

❑ Basis functions for developing integration rules.

❑ Basis functions for developing differentiation techniques.

(Not just tabular data…)


Interpolation vs Approximation

By definition, interpolating function fits given data points exactly

Interpolation is inappropriate if data points are subject to significant errors

It is usually preferable to smooth noisy data, for example by least squares approximation

Approximation is also more appropriate for special function libraries



Issues in Interpolation

Arbitrarily many functions interpolate given set of data points

What form should interpolating function have?

How should interpolant behave between data points?

Should interpolant inherit properties of data, such as monotonicity, convexity, or periodicity?

Are parameters that define interpolating function meaningful?

If function and data are plotted, should results be visually pleasing?


For example, function values, slopes, etc. ?


Choosing Interpolant

Choice of function for interpolation based on

How easy interpolating function is to work with:
  ❑ determining its parameters
  ❑ evaluating interpolant
  ❑ differentiating or integrating interpolant

How well properties of interpolant match properties of data to be fit (smoothness, monotonicity, convexity, periodicity, etc.)


Also important: conditioning!


Functions for Interpolation

Families of functions commonly used for interpolation include

  ❑ Polynomials
  ❑ Piecewise polynomials
  ❑ Trigonometric functions
  ❑ Exponential functions
  ❑ Rational functions

For now we will focus on interpolation by polynomials and piecewise polynomials

We will consider trigonometric interpolation (DFT) later


A Classic Polynomial Interpolation Problem

• Suppose you’re asked to tabulate data such that linear interpolation between tabulated values is correct to 4 digits.

• How many entries are required on, say, [0,1]?

• How many digits should you have in the tabulated data?

A Classic Polynomial Interpolation Problem

An important polynomial interpolation result for f(x) ∈ Cⁿ:

If p(x) ∈ P_{n−1} and p(x_j) = f(x_j), j = 1, . . . , n, then there
exists a θ ∈ [x_1, x_2, . . . , x_n, x] such that

    f(x) − p(x) = (f⁽ⁿ⁾(θ) / n!) · (x − x_1)(x − x_2) · · · (x − x_n).

In particular, for linear interpolation (n = 2),

    f(x) − p(x) = (f′′(θ) / 2) · (x − x_1)(x − x_2)

    |f(x) − p(x)| ≤ ( max_{[x1:x2]} |f′′| / 2 ) · (h² / 4) = max_{[x1:x2]} h² |f′′| / 8

where the latter result pertains to x ∈ [x_1, x_2], with h := x_2 − x_1.

A Classic Polynomial Interpolation Problem

Example: f(x) = cos(x)

We know that |f′′| ≤ 1 and thus, for linear interpolation,

    |f(x) − p(x)| ≤ h² / 8.

If we want 4 decimal places of accuracy, accounting for rounding, we need

    |f(x) − p(x)| ≤ h² / 8 ≤ (1/2) × 10⁻⁴
    h² ≤ 4 × 10⁻⁴
    h ≤ 0.02

      x     cos x
    0.00   1.00000
    0.02   0.99980
    0.04   0.99920
    0.06   0.99820
    0.08   0.99680
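The spacing requirement above is easy to check numerically. Here is a small Python sketch (not part of the original slides): tabulate cos on [0, 1] with h = 0.02 and measure the worst-case error of linear interpolation between entries.

```python
import math

# Tabulate cos(x) on [0, 1] with spacing h = 0.02 and measure the worst-case
# error of piecewise linear interpolation between tabulated values.
h = 0.02
n = int(round(1.0 / h))
xs = [i * h for i in range(n + 1)]
ys = [math.cos(x) for x in xs]

def lin_interp(x):
    j = min(int(x / h), n - 1)      # interval index
    s = (x - xs[j]) / h             # local coordinate in [0, 1]
    return (1 - s) * ys[j] + s * ys[j + 1]

# Sample each interval finely and record the maximum error
max_err = max(abs(lin_interp(x) - math.cos(x))
              for i in range(n) for x in [xs[i] + h * k / 50 for k in range(51)])

print(max_err)        # observed error
print(h * h / 8)      # theoretical bound h^2 max|f''| / 8 = 5e-5
```

The observed error sits just under the predicted bound of 5 × 10⁻⁵, confirming that h = 0.02 suffices for 4-digit accuracy.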


Basis Functions

Family of functions for interpolating given data points is spanned by set of basis functions φ1(t), . . . , φn(t)

Interpolating function f is chosen as linear combination of basis functions,

    f(t) = Σ_{j=1}^{n} x_j φ_j(t)

Requiring f to interpolate data (ti, yi) means

    f(ti) = Σ_{j=1}^{n} x_j φ_j(ti) = yi,   i = 1, . . . , m

which is system of linear equations Ax = y for n-vector x of parameters x_j, where entries of m × n matrix A are given by a_ij = φ_j(ti)


InterpolationPolynomial Interpolation

Piecewise Polynomial Interpolation

MotivationChoosing InterpolantExistence and Uniqueness

Existence, Uniqueness, and Conditioning

Existence and uniqueness of interpolant depend on number of data points m and number of basis functions n

If m > n, interpolant usually doesn't exist

If m < n, interpolant is not unique

If m = n, then basis matrix A is nonsingular provided data points ti are distinct, so data can be fit exactly

Sensitivity of parameters x to perturbations in data depends on cond(A), which depends in turn on choice of basis functions


Interpolation › Polynomial Interpolation › Piecewise Polynomial Interpolation
(Monomial, Lagrange, and Newton Interpolation; Orthogonal Polynomials; Accuracy and Convergence)

Polynomial Interpolation

Simplest and most common type of interpolation uses polynomials

Unique polynomial of degree at most n − 1 passes through n data points (ti, yi), i = 1, . . . , n, where ti are distinct

There are many ways to represent or compute interpolating polynomial, but in theory all must give same result

< interactive example >


(i.e., in infinite precision arithmetic)


Monomial Basis

Monomial basis functions

    φ_j(t) = t^{j−1},   j = 1, . . . , n

give interpolating polynomial of form

    p_{n−1}(t) = x1 + x2 t + · · · + xn t^{n−1}

with coefficients x given by n × n linear system

    Ax = [ 1  t1  · · ·  t1^{n−1} ] [ x1 ]   [ y1 ]
         [ 1  t2  · · ·  t2^{n−1} ] [ x2 ] = [ y2 ] = y
         [ ⋮   ⋮    ⋱      ⋮     ] [ ⋮  ]   [ ⋮  ]
         [ 1  tn  · · ·  tn^{n−1} ] [ xn ]   [ yn ]

Matrix of this form is called Vandermonde matrix



Example: Monomial Basis

Determine polynomial of degree two interpolating three data points (−2, −27), (0, −1), (1, 0)

Using monomial basis, linear system is

    Ax = [ 1  t1  t1² ] [ x1 ]   [ y1 ]
         [ 1  t2  t2² ] [ x2 ] = [ y2 ] = y
         [ 1  t3  t3² ] [ x3 ]   [ y3 ]

For these particular data, system is

    [ 1  −2  4 ] [ x1 ]   [ −27 ]
    [ 1   0  0 ] [ x2 ] = [  −1 ]
    [ 1   1  1 ] [ x3 ]   [   0 ]

whose solution is x = [−1  5  −4]ᵀ, so interpolating polynomial is

    p2(t) = −1 + 5t − 4t²

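The example can be reproduced mechanically. The sketch below (a hypothetical helper, not part of the slides) builds the 3 × 3 Vandermonde matrix and solves it with a hand-rolled Gaussian elimination:

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # pivot row
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                        # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

t = [-2.0, 0.0, 1.0]
y = [-27.0, -1.0, 0.0]
V = [[1.0, ti, ti * ti] for ti in t]   # Vandermonde matrix, monomial basis
x = solve(V, y)
print(x)   # coefficients of p2(t) = x[0] + x[1] t + x[2] t^2
```

The solver returns the coefficients [−1, 5, −4] from the slide, i.e. p2(t) = −1 + 5t − 4t².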


Monomial Basis, continued

< interactive example >

Solving system Ax = y using standard linear equation solver to determine coefficients x of interpolating polynomial requires O(n³) work



Monomial Basis, continued

For monomial basis, matrix A is increasingly ill-conditioned as degree increases

Ill-conditioning does not prevent fitting data points well, since residual for linear system solution will be small

But it does mean that values of coefficients are poorly determined

Both conditioning of linear system and amount of computational work required to solve it can be improved by using different basis

Change of basis still gives same interpolating polynomial for given data, but representation of polynomial will be different



Monomial Basis, continued

Conditioning with monomial basis can be improved by shifting and scaling independent variable t,

    φ_j(t) = ((t − c) / d)^{j−1}

where c = (t1 + tn)/2 is midpoint and d = (tn − t1)/2 is half of range of data

New independent variable lies in interval [−1, 1], which also helps avoid overflow or harmful underflow

Even with optimal shifting and scaling, monomial basis usually is still poorly conditioned, and we must seek better alternatives

< interactive example >


Polynomial Interpolation

❑ Two types: Global or Piecewise

❑ Choices:
  ❑ A: points are given to you
  ❑ B: you choose the points

❑ Case A: piecewise polynomials are most common – STABLE.
  ❑ Piecewise linear
  ❑ Splines
  ❑ Hermite (matlab "pchip" – piecewise cubic Hermite interpolating polynomial)

❑ Case B: high-order polynomials are OK if points chosen wisely
  ❑ Roots of orthogonal polynomials
  ❑ Convergence is exponential: err ~ C e^{−σn} for some σ > 0, instead of algebraic: err ~ C n^{−k}

Piecewise Polynomial Interpolation

❑ Example – Given the table below,

❑ Q: What is f(x=0.75) ?

     xj    fj
    0.6   1.2
    0.8   2.0
    1.0   2.4

Polynomial interpolation error formula:

If p(x) ∈ P_{n−1} and p(xj) = f(xj), j = 1, . . . , n, then there
exists a θ ∈ [x1, x2, . . . , xn, x] such that

    f(x) − p(x) = (f⁽ⁿ⁾(θ) / n!) · (x − x1)(x − x2) · · · (x − xn)

                = (f⁽ⁿ⁾(θ) / n!) · qn(x),   qn(x) ∈ Pn.

Polynomial Interpolation

❑ Example – Given the table below,

❑ Q: What is f(x=0.75) ?
❑ A: 1.8 --- You've just done (piecewise) linear interpolation.

❑ Moreover, you know the error is ≤ (0.2)² f′′ / 8.

     xj    fj
    0.6   1.2
    0.8   2.0
    1.0   2.4

General Polynomial Interpolation

     xj    fj
    0.6   1.2
    0.8   2.0
    1.0   2.4

• Whether interpolating on segments or globally, error formula applies over the interval.

If p(t) ∈ P_{n−1} and p(tj) = f(tj), j = 1, . . . , n, then there
exists a θ ∈ [t1, t2, . . . , tn, t] such that

    f(t) − p(t) = (f⁽ⁿ⁾(θ) / n!) · (t − t1)(t − t2) · · · (t − tn)

                = (f⁽ⁿ⁾(θ) / n!) · qn(t),   qn(t) ∈ Pn.

• We generally have no control over f⁽ⁿ⁾(θ), so instead seek to optimize choice of the tj in order to minimize

    max_{t ∈ [t1, tn]} |qn(t)|.

• Such a problem is called a minimax problem and the solution is given by the tj's being the roots of a Chebyshev polynomial, as we will discuss shortly.

• First, however, we turn to the problem of constructing p(t) ∈ P_{n−1}.

Constructing High-Order Polynomial Interpolants

❑ Lagrange Polynomials

    p(t) = Σ_{j=1}^{n} fj lj(t)

    lj(t) = 1 at t = tj
    lj(ti) = 0 at t = ti, i ≠ j
    lj(t) ∈ P_{n−1}

    lj(t) = (1/C) · (t − t1)(t − t2) · · · (t − t_{j−1})(t − t_{j+1}) · · · (t − tn)

    C = (tj − t1)(tj − t2) · · · (tj − t_{j−1})(tj − t_{j+1}) · · · (tj − tn)

    lj(t) = ((t − t1)/(tj − t1)) ((t − t2)/(tj − t2)) · · · ((t − t_{j−1})/(tj − t_{j−1})) · ((t − t_{j+1})/(tj − t_{j+1})) · · · ((t − tn)/(tj − tn)).

The lj(t) polynomials are chosen so that p(tj) = f(tj) := fj

The lj(t)'s are sometimes called the Lagrange cardinal functions.


Constructing High-Order Polynomial Interpolants

❑ Although a bit tedious to do by hand, these formulas are relatively easy to evaluate with a computer.

❑ So, to recap – Lagrange polynomial interpolation:

• Construct p(t) = Σ_j fj lj(t).
• lj(t) given by above.
• Error formula f(t) − p(t) given as before.
• Can choose tj's to minimize error polynomial qn(t).
• lj(t) is a polynomial of degree n − 1.
• It is zero at t = ti, i ≠ j.
• Choose C so that it is 1 at t = tj.
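The recap above maps directly to code. A minimal Python sketch (not from the slides): build each cardinal function lj(t) as a product of linear factors and sum p(t) = Σ_j fj lj(t), reusing the data from the earlier monomial example.

```python
def lagrange_basis(ts, j, t):
    """Cardinal function l_j(t): equals 1 at ts[j], 0 at every other node."""
    lj = 1.0
    for k, tk in enumerate(ts):
        if k != j:
            lj *= (t - tk) / (ts[j] - tk)
    return lj

def lagrange_interp(ts, fs, t):
    """Evaluate p(t) = sum_j fs[j] * l_j(t)."""
    return sum(fj * lagrange_basis(ts, j, t) for j, fj in enumerate(fs))

# Data from the earlier example, whose interpolant is p2(t) = -1 + 5t - 4t^2
ts = [-2.0, 0.0, 1.0]
fs = [-27.0, -1.0, 0.0]
print(lagrange_interp(ts, fs, 0.5))   # -1 + 2.5 - 1 = 0.5
```

Evaluating this way costs O(n²) per point, but no linear system needs to be solved: the basis matrix is the identity.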

Lagrange Basis Functions, n=2 (linear)

Lagrange Basis Functions, n=3 (quadratic)

Also have the Newton Basis

❑ These formulas are interesting because they are adaptive. (More details are in the text, but this is the essence of the method.)

• Given pn(tj) = fj, j = 1, . . . , n, and pn ∈ P_{n−1}.

• Let

    p_{n+1}(t) := pn(t) + C (t − t1)(t − t2) · · · (t − tn)

  such that p_{n+1}(t_{n+1}) = f_{n+1}

• Set

    C = (f_{n+1} − pn(t_{n+1})) / qn(t_{n+1}),

  with qn(t) := (t − t1)(t − t2) · · · (t − tn) ∈ Pn
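The adaptive update above can be coded directly. In this sketch (an illustration, not from the text; the helper name is hypothetical), each new data point extends the current interpolant without recomputing anything already built:

```python
def newton_add_point(p, ts, t_new, f_new):
    """Return p_{n+1}(t) = p(t) + C * prod_j (t - ts[j]) interpolating the new point."""
    nodes = tuple(ts)            # snapshot: q must not see later appends to ts
    def q(t):
        prod = 1.0
        for tj in nodes:
            prod *= (t - tj)
        return prod
    C = (f_new - p(t_new)) / q(t_new)
    def p_next(t, p=p, C=C, q=q):
        return p(t) + C * q(t)
    return p_next

# Build an interpolant of f(t) = t^2 one point at a time.
p = lambda t: 1.0                # p_1: constant through (1, 1)
ts = [1.0]
for t_new in [2.0, 3.0]:
    p = newton_add_point(p, ts, t_new, t_new ** 2)
    ts.append(t_new)

print(p(2.5))   # degree-2 interpolant of t^2 is exact: 6.25
```

Each update costs O(n) work, which is what makes the Newton form attractive when points arrive incrementally.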


Lagrange Interpolation

For given set of data points (ti, yi), i = 1, . . . , n, Lagrange basis functions are defined by

    ℓj(t) = Π_{k=1, k≠j}^{n} (t − tk) / Π_{k=1, k≠j}^{n} (tj − tk),   j = 1, . . . , n

For Lagrange basis,

    ℓj(ti) = { 1 if i = j
             { 0 if i ≠ j ,   i, j = 1, . . . , n

so matrix of linear system Ax = y is identity matrix

Thus, Lagrange polynomial interpolating data points (ti, yi) is given by

    p_{n−1}(t) = y1 ℓ1(t) + y2 ℓ2(t) + · · · + yn ℓn(t)



Lagrange Basis Functions

< interactive example >

Lagrange interpolant is easy to determine but more expensive to evaluate for given argument, compared with monomial basis representation

Lagrangian form is also more difficult to differentiate, integrate, etc.


These concerns are important when computing by hand, but not important when using a computer.


Example: Lagrange Interpolation

Use Lagrange interpolation to determine interpolating polynomial for three data points (−2, −27), (0, −1), (1, 0)

Lagrange polynomial of degree two interpolating three points (t1, y1), (t2, y2), (t3, y3) is given by

    p2(t) = y1 (t − t2)(t − t3) / ((t1 − t2)(t1 − t3))
          + y2 (t − t1)(t − t3) / ((t2 − t1)(t2 − t3))
          + y3 (t − t1)(t − t2) / ((t3 − t1)(t3 − t2))

For these particular data, this becomes

    p2(t) = −27 · t(t − 1) / ((−2)(−2 − 1)) + (−1) · (t + 2)(t − 1) / ((2)(−1))



Interpolating Continuous Functions

If data points are discrete sample of continuous function, how well does interpolant approximate that function between sample points?

If f is smooth function, and p_{n−1} is polynomial of degree at most n − 1 interpolating f at n points t1, . . . , tn, then

    f(t) − p_{n−1}(t) = (f⁽ⁿ⁾(θ) / n!) · (t − t1)(t − t2) · · · (t − tn)

where θ is some (unknown) point in interval [t1, tn]

Since point θ is unknown, this result is not particularly useful unless bound on appropriate derivative of f is known



Interpolating Continuous Functions, continued

If |f⁽ⁿ⁾(t)| ≤ M for all t ∈ [t1, tn], and h = max{t_{i+1} − ti : i = 1, . . . , n − 1}, then

    max_{t ∈ [t1, tn]} |f(t) − p_{n−1}(t)| ≤ M hⁿ / (4n)

Error diminishes with increasing n and decreasing h, but only if |f⁽ⁿ⁾(t)| does not grow too rapidly with n

< interactive example >



Convergence

Polynomial interpolating continuous function may not converge to function as number of data points and polynomial degree increases

Equally spaced interpolation points often yield unsatisfactory results near ends of interval

If points are bunched near ends of interval, more satisfactory results are likely to be obtained with polynomial interpolation

Use of Chebyshev points distributes error evenly and yields convergence throughout interval for any sufficiently smooth function


Unstable and Stable Interpolating Basis Sets

❑ Examples of unstable bases are:
  ❑ Monomials (modal): φi = xⁱ
  ❑ High-order Lagrange interpolants (nodal) on uniformly-spaced points.

❑ Examples of stable bases are:
  ❑ Orthogonal polynomials (modal), e.g.,
    ❑ Legendre polynomials: Lk(x), or
    ❑ bubble functions: φk(x) := L_{k+1}(x) − L_{k−1}(x).
  ❑ Lagrange (nodal) polynomials based on Gauss quadrature points (e.g., Gauss-Legendre, Gauss-Chebyshev, Gauss-Lobatto-Legendre, etc.)

❑ Can map back and forth between stable nodal bases and Legendre or bubble function modal bases, with minimal information loss.


Unstable and Stable Interpolating Basis Sets

• Key idea for Chebyshev interpolation is to choose points that minimize max |q_{n+1}(x)| on interval I := [−1, 1], where

    q_{n+1}(x) := (x − x0)(x − x1) . . . (x − xn)
               = x^{n+1} + cn xⁿ + . . . + c0

which is a monic polynomial of degree n + 1.

• The roots of the Chebyshev polynomial T_{n+1}(x) yield such a set of points by clustering near the endpoints.


Lagrange Polynomials: Good and Bad Point Distributions

[Figure: Lagrange basis functions φ2 and φ4 for N = 4, 7, 8, on uniform vs Gauss-Lobatto-Legendre points.]

Here, we see max |q_{n+1}| for uniform (red) and Chebyshev points. Chebyshev converges much more rapidly.

Nth-order Gauss-Chebyshev Points

❑ Roots of Nth-order Chebyshev polynomial are projections onto [−1, 1] of equispaced points on the circle, starting with θ = π/(2N), then θ = 3π/(2N), . . . , π − π/(2N).

N+1 Gauss-Lobatto-Chebyshev Points

❑ N+1 GLC points are projections of equispaced points on the circle, starting with θ = 0, then θ = π/N, 2π/N, . . . , kπ/N, . . . , π.
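Both point families are one-liners to generate. A Python sketch (not from the slides), including a spot-check that the Gauss points really are roots of T_N:

```python
import math

def gauss_chebyshev(N):
    """Roots of T_N: x_k = cos((2k-1) pi / (2N)), k = 1..N."""
    return [math.cos((2 * k - 1) * math.pi / (2 * N)) for k in range(1, N + 1)]

def gauss_lobatto_chebyshev(N):
    """N+1 GLC points: x_k = cos(k pi / N), k = 0..N (includes endpoints)."""
    return [math.cos(k * math.pi / N) for k in range(N + 1)]

def T(N, x):
    """Chebyshev polynomial T_N(x) = cos(N arccos x) for |x| <= 1."""
    return math.cos(N * math.acos(max(-1.0, min(1.0, x))))

N = 8
for x in gauss_chebyshev(N):
    assert abs(T(N, x)) < 1e-9        # each point is a root of T_N
glc = gauss_lobatto_chebyshev(N)
print(glc[0], glc[-1])                # endpoints +1 and -1 are included
```

Note the clustering: spacing near the endpoints is O(1/N²), versus O(1/N) in the middle.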

Nth-Order Gauss Chebyshev Points

❑ Matlab Demo


Example: Runge’s Function

Polynomial interpolants of Runge's function at equally spaced points do not converge

< interactive example >


Example: Runge’s Function

Polynomial interpolants of Runge's function at Chebyshev points do converge

< interactive example >


Chebyshev Convergence is exponential for smooth f(t).
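The contrast is easy to reproduce. This Python sketch (an illustration, not the course's interactive example) interpolates Runge's function f(t) = 1/(1 + 25t²) on [−1, 1] at n = 15 uniform vs Chebyshev points and compares the maximum error on a fine mesh:

```python
import math

def f(t):
    return 1.0 / (1.0 + 25.0 * t * t)   # Runge's function

def lagrange_interp(ts, fs, t):
    total = 0.0
    for j in range(len(ts)):
        lj = 1.0
        for k in range(len(ts)):
            if k != j:
                lj *= (t - ts[k]) / (ts[j] - ts[k])
        total += fs[j] * lj
    return total

n = 15
uniform = [-1.0 + 2.0 * j / (n - 1) for j in range(n)]
cheb = [math.cos((2 * j + 1) * math.pi / (2 * n)) for j in range(n)]
fine = [-1.0 + 2.0 * i / 400 for i in range(401)]

def max_err(ts):
    fs = [f(t) for t in ts]
    return max(abs(lagrange_interp(ts, fs, t) - f(t)) for t in fine)

print(max_err(uniform), max_err(cheb))   # uniform error dominated by endpoints
```

The uniform-point error is large near ±1 (the Runge phenomenon), while the Chebyshev-point error is small and spread evenly.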


Interpolation Testing

• Try a variety of methods for a variety of functions.

• Inspect by plotting the function and the interpolant.

• Compare with theoretical bounds. (Which are accurate!)

Typical Interpolation Experiment

• Given f(t), evaluate fj := f(tj), j = 1, . . . , n.

• Construct interpolant:

    p(t) = Σ_{j=1}^{n} p̂j φj(t).

• Evaluate p(t) at t̃i, i = 1, . . . , m, m ≫ n. (Fine mesh, for plotting, say.)

• To check error, compare with original function on fine mesh, t̃i:

    ei := p(t̃i) − f(t̃i)

    emax := maxi |ei| / maxi |fi| ≈ max |p − f| / max |f|.

(Remember, it’s an experiment.)


• Preceding description is for one trial.

• Repeat for increasing n and plot emax

(n) on a log-log or semilog plot.

• Compare with other methods and with theory:

– methods – identify best method for given function / requirements

– theory – verify that experiment is correctly implemented

• Repeat with a different function.
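One possible driver for such a sweep (a Python sketch under stated assumptions; the interp_test.m script referenced later is not reproduced here): record emax(n) for piecewise linear interpolation of sin t on [0, π] and check the O(h²) rate predicted by theory.

```python
import math

def f(t):
    return math.sin(t)

def emax_linear(n):
    """Relative e_max for piecewise linear interpolation of f on [0, pi], n points."""
    a, b = 0.0, math.pi
    ts = [a + (b - a) * j / (n - 1) for j in range(n)]
    fs = [f(t) for t in ts]
    h = (b - a) / (n - 1)
    fine = [a + (b - a) * i / 1000 for i in range(1001)]   # evaluation mesh
    err = 0.0
    for t in fine:
        j = min(int((t - a) / h), n - 2)       # segment containing t
        s = (t - ts[j]) / h
        p = (1 - s) * fs[j] + s * fs[j + 1]
        err = max(err, abs(p - f(t)))
    return err / max(abs(v) for v in fs)       # relative e_max

errs = {n: emax_linear(n) for n in (11, 21, 41)}
# Halving h should cut the error by about a factor of 4 (second order)
print(errs[11] / errs[21], errs[21] / errs[41])
```

In practice one plots emax(n) on a semilog or log-log axis; here the successive ratios near 4 already verify the h² theory.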


Summary of Key Theoretical Results

• Piecewise linear interpolation:

    max_{t ∈ [a,b]} |p − f| ≤ (h²/8) M,

  with M := max_{θ ∈ [a,b]} |f′′(θ)|, h := max_{j ∈ [2,...,n]} (tj − t_{j−1}), t_{j−1} < tj

• Polynomial interpolation through n points:

    max_{t ∈ [a,b]} |p − f| ≤ max |qn(θ)| M / n! ≤ hⁿ M / (4n)   (for t ∈ [a, b]),

  with M := max_{θ ∈ [a,b,t]} |f⁽ⁿ⁾(θ)|.

  – Here, qn(θ) := (θ − t1)(θ − t2) · · · (θ − tn).
  – The first result also holds true for extrapolation, i.e., t ∉ [a, b].

• Natural cubic spline (s′′(a) = s′′(b) = 0):

    max_{t ∈ [a,b]} |p − f| ≤ C h² M,   M = max_{θ ∈ [a,b]} |f′′(θ)|,

  unless f′′(a) = f′′(b) = 0, or other lucky circumstances.

• Clamped cubic spline (s′(a) = f′(a), s′(b) = f′(b)):

    max_{t ∈ [a,b]} |p − f| ≤ C h⁴ M,   M = max_{θ ∈ [a,b]} |f⁽ⁱᵛ⁾(θ)|.

• Nyquist sampling theorem:

  Roughly: The maximum frequency that can be resolved with n points is N = n/2.
  There are other conditions, such as limits on the spacing of the sampling.


• Methods:
  – piecewise linear
  – polynomial on uniform points
  – polynomial on Chebyshev points
  – natural cubic spline

• Tests:
  – e^t
  – e^{cos t}
  – sin t on [0, π]
  – sin t on [0, π/2]
  – sin 15t on [0, 2π]
  – e^{cos 11t} on [0, 2π]
  – Runge function: 1/(1 + 25t²) on [0, 1]
  – Runge function: 1/(1 + 25t²) on [−1, 1]
  – Semi-circle: √(1 − t²) on [−1, 1]
  – Polynomial: tⁿ
  – Extrapolation
  – Other

interp_test.m   interp_test_runge.m


Interpolation › Polynomial Interpolation › Piecewise Polynomial Interpolation
(Piecewise Polynomial Interpolation; Hermite Cubic Interpolation; Cubic Spline Interpolation)

Piecewise Polynomial Interpolation

Fitting single polynomial to large number of data points is likely to yield unsatisfactory oscillating behavior in interpolant

Piecewise polynomials provide alternative to practical and theoretical difficulties with high-degree polynomial interpolation

Main advantage of piecewise polynomial interpolation is that large number of data points can be fit with low-degree polynomials

In piecewise interpolation of given data points (ti, yi), different function is used in each subinterval [ti, t_{i+1}]

Abscissas ti are called knots or breakpoints, at which interpolant changes from one function to another



Piecewise Interpolation, continued

Simplest example is piecewise linear interpolation, in which successive pairs of data points are connected by straight lines

Although piecewise interpolation eliminates excessive oscillation and nonconvergence, it appears to sacrifice smoothness of interpolating function

We have many degrees of freedom in choosing piecewise polynomial interpolant, however, which can be exploited to obtain smooth interpolating function despite its piecewise nature

< interactive example >


Piecewise Polynomial Bases: Linear and Quadratic


Cubic Spline Interpolation

Spline is piecewise polynomial of degree k that is k − 1 times continuously differentiable

For example, linear spline is of degree 1 and has 0 continuous derivatives, i.e., it is continuous, but not smooth, and could be described as "broken line"

Cubic spline is piecewise cubic polynomial that is twice continuously differentiable

As with Hermite cubic, interpolating given data and requiring one continuous derivative imposes 3n − 4 constraints on cubic spline

Requiring continuous second derivative imposes n − 2 additional constraints, leaving 2 remaining free parameters


Spline conditions

Piecewise cubics:

• Interval Ij = [x_{j−1}, xj], j = 1, . . . , n

    pj(x) ∈ P3 on Ij
    pj(x) = aj + bj x + cj x² + dj x³

• 4n unknowns

    pj(x_{j−1}) = f_{j−1},       j = 1, . . . , n
    pj(xj) = fj,                 j = 1, . . . , n
    p′j(xj) = p′_{j+1}(xj),      j = 1, . . . , n − 1
    p′′j(xj) = p′′_{j+1}(xj),    j = 1, . . . , n − 1

• 4n − 2 equations


Cubic Splines, continued

Final two parameters can be fixed in various ways

Specify first derivative at endpoints t1 and tn

Force second derivative to be zero at endpoints, which gives natural spline

Enforce "not-a-knot" condition, which forces two consecutive cubic pieces to be same

Force first derivatives, as well as second derivatives, to match at endpoints t1 and tn (if spline is to be periodic)


• Force first derivatives at endpoints to match y’(x) – clamped spline.


Example: Cubic Spline Interpolation

Determine natural cubic spline interpolating three data points (ti, yi), i = 1, 2, 3

Required interpolant is piecewise cubic function defined by separate cubic polynomials in each of two intervals [t1, t2] and [t2, t3]

Denote these two polynomials by

    p1(t) = α1 + α2 t + α3 t² + α4 t³
    p2(t) = β1 + β2 t + β3 t² + β4 t³

Eight parameters are to be determined, so we need eight equations


Cubic Spline Formulation – 2 Segments

8 Unknowns:

  p1(t) = α1 + α2 t + α3 t² + α4 t³
  p2(t) = β1 + β2 t + β3 t² + β4 t³

8 Equations (Natural Spline):

Interpolatory:

  p1(t1) = y1
  p1(t2) = y2
  p2(t2) = y2
  p2(t3) = y3

Continuity of Derivatives:

  p1'(t2) = p2'(t2)
  p1''(t2) = p2''(t2)

End Conditions:

  p1''(t1) = 0
  p2''(t3) = 0

Some Cubic Spline Properties

❑ Continuous
❑ 1st derivative: continuous
❑ 2nd derivative: continuous
❑ "Natural spline" minimizes the integrated curvature ∫ [f''(x)]² dx over all twice-differentiable f(x) passing through (x_j, f_j), j = 1, ..., n.
❑ Robust / stable (unlike high-order polynomial interpolation)
❑ Commonly used in computer graphics, CAD software, etc.
❑ Usually used in parametric form (DEMO)
❑ There are other forms, e.g., tension splines, that are also useful.
❑ For clamped boundary conditions, convergence is O(h⁴)
❑ For small displacements, the natural spline behaves like a physical spline.

(DEMO)


Hermite Cubic vs Spline Interpolation

Choice between Hermite cubic and spline interpolation depends on data to be fit and on purpose for doing interpolation

If smoothness is of paramount importance, then spline interpolation may be most appropriate

But Hermite cubic interpolant may have more pleasing visual appearance and allows flexibility to preserve monotonicity if original data are monotonic

In any case, it is advisable to plot interpolant and data to help assess how well interpolating function captures behavior of original data

< interactive example >

Michael T. Heath Scientific Computing 49 / 56


Hermite Cubic vs Spline Interpolation

Michael T. Heath Scientific Computing 50 / 56

Matlab “pchip()” function
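SciPy's PchipInterpolator is the Python analog of MATLAB's pchip(). A short sketch (illustrative data, not from the text) contrasting it with a C² natural spline on monotone step-like data, where the spline over/undershoots but the shape-preserving Hermite cubic does not:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Monotone step-like data; illustrative values.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.0, 1.0, 1.0])

cs = CubicSpline(x, y, bc_type='natural')   # C2 spline: smooth, but can undershoot
pc = PchipInterpolator(x, y)                # shape-preserving Hermite cubic (pchip)

xx = np.linspace(0.0, 3.0, 301)
print("spline min:", cs(xx).min())          # dips below 0 between the flat points
print("pchip  min:", pc(xx).min())          # stays within [0, 1]
```

This is exactly the trade-off on the slide: the spline buys C² smoothness at the cost of monotonicity, while pchip preserves the shape of the data.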

Parametric Interpolation

❑ Important when y is not a function of x; instead, define [ x(t), y(t) ] such that both coordinates are (preferably smooth) functions of a parameter t.

❑ Example 1: a circle.

Parametric Interpolation: Example 2

❑ Suppose we want to approximate a cursive letter.

❑ Use (minimally curvy) splines, parameterized.

Parametric Interpolation: Example 2

❑ Once we have our (x_i, y_i) pairs, we still need to pick t_i.

❑ One possibility: t_i = i, but usually it's better to parameterize by arclength, if x and y have the same units.

❑ An approximate arclength is:

  s_i = Σ_{j=0}^{i} ds_j,   ds_i := ||x_i − x_{i−1}||₂

❑ Note – can also have Lagrange parametric interpolation…
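The arclength parameterization can be sketched in a few lines: accumulate chord lengths to get s_i, then spline each coordinate against s. Data points and the use of SciPy's CubicSpline are illustrative choices, not from the text.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# (x, y) points with no functional relation y(x); illustrative data.
x = np.array([0.0, 1.0, 1.5, 1.0, 0.0])
y = np.array([0.0, 0.5, 1.5, 2.5, 3.0])

# Approximate arclength: s_i = cumulative sum of chord lengths ||x_i - x_{i-1}||_2
ds = np.hypot(np.diff(x), np.diff(y))
s = np.concatenate(([0.0], np.cumsum(ds)))

# Interpolate each coordinate as a function of the parameter s
X, Y = CubicSpline(s, x), CubicSpline(s, y)
ss = np.linspace(0.0, s[-1], 200)
curve = np.column_stack([X(ss), Y(ss)])     # smooth parametric curve through the data
```

Because chord lengths are strictly positive, the s_i are strictly increasing, so each coordinate spline is well defined even when the curve doubles back in x.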

Parametric Interpolation: Example 2

(Figure: pseudo-arclength-based vs. index-based parameterization.)

Multidimensional Interpolation

❑ Multidimensional interpolation has many applications in computer aided design (CAD), partial differential equations, high-parameter data fitting/assimilation.

❑ Cost considerations can be dramatically different (and, of course, higher) than in the 1D case.

(Figure: 2D basis function, N=10.)

Multidimensional Interpolation

❑ There are many strategies for interpolating f(x,y) [ or f(x,y,z), etc.].
❑ One easy one is to use tensor products of one-dimensional interpolants, such as bicubic splines or tensor-product Lagrange polynomials.

  p_n(s, t) = Σ_{i=0}^{n} Σ_{j=0}^{n} l_i(s) l_j(t) f_ij = Σ_{i=0}^{n} Σ_{j=0}^{n} l_i(s) f_ij l_j(t)

2D Example: n=2

Consider 1D Interpolation

  f̃(s) = Σ_{j=1}^{n} l_j(s) f_j

  f̃_i = Σ_{j=1}^{n} l_ij f_j,   l_ij := l_j(s_i)

  f̃ = L f

• s_i – fine mesh (i.e., target gridpoints)

• L is the matrix of Lagrange cardinal polynomials (or, say, spline bases) evaluated at the target points, s_i, i = 1, ..., m.
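The 1D interpolation matrix f̃ = Lf can be built directly from the definition l_ij := l_j(s_i). A minimal sketch, with node and target grids chosen for illustration:

```python
import numpy as np

def lagrange_matrix(s_coarse, s_fine):
    """L[i, j] = l_j(s_fine[i]): the j-th Lagrange cardinal polynomial on
    the coarse nodes, evaluated at the i-th fine (target) point."""
    m, n = len(s_fine), len(s_coarse)
    L = np.ones((m, n))
    for j in range(n):
        for k in range(n):
            if k != j:
                L[:, j] *= (s_fine - s_coarse[k]) / (s_coarse[j] - s_coarse[k])
    return L

s = np.linspace(-1.0, 1.0, 5)     # coarse interpolation nodes (n = 5)
sf = np.linspace(-1.0, 1.0, 50)   # fine target grid (m = 50)
L = lagrange_matrix(s, sf)        # m x n interpolation operator

f = s**3                          # cubic data: degree-4 interpolant is exact
f_fine = L @ f                    # f~ = L f, one matrix-vector product
print(np.max(np.abs(f_fine - sf**3)))   # near machine precision
```

Note that each row of L sums to 1 (the cardinal polynomials reproduce constants), a useful sanity check on any interpolation operator.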

Two-Dimensional Case (say, n x n → m x m)

  f̃(s, t) = Σ_{i=1}^{n} Σ_{j=1}^{n} l_i(s) l_j(t) f_ij = Σ_{i=1}^{n} Σ_{j=1}^{n} l_i(s) f_ij l_j(t)

  f̃_pq := f̃(s_p, t_q) = Σ_{i=1}^{n} Σ_{j=1}^{n} l_pi f_ij l_qj = Σ_{i=1}^{n} Σ_{j=1}^{n} l_pi f_ij (Lᵀ)_jq

  F̃ = L F Lᵀ   → matrix-matrix product, fast

• Note that the storage of L is mn < m² + n², which is the storage of F̃ and F combined.

• That is, in higher space dimensions, the operator cost (L) is less than the data cost (F̃, F).

• This is even more dramatic in 3D, where the relative cost is mn to m³ + n³.

• Observation: It is difficult to assess relevant operator costs based on 1D model problems.
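The tensor-product evaluation F̃ = L F Lᵀ can be sketched end to end; the test function f(s,t) = s² + t³ and the grid sizes are illustrative choices:

```python
import numpy as np

def lagrange_matrix(s_coarse, s_fine):
    """L[p, i] = l_i(s_fine[p]): Lagrange cardinal polynomials on s_coarse
    evaluated at the fine points."""
    L = np.ones((len(s_fine), len(s_coarse)))
    for j in range(len(s_coarse)):
        for k in range(len(s_coarse)):
            if k != j:
                L[:, j] *= (s_fine - s_coarse[k]) / (s_coarse[j] - s_coarse[k])
    return L

n, m = 5, 40
s = np.linspace(-1.0, 1.0, n)     # coarse tensor grid, n x n
sf = np.linspace(-1.0, 1.0, m)    # fine tensor grid, m x m
L = lagrange_matrix(s, sf)        # one m x n operator serves both directions

# F_ij = f(s_i, t_j) for f(s,t) = s^2 + t^3 (degree <= n-1, so exact)
F = np.outer(s**2, np.ones(n)) + np.outer(np.ones(n), s**3)

F_fine = L @ F @ L.T              # tensor-product interpolation, F~ = L F L^T

exact = np.outer(sf**2, np.ones(m)) + np.outer(np.ones(m), sf**3)
print(np.max(np.abs(F_fine - exact)))   # near machine precision
```

Applying L on the left and Lᵀ on the right costs O(mn² + m²n) operations, versus O(m²n²) if each of the m² target values were summed independently; this is the "matrix-matrix product, fast" point on the slide.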



Aside: GLL Points and Legendre Polynomials